CS 301 Lecture, Dr. Lawlor

This lecture is pure bonus material--it will NOT appear on any homeworks or the final exam.

| | Ordinary Computer | Adiabatic Circuits | Quantum Computer |
| --- | --- | --- | --- |
| Goal | Get 'er done! | Substantially lower power use, especially at low clockrate | Speedups up to exponential: e.g., search n values in sqrt(n) time |
| Data storage | 1's and 0's (bits) | 1's and 0's (bits) | Vector with axes 1 and 0 (qubits): not just 1 or 0, but both at once! |
| Assignments? | Yes | No (uses energy = kT ln 2) | No (violates laws of physics) |
| Reversible? | No | Yes | Yes, except for "collapse" operation |
| Swap? | Yes | Yes | Yes |
| Logic gates | AND, OR, NOT | NOT, CNOT, CCNOT | CNOT, Hadamard (rotate 45 degrees) |
| Programming model | Instructions | Reversible instructions | Reversible quantum operations, and irreversible collapse |
| Clock | Square wave | Two trapezoidal waves | Limited by coherence time |
| When? | Now | Slowly, over next ten years | ??? |
| Limits | Heat/power, hard problems | Only helps at low clockrate | How many bits can you keep coherent? |

Adiabatic circuits are based on a few interesting fundamental observations about modern circuit efficiency:
• Slamming a wire from a 1 down to a 0 means dumping the 1's energy directly into the ground.  It's more efficient to shuffle the 1 off somewhere else than to erase it entirely.
• Writing a zero into a register actually reduces entropy: the universe has lost a disordered register value, and now has an orderly zero-filled register.  You can't reduce entropy in one place without increasing it (even more) somewhere else, so writing a zero actually consumes energy, no matter how the circuitry manages the write.
Saed Younis' 1994 MIT PhD Thesis outlines the basic adiabatic circuit model and its inherent power advantage, which is up to several hundredfold at sufficiently low clock rates.  These design principles have slowly been trickling into CPU designs piece by piece; still, most circuits are non-reversible and non-adiabatic today.  The path to a fully adiabatic computation, assuming we ever get there, will have to change the instruction set at some point, because tons of operations are destructive, like "mov" (which irrevocably destroys the destination), and will need to be replaced with "swap" (which doesn't destroy information).  Some future fully-adiabatic CPU will need reversible instructions, and in fact the compiler will probably need to generate entire "antifunctions" to undo the operation of each function.  To really be 100% reversible, the display would have to suck the final answer back out of your brain, so some degree of irreversibility is usually considered acceptable in designing real systems.

The energy kT ln 2 needed to erase one bit near room temperature is about 10^-20 joules, which is... small.  If your machine is chewing through 10 billion bits per clock cycle, and does 10 billion clocks per second, this is one joule/second, or one watt of irreducible "bit erasure cost".  A typical desktop CPU erases far fewer bits than this, and uses its 30-100 watts for other things, so this isn't a very big effect yet.  But depending on how silicon fabrication scales up, it could become a show-stopper in the next ten years, and drive us toward reversible programs.
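Here's a quick sanity check on that arithmetic in Python (my own sketch, not part of the lecture).  Note that the exact value of kT ln 2 at 300 K is closer to 3×10^-21 J, so the round 10^-20 figure above is generous and the "one watt" is an upper bound:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23            # Boltzmann constant, joules per kelvin
T = 300.0                     # roughly room temperature, kelvin
erase_energy = k_B * T * math.log(2)   # ~2.9e-21 J per bit erased

# Hypothetical machine from the text: 10 billion bits erased per clock,
# at 10 billion clocks per second.
bits_per_second = 10e9 * 10e9          # = 1e20 bit erasures per second
power_watts = erase_energy * bits_per_second

print(f"energy per bit erased: {erase_energy:.2e} J")
print(f"irreducible erasure power: {power_watts:.2f} W")
# Using the exact constant this comes out near 0.3 W; rounding the
# per-bit energy up to 1e-20 J gives the lecture's 1 W figure.
```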

Once your program is fully reversible, you're actually halfway toward writing a quantum computer program!

## Quantum Physics

Small things, like electrons, display several very odd mechanical properties with mystical-sounding "quantum" names:

• Superposition: an electron can be in several places at once, or several states at once.  The terminology here is that the electron's "wave function" has spread over space.  The probability of observing an electron at any given location is the square magnitude of the wave's amplitude.
• Entanglement: electrons interact, and electrons in a superposition of states can interact too.  Interactions between superpositioned electrons always collapse to consistent values, values that would have made sense even without superposition.  This can create surprisingly complex behavior, since n interacting binary unknowns represent 2^n total combinations.  (See: ripple tank applet.)
• Collapse: if something big (e.g., a human being, a microscope, or any macroscopic measuring device) stares at the electron, it appears in exactly one place.  This is called "wavefunction collapse", because the spread-out wave function bunches up again; or "state reduction", since you start with many states and end up with one state.  Worse yet, it's looking less like collapse is somehow tied to a "big observer" (which would be merely creepy), and more like measurement just entangles your wavefunction with the electron's (which would be freaky): delayed quantum erasure experiments suggest that simply looking at the results splits your own wavefunction into several pieces, which would mean collapse is just an illusion.
A quantum computer is based on "qubits", which you can think of as a dot in 2D space: the X axis means a "0", the Y axis means a "1".  Normal bits must lie on one axis or another, but qubits can live between the axes.  For example,
• (X,Y) coordinates (0.7,0.7) means "might be a 0, or might be a 1".  Said more mystically, this is the "quantum superposition of 0 and 1".  If you measure the qubit, it will randomly "collapse" to either 0 or 1.
• (0.0,1.0) means "definitely a 1".  If you measure the qubit, you'll always get a 1.
• (0.3,0.95) means "it's probably a 1, but might still be a 0".  If you measure the qubit, you'll get a 0 about 9% of the time (9% = 0.3*0.3), and a 1 about 90% of the time (90% ≈ 0.95*0.95).
Since coordinates (0,0) would mean "not a 0, and not a 1", we usually require the qubit to live on the unit circle--it's either going to be a zero or going to be a one, so the probabilities (equal to the square of the amplitude) must add up to one.  Tons of research groups have built single-bit quantum computers, but the real trick is entangling lots of bits together without premature "collapse", and that seems to be a lot harder to actually pull off.
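To make the vector picture concrete, here's a minimal Python sketch (my own, with made-up names like `measure`) of a qubit as a 2-float amplitude vector, using the squared-amplitude measurement rule:

```python
import random

# A qubit as a 2-float vector of amplitudes [amp0, amp1].
# (Real amplitudes only, which is enough for this lecture's examples.)
qubit = [0.3, 0.95]   # "probably a 1, but might still be a 0"

# Probabilities are squared amplitudes, and should sum to (about) 1.
p0, p1 = qubit[0] ** 2, qubit[1] ** 2
print(p0, p1, p0 + p1)   # ~0.09, ~0.90, summing to ~0.99

def measure(q):
    """Collapse: randomly return 0 or 1, weighted by squared amplitude."""
    prob0 = q[0] ** 2 / (q[0] ** 2 + q[1] ** 2)
    return 0 if random.random() < prob0 else 1

print(measure(qubit))    # usually 1, occasionally 0
```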

## Quantum Programming

Just like a classical computer uses logic gates to manipulate bits, the basic instructions in a quantum computer will use quantum logic gates to manipulate individual wavefunctions.  Because you don't want to destroy any aspect of the data (this causes decoherence), every quantum logic gate is reversible, and can be represented by a unitary matrix--think of it as a rotation that preserves total probability.

Again, think of a qubit like a little vector.  One qubit is a 2-float vector holding the amplitudes for zero and one:

 [ amplitude of a=0 ]
 [ amplitude of a=1 ]

A "Pauli-X" gate is represented by this 2x2 matrix:

 [ 0 1 ]
 [ 1 0 ]

Plugging in the input and output amplitudes, we have:

              input a=0   input a=1
 output a=0       0           1
 output a=1       1           0

The amplitude for a=1 on the input becomes the amplitude for a=0 on the output, and vice versa--this is just a NOT gate!
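The matrix-times-vector arithmetic is ordinary linear algebra, so it's easy to try out.  A small Python sketch (the name `apply_gate` is my own, not standard):

```python
def apply_gate(matrix, amps):
    """Multiply a gate matrix by an amplitude vector."""
    return [sum(matrix[r][c] * amps[c] for c in range(len(amps)))
            for r in range(len(matrix))]

PAULI_X = [[0, 1],
           [1, 0]]

# A qubit that is "mostly 1":
q = [0.3, 0.95]
print(apply_gate(PAULI_X, q))   # [0.95, 0.3] -- amplitudes swapped: a NOT gate
```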

### CNOT gate

A controlled NOT takes two bits as input.  Two qubits make a 2^2=4 float vector with these amplitudes:

 [ amplitude of a=0 && b=0 ]
 [ amplitude of a=0 && b=1 ]
 [ amplitude of a=1 && b=0 ]
 [ amplitude of a=1 && b=1 ]

The CNOT gate's matrix is basically a 2x2 identity, and a 2x2 NOT gate:

 [ 1 0 0 0 ]
 [ 0 1 0 0 ]
 [ 0 0 0 1 ]
 [ 0 0 1 0 ]

Again, putting in the input and output vectors, we can see what's going on:

                        input a=0           input a=1
                      b=0       b=1       b=0       b=1
 output a=0 && b=0     1         0         0         0
 output a=0 && b=1     0         1         0         0
 output a=1 && b=0     0         0         0         1
 output a=1 && b=1     0         0         1         0

If a=0, nothing happens--b's probabilities are exactly like before.  If a=1, then b gets inverted, just like a NOT gate.
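The same matrix-times-vector trick works on the 4-float two-qubit vector.  A hedged Python sketch (again with my own `apply_gate` helper):

```python
def apply_gate(matrix, amps):
    """Multiply a gate matrix by an amplitude vector."""
    return [sum(matrix[r][c] * amps[c] for c in range(len(amps)))
            for r in range(len(matrix))]

# Two-qubit amplitudes, ordered: a=0&&b=0, a=0&&b=1, a=1&&b=0, a=1&&b=1
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

print(apply_gate(CNOT, [0, 0, 1, 0]))  # [0, 0, 0, 1]: a=1, so b flips to 1
print(apply_gate(CNOT, [0, 1, 0, 0]))  # [0, 1, 0, 0]: a=0, so b is untouched
```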

The basic programming model with a quantum computer is:
1. Initialize your quantum registers with a superposition of 0 and 1: they then contain every possible answer at once.
2. Run a series of instructions to selectively amplify the answer you're looking for, or attenuate the answers you're not looking for.  For example, you can arrange "wrong answers" so they cancel each other out.  Each instruction must be a reversible operation, but in theory can be arbitrarily complex and there's no theoretical limit on the number of instructions.   However, in practice, the machine only works if you can keep the whole register entangled in a coherent superposition: accidental collapse or "decoherence" is currently the limiting factor in building big quantum computers.
3. Finally, look at the register.  The act of looking will "collapse" to a particular set of 1's and 0's, hopefully representing the right answer.  If there are several right answers, physics will pick one randomly.
People started to get really interested in Quantum Computers when in 1994 Peter Shor showed a quantum computer could factor large numbers in polynomial time.  The stupid algorithm for factoring is exponential (just try all the factors!), and though there are smarter subexponential algorithms known, there aren't any non-quantum polynomial time algorithms known (yet).  RSA encryption, which your web browser uses to exchange keys in "https", relies on the difficulty of factoring large numbers, so cryptographers are very interested in quantum computers.  In 1996, Lov Grover showed an even weirder result, that a quantum search over n entries can be done in sqrt(n) time.
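You can imitate Grover's amplify-the-answer loop classically by just tracking the amplitudes (a simulation of the math, with none of the quantum speedup).  This tiny sketch runs one Grover iteration over 4 entries, which happens to be enough to make the marked answer certain:

```python
# Classical simulation of Grover's search over 4 entries (2 qubits).
marked = 2                     # index of the "right answer" to amplify

# Step 1: uniform superposition -- every answer at once.
amps = [0.5, 0.5, 0.5, 0.5]

# Step 2: one Grover iteration (one is enough when n=4).
amps[marked] = -amps[marked]           # oracle: flip the marked answer's sign
mean = sum(amps) / len(amps)           # diffusion: reflect every amplitude
amps = [2 * mean - a for a in amps]    #   about the mean amplitude

# Step 3: "measure" -- probabilities are squared amplitudes.
probs = [a * a for a in amps]
print(probs)   # [0.0, 0.0, 1.0, 0.0]: the marked answer is now certain
```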

## Quantum Hardware

At the moment, nobody has built a useful quantum computer, but there are lots of interesting experimental ones.  The biggest number a quantum computer has factored is 15 (=3*5, woo hoo).  The largest quantum computers actually built so far have only 8 qubits, and only one register, which means the maximum theoretical speedup is 2^8=256 times faster than a normal machine.  But *if* the hardware can be scaled up, a quantum computer could solve problems that are intractable on classical computers.  Or perhaps there is some physical limitation on the scale of wavelike effects--the "wavefunction collapse"--and hence quantum computers will always be limited to a too-small number of bits or too-simple circuitry.  At the moment, nobody knows.

Roger Penrose has a theory that the human brain is actually a quantum computer.  This could explain how we're able to do some of the really amazing things we do, like recognize pictures of objects.  The deal is that it's easy for a computer to *simulate* a picture of an object (that is, object->picture is easy), but to *recognize* a picture of an object means searching over all possible objects, orientations, lighting, and so on (that is, picture->object is hard).  A quantum computer with sufficiently large registers (big enough to generate a complete image, so thousands of qubits) could in principle start with a superposition of all possible objects, and use a series of instructions to cancel out objects that are inconsistent with the current picture, finally collapsing out a plausible object that could have generated that picture.

There's a different and controversial "Adiabatic Quantum Computer" design used by the British Columbia-based quantum computing startup D-Wave, which they hope will scale to solve large problems.  It is not widely accepted that this new design is workable, and there are serious doubts whether the 128-bit superconducting niobium hardware the startup has built is "really" a quantum computer.  They got a huge amount of press in 2007 and 2008, but have been quieter lately.  They were panned in IEEE Spectrum as "Does Not Quantum Compute", but they recently got a big optimization contract from Lockheed-Martin.

The future of quantum computers is currently in a superposition between two outcomes:
• Y axis: Quantum computers will replace all other computers, reach and then exceed human-level intelligence, and then things will start to get really interesting.
• X axis: Quantum computers will remain an interesting laboratory curiosity.  Indefinitely.
This superposition may collapse sometime in the next few years.  Or maybe not.