rotor replacement

Paul Rubin
Sat Jan 22 17:51:36 EST 2005


jjl at pobox.com (John J. Lee) writes:
> > Building larger ones seems to
> > have complexity exponential in the number of bits, which is not too
> 
> Why?

The way I understand it, that 7-qubit computer was based on embedding
the qubits on atoms in a large molecule, then running the computation
procedure on a bulk solution containing zillions of the molecules,
then shooting RF pulses through the solution and using an NMR
spectrometer to find a peak at the most likely quantum state (i.e. the
state that the largest fraction of the molecules ended up in).  To do it
with 8 qubits instead of 7, you'd have to use twice as much solution,
so that particular technique doesn't scale.  What we want is a way to do
calculations on single molecules, not bulk solutions.  But no one so
far has managed to do even 7 qubits that way.
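
To put that scaling in rough numbers (the factor of two per extra
qubit is the idealized picture, not real spectrometer specs), here's a
toy sketch:

def solution_needed(n_qubits, per_qubit_factor=2.0):
    # relative amount of bulk solution needed so the readout peak for
    # the answer state stays detectable; roughly doubles per added qubit
    return per_qubit_factor ** n_qubits

for n in (7, 8, 10, 20, 50):
    print("%d qubits -> relative solution: %g" % (n, solution_needed(n)))

By 50 qubits you'd need something like 2**50 times as much solution as
for one, which is why people want single-molecule techniques instead.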

> > It's not even known in theory whether quantum computing is
> > possible on a significant scale.
> 
> Discuss. <wink>

The problem is maintaining enough coherence through the whole
calculation that the results aren't turned into garbage.  In any
physically realizable experiment, a certain amount of decoherence
will creep in at every step.  So you need additional qubits for
error correction, but then those qubits complicate the calculation and
add more decoherence, so you need even more error correcting qubits.
So the error correction removes some of your previous decoherence
trouble but adds some of its own.

As I understand it, it's an open problem in quantum computing theory
whether there's a quantum error correcting scheme that removes
decoherence faster than it adds it as the calculation gets larger.

I'm not any kind of expert in this stuff but have had some
conversations with people who are into it, and the above is what they
told me, as of a few years ago.  I probably have it all somewhat garbled.


