The other day I mentioned that D-Wave wasn't yet a "real" quantum computer but that they were closer than anyone else currently in production. Here's one of the up-and-comers.... And more on the current state of the art from Ars Technica:
Quantum computing: Fast, rough, and repeat, or slow, precise, and right?
When it comes to quantum computing, I mostly get excited about experimental results rather than ideas for new hardware. New devices (or new ways to implement old devices) may end up being useful, but we won't know for sure until the results are in. If we are to grade existing ideas by their usefulness, then adiabatic quantum computing has to be right up there, since you can use it to perform some computations now. And at this point, adiabatic quantum computing has the best chance of getting the number of qubits up.
But qubits aren't everything—you also need speed. So how, exactly, do you compare speeds between quantum computers? If you begin looking into this issue, you'll quickly learn it's far more complicated than anyone really wanted it to be. And even when you can compare speeds today, you'd also like to estimate how much better an improved version of the same hardware could do. This, it seems, often proves even more difficult.
It's fast, honest
Unlike in classical computing, speed itself is not so easy to define for a quantum computer. Take something like D-Wave's quantum annealer as an example: it has no system clock, and it doesn't use gates that perform specific operations. Instead, the whole computer goes through a continuous evolution from the state in which it was initialized to the state that, hopefully, contains the solution. The time that evolution takes is called the annealing time.
At this point, you can all say, "Chris ur dumb, clearly the time from initialization to solution is what counts." Except I used the word hopefully in that sentence above for good reason. No matter how a quantum computer is designed and operated, the readout process involves measuring the states of the qubits, and quantum measurement is inherently probabilistic. That means there is a non-zero probability of getting the wrong answer.
This does not mean that a quantum computer is useless. First, for some calculations, it is possible to check a solution very efficiently. Finding prime factors is a good example: I simply multiply the factors together, and if the product doesn't match the number I initialized the computer with, I know the computer got it wrong, and I repeat the computation. Second, when you can't efficiently check the solution, you can rely on statistics: the correct answer is the most probable outcome of any measurement of the final state, so I can run the same computation multiple times and pick the correct answer out of the statistical distribution of the results.
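To make those two recovery strategies concrete, here's a minimal Python sketch. Everything in it is a toy stand-in I've made up for illustration: noisy_run simulates a computation that returns the right answer only some of the time, and nothing here talks to real quantum hardware.

import random
from collections import Counter

def factors_are_correct(n, factors):
    # Efficient check: multiplying the candidate factors back
    # together must reproduce the input number.
    product = 1
    for f in factors:
        product *= f
    return product == n

def most_probable_answer(run_once, num_runs=100):
    # No efficient check available: repeat the computation and
    # take the most frequent outcome as the answer.
    counts = Counter(run_once() for _ in range(num_runs))
    answer, _ = counts.most_common(1)[0]
    return answer

# Toy stand-in for one noisy quantum run: returns the correct
# answer (42) with 60% probability, junk otherwise.
def noisy_run():
    return 42 if random.random() < 0.6 else random.randint(0, 99)

print(factors_are_correct(15, [3, 5]))   # True
print(most_probable_answer(noisy_run))   # almost certainly 42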
So, for an adiabatic quantum computer, speed is the annealing time multiplied by the number of runs required to determine the most probable outcome. That's not the most satisfying definition, but it's still better than nothing.
Unfortunately, these two factors are not independent of each other. The computation requires that all the qubits stay in the ground state throughout the anneal, and fast changes are more likely to kick qubits out of the ground state, so decreasing the annealing time increases the probability of getting an incorrect result. Do the work faster, and you may need to repeat the computation more times to correctly determine the most probable outcome. And push the annealing time low enough, and wrong answers eventually become so probable that the correct answer no longer stands out from them.
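One common way to capture this trade-off in the annealing literature is a "time to solution" figure: the total runtime needed to see the correct answer at least once with some target confidence. The sketch below uses that standard repeat-until-success formula; the annealing times and success probabilities are made-up numbers for illustration, not measurements from any real device.

import math

def time_to_solution(anneal_time, p_success, target=0.99):
    # Expected total runtime to observe the correct answer at least
    # once with `target` confidence, given the per-run success
    # probability. Runs are independent, so we need
    # n >= log(1 - target) / log(1 - p_success) repetitions.
    runs = math.log(1 - target) / math.log(1 - p_success)
    return anneal_time * math.ceil(runs)

# Illustrative numbers: a slow, careful anneal vs. a fast,
# error-prone one.
print(time_to_solution(anneal_time=100e-6, p_success=0.90))  # 0.0002 s
print(time_to_solution(anneal_time=10e-6,  p_success=0.20))  # 0.00021 s

With these toy numbers, the ten-times-faster anneal ends up taking about as long overall, because its lower success probability forces far more repetitions.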
So determining the annealing time of an adiabatic quantum computer involves something of a trial-and-error approach. The underlying logic is that slower is probably better, but we'll go as fast as we dare. A new paper published in Physical Review Letters shows that, under the right conditions, it might actually be better to throw caution to the wind and speed up even more. However, that speed comes at the cost of high peak power consumption....MORE