The concept of quantum computing dates to 1981, when Nobel Prize-winning physicist Richard Feynman observed
that classical computers couldn’t efficiently deal with the complex dynamics of quantum systems. Rather than
seeing that as a problem, Feynman turned the situation on its head, observing that setting up a quantum system
and performing a measurement was therefore equivalent to executing many calculational steps on a classical
computer. In principle at least, by carefully designing a quantum system and making a measurement, an answer that
was practically unobtainable from a classical computer could be generated.
Once physicists and mathematicians started exploring the possibilities of such an approach, they found that there
are some practically interesting problems for which suitable quantum systems can be designed. Shor’s algorithm
(1994) for factoring numbers into their prime factors (15 = 3 × 5, for example) can be shown to be substantially
more efficient than any known classical algorithm—and it’s immediately applicable to cryptographic attacks on
prime-number-based encryption, such as the RSA algorithm widely used in internet security. Grover’s algorithm
(1996) is a quantum technique for finding a specific record in an unstructured and unsorted database. The time
taken by any classical search technique grows in proportion to the size of the database, while the quantum search
grows only as the square root of the number of entries. Both take longer as the number of entries increases, but going from one entry to a trillion makes a classical search take a trillion times longer, while a quantum search takes only about a million times longer. That said, it’s not clear how useful that will be in practice. Every search of the
trillion-entry database will require it to be reloaded, as observing the result of the previous search will collapse the
superposition of database states that provides the quantum speed-up.
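To make that scaling difference concrete, the sketch below compares the rough number of lookups each approach needs. It is an illustrative back-of-the-envelope calculation only (the function names and figures are our own labels, not part of any quantum software library), ignoring constant factors and the database-reloading issue just described.

```python
import math

# Illustrative query counts only: classical unstructured search scales
# linearly with the number of entries, while Grover's algorithm scales
# with the square root (constant factors ignored).
def classical_queries(n_entries: int) -> int:
    return n_entries              # worst case: examine every entry

def grover_queries(n_entries: int) -> int:
    return math.isqrt(n_entries)  # ~sqrt(N) quantum queries

for n in (1, 10**6, 10**12):
    print(f"{n:>15,} entries: ~{classical_queries(n):,} classical lookups, "
          f"~{grover_queries(n):,} Grover iterations")
```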
The discovery of those algorithms and the mathematical proof of their efficiency showed that useful quantum
computing devices are possible—in principle. Perhaps of greater interest is the possibility of a universal quantum
computer—a device that can be programmed to perform any computational task. That such a device is theoretically
possible was established in 1985 by David Deutsch, who essentially generalised the classical theory of computers
developed by Alan Turing. That’s important because it means that a quantum computer can theoretically do
everything that a classical computer can do, and potentially do it many times faster. There are also calculations
and simulations that no classical computer can practically do but that are possible on a quantum computer. For
example, being able to model quantum systems on a quantum computer could lead to breakthroughs in the design
of exotic materials.
It’s hard to overestimate the potential impact of large-scale quantum computing on virtually every aspect of modern
life. The rapid growth of processing power in classical computing over the past few decades has been underpinned
by technical advances in the ability to pack transistors onto chips (characterised by Moore’s Law, which says that transistor density doubles roughly every two years) and to move information between components. The
net result has been an increase in the number of calculations that can be performed per second by a factor of a
million in the past 30 years. Given the impact of that change on modern life, the potentially transformational nature
of a sudden increase in computing power of a similar magnitude becomes immediately obvious.
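As a rough check on that arithmetic, the snippet below simply compounds a fixed doubling period over three decades. The doubling periods used are illustrative assumptions: a two-year period gives a factor in the tens of thousands, while the million-fold figure quoted above corresponds to an effective doubling time closer to eighteen months.

```python
# Simple compounding behind the growth figures quoted above (illustrative only).
def growth_factor(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

print(f"{growth_factor(30, 2.0):,.0f}")   # ~32,768-fold with a 2-year doubling time
print(f"{growth_factor(30, 1.5):,.0f}")   # ~1,048,576-fold with an 18-month doubling time
```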
Practical difficulties
But the potential of quantum computing might never be realised. Designing an algorithm and calculating its
theoretical efficiency or proving that there’s no fundamental law of nature preventing universal quantum computing
is quite different from implementing it on practical hardware. Despite having over 30 years of theory and many
person-years of experimental research and development effort behind it, there’s still no commercial quantum
computing technology on the market, nor any obvious sign of that happening soon.
Opinion remains divided regarding the prospects of a significant breakthrough to large-scale quantum computing.
Optimists point to there being no fundamental impediment (in the sense that it’s compatible with all known laws of
nature), the existence of some promising prototype systems that have already performed some simple calculations,
and a diverse range of technical approaches being pursued in labs around the world. Pessimists note that the laws
of physics don’t just have to allow something in theory: they must also align to make it possible in practice. They
point to decoherence as the elephant in the room.
A programmable quantum computer needs to be able to interact strongly with the external world to allow the input
of instructions and the output of results of computations. But, in between those interactions, it’s necessary to
quarantine the device from the external environment as much as possible, to prevent decoherence from spoiling the
computation. The more steps in the computation, the greater the chance of decoherence intervening, so it could be
that proof-of-concept demonstrations will be difficult to translate into systems for solving real problems. Because
of unavoidable interactions with the environment, errors due to decoherence will need to be removed before they accumulate and render longer calculations inaccurate. Again, there’s a theoretical proof that provides in-principle
support in the form of what’s called the ‘threshold theorem’. The theorem says that if environmental noise can be
kept below a certain level, it will always be possible to correct noise-induced errors faster than they’re created. That
means that fault-tolerant quantum computers of almost arbitrary size should be possible. It’s a powerful finding,
but error correction is notoriously difficult to implement. Current small-scale quantum computer demonstrations
can get by without needing much error correction, but the amount required will grow sharply for more ambitious
computers. There are experimental approaches designed to minimise the need for error correction (it can never be
entirely avoided), or to confine it to subsystems small enough to make the process fast enough and reliable enough
for practical applications. Ironically, it could be that the classical computing power needed to track and control the
accumulation of errors will be a bottleneck for quantum computing.
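A crude way to see why error correction matters, and what the threshold theorem buys, is the arithmetic sketched below. The error rates and the suppression formula are generic textbook heuristics of the kind used in fault-tolerance discussions, not a model of any particular hardware or code.

```python
# Without correction, the chance that an n-step computation avoids any
# decoherence error falls off exponentially with the number of steps.
def survival_probability(per_step_error: float, n_steps: int) -> float:
    return (1.0 - per_step_error) ** n_steps

# With below-threshold error correction, a common heuristic is that the
# logical error rate is suppressed as (p / p_threshold) ** ((d + 1) // 2)
# for code distance d; exact constants depend on the code and hardware.
def logical_error_rate(physical_error: float, threshold: float, distance: int) -> float:
    return (physical_error / threshold) ** ((distance + 1) // 2)

print(survival_probability(1e-3, 10_000))   # ~4.5e-05: a raw 10,000-step circuit almost always fails
print(logical_error_rate(1e-3, 1e-2, 11))   # ~1e-06: strongly suppressed once below threshold
```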
We think that there’s enough promise in current work on quantum computing to make its pursuit worthwhile (we
review some important progress below), but we also note that there are no guarantees of success. For example, the
generation of useful and controllable power from nuclear fusion is well understood from a theoretical point of view.
There are no laws of nature that rule out the construction of a fusion reactor, and there have been proof-of-concept
demonstrations of (fleeting) net energy generation in laboratories. But, despite a lot of effort and investment, it has
proven to be prohibitively difficult to implement as a practical means of energy production. Fusion power has been
‘a few decades away’ for much more than a few decades now, and no fusion reactor has yet been able to generate
more energy than is required to run it on a sustained basis. Despite some recent claims of breakthroughs, none has yet been proven as commercially viable technology. Some engineering problems are just really hard. Quantum
computing might be one of those.
Recent progress
We finish this section by noting that quantum computing has recently been approaching some important milestones. One such is the practical demonstration of ‘quantum supremacy’. That means a clear
demonstration of a ‘quantum speed-up’—a quantum computer that outperforms the best classical supercomputer
on a specific task. But ‘outperforms’ here is a qualified term. It doesn’t mean that there’s a quantum computer that
can perform a useful real-world calculation faster than a classical one—just that the time required increases less
steeply with the size of the inputs. Grover’s algorithm (described above) is an example of a calculation for which
a quantum speed-up is possible. Doubling the size of a database doubles the classical computational time, but a quantum computer will take only about 40% longer (search time grows as the square root, and the square root of 2 is about 1.41). The quantum computer could even be slower than a classical one and
still pass the test, provided the 40% scaling is achieved.
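The point that a quantum machine can be slower in absolute terms and still show a speed-up comes down to how the two curves grow. The toy comparison below uses made-up per-query costs (the thousand-fold handicap on the quantum side is purely illustrative) to show the square-root curve eventually overtaking the linear one.

```python
import math

# Toy timing model: the quantum machine is assumed to be 1,000 times slower
# per query, but its cost grows as sqrt(N) rather than N.
def classical_time(n_entries: int, cost_per_query: float = 1.0) -> float:
    return cost_per_query * n_entries

def quantum_time(n_entries: int, cost_per_query: float = 1000.0) -> float:
    return cost_per_query * math.sqrt(n_entries)

for n in (10**3, 10**6, 10**9):
    print(f"{n:>13,} entries: classical {classical_time(n):>14,.0f}, "
          f"quantum {quantum_time(n):>14,.0f}")
```

On these made-up numbers the two curves cross at about a million entries; beyond that point the quantum scaling wins despite the slower per-query hardware.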
That’s still a significant step in computer science, even if the name tends to oversell it, and effectively shows that
there are some things a quantum computer can do that a classical computer can’t—at least on a practical timescale.
A team working for Google expects to soon be able to demonstrate quantum supremacy.
If those developments were to lead to successful large-scale devices, then that would enable a range of practical
applications, not just Grover’s algorithm, but also the ‘HHL algorithm’ for linear equations, and quantum simulation.
Most scientific fields have problems in which the ability to efficiently perform large calculations of those sorts would
be very helpful. The HHL algorithm offers an even better performance boost than Grover’s, providing an exponential
speed-up of runtime compared to the best classical algorithm for solving a system of linear equations. That could
allow for faster (and therefore more detailed) modelling in everything from weather forecasts to radar system
simulations. And a quantum simulator would let us model atomic-scale interactions efficiently—something that a
classical computer can’t do. Areas such as medicine, chemistry and engineering now use advanced supercomputers
to approximate the behaviour of drugs, organics and materials. Faster calculations of models with greater fidelity
would be of great utility.
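For readers unfamiliar with the task HHL addresses, the snippet below states it classically: solving a system of linear equations Ax = b. Nothing here is quantum; it simply fixes the problem that HHL promises to tackle with exponentially better scaling, subject to well-known caveats (the matrix should be sparse and well-conditioned, and the algorithm delivers the solution encoded in a quantum state rather than as an explicit vector).

```python
import numpy as np

# The classical statement of the problem HHL targets: solve A x = b.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)   # dense classical solver, roughly O(n^3) in general
print(x)                    # HHL would instead encode x in the amplitudes of a quantum state
```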
And certainly not least, most machine-learning techniques also involve linear equations. Using classical devices and
the fruits of Moore’s law, machines are already number one in chess, go and poker. And they’re encroaching on jobs
in industries traditionally occupied by highly trained (and well-paid) humans. A literal quantum leap in processing
speed, especially in calculations of direct application to machine learning, could be a technological force multiplier
of extraordinary impact.
The potential impact of large-scale quantum computing is huge, but there remain many unknowns in its practical
implementation. That said, quantum computing experiments aren’t especially expensive to support (and look
positively cheap compared with ‘big science’ such as particle physics and large-array astronomy) and there are
even some potentially useful small-scale devices that could operate on just a handful of qubits. We come back to
the subject in a section on geopolitical implications at the end of this paper, but if a single nation were to make a big
breakthrough first and establish a clear lead, catching up might not be so easy. For all those reasons, we think that
continued investment in quantum computing research makes sense. But that shouldn’t happen at the expense of
research into other quantum technologies. As we see below, quantum sensing is very promising and is getting some
runs on the board in terms of producing useful devices. That’s not yet true of quantum computing.