TFG Draft
1. Introduction
Quantum computation and quantum information are defined as the study of information
processing tasks that can be accomplished using quantum mechanical systems [1]. Their
birth cannot be understood as the evolution of just one field, but as the merging of several
branches of science, such as computer science, quantum mechanics, information theory
and cryptography.
These fields exploit phenomena such as superposition, which establishes that if a physical system may be in one of many configurations (arrangements of particles or fields), then its most general state is a combination of all of these possibilities [2]; or entanglement, which occurs when a pair or group of particles interact in such a way that the quantum state of each particle cannot be described independently of the state of the others [3].
Although quantum computers are mostly theoretical constructs at the present time, it has been shown that quantum computing will eventually outperform classical computing at many tasks [4]. A good example of this is quantum cryptography, which has the potential to keep encrypted data secure for longer periods of time than classical cryptography [5]. Quantum simulation is also one of the most relevant potential applications, since simulations of quantum systems carried out numerically on classical computers are subject to an exponential growth of the required resources as the size of the quantum system increases, and it has been claimed that quantum computers will be able to mimic these systems efficiently, with resources that scale only polynomially [6].
As a result, interest and investment in the field of quantum computing have increased dramatically in recent years. The quantum computing market is projected to reach $64,980.0 million by 2030, up from just $507.1 million in 2019. In the public sector, China has remained at the forefront of technological advances, launching the first quantum satellite in 2016. In the U.S., the Trump administration authorized in 2018 $1,200.0 million to be spent on quantum science over the next five years, while in 2020 India set a budget of $1,120.0 million for the same period. Europe, for its part, has a total initiative of €1,000.0 million providing funding for the next ten years [7].
The history of computation arguably begins with the appearance of the abacus around 500-300 BC, but its main development did not come until the 19th and 20th centuries, with Boolean algebra (1854), the theory of computation (Turing, 1936) and information theory (Shannon and Weaver, 1940s). It was in the 1940s that John Eckert and John Mauchly developed the ENIAC (Electronic Numerical Integrator and Computer), the first fully electronic computer, in 1946.
Parallel to the development of computation, in the early 20th century the world of physics underwent a radical change. The inadequacy of classical physics in the microscopic domain became increasingly evident from various empirical facts, such as blackbody radiation and the photoelectric effect. Thus was born the theory of quantum mechanics, which differs from classical physics in that energy, momentum, angular momentum, and other quantities of a bound system are restricted to discrete values (quantization), objects have characteristics of both particles and waves (wave-particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (Heisenberg's uncertainty principle, 1927) [2].
Computing and the quantum world did not come together until well into the 20th century.
In the 1970s, Paul Benioff began to research the theoretical feasibility of quantum
computing. His research culminated in a paper, published in 1980, that described a
quantum mechanical model of Turing Machines [8]. Shortly thereafter, Richard Feynman
and Yuri Manin suggested that quantum computing has the potential to simulate things
that classical computing cannot. [9] [10]
The origin of this idea can be traced back to the decision problem (Entscheidungsproblem), posed by David Hilbert and Wilhelm Ackermann in 1928: is there an algorithm that takes as input a statement of first-order logic¹ and determines, in a finite number of steps, whether the statement is universally valid?
In 1936, Alonzo Church and Alan Turing independently demonstrated that no such general algorithm exists [12] [13]. In Turing's proof, the concept of the Turing machine, an abstract notion of what we know as a programmable computer, appeared for the first time, and further work led to the Church-Turing thesis.
In 1994, Peter Shor demonstrated that the problem of finding the prime factors of an integer could be solved efficiently on a quantum computer, a problem that takes classical computers an exponentially long time to solve for large numbers [15]. His algorithm sparked enormous interest in the field of quantum computing, and the quantum Church-Turing thesis was formulated:
"A quantum Turing machine can efficiently simulate any realistic model of computation." [16]
Many companies have entered the quantum computing race since Paul Benioff began his research, but among them IBM and Google have achieved the best results. In 2017, IBM launched the first industry initiative to build commercially available universal quantum computing systems [19]. The project, named IBM Quantum, pioneered the provision of quantum computing as a service, and it has led to
the design and construction in 2019 of the first integrated quantum computing system for commercial use [20]. That same year, Google AI, in collaboration with NASA, claimed to have performed a quantum computation that would be unfeasible on any classical computer [21].

1 First-order logic is a particular formal system of logic whose syntax admits only finite expressions as well-formed formulas (WFFs), finite sequences of symbols of a given alphabet that are part of a formal language, while its semantics are characterized by restricting all quantified variables to a given domain of discourse, i.e., the set of entities over which the variables range [25].
Although the history of quantum computing is still short and there are many discoveries yet to be made, several quantum algorithms of great relevance already exist. One of the best known, apart from Shor's, is Grover's algorithm, invented by Lov K. Grover in 1996 [22]. It is a database search algorithm that requires far fewer function evaluations than a normal search (of the order of √N instead of N for N entries), thus substantially reducing the search time. Inverting a function can be related to searching in a sequence, if we consider that this function produces the value y as the position occupied by the value x in the sequence. Thus, if we have a function y = f(x) that can be evaluated on a quantum computer, Grover's algorithm allows us to calculate the value of x given the value of y as input.
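The effect of a single Grover iteration can be sketched with a minimal statevector simulation. The following Python snippet (a toy illustration, not the thesis's Mathematica implementation; the variable names are our own) applies the oracle and the inversion-about-the-mean step for a search space of N = 4 entries, where one iteration already yields the marked item with certainty.

```python
import numpy as np

# Toy statevector sketch of one Grover iteration for N = 4 entries (2 qubits).
# 'marked' is the index we are searching for.
N, marked = 4, 2

state = np.full(N, 1 / np.sqrt(N))  # uniform superposition over all entries
state[marked] *= -1                 # oracle: flip the sign of the target amplitude
mean = state.mean()
state = 2 * mean - state            # diffusion: inversion about the mean

probabilities = state ** 2
print(probabilities)  # [0. 0. 1. 0.]: the marked item is found with certainty
```

For larger N, roughly (π/4)√N such iterations are needed before measuring, which is the source of the quadratic speed-up.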
In this context of increasing awareness of the potential of quantum computation for society, we have devised this Bachelor's Thesis Report as an attempt to provide the reader with a basic overview of the fundamentals of quantum computing from the point of view of an undergraduate student in Chemistry.
To this end, this work starts by introducing ideas from classical information and classical computing such as the bit, the byte, number encoding and the like. Logic gates will be described, with a review of the operators on which most current computational algorithms rely. In the next chapter, we will provide the background of quantum mechanics, emphasizing the phenomena of superposition and entanglement. We will also study why the wave function collapses into a single (non-entangled) state upon measurement, whereas normally in quantum mechanics it evolves deterministically according to the Schrödinger equation as a linear superposition of different states; we call this the measurement problem. Based on these ideas, we will be able to introduce the basics of quantum information and quantum computing and to expose their fundamental differences with classical computing. For this purpose, we will explain the qubit and the difficulties of its physical implementation, review the quantum computing protocol and briefly describe the usual quantum gates. In a last stage, we will provide a few examples of routines and computational codes aimed at quantum computers, programmed with Mathematica, which altogether constitute a complete, student-made implementation of Grover's algorithm. The latter exercise is probably the most challenging and instructive part of this work.
2. Classical information and classical computing
2.1 Information storage and encoding
This chapter introduces the basic concepts required to understand how classical
information is stored and manipulated, as well as its physical implementation in today's
computers. With the information contained in this chapter we intend to provide the reader
with some background on classical computing concepts that will be revisited in chapter
4 once we introduce quantum computing. In this way, hopefully, we will be able to
capture the conceptual breakthrough that quantum computation entails.
The smallest unit of classical information is the bit (binary digit). It is a binary variable,
generally represented as 0 or 1, where the number indicates one of two possible values or
states, such as true or false, open or closed, north or south, and so on. We can store one
bit in any electronic device or any other physical system that exists in either of two
possible distinct states. These two states can be, for example, two positions of an electrical
switch, two different directions of magnetisation or polarisation, or two voltage levels
allowed by a circuit. Generally, in today's digital equipment (PCs, smartphones, game
consoles, etc.) bits are implemented by using transistors. [23]
Bits are extremely useful because any discrete value, such as numbers, words and images, can be encoded using sequences of bits. With a single bit, there are only two (2^1) possible patterns to store information in, 0 or 1, which is of limited usefulness. However, every bit we add doubles the possibilities. For example, with two bits we have four (2^2) possible patterns: 00, 01, 10 and 11. With three bits, we have eight (2^3) possible patterns: 000, 001, 010, 011, 100, 101, 110 and 111. In general, n bits yield 2^n different patterns for storing information, so the number of messages (m) that can be delivered by n bits is:
m = 2^n   (2.1)
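Eq. (2.1) is easy to verify by exhaustive enumeration. A minimal Python sketch (the function name is our own, chosen for illustration):

```python
from itertools import product

def bit_patterns(n):
    """Enumerate every distinct pattern that n bits can store."""
    return ["".join(bits) for bits in product("01", repeat=n)]

# With 3 bits there are 2^3 = 8 patterns.
print(bit_patterns(3))       # ['000', '001', '010', '011', '100', '101', '110', '111']
print(len(bit_patterns(3)))  # 8
```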
ASCII, for example, is an encoding convention that represents each typed character by a number. The code uses 8-bit sequences, so the numbers range from 0 to 255, and each is stored in one byte to represent the upper- and lower-case letters of the English alphabet, plus punctuation marks, the digits 0 to 9 and some control information. Thus, if we type a message such as "Hello" into a computer that uses this code, it is stored as follows:
Character:  H          e          l          l          o
Code:       72         101        108        108        111
Byte:       01001000   01100101   01101100   01101100   01101111
Table 2.1. Example of correspondence between alphabetic letters and ASCII codes
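The correspondence in Table 2.1 can be reproduced with a couple of lines of Python, using the built-in `ord` function (which returns a character's code point) and binary string formatting:

```python
message = "Hello"
codes = [ord(c) for c in message]           # ASCII code of each character
bytes_ = [format(n, "08b") for n in codes]  # each code written as an 8-bit byte

print(codes)   # [72, 101, 108, 108, 111]
print(bytes_)  # ['01001000', '01100101', '01101100', '01101100', '01101111']
```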
However, the number of patterns available in ASCII is insufficient to represent the alphabets of many Asian and some Eastern European languages, so Unicode had to be designed. This encoding typically stores each character in 2 bytes, i.e., 16-bit sequences, with which 65,536 different patterns can be composed: enough to write texts in languages such as Chinese, Japanese and Hebrew.
A byte works well for characters, but for computational purposes we are also very much interested in manipulating numbers. Integer numbers can be easily encoded with bits by writing them in binary (rather than decimal) base. As an example, the table below shows the correspondence for the numbers from 0 to 7, for which we need 3 bits, offering 8 (2^3) different patterns.
000 → 0    100 → 4
001 → 1    101 → 5
010 → 2    110 → 6
011 → 3    111 → 7
Table 2.2. Example of correspondence between binary and decimal numbers
In general, to encode integers in binary notation, each position is associated with a weight, just as in the decimal system (ones, tens, hundreds, etc.). In binary notation, the rightmost digit is associated with 2^0, the next position to the left with 2^1, the next with 2^2, and so on up to 2^(n-1), where n is the number of bits. To obtain the corresponding value, as in base ten, we multiply the value of each digit by the weight associated with its position and then sum the results. The following tables show an example, comparing decimal and binary notation (with 8 bits).
Decimal pattern: 2 3 3
  Digit   Weight    Result
    3     × 1         3
    3     × 10       30
    2     × 100     200
  Sum: 233

Binary pattern: 1 1 1 0 1 0 0 1
  Digit   Weight       Result
    1     × 1 (2^0)      1
    0     × 2 (2^1)      0
    0     × 4 (2^2)      0
    1     × 8 (2^3)      8
    0     × 16 (2^4)     0
    1     × 32 (2^5)    32
    1     × 64 (2^6)    64
    1     × 128 (2^7)  128
  Sum: 233
As we have seen previously, with 8 bits we can only store 256 different numbers, so to achieve a larger range integers are usually stored in 8 bytes (64 bits), which can hold numbers between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807.
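The digit-times-weight procedure from the tables above can be written as a short Python sketch (the function name is our own; Python's built-in `int(..., 2)` does the same conversion):

```python
def binary_to_decimal(pattern):
    """Sum digit × weight, the rightmost digit carrying weight 2^0."""
    total = 0
    for position, digit in enumerate(reversed(pattern)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("11101001"))  # 233, as in the table above
print(int("11101001", 2))             # 233, Python's built-in conversion agrees
```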
2.2 Information manipulation
With all of the above, we can now understand the basis of classical information, but how
are the individual bits manipulated in a computer?
To understand how they work, we provide a review of the most commonly used logic gates, along with their descriptions and their logic tables or truth tables (inputs A and B, output Q).

NOT: its output is the opposite of the input.
  A | Q
  0 | 1
  1 | 0

AND: both input values must be true to obtain a true output.
  A B | Q
  0 0 | 0
  0 1 | 0
  1 0 | 0
  1 1 | 1

OR: if at least one of the input values is true, we get a true output.
  A B | Q
  0 0 | 0
  0 1 | 1
  1 0 | 1
  1 1 | 1

NAND: if one or both of the values are false, it generates a true output.
  A B | Q
  0 0 | 1
  0 1 | 1
  1 0 | 1
  1 1 | 0

NOR: if one or both of the values are true, it generates a false output.
  A B | Q
  0 0 | 1
  0 1 | 0
  1 0 | 0
  1 1 | 0

XOR: only generates a true output if the inputs are different; equal inputs generate a false output.
  A B | Q
  0 0 | 0
  0 1 | 1
  1 0 | 1
  1 1 | 0

XNOR: opposite to XOR, only generates a true output if the inputs are the same.
  A B | Q
  0 0 | 1
  0 1 | 0
  1 0 | 0
  1 1 | 1

Table 2.3. List of logic gates
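The truth tables in Table 2.3 can be checked with Python's bitwise operators; this sketch (our own naming) defines each gate as a one-line function and prints the combined truth table:

```python
# Each gate as a function on bits (0 = false, 1 = true), per Table 2.3.
NOT  = lambda a: 1 - a
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NAND = lambda a, b: 1 - (a & b)
NOR  = lambda a, b: 1 - (a | b)
XOR  = lambda a, b: a ^ b
XNOR = lambda a, b: 1 - (a ^ b)

print("A B | AND OR NAND NOR XOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), OR(a, b), NAND(a, b),
              NOR(a, b), XOR(a, b), XNOR(a, b))
```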
A good example to understand how logic gates work is their use for integer addition. Now
that we know how to encode integer numbers with bits, their sum in binary notation will
not be so different from the sum in base ten.
  Carry     1 0 0 1 1 1 0
            0 1 0 0 1 1 1 0
          + 0 1 0 0 1 0 1 1
          ---------------------
            1 0 0 1 1 0 0 1
We add the bits in the same position, starting from the right, and if the base value is reached, we restart the addition from zero and carry one to the next bit. Therefore we need, for each pair of bits, an output for the sum and an output for the carried value. One can check from the truth tables of the logic gates that the XOR gate gives the result of the addition, while the AND gate determines whether or not we carry one to the next position. The circuit formed by these two gates is called a half-adder.
By connecting a half-adder circuit with 7 full-adder circuits, we can add 8-bit binary numbers, see Fig. 2.3. This is precisely the kind of computational circuit that enables an ordinary calculator or computer to carry out simple additions. [23]
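The ripple-carry circuit described above can be mimicked in software using only the XOR and AND (plus OR) gate operations; the sketch below (our own function names) reproduces the 8-bit addition worked out earlier:

```python
def half_adder(a, b):
    """Sum bit from XOR, carry bit from AND."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Two chained half-adders; OR combines the two possible carries."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_8bit(x, y):
    """Ripple-carry addition of two 8-bit patterns given as bit strings."""
    carry, result = 0, ""
    for a, b in zip(reversed(x), reversed(y)):
        s, carry = full_adder(int(a), int(b), carry)
        result = str(s) + result
    return result

print(add_8bit("01001110", "01001011"))  # '10011001' (78 + 75 = 153)
```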
3. Quantum mechanics background

The aim of this chapter is to present to the reader some of the key concepts and phenomena on which quantum computing is based. We will introduce the concepts of superposition and entanglement, and discuss the probabilistic nature of quantum physics as well as the so-called measurement problem. Rather than providing a self-contained introduction to quantum mechanics, which the reader can find in several textbooks (e.g. Refs. [24], [25]), we will address these concepts through simple examples and analogies between classical physics and its quantum limit.
Some of these ideas may be difficult to grasp, as they belong to the microscopic domain and do not match our everyday experience. That is why we start this chapter by discussing Dirac's polariser experiment [2]. This is a good example to introduce some quantum concepts because photons, being bosonic particles, can accumulate to produce macroscopic effects that we can observe directly.
Imagine shining a beam of unpolarised light through a vertical polariser. The intensity of light observed on the other side will be lower, since only the fraction of light polarised parallel to the filter passes through it. If we then add a horizontally oriented filter, no light reaches the other side at all, as all of it is now vertically polarized.
If the light beam is not able to pass through two filters, it would be logical to think that adding a third filter cannot change this fact. But, surprisingly, if between the two aforementioned filters we place a polariser at an angle of 45°, we see that part of the light passes through all three filters (Fig. 3.2).
The intensity is proportional to the square of the emergent amplitude, and the squares of the components at 0° and 90° add up to the total: E² = E_x² + E_y².
The fraction of light passing through the first polariser is only the vertical component (E'_y = E sin θ, with θ the angle between the polarization direction and the x-axis). As there is no component with projection on the x-axis, no light reaches the other side if a horizontal polarizer is added (E''_x = 0).
Using this classical electromagnetism approach we can estimate what proportion of the radiation passes through each polarizer. But if only vertically polarized light passes through the first filter, how is it possible that it can also pass through two other filters with different polarization? We need to approach the problem from the point of view of quantum physics. To this end we keep in mind that, as revealed by the photoelectric effect [24], light is made of a collection of photons, each with its own frequency and polarization. The photon is the indivisible unit of light and, therefore, it is not possible for only a fraction of it to pass through the filter. There are only two possible outcomes: to pass or not to pass. We deduce that there is a certain probability that a photon will be polarised in one direction or the other. Thus, when a beam of light passes through a polarizer, some photons pass and others do not, and therefore the intensity of the emerging light is lower.
The quantum state of a photon with any polarisation can be written as:
|ψ⟩ = c_x |x⟩ + c_y |y⟩
In our case |x⟩ would be the state fully polarized along the x-axis and |y⟩ would be the state polarized along the y-axis. So the photon state is a superposition of two orthogonal states (⟨x|y⟩ = 0). Necessarily, the probability that the photon is polarised in some direction must be 100%. Therefore, the wave function will be normalized:
|c_x|² + |c_y|² = 1
The squared moduli of the coefficients c_x and c_y give the probability that the photon is polarised in one direction or the other.
After crossing the first (vertical) filter, the photon state is |y⟩, which can be rewritten in the basis of the 45° polariser as |y⟩ = c'_x |x'⟩ + c'_y |y'⟩, where both c'_x and c'_y are finite. Now the photons can pass through the second polarizer with probability |c'_y|² and collapse to |ψ'⟩ = |y'⟩ (Fig. 3.4 (b)).
As we can see, |y'⟩ is no longer orthogonal to |x⟩, nor to |y''⟩ if we rewrite the axes for the horizontal polarizer (Fig. 3.4 (c)). Consequently, photons will be able to pass through the third filter with a finite probability.
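The per-photon probabilities can be computed from the projection rule |⟨filter|state⟩|² = cos²(Δθ) (Malus's law at the single-photon level). A minimal Python sketch (the function name is our own) compares the two-filter and three-filter arrangements:

```python
import math

def pass_probability(state_angle, filter_angle):
    """Probability that a photon polarised at state_angle passes a filter
    whose transmission axis is at filter_angle: cos^2 of the relative angle."""
    return math.cos(math.radians(filter_angle - state_angle)) ** 2

# Photon already vertically polarised (90°) after the first filter.
p_two   = pass_probability(90, 0)                              # straight to horizontal
p_three = pass_probability(90, 45) * pass_probability(45, 0)   # with 45° filter between

print(p_two)    # ~0: no photon passes crossed polarisers
print(p_three)  # 0.25: a quarter of the photons now get through
```

This reproduces the surprising result of the experiment: inserting an extra filter increases the transmitted intensity from zero to one quarter.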
Figure 3. 4. Polarization direction and new axes for the three filters
So far we have managed to introduce the concept of superposition and some other basic
ideas of quantum physics. Thus, let's leave aside Dirac's experiment and move on to
another important phenomenon: entanglement.
For a system of identical particles, the Hamiltonian is unaffected by the exchange of any two of them:
P̂₁₂ Ĥ = Ĥ P̂₁₂ ;  [P̂₁₂, Ĥ] = 0   (3.8)
That is, the Hamiltonian (Ĥ) and the permutation operator (P̂₁₂) commute. This in turn implies that they share a complete set of eigenfunctions. Therefore, when applying the operator to a wave function, we obtain the same function multiplied by its eigenvalue, λ:
P̂₁₂ ψ = λ ψ   (3.9)
Since P̂₁₂ is a Hermitian operator, λ must be a real number. If we apply the operator twice:
P̂₁₂² ψ = λ² ψ   (3.10)
But if we permute the coordinates of a wave function twice, it remains unchanged, so λ² = 1. From this we deduce that λ = ±1. Particles with λ = +1 are called bosons (e.g., photons), and we say they are symmetric with respect to the permutation. On the other hand, particles with λ = −1 are called fermions (e.g., electrons), which are antisymmetric with respect to the permutation.
Let us imagine now that we have a system formed by two particles that do not interact (perhaps because they are far apart in space). The Hamiltonian of the system could be written as
Ĥ = Σᵢ ĥᵢ   (3.11)
where ĥᵢ is the Hamiltonian acting on the i-th single particle. We could think that, since there is separation of variables, the wave function could be written as the product of the wave functions of the independent particles:
ψ = φ_a(τ₁) φ_b(τ₂)   (3.12)
However, the wave function (3.12) does not have the required symmetry properties because it does not fulfill Eq. (3.9). As we have seen above, for indistinguishable particles it must be symmetric or antisymmetric with respect to the permutation. If our system consists of two bosons, the required wave function is the symmetric combination
ψ = N [φ_a(τ₁) φ_b(τ₂) + φ_a(τ₂) φ_b(τ₁)]   (3.13)
whereas for two fermions it must be the antisymmetric combination, which can be written as a determinant:
ψ = N [φ_a(τ₁) φ_b(τ₂) − φ_a(τ₂) φ_b(τ₁)] = N | φ_a(τ₁) φ_a(τ₂) ; φ_b(τ₁) φ_b(τ₂) |   (3.14)
Incidentally, we note from Eq. (3.14) that if φ_a = φ_b, the wave function vanishes. This is a manifestation of the Pauli principle, which states that two identical fermions in a system cannot occupy the same spin-orbital.
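The antisymmetry of Eq. (3.14) and the Pauli principle can be checked numerically with a toy example. In the Python sketch below, two hypothetical single-particle orbitals (chosen arbitrarily as sine and cosine, with N = 1 for simplicity) are combined antisymmetrically:

```python
import math

# Arbitrary stand-ins for two single-particle orbitals phi_a and phi_b.
phi_a = lambda t: math.sin(t)
phi_b = lambda t: math.cos(t)

def psi(t1, t2):
    """Antisymmetric combination of Eq. (3.14), with N = 1."""
    return phi_a(t1) * phi_b(t2) - phi_a(t2) * phi_b(t1)

print(psi(0.3, 1.1), -psi(1.1, 0.3))  # swapping the particles flips the sign

def psi_same(t1, t2):
    """Same construction with phi_a = phi_b: it vanishes (Pauli principle)."""
    return phi_a(t1) * phi_a(t2) - phi_a(t2) * phi_a(t1)

print(psi_same(0.3, 1.1))  # 0.0
```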
One last fundamental aspect of quantum mechanics, which we will rely on in future
chapters when discussing quantum computing, is that of time-dependent perturbations in
a quantum system.
The evolution of the state of a system in time is given by the time-dependent Schrödinger equation [26]:
−(ℏ/i) ∂ψ/∂t = Ĥ(q, t) ψ   (3.15)
where ℏ is the reduced Planck constant, ψ the system's wave function and Ĥ(q, t) the Hamiltonian operator.
Up to this point, and throughout the degree, when studying this equation we have focused on systems described by time-independent Hamiltonians. In that case it is reasonable to make a separation of variables and write ψᵢ as a product of a function of the space coordinates (q) and a function of time (t):
ψᵢ(q, t) = fᵢ(q) · χᵢ(t)   (3.16)
so that
Ĥ(q) ψᵢ(q, t) = Ĥ(q) fᵢ(q) · χᵢ(t) = χᵢ(t) Ĥ(q) fᵢ(q)   (3.17)
If we need to manipulate the state of a quantum system, however, the Hamiltonian becomes time-dependent, and the solutions are no longer stationary states but combinations of stationary states.
To illustrate how this changes the behavior of quantum systems, we will use one of the simplest examples: the particle in a 1D box [26].
Ĥ ψ = (p̂²/2m) ψ = −(ℏ²/2m) d²ψ/dx² = E ψ   (3.19)
Its normalized solutions and the corresponding probability densities are
ψₙ(x) = √(2/L) sin(nπx/L)   (3.20)
|ψₙ(x)|² = (2/L) sin²(nπx/L)
Fig. (3.5) shows the wave functions and probability densities of the two lowest states (n = 1 and n = 2), which we need in order to study the time-dependent example below.
Figure 3. 5. Plots for ψ(x) and |ψ(x)|2 using Mathematica
As we can see, in either of the two stationary states there is a finite probability of finding the particle on either side of the box at any time. This is not the case for non-stationary states.
Let us suppose, as an example, that we induce a spectroscopic transition between the ground state and the first excited state of the particle in the box. During the transition, the state is a superposition of the two stationary states.
If we compare (3.23) with (3.20) we see that, unlike in stationary states, the probability density of non-stationary states depends on time. Plotting the evolution of |ψ(x, t)|² with time, we can see that the location of the particle changes and it bounces from wall to wall. Therefore, at certain times it becomes practically impossible to find the particle on one side of the box.
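This bouncing behavior can be sketched numerically. The Python snippet below (an illustrative sketch, not the thesis's Mathematica code) evaluates the density of an equal-weight superposition of the n = 1 and n = 2 states, in units where ℏ = m = L = 1; the cross term oscillates in time at the frequency (E₂ − E₁)/ℏ:

```python
import math

# Particle-in-a-box energies E_n = n^2 pi^2 / 2 in units hbar = m = L = 1.
E1, E2 = math.pi ** 2 / 2, 4 * math.pi ** 2 / 2

def phi(n, x):
    """Stationary state of Eq. (3.20) with L = 1."""
    return math.sqrt(2) * math.sin(n * math.pi * x)

def density(x, t):
    """|psi(x,t)|^2 for the equal superposition of n = 1 and n = 2:
    (1/2)[phi1^2 + phi2^2 + 2 phi1 phi2 cos((E2 - E1) t)]."""
    cross = 2 * phi(1, x) * phi(2, x) * math.cos((E2 - E1) * t)
    return 0.5 * (phi(1, x) ** 2 + phi(2, x) ** 2 + cross)

# At t = 0 the particle piles up on the left; half an oscillation later,
# on the right: it bounces from wall to wall.
half_period = math.pi / (E2 - E1)
print(density(0.25, 0.0), density(0.75, 0.0))                    # left-heavy
print(density(0.25, half_period), density(0.75, half_period))    # right-heavy
```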