
This article has been accepted for publication in IEEE Access. This is the author's version, which has not been fully edited; content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2023.3236409

Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.
Digital Object Identifier 10.1109/ACCESS.2017.DOI

Experimental Evaluation of Quantum Machine Learning Algorithms

RICARDO DANIEL MONTEIRO SIMÕES¹, PATRICK HUBER¹, NICOLA MEIER¹, NIKITA SMAILOV¹, RUDOLF M. FÜCHSLIN¹,² AND KURT STOCKINGER¹
¹ZHAW Zurich University of Applied Sciences, Winterthur, Switzerland
²European Centre for Living Technology, Ca' Bottacin, Dorsoduro 3911 Calle Crosera, 30123 Venice, Italy
Corresponding author: Kurt Stockinger (e-mail: Kurt.Stockinger@zhaw.ch).

ABSTRACT Machine learning and quantum computing are both areas that have seen considerable progress in recent years. The combination of these disciplines holds great promise for both research and practical applications. Recently, there have also been many theoretical contributions of quantum machine learning algorithms, with experiments performed on quantum simulators. However, most questions concerning the potential of machine learning on quantum computers are still unanswered, such as: How well do current quantum machine learning algorithms work in practice? How do they compare with classical approaches? Moreover, most experiments use different datasets, and hence it is currently not possible to systematically compare different approaches.
In this paper we analyze how quantum machine learning can be used for solving small, yet practical problems. In particular, we perform an experimental analysis of kernel-based quantum support vector machines and quantum neural networks. We evaluate these algorithms on five different datasets using different combinations of quantum feature maps. Our experimental results show that quantum support vector machines outperform their classical counterparts on average by 3 to 4% in accuracy, both on a quantum simulator as well as on a real quantum computer. Moreover, quantum neural networks executed on a quantum computer further outperform quantum support vector machines on average by up to 5% and classical neural networks by 7%.

INDEX TERMS Machine learning, quantum computing, experimental evaluation

I. INTRODUCTION

Hardly any other field of research in computer science has made such rapid progress in recent years as machine learning. It is used successfully in various areas both in research and in industry [19]. However, there are also limits and unsolved problems in practical applications due to the enormous computing resource requirements of large machine learning algorithms such as transformer-based language models [30]. Moreover, machine learning methods are often complex and based on large amounts of data. Therefore, depending on the task, the algorithms can become extremely computationally intensive.

A novel type of computer hardware, quantum computers, promises considerable speed-up so that these algorithms become useful for a broad class of users [5], [35]. Moreover, companies such as IBM and Amazon already provide public access to quantum computers via Python interfaces [1], [11]. This allows active research in quantum computing also for small to medium-sized research institutions or companies that do not have the computing resources of large corporations.

The field of quantum machine learning has gained considerable attention in the last years [4], [7], [10], [14]. However, it is still relatively unclear what kind of problems can be solved practically today and which ones remain only of theoretical nature.

In this paper we will perform an experimental evaluation of quantum support vector machines (QSVMs) as well as quantum neural networks (QNNs) and compare them against their classical counterparts. In our first set of experiments we will evaluate kernel-based SVMs [14]. Classical kernel-based SVMs have been studied well and have been widely applied. However, the classical approaches suffer in situations where the feature space becomes large and the kernel functions become computationally expensive to estimate. These limitations can be overcome by using quantum algorithms that enable the exploitation of an exponentially large quantum state

space through controllable entanglement and interference.

In addition, we study various implementations of QNNs based on different quantum circuits. One of the open research questions is how to design optimal quantum circuits, both for QSVMs as well as QNNs, such that the quantum algorithm shows the best learning behavior for practical machine learning problems. In order to address this question, we will perform an experimental evaluation of kernel-based QSVMs as well as QNNs for classification problems using five different datasets.

This paper makes the following contributions:

• We perform a detailed experimental evaluation of classical kernel-based SVMs and compare the accuracy against QSVMs running both on a quantum simulator and a real, publicly available quantum computer. We also compare the performance of classical neural networks against quantum neural networks.
• Our experimental evaluation on five different datasets shows that QSVMs outperform their classical counterparts on average by 3 to 4%. Moreover, QNNs further outperformed QSVMs by up to 5%.
• Further comparisons with classical neural networks demonstrate that QNNs also outperform those whilst using far fewer parameters, which has also been confirmed in comparable experiments [9].
• These results demonstrate that quantum computing can be successfully applied for small-scale machine learning problems in practice already today.

II. QUANTUM COMPUTING FOUNDATIONS

Quantum computing is a new computational paradigm based on the fundamental principles of quantum mechanics. The major concepts that can be leveraged for performing calculations are superposition and entanglement, which basically means that quantum computation does to information what traditional quantum mechanics does to elementary particles and photons: it characterizes these fundamental entities by wave- and particle-like aspects. We will briefly explain these concepts below.

In a classical computer, information is stored in bits whose states can either be 0 or 1. In a quantum computer, the information is stored as qubits (quantum bits), which either represent the values 0 and 1 or a linear combination of both. Expressed in mathematical terms, a qubit is a vector of length 1 in a two-dimensional complex Hilbert space $H^2$ (basis $|0\rangle$, $|1\rangle$). A general qubit $|q\rangle$ is given by $|q\rangle = c_0|0\rangle + c_1|1\rangle$ with $c_{0,1} \in \mathbb{C}$ and $|c_0|^2 + |c_1|^2 = 1$. The normalization means that a qubit is characterized by three real numbers ($c_n = a_n + ib_n$, $|c_n|^2 = a_n^2 + b_n^2$).

Classical bits can be combined into registers, which contain bit sequences. In contrast, the state of a quantum register of length n can be a linear combination of all possible bit sequences of length n. Expressed in mathematical terms, a quantum register is a state in the tensor product of n two-dimensional Hilbert spaces $H^2$, i.e. a Hilbert space of dimension $2^n$. The basis vectors can be written as:

$$|b_1\rangle \otimes \cdots \otimes |b_n\rangle = |b_1 b_2 \ldots b_n\rangle \qquad (1)$$

with $b_k \in \{0, 1\}$. Here, $|b_1 b_2 \ldots b_n\rangle$ is a basis vector in $H^{2^n}$. A general vector (or state, as it is called in quantum mechanics) is a linear combination of these basis states:

$$|Q\rangle \in H^{2^n}: \quad |Q\rangle = \sum_{b_k \in \{0,1\}} c_{b_1 b_2 \ldots b_n} |b_1 b_2 \ldots b_n\rangle \qquad (2)$$

Quantum mechanics requires that the state $|Q\rangle$ is normalized:

$$\sum_{b_k \in \{0,1\}} |c_{b_1 b_2 \ldots b_n}|^2 = 1 \qquad (3)$$

These linear combinations are called superpositions. There are two types of processes one can apply to such quantum registers:

• Quantum dynamics, which are unitary transformations (rotations and reflections) of $|Q\rangle$. These unitary transformations are reversible and fully deterministic.
• Measurements: these are projections combined with normalizations. For our purposes this means that a measurement M maps the state $|Q\rangle$ of a quantum register stochastically onto one of the basis states $|b_1 b_2 \ldots b_n\rangle$. The probability for this to happen is given by $|c_{b_1 b_2 \ldots b_n}|^2$. A measurement is irreversible and surjective (which implies that in a measurement, one loses information).

It helps to imagine a quantum computation as a series of unitary operations (true quantum operations) finalized with one measurement (more involved schemes are used, though). Importantly, rotating $|Q\rangle$ affects "all basis states at once", i.e. all basis vectors in Equation 2 are manipulated in parallel. Thus, a quantum computer is a highly parallel supercomputer.

The concept of entanglement implies that the combined state of qubits contains more information than the qubits have independently. This is easy and worthwhile to understand. We explain it in some detail, because a widespread misconception of quantum computing sees its advantages solely with respect to the mentioned "quantum parallelism".

We start with the observation that a single qubit is determined by three real numbers. A collection of n independent qubits is therefore characterized by 3n real values. A superposition is given by $2^n$ complex numbers subject to the normalization condition in Equation 3. This implies that the state of $|Q\rangle$ is determined by $2 \cdot 2^n - 1$ real numbers. For the case of n = 2, this means that two independent qubits are characterized by six parameters, whereas the state of a quantum register of length n = 2 is determined by seven numbers.

Mathematically, this means that in general, the state of a quantum register |Q⟩ cannot be written as a tensor product of two independent qubits $q^A$, $q^B$. The equation

$$q^A \otimes q^B = (c^A_0 |0\rangle + c^A_1 |1\rangle) \otimes (c^B_0 |0\rangle + c^B_1 |1\rangle) = c^A_0 c^B_0 |0\rangle \otimes |0\rangle + c^A_0 c^B_1 |0\rangle \otimes |1\rangle + c^A_1 c^B_0 |1\rangle \otimes |0\rangle + c^A_1 c^B_1 |1\rangle \otimes |1\rangle \qquad (4)$$

describes the tensor product of two independent qubits. The state of a two-qubit register is given by

$$|Q\rangle = c_{00} |00\rangle + c_{01} |01\rangle + c_{10} |10\rangle + c_{11} |11\rangle \qquad (5)$$

For a general choice of $c_{00}, c_{01}, c_{10}, c_{11}$ with $|c_{00}|^2 + |c_{01}|^2 + |c_{10}|^2 + |c_{11}|^2 = 1$, the equations $c_{00} = c^A_0 c^B_0$, $c_{01} = c^A_0 c^B_1$, $c_{10} = c^A_1 c^B_0$, $c_{11} = c^A_1 c^B_1$ cannot be solved under the normalization constraints $|c^A_0|^2 + |c^A_1|^2 = 1$ and $|c^B_0|^2 + |c^B_1|^2 = 1$.

Physically, there is in general no way to interpret the entangled state of a quantum register in terms of a collection of individual qubits.

In order to manipulate qubits, quantum circuits are used. These circuits are similar to their classical counterparts, but they contain additional logical operators and gates. One of these gates is the Hadamard gate, which brings qubits into a superposition.

Another important type of operator are the controlled Pauli gates. Single qubits can be visualized as points on a two-dimensional sphere, the so-called Bloch sphere. The Bloch sphere is embedded into a three-dimensional space with coordinate axes x, y, z. Note well that these coordinates have no direct relation to the actual physical space, but are primarily a consequence of a specific representation of qubits (a more detailed historical analysis of the origin of the Bloch sphere would tell a somewhat different story, which is, however, not of relevance in this context).

Controlled manipulation of qubits can then be understood as rotations around the x-, y- and z-axes, and consequently, these gates are also called controlled X-, Y- and Z-gates. As it turns out, since these rotations of one qubit depend on the state of another qubit, the application of such a controlled gate leads to quantum entanglement. Mathematically, all quantum gates can be considered as unitary matrix operations.

An example of a simple quantum circuit is given in Figure 1. The circuit consists of three qubits q0, q1 and q2. First, all qubits are initialized with the ground state |0⟩. Then, the Hadamard gate is applied on qubit q1, followed by a controlled X-gate operation with qubit q2 and a controlled X-gate operation between qubits q0 and q1, followed by a Hadamard gate applied on qubit q0. Finally, the qubits q0 and q1 are measured.

FIGURE 1: Example of a simple quantum circuit with three qubits. "H" refers to the Hadamard gate and "+" to a controlled X-gate. Finally, the states of qubits q0 and q1 are measured.
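For illustration, the circuit of Figure 1 can be expressed in a few lines of Qiskit. The following is a minimal sketch assuming a pre-1.0 Qiskit version (where BasicAer and execute are available, as used in Section V); the control/target assignment of the gates is our reading of the figure:

```python
from qiskit import QuantumCircuit, BasicAer, execute

# Three qubits; two classical bits for the measured qubits q0 and q1.
qc = QuantumCircuit(3, 2)
qc.h(1)      # Hadamard gate puts q1 into superposition
qc.cx(1, 2)  # controlled X-gate with q2 (q1 as control; an assumption)
qc.cx(0, 1)  # controlled X-gate between q0 and q1
qc.h(0)      # Hadamard gate on q0
qc.measure([0, 1], [0, 1])

# Repeated sampling approximates the probabilities |c|^2 of the basis
# states onto which the register collapses (cf. Equation 3).
backend = BasicAer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)
```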
III. RELATED WORK

Over the last years, several different approaches to quantum machine learning algorithms [7] have been proposed. Some approaches are based on quantum support vector machines (QSVMs) [14], [25], while others are based on neural networks [2], [6], [12], [18], [22], [28], [34] and can be referred to as quantum neural networks (QNNs).

Let us start with analyzing QSVMs in more detail. [14] propose a variational quantum circuit to classify data similar to SVMs. This approach uses a variational circuit that generates a separating hyperplane in the quantum feature space. A further approach proposed by the same authors is called the quantum kernel estimator, which is used to estimate the kernel function and optimize a classifier. To evaluate the approach, a synthetic dataset is used that contains 20 data points per label. In our experiments we also use this dataset.

One of the first studies to demonstrate that quantum neural networks show an advantage over their classical counterparts is presented in [2]. The authors evaluate two different feature maps for the quantum neural network. One feature map is based on the circuit introduced in [14]. The second circuit uses parameterized RY-gates, which are followed by CNOT-gates that are applied between every pair of qubits in the circuit. Finally, another set of parameterized RY-gates is used. The QNN is evaluated both on a quantum simulator as well as on a real quantum computer using the Iris dataset that we also use in our experiments.

A quanvolutional neural network architecture is proposed in [16]. The basic idea is to replace a convolutional filter of a CNN with a quanvolutional layer that transforms input data with a random quantum circuit. The approach is evaluated with image data on a quantum simulator. Our approach, however, is evaluated on various numerical datasets both on a quantum simulator as well as on real quantum hardware.

[8] introduce a hybrid classical-quantum approach called Quantum Long Short-Term Memory. The idea is to replace parts of a classical RNN with a variational quantum circuit. The approach is evaluated on a quantum simulator but not on real quantum hardware.

In this paper we evaluate existing approaches based on QSVMs and QNNs. Previous algorithms have mostly been evaluated either only on a quantum simulator or on a single dataset. Hence, the comparability of the algorithms as well as the generalizability of the approaches to other datasets has not been demonstrated in depth. In our paper, we evaluate various quantum machine learning approaches on five different datasets, both on a quantum simulator as well as on real quantum hardware.

The major question we study in this paper is how well these algorithms perform on small, yet real machine learning problems using publicly available quantum hardware.

IV. MACHINE LEARNING APPROACHES

A. KERNEL-BASED SVMS: CLASSICAL APPROACH

A feature function ϕ(⃗x) is a mapping of a data point ⃗x into a feature space of higher dimension. This is advantageous for classification because it opens up more possibilities for a hyperplane to separate data points of different classes.

The so-called kernel trick allows re-writing the linear decision function used by SVMs in terms of a dot product between data points. In combination with a feature function, this dot product can be substituted with a kernel function $k(\vec{x}, \vec{x}^{(i)}) = \phi(\vec{x})^T \cdot \phi(\vec{x}^{(i)})$, for a given training data point $\vec{x}^{(i)}$ and a data point $\vec{x}$ for which the decision is made [13]. The decision function in its final form, $f(\vec{x}) = b + \sum_i \alpha_i k(\vec{x}, \vec{x}^{(i)})$, introduces a shortcut to the explicit calculation of the dot product between feature vectors, which can be of infinite dimension. Furthermore, the resulting function is linear in the feature space.

The term $\sum_i \alpha_i k(\vec{x}, \vec{x}^{(i)})$ of the function is built from the kernel matrix, which contains the similarity values between each pair of training data points.
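To make the kernel matrix concrete, the following sketch computes it explicitly and passes it to an SVM through scikit-learn's precomputed-kernel interface. The RBF kernel and the random data are purely illustrative; the same mechanism is reused for the quantum kernel in the next section:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# Synthetic stand-ins for a training and test split (illustrative only).
rng = np.random.default_rng(seed=0)
X_train, y_train = rng.normal(size=(80, 4)), rng.integers(0, 2, 80)
X_test = rng.normal(size=(20, 4))

# Kernel matrix: pairwise similarities k(x, x') between training points.
K_train = rbf_kernel(X_train, X_train)

# SVC accepts a precomputed kernel matrix instead of raw features.
svm = SVC(kernel="precomputed").fit(K_train, y_train)

# At prediction time, the similarities between test and training points
# are needed (shape: n_test x n_train).
K_test = rbf_kernel(X_test, X_train)
y_pred = svm.predict(K_test)
```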
B. KERNEL-BASED SVMS: QUANTUM APPROACH

Let us now discuss how classification can be implemented on a quantum computer. In principle, we need the following steps:

• Transform the classical data points into quantum data points with a quantum circuit.
• Use a parameterized quantum circuit to classify the data.
• Measure the output.
• Send the results of the quantum kernel to a classical SVM for final classification.

These steps comprise a so-called variational quantum classifier [32] leveraging parameterized quantum circuits. Since current quantum computers are still quite error-prone, a common approach is to implement one part of the end-to-end process on a quantum computer and the remaining parts on a classical computer. In particular, [14] suggest a quantum kernel estimator, where the kernel function is implemented as a quantum kernel, i.e. a quantum circuit, which translates classical data into quantum states via a quantum feature map and then builds the inner product of these quantum states. The inner product is used for further processing by the classical SVM. As a final step, the classification is performed by a kernel-based SVM on a conventional computer using the calculated kernel. In summary, the calculation of the kernel matrix is performed by a quantum algorithm, whereas the classical SVM algorithm is executed on a conventional computer.

Let us describe the QSVM approach more formally. According to Thomsen et al. [31], using an already classified data point $\vec{x}^{(i)}$ and a data point $\vec{x}$ to be classified, the corresponding decision function uses the kernel function to classify $\vec{x}$, as shown in Equation 6:

$$\mathrm{QSVM}(\vec{x}) = \mathrm{sign}\left(\sum_{i=1}^{M} a_i y_i \left\langle \phi(\vec{x}^{(i)}), \phi(\vec{x}) \right\rangle_{H} + b\right) \qquad (6)$$

The quantum analogue of the kernel function, with an exponentially large space of density matrices $\mathcal{S}(2^q)$ spanned across q qubits as the feature space, can be seen in Equation 7 [31]:

$$\psi: \mathbb{R}^S \to \mathcal{S}(2^q), \qquad \vec{x} \mapsto |\psi(\vec{x})\rangle\langle\psi(\vec{x})| \quad\Rightarrow\quad k(\vec{x}, \vec{x}^{(i)}) = \left|\left\langle \psi(\vec{x}^{(i)}) \,\middle|\, \psi(\vec{x}) \right\rangle\right|^2 \qquad (7)$$

As previously stated, the quantum circuit which performs the transformation into the quantum space is called a quantum feature map. Typical quantum feature maps are the Z-feature-map, the ZZ-feature-map and the Pauli-feature-map [14]. An example of the Pauli-feature-map, which is the most generic feature map, is shown in Figure 2. It consists of two different quantum gates, namely the Hadamard gate, which puts qubits into superposition, and a parameterized P-gate (phase gate). In addition, we can see the controlled X-gate ("+"), which enables entanglement between qubits.

FIGURE 2: Example of a quantum kernel based on the Pauli-feature-map. For simplicity, the figure only shows parts of the circuit.

The circuit can also be stacked and thus made wider in order to design even more complex feature maps, resulting in a quantum circuit with a larger depth. However, due to the limitations of current quantum devices, larger quantum circuits often lead to a higher error rate. Hence, designing optimal quantum kernels for SVMs is still an unsolved research problem. One goal of this paper is to evaluate various feature maps for solving small, yet practical machine learning problems using QSVMs.
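As an illustration of this division of labor, the sketch below estimates the kernel matrix with a quantum feature map and hands it over to a classical SVM. It assumes Qiskit 0.x with the qiskit-machine-learning add-on (class names such as QuantumKernel changed across later versions) and uses the ZZ-feature-map as an example quantum kernel:

```python
import numpy as np
from qiskit import BasicAer
from qiskit.circuit.library import ZZFeatureMap
from qiskit.utils import QuantumInstance
from qiskit_machine_learning.kernels import QuantumKernel
from sklearn.svm import SVC

# Synthetic stand-ins for a training and test split of one dataset.
rng = np.random.default_rng(seed=0)
X_train, y_train = rng.normal(size=(80, 4)), rng.integers(0, 2, 80)
X_test = rng.normal(size=(20, 4))

# Quantum feature map: encodes a classical data point into a quantum state.
feature_map = ZZFeatureMap(feature_dimension=4, reps=2)

# The quantum kernel evaluates the inner products of the encoded states.
quantum_instance = QuantumInstance(BasicAer.get_backend("qasm_simulator"),
                                   shots=1024)
qkernel = QuantumKernel(feature_map=feature_map,
                        quantum_instance=quantum_instance)

# The kernel matrices are computed on the (simulated) quantum device ...
K_train = qkernel.evaluate(x_vec=X_train)
K_test = qkernel.evaluate(x_vec=X_test, y_vec=X_train)

# ... while the SVM itself runs classically, exactly as in Section IV-A.
svm = SVC(kernel="precomputed").fit(K_train, y_train)
y_pred = svm.predict(K_test)
```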
C. QUANTUM NEURAL NETWORK

The design of the quantum neural network is inspired by previous work of Havlíček et al. [15] and Thomsen et al. [31]. The general architecture of the quantum circuit is shown in Figure 3a and consists of three parts.

FIGURE 3: Architecture of the variational quantum circuits used in the QNN experiments.
(a) A fixed input ⃗x is encoded into the quantum state by applying UΦ(⃗x) to the reference state |0⟩ⁿ for n qubits: UΦ(⃗x)|0⟩ⁿ = |ψ(⃗x)⟩. The state is then evolved via the variational unitary W(θ) with trainable parameters θ: W(θ)|ψ(⃗x)⟩ = |φ(⃗x, θ)⟩. The resulting bit string z ∈ {0, 1}ⁿ is post-processed to an associated label y := f(z), which is the output.
(b) The variational model W(θ) can be repeated n times, which can thus be considered as layers of the QNN.

The first part is the feature map UΦ(⃗x), which is used to encode the input features of the used dataset into quantum states. The second part is the variational model W(θ), which evolves the quantum states of the system using trainable parameters θ. The final part consists of the measurement of the resulting states.

Figure 3b shows that the variational model can be repeated n times, which is similar to the layers of a classical neural network. The larger the quantum circuit, the better a function can be approximated, and hence the better the generalization of the machine learning algorithm should be. At the same time, current quantum hardware is limited in its size and stability: available systems do not allow for the creation of longer circuits with repeated variational models, and the length of a circuit should be minimized because noise affecting the quantum system during calculations leads to instabilities and influences the resulting measurements, which can falsify results. With this in mind, the circuits in this paper are limited to a single instance of the variational model.

The parameters θ of the variational model are optimized using classical optimizers leveraging classical hardware. Note that there is a fundamental difference between the QSVM we discussed previously and the QNN we describe here. In the case of the QSVM, only the kernel is implemented as a quantum circuit, while the SVM itself is implemented classically. As for the QNN, the whole neural network is implemented as a quantum circuit and only the optimization of the parameters is implemented classically.

We will now describe the implementation of the feature map and the variational model in more detail.

1) Feature Map

The main goal of the feature map is to encode the classical features of our dataset into the Hilbert space H in which the quantum system acts. We apply the circuit UΦ(⃗x) to the zero state |0⟩, which defines the feature map as described in Equation 8:

$$|\psi(\vec{x})\rangle := U_{\Phi(\vec{x})} |0\rangle \qquad (8)$$

where ⃗x is defined according to Equation 7, previously introduced in Section IV-B.

Among a multitude of embedding techniques [26], we have chosen angle encoding to encode the classical data into quantum states. Whilst angle encoding is not optimal in that it requires n qubits to represent n-dimensional data, it is efficient in terms of operations and directly useful for processing data in quantum neural networks [20]. Weigold et al. [33] state that only single-qubit rotations are needed for the state preparation routine, which is highly efficient and can be done in parallel for each qubit.

Figure 4 shows a circuit with one qubit per feature. For instance, qubit q0 represents feature x0, which is encoded with a rotation around the y-axis where the angle is proportional to the value of feature x0.
mented using a quantum circuit while the SVM itself is
implemented classically. As for the QNN, the whole neural 2) Variational Model
network is implemented using a quantum circuit and only the After the classical features are encoded as quantum states,
optimization of the parameters is implemented classically. these quantum states can be further evolved in the variational
We will now describe the implementation of the feature model according to Equation 9.
map and the variational model in more detail.
|ψ(⃗x, θ)⟩ := W (θ)UΦ(⃗x) |0⟩ (9)

FIGURE 4: Quantum circuit demonstrating angle encoding with RY-gates; qubit qᵢ applies the rotation RY(2xᵢ) for feature xᵢ.

The trainable weights are embedded in the variational model W(θ), which can be grouped into n layers, where each layer consists of RY-, RX- and RZ-rotation gates.

An example of a variational model with three parameters is shown in Figure 5. Note that first qubit q0 is rotated by angle 2θ1 around the y-axis. Next, qubit q0 and qubit q1 are entangled with a controlled RY-gate with the angle parameter 2θ2, followed by a rotation around the y-axis with the angle parameter 2θ3.

FIGURE 5: Variational model with three parameters.

3) Decision Function

Next, the resulting state of Equation 9 needs to be measured. Since we use our quantum circuit as a binary classifier, a bitstring z ∈ {0, 1}^q is calculated which is associated with a class membership via the following Boolean function:

$$f: \{0,1\}^q \to \{-1,+1\}, \quad z \mapsto \tilde{y} \qquad (10)$$

The classification is re-run multiple times, where R is the number of re-runs or shots. The resulting measurement outcome z is thus probabilistic, and we assign the label of the bitstring with the largest probability. Hence the probability of measuring either label y ∈ {−1, +1} is given by

$$p_y = \frac{1}{2}\left(1 + y \,\langle \psi(\vec{x})|\, W(\theta)^{\dagger} F\, W(\theta)\, |\psi(\vec{x})\rangle\right) \qquad (11)$$

where F is a diagonal operator

$$F = \sum_{z \in \{0,1\}^n} f(z)\, |z\rangle\langle z| \qquad (12)$$

Since F only has eigenvalues of −1 or +1, the expectation value is as follows:

$$\mathrm{QNN}_{\theta}(\vec{x}) = \langle \psi(\vec{x})|\, W(\theta)^{\dagger} F\, W(\theta)\, |\psi(\vec{x})\rangle \in [-1, +1] \qquad (13)$$
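The Boolean function f(z) is a design choice; a common option in the literature is the parity of the bitstring. Under that assumption, a minimal sketch of the classical post-processing that estimates the expectation in Equation 13 from measurement counts looks as follows:

```python
def f(bitstring: str) -> int:
    """One possible Boolean function f(z): even parity maps to +1, odd to -1.
    The concrete choice of f is an assumption for this sketch."""
    return 1 if bitstring.count("1") % 2 == 0 else -1

def estimate_expectation(counts: dict[str, int], shots: int) -> float:
    """Estimate QNN_theta(x) of Equation 13 from R = shots measurements."""
    return sum(f(z) * n for z, n in counts.items()) / shots

# Example with hypothetical counts from R = 1000 shots on two qubits:
counts = {"00": 420, "01": 180, "10": 150, "11": 250}
expectation = estimate_expectation(counts, shots=1000)  # value in [-1, +1]
predicted_label = +1 if expectation >= 0 else -1
```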

V. EXPERIMENTS

In our first set of experiments we evaluate the performance of classical kernel-based support vector machines and compare them against QSVMs. First, we execute the QSVMs on qasm_simulator, a Python-based quantum simulator of IBM Qiskit [3] accessed via BasicAer¹. Afterwards, we execute the experiments on ibmq_belem, a real quantum system providing 5 qubits [17], accessed via IBMQ².

In our second set of experiments we compare the accuracy of classical neural networks against quantum neural networks.

The major questions we address with these experiments are as follows:

• Which quantum circuit yields the best performance for a given dataset?
• Can we establish a clear strategy for designing quantum circuits?
• Does the quantum implementation of the algorithm have an advantage over its classical counterpart?

A. DATASETS

We will now describe the datasets that we used for our experiments. In particular, we used five datasets with varying degrees of difficulty, estimated from the order of the separating hyperplane in the original space. Each of these datasets has one hundred data points and two classes containing the same number of data points.

The chosen encoding strategy assumes one qubit per feature. Since our quantum computer provides a maximum of five qubits, we have reduced the number of features to a maximum of five where necessary. For training and testing we performed an 80:20 percent split. Moreover, we performed a 10-fold cross validation for all experiments and report the average results. An overview of the datasets is given in Table 1³.

Dataset | #Features | #Records | #Classes
Iris    | 4 | 100 | 2
Rain    | 5 | 100 | 2
Vlds    | 5 | 100 | 2
Custom  | 2 | 100 | 2
Adhoc   | 3 | 100 | 2

TABLE 1: Characteristics of the five datasets used for our experiments.

Iris dataset. This widely used flower dataset was loaded via the Python library scikit-learn⁴. For the experiment, the data points were selected from the Iris-Setosa and Iris-Virginica classes. A data point has four numerical features.

Rain dataset. This dataset is taken from kaggle.com⁵

¹https://qiskit.org/documentation/apidoc/providers_basicaer.html
²https://qiskit.org/documentation/apidoc/ibmq_provider.html
³Note that we currently use relatively small datasets consisting of 100 records. One of the reasons is that we use a publicly available quantum computer with access limitations, where users are put in a task queue. When a single user uses too many resources, access is blocked.
⁴https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html
⁵https://www.kaggle.com/jsphyg/weather-dataset-rattle-package

and contains about ten years of daily weather observations from many locations in Australia. The incomplete data entries were removed, and the following five features were selected: MinTemp, Humidity9am, WindSpeed3pm, Pressure9am, WindDir9am. The attribute RainTomorrow serves as the class label. Its categorical values No and Yes were mapped to the numbers 0 and 1, respectively.

Vlds dataset. This dataset was generated using a dataset generator provided by scikit-learn⁶. The characteristics of the features are shown in Figure 6.

FIGURE 6: From the pairwise bivariate distributions of the Vlds dataset, it is apparent that a higher order function is required to separate the classes, indicating the difficulty of the dataset.

Custom dataset. The dataset consists of data points with two features, generated using the function numpy.random.default_rng⁷. In a 2D representation, the data points form a square; the points on the diagonals are labelled as 0.0 (see Figure 7).

FIGURE 7: From the pairwise bivariate distributions of the Custom dataset, it is apparent that a higher order function is required to separate the classes, indicating the difficulty of the dataset.

Adhoc dataset. This dataset is artificially generated and described in [14]. It provides a complete classification of data points using the feature map configurations of the quantum kernel estimator approach chosen in [14]. Qiskit provides an Adhoc dataset generator⁸, allowing the generation of data points with three features. The characteristics of the features are shown in Figure 8.

FIGURE 8: The pairwise bivariate distribution of the Adhoc dataset indicates a higher order partition function, indicating the difficulty of the dataset.

B. CLASSICAL KERNEL-BASED SUPPORT VECTOR MACHINES

We first evaluate the accuracy of classical kernel-based SVMs provided by sklearn.svm.SVC with default hyperparameter configurations, using four different kernels, namely linear, polynomial (poly), radial basis function (rbf) and sigmoid. We have used these kernels of different complexity to be able to adapt to the datasets, which are also of different complexity.

The results are shown in Table 2. As we can see, for the Iris dataset, all kernels achieve a perfect accuracy score of 1.00. For the Rain dataset, the rbf kernel performs best with an accuracy of 0.77. For the Vlds dataset, all kernels behave similarly, with a slight advantage for the linear kernel. For the Custom dataset, again the rbf kernel performs best. Finally, for the Adhoc dataset, which is the most complex one, the linear kernel performs best with an accuracy score of 0.56, followed by sigmoid and rbf. In general, the rbf kernel appears to be the most robust one across all five datasets.

⁶https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_multilabel_classification.html?highlight=make_multilabel_classification
⁷https://numpy.org/doc/stable/reference/random/generator.html
⁸https://qiskit.org/documentation/stable/0.26/_modules/qiskit/ml/datasets/ad_hoc.html#ad_hoc_data
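A sketch of this evaluation protocol in scikit-learn, assuming the features X and labels y of one of the five datasets are already loaded:

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# X, y: features and labels of one of the five datasets (100 records each).
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel)  # default hyperparameter configuration
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{kernel}: mean accuracy = {scores.mean():.2f}")
```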

C. QUANTUM SUPPORT VECTOR MACHINES ON QUANTUM SIMULATOR

Let us now analyze the performance of QSVMs on a quantum simulator. Figure 9 shows the accuracy of QSVMs using three different feature maps and four different entanglement strategies (none, linear, circular, full). To analyze the effect of the circuit depth on the accuracy, we set the circuit depth to either 1, 2, 4 or 8.

We first consider the results for the Iris dataset. Figure 9a shows that with the Z-feature-map of depth 1 and 4, as well as with the ZZ-feature-map and the Pauli-feature-map of depth 1, an accuracy of 100% can be achieved. For all other feature maps with a depth above 2 we can observe a relatively high variance.


FIGURE 9: Accuracy of different quantum support vector machines using three different feature maps (quantum kernels) with four different entanglement strategies on a quantum simulator, for five different datasets (panels (a) to (e)).


Dataset | Kernel Function | Accuracy
Iris    | linear  | 1.00*
Iris    | poly    | 1.00*
Iris    | rbf     | 1.00*
Iris    | sigmoid | 1.00*
Rain    | linear  | 0.70
Rain    | poly    | 0.75
Rain    | rbf     | 0.77*
Rain    | sigmoid | 0.64
Vlds    | linear  | 0.89*
Vlds    | poly    | 0.87
Vlds    | rbf     | 0.86
Vlds    | sigmoid | 0.88
Custom  | linear  | 0.48
Custom  | poly    | 0.48
Custom  | rbf     | 0.64*
Custom  | sigmoid | 0.59
Adhoc   | linear  | 0.56*
Adhoc   | poly    | 0.50
Adhoc   | rbf     | 0.54
Adhoc   | sigmoid | 0.55

TABLE 2: Accuracy of classical support vector machines with four different kernel functions. The best result per dataset is marked with an asterisk.

This variance might be due to the characteristics of the different data samples produced by the 10-fold cross validation. We also notice that introducing entanglement harms the performance of the algorithm.

For the Rain dataset (Figure 9b) we can clearly see that the Z-feature-map of depth 1 outperforms all other feature maps. The same pattern is observed with the Vlds dataset.

For the Custom and the Adhoc datasets, which are considered to be the most complex datasets, we can again see that the Z-feature-map performs best. Moreover, it can be recognized that increasing the depth slightly improves the accuracy.

In summary, we can observe that the Z-feature-map performs best across all five datasets. We can also see that a greater depth of the feature map circuit can have a positive effect on the accuracy for more complex datasets. Finally, the more complex ZZ-feature-map and Pauli-feature-map have a negative impact on the accuracy. The latter also suggests that these algorithms cannot take advantage of entanglement. Hence, additional studies are required to understand these phenomena in more detail.
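For reference, the three feature maps and the entanglement strategies compared in this section correspond to classes in Qiskit's circuit library. The following sketch generates the evaluated configuration grid (the Pauli string set shown is an illustrative choice, not necessarily the one used in our runs):

```python
from qiskit.circuit.library import ZFeatureMap, ZZFeatureMap, PauliFeatureMap

feature_dimension = 4  # one qubit per feature, e.g. for the Iris dataset
kernels = {}

for reps in [1, 2, 4, 8]:  # circuit depth of the feature map
    # The Z-feature-map contains only single-qubit gates: no entanglement.
    kernels[("Z", reps, "none")] = ZFeatureMap(feature_dimension, reps=reps)
    for entanglement in ["linear", "circular", "full"]:
        kernels[("ZZ", reps, entanglement)] = ZZFeatureMap(
            feature_dimension, reps=reps, entanglement=entanglement)
        kernels[("Pauli", reps, entanglement)] = PauliFeatureMap(
            feature_dimension, reps=reps, entanglement=entanglement,
            paulis=["Z", "ZZ"])  # example Pauli set; a configuration choice
```

Each of these circuits can then be plugged into the quantum kernel sketch of Section IV-B.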

D. QUANTUM SUPPORT VECTOR MACHINES ON QUANTUM COMPUTER

Let us now evaluate the performance of quantum support vector machines on a real, publicly available quantum computer. The major question is whether the algorithms still perform well on a quantum device, or whether the failure rates of the underlying quantum computer render these types of quantum machine learning algorithms impractical.

Our experimental results on a real quantum computer showed similar results to the ones on a quantum simulator. These results are extremely promising since current quantum computers are still very error-prone, especially for circuits with a large number of quantum gates. Hence, being able to run quantum machine learning algorithms on real quantum computers that outperform their classical counterparts is a very promising step for the quantum space.

In Table 3 we compare the best results of the classical kernel-based SVMs with the QSVMs on the quantum simulator as well as on the real quantum computer. As we can see, the QSVMs have a higher average accuracy over all five datasets than their classical counterparts, outperforming them by 4% and 3%, respectively.

Dataset | Classical SVM | QSVM (Quantum Simulator) | QSVM (Quantum Computer)
Iris    | 1.00 | 1.00 | 1.00
Rain    | 0.77 | 0.77 | 0.70
Vlds    | 0.89 | 0.86 | 0.90
Custom  | 0.64 | 0.76 | 0.76
Adhoc   | 0.56 | 0.65 | 0.60
Average | 0.77 | 0.81 | 0.80

TABLE 3: Comparison of classical kernel-based support vector machines with quantum support vector machines on a quantum simulator as well as on a real quantum computer. The table shows the accuracy of the best approach for each of the three algorithm types.

E. QUANTUM NEURAL NETWORKS

We will now evaluate the performance of quantum neural networks. Recall that we use a hybrid approach where the neural network is implemented as a variational quantum circuit and the optimizer is implemented using classical hardware.

In our experiments we use the following five quantum circuits. To increase expressiveness in different ways [29], all circuits use at least one of, or a combination of, the RY, RX and RZ gates. The only circuit without entanglement is q_circuit_04. All other circuits use entanglement by including either a CX, CZ or a CRY gate in their variational model, inspired by [27].

1) q_circuit_01

The circuit in Figure 10 is built using circular entanglement: RY-gates followed by parameterized, entangling CRY-gates, consisting of 1 layer.

FIGURE 10: QNN circuit variant with 3 qubits and 1 layer used in our experiments. Referred to as q_circuit_01.

2) q_circuit_02

The circuit in Figure 11 is built using RY-gates followed by circularly entangled, parameterized CRY-gates, depicted with 1 layer.

FIGURE 11: QNN circuit with 4 qubits and 1 layer used in our experiments. Referred to as q_circuit_02.

3) q_circuit_03

The circuit in Figure 12 is built using circular entanglement with RY-gates followed by entangling CZ-gates, and is depicted with 1 layer.

FIGURE 12: QNN circuit with 4 qubits and 1 layer used in our experiments. The qubit or layer count may vary depending on the input feature count or layer count setting. Referred to as q_circuit_03.

4) q_circuit_04

The circuit in Figure 13 is built using RX-gates followed by RY-gates and final RZ-gates. This circuit is without entanglement.

FIGURE 13: QNN circuit with 4 qubits and 1 layer used in our experiments. Referred to as q_circuit_04.

5) q_circuit_05

The circuit in Figure 14 is built using circular entanglement: RX-gates followed by RY-gates and final RZ-gates, entangled by CX-gates, and is depicted with only 1 layer.

FIGURE 14: QNN circuit with 4 qubits and 1 layer used in our experiments. Referred to as q_circuit_05.
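To make these constructions concrete, the following sketch assembles a q_circuit_01-style variational circuit in Qiskit, with angle-encoded inputs, trainable RY rotations and circular parameterized CRY entanglement. This is a hand-rolled illustration, not the exact implementation used in our experiments:

```python
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def q_circuit_01(n_qubits: int) -> QuantumCircuit:
    x = ParameterVector("x", n_qubits)    # input features
    w = ParameterVector("w", n_qubits)    # trainable single-qubit weights
    cw = ParameterVector("cw", n_qubits)  # trainable entangling weights

    qc = QuantumCircuit(n_qubits)
    for i in range(n_qubits):
        qc.ry(x[i], i)   # angle encoding of feature x_i
    for i in range(n_qubits):
        qc.ry(w[i], i)   # trainable rotation
    for i in range(n_qubits):
        # circular entanglement: parameterized CRY from qubit i to i+1 (mod n)
        qc.cry(cw[i], i, (i + 1) % n_qubits)
    qc.measure_all()
    return qc

print(q_circuit_01(3).draw())
```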
6) Classical Optimizers

Inspired by the work of Pellow-Jarman et al. [24], we selected the same four optimizers: AMSGRAD, SPSA, BFGS and COBYLA. For all optimizers we tuned the iteration budget, ranging from 100 to 1500 iterations.
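Qiskit ships implementations of these optimizers; a sketch of how they can be instantiated (ADAM with the amsgrad flag serves as AMSGRAD; the module path varies across Qiskit versions):

```python
from qiskit.algorithms.optimizers import ADAM, SPSA, COBYLA, L_BFGS_B

# Each optimizer minimizes the classical loss over the circuit parameters θ;
# the iteration budget of 500 is a placeholder within the tuned 100-1500 range.
optimizers = {
    "AMSGRAD": ADAM(maxiter=500, amsgrad=True),
    "SPSA": SPSA(maxiter=500),
    "BFGS": L_BFGS_B(maxiter=500),
    "COBYLA": COBYLA(maxiter=500),
}
```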
7) Results of Neural Networks

For the classical approach we have implemented various fully-connected neural networks with 1, 2 and 3 hidden layers using PyTorch [23]. These neural networks were then passed to Ray Tune [21], which is used to facilitate the search for good hyperparameters when optimizing neural networks. The best validation accuracy was selected out of 10 runs with the best hyperparameters. The input data was normalized in the same way as for the quantum neural network, so as to eliminate normalization as a deciding factor.
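A sketch of such a classical baseline: a fully-connected PyTorch network with a configurable number of hidden layers. The layer widths below are illustrative placeholders; the actual widths and learning rates were chosen by Ray Tune:

```python
import torch.nn as nn

def make_mlp(n_features: int, hidden: list[int], n_classes: int = 2) -> nn.Sequential:
    """Fully-connected network with len(hidden) hidden layers."""
    layers, width = [], n_features
    for h in hidden:
        layers += [nn.Linear(width, h), nn.ReLU()]
        width = h
    layers.append(nn.Linear(width, n_classes))
    return nn.Sequential(*layers)

# Example: a 2-hidden-layer baseline for the 5-feature Vlds dataset.
model = make_mlp(n_features=5, hidden=[64, 32])
```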
As we can see in Table 4, the average accuracy of the best classical neural networks over all five datasets is 78%, which is similar to the performance of the classical SVM shown in Table 3.

Dataset | Classical NN | QNN (Quantum Simulator) | QNN (Quantum Computer)
Iris    | 1.00 | 1.00 | 1.00
Rain    | 0.70 | 0.83 | 0.79
Vlds    | 0.94 | 0.93 | 0.95
Custom  | 0.64 | 0.74 | 0.75
Adhoc   | 0.61 | 0.80 | 0.75
Average | 0.78 | 0.86 | 0.85

TABLE 4: Comparison of classical neural networks with quantum neural networks on a quantum simulator as well as on a real quantum computer. The table shows the accuracy of the best approach for each of the three algorithm types.

We will now evaluate the performance of the quantum neural networks. Figure 15 shows the accuracy of the quantum neural networks with different classical optimizers on the five datasets. Let us first analyze the performance on the Iris dataset. On the simulator we can see that, except for QNNs using the SPSA optimizer, all configurations achieve a perfect score of 100% accuracy. On the quantum computer we can see a similar behavior.

For the Rain dataset, the highest accuracy of 82% can be achieved with the AMSGRAD optimizer on the quantum simulator. On the quantum hardware, the SPSA optimizer shows a slight advantage, followed by AMSGRAD.

For the Vlds dataset, the BFGS optimizer shows the highest accuracy. For the Custom dataset and the Adhoc dataset the winner is COBYLA. In short, across all the different experiments there is no clear winning optimizer.

FIGURE 15: Accuracy comparison of 10 runs on a quantum simulator and a quantum computer over 5 datasets using 5 different
quantum circuits and 4 different optimizers.


When looking at the quantum circuits, it also turns out that there is no clear winner, and there is a relatively high variance between the different circuits. The reason might be the variation in the data samples and the relatively small number of data records.

Table 4 shows the performance of the best combination of quantum circuits and optimizers per dataset. On average, the accuracy over all five datasets is 85.8% on the quantum simulator and 84.7% on the quantum computer. The results of the QNN are 5% better than the results of the QSVM, which demonstrates the advantage of a fully quantum implementation over a hybrid quantum-classical implementation. Moreover, the quantum neural network executed on the quantum computer outperforms the classical neural network by 7%, even though the classical neural network is vastly more complex: in the case of the Vlds dataset, hyperparameter optimization resulted in a neural network with 69,402 parameters, whereas the biggest quantum neural network has 15 parameters.

VI. CONCLUSIONS

In this paper we performed a detailed experimental evaluation of quantum support vector machines and quantum neural networks. Our experimental evaluation showed that QSVMs outperform their classical counterparts on average by 3 to 4% in terms of accuracy. We could also show that the quantum neural networks further outperformed the QSVMs by up to 5%.

Even though our experiments were only performed on relatively small datasets, these results demonstrate that quantum computing can be successfully applied to small-scale machine learning problems in practice already today. Given the tremendous progress in the development of quantum hardware, we expect that larger problem sizes can also be tackled in the near future. Whilst the quantum approaches are only usable for problems of a limited size, they outperform classical solutions on the same problems whilst being, comparatively, less complex.

Our current experiments showed that the best quantum kernel is based on the Z-feature-map, which does not use quantum entanglement. One of the open research questions is how to design quantum circuits such that they can take advantage of entanglement and thus harness the full power of quantum computing. Another open research question is how the analyzed algorithms perform on larger datasets. Larger, less error-prone quantum hardware might give more insights.

REFERENCES
[1] Amazon Braket. https://aws.amazon.com/braket/, accessed: Jan. 2022.
[2] A. Abbas, D. Sutter, C. Zoufal, A. Lucchi, A. Figalli, and S. Woerner. The power of quantum neural networks. Nature Computational Science, 1(6):403–409, 2021.
[3] H. Abraham, AduOffei, R. Agarwal, I. Y. Akhalwaya, G. Aleksandrowicz, T. Alexander, and M. Amy. Qiskit: An open-source framework for quantum computing, 2019.
[4] S. Arunachalam and R. de Wolf. Guest column: A survey of quantum learning theory. ACM SIGACT News, 48(2):41–67, 2017.
[5] F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, F. G. Brandao, D. A. Buell, et al. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779):505–510, 2019.
[6] M. Benedetti, E. Lloyd, S. Sack, and M. Fiorentini. Parameterized quantum circuits as machine learning models. Quantum Science and Technology, 4(4):043001, 2019.
[7] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd. Quantum machine learning. Nature, 549(7671):195–202, 2017.
[8] S. Y.-C. Chen, S. Yoo, and Y.-L. L. Fang. Quantum long short-term memory. In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8622–8626. IEEE, 2022.
[9] E. A. Cherrat, I. Kerenidis, N. Mathur, J. Landman, M. Strahm, and Y. Y. Li. Quantum vision transformers, 2022.
[10] C. Ciliberto, M. Herbster, A. D. Ialongo, M. Pontil, A. Rocchetto, S. Severini, and L. Wossnig. Quantum machine learning: a classical perspective. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 474(2209):20170551, 2018.
[11] A. Cross. The IBM Q experience and Qiskit open-source quantum computing software. In APS March Meeting Abstracts, volume 2018, pages L58–003, 2018.
[12] Y. Du, M.-H. Hsieh, T. Liu, S. You, and D. Tao. Learnability of quantum neural networks. PRX Quantum, 2(4):040337, 2021.
[13] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
[14] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta. Supervised learning with quantum-enhanced feature spaces. Nature, 567(7747):209–212, 2019.
[15] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta. Supervised learning with quantum enhanced feature spaces. Nature, 567(7747):209–212, 2019.
[16] M. Henderson, S. Shakya, S. Pradhan, and T. Cook. Quanvolutional neural networks: powering image recognition with quantum circuits. Quantum Machine Intelligence, 2(1):1–9, 2020.
[17] IBM Quantum team. ibmq_melbourne v2.3.24, https://quantum-computing.ibm.com, 2022. Accessed January 2022.
[18] S. Jeswal and S. Chakraverty. Recent developments and applications in quantum neural network: a review. Archives of Computational Methods in Engineering, 26(4):793–807, 2019.
[19] M. I. Jordan and T. M. Mitchell. Machine learning: Trends, perspectives, and prospects. Science, 349(6245):255–260, 2015.
[20] F. Leymann and J. Barzen. The bitter truth about gate-based quantum algorithms in the NISQ era. Quantum Science and Technology, 5(4):044007, Oct. 2020.
[21] R. Liaw, E. Liang, R. Nishihara, P. Moritz, J. E. Gonzalez, and I. Stoica. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118, 2018.
[22] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven. Barren plateaus in quantum neural network training landscapes. Nature Communications, 9(1):1–6, 2018.
[23] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019.
[24] A. Pellow-Jarman, I. Sinayskiy, A. Pillay, and F. Petruccione. A comparison of various classical optimizers for a variational quantum linear solver. Quantum Information Processing, 20(6):202, 2021.
[25] P. Rebentrost, M. Mohseni, and S. Lloyd. Quantum support vector machine for big data classification. Physical Review Letters, 113(13):130503, 2014.
[26] M. Schuld. Supervised quantum machine learning models are kernel methods, 2021.
[27] M. Schuld, A. Bocharov, K. Svore, and N. Wiebe. Circuit-centric quantum classifiers. Physical Review A, 101(3):032308, Mar. 2020. arXiv:1804.00633 [quant-ph].
[28] M. Schuld, I. Sinayskiy, and F. Petruccione. The quest for a quantum neural network. Quantum Information Processing, 13(11):2567–2586, 2014.
[29] S. Sim, P. D. Johnson, and A. Aspuru-Guzik. Expressibility and entangling capability of parameterized quantum circuits for hybrid quantum-classical algorithms. Advanced Quantum Technologies, 2(12):1900070, 2019.


[30] D. So, Q. Le, and C. Liang. The evolved transformer. In International Conference on Machine Learning, pages 5877–5886. PMLR, 2019.
[31] A. Thomsen. Comparing quantum neural networks and quantum support vector machines. Master's thesis, ETH Zurich, 2021.
[32] D. Wecker, M. B. Hastings, and M. Troyer. Progress towards practical quantum variational algorithms. Physical Review A, 92(4):042303, 2015.
[33] M. Weigold, J. Barzen, F. Leymann, and M. Salm. Encoding patterns for quantum algorithms. IET Quantum Communication, 2(4):141–152, 2021.
[34] C. Zhao and X.-S. Gao. Analyzing the barren plateau phenomenon in training quantum neural networks with the ZX-calculus. Quantum, 5:466, 2021.
[35] H.-S. Zhong, H. Wang, Y.-H. Deng, M.-C. Chen, L.-C. Peng, Y.-H. Luo, J. Qin, D. Wu, X. Ding, Y. Hu, et al. Quantum computational advantage using photons. Science, 370(6523):1460–1463, 2020.
