Three Novel Quantum-Inspired Swarm Optimization Algorithms Using Different Bounded Potential Fields
Gradient-based and Hessian-based algorithms are widely used in the literature to solve optimization problems
in engineering. This is attributed to their computational efficiency, since they typically require little problem-specific
parameter tuning1. Gradient-based and Hessian-based algorithms employ an iterative process that involves
a multivariate scalar function packaging all the information of the partial derivatives of the objective function
(i.e., the gradient function or the Hessian matrix) to reach the solution. Some of the most common methods in this
category are: interior point, quasi-Newton, gradient descent, and conjugate gradient1. The main drawbacks of
such algorithms include a tendency to converge to local optima, difficulty in solving discrete optimization
problems, intricate implementation for some complex multivariable optimization problems, and susceptibility
to numerical noise2. To tackle these deficiencies, some authors suggest the use of metaheuristic
optimization techniques.
In the last two decades, the use of metaheuristic optimization techniques to solve complex, multimodal, high-dimensional
and nonlinear engineering problems has become very popular. This is attributed to their simplicity
of implementation, straightforward adaptability to solve optimization problems, and robust search capability to
reach effective global optima3. Even though there are different metaheuristic optimization techniques, these
can be classified into four main groups, as presented in Fig. 1. The first group is called evolutionary algorithms
(EA). The theory behind EAs is based on evolution in nature. In this field, the Genetic Algorithm (GA) emerges
as the most popular. GA was proposed by Holland in 19924. GA is inspired by the Darwinian evolution theory4,
in which a population (solution candidates) evolves (best solution) by crossover and mutation processes. In
this way, the population moves towards the global optimum in every iteration (next generation). An extensive
recent compilation of GA applications can be found in5. Other advanced EAs are Biogeography-Based Optimizer6,
Clonal Flower Pollination7, Fuzzy Harmony Search8, Mutated Differential Evolution9, Imperialist Competitive
(2017)10, and Deep Learning with Gradient-based optimization11.
1Faculty of Electrical and Computer Engineering, Escuela Superior Politécnica del Litoral, EC090112 Guayaquil,
Ecuador. 2Faculty of Natural Science and Mathematics, Escuela Superior Politécnica del Litoral,
EC090112 Guayaquil, Ecuador. 3Solar Energy Research Institute of Singapore (SERIS), National University of
Singapore (NUS), Singapore 117574, Singapore. *email: mansalva@espol.edu.ec
The second metaheuristic group corresponds to swarm intelligence (SI). These algorithms incorporate mathematical
models that describe the motion of a group of creatures in nature (swarms, schools, flocks, and herds)
based on their collective and social behavior. The most well-known SI algorithm presented in the literature is
the Particle Swarm Optimization (PSO), which was proposed by Kennedy and Eberhart in 199512. In general, SI
algorithms initialize with multiple particles located in random positions (solution candidates). The particles look
to enhance their position based on their own best positions obtained so far and on the best particle
of the swarm. The motion process is repeated (iterations) until most of the particles converge to the same position
(best solution). The theory behind SI has been exploited, resulting in novel innovative optimization techniques
(i.e. Dynamic Ant Colony Optimization (2017)13, Bacterial Foraging (2016)14, Fish School Search (2017)15, Moth
Firefly (2016)16, and Chaotic Grey Wolf17) for different engineering applications.
The third metaheuristic group is physics-based (PB). Their formulation involves any physical concept used
to describe the behaviour of matter through space and time. PB algorithms can be classified into two main
classes: classical and modern. The term ‘classical’ refers to the optimization techniques that employ classical
physics in their formulations to reach the global optimum. This branch includes the Greedy Electromagnetism-like18,
Improved Central Force19, Multimodal Gravitational Search20, Exponential Big Bang-Big Crunch (BB-BC)21,
and Improved Magnetic Charged System Search22 algorithms. On the other hand, the term ‘modern’ refers to
the algorithms that employ quantum physics to determine the global optimum. Some of the recent algorithms
that fit in this branch are Neural Network Quantum States23, Adiabatic Quantum24, and Quantum Annealing25
optimizations. The PB algorithms’ optimization process starts with a random initialization of the matter’s position
(solution candidates). Then, depending on the physical interaction (i.e. kinematic, dynamic, thermodynamic,
hydrodynamic, momentum, energy, electromagnetism, quantum mechanics, etc.) defined in the search space, the
particles improve their position (best solution) and the process is repeated until certain physical rules are satisfied.
The last metaheuristic group is the Hybrid algorithms (HA). This group combines the characteristics of the previous
metaheuristic groups to bring new optimization techniques. These algorithms can be classified into four
main groups: EA-SI (i.e. Evolutionary Firefly26), EA-PB (i.e. Harmony Simulated Annealing Search27), SI-PB
(i.e. Big Bang-Big Crunch Swarm Optimization28) and EA-SI-PB (i.e. Electromagnetism-like Mechanism with
Collective Animal Behavior search29). In particular, the scope of this research lies in the SI-PB group, since it
combines quantum physics concepts and swarm particle behavior.
The recent literature presents new approaches that use the concepts of quantum mechanics and swarm intelligence
for different applications. For instance, ref.30 exhibits a Quantum-inspired Glow-worm Swarm Optimisation
(QGSO) to minimize the maximum relative sidelobe level of an array (a discrete optimization
problem). The algorithm employs the concept of quantum bits combined with the mathematical behaviour
of a social glow-worm swarm to determine the best solution in terms of the position of the best quantum glow-worm.
The authors in31 propose a novel Accelerated Quantum Particle Swarm Optimization (AQPSO) that uses
the concept of quantum mechanics to derive an expression that deals with the position of a quantum particle
trapped in a delta potential well. In order to accelerate the convergence process, the inclusion of an odd number
of observers greater than unity is incorporated into the model. The AQPSO shows high performance in
different power system applications, such as the optimal placement of static var compensators for
maximum system reliability31, the maximization of savings due to electrical power loss reduction32, and the optimal
maintenance schedule to minimize the operational risk of static synchronous generators33 and power generators34.
In35–37, a quantum ant colony algorithm is used to solve path optimization problems. In this algorithm, every ant
carries a group of quantum bits to represent its own position. The ants move according to quantum
rotation gates, which lead to an improvement in their position. As presented, there is plenty of evidence that
demonstrates the computational robustness of quantum SI-PB algorithms. Most of them are driven
by quantum bits30,35–37 and quantum potential wells31,38,39. Nevertheless, to the best of our knowledge, there is no
quantum SI-PB algorithm in the literature mimicking a quantum particle swarm bounded by Lorentz, Rosen–Morse, and
Coulomb-like Square Root potential fields. This fact encourages the attempt to propose three novel algorithms
and investigate their abilities in solving benchmark optimization problems.
The motivation of this research lies in the No Free Lunch (NFL) theorem, which states that “any two algorithms
are equivalent when their performance is averaged across all possible problems”40. This statement
implies that there is no single best algorithm able to solve every optimization problem. Some algorithms may show
effective performance for a set of problems; however, the same algorithms can prove inefficient for a different set of
problems. Therefore, the NFL theorem opens a pathway to the improvement of existing approaches. Given the foregoing, this
paper proposes three novel hybrid metaheuristic optimization techniques inspired by the movement behaviour
of a quantum particle swarm bounded in three different potential fields: Lorentz, Rosen–Morse, and Coulomb-like
Square Root. These potential fields are considered due to the simplicity of their analytical solutions to
the Schrödinger equation, which are widely studied in the Physics literature41–43. Moreover, these potential
fields offer certain features that make it possible to predict the qualitative behaviour of the proposed algorithms in terms
of exploitation and exploration. The basis for this statement lies in the probability density function (solution to
the Schrödinger equation), which presents a two-regime behaviour. The first regime is in between the limits of the
quantum well, while the second regime is related to the asymptotic trend as (z → ±∞), as presented in Fig. 2.
The local amplitude of the probability density function (local “height” or probability) represents the strength of the
potential in a region of space. In this sense, the behaviour of the probability density function in between the limits
of the quantum well illustrates the probability of the particle being near the local attractor, which is associated with
the exploration capabilities of the search algorithm44. It is important to highlight that exploration is defined
as the ability to examine promising area(s) as broadly as possible. By defining a “promising area” as the region
near the local attractor, more points are expected to be searched in the promising area if more probability
weight is given locally. This means that a higher amplitude (near zero) of the probability density function leads to
a more thorough and broad search in that region, giving rise to more exploration. Hence, the probability density
function that is expected to produce the most exploration in the search algorithm is the Rosen–Morse probability
density function, followed by the Lorentz and the Coulomb-like Square Root probability density functions. The
behaviour of the probability density function as (z → ±∞) is associated with the global search capabilities of the
algorithm, known in this manuscript as exploitation. Therefore, a slowly decaying probability density function
(weak potential as (z → ±∞)) is expected to exhibit high exploitation, i.e. values further from the local
attractor will be searched with non-negligible probability44. In this sense, the Lorentz probability density
function is expected to present the best exploitation, followed by the Rosen–Morse and the Coulomb-like Square
Root probability density functions.
To verify the computational robustness of the proposed approach, several benchmark functions are solved.
Then, the results are compared with the ones obtained by particle swarm optimization, genetic algorithm, and
firefly algorithms. The rest of the paper is organized as follows: “Quantum Particle Swarm Optimization general
formulation” section presents the quantum concepts that describe the scenario of the particle swarm. “Time
independent Schrödinger equation and particle position” section describes the nature of a quantum particle in
a bounded potential field. “Quantum-inspired optimization algorithms” section exhibits the proposed quantum-inspired
optimization algorithms. “Case study” section describes the case study used to test the efficacy of the
proposed algorithms. In the “Results” section, the results are analysed and discussed. Finally, the “Discussion and
conclusion” section incorporates the conclusions.
Methodology
Quantum particle swarm optimization general formulation. Quantum particle swarm optimization
(QPSO) is an advanced heuristic optimization technique that employs the concept of quantum particle
motion to reach the optimal solution. QPSO follows the process described in Fig. 3. The process starts by defining
the initial population of particles SS and the total number of iterations MaxIt. The position of the particle (x)
represents a solution candidate to the optimization problem; thus, it can be used to evaluate the objective function.
The next step is to identify the positions called ‘personal best’ and ‘global best’. In this step, it is relevant to
consider two specific attributes of the particle, which are related to memory and communication. The memory
attribute refers to the ability to save the best position of the particle by comparing its actual position with the
position after the motion. For instance, Fig. 4a shows two scenarios of particle motion. In scenario 1, the particle
has the possibility to move closer to the optimum position; therefore, it proceeds to move and saves this position
as its best position. In scenario 2, the particle has the possibility to move away from the optimum position;
therefore, it does not move and saves its actual position as its best position. The memory attribute is known as
‘personal best’ and is denoted by q45,46. The communication attribute refers to the ability to save the particle with
the best position among the swarm. Figure 4b shows a swarm with three particles, resulting in ‘particle 3’ as the
best particle since it is the one nearest to the optimum position. The communication attribute is known as ‘global
best’ and is denoted by g45,46.
The process continues with the update of the particles’ positions. For this purpose, the general mathematical
topology of swarm intelligence optimization is employed, as follows47:

$$x_\ell^{(k+1)} = z\!\left(x_\ell^{(k)}, V\right) + f_1(w_1, u_1)\left(q_\ell^{(k)} - x_\ell^{(k)}\right) + f_2(w_2, u_2)\left(g^{(k)} - x_\ell^{(k)}\right), \qquad (1)$$
Figure 4. (a) Memory attribute of the particle. (b) Communication attribute of the particle.
where $z$ represents the displacement of a particle and is a function that depends on $x_\ell^{(k)}$ and $V$, which represent the
actual position $x$ of the particle $\ell$ at iteration $k$ and the physical phenomenon that drives the movement of the particle,
respectively. The functions $f_1$ and $f_2$ correspond to the nature of the swarm intelligence, in which $f_1$ drives
the new position of the particle towards the local optimum particle position $q$, while $f_2$ associates the new position
of the particle with the global optimum particle position $g$. To avoid traps (‘local optima’) that may appear in the
objective function, the authors of the first swarm intelligence12 introduced random numbers $u$ and acceleration
coefficients $w$ such that $0 \le w \le 2$. The suffixes ‘1’ and ‘2’ are used to refer to the local and global position, respectively.
A simplification of the general formulation of swarm intelligence optimization is proposed by the authors
in47. The model is based on a trajectory analysis in which the best position of the particle is localized in the region
of the search space that lies in between the best local and global positions. The authors in31 refer to this term as the
local attractor $D_\ell^{(k)}$, which is used to guide the particle towards a better position. Such a model has the form47:
$$x_\ell^{(k+1)} = z\!\left(x_\ell^{(k)}, V\right) + D_\ell^{(k)}, \qquad D_\ell^{(k)} = \frac{w_1 u_1}{w_1 u_1 + w_2 u_2}\, q_\ell^{(k)} + \left(1 - \frac{w_1 u_1}{w_1 u_1 + w_2 u_2}\right) g^{(k)}. \qquad (2)$$
Reference48 presents that, for the scenario of a quantum particle trapped in a bounded field, the phenomenon
that drives the movement of the particle is given by the relative width $a$ and a function of the
potential well being used, such that48

$$z\!\left(x_\ell^{(k)}, V\right) = a\, f(V), \qquad a = \frac{1}{SS}\sum_{\ell=1}^{SS} \left| x_\ell^{(k)} - q_\ell^{(k)} \right|, \qquad (3)$$

where $SS$ is the total number of particles. This last formulation is employed to estimate the new position of the
particle. The function $f(V)$ is analyzed in the following sections of the manuscript.
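As an illustration of Eqs. (2) and (3), a minimal NumPy sketch of the local attractor and relative width computations is given below. It is only a reading of the formulas above, not the authors' implementation; the array names and the choice $w_1 = w_2 = 1$ are illustrative assumptions.

```python
import numpy as np

def local_attractor_and_width(positions, personal_best, global_best,
                              w1=1.0, w2=1.0, rng=None):
    """Sketch of Eqs. (2)-(3): local attractor D and relative width a.

    positions, personal_best: arrays of shape (SS, dim); global_best: shape (dim,).
    w1, w2 are the acceleration coefficients of Eq. (1); u1, u2 are uniform
    random numbers. All names and default values here are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    SS, dim = positions.shape
    u1, u2 = rng.random((SS, dim)), rng.random((SS, dim))
    phi = (w1 * u1) / (w1 * u1 + w2 * u2)                    # mixing weight of Eq. (2)
    D = phi * personal_best + (1.0 - phi) * global_best      # local attractor D_l
    a = np.mean(np.abs(positions - personal_best), axis=0)   # relative width, Eq. (3)
    return D, a
```

Once a potential field is chosen, the displacement $z(x, V) = a f(V)$ of Eq. (3) is added to the local attractor to obtain the new position, with $f(V)$ derived in the following sections.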
The last step is to verify the termination criterion using the total number of iterations MaxIt and the convergence
tolerance $\xi$. The process finishes if one of the conditions given in Eq. (4) is satisfied49.

$$\text{Convergence criteria:} \quad k = MaxIt \quad \text{or} \quad \left| \sum_{\ell=1}^{SS} q_\ell^{(k)} - SS\, g^{(k)} \right| \le \xi. \qquad (4)$$
Time independent Schrödinger equation and particle position. The QPSO algorithm can be described
through the quantum behaviour of the motion of particles. In particular, the case of a particle moving in a bounded
potential field is considered of great interest. In this context, the particles’ states are described by the probability
density function $|\psi(\vec r, t)|^2$, in the search space $\vec r$, at time $t$. The probability density function must satisfy49
$$\int_{-\infty}^{+\infty} |\psi(\vec r, t)|^2 \, d\vec r = 1. \qquad (5)$$
In order to determine the position of the particle, a measurement is required. For this purpose, a random
number generation $u = \mathrm{rand}(0, 1)$ is performed. In this sense, the wave function squared must be
normalized with respect to its maximum value, such that

$$\frac{|\psi(\vec r, t)|^2}{\max |\psi(\vec r, t)|^2} = u. \qquad (6)$$
On the other hand, the time evolution of the wave function $\psi(\vec r, t)$ of a quantum system is generally described
by the Schrödinger equation49–51:

$$\hat{H}\psi(\vec r, t) = i\hbar \frac{\partial}{\partial t}\psi(\vec r, t). \qquad (7)$$

The formulation presented in Eq. (7) is also called the time-dependent Schrödinger equation. In this equation,
$\hat{H}$ is the Hamiltonian operator and $\hbar$ is the reduced Planck constant. Nevertheless, for the purpose of this
research, a slowly varying (adiabatic) process is considered, in which the eigenstate at $E = 0$ of the particle changes
accordingly with the evolution of the potential $V$. Then, the one-dimensional stationary Schrödinger equation
can be used and written as follows50,51:

$$\frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2}\psi(z) - V(z)\psi(z) = 0. \qquad (8)$$
The last formulation is relevant for the derivation of the term f(V) required in Eq. (3).
Quantum-inspired optimization algorithms. The following analysis considers wave functions that
belong to the Hilbert space ($\psi(z) \in \mathcal{H}$); that is, the wave function squared $|\psi(z)|^2$ must be normalized with the
boundary condition established in Eq. (9)50,51.

$$\lim_{z \to \pm\infty} \psi(z) = 0. \qquad (9)$$
The proposed quantum-inspired optimization algorithms follow the methodology described in Fig. 3. Within this
methodology, it is required to determine the term $f(V)$ to complete the formulation presented in Eq. (3). In
the following, $f(V)$ is derived for each of the proposed potential fields, which are shown in Fig. 5.
Lorentz potential field (QPSO-LR). The following algorithm is inspired by the localization of electrons in bond
formation. This occurs when there is an interaction between electron-donor and electron-acceptor atoms, as
presented in Fig. 5a. The potential field that describes this phenomenon is given in Eq. (10)52.

$$V(z) = \frac{\hbar^2 (2z^2 - a^2)}{2m(z^2 + a^2)^2}. \qquad (10)$$
The displacement of the particle under the Lorentz potential field is obtained by replacing Eq. (10) in Eq.
(8), resulting in Eq. (11).
$$\frac{\hbar^2}{2m}\frac{d^2\psi(z)}{dz^2} - \frac{\hbar^2(2z^2 - a^2)}{2m(z^2 + a^2)^2}\,\psi(z) = 0. \qquad (11)$$
The solution for the Second Order Linear Ordinary Differential Equation (SLODE) presented in Eq. (11) has
the form given in Eq. (12).
$$\psi(z) = C_1 \frac{1}{\sqrt{a^2 + z^2}} + C_2 \frac{3a^2 z + z^3}{3\sqrt{a^2 + z^2}}. \qquad (12)$$
By applying the boundary condition given in Eq. (9), the result is:
$$C_1 = \sqrt{\frac{a}{\pi}}; \qquad C_2 = 0. \qquad (13)$$
Then,
$$\psi(z) = \sqrt{\frac{a}{\pi(a^2 + z^2)}}. \qquad (14)$$
By replacing Eq. (14) in Eq. (6) and solving for z, the result is as follows:
Figure 5. (a) Lorentz potential field to model donor–acceptor interaction. (b) Rosen–Morse potential field to
model molecular vibrations. (c) Coulomb-like square root potential field to model the electron confinement in
graphene.
$$z = a\sqrt{\frac{1-u}{u}}. \qquad (15)$$
By replacing Eq. (15) in Eq. (3), the conclusion is:
$$f(V) = \sqrt{\frac{1-u}{u}}. \qquad (16)$$
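The closed form of Eq. (15) can be checked numerically: drawing u uniformly and applying Eq. (15) produces displacements whose normalized squared wave function (Eq. (14) inserted into Eq. (6)) recovers the drawn u. The short sanity check below assumes this reading of the measurement step; the value of a is arbitrary.

```python
import numpy as np

a = 1.5                                  # relative width (illustrative value)
rng = np.random.default_rng(0)
u = rng.uniform(1e-12, 1.0, 100_000)     # measurement numbers, u = rand(0, 1)
z = a * np.sqrt((1.0 - u) / u)           # Eq. (15): displacement under the Lorentz field

# |psi(z)|^2 / max|psi|^2 for the wave function of Eq. (14) equals a^2 / (a^2 + z^2),
# which should reproduce the drawn u as required by Eq. (6).
recovered_u = a**2 / (a**2 + z**2)
print(np.allclose(recovered_u, u))       # True
```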
Rosen–Morse potential field (QPSO-RM). The Rosen–Morse potential field can be used to model the vibration
energy spectra produced by the interaction of atoms in a diatomic molecule53,54, as presented in Fig. 5b. A
generalized form of the Rosen–Morse potential field is given by Eq. (17).

$$V(z) = \frac{\hbar^2}{2m a^2}\left(\tanh^2(z/a) - \operatorname{sech}^2(z/a)\right). \qquad (17)$$
The displacement of the particle under the Rosen–Morse potential is obtained by replacing Eq. (17) in Eq.
(8), resulting in Eq. (18).
$$\frac{\hbar^2}{2m}\frac{d^2\psi(z)}{dz^2} - \frac{\hbar^2}{2m a^2}\left(\tanh^2(z/a) - \operatorname{sech}^2(z/a)\right)\psi(z) = 0. \qquad (18)$$
Using Eq. (9) to normalize the solution to Eq. (18) and solving the SLODE, the result is given by (19).
$$\psi(z) = \frac{1}{\sqrt{2a}}\operatorname{sech}(z/a). \qquad (19)$$
Plugging in Eq. (19) back into Eq. (6) and solving for z, the result is as follows:
$$z = a\operatorname{sech}^{-1}(\sqrt{u}). \qquad (20)$$
Substituting Eq. (20) in Eq. (3), the conclusion is
$$f(V) = \operatorname{sech}^{-1}(\sqrt{u}). \qquad (21)$$
Coulomb-like square root field (QPSO-CS). A potential that contains an inverse square root term and a linear
symmetric term is evaluated. The Coulomb-like square root potential field is commonly employed to model
the electron confinement in graphene55, as presented in Fig. 5c. Mathematically, the Coulomb-like square root
potential field can be written as given in Eq. (22).

$$V(z) = \frac{\hbar^2}{2m}\left(-\frac{0.4}{a^{3/2}}|z|^{-0.5} + \frac{0.6}{a^{3}}|z|\right). \qquad (22)$$
Similarly, the displacement of the particle under the Coulomb-like Square Root potential is obtained by
replacing Eq. (22) in Eq. (8), which leads to Eq. (23):
$$\frac{\hbar^2}{2m}\frac{d^2\psi(z)}{dz^2} - \frac{\hbar^2}{2m}\left(-\frac{0.4}{a^{3/2}}|z|^{-0.5} + \frac{0.6}{a^{3}}|z|\right)\psi(z) = 0. \qquad (23)$$
Using Eq. (9) to normalize the solution to Eq. (23) and solving the SLODE, the result is given by Eq. (24).
$$\psi(z) = \frac{1}{\sqrt{1.69\,a}}\, e^{-\left(|z|/a\right)^{3/2}}. \qquad (24)$$
Plugging in Eq. (24) back into Eq. (6) and solving for z, the result is as follows:
$$z = a\left(\ln\frac{1}{u}\right)^{2/3}. \qquad (25)$$
Substituting Eq. (25) in Eq. (3), the conclusion is:
$$f(V) = \left(\ln\frac{1}{u}\right)^{2/3}. \qquad (26)$$
The implementation of the proposed optimization techniques is presented in Algorithm 1. Notice that the
main difference among the proposed algorithms lies in step 11, in which the particle updates its position. This
is because the particle follows a trajectory that mainly depends on the bounded potential field defined in f(V).
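Algorithm 1 itself is not reproduced here, but the description above (random initialization, personal/global best bookkeeping, and the position update built from Eqs. (2), (3) and the potential-specific f(V) of Eqs. (16), (21) and (26)) admits a compact sketch. The code below is such a sketch and not the authors' reference implementation; in particular, the random sign applied to the displacement, the equal acceleration coefficients, and the clipping to the search bounds are assumptions made to obtain a runnable routine.

```python
import numpy as np

F_V = {
    "LR": lambda u: np.sqrt((1.0 - u) / u),         # Eq. (16), Lorentz
    "RM": lambda u: np.arccosh(1.0 / np.sqrt(u)),   # Eq. (21), sech^{-1}(sqrt(u))
    "CS": lambda u: np.log(1.0 / u) ** (2.0 / 3.0), # Eq. (26), Coulomb-like square root
}

def qpso(objective, dim, bounds, variant="LR", SS=50, MaxIt=1000, seed=None):
    """Sketch of the proposed QPSO variants (QPSO-LR / QPSO-RM / QPSO-CS)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(SS, dim))               # random initial swarm
    q = x.copy()                                          # personal best positions
    fq = np.apply_along_axis(objective, 1, q)             # personal best values
    g = q[np.argmin(fq)].copy()                           # global best position
    for _ in range(MaxIt):
        a = np.mean(np.abs(x - q), axis=0)                # relative width, Eq. (3)
        u1, u2 = rng.random((SS, dim)), rng.random((SS, dim))
        phi = u1 / (u1 + u2)                              # mixing weight of Eq. (2), w1 = w2 assumed
        D = phi * q + (1.0 - phi) * g                     # local attractor
        u = rng.uniform(1e-12, 1.0, size=(SS, dim))       # measurement numbers of Eq. (6)
        sign = rng.choice([-1.0, 1.0], size=(SS, dim))    # assumed random direction of displacement
        x = np.clip(D + sign * a * F_V[variant](u), lo, hi)  # position update, Eqs. (2)-(3)
        fx = np.apply_along_axis(objective, 1, x)
        better = fx < fq
        q[better], fq[better] = x[better], fx[better]     # memory attribute (personal best)
        g = q[np.argmin(fq)].copy()                       # communication attribute (global best)
    return g, fq.min()
```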
Case study. To show the performance of the proposed algorithms (QPSO-LR, QPSO-RM and QPSO-CS),
24 benchmark functions are used56. The benchmark functions are categorized as unimodal, multimodal and
fixed-dimension multimodal, and are mathematically described in Tables 7, 8 and 9, respectively. Unimodal functions
are used to analyze the behaviour of the algorithms when there is a single minimum value in a certain interval.
In contrast, multimodal functions are utilized to analyze the algorithms in the presence of several local minima
throughout the search space. For the unimodal and multimodal functions, the simulation uses a dimension
(total number of variables) of 30, while for the fixed-dimension multimodal functions the dimension is as shown in
Table 9. Concerning the population size and total number of iterations, these are 50 and 1000, respectively. To
corroborate the significance of the results, a total of 30 experiments (simulations) are conducted. All the algorithms
are tested in MATLAB R2020a and the numerical experiments are set up on an Intel Core (TM) i7-6500 Processor,
2.50 GHz, 8 GB RAM.
Table 1. Accuracy and precision metrics for unimodal benchmark functions with N = 30, Tmax = 1000 and
Texp = 30.
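Under the sketch given earlier, the experimental setting of this section (30-dimensional functions, a population of 50, 1000 iterations and 30 independent runs) would translate roughly as follows; the sphere function is used only as a stand-in for the unimodal benchmarks of Table 7, and the search range is an assumption.

```python
import numpy as np

sphere = lambda x: float(np.sum(x**2))    # typical unimodal benchmark (f1-like)

best_values = [qpso(sphere, dim=30, bounds=(-100.0, 100.0), variant="LR",
                    SS=50, MaxIt=1000, seed=run)[1] for run in range(30)]
print(np.mean(best_values), np.std(best_values))   # statistics reused for Eqs. (27)-(28)
```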
Results
The performance of QPSO-LR, QPSO-RM and QPSO-CS is measured in terms of exploitation (accuracy and
precision), exploration (search speed and acceleration) and simulation time. In addition, to explore the advantages
of the proposed algorithms, the same optimization problems are solved using particle swarm optimization
(PSO), the genetic algorithm (GA), and firefly optimization (FFO). The results are shown as follows.
Exploitation: accuracy and precision. The exploitation refers to the local search capability around the
promising regions. This can be quantified based on two statistical metrics: accuracy (δ) and precision (φ). The
term accuracy is defined as the absolute value of the difference between the average value and the true value
(reference value) of the quantity being measured, that is, the closeness of the measurements to the true value. On
the other hand, the term precision indicates the closeness of the measurements to each other. The introduced
terms can be mathematically obtained using the true value ($x_{opt}$), mean ($\bar{x}$), and standard deviation ($\sigma$) of a set of data,
as given in Eq. (27) and Eq. (28), respectively57.

$$\delta = \left| x_{opt} - \bar{x} \right|, \qquad (27)$$

$$\varphi = \left| \sigma / \bar{x} \right|. \qquad (28)$$
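A small helper corresponding to Eqs. (27) and (28) is sketched below; `results` stands for the best objective values of the independent runs and `x_opt` for the known optimum of the benchmark (both names are illustrative).

```python
import numpy as np

def accuracy_precision(results, x_opt):
    """Eq. (27): accuracy delta; Eq. (28): precision phi (lower is better in both cases)."""
    results = np.asarray(results, dtype=float)
    x_bar = results.mean()
    delta = abs(x_opt - x_bar)         # closeness of the mean result to the true optimum
    phi = abs(results.std() / x_bar)   # spread of the runs relative to their mean
    return delta, phi
```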
Using the data obtained from the experiments performed with each optimization technique, the mean and
standard deviation can be obtained. Then, by employing Eqs. (27) and (28), the accuracy and the precision
of each optimization technique are obtained. Tables 1, 2 and 3 show the statistical metrics for the unimodal,
multimodal and fixed-multimodal benchmark functions, respectively (best values are highlighted in blue). The results
reveal that for each algorithm there is a better match in terms of accuracy and precision depending on the
function. Focusing on the unimodal benchmark functions (f1 − f8), QPSO-LR presents the best accuracy for
f1, f2, f4, f6, f7, followed by GA in functions f1, f2, f6, f8 and PSO for the functions f4 and f7. Likewise, if only
precision is considered, there is considerable variation in the algorithms with respect to the functions, e.g., GA
is the most precise for f1, f2, f7, while for f4 and f6 the QPSO-RM and QPSO-LR result to be the most precise,
respectively.
Analyzing the algorithms for the multimodal benchmark functions (f9 − f15), there is considerable variation
in accuracy and precision. However, by considering these features separately, the most accurate, but not necessarily
the most precise, algorithms in descending order are: QPSO-LR, GA, PSO, FFO, QPSO-CS, QPSO-RM.
In contrast, the algorithms that are most precise, but not necessarily the most accurate, in descending order are:
QPSO-RM, FFO, GA, QPSO-LR, QPSO-CS, and PSO.
For the fixed multimodal functions (f16 − f24), the results show that QPSO-LR and QPSO-RM deliver excellent
results for functions f16 and f19 in terms of accuracy and precision, while PSO responds better to f17. For the
rest of the functions, all algorithms present acceptable accuracy and precision.
Table 2. Accuracy and precision metrics for multimodal benchmark functions with N = 30, Tmax = 1000 and
Texp = 30.
Table 3. Accuracy and precision metrics for fixed-multimodal benchmark functions with Tmax = 1000 and
Texp = 30.
Exploration: speed and acceleration. The exploration is defined as the ability to examine the promising
area(s) of the search space as broadly as possible56. The exploration is closely related to the convergence
behaviour, which is shown in Figs. 6, 7 and 8. These graphs represent the evolution of the best solution through
every iteration performed. It can be appreciated that, depending on the type of function on which the algorithms
are applied, certain patterns appear. For instance, functions f1, f2, f3, f4, f5 (independently of the employed
algorithm) present a linear convergence behaviour, while for the rest of the functions the behaviour is exponential.
To quantify the exploration, the average search speed and acceleration of each algorithm are calculated using
the Allan variances58. The first Allan variance index measures the search speed, i.e. the distance variation of the best
search agent C in every iteration k, which is mathematically described in Eq. (29). The second Allan variance
index measures the search acceleration, i.e. the search speed variation, which is mathematically described in
Eq. (30).

$$\omega = \frac{\sum_{k=1}^{k_{max}} \left| C(k+1) - C(k) \right|}{\Delta k}, \qquad (29)$$

$$\alpha = \frac{\sum_{k=1}^{k_{max}} \left| \omega(k+1) - \omega(k) \right|}{\Delta k}. \qquad (30)$$
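Both indices can be computed directly from the convergence curve (the best objective value recorded at every iteration). A minimal sketch, assuming Δk = 1 between recorded iterations, is:

```python
import numpy as np

def search_speed_and_acceleration(curve, dk=1.0):
    """curve: best value C(k) per iteration; returns (omega, alpha) of Eqs. (29)-(30)."""
    C = np.asarray(curve, dtype=float)
    step = np.abs(np.diff(C))                        # |C(k+1) - C(k)|
    omega = step.sum() / dk                          # Eq. (29): average search speed
    alpha = np.abs(np.diff(step)).sum() / dk         # Eq. (30): search acceleration
    return omega, alpha
```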
Tables 4, 5 and 6 show the search speed and acceleration for each optimization technique grouped by unimodal,
multimodal, and fixed multimodal functions, respectively (best values are highlighted in blue). For all
types of functions, the GA algorithm exhibits the highest average search speed and acceleration. However,
the speed and acceleration attributes do not assure good precision, accuracy, or simulation time. Therefore, a more
thorough analysis is required to develop a proper comparison between the optimization methods, as presented in
the next sections.
Simulation time. Another important aspect to evaluate is the simulation time performance. Figure 9
shows the average execution time for each optimization algorithm. The results reveal that FFO and GA present
higher simulation times (approximately 34 to 48 times longer) compared with the rest of the optimization
techniques. Therefore, QPSO-LR, QPSO-RM, QPSO-CS and PSO present better performance in terms of
simulation time than FFO and GA.
Overall performance. The performance in terms of accuracy, precision, search speed, search acceleration,
and simulation time of each optimization technique is quantitatively defined by the grade rules presented in (31)
to (35), respectively. These rules are developed to facilitate the comparison between optimization techniques
under the following criterion: ‘+3’ excellent performance, ‘+2’ good performance, ‘+1’ fair performance, ‘+0’
low performance. Once the values are assigned, the average of the functions by group (unimodal, multimodal,
fixed multimodal) is taken. Then, the values are normalized based on the highest average.

$$rule_{\delta} = \begin{cases} +3 & \text{if } \delta < 1\times 10^{-6} \\ +2 & \text{if } 1\times 10^{-6} \le \delta < 1\times 10^{-3} \\ +1 & \text{if } 1\times 10^{-3} \le \delta < 1 \\ 0 & \text{if } \delta \ge 1 \end{cases}, \qquad (31)$$
$$rule_{\sigma} = \begin{cases} +3 & \text{if } \sigma < 1\times 10^{-3} \\ +2 & \text{if } 1\times 10^{-3} \le \sigma < 1 \\ +1 & \text{if } 1 \le \sigma < 3 \\ 0 & \text{if } \sigma \ge 3 \end{cases}, \qquad (32)$$

$$rule_{\omega} = \begin{cases} +3 & \text{if } \omega > 1\times 10^{6} \\ +2 & \text{if } 1\times 10^{3} < \omega \le 1\times 10^{6} \\ +1 & \text{if } 1 < \omega \le 1\times 10^{3} \\ 0 & \text{if } \omega \le 1 \end{cases}, \qquad (33)$$

$$rule_{\alpha} = \begin{cases} +3 & \text{if } \alpha > 1000 \\ +2 & \text{if } 100 < \alpha \le 1000 \\ +1 & \text{if } 1 < \alpha \le 100 \\ 0 & \text{if } \alpha \le 1 \end{cases}, \qquad (34)$$

$$rule_{\tau} = \begin{cases} +3 & \text{if } \tau < 2 \\ +2 & \text{if } 2 \le \tau < 4 \\ +1 & \text{if } 4 \le \tau < 6 \\ 0 & \text{if } \tau \ge 6 \end{cases}, \qquad (35)$$
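The grade rules (31)-(35) reduce to simple threshold look-ups. A sketch that returns the five scores for one algorithm/function pair is given below; the metric argument names are illustrative.

```python
def grade(delta, sigma, omega, alpha, tau):
    """Score each metric with Eqs. (31)-(35): 3 excellent, 2 good, 1 fair, 0 low."""
    def bucket(value, edges, lower_is_better=True):
        # edges: the three thresholds separating the four grades (loosest first)
        if lower_is_better:
            return sum(value < e for e in edges)
        return sum(value > e for e in edges)
    return {
        "accuracy":     bucket(delta, (1.0, 1e-3, 1e-6)),           # Eq. (31)
        "precision":    bucket(sigma, (3.0, 1.0, 1e-3)),            # Eq. (32)
        "speed":        bucket(omega, (1.0, 1e3, 1e6), False),      # Eq. (33)
        "acceleration": bucket(alpha, (1.0, 100.0, 1000.0), False), # Eq. (34)
        "time":         bucket(tau,   (6.0, 4.0, 2.0)),             # Eq. (35)
    }
```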
Finally, the results are integrated into a spider-chart as shown in Fig. 10 to show the overall performance of
each optimization technique.
Table 4. Convergence speed and acceleration metrics for unimodal benchmark functions with N = 30, Tmax
= 1000 and Texp = 30.
Table 5. Convergence speed and acceleration metrics for multimodal benchmark functions with N = 30, Tmax = 1000 and
Texp = 30.
Table 6. Convergence speed and acceleration metrics for fixed-multimodal benchmark functions with N = 30, Tmax =
1000 and Texp = 30.
As a general trend, it can be seen in Fig. 10 that, for the three types of functions, the algorithm that is closest
on average to 100% performance in all five indexes is the QPSO-LR, followed by the PSO, the QPSO-CS and
finally the GA and FFO methods. A closer examination reveals:
• With respect to accuracy, the QPSO-LR, PSO and QPSO-CS rank first, second and third, respectively, with
average performances of 97%, 91% and 88%.
• In terms of precision, the FFO, the QPSO-LR and the GA can be ranked first, second and third, respectively,
with average performances over the three types of functions of 94%, 91%, and 87%.
• Referring to speed of convergence, the GA technique can be ranked first with 100% average performance,
followed by the QPSO-LR (83%) and the QPSO-RM (80%).
• In terms of acceleration of convergence, GA has 100%, followed by QPSO-RM and QPSO-LR with 80% and
74% average performance, respectively.
• Regarding the simulation time, PSO can be ranked at the top with an average performance of 100%, followed
by the QPSO-CS, QPSO-RM and the QPSO-LR with average performances between 85% and
80%.
To get a better understanding of the performance in terms of exploitation, the accuracy and precision are averaged
for each type of function (unimodal, multimodal and fixed multimodal). The same process is done with the
search speed and acceleration to quantify exploration (see Table 10). As a result, the exploitation performance
of QPSO-LR is ranked first, followed by the QPSO-RM and the QPSO-CS (just considering the proposed
approaches). This is expected, since the Lorentz potential field is the weakest as (z → ±∞) while the Coulomb-like
potential diverges as (z → ±∞), being the strongest. Regarding exploration, the QPSO-RM is ranked first,
followed by the QPSO-LR and the QPSO-CS. Again, this is also expected, since the Rosen–Morse potential field
is the strongest in between the limits of the quantum well while the Coulomb-like potential diverges towards −∞,
being the weakest (and the steepest).
An increase in exploration and a slight decrease in exploitation are observed in the multimodal functions
compared to the unimodal ones, due to the high number of traps that may exist in the hypersurface being searched.
Also, an increase in exploitation and a decrease in exploration are observed in the fixed multimodal functions
compared to the unimodal ones, attributed to the irregular hypersurface formed. While the behaviour of the search
algorithm may change for each type of function, the QPSO-LR always exhibited high exploitation and
moderate-to-high exploration, the combination of a weak potential at (z → ±∞) and a moderate steepness
between the limits of the quantum well being responsible for such a trend.
References
1. Venter, G. Review of optimization techniques. In Encyclopedia of Aerospace Engineering (ed. Venter, G.) (Wiley, 2010).
2. Shrestha, A. & Mahmood, A. Review of deep learning algorithms and architectures. IEEE Access 7, 53040–53065 (2019).
3. Hussain, K., Salleh, M. N. M., Cheng, S. & Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 52, 2191–2233
(2019).
4. Chambers, L. D. Practical Handbook of Genetic Algorithms: Complex Coding Systems, Vol. 3 (CRC Press, 2019).
5. Saini, N. Review of selection methods in genetic algorithms. Int. J. Eng. Comput. Sci. 6, 22261–22263 (2017).
6. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 12, 702–713 (2008).
7. Sayed, S.A.-F., Nabil, E. & Badr, A. A binary clonal flower pollination algorithm for feature selection. Pattern Recogn. Lett. 77,
21–27 (2016).
8. Pandiarajan, K. & Babulal, C. Fuzzy harmony search algorithm based optimal power flow for power system security enhancement.
Int. J. Electr. Power Energy Syst. 78, 72–79 (2016).
9. Cui, L., Li, G., Lin, Q., Chen, J. & Lu, N. Adaptive differential evolution algorithm with novel mutation strategies in multiple sub-
populations. Comput. Oper. Res. 67, 155–173 (2016).
10. Xu, S., Wang, Y. & Lu, P. Improved imperialist competitive algorithm with mutation operator for continuous optimization problems.
Neural Comput. Appl. 28, 1667–1682 (2017).
11. Dalgaard, M., Motzoi, F., Sørensen, J. J. & Sherson, J. Global optimization of quantum dynamics with alphazero deep exploration.
npj Quantum Inf. 6, 1–9 (2020).
12. Kennedy, J. & Eberhart, R. Particle swarm optimization. In Proc. ICNN’95-International Conference on Neural Networks, Vol. 4,
1942–1948 (IEEE, 1995).
13. Olivas, F. et al. Ant colony optimization with dynamic parameter adaptation based on interval type-2 fuzzy logic systems. Appl.
Soft Comput. 53, 74–87 (2017).
14. Devabalaji, K. & Ravi, K. Optimal size and siting of multiple dg and dstatcom in radial distribution system using bacterial foraging
optimization algorithm. Ain Shams Eng. J. 7, 959–971 (2016).
15. de Albuquerque, I. M. C., Neto, F. B. D. L. et al. Fish school search algorithm for constrained optimization. Preprint at
http://arXiv.org/1707.06169 (2017).
16. Zhang, L., Mistry, K., Neoh, S. C. & Lim, C. P. Intelligent facial emotion recognition using moth-firefly optimization. Knowl.-Based
Syst. 111, 248–267 (2016).
17. Kohli, M. & Arora, S. Chaotic grey wolf optimization algorithm for constrained optimization problems. J. Comput. Des. Eng. 5,
458–472 (2018).
18. Zeng, Y., Zhang, Z., Kusiak, A., Tang, F. & Wei, X. Optimizing wastewater pumping system with data-driven models and a greedy
electromagnetism-like algorithm. Stoch. Environ. Res. Risk Assess. 30, 1263–1275 (2016).
19. Liu, J. & Wang, Y.-P. An improved central force optimization based on simplex method. J. Zhejiang Univ. (Eng. Sci.) 48, 2 (2014).
20. Yazdani, S., Nezamabadi-pour, H. & Kamyab, S. A gravitational search algorithm for multimodal optimization. Swarm Evol.
Comput. 14, 1–14 (2014).
21. Hasançebi, O. & Azad, S. K. An exponential big bang-big crunch algorithm for discrete design optimization of steel frames. Comput.
Struct. 110, 167–179 (2012).
22. Kaveh, A., Mirzaei, B. & Jafarvand, A. An improved magnetic charged system search for optimization of truss structures with
continuous and discrete variables. Appl. Soft Comput. 28, 400–410 (2015).
23. Glasser, I., Pancotti, N., August, M., Rodriguez, I. D. & Cirac, J. I. Neural-network quantum states, string-bond states, and chiral
topological states. Phys. Rev. X 8, 011006 (2018).
24. Young, K. C., Blume-Kohout, R. & Lidar, D. A. Adiabatic quantum optimization with the wrong Hamiltonian. Phys. Rev. A 88,
062314 (2013).
25. Tanaka, S., Tamura, R. & Chakrabarti, B. K. Quantum Spin Glasses, Annealing and Computation (Cambridge University Press,
2017).
26. Lieu, Q. X., Do, D. T. & Lee, J. An adaptive hybrid evolutionary firefly algorithm for shape and size optimization of truss structures
with frequency constraints. Comput. Struct. 195, 99–112 (2018).
27. Assad, A. & Deep, K. A hybrid harmony search and simulated annealing algorithm for continuous optimization. Inf. Sci. 450,
246–266 (2018).
28. Erol, O. K., Eksin, I., Akdemir, A. & Aydınoglu, A. Coordinate exhaustive search hybridization enhancing evolutionary optimiza-
tion algorithms. J. AI Data Mining 8, 439–449 (2020).
29. Gálvez, J., Cuevas, E., Avalos, O., Oliva, D. & Hinojosa, S. Electromagnetism-like mechanism with collective animal behavior for
multimodal optimization. Appl. Intell. 48, 2580–2612 (2018).
30. Gao, H., Du, Y. & Diao, M. Quantum-inspired glowworm swarm optimisation and its application. Int. J. Comput. Sci. Math. 8,
91–100 (2017).
31. Alvarez-Alvarado, M. S. & Jayaweera, D. A new approach for reliability assessment of a static var compensator integrated smart
grid. In 2018 IEEE International Conference on Probabilistic Methods Applied to Power Systems (PMAPS) 1–7 (IEEE, 2018).
32. Toapanta, P. I., Alvarado, M. A. & Urbano, F. V. Optimización por enjambre de partículas cuánticas para la reducción de pérdidas
eléctricas. Rev. Tecnol. ESPOL 31, 86 (2018).
33. Alvarez-Alvarado, M. S. & Jayaweera, D. A multi-stage accelerated quantum particle swarm optimization for planning and opera-
tion of static var compensators. In 2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM) 1–6 (IEEE, 2018).
34. Alvarez-Alvarado, M. S. & Jayaweera, D. Operational risk assessment with smart maintenance of power generators. Int. J. Electr.
Power Energy Syst. 117, 105671 (2020).
35. Liu, M., Zhang, F., Ma, Y., Pota, H. R. & Shen, W. Evacuation path optimization based on quantum ant colony algorithm. Adv. Eng.
Inf. 30, 259–267 (2016).
36. Yong, Q., Cheng, B. & Xing, Y. A novel quantum ant colony algorithm used for campus path. In 2017 IEEE International Confer‑
ence on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing
(EUC), vol. 2, 161–165 (IEEE, 2017).
37. Lin, C., Wang, H., Yuan, J. & Fu, M. An online path planning method based on hybrid quantum ant colony optimization for AUV.
Int. J. Robot. Autom. 33, 435–444 (2018).
38. Ko, C.-N. & Lee, C.-I. Identification of nonlinear systems with outliers using modified quantum particle swarm optimization. In
2016 International Conference on System Science and Engineering (ICSSE) 1–4 (IEEE, 2016).
39. Yu, J., Mo, B., Tang, D., Liu, H. & Wan, J. Remaining useful life prediction for lithium-ion batteries using a quantum particle swarm
optimization-based particle filter. Qual. Eng. 29, 536–546 (2017).
40. Adam, S. P., Alexandropoulos, S.-A.N., Pardalos, P. M. & Vrahatis, M. N. No free lunch theorem: A review. In Approximation and
Optimization (eds Adam, S. P. et al.) 57–82 (Springer, 2019).
41. Okorie, U., Ikot, A., Rampho, G. & Sever, R. Superstatistics of modified Rosen–Morse potential with dirac delta and uniform
distributions. Commun. Theor. Phys. 71, 1246 (2019).
42. Ebomwonyi, O., Onate, C., Onyeaju, M. & Ikot, A. Any-states solutions of the Schrödinger equation interacting with Hellmann-
generalized Morse potential model. Karbala Int. J. Mod. Sci. 3, 59–68 (2017).
43. Yu, Q., Guo, K., Hu, M. & Zhang, Z. Second-harmonic generation investigated by topless potential well with inverse square root.
IEEE Photonics Technol. Lett. 31, 693–696 (2019).
44. Sun, J., Feng, B. & Xu, W. Particle swarm optimization with particles having quantum behavior. In Proc. 2004 Congress on Evolu‑
tionary Computation (IEEE Cat. No. 04TH8753), vol. 1, 325–331 (IEEE, 2004).
45. Rodríguez-Gallegos, C. D. et al. Placement and sizing optimization for PV-battery-diesel hybrid systems. In 2016 IEEE International
Conference on Sustainable Energy Technologies (ICSET) 83–89 (IEEE, 2016).
46. Rodríguez-Gallegos, C. D. et al. A siting and sizing optimization approach for PV-battery-diesel hybrid systems. IEEE Trans. Ind.
Appl. 54, 2637–2645 (2017).
47. Clerc, M. & Kennedy, J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE
Trans. Evol. Comput. 6, 58–73 (2002).
48. Alvarez-Alvarado, M. Power System Reliability Enhancement with Reactive Power Compensation and Operational Risk Assessment
with Smart Maintenance for Power Generators. Ph.D. thesis, University of Birmingham (2020).
49. Zielinski, K. & Laur, R. Stopping criteria for a constrained single-objective particle swarm optimization algorithm. Informatica
31, 51 (2007).
50. d’Espagnat, B. Conceptual Foundations of Quantum Mechanics (CRC Press, 2018).
51. Griffiths, D. J. & Schroeter, D. F. Introduction to Quantum Mechanics (Cambridge University Press, 2018).
52. Muccini, M. et al. Effect of wave-function delocalization on the exciton splitting in organic conjugated materials. Phys. Rev. B 62,
6296 (2000).
53. Nasser, I., Abdelmonem, M., Bahlouli, H. & Alhaidari, A. The rotating Morse potential model for diatomic molecules in the
tridiagonal j-matrix representation: I. Bound states. J. Phys. B At. Mol. Opt. Phys. 40, 4245 (2007).
54. Udoh, M., Okorie, U., Ngwueke, M., Ituen, E. & Ikot, A. Rotation-vibrational energies for some diatomic molecules with improved
Rosen–Morse potential in d-dimensions. J. Mol. Model. 25, 1–7 (2019).
55. Schulze-Halberg, A. The symmetrized square-root potential: Exact solutions and application to the two-dimensional massless
dirac equation. Few-Body Syst. 59, 1–11 (2018).
56. Mirjalili, S., Mirjalili, S. M. & Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014).
57. Isinger, M. et al. Accuracy and precision of the rabbit technique. Philos. Trans. R. Soc. A 377, 20170475 (2019).
58. Guerrier, S., Molinari, R. & Stebler, Y. Theoretical limitations of allan variance-based regression for time series model estimation.
IEEE Signal Process. Lett. 23, 597–601 (2016).
Author contributions
M.S.A.-A.: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Writing—
Original Draft. F.E.A.-C.: Validation, Formal analysis, Investigation, Writing—Original Draft, Resources. E.A.L.:
Validation, Formal analysis, Investigation, Writing—Original Draft, Visualization, Resources. C.D.R.-G.: Inves-
tigation, Validation, Data Curation, Writing—Review & Editing, Resources. W.V.: Investigation, Data Curation,
Writing—Original Draft, Visualization, Resources.
Competing interests
The authors declare no competing interests.
Additional information
Correspondence and requests for materials should be addressed to M.S.A.-A.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International
License, which permits use, sharing, adaptation, distribution and reproduction in any medium or
format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.