
Neural Comput & Applic (2016) 27:1053–1073

DOI 10.1007/s00521-015-1920-1

ORIGINAL ARTICLE

Dragonfly algorithm: a new meta-heuristic optimization technique


for solving single-objective, discrete, and multi-objective problems
Seyedali Mirjalili1,2

Received: 1 October 2014 / Accepted: 30 April 2015 / Published online: 29 May 2015
© The Natural Computing Applications Forum 2015

Abstract A novel swarm intelligence optimization technique called dragonfly algorithm (DA) is proposed. The main inspiration of the DA algorithm originates from the static and dynamic swarming behaviours of dragonflies in nature. Two essential phases of optimization, exploration and exploitation, are designed by modelling the social interaction of dragonflies in navigating, searching for food, and avoiding enemies when swarming dynamically or statically. The paper also proposes binary and multi-objective versions of DA, called binary DA (BDA) and multi-objective DA (MODA), respectively. The proposed algorithms are benchmarked on several mathematical test functions and one real case study, both qualitatively and quantitatively. The results of DA and BDA show that the proposed algorithms are able to improve the initial random population for a given problem, converge towards the global optimum, and provide very competitive results compared to other well-known algorithms in the literature. The results of MODA also show that this algorithm tends to find very accurate approximations of Pareto optimal solutions with highly uniform distribution for multi-objective problems. The set of designs obtained for the submarine propeller design problem demonstrates the merits of MODA in solving challenging real problems with unknown true Pareto optimal fronts as well. Note that the source codes of the DA, BDA, and MODA algorithms are publicly available at http://www.alimirjalili.com/DA.html.

Keywords Optimization · Multi-objective optimization · Constrained optimization · Binary optimization · Benchmark · Swarm intelligence · Evolutionary algorithms · Particle swarm optimization · Genetic algorithm

Electronic supplementary material The online version of this article (doi:10.1007/s00521-015-1920-1) contains supplementary material, which is available to authorized users.

Correspondence: Seyedali Mirjalili, seyedali.mirjalili@griffithuni.edu.au
1 School of Information and Communication Technology, Griffith University, Nathan Campus, Brisbane, QLD 4111, Australia
2 Queensland Institute of Business and Technology, Mt Gravatt, Brisbane, QLD 4122, Australia

1 Introduction

Nature is full of social behaviours for performing different tasks. Although the ultimate goal of all individual and collective behaviours is survival, creatures cooperate and interact in groups, herds, schools, colonies, and flocks for several reasons: hunting, defending, navigating, and foraging. For instance, wolf packs exhibit one of the most well-organized social interactions for hunting. Wolves tend to follow a social leadership to hunt prey in several steps: chasing, circling, harassing, and attacking the prey [1, 2]. An example of collective defence is schools of fish in the ocean. Thousands of fish create a school and avoid predators by warning each other, making predation very difficult [3]. The majority of predators have evolved to divide such schools into sub-schools by attacking them and eventually hunting the separated individuals.

Navigation is another reason for some creatures to swarm. Birds are the best examples of such behaviours, in which they migrate between continents in flocks conveniently. It has been proven that the v-shaped flight configuration greatly saves energy and distributes drag equally among the individuals in the flock [4]. Last but not least,

foraging is another main reason for social interactions of many species in nature. Ants and bees are the best examples of collective behaviours with the purpose of foraging. It has been proven that ants and bees are able to find and mark the shortest path from the nest/hive to the source of food [5]. They intelligently search for food and mark the path using pheromone to inform and guide others.

It is very interesting that creatures find optimal situations and perform tasks efficiently in groups. It is obvious that they have evolved over centuries to figure out such optimal and efficient behaviours. Therefore, it is quite reasonable to take inspiration from them to solve our own problems. This is the main purpose of a field of study called swarm intelligence (SI), which was first proposed by Beni and Wang in 1989 [6]. SI refers to the artificial implementation/simulation of the collective and social intelligence of a group of living creatures in nature [7]. Researchers in this field try to figure out the local rules for interactions between the individuals that yield the social intelligence. Since there is no centralized control unit to guide the individuals, finding the simple rules between some of them can simulate the social behaviour of the whole population.

The ant colony optimization (ACO) algorithm is one of the first SI techniques, mimicking the social intelligence of ants when foraging in an ant colony [8, 9]. This algorithm has been inspired by the simple fact that each ant marks its own path towards food sources outside the nest with pheromone. Once an ant finds a food source, it goes back to the nest and marks the path with pheromone to show the path to others. When other ants notice such pheromone marks, they also try to follow the path and leave their own pheromones. The key point here is that there might be different paths to the food source. Since a longer path takes longer for ants to travel, however, its pheromone evaporates at a higher rate before it is re-marked by other ants. Therefore, the shortest path is found by simply following the path with the strongest level of pheromone and abandoning the paths with weaker pheromone levels. Dorigo first drew inspiration from these simple rules and proposed the well-known ACO algorithm [10].

The particle swarm optimization (PSO) algorithm is another well-regarded SI paradigm. This algorithm mimics the foraging and navigation behaviour of bird flocks and was proposed by Eberhart and Kennedy [11]. The main inspiration originates from the simple rules of interaction between birds: birds tend to adjust their flight towards their current directions, the best location of a food source obtained so far, and the best location of food that the swarm has found so far [12]. The PSO algorithm simply mimics these three rules and guides the particles towards the best solutions found by each individual and by the swarm simultaneously.

The artificial bee colony (ABC) is another recent and popular SI-based algorithm. This algorithm again simulates the social behaviour of honey bees when foraging for nectar and was proposed by Karaboga [13]. The difference of this algorithm compared to ACO and PSO is the division of the honey bees into scout, onlooker, and employed bees [14]. The employed bees are responsible for finding food sources and informing the others by a special dance. Onlookers watch the dances, select one of them, and follow the path towards the selected food source. Scouts discover abandoned food sources and substitute them with new sources.

Since the proposal of these algorithms, a significant number of researchers have attempted to improve them or apply them to problems in diverse fields [15–20]. The successful application of these algorithms in science and industry evidences the merits of SI-based techniques in practice. The reasons lie in the advantages of SI-based algorithms. Firstly, SI-based techniques save information about the search space over the course of iterations, whereas such information is discarded by evolutionary algorithms (EA) generation by generation. Secondly, there are fewer controlling parameters in SI-based algorithms. Thirdly, SI-based algorithms employ fewer operators compared to EAs. Finally, SI-based techniques benefit from flexibility, which makes them readily applicable to problems in different fields.

Despite the significant number of recent publications in this field [21–29], there are still other swarming behaviours in nature that have not gained the attention they deserve. One of the fancy insects that rarely swarm is the dragonfly. Since there is no study in the literature simulating the individual and social intelligence of dragonflies, this paper aims to first identify the main characteristics of dragonflies' swarms. An algorithm is then proposed based on the identified characteristics. The no free lunch (NFL) theorem [30] also supports the motivation of this work, since the proposed optimizer may outperform other algorithms on some problems that have not been solved so far.

The rest of the paper is organized as follows: Section 2 presents the inspiration and biological foundations of the paper. The mathematical models and the DA algorithm are provided in Sect. 3. This section also proposes binary and multi-objective versions of DA. A comprehensive comparative study on several benchmark functions and one real case study is provided in Sect. 4 to confirm and verify the performance of the DA, BDA, and MODA algorithms. Finally, Sect. 5 concludes the work and suggests directions for future studies.

2 Inspiration

Dragonflies (Odonata) are fancy insects. There are nearly 3000 different species of this insect around the world [31]. As shown in Fig. 1, a dragonfly's lifecycle includes two

Fig. 1 a Real dragonfly, b Life cycle of dragonflies: egg, nymph, adult (left image courtesy of Mehrdad Momeny at www.mehrdadmomeny.com)

main milestones: nymph and adult. They spend the major portion of their lifespan as nymphs, and they undergo metamorphosis to become adults [31].

Dragonflies are considered small predators that hunt almost all other small insects in nature. Nymph dragonflies also predate other aquatic insects and even small fish. The interesting fact about dragonflies is their unique and rare swarming behaviour. Dragonflies swarm for only two purposes: hunting and migration. The former is called a static (feeding) swarm, and the latter is called a dynamic (migratory) swarm.

In a static swarm, dragonflies make small groups and fly back and forth over a small area to hunt other flying prey such as butterflies and mosquitoes [32]. Local movements and abrupt changes in the flying path are the main characteristics of a static swarm. In dynamic swarms, however, a massive number of dragonflies make the swarm for migrating in one direction over long distances [33].

The main inspiration of the DA algorithm originates from these static and dynamic swarming behaviours. These two swarming behaviours are very similar to the two main phases of optimization using meta-heuristics: exploration and exploitation. Dragonflies create sub-swarms and fly over different areas in a static swarm, which is the main objective of the exploration phase. In the dynamic swarm, however, dragonflies fly in bigger swarms and along one direction, which is favourable in the exploitation phase. These two phases are mathematically implemented in the following section.

3 Dragonfly algorithm

3.1 Operators for exploration and exploitation

According to Reynolds, the behaviour of swarms follows three primitive principles [34]:

• Separation, which refers to the static collision avoidance of the individuals from other individuals in the neighbourhood.
• Alignment, which indicates velocity matching of individuals to that of other individuals in the neighbourhood.
• Cohesion, which refers to the tendency of individuals towards the centre of mass of the neighbourhood.

The main objective of any swarm is survival, so all of the individuals should be attracted towards food sources and distracted away from enemies. Considering these two behaviours, there are five main factors in the position updating of individuals in swarms, as shown in Fig. 2.

Fig. 2 Primitive corrective patterns between individuals in a swarm: separation, alignment, cohesion, attraction to food, and distraction from enemy

Each of these behaviours is mathematically modelled as follows. The separation is calculated as follows [34]:

S_i = -\sum_{j=1}^{N} (X - X_j)   (3.1)

where X is the position of the current individual, X_j shows the position of the j-th neighbouring individual, and N is the number of neighbouring individuals.

Alignment is calculated as follows:

A_i = \frac{\sum_{j=1}^{N} V_j}{N}   (3.2)

where V_j shows the velocity of the j-th neighbouring individual.

The cohesion is calculated as follows:

C_i = \frac{\sum_{j=1}^{N} X_j}{N} - X   (3.3)

where X is the position of the current individual, N is the number of neighbours, and X_j shows the position of the j-th neighbouring individual.

Attraction towards a food source is calculated as follows:

F_i = X^{+} - X   (3.4)

where X is the position of the current individual, and X^{+} shows the position of the food source.

Distraction away from an enemy is calculated as follows:

E_i = X^{-} + X   (3.5)

where X is the position of the current individual, and X^{-} shows the position of the enemy.

The behaviour of dragonflies is assumed to be the combination of these five corrective patterns in this paper. To update the position of artificial dragonflies in a search space and simulate their movements, two vectors are considered: step (ΔX) and position (X). The step vector is analogous to the velocity vector in PSO, and the DA algorithm is developed based on the framework of the PSO algorithm. The step vector shows the direction of the movement of the dragonflies and is defined as follows (note that the position updating model of artificial dragonflies is presented in one dimension, but the introduced method can be extended to higher dimensions):

\Delta X_{t+1} = (s S_i + a A_i + c C_i + f F_i + e E_i) + w \Delta X_t   (3.6)

where s shows the separation weight, S_i indicates the separation of the i-th individual, a is the alignment weight, A_i is the alignment of the i-th individual, c indicates the cohesion weight, C_i is the cohesion of the i-th individual, f is the food factor, F_i is the food source of the i-th individual, e is the enemy factor, E_i is the position of the enemy of the i-th individual, w is the inertia weight, and t is the iteration counter.

After calculating the step vector, the position vectors are calculated as follows:

X_{t+1} = X_t + \Delta X_{t+1}   (3.7)

where t is the current iteration.

With the separation, alignment, cohesion, food, and enemy factors (s, a, c, f, and e), different explorative and exploitative behaviours can be achieved during optimization. Neighbours of dragonflies are very important, so a neighbourhood (a circle in 2D, a sphere in 3D, or a hyper-sphere in an nD space) with a certain radius is assumed around each artificial dragonfly. An example of the swarming behaviour of dragonflies with increasing neighbourhood radius using the proposed mathematical model is illustrated in Fig. 3.

As discussed in the previous subsection, dragonflies only show two types of swarms: static and dynamic, as shown in Fig. 4. As may be seen in this figure, dragonflies tend to align their flight while maintaining proper separation and cohesion in a dynamic swarm. In a static swarm, however, alignment is very low while cohesion is high in order to attack prey. Therefore, we assign dragonflies high alignment and low cohesion weights when exploring the search space, and low alignment and high cohesion weights when exploiting the search space. For transition between exploration and exploitation, the radii of the neighbourhoods are increased proportionally to the number of iterations. Another way to balance exploration and exploitation is to adaptively tune the swarming factors (s, a, c, f, e, and w) during optimization.
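As an illustration, the five factors and the update rules of Eqs. (3.1)–(3.7) can be sketched for a single dragonfly as follows. The function and variable names are our own, and this is a sketch of the stated equations rather than the author's reference implementation:

```python
import numpy as np

def da_update(X, dX, Xn, Vn, food, enemy, s, a, c, f, e, w):
    """One step/position update for a single dragonfly.

    X, dX      : position and step vector of the current dragonfly
    Xn, Vn     : (N, dim) arrays of neighbour positions and step vectors
    food, enemy: best (X+) and worst (X-) solutions found so far
    s, a, c, f, e, w: swarming factors and inertia weight
    """
    S = -np.sum(X - Xn, axis=0)      # separation, Eq. (3.1)
    A = Vn.mean(axis=0)              # alignment, Eq. (3.2)
    C = Xn.mean(axis=0) - X          # cohesion, Eq. (3.3)
    F = food - X                     # attraction to food, Eq. (3.4)
    E = enemy + X                    # distraction from enemy, Eq. (3.5)
    dX_new = s * S + a * A + c * C + f * F + e * E + w * dX   # Eq. (3.6)
    X_new = X + dX_new                                        # Eq. (3.7)
    return X_new, dX_new
```

In a full optimizer this update would be applied to every dragonfly in each iteration, with the neighbour set recomputed from the current neighbourhood radius.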

Fig. 3 Swarming behaviour of artificial dragonflies (w = 0.9–0.2, s = 0.1, a = 0.1, c = 0.7, f = 1, e = 1)

Fig. 4 Dynamic versus static dragonfly swarms
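A radius-based neighbourhood of the kind used in these figures can be selected with a simple Euclidean-distance test. The helper below is an illustrative sketch with names of our own choosing, not code from the paper:

```python
import numpy as np

def neighbours_of(i, positions, steps, radius):
    """Return positions and step vectors of all dragonflies within the
    given Euclidean radius of dragonfly i (excluding i itself)."""
    dist = np.linalg.norm(positions - positions[i], axis=1)
    mask = dist <= radius
    mask[i] = False   # a dragonfly is not its own neighbour
    return positions[mask], steps[mask]
```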

A question may arise here as to how the convergence of dragonflies is guaranteed during optimization. The dragonflies are required to change their weights adaptively for transitioning from exploration to exploitation of the search space. It is also assumed that dragonflies tend to see more neighbouring dragonflies to adjust their flying path as the optimization process progresses. In other words, the neighbourhood area is increased as well, whereby the swarm becomes one group at the final stage of optimization to converge to the global optimum. The food source and enemy are chosen from the best and worst solutions that the whole swarm has found so far. This causes convergence towards promising areas of the search space and divergence away from non-promising regions of the search space.
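This adaptive mechanism can be sketched with simple linear schedules: the neighbourhood radius grows with the iteration counter while the inertia weight shrinks (e.g. from 0.9 to 0.2, the range shown in Fig. 3). The exact formulas below are our own illustrative assumptions, not values prescribed by the paper:

```python
def schedules(t, max_iter, ub, lb, w_max=0.9, w_min=0.2):
    """Hypothetical linear schedules for one iteration t out of max_iter.

    ub, lb: upper and lower bounds of the variables (used to scale the radius).
    """
    ratio = t / max_iter
    radius = (ub - lb) * (0.25 + 2.0 * ratio)   # neighbourhood grows over time
    w = w_max - ratio * (w_max - w_min)         # inertia weight decays
    return radius, w
```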

To improve the randomness, stochastic behaviour, and exploration of the artificial dragonflies, they are required to fly around the search space using a random walk (Lévy flight) when there are no neighbouring solutions. In this case, the position of the dragonflies is updated using the following equation:

X_{t+1} = X_t + \mathrm{L\acute{e}vy}(d) \times X_t   (3.8)

where t is the current iteration, and d is the dimension of the position vectors.

The Lévy flight is calculated as follows [35]:

\mathrm{L\acute{e}vy}(x) = 0.01 \times \frac{r_1 \times \sigma}{|r_2|^{1/\beta}}   (3.9)

where r_1 and r_2 are two random numbers in [0, 1], β is a constant (equal to 1.5 in this work), and σ is calculated as follows:

\sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}} \right)^{1/\beta}   (3.10)

where Γ(x) = (x − 1)!.

3.2 The DA algorithm for single-objective problems

The DA algorithm starts the optimization process by creating a set of random solutions for a given optimization problem. In fact, the position and step vectors of the dragonflies are initialized with random values defined within the lower and upper bounds of the variables. In each iteration, the position and step of each dragonfly are updated using Eqs. (3.7)/(3.8) and (3.6). For updating the X and ΔX vectors, the neighbourhood of each dragonfly is chosen by calculating the Euclidean distance between all the dragonflies and selecting N of them. The position updating process is continued iteratively until the end criterion is satisfied. The pseudo-code of the DA algorithm is provided in Fig. 5.

Initialize the dragonflies population Xi (i = 1, 2, ..., n)
Initialize step vectors ΔXi (i = 1, 2, ..., n)
while the end condition is not satisfied
    Calculate the objective values of all dragonflies
    Update the food source and enemy
    Update w, s, a, c, f, and e
    Calculate S, A, C, F, and E using Eqs. (3.1) to (3.5)
    Update neighbouring radius
    if a dragonfly has at least one neighbouring dragonfly
        Update velocity vector using Eq. (3.6)
        Update position vector using Eq. (3.7)
    else
        Update position vector using Eq. (3.8)
    end if
    Check and correct the new positions based on the boundaries of variables
end while

Fig. 5 Pseudo-code of the DA algorithm

It is worth discussing here that the main differences between the DA and PSO algorithms are the consideration of separation, alignment, cohesion, attraction, distraction, and the random walk in this work. Although there are some works in the literature that attempted to integrate separation, alignment, and cohesion into PSO [36–38], this paper models the swarming behaviour of dragonflies by considering all the possible factors applied to individuals in a swarm. The concepts of static and dynamic swarms are quite novel as well. The proposed model of this work is also completely different from the improved PSO variants in the literature cited above.

3.3 The DA algorithm for binary problems (BDA)

Optimization in a binary search space is very different from that in a continuous space. In continuous search spaces, the search agents of DA are able to update their positions by adding the step vectors to the position vectors. In a binary search space, however, the position of the search agents cannot be updated by adding step vectors to X, since the position vectors can only take the values 0 or 1. Due to the similarity of DA to other SI techniques, the current methods for solving binary problems in the literature are readily applicable to this algorithm.

According to Mirjalili and Lewis [39], the easiest and most effective method to convert a continuous SI technique to a binary algorithm without modifying its structure is to employ a transfer function. Transfer functions receive velocity (step) values as inputs and return a number in [0, 1], which defines the probability of changing position. The output of such functions is directly proportional to the value of the velocity vector. Therefore, a large velocity value for a search agent makes it very likely to update its position. This method simulates abrupt changes in particles with large velocity values, similarly to continuous optimization (Fig. 6). Two examples of transfer functions from the literature are illustrated in Fig. 6 [39–41].

As may be seen in this figure, there are two types of transfer functions: s-shaped and v-shaped. According to Saremi et al. [40], the v-shaped transfer functions are better than the s-shaped ones because they do not force particles to take values of 0 or 1. In order to solve binary problems with the BDA algorithm, the following transfer function is utilized [39]:

T(\Delta x) = \left| \frac{\Delta x}{\sqrt{\Delta x^2 + 1}} \right|   (3.11)

This transfer function is first utilized to calculate the probability of changing position for all artificial dragonflies. The following new position updating formula is then employed to update the position of search agents in binary search spaces:
Fig. 6 S-shaped and v-shaped transfer functions

X_{t+1} = \begin{cases} \neg X_t, & r < T(\Delta x_{t+1}) \\ X_t, & r \ge T(\Delta x_{t+1}) \end{cases}   (3.12)

where r is a random number in the interval [0, 1].

With the transfer function and the new position updating equation, the BDA algorithm is able to solve binary problems, subject to proper formulation of the problem. It should be noted here that, since the distance between dragonflies cannot be determined in a binary space as clearly as in a continuous space, the BDA algorithm considers all of the dragonflies as one swarm and simulates exploration/exploitation by adaptively tuning the swarming factors (s, a, c, f, and e) as well as the inertia weight (w). The pseudo-code of the BDA algorithm is presented in Fig. 7.

Initialize the dragonflies population Xi (i = 1, 2, ..., n)
Initialize step vectors ΔXi (i = 1, 2, ..., n)
while the end condition is not satisfied
    Calculate the objective values of all dragonflies
    Update the food source and enemy
    Update w, s, a, c, f, and e
    Calculate S, A, C, F, and E using Eqs. (3.1) to (3.5)
    Update step vectors using Eq. (3.6)
    Calculate the probabilities using Eq. (3.11)
    Update position vectors using Eq. (3.12)
end while

Fig. 7 Pseudo-code of the BDA algorithm

3.4 The DA algorithm for multi-objective problems (MODA)

Multi-objective problems have multiple objectives, which are mostly in conflict. The answer to such problems is a set of solutions called the Pareto optimal solution set. This set includes Pareto optimal solutions that represent the best trade-offs between the objectives [42]. Without loss of generality, multi-objective optimization can be formulated as a minimization problem as follows:

Minimize: F(\vec{x}) = \{ f_1(\vec{x}), f_2(\vec{x}), \ldots, f_o(\vec{x}) \}   (3.13)
Subject to: g_i(\vec{x}) \ge 0, \quad i = 1, 2, \ldots, m   (3.14)
h_i(\vec{x}) = 0, \quad i = 1, 2, \ldots, p   (3.15)
L_i \le x_i \le U_i, \quad i = 1, 2, \ldots, n   (3.16)

where o is the number of objectives, m is the number of inequality constraints, p is the number of equality constraints, and [L_i, U_i] are the boundaries of the i-th variable.

Due to the nature of multi-objective problems, the comparison between different solutions cannot be done with arithmetic relational operators. In this case, the concepts of Pareto dominance and Pareto optimality allow us to compare two solutions in a multi-objective search space. Their definitions are as follows [43]:

Definition 1 (Pareto dominance): Suppose that there are two vectors \vec{x} = (x_1, x_2, \ldots, x_k) and \vec{y} = (y_1, y_2, \ldots, y_k). Vector \vec{x} dominates vector \vec{y} (denoted as \vec{x} \prec \vec{y}) iff:

\forall i \in \{1, 2, \ldots, k\} : f(x_i) \le f(y_i) \;\wedge\; \exists i \in \{1, 2, \ldots, k\} : f(x_i) < f(y_i)   (3.17)

It can be seen in Eq. (3.17) that a solution dominates another if it shows better or equal values on all objectives (dimensions) and a strictly better value in at least one of the objectives. The definition of Pareto optimality is as follows [44]:

Definition 2 (Pareto optimality): A solution \vec{x} \in X is called Pareto optimal iff:

\nexists\, \vec{y} \in X \mid F(\vec{y}) \prec F(\vec{x})   (3.18)

According to Definition 2, two solutions are non-dominated with respect to each other if neither of them dominates the other. A set including all the non-dominated

solutions of a problem is called the Pareto optimal set and is defined as follows:

Definition 3 (Pareto optimal set): The set of all Pareto optimal solutions:

P_s := \{ \vec{x} \in X \mid \nexists\, \vec{y} \in X : F(\vec{y}) \prec F(\vec{x}) \}   (3.19)

A set containing the corresponding objective values of the Pareto optimal solutions in the Pareto optimal set is called the Pareto optimal front. The definition of the Pareto optimal front is as follows:

Definition 4 (Pareto optimal front): The set containing the values of the objective functions for the Pareto optimal set:

P_f := \{ F(\vec{x}) \mid \vec{x} \in P_s \}   (3.20)

In order to solve multi-objective problems using meta-heuristics, an archive (repository) is widely used in the literature to maintain the Pareto optimal solutions during optimization. Two key points in finding a proper set of Pareto optimal solutions for a given problem are convergence and coverage. Convergence refers to the ability of a multi-objective algorithm to determine accurate approximations of Pareto optimal solutions. Coverage is the distribution of the obtained Pareto optimal solutions along the objectives. Since most of the current multi-objective algorithms in the literature are a posteriori, the coverage and number of solutions are very important for decision making after the optimization process [45]. The ultimate goal of a multi-objective optimizer is to find the most accurate approximation of the true Pareto optimal solutions (convergence) with uniform distribution (coverage) across all objectives.

For solving multi-objective problems with the DA algorithm, it is first equipped with an archive to store and retrieve the best approximations of the true Pareto optimal solutions during optimization. The position updating of the search agents is identical to that of DA, but the food sources are selected from the archive. In order to find a well-spread Pareto optimal front, a food source is chosen from the least populated region of the obtained Pareto optimal front, similarly to the multi-objective particle swarm optimization (MOPSO) algorithm [46]. To find the least populated area of the Pareto optimal front, the search space should be segmented. This is done by finding the best and worst objective values of the Pareto optimal solutions obtained, defining a hyper-sphere to cover all the solutions, and dividing the hyper-sphere into equal sub-hyper-spheres in each iteration. After the creation of the segments, the selection is done by a roulette-wheel mechanism with the following probability for each segment, which was proposed by Coello Coello et al. [47]:

P_i = \frac{c}{N_i}   (3.21)

where c is a constant number greater than one, and N_i is the number of obtained Pareto optimal solutions in the i-th segment.

This equation allows the MODA algorithm to have a higher probability of choosing food sources from the less populated segments. Therefore, the artificial dragonflies will be encouraged to fly around such regions and improve the distribution of the whole Pareto optimal front.

For selecting enemies from the archive, however, the worst (most populated) hyper-sphere should be chosen in order to discourage the artificial dragonflies from searching around non-promising, crowded areas. The selection is done by a roulette-wheel mechanism with the following probability for each segment:

P_i = \frac{N_i}{c}   (3.22)

where c is a constant number greater than one, and N_i is the number of obtained Pareto optimal solutions in the i-th segment.

It may be seen in Eq. (3.22) that the roulette-wheel mechanism assigns high probabilities to the most crowded hyper-spheres for being selected as enemies. An example of the two above-discussed selection processes is illustrated in Fig. 8. Note that the main hyper-sphere that covers all the sub-hyper-spheres is not illustrated in this figure.

Fig. 8 Conceptual model of the best hyper-spheres for selecting a food source or removing a solution from the archive

The archive should be updated regularly in each iteration and may become full during optimization. Therefore, there should be a mechanism to manage the archive. If a solution is dominated by at least one of the archive residents, it should be prevented from entering the archive. If a solution dominates some of the Pareto optimal solutions in the archive, they should all be removed from
Neural Comput & Applic (2016) 27:1053–1073 1061

the archive, and the solution should be allowed to enter the archive. If a solution is non-dominated with respect to all of the solutions in the archive, it should be added to the archive. If the archive is full, one or more solutions may be removed from the most populated segments to accommodate new solution(s) in the archive. These rules are taken from the work of Coello Coello et al. [47]. Figure 8 shows the best candidate hyper-spheres (segments) from which to remove solutions (enemies) in case the archive becomes full.

All the parameters of the MODA algorithm are identical to those of the DA algorithm, except for two new parameters defining the maximum number of hyper-spheres and the archive size. The pseudo-code of MODA is presented in Fig. 9.

4 Results and discussion

In this section, a number of test problems and one real case study are selected to benchmark the performance of the proposed DA, BDA, and MODA algorithms.

4.1 Results of DA algorithm

Three groups of test functions with different characteristics are selected to benchmark the performance of the DA algorithm from different perspectives. As shown in Appendix 1, the test functions are divided into three groups: unimodal, multi-modal, and composite functions [48–51]. As their names imply, unimodal test functions have a single optimum, so they can benchmark the exploitation and convergence of an algorithm. In contrast, multi-modal test functions have more than one optimum, which makes them more challenging than unimodal functions. One of the optima is called the global optimum, and the rest are called local optima. An algorithm should avoid all the local optima to approach and approximate the global optimum. Therefore, the exploration and local optima avoidance of algorithms can be benchmarked by multi-modal test functions.

The last group of test functions, composite functions, are mostly combined, rotated, shifted, and biased versions of other unimodal and multi-modal test functions [52, 53]. They mimic the difficulties of real search spaces by providing a massive number of local optima and different shapes for different regions of the search space. An algorithm should properly balance exploration and exploitation to approximate the global optimum of such test functions. Therefore, exploration and exploitation combined can be benchmarked by this group of test functions.

For verification of the results of DA, two well-known algorithms are chosen: PSO [54] as the best algorithm among swarm-based techniques and GA [55] as the best evolutionary algorithm. In order to collect quantitative results, each algorithm is run on the test functions 30 times to calculate the average and standard deviation of the best approximated solution in the last iteration. These two metrics show which algorithm behaves more stably when solving the test functions. Due to the stochastic nature of the algorithms, a statistical test is also conducted to decide on the significance of the results [56]. Averages and standard deviations only compare the overall performance of the algorithms, while a statistical test considers each run's results and proves that the results are statistically significant. The Wilcoxon non-parametric statistical test [39, 56] is conducted in this work. Finally, each of the test functions is solved using

Fig. 9 Pseudo-code of the MODA algorithm:

Initialize the dragonfly population Xi (i = 1, 2, ..., n)
Initialize step vectors ΔXi (i = 1, 2, ..., n)
Define the maximum number of hyper-spheres (segments)
Define the archive size
while the end condition is not satisfied
    Calculate the objective values of all dragonflies
    Find the non-dominated solutions
    Update the archive with respect to the obtained non-dominated solutions
    if the archive is full
        Run the archive maintenance mechanism to omit one of the current archive members
        Add the new solution to the archive
    end if
    if any of the newly added solutions to the archive is located outside the hyper-spheres
        Update and re-position all of the hyper-spheres to cover the new solution(s)
    end if
    Select a food source from the archive using SelectFood(archive)
    Select an enemy from the archive using SelectEnemy(archive)
    Update step vectors using Eq. (3.11)
    Update position vectors using Eq. (3.12)
    Check and correct the new positions based on the boundaries of variables
end while

Table 1 Statistical results of the algorithms on the test functions

Test function | DA (Ave) | DA (Std) | PSO (Ave) | PSO (Std) | GA (Ave) | GA (Std)
TF1 | 2.85E-18 | 7.16E-18 | 4.2E-18 | 1.31E-17 | 748.5972 | 324.9262
TF2 | 1.49E-05 | 3.76E-05 | 0.003154 | 0.009811 | 5.971358 | 1.533102
TF3 | 1.29E-06 | 2.1E-06 | 0.001891 | 0.003311 | 1949.003 | 994.2733
TF4 | 0.000988 | 0.002776 | 0.001748 | 0.002515 | 21.16304 | 2.605406
TF5 | 7.600558 | 6.786473 | 63.45331 | 80.12726 | 133307.1 | 85,007.62
TF6 | 4.17E-16 | 1.32E-15 | 4.36E-17 | 1.38E-16 | 563.8889 | 229.6997
TF7 | 0.010293 | 0.004691 | 0.005973 | 0.003583 | 0.166872 | 0.072571
TF8 | -2857.58 | 383.6466 | -7.1E+11 | 1.2E+12 | -3407.25 | 164.4776
TF9 | 16.01883 | 9.479113 | 10.44724 | 7.879807 | 25.51886 | 6.66936
TF10 | 0.23103 | 0.487053 | 0.280137 | 0.601817 | 9.498785 | 1.271393
TF11 | 0.193354 | 0.073495 | 0.083463 | 0.035067 | 7.719959 | 3.62607
TF12 | 0.031101 | 0.098349 | 8.57E-11 | 2.71E-10 | 1858.502 | 5820.215
TF13 | 0.002197 | 0.004633 | 0.002197 | 0.004633 | 68,047.23 | 87,736.76
TF14 | 103.742 | 91.24364 | 150 | 135.4006 | 130.0991 | 21.32037
TF15 | 193.0171 | 80.6332 | 188.1951 | 157.2834 | 116.0554 | 19.19351
TF16 | 458.2962 | 165.3724 | 263.0948 | 187.1352 | 383.9184 | 36.60532
TF17 | 596.6629 | 171.0631 | 466.5429 | 180.9493 | 503.0485 | 35.79406
TF18 | 229.9515 | 184.6095 | 136.1759 | 160.0187 | 118.438 | 51.00183
TF19 | 679.588 | 199.4014 | 741.6341 | 206.7296 | 544.1018 | 13.30161
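The comparison protocol behind Tables 1 and 2 (30 independent runs per algorithm, then a Wilcoxon rank-sum test at the 0.05 level) can be sketched as follows. The implementation below is a simplification that uses the large-sample normal approximation and assumes no tied values, and the score samples are made up for illustration; library routines handle ties and exact small-sample distributions.

```python
import math

def ranksum_pvalue(xs, ys):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation
    (a sketch: no tie handling, large-sample approximation only)."""
    n1, n2 = len(xs), len(ys)
    pooled = sorted(xs + ys)
    rank = {v: r + 1 for r, v in enumerate(pooled)}   # 1-based ranks
    w = sum(rank[v] for v in xs)                      # rank sum of sample 1
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))           # two-sided tail

# Two made-up sets of best scores over repeated runs:
algo_a = [0.8, 1.1, 0.9, 1.0, 1.2]
algo_b = [5.1, 4.8, 5.3, 5.0, 4.9]
significant = ranksum_pvalue(algo_a, algo_b) < 0.05   # the paper's threshold
```

A p value below 0.05 indicates that the difference between the two run distributions is unlikely to be due to chance, which is the criterion the tables apply.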

30 search agents over 500 iterations, and the results are presented in Tables 1 and 2. Note that the initial parameters of PSO and GA are identical to the values in the original papers cited above.

As per the results of the algorithms on the unimodal test functions (TF1–TF7), it is evident that the DA algorithm outperforms PSO and GA in the majority of cases. The p values in Table 2 also show that this superiority is statistically significant, since the p values are less than 0.05. Considering the characteristics of unimodal test functions, it can be stated that the DA algorithm benefits from high exploitation. High exploitation assists the DA algorithm to rapidly converge towards the global optimum and exploit it accurately.

The results of the algorithms on the multi-modal test functions (TF8–TF13) show that, again, the DA algorithm provides very competitive results compared to PSO. The p values reported in Table 2 also show that the DA and PSO algorithms deliver significantly better results than GA. Considering the characteristics of multi-modal test functions and these results, it may be concluded that the DA algorithm has high exploration, which assists it in discovering the promising regions of the search space. In addition, the local optima avoidance of this algorithm is satisfactory since it is able to avoid all of the local optima and approximate the global optima on the majority of the multi-modal test functions.

Table 2 p values of the Wilcoxon rank-sum test over all runs

F | DA | PSO | GA
TF1 | N/A | 0.045155 | 0.000183
TF2 | N/A | 0.121225 | 0.000183
TF3 | N/A | 0.003611 | 0.000183
TF4 | N/A | 0.307489 | 0.000183
TF5 | N/A | 0.10411 | 0.000183
TF6 | 0.344704 | N/A | 0.000183
TF7 | 0.021134 | N/A | 0.000183
TF8 | 0.000183 | N/A | 0.000183
TF9 | 0.364166 | N/A | 0.002202
TF10 | N/A | 0.472676 | 0.000183
TF11 | 0.001008 | N/A | 0.000183
TF12 | 0.140465 | N/A | 0.000183
TF13 | N/A | 0.79126 | 0.000183
TF14 | N/A | 0.909654 | 0.10411
TF15 | 0.025748 | 0.241322 | N/A
TF16 | 0.01133 | N/A | 0.053903
TF17 | 0.088973 | N/A | 0.241322
TF18 | 0.273036 | 0.791337 | N/A
TF19 | N/A | 0.472676 | N/A

The results of the composite test functions (TF14–TF19) show that the DA algorithm provides very competitive results and outperforms the others occasionally. However,


the p values show that the superiority is not as significant as for the unimodal and multi-modal test functions. This is due to the difficulty of the composite test functions, which makes them challenging for the algorithms employed in this work. Composite test functions benchmark exploration and exploitation combined. Therefore, these results prove that the operators of the DA algorithm appropriately balance exploration and exploitation to handle difficulty in a challenging search space. Since composite search spaces are highly similar to real search spaces, these results suggest that the DA algorithm is potentially able to solve challenging optimization problems.

For further observing and analysing the performance of the proposed DA algorithm, four new metrics are employed in the following paragraphs. The main aim of this experiment is to confirm the convergence and predict the potential behaviour of the DA algorithm when solving real problems. The employed quantitative metrics are the position of dragonflies from the first to the last iteration (search history), the value of a parameter from the first to the last iteration (trajectory), the average fitness of dragonflies from the first to the last iteration, and the fitness of the best food source obtained from the first to the last iteration (convergence).

Tracking the position of dragonflies during optimization allows us to observe whether and how the DA algorithm explores and exploits the search space. Monitoring the value of a parameter during optimization assists us in observing the movement of candidate solutions. Preferably, there should be abrupt changes in the parameters in the exploration phase and gradual changes in the exploitation phase. The average fitness of dragonflies during optimization also shows the improvement in the fitness of the whole swarm during optimization. Finally, the fitness of the food source shows the improvement of the obtained global optimum during optimization.

Some of the functions (TF2, TF10, and TF17) are selected and solved by 10 search agents over 150 iterations. The results are illustrated in Figs. 10, 11, 12, and 13. Figure 10 shows the history of the dragonflies' positions during optimization. It may be observed that the DA algorithm tends to search the promising regions of the search space extensively. The behaviour of DA when solving TF17, which is a composite test function, is interesting because the coverage of the search space seems to be high. This shows

Fig. 10 Search history of the DA algorithm on unimodal, multi-modal, and composite test functions

Fig. 11 Trajectory of DA's search agents on unimodal, multi-modal, and composite test functions

Fig. 12 Average fitness of DA's search agents on unimodal, multi-modal, and composite test functions


Fig. 13 Convergence curve of the DA algorithm on unimodal, multi-modal, and composite test functions

Table 3 Statistical results of the binary algorithms on the test functions

Test function | BDA (Ave) | BDA (Std) | BPSO (Ave) | BPSO (Std) | BGSA (Ave) | BGSA (Std)
TF1 | 0.281519 | 0.417723 | 5.589032 | 1.97734 | 82.95707 | 49.78105
TF2 | 0.058887 | 0.069279 | 0.196191 | 0.052809 | 1.192117 | 0.228392
TF3 | 14.23555 | 22.68806 | 15.51722 | 13.68939 | 455.9297 | 271.9785
TF4 | 0.247656 | 0.330822 | 1.895313 | 0.483579 | 7.366406 | 2.213344
TF5 | 23.55335 | 34.6822 | 86.44629 | 65.82514 | 3100.999 | 2927.557
TF6 | 0.095306 | 0.129678 | 6.980524 | 3.849114 | 106.8896 | 77.54615
TF7 | 0.012209 | 0.014622 | 0.011745 | 0.006925 | 0.03551 | 0.056549
TF8 | -924.481 | 65.68827 | -988.565 | 16.66224 | -860.914 | 80.56628
TF9 | 1.805453 | 1.053829 | 4.834208 | 1.549026 | 10.27209 | 3.725984
TF10 | 0.388227 | 0.5709 | 2.154889 | 0.540556 | 2.786707 | 1.188036
TF11 | 0.193437 | 0.113621 | 0.47729 | 0.129354 | 0.788799 | 0.251103
TF12 | 0.149307 | 0.451741 | 0.407433 | 0.231344 | 9.526426 | 6.513454
TF13 | 0.035156 | 0.056508 | 0.306925 | 0.241643 | 2216.776 | 5663.491

that the DA’s artificial dragonflies are able to search the exploitation. For one, the proposed static swarm promotes
search space effectively. exploration, assists the DA algorithm to avoid local optima,
Figure 11 illustrates the trajectory of the first variable of and resolves local optima stagnation when solving chal-
the first artificial dragonfly over 150 iterations. It can be lenging problems. For another, the dynamic swarm of
observed that there are abrupt changes in the initial it- dragonflies emphasizes exploitation as iteration increases,
erations. These changes are decreased gradually over the which causes a very accurate approximation of the global
course of iterations. According to Berg et al. [57], this optimum.
behaviour can guarantee that an algorithm eventually
convergences to a point and search locally in a search 4.2 Results of BDA algorithm
space.
Figures 12 and 13 show the average fitness of all To benchmark the performance of the BDA algorithm, test
dragonflies and the food source, respectively. The average functions TF1 to TF13 are taken from Sect. 4.1 and Ap-
fitness of dragonflies shows a decreasing behaviour on all pendix 1. For simulating a binary search space, we consider
of the test functions. This proves that the DA algorithm 15 bits to define the variables of the test functions. The
improves the overall fitness of the initial random popula- dimension of test functions is reduced from 30 to 5, so the
tion. A similar behaviour can be observed in the conver- total number of binary variables to be optimized by the
gence curves. This also evidences that the approximation of BDA algorithm is 75 (5 9 15). For verification of the re-
the global optimum becomes more accurate as the iteration sults, the binary PSO (BPSO) [58] and binary gravitational
counter increases. Another fact that can be seen is the ac- search algorithm (BGSA) [59] are chosen from the lit-
celerated trend in the convergence curves. This is due to erature. Each of the algorithms is run 30 times, and the
the emphasis on local search and exploitation as iteration results are presented in Tables 3 and 4. Note that the initial
increases which highly accelerate the convergence towards parameters of BPSO and BGSA are identical to the values
the optimum in the final steps of iterations. in the original papers cited above.
As summary, the results of this section proved that the Table 3 shows that the proposed algorithm outperforms
proposed DA algorithm shows high exploration and both BPSO and BGSA on the majority of binary test cases.


Table 4 p values of the Wilcoxon rank-sum test over all runs

F | BDA | BPSO | BGSA
TF1 | N/A | 0.000183 | 0.000183
TF2 | N/A | 0.001706 | 0.000183
TF3 | N/A | 0.121225 | 0.000246
TF4 | N/A | 0.000211 | 0.000183
TF5 | N/A | 0.009108 | 0.000183
TF6 | N/A | 0.000183 | 0.000183
TF7 | 0.472676 | N/A | 0.344704
TF8 | 0.064022 | N/A | 0.000583
TF9 | N/A | 0.000583 | 0.000183
TF10 | N/A | 0.00033 | 0.00044
TF11 | N/A | 0.000583 | 0.00033
TF12 | N/A | 0.002827 | 0.000183
TF13 | N/A | 0.000583 | 0.000183

The discrepancy of the results is very evident as per the p values reported in Table 4. These results prove that the BDA algorithm inherits high exploration and exploitation from the DA algorithm due to the use of the V-shaped transfer function.

4.3 Results of MODA algorithm

Table 5 Results of the multi-objective algorithms on ZDT1

Algorithm | IGD (Ave) | IGD (Std) | IGD (Median) | IGD (Best) | IGD (Worst)
MODA | 0.00612 | 0.002863 | 0.0072 | 0.0024 | 0.0096
MOPSO | 0.00422 | 0.003103 | 0.0037 | 0.0015 | 0.0101
NSGA-II | 0.05988 | 0.005436 | 0.0574 | 0.0546 | 0.0702

Table 6 Results of the multi-objective algorithms on ZDT2

Algorithm | IGD (Ave) | IGD (Std) | IGD (Median) | IGD (Best) | IGD (Worst)
MODA | 0.00398 | 0.001604244 | 0.0033 | 0.0023 | 0.006
MOPSO | 0.00156 | 0.000174356 | 0.0017 | 0.0013 | 0.0017
NSGA-II | 0.13972 | 0.026263465 | 0.1258 | 0.1148 | 0.1834

Table 7 Results of the multi-objective algorithms on ZDT3

Algorithm | IGD (Ave) | IGD (Std) | IGD (Median) | IGD (Best) | IGD (Worst)
MODA | 0.02794 | 0.004021 | 0.0302 | 0.02 | 0.0304
MOPSO | 0.03782 | 0.006297 | 0.0362 | 0.0308 | 0.0497
NSGA-II | 0.04166 | 0.008073 | 0.0403 | 0.0315 | 0.0557

As multi-objective case studies, five challenging test functions from the well-known ZDT set proposed by Deb et al. [60] are chosen in this subsection. Note that the first three test functions are identical to ZDT1, ZDT2, and ZDT3. However, this paper modifies ZDT1 and ZDT2 to obtain test problems with linear and tri-objective fronts as the last two case studies. The details of these test functions are available in Appendix 2. The results are collected and discussed quantitatively and qualitatively. Quantitative results are calculated by the inverse generational distance (IGD) proposed by Sierra and Coello Coello [61] over ten runs. This performance metric is similar to the generational distance (GD) [62] and is formulated as follows:

$\mathrm{IGD} = \frac{\sqrt{\sum_{i=1}^{n} d_i^2}}{n}$  (4.1)

where n is the number of true Pareto optimal solutions and $d_i$ indicates the Euclidean distance between the i-th true Pareto optimal solution and the closest obtained Pareto optimal solution in the reference set.

For collecting and discussing the qualitative results, the best Pareto optimal fronts over ten independent runs are presented. The MODA algorithm is compared to MOPSO [47] and the non-dominated sorting genetic algorithm (NSGA-II) [63]. The quantitative results are presented in Tables 5, 6, 7, 8, and 9, and the qualitative results are provided in Figs. 14, 15, 16, 17, and 18.

Table 8 Results of the multi-objective algorithms on ZDT1 with linear front

Algorithm | IGD (Ave) | IGD (Std) | IGD (Median) | IGD (Best) | IGD (Worst)
MODA | 0.00616 | 0.005186 | 0.0038 | 0.0022 | 0.0163
MOPSO | 0.00922 | 0.005531 | 0.0098 | 0.0012 | 0.0165
NSGA-II | 0.08274 | 0.005422 | 0.0804 | 0.0773 | 0.0924

Table 9 Results of the multi-objective algorithms on ZDT2 with three objectives

Algorithm | IGD (Ave) | IGD (Std) | IGD (Median) | IGD (Best) | IGD (Worst)
MODA | 0.00916 | 0.005372 | 0.0063 | 0.0048 | 0.0191
MOPSO | 0.02032 | 0.001278 | 0.0203 | 0.0189 | 0.0225
NSGA-II | 0.0626 | 0.017888 | 0.0584 | 0.0371 | 0.0847

As per the results presented in Tables 5, 6, 7, 8, and 9, the MODA algorithm tends to outperform NSGA-II and provides very competitive results compared to MOPSO on the majority of the test functions. Figures 14, 15, 16, 17, and 18 also show that the convergence and coverage of the Pareto optimal solutions obtained by the MODA algorithm are


Fig. 14 Best Pareto optimal front obtained by the multi-objective algorithms on ZDT1

Fig. 15 Best Pareto optimal front obtained by the multi-objective algorithms on ZDT2

Fig. 16 Best Pareto optimal front obtained by the multi-objective algorithms on ZDT3
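Equation (4.1) is straightforward to implement. The sketch below assumes both fronts are given as lists of objective tuples, with the true front acting as the reference set:

```python
import math

def igd(true_front, obtained_front):
    """Inverse generational distance, Eq. (4.1): the square root of the
    summed squared nearest-neighbour distances from each true Pareto
    point to the obtained front, divided by the number of true points."""
    total = sum(
        min(math.dist(t, o) for o in obtained_front) ** 2
        for t in true_front
    )
    return math.sqrt(total) / len(true_front)
```

A perfectly recovered front gives an IGD of zero; larger values indicate worse convergence and/or coverage, which is why a single number can rank the three algorithms in the tables.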

mostly better than NSGA-II. High convergence of the MODA originates from the accelerated convergence of search agents around the food sources selected from the archive over the course of iterations. Adaptive values for s, a, c, f, e, and w in MODA allow its search agents to converge towards the food sources proportional to the number of iterations. High coverage of the MODA algorithm is due to the employed food/enemy selection mechanisms. Since the foods and enemies are selected from the less populated and most populated hyper-spheres, respectively, the search agents of the MODA algorithm tend to search the regions of the search space that have Pareto optimal solutions with


Fig. 17 Best Pareto optimal front obtained by the multi-objective algorithms on ZDT1 with linear front

Fig. 18 Best Pareto optimal front obtained by the multi-objective algorithms on ZDT2 with three objectives

Fig. 19 A 7-blade propeller with 2 m diameter for submarines

low distribution and to avoid highly distributed regions of the Pareto front. Therefore, the distribution of the Pareto optimal solutions is adjusted and increased along the obtained Pareto optimal front. The maintenance mechanism for a full archive also assists the MODA algorithm in discarding excess Pareto optimal solutions (enemies) in populated segments and allows adding new food sources in less populated regions. These results evidence the merits of the proposed MODA in solving multi-objective problems as an a posteriori algorithm.

To demonstrate the applicability of the proposed MODA algorithm in practice, a submarine propeller is optimized by this algorithm as well. This problem has two objectives: cavitation versus efficiency. These two objectives are in


conflict and are restricted by a large number of constraints, as in other computational fluid dynamics (CFD) problems. The problem is formulated as follows:

Maximize: $\eta(X)$  (4.2)
Minimize: $V(X)$  (4.3)
Subject to: $T > 40{,}000$, $\mathrm{RPM} = 200$, $Z = 7$, $D = 2$, $d = 0.4$, and $S = 5$  (4.4)

where η is the efficiency, V is the cavitation, T is the thrust, RPM is the rotational speed of the propeller (revolutions per minute), Z is the number of blades, D is the diameter of the propeller (m), d is the diameter of the hub (m), and S is the ship speed (m/s).

The shape of the propeller employed is illustrated in Fig. 19. Note that the full list of constraints and other physical details of the propeller design problem are not provided in this paper, so interested readers are referred to Carlton's book [64].

Fig. 20 A blade is divided into ten airfoils, each of which has two structural parameters: maximum thickness and chord length

As shown in Fig. 20, the main structural parameters are the shapes of the airfoils along the blades, which define the final shape of the propeller. The structure of each airfoil is determined by two parameters: maximum thickness and chord length. Ten airfoils are considered along the blade in this study, so there is a total of 20 structural parameters to be optimized by the MODA algorithm.

This real case study is solved by the MODA algorithm equipped with 200 artificial dragonflies over 300 iterations. Since the problem of submarine propeller design has many constraints, MODA should be equipped with a constraint-handling method. For simplicity, a death penalty is utilized, which assigns very low efficiency and large cavitation to the artificial dragonflies that violate any of the constraints. Therefore, they are dominated automatically when the non-dominated solutions are found in the next iteration.

As can be seen in Fig. 21, the MODA algorithm found 61 Pareto optimal solutions for this problem. The low density of searched points (grey dots) is due to the highly constrained nature of this problem. However, it seems that

Fig. 21 Search history, obtained Pareto optimal front, and shape of some of the obtained Pareto optimal solutions by MODA
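The death-penalty constraint handling described for this case study can be sketched in a few lines. Here `simulate` and `feasible` are hypothetical stand-ins for the CFD evaluation and the constraint checks (thrust, speed, and so on), which the paper does not provide:

```python
def evaluate_with_death_penalty(design, simulate, feasible):
    """Return (efficiency, cavitation) for a design. Infeasible designs
    get a hopeless objective pair (zero efficiency, infinite cavitation)
    so they are dominated automatically at the next non-dominated sort,
    mirroring the death penalty described in the text."""
    if not feasible(design):
        return (0.0, float("inf"))
    return simulate(design)
```

The appeal of the death penalty is its simplicity: no penalty weights to tune. Its drawback, visible in the sparse grey search points of Fig. 21, is that infeasible regions contribute no gradient information to the swarm.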


the MODA algorithm successfully improved the initial random designs and determined a very accurate approximation of the true Pareto optimal front. The solutions are highly distributed along both objectives, which confirms the coverage of this algorithm in practice as well. Therefore, these results prove the convergence and coverage of the MODA algorithm in solving real problems with an unknown true Pareto optimal front. Since the propeller design problem is highly constrained, these results also evidence the merits of the proposed MODA algorithm in solving challenging constrained problems as well.

5 Conclusion

This paper proposed another SI algorithm inspired by the behaviour of dragonfly swarms in nature. Static and dynamic swarming behaviours of dragonflies were implemented to explore and exploit the search space, respectively. The algorithm was equipped with five parameters to control the cohesion, alignment, separation, attraction (towards food sources), and distraction (away from enemies) of individuals in the swarm. Suitable operators were integrated into the proposed DA algorithm for solving binary and multi-objective problems as well. A series of continuous, binary, and multi-objective test problems were employed to benchmark the performance of the DA, BDA, and MODA algorithms from different perspectives. The results proved that all of the proposed algorithms benefit from high exploration, which is due to the proposed static swarming behaviour of dragonflies. The convergence of the artificial dragonflies towards optimal solutions in continuous, binary, and multi-objective search spaces was also observed and confirmed, which is due to the dynamic swarming pattern modelled in this paper.

The paper also considered designing a real propeller for submarines using the proposed MODA algorithm, which is a challenging and highly constrained CFD problem. The results proved the effectiveness of the multi-objective version of DA in solving real problems with unknown search spaces. As per the findings of this comprehensive study, it can be concluded that the proposed algorithms are able to outperform the current well-known and powerful algorithms in the literature. Therefore, they are recommended to researchers from different fields as open-source optimization tools. The source codes of DA, BDA, and MODA are publicly available at http://www.alimirjalili.com/DA.html.

For future work, several research directions can be recommended. Hybridizing other algorithms with DA and integrating evolutionary operators into this algorithm are two possible research avenues. For the BDA algorithm, the effects of transfer functions on the performance of this algorithm are worth investigating. Applying other multi-objective optimization approaches (non-dominated sorting, for instance) to MODA will also be a valuable contribution. The DA, BDA, and MODA algorithms can all be tuned and employed to solve optimization problems in different fields as well.

Acknowledgments The author would like to thank Mehrdad Momeny for providing his outstanding dragonfly photo.

Appendix 1: Single-objective test problems utilized in this work

See Tables 10, 11, 12.

Table 10 Unimodal benchmark functions

Function | Dim | Range | Shift position | fmin
$\mathrm{TF1}(x) = \sum_{i=1}^{n} x_i^2$ | 10 | [-100, 100] | [-30, -30, …, -30] | 0
$\mathrm{TF2}(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 10 | [-10, 10] | [-3, -3, …, -3] | 0
$\mathrm{TF3}(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 10 | [-100, 100] | [-30, -30, …, -30] | 0
$\mathrm{TF4}(x) = \max_i \{ |x_i|, 1 \le i \le n \}$ | 10 | [-100, 100] | [-30, -30, …, -30] | 0
$\mathrm{TF5}(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 10 | [-30, 30] | [-15, -15, …, -15] | 0
$\mathrm{TF6}(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2$ | 10 | [-100, 100] | [-750, …, -750] | 0
$\mathrm{TF7}(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 10 | [-1.28, 1.28] | [-0.25, …, -0.25] | 0
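As a concrete reading of Table 10, TF1 (the sphere function) and TF5 (the Rosenbrock function) can be written directly from their formulas. The shifted wrapper below reflects the usual interpretation of the "Shift position" column (the function is evaluated at x minus the shift, moving the optimum to the shift position), which the table implies rather than states:

```python
def tf1(x):
    """TF1: sphere function, minimum 0 at the origin."""
    return sum(v * v for v in x)

def tf5(x):
    """TF5: Rosenbrock function, minimum 0 at (1, 1, ..., 1)."""
    return sum(
        100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
        for i in range(len(x) - 1)
    )

def shifted(f, shift):
    """Evaluate f with its optimum moved to the shift position."""
    return lambda x: f([v - s for v, s in zip(x, shift)])
```

For example, `shifted(tf1, [-30] * 10)` has its minimum at the point (-30, …, -30), matching the first row of the table.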


Table 11 Multimodal benchmark functions

Function | Dim | Range | Shift position | fmin
$\mathrm{TF8}(x) = \sum_{i=1}^{n} -x_i \sin\left( \sqrt{|x_i|} \right)$ | 10 | [-500, 500] | [-300, …, -300] | -418.9829 × 5
$\mathrm{TF9}(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right]$ | 10 | [-5.12, 5.12] | [-2, -2, …, -2] | 0
$\mathrm{TF10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) \right) + 20 + e$ | 10 | [-32, 32] | [-30, -30, …, -30] | 0
$\mathrm{TF11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | 10 | [-600, 600] | [-400, …, -400] | 0
$\mathrm{TF12}(x) = \frac{\pi}{n} \left\{ 10 \sin(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, with $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a < x_i < a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$ | 10 | [-50, 50] | [-30, -30, …, -30] | 0
$\mathrm{TF13}(x) = 0.1 \left\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + \sin^2(3 \pi x_i + 1) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2 \pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 10 | [-50, 50] | [-100, …, -100] | 0

Appendix 2: Multi-objective test problems utilized in this work

ZDT1:

Minimise: $f_1(x) = x_1$  (7.1)
Minimise: $f_2(x) = g(x) \times h(f_1(x), g(x))$  (7.2)
where: $g(x) = 1 + \frac{9}{N - 1} \sum_{i=2}^{N} x_i$  (7.3)
$h(f_1(x), g(x)) = 1 - \sqrt{\frac{f_1(x)}{g(x)}}, \quad 0 \le x_i \le 1, \ 1 \le i \le 30$  (7.4)

ZDT2:

Minimise: $f_1(x) = x_1$  (7.5)
Minimise: $f_2(x) = g(x) \times h(f_1(x), g(x))$  (7.6)
where: $g(x) = 1 + \frac{9}{N - 1} \sum_{i=2}^{N} x_i$  (7.7)
$h(f_1(x), g(x)) = 1 - \left( \frac{f_1(x)}{g(x)} \right)^2, \quad 0 \le x_i \le 1, \ 1 \le i \le 30$  (7.8)

ZDT3:

Minimise: $f_1(x) = x_1$  (7.9)
Minimise: $f_2(x) = g(x) \times h(f_1(x), g(x))$  (7.10)
where: $g(x) = 1 + \frac{9}{29} \sum_{i=2}^{N} x_i$  (7.11)
$h(f_1(x), g(x)) = 1 - \sqrt{\frac{f_1(x)}{g(x)}} - \frac{f_1(x)}{g(x)} \sin(10 \pi f_1(x)), \quad 0 \le x_i \le 1, \ 1 \le i \le 30$  (7.12)

ZDT1 with linear PF:

Minimise: $f_1(x) = x_1$  (7.13)
Minimise: $f_2(x) = g(x) \times h(f_1(x), g(x))$  (7.14)
where: $g(x) = 1 + \frac{9}{N - 1} \sum_{i=2}^{N} x_i$  (7.15)
$h(f_1(x), g(x)) = 1 - \frac{f_1(x)}{g(x)}, \quad 0 \le x_i \le 1, \ 1 \le i \le 30$  (7.16)

ZDT2 with three objectives:

Minimise: $f_1(x) = x_1$  (7.17)
Minimise: $f_2(x) = x_2$  (7.18)
Minimise: $f_3(x) = g(x) \times h(f_1(x), g(x)) \times h(f_2(x), g(x))$  (7.19)
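To make the definitions concrete, ZDT1 (Eqs. 7.1–7.4) can be written as a small function; on the true Pareto front (where g = 1) it reduces to $f_2 = 1 - \sqrt{f_1}$:

```python
import math

def zdt1(x):
    """ZDT1, Eqs. (7.1)-(7.4); x is a vector of 30 values in [0, 1]."""
    f1 = x[0]
    g = 1 + 9 * sum(x[1:]) / (len(x) - 1)  # Eq. (7.3)
    h = 1 - math.sqrt(f1 / g)              # Eq. (7.4)
    return f1, g * h                       # (f1, f2)
```

The other four problems differ only in h (and, for the tri-objective variant, in the number of position variables), so the same skeleton covers the whole appendix.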


Table 12 Composite benchmark functions (all functions: Dim = 10, Range [-5, 5], fmin = 0)

TF14 (CF1):
f1, f2, f3, …, f10 = Sphere function
[σ1, σ2, σ3, …, σ10] = [1, 1, 1, …, 1]
[λ1, λ2, λ3, …, λ10] = [5/100, 5/100, 5/100, …, 5/100]

TF15 (CF2):
f1, f2, f3, …, f10 = Griewank's function
[σ1, σ2, σ3, …, σ10] = [1, 1, 1, …, 1]
[λ1, λ2, λ3, …, λ10] = [5/100, 5/100, 5/100, …, 5/100]

TF16 (CF3):
f1, f2, f3, …, f10 = Griewank's function
[σ1, σ2, σ3, …, σ10] = [1, 1, 1, …, 1]
[λ1, λ2, λ3, …, λ10] = [1, 1, 1, …, 1]

TF17 (CF4):
f1, f2 = Ackley's function; f3, f4 = Rastrigin's function; f5, f6 = Weierstrass function; f7, f8 = Griewank's function; f9, f10 = Sphere function
[σ1, σ2, σ3, …, σ10] = [1, 1, 1, …, 1]
[λ1, λ2, λ3, …, λ10] = [5/32, 5/32, 1, 1, 5/0.5, 5/0.5, 5/100, 5/100, 5/100, 5/100]

TF18 (CF5):
f1, f2 = Rastrigin's function; f3, f4 = Weierstrass function; f5, f6 = Griewank's function; f7, f8 = Ackley's function; f9, f10 = Sphere function
[σ1, σ2, σ3, …, σ10] = [1, 1, 1, …, 1]
[λ1, λ2, λ3, …, λ10] = [1/5, 1/5, 5/0.5, 5/0.5, 5/100, 5/100, 5/32, 5/32, 5/100, 5/100]

TF19 (CF6):
f1, f2 = Rastrigin's function; f3, f4 = Weierstrass function; f5, f6 = Griewank's function; f7, f8 = Ackley's function; f9, f10 = Sphere function
[σ1, σ2, σ3, …, σ10] = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
[λ1, λ2, λ3, …, λ10] = [0.1 × 1/5, 0.2 × 1/5, 0.3 × 5/0.5, 0.4 × 5/0.5, 0.5 × 5/100, 0.6 × 5/100, 0.7 × 5/32, 0.8 × 5/32, 0.9 × 5/100, 1 × 5/100]

where: $g(x) = 1 + \frac{9}{N - 1} \sum_{i=3}^{N} x_i$  (7.20)
$h(f_1(x), g(x)) = 1 - \left( \frac{f_1(x)}{g(x)} \right)^2, \quad 0 \le x_i \le 1, \ 1 \le i \le 30$  (7.21)

References

1. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
2. Muro C, Escobedo R, Spector L, Coppinger R (2011) Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations. Behav Process 88:192–197
3. Jakobsen PJ, Birkeland K, Johnsen GH (1994) Swarm location in zooplankton as an anti-predator defence mechanism. Anim Behav 47:175–178
4. Higdon J, Corrsin S (1978) Induced drag of a bird flock. Am Nat 112(986):727–744
5. Goss S, Aron S, Deneubourg J-L, Pasteels JM (1989) Self-organized shortcuts in the Argentine ant. Naturwissenschaften 76:579–581
6. Beni G, Wang J (1993) Swarm intelligence in cellular robotic systems. In: Dario P, Sandini G, Aebischer P (eds) Robots and biological systems: towards a new bionics? NATO ASI series, vol 102. Springer, Berlin, Heidelberg, pp 703–712


7. Bonabeau E, Dorigo M, Theraulaz G (1999) Swarm intelligence: from natural to artificial systems. Oxford University Press, Oxford
8. Dorigo M, Stützle T (2003) The ant colony optimization meta-heuristic: algorithms, applications, and advances. In: Glover F, Kochenberger GA (eds) Handbook of metaheuristics. International series in operations research & management science, vol 57. Springer, USA, pp 250–285
9. Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. Syst Man Cybern Part B Cybern IEEE Trans 26:29–41
10. Colorni A, Dorigo M, Maniezzo V (1991) Distributed optimization by ant colonies. In: Proceedings of the first European conference on artificial life, pp 134–142
11. Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, pp 39–43
12. Eberhart RC, Shi Y (2001) Particle swarm optimization: developments, applications and resources. In: Proceedings of the 2001 congress on evolutionary computation, pp 81–86
13. Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. In: Technical report-tr06, Erciyes university, engineering faculty, computer engineering department
14. Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim 39:459–471
15. AlRashidi MR, El-Hawary ME (2009) A survey of particle swarm optimization applications in electric power systems. Evolut Comput IEEE Trans 13:913–918
16. Wei Y, Qiqiang L (2004) Survey on particle swarm optimization algorithm. Eng Sci 5:87–94
17. Chandra Mohan B, Baskaran R (2012) A survey: ant colony optimization based recent research and implementation on several engineering domain. Expert Syst Appl 39:4618–4627
18. Dorigo M, Stützle T (2010) Ant colony optimization: overview and recent advances. In: Gendreau M, Potvin J-Y (eds) Handbook of metaheuristics. International series in operations research & management science, vol 146. Springer, USA, pp 227–263
19. Karaboga D, Gorkemli B, Ozturk C, Karaboga N (2014) A comprehensive survey: artificial bee colony (ABC) algorithm and applications. Artif Intell Rev 42:21–57
20. Sonmez M (2011) Artificial Bee Colony algorithm for optimization of truss structures. Appl Soft Comput 11:2406–2418
21. Wang G, Guo L, Wang H, Duan H, Liu L, Li J (2014) Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Comput Appl 24:853–871
22. Wang G-G, Gandomi AH, Alavi AH (2014) Stud krill herd algorithm. Neurocomputing 128:363–370
23. Wang G-G, Gandomi AH, Alavi AH (2014) An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl Math Model 38:2454–2462
24. Wang G-G, Gandomi AH, Alavi AH, Hao G-S (2014) Hybrid krill herd algorithm with differential evolution for global numerical optimization. Neural Comput Appl 25:297–308
25. Wang G-G, Gandomi AH, Zhao X, Chu HCE (2014) Hybridizing harmony search algorithm with cuckoo search for global numerical optimization. Soft Comput. doi:10.1007/s00500-014-1502-7
26. Wang G-G, Guo L, Gandomi AH, Hao G-S, Wang H (2014) Chaotic krill herd algorithm. Inf Sci 274:17–34
27. Wang G-G, Lu M, Dong Y-Q, Zhao X-J (2015) Self-adaptive extreme learning machine. Neural Comput Appl. doi:10.1007/s00521-015-1874-3
28. Mirjalili S (2015) The ant lion optimizer. Adv Eng Softw 83:80–98
29. Mirjalili S, Mirjalili SM, Hatamlou A (2015) Multi-Verse Optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl. doi:10.1007/s00521-015-1870-7
30. Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. Evolut Comput IEEE Trans 1(1):67–82
31. Thorp JH, Rogers DC (2014) Thorp and Covich's freshwater invertebrates: ecology and general biology. Elsevier, Amsterdam
32. Wikelski M, Moskowitz D, Adelman JS, Cochran J, Wilcove DS, May ML (2006) Simple rules guide dragonfly migration. Biol Lett 2:325–329
33. Russell RW, May ML, Soltesz KL, Fitzpatrick JW (1998) Massive swarm migrations of dragonflies (Odonata) in eastern North America. Am Midl Nat 140:325–342
34. Reynolds CW (1987) Flocks, herds and schools: a distributed behavioral model. ACM SIGGRAPH Comput Gr 21:25–34
35. Yang X-S (2010) Nature-inspired metaheuristic algorithms, 2nd edn. Luniver Press
36. Cui Z, Shi Z (2009) Boid particle swarm optimisation. Int J Innov Comput Appl 2:77–85
37. Kadrovach BA, Lamont GB (2002) A particle swarm model for swarm-based networked sensor systems. In: Proceedings of the 2002 ACM symposium on applied computing, pp 918–924
38. Cui Z (2009) Alignment particle swarm optimization. In: Cognitive informatics, 2009. ICCI'09. 8th IEEE international conference on, pp 497–501
39. Mirjalili S, Lewis A (2013) S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evolut Comput 9:1–14
40. Saremi S, Mirjalili S, Lewis A (2014) How important is a transfer function in discrete heuristic algorithms. Neural Comput Appl:1–16
41. Mirjalili S, Wang G-G, Coelho LDS (2014) Binary optimization using hybrid particle swarm optimization and gravitational search algorithm. Neural Comput Appl 25:1423–1435
42. Mirjalili S, Lewis A (2015) Novel performance metrics for robust multi-objective optimization algorithms. Swarm Evolut Comput 21:1–23
43. Coello CAC (2009) Evolutionary multi-objective optimization: some current research trends and topics that remain to be explored. Front Comput Sci China 3:18–30
44. Ngatchou P, Zarei A, El-Sharkawi M (2005) Pareto multi objective optimization. In: Intelligent systems application to power systems, 2005. Proceedings of the 13th international conference on, pp 84–91
45. Branke J, Kaußler T, Schmeck H (2001) Guidance in evolutionary multi-objective optimization. Adv Eng Softw 32:499–507
46. Coello Coello CA, Lechuga MS (2002) MOPSO: a proposal for multiple objective particle swarm optimization. In: Evolutionary computation, 2002. CEC'02. Proceedings of the 2002 congress on, pp 1051–1056
47. Coello CAC, Pulido GT, Lechuga MS (2004) Handling multiple objectives with particle swarm optimization. Evolut Comput IEEE Trans 8:256–279
48. Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. Evolut Comput IEEE Trans 3:82–102
49. Digalakis J, Margaritis K (2001) On benchmarking functions for genetic algorithms. Int J Comput Math 77:481–506
50. Molga M, Smutnicki C (2005) Test functions for optimization needs. http://www.robertmarks.org/Classes/ENGR5358/Papers/functions.pdf
51. Yang X-S (2010) Test problems in optimization. arXiv preprint arXiv:1008.0549
52. Liang J, Suganthan P, Deb K (2005) Novel composition test functions for numerical global optimization. In: Swarm intelligence symposium, 2005. SIS 2005. Proceedings 2005 IEEE, pp 68–75
53. Suganthan PN, Hansen N, Liang JJ, Deb K, Chen Y, Auger A et al (2005) Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. In: KanGAL Report, vol 2005005


54. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Neural networks, 1995. Proceedings, IEEE international conference on, pp 1942–1948
55. Holland JH (1992) Adaptation in natural and artificial systems. MIT Press, Cambridge
56. Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evolut Comput 1:3–18
57. van den Bergh F, Engelbrecht A (2006) A study of particle swarm optimization particle trajectories. Inf Sci 176:937–971
58. Kennedy J, Eberhart RC (1997) A discrete binary version of the particle swarm algorithm. In: Systems, man, and cybernetics, 1997. Computational cybernetics and simulation, 1997 IEEE international conference on, pp 4104–4108
59. Rashedi E, Nezamabadi-Pour H, Saryazdi S (2010) BGSA: binary gravitational search algorithm. Nat Comput 9:727–745
60. Zitzler E, Deb K, Thiele L (2000) Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput 8:173–195
61. Sierra MR, Coello Coello CA (2005) Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. In: Coello Coello CA, Hernández Aguirre A, Zitzler E (eds) Evolutionary multi-criterion optimization. Lecture notes in computer science, vol 3410. Springer, Berlin, Heidelberg, pp 505–519
62. Van Veldhuizen DA, Lamont GB (1998) Multiobjective evolutionary algorithm research: a history and analysis (Final Draft). TR-98-03
63. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. Evolut Comput IEEE Trans 6:182–197
64. Carlton J (2012) Marine propellers and propulsion. Butterworth-Heinemann, Oxford

