
Hindawi Publishing Corporation

Applied Computational Intelligence and Soft Computing


Volume 2016, Article ID 7950348, 16 pages
http://dx.doi.org/10.1155/2016/7950348

Research Article
Modified Grey Wolf Optimizer for Global Engineering Optimization

Nitin Mittal,¹ Urvinder Singh,² and Balwinder Singh Sohi¹


¹Department of Electronics and Communication Engineering, Chandigarh University, Mohali, Punjab 140413, India
²Department of Electronics and Communication Engineering, Thapar University, Patiala, Punjab 147004, India

Correspondence should be addressed to Nitin Mittal; mittal.nitin84@gmail.com

Received 30 November 2015; Accepted 3 April 2016

Academic Editor: Samuel Huang

Copyright © 2016 Nitin Mittal et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Nature-inspired algorithms are becoming popular among researchers due to their simplicity and flexibility. Nature-inspired metaheuristic algorithms are analysed in terms of key features such as their diversity and adaptation, exploration and exploitation, and attraction and diffusion mechanisms. The success of, and challenges concerning, these algorithms rest on their parameter tuning and parameter control. A comparatively new algorithm motivated by the social hierarchy and hunting behavior of grey wolves is the Grey Wolf Optimizer (GWO), which has been very successful at solving real mechanical and optical engineering problems. In the original GWO, half of the iterations are devoted to exploration and the other half to exploitation, overlooking the impact of the right balance between the two on the accuracy of the approximated global optimum. To overcome this shortcoming, a modified GWO (mGWO) is proposed that focuses on a proper balance between exploration and exploitation, leading to optimal performance of the algorithm. Simulations based on benchmark problems and the WSN clustering problem demonstrate the effectiveness, efficiency, and stability of mGWO compared with the basic GWO and some well-known algorithms.

1. Introduction

Metaheuristic algorithms are powerful methods for solving many real-world engineering problems. The majority of these algorithms have been derived from the survival-of-the-fittest principle of evolutionary algorithms, the collective intelligence of swarm particles, the behavior of biologically inspired organisms, and/or the logic of physical processes in nature.

Evolutionary algorithms are those that mimic the evolutionary processes in nature; they are based on the survival of the fittest candidate for a given environment. These algorithms begin with a population (a set of solutions) which tries to survive in an environment (defined by a fitness evaluation). The parent population passes its properties of adaptation to the environment on to the children through various mechanisms of evolution such as genetic crossover and mutation. The process continues over a number of generations (an iterative process) until the solutions are found to be most suitable for the environment. Some of the evolutionary algorithms are the Genetic Algorithm (GA) [1], Evolution Strategies (ES) [2], Genetic Programming (GP) [3], Differential Evolution (DE) [4], and Biogeography-Based Optimization (BBO) [5–9].

The physical algorithms are inspired by physical processes such as the heating and cooling of materials (Simulated Annealing [10]), discrete cultural information treated as intermediate between genetic and cultural evolution (Memetic Algorithm [11]), the harmony of music played by musicians (Harmony Search [12, 13]), the cultural behavior of frogs (Shuffled Frog-Leaping Algorithm [14]), the Gravitational Search Algorithm [15], the Multiverse Optimizer (MVO) [16], and Chemical Reaction Optimization (CRO) [17].

Swarm intelligence is the group of natural metaheuristics inspired by the "collective intelligence" of swarms. The collective intelligence is built up through a population of homogeneous agents interacting with each other and with their environment. Examples of such intelligence are found among colonies of ants, flocks of birds, schools of fish, and so forth. Particle Swarm Optimization [18] is developed based on the swarm behavior of birds. The firefly algorithm [19] is formulated based on the flashing behavior of fireflies. The Bat Algorithm (BA) [20] is based on the echolocation behavior of bats.

Ant Colony Optimization (ACO) [21, 22] is inspired by the pheromone-trail-laying behavior of real ant colonies. A newer evolutionary optimization algorithm, the Cuckoo Search (CS) Algorithm [23], is inspired by the lifestyle of cuckoo birds. The major algorithms include Ant Colony Optimization (ACO) [21, 22], Particle Swarm Optimization (PSO) [18], the Artificial Bee Colony (ABC) Algorithm [24], the Fish Swarm Algorithm (FSA) [25], Glowworm Swarm Optimization (GSO) [26], the Grey Wolf Optimizer (GWO) [27], the Fruit Fly Optimization Algorithm (FFOA) [28], the Bat Algorithm (BA) [20], the Novel Bat Algorithm (NBA) [29], the Dragonfly Algorithm (DA) [30], Cat Swarm Optimization (CSO) [31], the Cuckoo Search (CS) Algorithm [23], the Cuckoo Optimization Algorithm (COA) [32], and the Spider Monkey Optimization (SMO) Algorithm [33].

The biologically inspired algorithms comprise natural metaheuristics derived from living phenomena and the behavior of biological organisms. The intelligence derived with bioinspired algorithms is decentralized, distributed, self-organizing, and adaptive under uncertain environments. The major algorithms in this field include Artificial Immune Systems (AIS) [34], Bacterial Foraging Optimization (BFO) [35], and the Krill Herd Algorithm [36].

Because of their inherent advantages, such algorithms can be applied to various applications including power systems operations and control, job scheduling, clustering and routing, batch process scheduling, image processing, and pattern recognition problems.

GWO is a recently developed heuristic inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature; it has been successfully applied to economic dispatch problems [37], feature subset selection [38], optimal design of double-layer grids [39], time series forecasting [40], the flow shop scheduling problem [41], the optimal power flow problem [42], and optimizing key values in cryptography algorithms [43]. A number of variants have also been proposed to improve the performance of the basic GWO, including binary GWO [44], a hybrid of GWO with PSO [45], an integration of DE with GWO [46], and parallelized GWO [47, 48].

Every optimization algorithm stated above needs to address the exploration and exploitation of a search space. To be successful, an optimization algorithm needs to establish a good ratio between exploration and exploitation. In this paper, a modified GWO (mGWO) is proposed to balance the exploration-exploitation trade-off in the original GWO algorithm. Functions with diverse slopes are employed to tune the parameters of the GWO algorithm, giving different exploration and exploitation combinations over the course of iterations. Increasing exploration relative to exploitation increases the convergence speed and helps avoid trapping in local minima.

The rest of the paper is organized as follows. Section 2 gives an overview of the original GWO. The proposed mGWO algorithm is explained in Section 3. The experimental results are presented in Section 4. Section 5 solves the clustering problem in WSN for cluster head selection to demonstrate the applicability of the proposed algorithm. Finally, Section 6 concludes the paper.

2. Overview of Grey Wolf Optimizer Algorithm

The Grey Wolf Optimizer (GWO) is a typical swarm-intelligence algorithm inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. Grey wolves are apex predators with an average group size of 5–12. In the hierarchy of GWO, the alpha (\(\alpha\)) is the most dominating member of the group. Subordinate to \(\alpha\) are the beta (\(\beta\)) and delta (\(\delta\)), which help to control the majority of the wolves in the hierarchy, the omegas (\(\omega\)). The \(\omega\) wolves have the lowest rank in the hierarchy.

The mathematical model of the hunting mechanism of grey wolves consists of the following:

(i) Tracking, chasing, and approaching the prey.
(ii) Pursuing, encircling, and harassing the prey until it stops moving.
(iii) Attacking the prey.

2.1. Encircling Prey. Grey wolves encircle the prey during the hunt, which can be written mathematically as [27]

\[
\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right|, \qquad \vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}, \tag{1}
\]

where \(t\) indicates the current iteration, \(\vec{A}\) and \(\vec{C}\) are coefficient vectors, \(\vec{X}_p\) is the position vector of the prey, and \(\vec{X}\) is the position vector of a grey wolf.

The vectors \(\vec{A}\) and \(\vec{C}\) are calculated as follows:

\[
\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}, \qquad \vec{C} = 2\vec{r}_2, \tag{2}
\]

where the components of \(\vec{a}\) are decreased linearly from 2 to 0 over the course of iterations and \(\vec{r}_1\) and \(\vec{r}_2\) are random vectors in \([0, 1]\).
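As a concrete illustration (our own sketch, not part of the original paper; NumPy-based, with hypothetical names), one encircling update per (1)-(2) for a single wolf could look like this:

```python
import numpy as np

def encircle_step(x_wolf, x_prey, a):
    """One encircling update, eqs. (1)-(2): returns the wolf's next position."""
    dim = x_wolf.shape[0]
    r1, r2 = np.random.rand(dim), np.random.rand(dim)  # random vectors in [0, 1]
    A = 2.0 * a * r1 - a                               # eq. (2): components of A lie in [-a, a]
    C = 2.0 * r2                                       # eq. (2): components of C lie in [0, 2]
    D = np.abs(C * x_prey - x_wolf)                    # eq. (1): distance term
    return x_prey - A * D                              # eq. (1): move relative to the prey
```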

2.2. Hunting. Hunting of the prey is usually guided by \(\alpha\) and \(\beta\), and \(\delta\) participates occasionally. The best candidate solutions, that is, \(\alpha\), \(\beta\), and \(\delta\), have better knowledge of the potential location of the prey. The other search agents (\(\omega\)) update their positions according to the positions of the three best search agents. The following formulas are proposed in this regard:

\[
\vec{D}_\alpha = \left| \vec{C}_1 \cdot \vec{X}_\alpha - \vec{X} \right|, \quad
\vec{D}_\beta = \left| \vec{C}_2 \cdot \vec{X}_\beta - \vec{X} \right|, \quad
\vec{D}_\delta = \left| \vec{C}_3 \cdot \vec{X}_\delta - \vec{X} \right|, \tag{3}
\]

\[
\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad
\vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad
\vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta, \tag{4}
\]

\[
\vec{X}(t+1) = \frac{\vec{X}_1(t) + \vec{X}_2(t) + \vec{X}_3(t)}{3}. \tag{5}
\]

2.3. Attacking Prey. To model the approach to the prey mathematically, we decrease the value of \(\vec{a}\); the fluctuation range of \(\vec{A}\) decreases with \(\vec{a}\) as well. \(\vec{A}\) is a random value in the interval \([-a, a]\), where \(a\) is decreased linearly from 2 to 0 over the course of iterations. When the random values of \(\vec{A}\) are in \([-1, 1]\), the next position of a search agent can be anywhere between its current position and the position of the prey; the condition \(|A| < 1\) thus forces the wolves to attack the prey. After the attack they search for the prey again in the next iteration, where they again determine the best solution \(\alpha\) among all wolves. This process repeats until the termination criterion is fulfilled.

3. Modified GWO Algorithm

Finding the global minimum is a common, challenging task among all minimization methods. In population-based optimization methods, the desirable way to converge towards the global minimum can generally be divided into two basic phases. In the early stages of the optimization, the individuals should be encouraged to scatter throughout the entire search space; that is, they should explore the whole search space instead of clustering around local minima. In the later stages, the individuals have to exploit the gathered information to converge on the global minimum. In GWO, by fine-adjusting the parameters \(a\) and \(A\), we can balance these two phases in order to find the global minimum with fast convergence speed.

Although different improvements of individual-based algorithms promote local optima avoidance, the literature shows that population-based algorithms handle this issue better. Regardless of the differences between population-based algorithms, the common approach is the division of the optimization process into two conflicting milestones: exploration versus exploitation. Exploration encourages candidate solutions to change abruptly and stochastically. This mechanism improves the diversity of the solutions and causes high exploration of the search space. In contrast, exploitation aims to improve the quality of the solutions by searching locally around the promising solutions obtained during exploration. In this milestone, candidate solutions are obliged to change less suddenly and to search locally.

Exploration and exploitation are two conflicting milestones, and promoting one degrades the other. A right balance between them can guarantee a very accurate approximation of the global optimum using population-based algorithms. On the one hand, mere exploration of the search space prevents an algorithm from finding an accurate approximation of the global optimum. On the other hand, mere exploitation results in local optima stagnation and, again, a low-quality approximation of the optimum.

In GWO, the transition between exploration and exploitation is generated by the adaptive values of \(a\) and \(A\): half of the iterations are devoted to exploration (\(|A| \ge 1\)) and the other half to exploitation (\(|A| < 1\)), as shown in Figure 1(a). Generally, higher exploration of the search space results in a lower probability of local optima stagnation. There are various possibilities to enhance the exploration rate, as shown in Figure 1(b), in which nonlinear functions are used instead of a linear function to decrease the value of \(a\) over the course of iterations. Too much exploration is akin to too much randomness and will probably not give good optimization results, while too much exploitation corresponds to too little randomness; therefore, there must be a balance between the two.

In GWO, the value of \(a\) decreases linearly from 2 to 0 using the update equation

\[
a = 2\left(1 - \frac{t}{T}\right), \tag{6}
\]

where \(T\) indicates the maximum number of iterations and \(t\) is the current iteration. Our mGWO instead employs a quadratic decay function for \(a\) over the course of iterations:

\[
a = 2\left(1 - \frac{t^2}{T^2}\right), \tag{7}
\]

as shown in Figure 1(c). Using this decay function, about 70% of the iterations are used for exploration and 30% for exploitation.

The pseudocode of mGWO is given in Algorithm 1.
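The effect of (6) versus (7) can also be seen numerically. The following short sketch (our own illustration, not from the paper) compares the share of iterations in which \(a \ge 1\), i.e., in which \(|A| \ge 1\) and exploration steps remain possible:

```python
import numpy as np

T = 3000                                  # maximum number of iterations
t = np.arange(T)
a_linear = 2.0 * (1.0 - t / T)            # eq. (6): standard GWO schedule
a_mgwo = 2.0 * (1.0 - t**2 / T**2)        # eq. (7): mGWO schedule

# Components of A lie in [-a, a], so |A| >= 1 can occur only while a >= 1.
print((a_linear >= 1).mean())  # 0.5   -> 50% of iterations favor exploration
print((a_mgwo >= 1).mean())    # ~0.71 -> about 70%, matching the 70/30 split
```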
Figure 1: (a) Updating the value of \(a\) for GWO; (b) some samples of possible functions for updating \(a\) over the course of iterations; and (c) updating the value of \(a\) over the course of iterations for mGWO. (Each panel plots the value of \(a\) against the iteration number, with the corresponding exploration and exploitation portions indicated.)

4. Results and Discussion

This section investigates the effectiveness of mGWO in practice. It is common in this field to benchmark the performance of algorithms on a set of mathematical functions with known global optima. We follow the same process and employ 27 benchmark functions for comparison. The test functions are divided into four groups: unimodal, multimodal, fixed-dimension multimodal, and composite benchmark functions. The unimodal functions (\(F_1\)–\(F_7\)) are suitable for benchmarking the exploitation of algorithms since they have one global optimum and no local optima. On the contrary, multimodal functions (\(F_8\)–\(F_{13}\)) have a large number of local optima and are helpful for examining the exploration and local optima avoidance of algorithms.

The mathematical formulation of the employed test functions is presented in Tables 1–4. We consider 30 variables for the unimodal and multimodal test functions to further increase their difficulty.

Since heuristic algorithms are stochastic optimization techniques, they have to be run more than 10 times to generate meaningful statistical results. It is a common strategy that an algorithm is run on a problem \(m\) times and the average, standard deviation, and median of the best solutions obtained in the last iteration are calculated as the metrics of performance. We follow the same method and report results over 30 independent runs. In order to verify the performance of the mGWO algorithm, the PSO, BA, CS, and GWO algorithms are chosen for comparison. Note that we utilized 30 search agents and 3000 iterations for each of the algorithms.

The convergence curves of the unimodal, multimodal, fixed-dimension multimodal, and composite benchmark functions for the competing optimization algorithms are given in Figures 2, 3, 4, and 5, respectively.

Initialize the search agent (grey wolf) population \(X_i\) (\(i = 1, 2, \ldots, n\))
Initialize \(a\), \(A\), and \(C\)
Calculate the fitness of each search agent
\(X_\alpha\) = the best (dominating) search agent
\(X_\beta\) = the second-best search agent
\(X_\delta\) = the third-best search agent
while (\(t <\) maximum number of iterations)
    for each search agent
        update the position of the current search agent by (5)
    end for
    update \(a\) by (7)
    update \(A\) and \(C\) by (2)
    calculate the fitness of all search agents
    update \(X_\alpha\), \(X_\beta\), and \(X_\delta\)
    \(t = t + 1\)
end while
return \(X_\alpha\)

Algorithm 1: Pseudocode of the mGWO algorithm.
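For concreteness, a compact Python/NumPy rendering of Algorithm 1 follows. This is a minimal sketch under our own naming conventions (the paper itself gives only pseudocode); the objective `f` is assumed to map a position vector to a scalar cost, and the clipping to the search bounds is our addition.

```python
import numpy as np

def mgwo(f, dim, lb, ub, n_agents=30, max_iter=3000):
    """Minimal mGWO sketch: position update of eqs. (3)-(5) with eq. (7) decay."""
    X = lb + (ub - lb) * np.random.rand(n_agents, dim)   # random initial population
    fitness = np.apply_along_axis(f, 1, X)
    order = np.argsort(fitness)
    alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()

    for t in range(max_iter):
        a = 2.0 * (1.0 - t**2 / max_iter**2)             # eq. (7): mGWO decay of a
        for i in range(n_agents):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):          # eqs. (3) and (4)
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                A = 2.0 * a * r1 - a                     # eq. (2)
                C = 2.0 * r2
                D = np.abs(C * leader - X[i])
                x_new += leader - A * D
            X[i] = np.clip(x_new / 3.0, lb, ub)          # eq. (5), kept inside bounds
        fitness = np.apply_along_axis(f, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()
    return alpha, f(alpha)

# Usage sketch: minimize the sphere function F1 in 30 dimensions.
best_x, best_f = mgwo(lambda x: float(np.sum(x**2)), dim=30,
                      lb=-100.0, ub=100.0, max_iter=500)
```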

Table 1: Unimodal benchmark functions.

Function | Dim | Range | \(f_{\min}\)
\(F_1(x) = \sum_{i=1}^{n} x_i^2\) | 30 | [−100, 100] | 0
\(F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|\) | 30 | [−10, 10] | 0
\(F_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2\) | 30 | [−100, 100] | 0
\(F_4(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}\) | 30 | [−100, 100] | 0
\(F_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]\) | 30 | [−30, 30] | 0
\(F_6(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2\) | 30 | [−100, 100] | 0
\(F_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}(0, 1)\) | 30 | [−1.28, 1.28] | 0

As Table 5 shows, the mGWO algorithm provides the best results on 5 out of 7 unimodal benchmark test functions, and it provides very competitive results compared to CS on \(F_5\) and \(F_6\). As discussed above, unimodal functions are suitable for benchmarking the exploitation of algorithms; these results therefore evidence the high exploitation capability of the mGWO algorithm.

The statistical results of the algorithms on the multimodal test functions are presented in Table 6. It may be seen that the mGWO algorithm strongly outperforms the other algorithms on \(F_9\), \(F_{10}\), \(F_{11}\), and \(F_{12}\); indeed, it outperforms the other algorithms on all of these multimodal test functions except PSO on \(F_{13}\). The results on the multimodal test functions strongly suggest that the high exploration of the mGWO algorithm is a suitable mechanism for avoiding local solutions. Since the multimodal functions have an exponential number of local solutions, the results show that the mGWO algorithm is able to explore the search space extensively and find promising regions of it. In addition, the high local optima avoidance of this algorithm is another finding that can be inferred from these results.

The rest of the results, which belong to \(F_{14}\)–\(F_{23}\) and \(F_{24}\)–\(F_{27}\), are provided in Tables 7 and 8, respectively. The results are consistent with those of the other test functions, with mGWO showing very competitive results compared to the other algorithms.

5. Cluster Head Selection in WSN Using mGWO

The cluster head (CH) selection problem is a well-known problem in the field of wireless sensor networks (WSNs) in which the energy consumption cost of the network should be minimized [49–53]. In this paper, this problem is solved using the mGWO algorithm and compared with GA, PSO, BA, CS, and GWO.

The main challenges in designing and planning the operations of WSNs are to optimize energy consumption and prolong network lifetime. Cluster-based routing techniques, such as the well-known low-energy adaptive clustering hierarchy (LEACH) [50], are used to achieve scalable solutions and extend the network lifetime until the last node dies (LND). To achieve a prolonged network lifetime in cluster-based routing techniques, the lifetime of the CHs plays an important role. Improper cluster formation may cause some CHs to be overloaded; such overload may cause high energy consumption at the CH and degrade the overall performance of the WSN. Therefore, proper CH selection is the most important issue for clustering sensor nodes. Designing an energy-efficient clustering algorithm is not an easy task, so nature-inspired optimization algorithms may be applied to tackle the cluster-based routing problem in WSN. Evolutionary algorithms (EAs) have been used in recent years as metaheuristics to address energy-aware routing challenges by designing intelligent models that collaborate to optimize an appropriate energy-aware objective function [52].
Table 2: Multimodal benchmark functions.

Function | Dim | Range | \(f_{\min}\)
\(F_8(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)\) | 30 | [−500, 500] | −418.9829 × 5
\(F_9(x) = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]\) | 30 | [−5.12, 5.12] | 0
\(F_{10}(x) = -20 \exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e\) | 30 | [−32, 32] | 0
\(F_{11}(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1\) | 30 | [−600, 600] | 0
\(F_{12}(x) = \frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)\), with \(y_i = 1 + \frac{x_i + 1}{4}\) and \(u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a < x_i < a \\ k(-x_i - a)^m, & x_i < -a \end{cases}\) | 30 | [−50, 50] | 0
\(F_{13}(x) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i - 1)^2\left[1 + \sin^2(3\pi x_i + 1)\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)\) | 30 | [−50, 50] | 0

Table 3: Fixed-dimension multimodal benchmark functions.

Function | Dim | Range | \(f_{\min}\)
\(F_{14}(x) = \left(\frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\right)^{-1}\) | 2 | [−65, 65] | 1
\(F_{15}(x) = \sum_{i=1}^{11}\left[a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}\right]^2\) | 4 | [−5, 5] | 0.00030
\(F_{16}(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4\) | 2 | [−5, 5] | −1.0316
\(F_{17}(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10\) | 2 | [−5, 5] | 0.398
\(F_{18}(x) = \left[1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right] \cdot \left[30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right]\) | 2 | [−2, 2] | 3
\(F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2\right)\) | 3 | [1, 3] | −3.86
\(F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2\right)\) | 6 | [0, 1] | −3.32
\(F_{21}(x) = -\sum_{i=1}^{5}\left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}\) | 4 | [0, 10] | −10.1532
\(F_{22}(x) = -\sum_{i=1}^{7}\left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}\) | 4 | [0, 10] | −10.4028
\(F_{23}(x) = -\sum_{i=1}^{10}\left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}\) | 4 | [0, 10] | −10.5363

GWO is one of the powerful heuristics that can be applied for efficient load-balanced clustering. In this paper, an mGWO-based clustering algorithm is used to solve the abovementioned load balancing problem. The algorithm forms clusters in such a way that the overall energy consumption of the network is minimized. The total energy consumption in the network is the sum of the total energy dissipated by the non-CHs to send information to their respective CHs and the total energy consumed by the CH nodes to aggregate the information and send it to the base station (BS).

Consider a WSN of \(n\) sensor nodes randomly deployed in the sensing field and organized into \(K\) clusters \(C_1, C_2, \ldots, C_K\). The fitness function for the energy consumption may be defined as

\[
f = \left( \sum_{i=1}^{K} \sum_{s \in C_i} E_{\mathrm{TX}_{s,\mathrm{CH}_i}} + E_{\mathrm{RX}} + E_{\mathrm{DA}} \right) + \sum_{i=1}^{K} E_{\mathrm{TX}_{\mathrm{CH}_i,\mathrm{BS}}}, \tag{8}
\]

where \(K\) is the total number of CHs, \(s \in C_i\) is a non-CH associated with the \(i\)th CH, and \(E_{\mathrm{TX}_{\mathrm{node}_1,\mathrm{node}_2}}\) is the energy dissipated for transmitting data from \(\mathrm{node}_1\) to \(\mathrm{node}_2\).

The radio energy cost of transmitting a \(k\)-bit message over a transmitter-receiver separation distance \(d\) is given by

\[
E_{\mathrm{TX}} = \begin{cases} kE_{\mathrm{elec}} + k\varepsilon_{\mathrm{friss\,amp}}\, d^2, & \text{if } d < d_0 \\ kE_{\mathrm{elec}} + k\varepsilon_{\mathrm{two\,ray\,amp}}\, d^4, & \text{if } d \ge d_0. \end{cases} \tag{9}
\]

The term \(E_{\mathrm{elec}}\) denotes the per-bit energy dissipation during transmission. The per-bit amplification energy is proportional to \(d^4\) when the transmission distance exceeds the threshold \(d_0\) (called the crossover distance) and is otherwise proportional to \(d^2\). The parameters \(\varepsilon_{\mathrm{friss\,amp}}\) and \(\varepsilon_{\mathrm{two\,ray\,amp}}\) denote the transmitter amplification parameters for the free-space and multipath fading models, respectively. The value of \(d_0\) is given by

\[
d_0 = \sqrt{\frac{\varepsilon_{\mathrm{friss\,amp}}}{\varepsilon_{\mathrm{two\,ray\,amp}}}}. \tag{10}
\]

The reception energy of the \(k\)-bit data message can be expressed by

\[
E_{\mathrm{RX}} = kE_{\mathrm{elec}}, \tag{11}
\]

where \(E_{\mathrm{elec}}\) denotes the per-bit energy dissipation during reception.

\(E_{\mathrm{DA}}\) is the data aggregation energy expenditure and is set to \(E_{\mathrm{DA}} = 5\) nJ/bit. The values of the other parameters are set to \(E_{\mathrm{elec}} = 50\) nJ/bit, \(\varepsilon_{\mathrm{friss\,amp}} = 100\) pJ/bit/m², and \(\varepsilon_{\mathrm{two\,ray\,amp}} = 0.0013\) pJ/bit/m⁴, respectively [51].

For the simulation setup, 100 nodes are randomly deployed in a 100 m × 100 m sensing field. The BS is placed at the center of the field. The initial energy of all homogeneous nodes is set to \(E_0 = 1\) J. During this analysis, three parameters, namely, first node dead (FND), half nodes dead (HND), and last node dead (LND), are employed to outline the network lifetime.
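To make the cost model concrete, the following sketch (our own illustration; the parameter values follow the paper, while the message size `k` and the distance inputs are hypothetical) evaluates (8)-(11) for a given clustering:

```python
import numpy as np

E_ELEC = 50e-9        # J/bit, per-bit electronics energy
EPS_FS = 100e-12      # J/bit/m^2, free-space amplifier (eps_friss_amp)
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier (eps_two_ray_amp)
E_DA = 5e-9           # J/bit, data aggregation energy
D0 = np.sqrt(EPS_FS / EPS_MP)   # eq. (10): crossover distance (~277 m here)

def e_tx(k, d):
    """Eq. (9): energy to transmit a k-bit message over distance d."""
    if d < D0:
        return k * E_ELEC + k * EPS_FS * d**2
    return k * E_ELEC + k * EPS_MP * d**4

def e_rx(k):
    """Eq. (11): energy to receive a k-bit message."""
    return k * E_ELEC

def fitness(dists_to_ch, dists_ch_to_bs, k=4000):
    """Eq. (8): total energy for one round.
    dists_to_ch[i]    -- array of member-to-CH distances for cluster i
    dists_ch_to_bs[i] -- distance from CH i to the base station"""
    total = 0.0
    for i, d_members in enumerate(dists_to_ch):
        for d in d_members:                    # non-CH -> CH transmissions
            total += e_tx(k, d) + e_rx(k) + k * E_DA
        total += e_tx(k, dists_ch_to_bs[i])    # aggregated CH -> BS transmission
    return total
```

A CH-selection metaheuristic such as mGWO would then search over candidate CH assignments, using this fitness as the minimization objective.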

Table 4: Composite benchmark functions. Each composite function has Dim = 10, Range [−5, 5], and \(f_{\min} = 0\).

\(F_{24}\) (CF1): \(f_1, f_2\) = Rastrigin's function; \(f_3, f_4\) = Weierstrass's function; \(f_5, f_6\) = Griewank's function; \(f_7, f_8\) = Ackley's function; \(f_9, f_{10}\) = sphere function; \([\sigma_1, \sigma_2, \ldots, \sigma_{10}] = [1, 1, \ldots, 1]\); \([\lambda_1, \lambda_2, \ldots, \lambda_{10}] = [1/5, 1/5, 5/0.5, 5/0.5, 5/100, 5/100, 5/32, 5/32, 5/100, 5/100]\).

\(F_{25}\) (CF2): \(f_1, f_2\) = Ackley's function; \(f_3, f_4\) = Rastrigin's function; \(f_5, f_6\) = sphere function; \(f_7, f_8\) = Weierstrass's function; \(f_9, f_{10}\) = Griewank's function; \([\sigma_1, \ldots, \sigma_{10}] = [1, 2, 1.5, 1.5, 1, 1, 1.5, 1.5, 2, 2]\); \([\lambda_1, \ldots, \lambda_{10}] = [2 \cdot 5/32, 5/32, 2 \cdot 1, 1, 2 \cdot 5/100, 5/100, 2 \cdot 10, 10, 2 \cdot 5/60, 5/60]\).

\(F_{26}\) (CF3): \(f_1, f_2\) = expanded Schaffer Rosenbrock's function; \(f_3, f_4\) = Rastrigin's function; \(f_5, f_6\) = expanded Griewank's and Rosenbrock's function; \(f_7, f_8\) = Weierstrass's function; \(f_9, f_{10}\) = Griewank's function; \([\sigma_1, \ldots, \sigma_{10}] = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]\); \([\lambda_1, \ldots, \lambda_{10}] = [5 \cdot 5/100, 5/100, 5 \cdot 1, 1, 5 \cdot 1, 1, 5 \cdot 10, 10, 5 \cdot 5/200, 5/200]\).

\(F_{27}\) (CF4): \(f_1\) = Weierstrass's function; \(f_2\) = expanded Schaffer Rosenbrock's function; \(f_3\) = expanded Griewank's and Rosenbrock's function; \(f_4\) = Ackley's function; \(f_5\) = Rastrigin's function; \(f_6\) = Griewank's function; \(f_7\) = expanded Schaffer Rosenbrock's noncontinuous function; \(f_8\) = Rastrigin's noncontinuous function; \(f_9\) = elliptic function; \(f_{10}\) = sphere-with-noise function; \([\sigma_1, \ldots, \sigma_{10}] = [2, 2, \ldots, 2]\); \([\lambda_1, \ldots, \lambda_{10}] = [10, 5/20, 1, 5/32, 1, 5/100, 5/50, 1, 5/100, 5/100]\).

Table 5: Results of unimodal benchmark functions (average and standard deviation over 30 runs).

F | PSO Avg. / Std. | BA Avg. / Std. | CS Avg. / Std. | GWO Avg. / Std. | mGWO Avg. / Std.
F1 | 5.31E−11 / 9.76E−11 | 22546.833 / 6668.3781 | 2.89E−16 / 3.33E−16 | 2.65E−155 / 1.18E−154 | 6.44E−205 / 0.00E+00
F2 | 5.67E−07 / 1.26E−06 | 2864.8141 / 9441.2388 | 5.92E−09 / 5.43E−09 | 7.23E−92 / 8.41E−92 | 3.34E−119 / 4.95E−119
F3 | 3228.8636 / 1932.4945 | 61977.294 / 26054.622 | 1.0408052 / 0.9448328 | 1.78E−42 / 4.14E−42 | 2.74E−52 / 1.19E−51
F4 | 40.759779 / 16.786821 | 56.421446 / 10.918884 | 10.803036 / 3.0246624 | 5.86E−37 / 2.57E−36 | 1.20E−51 / 4.25E−51
F5 | 62.716929 / 49.490404 | 18305153 / 12467683 | 22.524975 / 15.539303 | 26.799858 / 0.5767914 | 26.900044 / 0.852116
F6 | 2.89E−11 / 4.27E−11 | 19775.038 / 7361.6353 | 4.99E−16 / 1.30E−15 | 0.7070627 / 0.2983174 | 0.7862954 / 0.2449287
F7 | 0.0542038 / 0.0175609 | 5.0503673 / 1.6183878 | 0.0466985 / 0.0214709 | 0.0003805 / 0.0002072 | 0.0002609 / 0.000176
Figure 2: Convergence graphs of the unimodal benchmark functions (\(F_1\)–\(F_7\)): best score obtained so far versus iteration (3000 iterations) for PSO, BA, CS, GWO, and mGWO.

Table 6: Results of multimodal benchmark functions (average and standard deviation over 30 runs).

F | PSO Avg. / Std. | BA Avg. / Std. | CS Avg. / Std. | GWO Avg. / Std. | mGWO Avg. / Std.
F8 | — / — | — / — | −9958.7573 / 382.10737 | −5697.6258 / 1036.3742 | −5712.462 / 924.47382
F9 | 65.433009 / 14.197664 | 96.065525 / 32.511237 | 33.785055 / 6.9716023 | 0 / 0 | 0 / 0
F10 | 10.769295 / 9.1708499 | 17.231924 / 1.1118681 | 0.9966469 / 0.4914999 | 8.88E−15 / 2.27E−15 | 7.82E−15 / 7.94E−16
F11 | 0.0173265 / 0.0205398 | 237.38699 / 81.345844 | 0.0008613 / 0.0038519 | 0.0025395 / 0.0066692 | 0 / 0
F12 | 0.3185579 / 0.5824145 | 18644754 / 18385611 | 0.1635701 / 0.4113459 | 0.0478581 / 0.0208711 | 0.0469178 / 0.0216789
F13 | 0.020326 / 0.0488385 | 76481054 / 53670684 | 0.3009574 / 1.1566959 | 0.6303231 / 0.1787968 | 0.6051939 / 0.1740609

Table 7: Results of fixed-dimension multimodal functions (average and standard deviation over 30 runs).

F | PSO Avg. / Std. | BA Avg. / Std. | CS Avg. / Std. | GWO Avg. / Std. | mGWO Avg. / Std.
F14 | 0.9980038 / 0 | 9.7544907 / 7.0552637 | 0.9980038 / 0 | 4.1298947 / 4.2619933 | 4.8165637 / 4.3097562
F15 | 0.0011042 / 0.0003066 | 0.0032289 / 0.005698 | 0.0003075 / 1.11E−19 | 0.0044241 / 0.0081813 | 0.007327 / 0.0098145
F16 | −1.0316285 / 2.28E−16 | −0.9908202 / 0.1825 | −1.0316285 / 2.28E−16 | −1.0316285 / 1.75E−09 | −1.0316284 / 4.86E−09
F17 | 0.3978874 / 0 | 0.3978874 / 1.34E−10 | 0.3978874 / 0 | 0.3978978 / 4.63E−05 | 0.3978876 / 5.08E−07
F18 | 3 / 8.15E−16 | 12.45 / 20.119315 | 3 / 1.39E−15 | 7.0500022 / 18.11215 | 3.0000017 / 2.87E−06
F19 | −3.8627821 / 2.24E−15 | −3.8627821 / 1.72E−08 | −3.8627821 / 2.28E−15 | −3.862402 / 0.0016777 | −3.8611515 / 0.00319
F20 | −3.2625486 / 0.0609909 | −3.2566017 / 0.0606853 | −3.3219952 / 5.19E−16 | −3.1770991 / 0.26433 | −3.2734635 / 0.0609917
F21 | −6.5146454 / 3.1931088 | −5.7607751 / 3.1257537 | −10.1532 / 3.65E−15 | −9.6433311 / 1.569109 | −9.2742029 / 2.1934501
F22 | −7.9202597 / 3.4833672 | −5.2362723 / 2.7999111 | −10.402941 / 1.95E−15 | −10.402879 / 4.66E−05 | −10.136952 / 1.1884781
F23 | −7.8374538 / 3.4395208 | −5.9516938 / 3.8940879 | −10.53641 / 1.95E−15 | −9.724888 / 2.4976263 | −10.536237 / 0.0001251

Table 9 shows the best results obtained for the CH selection problem in WSN. The results in Table 9 show that the mGWO algorithm is able to find the best results compared to the other algorithms, closely followed by the CS and GWO algorithms.
Figure 3: Convergence graphs of the multimodal benchmark functions (\(F_8\)–\(F_{13}\)): best score obtained so far versus iteration for PSO, BA, CS, GWO, and mGWO.


Figure 4: Convergence graphs of the fixed-dimension multimodal functions (\(F_{14}\), \(F_{15}\), \(F_{17}\)–\(F_{23}\)): best score obtained so far versus iteration for PSO, BA, CS, GWO, and mGWO.

Table 8: Results of composite functions (average and standard deviation over 30 runs).

F | PSO Avg. / Std. | BA Avg. / Std. | CS Avg. / Std. | GWO Avg. / Std. | mGWO Avg. / Std.
F24 | 375.1142 / 149.1547 | 774.141 / 269.3536 | 413.6706 / 177.65048 | 337.9168 / 173.3873 | 371.8084 / 194.0954
F25 | 167.0097 / 31.86513 | 490.7166 / 180.3477 | 132.158 / 24.5817 | 133.9568 / 23.17741 | 132.3655 / 25.86431
F26 | 955.6119 / 280.8871 | 1476.118 / 117.9418 | 915.0283 / 262.3901 | 943.7718 / 277.9337 | 911.9168 / 319.8876
F27 | 437.6602 / 183.1154 | 1328.464 / 177.0158 | 395 / 267.08204 | 559.3846 / 301.082 | 390.904 / 279.606
Figure 5: Convergence graphs of the composite functions (\(F_{24}\)–\(F_{27}\)): best score obtained so far versus iteration for PSO, BA, CS, GWO, and mGWO.

Table 9: Comparison results of the CH selection problem in WSN.

Algorithm | Optimal cost of FF | FND | HND | LND
GA | 39.798 mJ | 2047.1 | 2678.2 | 3337.5
PSO | 39.854 mJ | 2034.8 | 2665.2 | 3324.6
BA | 40.324 mJ | 1996.7 | 2654.8 | 3298.9
CS | 38.412 mJ | 2079.2 | 2702.5 | 3372.4
GWO | 38.560 mJ | 2073.9 | 2696.3 | 3369.8
mGWO | 38.209 mJ | 2084.6 | 2714.4 | 3382.2

6. Conclusion

This paper proposed a modification of the Grey Wolf Optimizer named mGWO, inspired by the hunting behavior of grey wolves in nature. A quadratic decay function is used to balance exploration and exploitation in the search space over the course of iterations. The results proved that the proposed algorithm benefits from higher exploration compared with the standard GWO.

The paper also considered the clustering problem in WSN, in which the CH selection, a challenging and NP-hard problem, is performed using the proposed mGWO algorithm. The results show that the proposed method is very effective for real-world applications due to its fast convergence and fewer chances of getting stuck at local minima. It can be concluded that the proposed algorithm is able to outperform current well-known and powerful algorithms in the literature. The results prove the competence and superiority of mGWO over existing metaheuristic algorithms and its ability to become an effective tool for solving real-world optimization problems.

Competing Interests

The authors declare that they have no competing interests.

References

[1] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
[2] T. Back, F. Hoffmeister, and H. P. Schwefel, "A survey of evolution strategies," in Proceedings of the 4th International Conference on Genetic Algorithms, San Diego, Calif, USA, July 1991.
[3] J. R. Koza, Genetic Programming: On the Programming of Computers by Natural Selection, MIT Press, Cambridge, UK, 1992.
[4] R. Storn and K. Price, "Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[5] D. Simon, "Biogeography-based optimization," IEEE Transactions on Evolutionary Computation, vol. 12, no. 6, pp. 702–713, 2008.
[6] W. Gong, Z. Cai, C. X. Ling, and H. Li, "A real-coded biogeography-based optimization with mutation," Applied Mathematics and Computation, vol. 216, no. 9, pp. 2749–2758, 2010.
[7] W. Gong, Z. Cai, and C. X. Ling, "DE/BBO: a hybrid differential evolution with biogeography-based optimization for global numerical optimization," Soft Computing, vol. 15, no. 4, pp. 645–665, 2011.
[8] H. Ma and D. Simon, "Blended biogeography-based optimization for constrained optimization," Engineering Applications of Artificial Intelligence, vol. 24, no. 3, pp. 517–525, 2011.
[9] U. Singh and T. S. Kamal, "Design of non-uniform circular antenna arrays using biogeography-based optimisation," IET Microwaves, Antennas and Propagation, vol. 5, no. 11, pp. 1365–1370, 2011.
[10] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[11] P. Moscato, "On evolution, search, optimization, genetic algorithms and martial arts: towards Memetic Algorithms," Caltech Concurrent Computation Program Report 826, 1989.
[12] Z. W. Geem, J. H. Kim, and G. V. Loganathan, "A new heuristic optimization algorithm: harmony search," Simulation, vol. 76, no. 2, pp. 60–68, 2001.
[13] K. S. Lee and Z. W. Geem, "A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice," Computer Methods in Applied Mechanics and Engineering, vol. 194, no. 36–38, pp. 3902–3933, 2005.
[14] M. Eusuff, K. Lansey, and F. Pasha, "Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization," Engineering Optimization, vol. 38, no. 2, pp. 129–154, 2006.
[15] E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, "GSA: a gravitational search algorithm," Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009.
[16] S. Mirjalili, S. M. Mirjalili, and A. Hatamlou, "Multi-verse optimizer: a nature-inspired algorithm for global optimization," Neural Computing & Applications, vol. 27, no. 2, pp. 495–513, 2016.
[17] A. Y. S. Lam and V. O. K. Li, "Chemical-reaction-inspired metaheuristic for optimization," IEEE Transactions on Evolutionary Computation, vol. 14, no. 3, pp. 381–399, 2010.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, Perth, Australia, December 1995.
[19] X.-S. Yang, "Firefly algorithm, stochastic test functions and design optimization," International Journal of Bio-Inspired Computation, vol. 2, no. 2, pp. 78–84, 2010.
[20] X.-S. Yang and A. H. Gandomi, "Bat algorithm: a novel approach for global engineering optimization," Engineering Computations, vol. 29, no. 5, pp. 464–483, 2012.
[21] M. Dorigo, V. Maniezzo, and A. Colorni, "Ant system: optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 26, no. 1, pp. 29–41, 1996.
[22] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.
[23] X.-S. Yang and S. Deb, "Engineering optimisation by cuckoo search," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.
[24] D. Karaboga and B. Akay, "A comparative study of artificial bee colony algorithm," Applied Mathematics and Computation, vol. 214, no. 1, pp. 108–132, 2009.
[25] X. Li, Z. Shao, and J. Qian, "An optimizing method based on autonomous animates: fish swarm algorithm," Systems Engineering—Theory & Practice, vol. 22, pp. 32–38, 2002.
[26] K. Krishnanand and D. Ghose, "Glowworm swarm optimisation: a new method for optimising multi-modal functions," International Journal of Computational Intelligence Studies, vol. 1, no. 1, pp. 93–119, 2009.
[27] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46–61, 2014.
[28] W.-T. Pan, "A new fruit fly optimization algorithm: taking the financial distress model as an example," Knowledge-Based Systems, vol. 26, pp. 69–74, 2012.
[29] X.-B. Meng, X. Z. Gao, Y. Liu, and H. Zhang, "A novel bat algorithm with habitat selection and Doppler effect in echoes for optimization," Expert Systems with Applications, vol. 42, no. 17-18, pp. 6350–6364, 2015.
[30] S. Mirjalili, "Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems," Neural Computing & Applications, 2015.
[31] S. C. Chu and P. W. Tsai, "Computational intelligence based on the behaviour of cats," International Journal of Innovative Computing, Information and Control, vol. 3, pp. 163–173, 2007.
[32] R. Rajabioun, "Cuckoo optimization algorithm," Applied Soft Computing Journal, vol. 11, no. 8, pp. 5508–5518, 2011.
[33] J. C. Bansal, H. Sharma, S. S. Jadon, and M. Clerc, "Spider Monkey Optimization algorithm for numerical optimization," Memetic Computing, vol. 6, no. 1, pp. 31–47, 2014.
[34] D. Dasgupta, Artificial Immune Systems and Their Applications, Springer, 1999.
[35] S. Das, A. Biswas, S. Dasgupta, and A. Abraham, "Bacterial foraging optimization algorithm: theoretical foundations, analysis, and applications," Studies in Computational Intelligence, vol. 203, pp. 23–55, 2009.
[36] A. H. Gandomi and A. H. Alavi, "Krill herd: a new bio-inspired optimization algorithm," Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 12, pp. 4831–4845, 2012.
[37] V. K. Kamboj, S. K. Bath, and J. S. Dhillon, "Solution of non-convex economic load dispatch problem using Grey Wolf Optimizer," Neural Computing and Applications, 2015.

[38] E. Emary, H. M. Zawbaa, C. Grosan, and A. E. Hassenian, "Feature subset selection approach by gray-wolf optimization,"
in Afro-European Conference for Industrial Advancement, vol.
334 of Advances in Intelligent Systems and Computing, Springer,
2015.
[39] S. Gholizadeh, “Optimal design of double layer grids consid-
ering nonlinear behaviour by sequential grey wolf algorithm,”
Journal of Optimization in Civil Engineering, vol. 5, no. 4, pp.
511–523, 2015.
[40] Y. Yusof and Z. Mustaffa, “Time series forecasting of energy
commodity using grey wolf optimizer,” in Proceedings of the
International Multi Conference of Engineers and Computer
Scientists (IMECS ’15), vol. 1, Hong Kong, March 2015.
[41] G. M. Komaki and V. Kayvanfar, “Grey wolf optimizer algo-
rithm for the two-stage assembly flow shop scheduling problem
with release time,” Journal of Computational Science, vol. 8, pp.
109–120, 2015.
[42] A. A. El-Fergany and H. M. Hasanien, “Single and multi-
objective optimal power flow using grey wolf optimizer and
differential evolution algorithms,” Electric Power Components
and Systems, vol. 43, no. 13, pp. 1548–1559, 2015.
[43] K. Shankar and P. Eswaran, “A secure visual secret share (VSS)
creation scheme in visual cryptography using elliptic curve
cryptography with optimization technique,” Australian Journal
of Basic & Applied Science, vol. 9, no. 36, pp. 150–163, 2015.
[44] E. Emary, H. M. Zawbaa, and A. E. Hassanien, “Binary grey
wolf optimization approaches for feature selection,” Neurocom-
puting, vol. 172, pp. 371–381, 2016.
[45] V. K. Kamboj, "A novel hybrid PSO-GWO approach for unit commitment problem," Neural Computing and Applications, 2015.
[46] A. Zhu, C. Xu, Z. Li, J. Wu, and Z. Liu, “Hybridizing grey Wolf
optimization with differential evolution for global optimization
and test scheduling for 3D stacked SoC,” Journal of Systems
Engineering and Electronics, vol. 26, no. 2, pp. 317–328, 2015.
[47] T.-S. Pan, T.-K. Dao, T.-T. Nguyen, and S.-C. Chu, “A communi-
cation strategy for paralleling grey wolf optimizer,” Advances in
Intelligent Systems and Computing, vol. 388, pp. 253–262, 2015.
[48] J. Jayapriya and M. Arock, “A parallel GWO technique for
aligning multiple molecular sequences,” in Proceedings of the
International Conference on Advances in Computing, Communi-
cations and Informatics (ICACCI ’15), pp. 210–215, IEEE, Kochi,
India, August 2015.
[49] M. M. Afsar and M.-H. Tayarani-N, “Clustering in sensor
networks: a literature survey,” Journal of Network and Computer
Applications, vol. 46, pp. 198–226, 2014.
[50] W. B. Heinzelman, A. Chandrakasan, and H. Balakrish-
nan, “Energy-efficient communication protocol for wireless
microsensor networks,” in Proceedings of the 33rd Annual
Hawaii International Conference on System Siences (HICSS ’00),
p. 223, IEEE, January 2000.
[51] N. Mittal and U. Singh, “Distance-based residual energy-
efficient stable election protocol for WSNs,” Arabian Journal for
Science and Engineering, vol. 40, no. 6, pp. 1637–1646, 2015.
[52] E. A. Khalil and B. A. Attea, “Energy-aware evolutionary
routing protocol for dynamic clustering of wireless sensor
networks,” Swarm and Evolutionary Computation, vol. 1, no. 4,
pp. 195–203, 2011.
[53] B. A. Attea and E. A. Khalil, “A new evolutionary based routing
protocol for clustered heterogeneous wireless sensor networks,”
Applied Soft Computing, vol. 12, no. 7, pp. 1950–1957, 2012.