Research article
1 School of Information Engineering, Sanming University, Sanming 365004, Fujian, China
2 School of Computer Science and Technology, Hainan University, Haikou 570228, Hainan, China
Abstract: This paper introduces an improved hybrid Aquila Optimizer (AO) and Harris Hawks
Optimization (HHO) algorithm, namely IHAOHHO, to enhance the searching performance for global
optimization problems. In the IHAOHHO, the valuable exploration and exploitation capabilities of AO
and HHO are first retained, and then representative-based hunting (RH) and opposition-based
learning (OBL) strategies are added in the exploration and exploitation phases to effectively improve
the diversity of search space and local optima avoidance capability of the algorithm, respectively. To
verify the optimization performance and the practicability, the proposed algorithm is comprehensively
analyzed on standard and CEC2017 benchmark functions and three engineering design problems. The
experimental results show that the proposed IHAOHHO achieves superior global search performance
and faster convergence speed compared to the basic AO and HHO and several selected state-of-the-art
meta-heuristic algorithms.
1. Introduction
Meta-heuristic optimization algorithms have developed rapidly [1–3] because of their simple concepts,
flexibility and ability to avoid local optima, and they have been widely used to solve various complex
real-world optimization problems [4,5]. According to their different sources of inspiration,
meta-heuristics can be divided into three main categories: evolutionary, physics-based and swarm
intelligence based techniques. The inspirations of evolutionary algorithms are the laws of evolution in
nature. There are some representative evolutionary algorithms such as Genetic Algorithm (GA) [6],
Differential Evolution Algorithm (DE) [7], Evolution Strategy (ES) [8], Biogeography-Based
Optimizer (BBO) [9] and Probability-Based Incremental Learning (PBIL) [10]. Inspired by the
physical rules of the universe, physics-based techniques include Simulated Annealing (SA) [11],
Gravity Search Algorithm (GSA) [12], Black Hole Algorithm (BH) [13], Multi-Verse Optimizer
(MVO) [14], Sine Cosine Algorithm (SCA) [15], Arithmetic Optimization Algorithm (AOA) [16],
Heat Transfer Relation-based Optimization Algorithm (HTOA) [17] and so forth. Swarm intelligence
(SI) based methods belong to the most popular category, which are inspired by swarm behaviors of
creatures in nature. The representative SI algorithms include Particle Swarm Optimization (PSO) [18],
Ant Colony Optimization Algorithm (ACO) [19], Firefly Algorithm (FA) [20], Grey Wolf Optimizer
(GWO) [21], Cuckoo Search Algorithm (CS) [22], Whale Optimization Algorithm (WOA) [23], Salp
Swarm Algorithm (SSA) [24], Remora Optimization Algorithm [25], Slime Mould Algorithm
(SMA) [26], and Horse herd Optimization Algorithm (HOA) [27].
The Aquila Optimizer (AO) [28] and Harris Hawks Optimization (HHO) [29] are both recent SI
algorithms that simulate the hunting behaviors of Aquila and Harris' hawks, respectively. Because AO
was proposed only recently, there is no research on improving AO yet, but AO has already been applied
to real-world optimization problems. AlRassas et al. [30] applied AO to optimize the parameters of an
Adaptive Neuro-Fuzzy Inference System (ANFIS) model to boost the prediction accuracy of oil
production forecasting. This research reveals the good practical performance of AO. On the other hand,
once the HHO was proposed, it attracted a large number of researchers to improve or apply it to solve
optimization problems in many fields. Chen et al. [31] proposed the first powerful variant of HHO by
integrating chaos, topological multi-population, and differential evolution (DE) strategies. Chaos
mechanism is for exploitation, multi-population strategy is for global search ability, and the DE
mechanism is for increasing the accuracy of the solutions. Inspired by the survival-of-the-fittest
principle of evolutionary algorithms, Al-Betar et al. [32] proposed three new versions of HHO
incorporating tournament, proportional and linear rank-based strategies, respectively, to accelerate
convergence. The proposed new versions show a better balance between the exploration and
exploitation and enhance local optima avoidance as well. Song et al. [33] utilized the dimension decision
strategy of CS to improve the convergence speed, and Gaussian mutation to increase the convergence
accuracy and avoid premature convergence. Yousri et al. [34] improved the exploration
performance of HHO using the fractional calculus (FOC) memory concept. The hawks move with a
fractional-order velocity, and the escaping energy of prey is adaptively adjusted based on FOC
parameters to avoid local optima stagnation. Gupta et al. [35] enhanced the search-efficiency and
premature convergence avoidance of HHO by adding a nonlinear energy parameter, different settings
for rapid dives, opposition-based learning strategy and a greedy selection mechanism. Akdag et al. [36]
introduced seven types of random distribution functions to increase the performance of HHO, and then
applied the modified HHO to solve optimum power flow (OPF) problem. Yousri et al. [37] applied
HHO to optimize parameters of the Proportional-Integral controller for designing load frequency
control (LFC). Jia et al. [38] proposed a dynamic HHO using a mutation mechanism to avoid local
optima and enhance the search capability. This improved HHO was applied for satellite image
segmentation as well.
In addition, there have also been attempts to hybridize HHO with other algorithms. Hussain et al. [39]
integrated the sine-cosine algorithm (SCA) into HHO for numerical optimization and feature selection.
The SCA integration compensates for the ineffective exploration of HHO; moreover, exploitation is
enhanced by dynamically adjusting candidate solutions to avoid solution stagnancy in HHO. Bao et al. [40]
proposed HHO-DE by hybridizing HHO and DE algorithms. The convergence accuracy, ability to
avoid local optima and stability are greatly improved compared to HHO and DE. Houssein et al. [41]
proposed a hybrid algorithm called CHHO-CS by combining HHO with CS and chaotic maps. The
CHHO-CS achieves a better balance between exploration and exploitation phases, and effectively
avoids premature convergence. Kaveh et al. [42] hybridized HHO with Imperialist Competitive
Algorithm (ICA). Combination of the exploration strategy of ICA and exploitation technique of HHO
helps to achieve a better search performance. The satisfactory outcomes of several HHO-based hybrid
algorithms proposed in the literature show potential research direction.
Thus, in view of the slow convergence and local optima stagnation of HHO, and inspired
by the above research, we attempt a hybridization to enhance the performance of HHO and AO. An
improved hybrid Aquila Optimizer and Harris Hawks Optimization, namely IHAOHHO, is proposed.
First of all, we combine the exploration phase of AO with the exploitation phase of HHO.
This operation extracts and retains the strong exploration and exploitation capabilities of basic AO and
HHO. Then, in order to further improve the performance of IHAOHHO, the representative-based
hunting (RH) and opposition-based learning (OBL) strategies are introduced into IHAOHHO. RH is
mixed into the exploration phase to increase the diversification and OBL is added into the exploitation
phase to avoid local optima stagnation, respectively. Thus, the capabilities of exploration, exploitation
and local optima avoidance are effectively enhanced in the proposed algorithm. The standard and
CEC2017 benchmark functions and three engineering design problems are utilized to test the
exploration and exploitation capabilities of IHAOHHO. The proposed algorithm is compared with
basic AO, HHO, and several well-known meta-heuristic algorithms, including HOA, SSA, WOA,
GWO, MVO, IPOP-CMA-ES [43], LSHADE [44], Sine-cosine and Spotted Hyena-based Chimp
Optimization Algorithm (SSC) [45] and RUNge Kutta Optimizer (RUN) [46]. The experimental results
show that the proposed IHAOHHO algorithm outperforms other state-of-the-art algorithms.
The rest of this paper is organized as follows: Section 2 provides a brief overview of the
related work: the basic AO and HHO algorithms, as well as the RH and OBL strategies. Section 3
describes the proposed hybrid algorithm in detail. Section 4 presents the simulation experiments and
results analysis. Finally, Section 5 concludes the paper.
2. Preliminaries
2.1. Aquila Optimizer (AO)
AO is a recent swarm intelligence algorithm proposed by Abualigah et al. in 2021 [28]. There
are four hunting behaviors of the Aquila for different kinds of prey. The Aquila can switch hunting
strategies flexibly for different prey, and then uses its fast speed together with its sturdy feet and claws
to attack the prey. A brief description of the mathematical model is as follows.
Step 1: Expanded exploration: high soar with a vertical stoop
In this method, the Aquila flies high over the ground and explores the search space widely, and
then a vertical dive is taken once the Aquila determines the area of the prey. The mathematical
representation of this behavior is written as:
X(t + 1) = X_best(t) × (1 − t/T) + (X_M(t) − X_best(t) × rand)    (1)

X_M(t) = (1/N) × Σ_{i=1}^{N} X_i(t)    (2)
where X_best(t) represents the best position obtained so far, and X_M(t) denotes the average position
of all Aquilas in the current iteration. t and T are the current iteration and the maximum number of
iterations, respectively. N is the population size and rand is a random number between 0 and 1.
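To make the update concrete, here is a minimal NumPy sketch of Eqs (1) and (2); it is an illustration rather than the authors' implementation, and the function and variable names are chosen only for readability.

```python
import numpy as np

def expanded_exploration(X, x_best, t, T, rng):
    """AO expanded exploration (Eqs (1)-(2)): high soar with a vertical stoop.

    X      : (N, D) array with the current positions of all Aquilas
    x_best : (D,) best position found so far
    t, T   : current iteration and maximum number of iterations
    """
    x_mean = X.mean(axis=0)        # Eq (2): average position of the population
    rand = rng.random(X.shape)     # random numbers in [0, 1)
    # Eq (1): shrink towards the best solution as t approaches T
    return x_best * (1 - t / T) + (x_mean - x_best * rand)
```

Each row of the returned array is a candidate position for the corresponding Aquila.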
Step 2: Narrowed exploration: contour flight with short glide attack
This is the most commonly used hunting method for Aquila. It uses short gliding to attack the
prey after descending within the selected area and flying around the prey. The position update formula
is represented as:
X(t + 1) = X_best(t) × LF(D) + X_R(t) + (y − x) × rand    (3)

where X_R(t) represents the position of a randomly selected search agent, and D is the dimension size. LF(D)
represents the Levy flight function, which is calculated as follows:
LF(D) = s × (u × σ) / |v|^(1/β)    (4)

σ = [ Γ(1 + β) × sin(πβ/2) / ( Γ((1 + β)/2) × β × 2^((β−1)/2) ) ]^(1/β)    (5)

where s and β are constant values equal to 0.01 and 1.5, respectively, and u and v are random
numbers between 0 and 1. y and x are used to present the spiral shape in the search, which are
calculated as follows:
x = r × sin(θ), y = r × cos(θ)
r = r_1 + 0.00565 × D_1    (6)
θ = −ω × D_1 + 3π/2
where r_1 denotes the number of search cycles between 1 and 20, D_1 consists of integer numbers
from 1 to the dimension size (D), and ω is equal to 0.005.
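The Levy flight term of Eqs (4) and (5) can be sketched as follows. Note that this sketch follows the common Mantegna procedure, where u and v are drawn from normal distributions; the text above states that u and v are random numbers between 0 and 1, so treat the sampling choice as an assumption.

```python
import math
import numpy as np

def levy_flight(dim, beta=1.5, s=0.01, rng=None):
    """Levy flight step LF(D) of Eqs (4)-(5), Mantegna-style sampling (an assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)  # Eq (5)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return s * u / np.abs(v) ** (1 / beta)                                               # Eq (4)
```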
Step 3: Expanded exploitation: low flight with a slow descent attack
In the third method, when the area of prey is roughly determined, the Aquila descends vertically
to do a preliminary attack. AO exploits the selected area to get close to and attack the prey. This
behavior is presented as follows:

X(t + 1) = (X_best(t) − X_M(t)) × α − rand + ((UB − LB) × rand + LB) × δ

where α and δ are the exploitation adjustment parameters fixed to 0.1, and UB and LB are the
upper and lower bounds of the problem.
Step 4: Narrowed exploitation: walk and grab prey
In the fourth method, the Aquila approaches the prey and attacks it over the land according to its
stochastic movements. The position update formula is as follows:

X(t + 1) = QF(t) × X_best(t) − (G_1 × X(t) × rand) − G_2 × LF(D) + rand × G_1

where X(t) is the current position, and QF(t) represents the quality function value used to
balance the search strategy. G_1 denotes the movement parameter of the Aquila during tracking of the
prey, which is a random number between [-1, 1]. G_2 denotes the flight slope when chasing the prey,
which decreases linearly from 2 to 0.
The flowchart of AO is shown in Figure 1.
Figure 1. The flowchart of the AO algorithm.
2.2. Harris Hawks Optimization (HHO)
The Harris' hawks usually perch on some random locations, wait and monitor the desert to detect
the prey. There are two perching strategies based on the positions of other family members and the
prey, which are selected in accordance with the random q value.
The HHO algorithm has a transition mechanism from exploration to exploitation phase based on
the escaping energy of the prey, and then changes the different exploitative behaviors. The energy of
the prey is modeled as follows, which decreases during the escaping behavior.
E = 2 × E_0 × (1 − t/T)    (10)

where E represents the escaping energy of the prey, and E_0 is the initial state of the energy. When
|E| ≥ 1, the algorithm performs the exploration stage, and when |E| < 1, the algorithm performs the
exploitation phase.
In this phase, four different chasing and attacking strategies are proposed on the basis of the
escaping energy of the prey and the chasing styles of the Harris' hawks. In addition to the escaping energy,
the parameter r is also utilized to choose the chasing strategy, which indicates the chance of the prey
successfully escaping (r < 0.5) or not (r ≥ 0.5) before the attack.
i. Soft besiege
When r ≥ 0.5 and |E| ≥ 0.5, the prey still has enough energy and tries to escape, so the Harris'
hawks encircle it softly to make the prey more exhausted and then attack it. This behavior is modeled
as follows:
X(t + 1) = ΔX(t) − E × |J × X_best(t) − X(t)|    (11)

ΔX(t) = X_best(t) − X(t)    (12)

where ΔX(t) indicates the difference between the position of the prey and the current position, and J
represents the random jump strength of the prey.
ii. Hard besiege
When r ≥ 0.5 and |E| < 0.5, the prey has a low escaping energy, and the Harris' hawks encircle
the prey readily and finally attack it. In this situation, the positions are updated as follows:
X(t + 1) = X_best(t) − E × |ΔX(t)|    (14)
iii. Soft besiege with progressive rapid dives
When r < 0.5 and |E| ≥ 0.5, the prey still has enough energy to escape, so the Harris' hawks perform
a soft besiege with progressive rapid dives before the attack. The positions are updated as follows:

Y = X_best(t) − E × |J × X_best(t) − X(t)|    (15)

Z = Y + S × LF(D)    (16)

X(t + 1) = Y if F(Y) < F(X(t)), otherwise Z if F(Z) < F(X(t))    (17)

where S is a random vector and F(·) is the fitness function. Note that only the better position between Y and Z is selected as the
next position.
iv. Hard besiege with progressive rapid dives
When |E| < 0.5 and r < 0.5, the prey does not have enough energy to escape, so the hawks perform a
hard besiege to decrease the distance between their average position and the prey, and finally attack
and kill the prey. The mathematical representation of this behavior is as follows:
Y = X_best(t) − E × |J × X_best(t) − X_m(t)|    (18)

Z = Y + S × LF(D)    (19)

X(t + 1) = Y if F(Y) < F(X(t)), otherwise Z if F(Z) < F(X(t))    (20)

where X_m(t) denotes the average position of the hawks in the current iteration.
Note that only the better position between Y and Z will be the next position for the new iteration.
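The four exploitation branches can be condensed into a short sketch. The code below is a hedged illustration of Eqs (11)-(20), not the authors' code; it reuses the levy_flight helper sketched above, assumes a fitness callable, and its final greedy selection is a simplification of the piecewise rules in Eqs (17) and (20).

```python
import numpy as np

def hho_exploitation(x_i, x_best, x_mean, E, r, J, fitness, levy_flight, rng):
    """One HHO exploitation step for a single hawk, following Eqs (11)-(20).

    x_i, x_best, x_mean : current, best and average positions (1-D arrays of length D)
    E, r, J             : escaping energy, escape chance and random jump strength
    """
    D = x_i.size
    if r >= 0.5 and abs(E) >= 0.5:               # i. soft besiege, Eqs (11)-(12)
        delta = x_best - x_i
        return delta - E * np.abs(J * x_best - x_i)
    if r >= 0.5 and abs(E) < 0.5:                # ii. hard besiege, Eq (14)
        return x_best - E * np.abs(x_best - x_i)
    if abs(E) >= 0.5:                            # iii. soft besiege with rapid dives, Eq (15)
        Y = x_best - E * np.abs(J * x_best - x_i)
    else:                                        # iv. hard besiege with rapid dives, Eq (18)
        Y = x_best - E * np.abs(J * x_best - x_mean)
    Z = Y + rng.random(D) * levy_flight(D)       # Eqs (16)/(19): rapid dive along a Levy flight
    # Eqs (17)/(20), simplified: keep the best of Y, Z and the current position
    return min((Y, Z, x_i), key=fitness)
```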
The flowchart of HHO is displayed in Figure 2.
Figure 2. The flowchart of the HHO algorithm.
2.3. Representative-based hunting (RH) strategy
The strategy of representative-based hunting was first proposed in 2021 to improve the exploration and
diversification of the GWO algorithm [47]. To achieve this, an archive called the representative
archive (RA) is constructed to maintain the representative solutions. A random representative search
agent is selected from the five-best search agents archived by the RA, and a random search agent is
selected from the RA. Meanwhile, two random search agents are selected from the population. These
four selections efficiently improve the diversity, exploration capability and premature convergence
avoidance. The mathematical model of RH is as follows:
where XR_best and XR_archive are randomly selected from the five-best representative search agents and
the whole archive, respectively. X_rand1 and X_rand2 are randomly selected from the whole population.
The parameter σ and the Cauchy distribution cd are calculated by:
σ = ((T − t)/(T − 1))^Exponent × (σ_initial − σ_final) + σ_final    (22)
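As a small illustration, the decaying scale factor of Eq (22) can be written as below; the default values σ_initial = 1, σ_final = 0 and Exponent = 2 follow the sensitivity analysis reported later in Section 4.

```python
def sigma_schedule(t, T, sigma_initial=1.0, sigma_final=0.0, exponent=2):
    """Decaying scale factor of Eq (22), used by the RH strategy."""
    return ((T - t) / (T - 1)) ** exponent * (sigma_initial - sigma_final) + sigma_final
```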
2.4. Opposition-based learning (OBL)
Opposition-based learning (OBL) is a powerful optimization tool proposed by Tizhoosh in 2005 [48].
The main idea of OBL is simultaneously considering the fitness of an estimate and its corresponding
counter estimate to obtain a better candidate solution (Figure 3). An optimization process usually starts
at a random initial solution. If the random solution is near the optimal solution, the algorithm converges
fast. However, it is possible that the initial solution is far away from the optimum or even at the exact
opposite position. In this case, it might take quite a long time to converge, or the search may not converge at all. Thus,
considering the opposite direction of the candidate solution in each step increases the probability of
finding a better solution. We can choose the opposite point as the candidate solution once the opposite
solution is beneficial and then proceed to the next iteration. The OBL concept has successfully been
used in a variety of meta-heuristic algorithms [49–53] to improve the convergence speed. OBL is
defined by:

x_j^OBL = l_j + u_j − x_j    (24)

where x_j^OBL represents the opposite solution, and l_j and u_j are the lower and upper bounds of the problem
in the jth dimension. The opposite solution described by Eq (24) can effectively help the population jump
out of the local optima.
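A minimal sketch of the OBL update of Eq (24) and the greedy choice between a solution and its opposite is given below; the fitness callable is assumed.

```python
import numpy as np

def opposite_solution(x, lower, upper):
    """Eq (24): reflect a candidate solution about the centre of the search bounds."""
    return np.asarray(lower) + np.asarray(upper) - np.asarray(x)

def obl_select(x, lower, upper, fitness):
    """Keep whichever of x and its opposite has the better (smaller) fitness."""
    x_opp = opposite_solution(x, lower, upper)
    return x_opp if fitness(x_opp) < fitness(x) else x
```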
3. The proposed IHAOHHO
The AO simulates hunting behaviors for fast-moving prey within a wide flying area in the exploration
phase. The characteristics of these behaviors make AO have a strong global search ability and fast
convergence speed. However, the selected search space is not exhaustively searched during the
exploitation phase. The role of Levy flight is weak in the late iterations, which tends to result in
premature convergence. Thus, the AO algorithm possesses good exploration capability and fast
convergence speed, but it is hard to escape from local optima in the exploitation stage. For the HHO
algorithm, experimental results show that its exploration phase suffers from insufficient diversification
of the population and low convergence speed. On the basis of the energy and
escape probability of the prey, four different hunting strategies are used to implement various position
updating methods in the exploitation phase. In addition, the transition mechanism from exploration to
exploitation is a good way to adapt to animal characteristics. As a whole, the energy of prey decreases
with the increase of iterations, making the algorithm enter the exploitation stage.
In this work, we retain the exploration phase of AO and the exploitation phase of HHO, and
combine them together through the transition mechanism. The exploration phase of HHO is dominated
by randomization and amounts to an almost blind search mechanism. In contrast, position updating
in the exploration phase of AO is guided by the best solution and the average position with some randomness,
which is more reasonable. Moreover, the four exploitation strategies based on the different values of E and r
help the algorithm fully exploit the search space. This hybridization is beneficial to give full play to
the advantages of these two basic algorithms. The global search capability, fast convergence speed,
and detailed exploitation of the two algorithms are all retained. However, the diversity of the population
in the exploration phase is insufficient due to the lack of randomness. As described in Section 2.3, RH
is designed for improving the exploration and diversification of an optimization algorithm. Selections
from different sub-populations can efficiently improve the diversity and exploration capability. Thus,
the RH strategy is utilized to further improve the diversification of the population in the exploration phase,
which is conducive to finding the most promising region quickly. Besides, AO and HHO possess a
common defect of local optima stagnation. The OBL strategy can utilize the opposite solution to make
the population jump out of the local optima. Therefore, OBL strategy is added to the exploitation phase
to enhance the ability to jump out of the local optima as well. All these strategies effectively improve
the convergence speed, convergence accuracy and the overall optimization performance of the hybrid
algorithm. This improved hybrid Aquila Optimizer and Harris Hawks Optimization algorithm is named
IHAOHHO. Different phases of IHAOHHO are illustrated in Figure 4. The pseudo-code of
IHAOHHO is given in Algorithm 1, and the summarized flowchart is shown in Figure 5.
Figure 5. The flowchart of the proposed IHAOHHO algorithm. In the exploration phase (|E| ≥ 1), new positions X_new_i and the representative positions X_R_i are updated with Eq (1) or Eq (3) together with Eq (21); in the exploitation phase (|E| < 1), positions are updated with one of Eqs (11), (14), (15) or (18) according to the values of r and |E|, followed by the OBL update of Eq (24); a new position X_new_i replaces X_i only if f(X_new_i) < f(X_i).
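Putting the pieces together, the overall loop can be sketched as follows. This is a hedged reconstruction based on the description above and Figure 5, not the authors' implementation: the RH term of Eq (21) is only indicated by a comment because its formula is not reproduced here, the spiral term of Eq (3) is replaced by a small random perturbation, and the levy_flight and hho_exploitation helpers are the sketches given earlier.

```python
import numpy as np

def ihaohho(fitness, lb, ub, dim, pop_size=30, max_iter=500, seed=None):
    """Schematic IHAOHHO loop: AO exploration (with RH) and HHO exploitation (with OBL)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop_size, dim))
    fit = np.array([fitness(x) for x in X])
    i_best = fit.argmin()
    x_best, f_best = X[i_best].copy(), fit[i_best]

    for t in range(max_iter):
        x_mean = X.mean(axis=0)
        for i in range(pop_size):
            E = 2 * rng.uniform(-1, 1) * (1 - t / max_iter)      # escaping energy, Eq (10)
            if abs(E) >= 1:                                      # exploration: AO Eq (1) or Eq (3)
                if rng.random() < 0.5:
                    x_new = x_best * (1 - t / max_iter) + (x_mean - x_best * rng.random(dim))
                else:
                    x_new = (x_best * levy_flight(dim) + X[rng.integers(pop_size)]
                             + 0.1 * rng.standard_normal(dim))   # spiral term of Eq (3), schematic
                # the RH term of Eq (21) and the representative archive would be applied here
            else:                                                # exploitation: HHO Eqs (11)-(20)
                r, J = rng.random(), 2 * (1 - rng.random())
                x_new = hho_exploitation(X[i], x_best, x_mean, E, r, J,
                                         fitness, levy_flight, rng)
                x_opp = lb + ub - x_new                          # OBL, Eq (24)
                if fitness(x_opp) < fitness(x_new):
                    x_new = x_opp
            x_new = np.clip(x_new, lb, ub)
            f_new = fitness(x_new)
            if f_new < fit[i]:                                   # greedy selection (Figure 5)
                X[i], fit[i] = x_new, f_new
                if f_new < f_best:
                    x_best, f_best = x_new.copy(), f_new
    return x_best, f_best
```

For the benchmark experiments described below, fitness would be one of the test functions and lb, ub, dim the corresponding range and dimension from Tables 1–3.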
Tables 1 and 2 (partial). Details of selected standard benchmark functions (n denotes the dimension):
F2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|, Dim = 30, Range = [-10, 10], f_min = 0
F3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)², Dim = 30, Range = [-100, 100], f_min = 0
F6(x) = Σ_{i=1}^{n} ([x_i + 0.5])², Dim = 30, Range = [-100, 100], f_min = 0
F7(x) = Σ_{i=1}^{n} i × x_i⁴ + random[0, 1), Dim = 30, Range = [-1.28, 1.28], f_min = 0
F9(x) = Σ_{i=1}^{n} [x_i² − 10cos(2πx_i) + 10], Dim = 30, Range = [-5.12, 5.12], f_min = 0
F10(x) = −20exp(−0.2√((1/n)Σ_{i=1}^{n} x_i²)) − exp((1/n)Σ_{i=1}^{n} cos(2πx_i)) + 20 + e, Dim = 30, Range = [-32, 32], f_min = 0
F11(x) = (1/4000)Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1, Dim = 30, Range = [-600, 600], f_min = 0
F12(x) = (π/n){10sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)²[1 + 10sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a, Dim = 30, Range = [-50, 50], f_min = 0
process. Finally, the computational complexities of position updating of hawks and fitness comparison
in one iteration are O(2 × N × D) and O(N) respectively, where D is dimension size of the problem.
Therefore, the total computational complexity of the proposed IHAOHHO algorithm is O(N × (1 + 2
× D × T + 2 × T)). As described in the literature, the computational complexity of AO and HHO are
both O(N × (1 + D × T + T)). Compared to the basic AO and HHO, the computational complexity of
IHAOHHO is slightly increased due to the RH and OBL strategies, which is acceptable for improving
the convergence accuracy and speed of the algorithm.
4. Simulation experiments and results analysis
In this section, we implement four main experiments to evaluate the performance of the proposed
IHAOHHO algorithm. Standard benchmark function experiment is carried out firstly, which is used to
evaluate the performance of the algorithm in solving 23 simple numerical optimization problems.
Secondly, the CEC2017 benchmark functions are tested to assess the performance of the algorithm in
solving complex numerical optimization problems. Then, sensitivity analysis is performed to
investigate the effect of the control parameters. The last one is engineering design problems, which
aims to assess the performance of IHAOHHO in solving real-world problems. All experiments are
implemented in MATLAB R2016a on a PC with an Intel(R) Core(TM) i5-9500 CPU @ 3.00 GHz and
16 GB of RAM, running Windows 10.
F20(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{6} a_ij (x_j − p_ij)²), Dim = 6, Range = [0, 1], f_min = −3.32 (from Table 3)
We utilize 23 standard benchmark functions to test the performance of the IHAOHHO algorithm,
which are divided into three types including unimodal, multimodal and fixed-dimension multimodal
benchmark functions. The main characteristic of unimodal functions is that there is only one global
optimum but no local optima. This kind of functions can be used to evaluate the exploitation capability
and convergence rate of an algorithm. Unlike unimodal functions, multimodal and fixed-dimension
multimodal functions have one global optimum and multiple local optima. These types of functions
are utilized to evaluate the exploration and local optima avoidance capabilities. The benchmark
function details are listed in Tables 1–3.
For verification of the results, IHAOHHO is compared with the basic AO and HHO, as well as several
well-known meta-heuristic algorithms: HOA, SSA, WOA, GWO and MVO. For all tests, we set the
population size N = 30, dimension size D = 30, maximum number of iterations T = 500, and run 30
times independently. The parameter settings of each algorithm are shown in Table 4. The
average and standard deviation results of these test functions are reported in Table 5. Figure 6 shows
the convergence curves of 23 test functions. The partial search history, trajectory and average fitness
maps are shown in Figure 7. The Wilcoxon signed-rank test results are also listed in Table 6.
Unimodal test functions are usually used to investigate the exploitation capability of the algorithm
since they have only one global optimum and no local optima. As seen from Table 5, the IHAOHHO
performs much better than the other selected algorithms except on F6. For all unimodal functions except
F6, IHAOHHO obtains the smallest average values and standard deviations compared to the other
algorithms, which indicates the best accuracy and stability among all these algorithms. Hence, the
exploitation capability of the proposed IHAOHHO algorithm is competitive with all the selected meta-
heuristic algorithms.
Figure 7. Parameter space, search history, trajectory, average fitness, and convergence
curve of IHAOHHO.
Multimodal test functions F8–F23 contain plentiful local optima whose number increases
exponentially with the dimension size of the problem. These functions are very useful to evaluate the
exploration ability and local optima avoidance of the algorithm. It can be seen from Table 5 that
IHAOHHO outperforms other algorithms in most of the multimodal and fixed-dimension multimodal
functions. For multimodal functions F8–F13, IHAOHHO shows complete superiority over the other
selected algorithms, with the best average values and standard deviations. For the ten fixed-dimension
multimodal functions F14–F23, IHAOHHO performs satisfactorily, though less dominantly. The IHAOHHO outperforms
others in terms of both average values and standard deviations in F21–F23, and achieves the best
accuracy for F16–F18. These results reveal that IHAOHHO can also provide superior exploration capability.
Search agents tend to change drastically to investigate promising regions of the search space in
early iterations, and then exploit the region in detail and converge gradually as the number of iterations
increases. Convergence curves of the IHAOHHO, AO, HHO, HOA, SSA, WOA, GWO, and MVO for 23
standard benchmark functions are given in Figure 6, which show the convergence rate of algorithms.
As we can see, IHAOHHO shows competitive performance compared to other state-of-the-art
algorithms. The IHAOHHO algorithm presents faster convergence speed than all other algorithms in
F1–F4 and F8–F11. For the other test functions, IHAOHHO may not have much advantage over the other
algorithms in terms of convergence speed, because some of those algorithms are excellent as well,
but the convergence accuracy of IHAOHHO is better than that of the other algorithms on most of the test functions.
The superiority of IHAOHHO in terms of convergence speed is likely to come from the RH
strategy in exploration phase. To be specific, the RH strategy provides better randomness and diversity
for the search agents, making search agents explore the search space widely and randomly. The
improvement of randomness and diversity increases the probability of finding the most promising
region quickly. The advantage of convergence accuracy is likely to be derived from the OBL strategy,
which improves randomness of search agents. The search agents can choose the better one to jump out
of the local optima in each iteration. These two strategies help the hybrid algorithm outperform the
basic AO and HHO. Overall, IHAOHHO can efficiently achieve good solutions for all 23 standard
benchmark functions.
Furthermore, Figure 7 shows us the results of several representative test functions on search
history, trajectory, average fitness and convergence curve. From search history maps, we can see the
search agents’ distribution of the IHAOHHO while exploring and exploiting the search space. Because
of the fast convergence, the vast majority of search agents are concentrated near the global optimum.
Inspecting trajectory figures in Figure 7, the first search agent constantly oscillates in the first
dimension of the search space, which suggests that the search agent investigates the most promising
areas and better solutions widely. This powerful search capability is likely to come from the RH and
OBL strategies. The average fitness indicates whether exploration and exploitation are conducive to improving
the first random population and whether an accurate approximation of the global optimum can be found in the
end. Similarly, it can be noticed that the average fitness oscillates like the trajectories in the early iterations,
and then decreases abruptly and levels off accordingly. The average fitness maps show the great
improvement of the first random population and the acquisition of the final global optimal
approximation as well. Finally, the convergence curves reveal the best fitness values found by the search
agents after each iteration. From these curves, the IHAOHHO shows a very fast convergence speed.
Table 6. P-values from the Wilcoxon signed-rank test for the results in Table 5.
F  IHAOHHO vs AO  IHAOHHO vs HHO  IHAOHHO vs HOA  IHAOHHO vs SSA  IHAOHHO vs WOA  IHAOHHO vs MVO  IHAOHHO vs GWO
F1 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05
F2 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05
F3 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05
F4 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05
F5 0.10699 1.2207E-04 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05
F6 1.8311E-04 1.8311E-04 6.1035E-05 1.1597E-03 6.1035E-05 6.1035E-05 6.1035E-05
F7 6.3721E-02 0.10699 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05
F8 6.1035E-05 0.63867 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05
F9 NaN NaN 6.1035E-05 6.1035E-05 NaN 6.1035E-05 6.1035E-05
F10 NaN NaN 6.1035E-05 6.1035E-05 1.9531E-03 6.1035E-05 6.1035E-05
F11 NaN NaN 0.1250 6.1035E-05 NaN 6.1035E-05 NaN
F12 2.6245E-03 1.2207E-04 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05
F13 2.1545E-02 1.2207E-04 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05
F14 0.97797 0.45428 1.8066E-02 2.6245E-03 0.18762 6.1035E-05 3.5339E-02
F15 6.1035E-05 0.45428 6.1035E-05 6.1035E-05 6.1035E-05 6.1035E-05 0.3028
F16 6.1035E-05 8.3618E-03 6.1035E-05 6.1035E-05 9.4604E-02 6.1035E-05 5.5359E-02
F17 6.1035E-04 0.45428 6.1035E-05 6.1035E-05 0.27686 3.0518E-04 1.0254E-02
F18 8.3252E-02 1.2207E-04 8.3252E-02 6.1035E-05 0.48871 0.67877 0.13538
F19 7.2998E-02 1.8066E-02 7.2998E-02 6.1035E-05 1.8066E-02 6.1035E-05 3.0151E-02
F20 6.1035E-04 0.93408 8.5449E-04 1.5259E-03 2.0142E-03 6.1035E-05 8.5449E-04
F21 3.3569E-03 6.1035E-05 6.1035E-05 4.2120E-02 1.2207E-04 2.0142E-03 1.8311E-04
F22 1.2207E-04 6.1035E-05 6.1035E-05 8.3252E-02 1.8311E-04 4.126E-02 8.3618E-03
F23 1.1597E-03 6.1035E-05 6.1035E-05 3.3026E-02 6.1035E-05 1.6882E-02 6.7139E-03
The Wilcoxon signed-rank test is a non-parametric statistical test and useful to evaluate the
statistical performance differences between the proposed IHAOHHO algorithm and other algorithms.
As is well-known, p-values less than 0.05 indicate that there is a significant difference between the
two compared algorithms. The calculated results of Wilcoxon signed-rank test between IHAOHHO
and the other seven algorithms for each benchmark function are listed in Table 6. According to the
criterion of 0.05, IHAOHHO outperforms all the other algorithms to varying degrees. This superiority is
statistically significant on the unimodal functions F1–F6, which strongly indicates that IHAOHHO
possesses high exploitation capability. IHAOHHO also shows better results on the multimodal functions F8–F23,
which suggests that IHAOHHO has a high exploration capability. To sum up, the IHAOHHO
algorithm can provide better results on almost all benchmark functions than other comparative algorithms.
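For reproducibility, the p-values in Table 6 can be obtained from the 30 independent run results of two algorithms with a standard statistics package; the sketch below uses SciPy and is only an illustration of the procedure.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_runs(results_a, results_b, alpha=0.05):
    """Paired Wilcoxon signed-rank test over independent runs of two algorithms."""
    diffs = np.asarray(results_a, dtype=float) - np.asarray(results_b, dtype=float)
    if not np.any(diffs):
        # all paired differences are zero: the test is undefined (the NaN entries in Table 6)
        return float("nan"), False
    _, p_value = wilcoxon(results_a, results_b)
    return p_value, p_value < alpha   # significant difference when p < 0.05
```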
half of the challenging hybrid and composition functions as shown in Table 7. The test results are
compared to some well-known and latest algorithms proposed recently, in which IPOP-CMA-ES and
LSHADE are the best performers on CEC2017 in the literature. As described in the previous section, each
algorithm is run 30 times with 500 iterations, and the average and standard deviation results are presented
in Table 8. From the comparison results, the proposed IHAOHHO obtains the 3rd rank, following IPOP-
CMA-ES and LSHADE, and completely exceeds the SSC, RUN and HOA methods. This reveals that
IHAOHHO can also achieve good results on complex functions.
The performance of an optimization algorithm is affected by the values of the control parameters.
For the sake of better performance, the influence of the parameters should be investigated to select the
appropriate values. The IHAOHHO algorithm has three parameters, σ_initial, σ_final and Exponent, in
Eq (22). At the end of the iterations, the algorithm needs to search in detail and minimize randomness
as much as possible. Thus, σ_final should be equal to 0 to remove the random term in Eq (21). Next,
the remaining two parameters, σ_initial and Exponent, are assessed on representative standard and CEC2017
benchmark functions in Table 9. The mean-square error values are obtained using benchmark functions
from different categories including unimodal, multimodal and fixed-multimodal of standard benchmark
functions, and unimodal, multimodal, hybrid and composite CEC2017 functions, with different parameter values. The
best performance (shown in bold in Table 9) is obtained with values of 1 and 2 for the parameters σ_initial and Exponent, respectively.
Considering equality and inequality constraints is a necessary process for optimization because
most optimization problems have constraints in the real world. In this subsection, three well-known
constrained engineering design problems, which include speed reducer design problem,
tension/compression spring design problem and three-bar truss design problem, are solved to further
verify the performance of IHAOHHO. The results of IHAOHHO are compared to those of the basic AO and HHO,
as well as HOA, SSA, WOA, GWO and MVO. The parameter settings are the same as in the previous
numerical experiments. For all tests, each algorithm is run 15 times independently. The best result
over the 15 runs for each algorithm and the Wilcoxon signed-rank test results between IHAOHHO and
other algorithms are shown in Tables 10–12.
This problem aims to optimize seven variables to minimize the speed reducer’s total weights,
which include the face width (x1), module of teeth (x2), a discrete design variable on behalf of the teeth
in the pinion (x3), length of the first shaft between bearings (x4), length of the second shaft between
bearings (x5), diameters of the first shaft (x6) and diameters of the second shaft (x7). Four constraints:
covering stress, bending stress of the gear teeth, stresses in the shafts and transverse deflections of the
shafts as shown in Figure 8 should be satisfied. The mathematical formulation is represented as follows:
Minimize
f(x) = 0.7854 × x1 × x2² × (3.3333 × x3² + 14.9334 × x3 − 43.0934) − 1.508 × x1 × (x6² + x7²) + 7.4777 × (x6³ + x7³) + 0.7854 × (x4 × x6² + x5 × x7²),
Subject to
g1(x) = 27/(x1 × x2² × x3) − 1 ≤ 0,
g2(x) = 397.5/(x1 × x2² × x3²) − 1 ≤ 0,
g3(x) = 1.93 × x4³/(x2 × x3 × x6⁴) − 1 ≤ 0,
g4(x) = 1.93 × x5³/(x2 × x3 × x7⁴) − 1 ≤ 0,
g5(x) = √((745 × x4/(x2 × x3))² + 16.9 × 10⁶)/(110.0 × x6³) − 1 ≤ 0,
g6(x) = √((745 × x5/(x2 × x3))² + 157.5 × 10⁶)/(85.0 × x7³) − 1 ≤ 0,
g7(x) = (x2 × x3)/40 − 1 ≤ 0,
g8(x) = (5 × x2)/x1 − 1 ≤ 0,
g9(x) = x1/(12 × x2) − 1 ≤ 0,
g10(x) = (1.5 × x6 + 1.9)/x4 − 1 ≤ 0,
g11(x) = (1.1 × x7 + 1.9)/x5 − 1 ≤ 0,
Variable range
2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.8 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5.0 ≤ x7 ≤ 5.5.
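To show how such a constrained design problem can be handed to a meta-heuristic, the sketch below evaluates the speed reducer objective and constraints with a simple static penalty. The penalty approach and its coefficient are assumptions made for illustration, not the constraint-handling scheme used in the paper.

```python
import numpy as np

LB = np.array([2.6, 0.7, 17, 7.3, 7.8, 2.9, 5.0])
UB = np.array([3.6, 0.8, 28, 8.3, 8.3, 3.9, 5.5])

def speed_reducer_weight(x):
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2) + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

def speed_reducer_constraints(x):
    x1, x2, x3, x4, x5, x6, x7 = x
    return np.array([
        27 / (x1 * x2**2 * x3) - 1,
        397.5 / (x1 * x2**2 * x3**2) - 1,
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1,
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1,
        np.sqrt((745 * x4 / (x2 * x3))**2 + 16.9e6) / (110.0 * x6**3) - 1,
        np.sqrt((745 * x5 / (x2 * x3))**2 + 157.5e6) / (85.0 * x7**3) - 1,
        x2 * x3 / 40 - 1,
        5 * x2 / x1 - 1,
        x1 / (12 * x2) - 1,
        (1.5 * x6 + 1.9) / x4 - 1,
        (1.1 * x7 + 1.9) / x5 - 1,
    ])

def penalized_fitness(x, penalty=1e6):
    """Objective plus a static penalty for violated constraints g_i(x) <= 0."""
    violation = np.maximum(speed_reducer_constraints(x), 0).sum()
    return speed_reducer_weight(x) + penalty * violation
```

The resulting penalized_fitness can be passed directly as the fitness argument of the ihaohho sketch given earlier, together with LB and UB.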
Compared to other algorithms, IHAOHHO can obviously achieve better results in the speed
reducer design problem, as shown in Table 10. The p-values in Table 10 show a significant difference
between IHAOHHO and the other algorithms, confirming the statistical superiority of the proposed algorithm.
Table 10. Comparison of IHAOHHO results with other competitors for the speed reducer
design problem.
Algorithm  x1  x2  x3  x4  x5  x6  x7  Optimum weight  P-value
IHAOHHO 3.49683 0.7 17 7.33302 7.8 3.35006 5.28575 2995.816 NaN
AO 3.49688 0.7 17 8.10828 7.8 3.37081 5.28578 3008.168 0.025574
HHO 3.49731 0.7 17 7.3 7.8 3.47527 5.28482 3028.6976 0.035339
HOA 3.56008 0.7 17 7.34912 7.8 3.49325 5.28415 3058.577 6.1035e-05
SSA 3.49732 0.7 17 8.03843 7.80061 3.52296 5.28577 3049.1538 0.012451
WOA 3.4976 0.7 17 7.3 7.8 3.44134 5.28525 3019.3398 0.043721
MVO 3.52164 0.7 17 7.44477 8.29729 3.43143 5.2842 3038.4984 0.018066
GWO 3.49231 0.7 17.0038 8.1759 8.04815 3.35214 5.28783 3013.2315 0.0026245
In this case, the intention is to minimize the weight of the tension/compression spring shown in
Figure 9. Constraints on surge frequency, shear stress and deflection must be satisfied during optimum
design. There are three parameters to be optimized: the wire diameter (d), the mean coil
diameter (D) and the number of active coils (N). The mathematical form of this problem can be written
as follows:
Consider
x = [x1 x2 x3] = [d D N],
Minimize
f(x) = (x3 + 2) × x2 × x1²,
Subject to
g1(x) = 1 − (x2³ × x3)/(71785 × x1⁴) ≤ 0,
g2(x) = (4 × x2² − x1 × x2)/(12566 × (x2 × x1³ − x1⁴)) + 1/(5108 × x1²) − 1 ≤ 0,
g3(x) = 1 − (140.45 × x1)/(x2² × x3) ≤ 0,
g4(x) = (x1 + x2)/1.5 − 1 ≤ 0,
Variable range
0.05 ≤ x1 ≤ 2.00, 0.25 ≤ x2 ≤ 1.30, 2.00 ≤ x3 ≤ 15.00.
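The same penalty pattern applies to the spring problem; the compact sketch below is again an illustrative assumption rather than the authors' constraint handling.

```python
import numpy as np

def spring_fitness(x, penalty=1e6):
    """Tension/compression spring weight with a static penalty; x = [d, D, N]."""
    d, D, N = x
    weight = (N + 2) * D * d**2
    g = np.array([
        1 - D**3 * N / (71785 * d**4),
        (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4)) + 1 / (5108 * d**2) - 1,
        1 - 140.45 * d / (D**2 * N),
        (d + D) / 1.5 - 1,
    ])
    return weight + penalty * np.maximum(g, 0).sum()
```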
The experimental results are listed in Table 11 and show that IHAOHHO attains the best
weight value compared to all the other algorithms. IHAOHHO obtains significantly different results
compared to the others, except HOA.
Table 11. Comparison of IHAOHHO results with other competitors for the
tension/compression spring design problem.
Algorithm  d  D  N  Optimum weight  P-value
IHAOHHO 0.054826 0.49772 5.273 0.010881 NaN
AO 0.051647 0.38603 9.3553 0.011692 6.1035e-05
HHO 0.059559 0.64197 3.4141 0.012329 0.047913
HOA 0.054031 0.47388 6.0876 0.011188 0.63867
SSA 0.05 0.326589 12.8798 0.012149 0.00030518
WOA 0.059166 0.62905 3.534 0.012186 0.047913
MVO 0.059421 0.63742 3.4573 0.012282 0.025574
GWO 0.057335 0.57116 4.1668 0.011579 6.1035e-05
The three-bar truss design problem is a classical optimization application in the civil engineering
field. The main intention of this case is to minimize the weight of a truss with three bars by considering
two structural parameters, as illustrated in Figure 10. Deflection, stress and buckling are the three main
constraints. The mathematical formulation of this problem is given as follows:
Consider
x = [x1 x2] = [A1 A2],
Minimize
f(x) = (2√2 × x1 + x2) × l,
Subject to
g1(x) = ((√2 × x1 + x2)/(√2 × x1² + 2 × x1 × x2)) × P − σ ≤ 0,
g2(x) = (x2/(√2 × x1² + 2 × x1 × x2)) × P − σ ≤ 0,
g3(x) = (1/(√2 × x2 + x1)) × P − σ ≤ 0,
Variable range
0 ≤ x1, x2 ≤ 1,
where l = 100 cm, P = 2 kN/cm² and σ = 2 kN/cm².
Table 12. Comparison of IHAOHHO results with other competitors for the three-bar truss
design problem.
5. Conclusions
In this paper, an improved hybrid Aquila Optimizer and Harris Hawks Optimization algorithm is
proposed by combining the exploration part of AO with the exploitation part of HHO. The
advantageous parts of basic AO and HHO are combined to keep the well-behaved exploration and
exploitation capabilities. Two strategies including representative-based hunting and opposition-based
learning are incorporated into the proposed IHAOHHO to further improve the optimization
performance. The representative-based hunting strategy can effectively enhance the diversity of the
population and fully explore the search space. The opposition-based learning strategy contributes to
keep the algorithm from trapping in local optima. This algorithm is evaluated by standard benchmark
functions and CEC2017 test functions to analyze its exploration, exploitation and local optima
avoidance capabilities. The experiments show competitive results compared to other state-of-the-art
meta-heuristic algorithms, which demonstrates that IHAOHHO has better optimization performance than
others. Three engineering design problems are solved as well to further verify the superiority of the
algorithm, and the results are also competitive with other meta-heuristic algorithms.
The performance of the proposed algorithm on CEC2017 benchmark functions still needs to be
improved. The exploration and exploitation capabilities need to be further investigated to break the
limitations on the CEC2017 test suite. In addition, the transition from the exploration to the exploitation phase of
IHAOHHO is simple. In future work, the transition mechanism can be improved to provide a better
balance between the exploration and exploitation phases of this algorithm. Besides, the IHAOHHO
algorithm can only solve single-objective optimization problems. A multi-objective version of
IHAOHHO may be developed to solve multi-objective problems in the future.
Acknowledgements
This work was partially supported by the Sanming University Introduced High-level Talents Scientific
Research Start-up Funding Project (20YG01, 20YG14), Guiding science and technology
projects in Sanming City (2020-S-39, 2021-S-8), Educational research projects of young and middle-
aged teachers in Fujian Province (JAT200638, JAT200618), the Scientific research and development fund
of Sanming University (B202029, B202009), Collaborative education project of industry university
cooperation of the Ministry of Education (202002064014), School level education and teaching reform
project of Sanming University (J2010306, J2010305), Higher education research project of Sanming
University (SHE2102, SHE2013).
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
References
1. I. Boussaïd, J. Lepagnot, P. Siarry, A survey on optimization metaheuristics, Inf. Sci., 237 (2013),
82–117.
2. T. Dokeroglu, E. Sevinc, T. Kucukyilmaz, A. Cosar, A survey on new generation metaheuristic
algorithms, Comput. Ind. Eng., 137 (2019), 106040.
3. K. Hussain, M. Salleh, C. Shi, Y. Shi, Metaheuristic research: a comprehensive survey, Artif.
Intell. Rev., 52 (2019), 2191–2233.
4. L. Abualigah, A. Diabat, Advances in sine cosine algorithm: a comprehensive survey, Artif. Intell.
Rev., 54 (2021), 2567–2608.
5. L. Abualigah, A. Diabat, A comprehensive survey of the Grasshopper optimization algorithm:
results, variants, and applications, Neural Comput. Appl., 32 (2020), 15533–15556.
6. J. H. Holland, Genetic algorithms, Sci. Am., 267 (1992), 66–72.
7. R. Storn, K. Price, Differential evolution-a simple and efficient heuristic for global optimization
over continuous spaces, J. Glob. Optim., 11 (1997), 341–359.
8. I. Rechenberg, Evolutionsstrategien, in Simulationsmethoden in der Medizin und Biologie,
Springer, Berlin, Heidelberg, (1978), 83–114.
9. D. Simon, Biogeography-based optimization, IEEE Trans. Evol. Comput., 12 (2008), 702–713.
10. D. Dasgupta, Z. Michalewicz, Evolutionary algorithms in engineering applications, DBLP, 1997.
11. S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Optimization by simulated annealing, Science, 220
(1983), 671–680.
12. E. Rashedi, H. Nezamabadi-pour, S. Saryazdi, GSA: a gravitational search algorithm, Inf. Sci.,
179 (2009), 2232–2248.
13. A. Hatamlou, Black hole: a new heuristic optimization approach for data clustering, Inf. Sci., 222
(2013), 175–184.
14. S. Mirjalili, S. M. Mirjalili, A. Hatamlou, Multi-Verse Optimizer: a nature-inspired algorithm for
global optimization, Neural Comput. Appl., 27 (2015), 495–513.
15. S. Mirjalili, SCA: A sine cosine algorithm for solving optimization problems, Knowl.-Based Syst.,
96 (2016).
16. L. Abualigah, A. Diabat, S. Mirjalili, M. A. Elaziz, A. H. Gandomi, The arithmetic optimization
algorithm, Comput. Methods Appl. Mech. Eng., 376 (2021), 113609.
17. F. Asef, V. Majidnezhad, M. R. Feizi-Derakhshi, S. Parsa, Heat transfer relation-based
optimization algorithm (HTOA), Soft Comput., (2021), 1–30.
18. J. Kennedy, R. Eberhart, Particle swarm optimization, in Proceedings of the 1995 IEEE
International Conference on Neural Networks (ICNN '95), IEEE, 4 (1995), 1942–1948.
19. M. Dorigo, M. Birattari, T. Stutzle, Ant colony optimization, IEEE Comput. Intell., 1 (2006),
28–39.
20. X. S. Yang, Firefly algorithm, stochastic test functions and design optimisation, Int. J. Bio-
Inspired Comput., 2 (2010), 78–84.
21. S. Mirjalili, S. M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Software, 69 (2014), 46–61.
22. A. H. Gandomi, X. S. Yang, A. H. Alavi, Cuckoo search algorithm: a metaheuristic approach to
solve structural optimization problems, Eng. Comput., 29 (2013), 17–35.
23. S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Software, 95 (2016), 51–67.
24. S. Mirjalili, A. H. Gandomi, S. Z. Mirjalili, S. Saremi, H. Faris, S. M. Mirjalili, Salp swarm
algorithm: A bio-inspired optimizer for engineering design problems, Adv. Eng. Software, 114
(2017), 163–191.
25. H. Jia, X. Peng, C. Lang, Remora optimization algorithm, Expert Systems with Applications, 185
(2021), 115665.
26. S. M. Li, H. L. Chen, M. J. Wang, A. A. Heidari, S. Mirjalili, Slime mould algorithm: A new
method for stochastic optimization, Future Gener. Comput. Syst., 111 (2020), 300–323.
27. F. Miarnaeimi, G. Azizyan, M. Rashki, Horse herd optimization algorithm: a nature-inspired
algorithm for high-dimensional optimization problems, Knowl.-Based Syst., 213 (2020).
28. L. Abualigah, D. Yousri, M. A. Elaziz, A. A. Ewees, M. A. A. Al-qaness, A. H. Gandomi, Aquila
Optimizer: a novel meta-heuristic optimization algorithm, Comput. Ind. Eng., 157 (2021), 107250.
29. A. A. Heidari, S. Mirjalili, H. Faris, I. Aljarah, M. Mafarja, H. L. Chen, Harris Hawks optimization:
algorithm and applications, Future Gener. Comput. Syst., 97 (2019), 849–872.
30. A. M. AlRassas, M. A. A. Al-qaness, A. A. Ewees, S. Ren, M. Abd Elaziz, R. Damaševičius, et
al., Optimized ANFIS model using Aquila Optimizer for oil production forecasting, Processes, 9
(2021), 1194.
31. C. Hao, A. A. Heidari, H. Chen, M. Wang, Z. Pan, A. H. Gandomi, Multi-population differential
evolution-assisted harris hawks optimization: framework and case studies, Future Gener. Comput.
Syst., 111 (2020), 175–198.
32. M. A. Al-Betar, M. A. Awadallah, A. A. Heidari, H. Chen, C. Li, Survival exploration strategies
for harris hawks optimizer, Expert Syst. Appl., 168 (2020), 114243.
33. S. Song, P. Wang, A. A. Heidari, M. Wang, S. Xu, Dimension decided harris hawks optimization
with gaussian mutation: balance analysis and diversity patterns, Knowl.-Based Syst., 215 (2020),
106425.
34. D. Yousri, S. Mirjalili, J. A. T. Machado, S. B. Thanikantie, O. Elbaksawi, A. Fathy, Efficient
fractional-order modified Harris Hawks optimizer for proton exchange membrane fuel cell
modeling, Eng. Appl. Artif. Intell., 100 (2021), 104193.
35. S. Gupta, K. Deep, A. A. Heidari, H. Moayedi, M. Wang, Opposition-based learning Harris hawks
optimization with advanced transition rules: Principles and analysis, Expert Syst. Appl., 158
(2020), 113510.
36. O. Akdag, A. Ates, C. Yeroglu, Modification of harris hawks optimization algorithm with random
distribution functions for optimum power flow problem, Neural Comput. Appl., 33 (2021).
37. D. Yousri, A. Fathy, S. B. Thanikanti, Recent methodology based Harris Hawks optimizer for
designing load frequency control incorporated in multi-interconnected renewable energy plants,
Sustainable Energy Grids Networks, 22 (2020), 100352.
38. H. Jia, C. Lang, D. Oliva, W. Song, X. Peng, Dynamic harris hawks optimization with mutation
mechanism for satellite image segmentation, Remote Sens., 11 (2019), 1421.
39. K. Hussain, N. Neggaz, W. Zhu, E. H. Houssein, An efficient hybrid sine-cosine Harris hawks
optimization for low and high-dimensional feature selection, Expert Syst. Appl., 176 (2021),
114778.
40. X. Bao, H. Jia, C. Lang, A novel hybrid harris hawks optimization for color image multilevel
thresholding segmentation, IEEE Access, 7 (2019), 76529–76546.
41. E. H. Houssein, M. E. Hosney, M. Elhoseny, D. Oliva, M. Hassaballah, Hybrid Harris hawks
optimization with cuckoo search for drug design and discovery in chemoinformatics, Sci. Rep.,
10 (2020), 14439.
42. A. Kaveh, P. Rahmani, A. D. Eslamlou, An efficient hybrid approach based on Harris Hawks
optimization and imperialist competitive algorithm for structural optimization, Eng. Comput.,
(2021), 4598.
43. A. Auger, N. Hansen, A restart cma evolution strategy with increasing population size, IEEE
Congr. Evol. Comput., 2 (2005), 1769–1776.
44. R. Tanabe, A. S. Fukunaga, Improving the search performance of SHADE using linear population
size reduction, IEEE Congr. Evol. Comput., 2014.
45. G. Dhiman, SSC: A hybrid nature-inspired meta-heuristic optimization algorithm for engineering
applications, Knowl.-Based Syst., 222 (2021), 106926.
46. I. Ahmadianfar, A. A. Heidari, A. H. Gandomi, X. Chu, H. Chen, RUN beyond the metaphor: an
efficient optimization algorithm based on Runge Kutta method, Expert Syst. Appl., 181 (2021),
115079.
47. M. Banaie-Dezfouli, M. H. Nadimi-Shahraki, Z. Beheshti, R-GWO: Representative-based grey wolf
optimizer for solving engineering problems, Appl. Soft Comput., (2021), 107328.
48. H. Tizhoosh, Opposition-based learning: A new scheme for machine intelligence, in Proceedings
of the International Conference on Computational Intelligence for Modeling, (2005), 695–701.
49. S. Rahnamayan, H. R. Tizhoosh, M. M. A. Salama, Opposition-based differential evolution, IEEE
Trans. Evol. Comput., 12 (2008), 64–79.
50. Z. Jia, L. Li, S. Hui, Artificial bee colony using opposition-based learning, Adv. Intell. Syst.
Comput., 329 (2015), 3–10.
51. M. A. Elaziz, D. Oliva, S. Xiong, An improved opposition-based sine cosine algorithm for global
optimization, Expert Syst. Appl., 90 (2017), 484–500.
52. A. A. Ewees, M. A. Elaziz, E. H. Houssein, Improved grasshopper optimization algorithm using
opposition-based learning, Expert Syst. Appl., 112 (2018), 156–172.
53. C. Fan, N. Zheng, J. Zheng, L. Xiao, Y. Liu, Kinetic-molecular theory optimization algorithm
using opposition-based learning and varying accelerated motion, Soft Comput., 24 (2020).
54. N. H. Awad, M. Z. Ali, P. N. Suganthan, J. J. Liang, B. Y. Qu, Problem definitions and evaluation
criteria for the CEC2017, in Special Session and Competition on Single Objective Real-Parameter
Numerical Optimization, IEEE Congress on Evolutionary Computation, 2017.