School of Electrical & Information Engineering, Anhui University of Science and Technology,
Huainan 232001, China; than@aust.edu.cn (T.H.); 2023200859@aust.edu.cn (H.W.); lqz990709@163.com (Q.L.);
hyr628@163.com (Y.H.)
* Correspondence: 2023200766@aust.edu.cn
There are a wide variety of metaheuristic algorithms, which can be categorized into three
groups based on their inspiration and working principles: evolution-based algorithms,
group intelligence-based algorithms, and algorithms based on physical principles [4].
Evolution-based algorithms drive the progress of an entire population toward the optimal
solution by simulating nature's law of the survival of the fittest (Darwinian selection) [5].
The most prominent examples are genetic algorithms (GA) [6] and differential evolution (DE) [7].
Genetic algorithms simulate the process of biological evolution and optimize solutions through
selection, crossover, and mutation operations; their strong global search ability makes them
suitable for discrete optimization problems. Differential evolution generates new solutions
through difference operations between individuals in a population and excels at nonlinear and
multimodal optimization problems. Group intelligence-based algorithms [8,9] aim to produce a
globally optimal solution by simulating collective intelligence, where each group is a biological
population. The most representative examples are particle swarm optimization (PSO) [10] and ant
colony optimization (ACO) [11], which use the cooperative behavior of a population to accomplish
tasks that individuals cannot complete alone. PSO simulates the social behavior of bird or fish
flocks and achieves global optimization through collaboration among individuals, which is
simple, efficient, and suitable for continuous optimization problems. The ACO simulates
the foraging behavior of ants and optimizes the paths through a pheromone mechanism,
which is excellent in path optimization problems. There are also many other popular
algorithms, such as the artificial bee colony algorithm [12], which simulates the foraging be-
havior of bees to optimize solutions through information sharing and collaboration, the bat
optimization algorithm [13], which simulates the echolocation behavior of bats to optimize
solutions through frequency and amplitude adjustment, and the gray wolf optimization
algorithm [14], which simulates the collaboration and competition between leaders and
followers in gray wolf packs. All of these algorithms have strong global search capabilities.
The firefly algorithm (FA), which simulates the behavior of fireflies glowing to attract mates,
optimizes solutions through light intensity and movement rules for multi-peak optimiza-
tion problems. The fundamental concept of physical principle-based algorithms, of which
simulated annealing (SA) [15] is the best example, is to use natural processes or physics
principles as the basis for search techniques used to solve complex optimization problems.
It mimics the annealing process of solids and performs well in combinatorial optimization
problems by controlling the “temperature” parameter to balance global exploration and
local exploitation in the search process. In addition to the above algorithms, others include
the gravitational search algorithm (GSA) [16] and the water cycle algorithm (WCA) [17].
The GSA optimizes the solution by simulating gravitational interactions between celestial
bodies and using mutual attraction between masses, demonstrating a strong global search
capability. The WCA, on the other hand, simulates water cycle processes in nature and uses
the convergence and dispersion mechanism of water flow to optimize the solution, which
also has excellent global search performance. In addition, there are special types of hybrid
optimization algorithms, which combine the features of two or more metaheuristics to
enhance the performance of the algorithms by incorporating different search mechanisms.
For example, the hybrid particle swarm optimization algorithm with differential evolution
(DEPSO [18]) combines the population intelligence of the particle swarm optimization
algorithm and the variability capability of differential evolution, which enables DEPSO to
efficiently balance global and local searches and to improve the efficiency and effectiveness
of the optimization process, especially for global optimization problems in continuous
space. Based on a three-phase model that includes hippopotamus positioning in rivers and
ponds, defense strategies against predators, and escape strategies, the hippopotamus
optimization (HO) algorithm is a new algorithm inspired by hippopotamus population behaviors
that was proposed by Amiri et al. [19] in 2024. In the optimization sector, the HO algorithm
stands out for its excellent performance: it is able to quickly identify and converge
to the optimal solution and effectively avoid falling into local minima. The algorithm's
efficient local search strategy and fast optimality-finding speed enable it to excel in solving
complex problems. It effectively balances global exploration and local exploitation and is
able to quickly find high-quality solutions, making it an effective tool for solving complex
optimization problems.
Currently, metaheuristic algorithms have a wide range of application prospects in
the field of engineering optimization. Hu [20] et al. used four metaheuristic algorithms,
namely, the African vulture optimization algorithm (AVOA), the teaching–learning-based
optimization algorithm (TLBO), the sparrow search algorithm (SSA), and the gray wolf
optimization algorithm (GWO), to optimize a hybrid model, and proposed the integrated
prediction of steady-state thermal performance data for energy piles. Sun et al. [21]
addressed industrial design problems by proposing a fuzzy logic particle swarm optimization
algorithm with an associated constraint-handling method: a particle swarm optimizer serves as
the searcher, and a set of fuzzy logic rules integrating the feasibility of each individual
is designed to enhance its searching ability. Wu et al. [22] addressed the ant colony
optimization algorithm's limitations, such as early blind searching, slow convergence, and low
path smoothness, and proposed an ant colony optimization algorithm based on farthest-point
optimization and a multi-objective strategy. Palanisamy and Krishnaswamy [23] used hybrid
HHO-PSO (Harris hawks optimization combined with particle swarm optimization) for the failure
testing of wire ropes in terms of hardness, wear, tensile strength, and fatigue life and
adopted an HHO-based artificial neural network (Hybrid ANN-HHO) to predict
the performance of the experimental wire ropes. Liu et al. [24] proposed an improved
adaptive hierarchical arithmetic optimization algorithm (HSMAOA) in response to problems such
as premature convergence and entrapment in local optima that arithmetic optimization
algorithms face when dealing with complex optimization problems. Cui et al. [25] combined
the whale optimization algorithm (WOA) with an attention mechanism (ATT) and
convolutional neural networks (CNNs) to optimize the hyperparameters of the LSTM
model and proposed a new load prediction model to address the over-reliance of most
methods on default hyperparameter settings. Che et al. [26] used a circular chaotic map
as well as a nonlinear function for multi-strategy improvement of the whale optimization
algorithm (WOA) and used the improved WOA to optimize the key parameters of the LSTM
to improve its performance and modeling time. Elsisi [27] used a different learning process
based on the improved gray wolf optimizer (IGWO) and the fitness–distance balancing (FDB)
methodology to balance the original gray wolf optimizer's exploration and exploitation
and to design a new automated adaptive model predictive control (AMPC) scheme for
self-driving cars, solving the rectification problem of self-driving car parameters and the
uncertainty of the vision system. Karaman et al. [28] used the artificial bee colony (ABC)
optimization algorithm to search for the optimal hyperparameters and activation function
of the YOLOv5 algorithm and enhance the accuracy of polyp detection in colonoscopy.
Yu and Zhang [29], in order to minimize the wake flow effect, proposed an adaptive moth
flame optimization algorithm with enhanced detection exploitation capability (MFOEE)
to optimize the turbine layout of wind farms. Dong [30] et al. optimized the genetic
algorithm (GA) based on the characteristics of flood avoidance path planning and proposed
an improved ant colony genetic optimization hybrid algorithm (ACO-GA) to achieve
dynamic planning of evacuation paths for dam-break floods. Shanmugapriya et al. [31]
proposed an IoT-based HESS energy management strategy for electric vehicles by
optimizing the weight parameters of a neural network using the COA technique to improve
the SAGAN algorithm and thereby extend the battery life of electric vehicles. Beşkirli
and Dağ [32] proposed an improved CPA algorithm (I-CPA) based on the instructional
factor strategy and applied it to the problem of solar photovoltaic (PV) module parameter
identification in order to improve the accuracy and efficiency of PV model parameter
estimation. Beşkirli and Dağ [33] proposed a multi-strategy-based tree seed algorithm (MS-
TSA) which effectively improves the global search capability and convergence performance
of the algorithm by introducing an adaptive weighting mechanism, a chaotic elite learning
method, and an experience-based learning strategy. It performs well in both CEC2017 and
CEC2020 benchmark tests and achieves significant optimization results in solar PV model
parameter estimation. Liu [34] et al. proposed an improved DBO algorithm and applied it
to the optimal design of off-grid hybrid renewable energy systems to evaluate the energy
cost with life cycle cost as the objective function. However, the above algorithms face the
challenges of data size and complexity in practical applications and still suffer from the
problem of easily falling into local optima, low efficiency, and insufficient robustness, which
limit the performance and applicability of the algorithms.
When solving real-world problems, the HO algorithm excels due to its adaptability and
robustness and is able to maintain stable performance in a wide range of optimization prob-
lems, making it an ideal choice for fast and efficient optimization problems. Maurya [35]
et al. used the hippopotamus optimization algorithm (HO) to optimize distributed generation
planning and network reconfiguration, considering different loading models, in order to
improve the performance of a power grid. Chen et al. [36] addressed the limitations
of the VMD algorithm and improved it by using the excellent optimization capability
of the HO algorithm to achieve preliminary denoising, and in doing so, proposed a
sub-synchronous oscillation mode identification method based on the hippopotamus
optimization-variational modal decomposition (HO-VMD) and singular value
decomposition-regularized total least squares-Prony (SVD-RTLS-Prony) algorithms. Ribeiro and Muñoz [37] used particle swarm
optimization, hippopotamus optimization, and differential evolution algorithms to tune a
controller with the aim of minimizing the root mean square (RMS) current of the batteries
in an integrated vehicle simulation, thus mitigating battery stress events and prolonging
its lifetime. Wang [38] et al. used an improved hippopotamus optimization algorithm
(IHO) to improve solar photovoltaic (PV) output prediction accuracy. The IHO algorithm
addresses the limitations of traditional algorithms in terms of search efficiency, convergence
speed, and global searching. Mashru [39] et al. proposed the multi-objective hippopotamus
optimizer (MOHO), which is a unique approach that excels in solving complex structural
optimization problems. Abdelaziz [40] et al. used the hippopotamus optimization algo-
rithm (HO) to optimize two key metrics and proposed a new optimization framework to
cope with the problem of the volatility of renewable energy generation and unpredictable
electric vehicle charging demand to enhance the performance of the grid. Baihan [41] et al.
proposed an optimizer-optimized CNN-LSTM approach that hybridizes the hippopotamus
optimization algorithm (HOA) and the pathfinder algorithm (PFA) with the aim of improv-
ing the accuracy of sign language recognition. Amiri [42] et al. designed and trained two
new neuro-fuzzy networks using the hippopotamus optimization algorithm with the aim of
creating an anti-noise network with high accuracy and low parameter counts for detecting
and isolating faults in gas turbines in power plants. In addition to the above applications,
there are many global optimization and engineering design problems. However, the theory
of “no-free-lunch” (NFL) states that no optimization algorithm can solve all problems [43],
and each existing optimization algorithm can only achieve the expected results on certain
types of problems, so improvement of the HO algorithm is still necessary. Although the
HO algorithm has many advantages, its performance level decreases when dealing with
complex global optimization and engineering design problems, and it cannot avoid falling
into local optima. It is still necessary to adjust the algorithm parameters and strategies
according to specific problems in practical applications in order to fully utilize its potential.
Therefore, we propose the MHO algorithm to enhance the ability of HO to solve these
problems. The main contributions of this paper are as follows:
• Use the method of the sine chaotic map to replace the original population initialization
method in order to prevent the HO algorithm from settling into local optimal solutions
and to produce high-quality starting solutions.
• Introduce a new convergence factor to alter the growth mechanism of hippopotamus
populations during the exploration phase, improving the global search capability of HO.
• Incorporate a small-hole imaging reverse learning strategy into the hippopotamus
escaping predator stage to avoid interference between dimensions, expand the search
range of the algorithm to avoid falling into a local optimum, and thus improve the
performance of the algorithm.
• The MHO model is evaluated on 23 benchmark functions, its optimization ability is
assessed by comparing it with other algorithms, and three engineering design
problems are successfully solved.
The structure of this paper is as follows: Section 2 presents the hippopotamus al-
gorithm and three methods for enhancing the hippopotamus optimization algorithm;
Section 3 presents experiments and analysis, including evaluating the experimental results
and comparing the MHO algorithm with other algorithms; Section 4 applies MHO to three
engineering design problems; and Section 5 provides a summary of the entire work.
2. Improved Algorithm
2.1. Sine Chaotic Map
A sine chaotic map [44] is a chaotic system that generates chaotic sequences through the
nonlinear transformation of a sinusoidal function; it has become a typical representative
of chaotic maps owing to its simple structure and high efficiency. Its mathematical
expression is

$$x_{k+1} = \alpha \sin(\pi x_k) \tag{1}$$
where k is a non-negative integer; xk ∈ [0, 1] denotes the value of the current iteration step;
and α ∈ [0, 1] is the chaos coefficient control parameter.
The sine map begins to exhibit chaotic behavior when the parameter α is close to 0.87, and
superior chaotic properties can be observed when α is close to 1. Therefore, the intro-
duction of the sine chaotic map into the random initialization of the initial value of the
hippopotamus optimization (HO) algorithm can make the hippopotamus population uni-
formly distributed throughout the search space, which improves the diversity of the initial
population, enhances the global search capability of the HO algorithm, and effectively
avoids falling into the local optimal solution. Figure 1 shows the population distribution
initialized by the algorithm:
In the HO algorithm, a hippopotamus is a candidate solution to the optimization
problem, which means that each hippopotamus’ position in the search space is updated to
represent the values of the decision variables. Thus, each hippopotamus is represented as a
vector and the population of hippopotamuses is mathematically characterized by a matrix.
Similar to traditional optimization algorithms, the initialization phase of HO involves the
generation of a random initial solution, and the vector of decision variables is generated
as follows:
$$X_i : x_{i,j} = lb_j + r \times \left(ub_j - lb_j\right), \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, m \tag{2}$$
where $X_i$ denotes the location of the ith candidate solution, r is a random number in the
range of 0 to 1, and $lb_j$ and $ub_j$ represent the lower and upper limits of the jth decision
variable, respectively. Let N denote the number of hippopotamuses in the herd, while m
denotes the number of decision variables in the problem; the population matrix is
composed as in Equation (3).
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,m} \end{bmatrix}_{N \times m} \tag{3}$$
and furthermore,
$$\mathrm{Sine\_chaos} = \alpha \sin(k \pi x) \tag{5}$$
where k is a parameter that controls the chaotic behavior and x is an initial value.
Figure 1. Comparison of the distribution of algorithmic initialization: (a) histogram of frequency distribution of conventional random initialization; (b) scatter plot of the distribution of conventional random initialization in two-dimensional space; (c) histogram of frequency distribution of sinusoidal chaotic map initialization; and (d) scatter plot of the distribution of sinusoidal chaotic map initialization in two-dimensional space.
In Equation (6), $X_i^{Mhippo}$ denotes the position of the male hippopotamus and $D_{hippo}$
indicates the location of the dominant hippopotamus. As shown in Equation (7), $\vec{r}_{1,\ldots,4}$
are random vectors between 0 and 1, $r_5$ is a random number between 0 and 1, and $I_1$ and $I_2$ are
integers between 1 and 2. $MG_i$ is the average of a number of randomly selected hippopotamuses,
which includes the currently considered hippopotamus with equal probability, $y_1$ is
a random number between 0 and 1, and $e_1$ and $e_2$ are random integers that can be either 1
or 0.
$$h = \begin{cases} I_2 \times \vec{r}_1 + (\sim e_1) \\ 2 \times \vec{r}_2 - 1 \\ \vec{r}_3 \\ I_1 \times \vec{r}_4 + (\sim e_2) \\ r_5 \end{cases} \tag{7}$$

$$T = \exp\left(-\frac{t}{\mathrm{Max\_iterations}}\right) \tag{8}$$
$$X_i^{FBhippo} : x_{ij}^{FBhippo} = \begin{cases} x_{ij} + h_1 \cdot \left(D_{hippo} - I_2\, MG_i\right), & T > 0.6 \\ \Xi, & \text{else} \end{cases} \tag{9}$$

$$\Xi = \begin{cases} x_{ij} + h_2 \cdot \left(MG_i - D_{hippo}\right), & r_6 > 0.5 \\ lb_j + r_7 \left(ub_j - lb_j\right), & \text{else} \end{cases} \tag{10}$$

$$\text{for } i = 1, 2, \ldots, \tfrac{N}{2} \text{ and } j = 1, 2, \ldots, m$$
Equations (9) and (10) describe the position of the female or immature hippopotamus
in the herd ($X_i^{FBhippo}$). The majority of immature hippos are with their mothers, but due
to curiosity, they are sometimes separated from the herd or stay away from
their mothers.
If the convergence factor T is greater than 0.6, this means that the immature hippo has
distanced itself from its mother (Equation (9)). If $r_6$ is greater than 0.5, this means that the
immature hippopotamus has distanced itself from its mother but is still in or near the herd;
otherwise, it has left the herd. Equations (9) and (10) are based on modeling this behavior
for immature and female hippos. Randomly chosen numbers or vectors, denoted as $I_1$ and
$I_2$, are drawn from the set of five scenarios outlined in Equation (7). In Equation (10), $r_7$ is
a random number between 0 and 1. Equations (11) and (12) describe the position update of
female or immature hippos. The objective function value is denoted by Fi :
$$X_i = \begin{cases} X_i^{Mhippo}, & F_i^{Mhippo} < F_i \\ X_i, & \text{else} \end{cases} \tag{11}$$

$$X_i = \begin{cases} X_i^{FBhippo}, & F_i^{FBhippo} < F_i \\ X_i, & \text{else} \end{cases} \tag{12}$$
Using the h vectors and the $I_1$ and $I_2$ scenarios enhances the algorithm's global search and
improves its exploration capabilities.
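For concreteness, the five scenarios of Equation (7) can be sampled as in the Python sketch below; the helper name sample_h and the uniform choice among the five scenarios are assumptions made for illustration.

```python
import numpy as np

def sample_h(dim, rng):
    """Draw one of the five h scenarios of Equation (7).

    I1, I2 are random integers in {1, 2}; e1, e2 are random integers in
    {0, 1}; (~e) is rendered here as (1 - e)."""
    I1, I2 = rng.integers(1, 3, size=2)
    e1, e2 = rng.integers(0, 2, size=2)
    scenarios = [
        I2 * rng.random(dim) + (1 - e1),  # I2 * r1 + (~e1)
        2 * rng.random(dim) - 1,          # 2 * r2 - 1
        rng.random(dim),                  # r3
        I1 * rng.random(dim) + (1 - e2),  # I1 * r4 + (~e2)
        np.full(dim, rng.random()),       # r5, a scalar broadcast to a vector
    ]
    return scenarios[rng.integers(0, 5)]

h1 = sample_h(5, np.random.default_rng(0))
print(h1)
```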
The growth mechanism is improved by introducing a new convergence factor T, which
is specifically designed to dynamically adjust the behavioral patterns of immature hippos,
and the following equation is an improved formulation of T:
$$T = 1 - \left(\frac{t}{\mathrm{Max\_iterations}}\right)^6 \tag{13}$$

$$X_i^{FBhippo} : x_{ij}^{FBhippo} = \begin{cases} x_{ij} + h_1 \cdot \left(D_{hippo} - I_2\, MG_i\right), & T > 0.95 \\ \Xi, & \text{else} \end{cases} \tag{14}$$

$$\Xi = \begin{cases} x_{ij} + h_2 \cdot \left(MG_i - D_{hippo}\right), & r_6 > 0.5 \\ lb_j + r_7 \cdot \left(ub_j - lb_j\right), & \text{else} \end{cases} \tag{15}$$

$$\text{for } i = 1, 2, \ldots, \tfrac{N}{2} \text{ and } j = 1, 2, \ldots, m$$
where t is the current iteration number and Max_iterations is the maximum number of iterations.
Plots of the functions of Equations (8) and (13) before and after the improvement are
shown in Figure 2. The simulated immature hippopotamus individuals will show a higher
propensity to explore within the hippopotamus population or within the surrounding area
when T > 0.95 (Equation (14)). This behavior promotes the algorithm to refine its search in
a local region close to the current optimal solution, thus enhancing the algorithm’s search
accuracy and efficiency in that region. The immature hippo attempts to move away from
the present optimal solution when T ≤ 0.95 and r6 > 0.5. This is a method intended to
prolong the search in order to lower the possibility that the algorithm would fall into a
local optimum and to enable a more thorough investigation of the global solution space
(Equation (15)). The algorithm is able to identify and escape potential local optimality
traps more efficiently this way, thus increasing the probability of finding a globally optimal
solution. When r6 ≤ 0.5, immature hippos perform random exploration, allowing the
algorithm to maintain diversity and avoid premature convergence. This improvement
enhances the HO algorithm’s search capability and adaptability by better simulating the
natural behavior of hippos.
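As a quick numerical check, the following Python sketch evaluates the original factor of Equation (8) and the improved factor of Equation (13) over one run and reports how long each stays above its threshold (0.6 and 0.95, respectively); the horizon of 500 iterations is an arbitrary illustrative choice.

```python
import numpy as np

max_iterations = 500
t = np.arange(1, max_iterations + 1)

T_original = np.exp(-t / max_iterations)      # Equation (8)
T_improved = 1.0 - (t / max_iterations) ** 6  # Equation (13)

# Share of the run in which immature hippos stay in the exploratory branch:
print("original: T > 0.6  for", np.mean(T_original > 0.6))   # about 51%
print("improved: T > 0.95 for", np.mean(T_improved > 0.95))  # about 61%
```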
Figure 2. Plots of convergence factor T before and after improvement.

2.3. Small-Hole Imaging Reverse Learning Strategy

Many academics have proposed the reverse learning strategy to address the issue that most intelligent optimization algorithms are prone to local extremes [45]. The core idea behind this strategy is to create a corresponding reverse solution for the current solution during population optimization, compare the objective function values of these two solutions, and choose the better solution to move on to the next iteration. Based on this approach, this study presents the small-hole imaging reverse learning [46] technique to enhance population variety, which improves the algorithm's global search capability and more accurately approximates the global optimal solution.

The principle of small-hole imaging is shown in Figure 3; it is a combined method integrating pinhole imaging with dimension-by-dimension inverse learning derived from LensOBL [47]. The aim is to find an inverse solution for each dimension of the feasible solution, thus reducing the risk of the algorithm falling into a local optimum.

Figure 3. Schematic diagram of small-hole imaging reverse learning.

Assume that in a certain space there is a flame p with height h whose projection on the X-axis is $X_{best}^{j}$ (the jth-dimensional optimal solution), the upper and lower bounds of the coordinate axes are $a_j$ and $b_j$ (the upper and lower bounds of the jth-dimensional solution), and a screen with a small hole is placed on the base O. The flame passing through the small hole casts an inverted image p′ with height h′ on the receiving screen, and a reversed point $X'_{best}$ (the reversed solution of the jth dimension) is obtained on the X-axis through small-hole imaging. Therefore, from the principle of small-hole imaging, Equation (16) can be derived.

$$\frac{\frac{a_j + b_j}{2} - X_{best}}{X'_{best} - \frac{a_j + b_j}{2}} = \frac{h}{h'} \tag{16}$$
Let h/h′ = n; through this transformation we obtain $X'_{best}$, whose expression is Equation (17); Equation (18) is obtained when n = 1.
$$X'_{best} = \frac{a_j + b_j}{2} + \frac{a_j + b_j}{2n} - \frac{X_{best}}{n} \tag{17}$$

$$X'_{best} = a_j + b_j - X_{best} \tag{18}$$
As can be seen from Equation (18), small-hole imaging reverse learning reduces to the ordinary
general reverse learning strategy when n = 1; in that case, however, it merely maps the current
optimal position to a fixed reverse point through general reverse learning, and this fixed
position is frequently far away from the global optimal position. Therefore, by adjusting the
distance between the receiving screen and the small-hole screen to change the adjustment factor
n, the algorithm can obtain a solution closer to the global optimal position, making it jump out
of the local optimal region and move closer to the global optimal region.
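A minimal Python sketch of the dimension-wise reverse solution of Equations (17) and (18) and the greedy selection described above follows; the function names, the example value n = 2, and the sphere test function are illustrative assumptions.

```python
import numpy as np

def small_hole_reverse(x_best, lb, ub, n=2.0):
    """Small-hole imaging reverse solution, Equation (17); with n = 1 this
    reduces to ordinary reverse learning, X' = a + b - X (Equation (18))."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    mid = (lb + ub) / 2.0
    return mid + mid / n - x_best / n

def select_better(x, x_rev, objective):
    """Keep whichever of the solution and its reverse scores lower."""
    return x if objective(x) <= objective(x_rev) else x_rev

sphere = lambda v: float(np.sum(v ** 2))
x = np.array([80.0, -60.0])
x_rev = small_hole_reverse(x, [-100, -100], [100, 100], n=2.0)
print(x_rev, select_better(x, x_rev, sphere))  # reverse point [-40., 30.] wins
```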
The development phase of the original hippopotamus algorithm describes a hip-
popotamus fleeing from a predator. Another behavior of a hippopotamus facing a predator
occurs when a hippopotamus is unable to repel a predator with its defensive behaviors, so
the hippopotamus tries to get out of the area in order to avoid the predator. This strategy
causes the hippo to find a safe location close to its current position. In the third phase,
the authors simulate this behavior, which improves the algorithm’s local search capabili-
ties. Random places are created close to the hippo’s present location in order to simulate
this behavior.
$$X_i^{HippoE} : x_{ij}^{HippoE} = x_{ij} + r_{10} \cdot \left(lb_j^{local} + s_1 \cdot \left(ub_j^{local} - lb_j^{local}\right)\right) \tag{19}$$

$$(i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, m)$$

$$lb_j^{local} = \frac{lb_j}{t}, \quad ub_j^{local} = \frac{ub_j}{t}, \quad t = 1, 2, \ldots, \tau. \tag{20}$$

$$s = \begin{cases} 2 \times \vec{r}_{11} - 1 \\ r_{12} \\ r_{13} \end{cases} \tag{21}$$
where $X_i^{Hippo\varepsilon}$ is the position of the hippo when it escapes from the predator, searching
to find the closest safe position. Out of the three s situations, $s_1$ is a randomly
selected vector or number (Equation (21)). Better localized search is encouraged by the
possibilities that the s equations take into account; $\vec{r}_{11}$ represents a random vector
between 0 and 1, while $r_{10}$ and $r_{13}$ denote random numbers generated in the range of 0 to 1.
In addition, $r_{12}$ is a normally distributed random number, t denotes the current iteration
number, and τ denotes the maximum iteration number.
$$X_i = \begin{cases} X_i^{Hippo\varepsilon}, & F_i^{Hippo\varepsilon} < F_i \\ X_i, & F_i^{Hippo\varepsilon} \geq F_i \end{cases} \tag{22}$$
The fact that the fitness value improved at the new position suggested that the hip-
popotamus had relocated to a safer area close to its original location.
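The predator-escape step of Equations (19)–(22) can be sketched in Python as follows; the scenario sampling for s1, the clipping to the global bounds, and the function signature are our own assumptions rather than the authors' code.

```python
import numpy as np

def escape_update(x, fitness, lb, ub, t, objective, rng):
    """One exploitation (escape) step of Equations (19)-(22) for a single hippo."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    lb_local, ub_local = lb / t, ub / t          # Equation (20)
    dim = x.size
    # One of the three s scenarios of Equation (21):
    s1 = [2 * rng.random(dim) - 1, rng.normal(), rng.random()][rng.integers(0, 3)]
    r10 = rng.random()
    x_new = x + r10 * (lb_local + s1 * (ub_local - lb_local))  # Equation (19)
    x_new = np.clip(x_new, lb, ub)
    f_new = objective(x_new)
    # Equation (22): greedy replacement if the new position is safer (fitter).
    return (x_new, f_new) if f_new < fitness else (x, fitness)

rng = np.random.default_rng(1)
sphere = lambda v: float(np.sum(v ** 2))
x = np.array([10.0, -20.0])
print(escape_update(x, sphere(x), [-100, -100], [100, 100], t=5,
                    objective=sphere, rng=rng))
```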
Incorporating the small-hole imaging reverse learning strategy into the HO algorithm
can effectively improve the diversity and optimization efficiency of the algorithm. This
strategy enhances population diversity and expands the search range through chaotic
sequences while mapping the optimal solution dimension by dimension to reduce inter-
dimensional interference and improve global search capability. Additionally, it enhances
stability, lowers the possibility of a local optimum, dynamically modifies the search range,
and synchronizes the global search with the local exploitation capabilities, all of which help
the algorithm to find a better solution with each iteration.
Figure 4. Flowchart of MHO algorithm.
2.5. Computational Complexity

Time complexity is a basic index for evaluating the efficiency of algorithms, and it is analyzed in this paper using the Big-O method [48]. Assuming that the population size is P, the dimension is D, and the number of iterations is T, the time complexities of the HO algorithm and the MHO algorithm can be analyzed accordingly.

3. Experiment

In this section, a series of experiments is designed to validate the effectiveness of the improvements to the HO algorithm. We choose 23 benchmark test functions to evaluate the MHO algorithm and perform comparison experiments with nine other meta-heuristic algorithms.
Mean: the average of the results obtained by the algorithm over repeated executions of a test function; the smaller the mean, the better the average performance. The formula is shown in Equation (23):

$$\mathrm{Mean} = \frac{1}{S} \sum_{i=1}^{S} F_i \tag{23}$$
where S is the number of executions and Fi denotes the result of the ith execution.
Standard deviation: the standard deviation calculated by the algorithm after executing
the test functions many times. The smaller the standard deviation, the more stable the
performance of the algorithm, which usually means that the algorithm has better robustness.
The formula is shown in Equation (24):
$$\mathrm{Std} = \sqrt{\frac{1}{S} \sum_{i=1}^{S} \left(F_i - \frac{1}{S} \sum_{i=1}^{S} F_i\right)^2} \tag{24}$$
Rank: ranks the results of the Friedman test for all algorithms; the lower the mean
and Std, the higher the rank. Algorithms with the same result are given comparative
ranks to each other. "Rank-Count" represents the cumulative sum of the ranks, "Ave-Rank"
represents the average of the ranks, and "Overall-Rank" is the final ranking of the
algorithms in all comparisons.
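The Rank-Count, Ave-Rank, and Overall-Rank statistics just described can be computed as in the Python sketch below, which assumes SciPy is available and uses rankdata for tie-aware (comparative) ranks; the toy result matrix is invented purely for illustration.

```python
import numpy as np
from scipy.stats import rankdata

def friedman_ranks(results):
    """results: (n_functions x n_algorithms) array of mean errors (lower is better)."""
    ranks = np.vstack([rankdata(row) for row in results])  # ties get average ranks
    rank_count = ranks.sum(axis=0)                         # Rank-Count
    ave_rank = ranks.mean(axis=0)                          # Ave-Rank
    overall = rankdata(ave_rank, method="min")             # Overall-Rank
    return rank_count, ave_rank, overall

res = np.array([[1e-9, 2e-3, 5e-1],
                [3e-8, 3e-8, 7e-2],
                [2e-6, 4e-4, 9e-3]])
print(friedman_ranks(res))
```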
Function | Dimension | Domain | Theoretical Optimum
$f_1(x) = \sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
$f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10, 10] | 0
$f_3(x) = \sum_{i=0}^{n-1} \left( \sum_{j=0}^{i} x_j \right)^2$ | 30 | [−100, 100] | 0
$f_4(x) = \max_i \{ |x_i|, 1 \leq i \leq n \}$ | 30 | [−100, 100] | 0
$f_5(x) = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + (x_i - 1)^2 \right]$ | 30 | [−30, 30] | 0
$f_{14}(x) = \left[ 1/500 + \sum_{j=1}^{25} 1 / \left( j + \sum_{i=1}^{2} \left( x_i - a_{ij} \right)^6 \right) \right]^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x) = \sum_{i=1}^{11} \left[ a_i - x_1 \left( b_i^2 + b_i x_2 \right) / \left( b_i^2 + b_i x_3 + x_4 \right) \right]^2$ | 4 | [−5, 5] | 0.00003075
As can be seen in Table 2, for $f_7$, $f_{12}$, and $f_{20}$, the best results are achieved under all four
p/t settings. For the functions $f_{16}$, $f_{19}$, $f_{20}$, $f_{21}$, and $f_{23}$, a p/t of 20/750
yields the smallest standard deviation, while a p/t of 30/500 exhibits smaller Std values for
$f_{14}$ and $f_{17}$. Rank-Count is the sum of the rank values over all functions for the same
p/t setting, and the Rank-Count value of 32.5 for a p/t of 30/500 is the smallest. The Friedman
test shows that first place in the final ranking (Overall-Rank) goes to a p/t of 30/500, so it
can be concluded that this setting gives the best experimental result, and it is fixed as the
parameter setting for the experiments in this paper.
For the fixed-dimensional test functions $f_{14} \sim f_{23}$, MHO outperforms the other algorithms
in terms of both mean and standard deviation on the six functions $f_{14}$, $f_{15}$, and
$f_{20} \sim f_{23}$; on the four functions $f_{16} \sim f_{19}$, the standard deviation is slightly
worse than that of the other algorithms, but the mean values are optimal.
Summarizing the above results, it can be seen that the MHO algorithm shows a clear
advantage on the benchmark functions. Whether on single-peak, multi-peak, or hybrid
functions, it shows excellent optimization performance and stability. These results fully
demonstrate the effectiveness and superiority of the MHO algorithm in solving complex
optimization problems.
Function MHO HO HO1 HO2 HO3 HHO HBA DBO PSO WOA
f1 3 3 3 3 3 7 9 6 10 8
f2 1.5 4 3 1.5 5 8 6 7 10 9
f3 3 3 3 3 3 7 6 8 9 10
f4 1.5 3 4 1.5 5 8 6 7 9 10
f6 3 4 5 2 6 1 8 7 10 9
f9 4 4 4 4 4 4 4 9 10 8
f 10 4 4 4 4 4 4 10 4 9 8
f 11 4.5 4.5 4.5 4.5 4.5 4.5 4.5 4.5 10 9
f 12 3 5 6 4 7 1 8 2 10 9
f 14 3 3 3 3 3 8 7 6 10 9
f 15 1 3 5 2 4 6 10 7 8 9
f 16 5.5 5.5 5.5 5.5 5.5 5.5 5.5 5.5 5.5 5.5
f 19 3.5 3.5 3.5 3.5 3.5 9 7.5 7.5 3.5 10
f 20 1 5 6 2 4 10 8 9 3 7
f 21 3 3 3 3 3 9 6 8 10 7
f 22 3 3 3 3 3 10 6 9 7 8
f 23 3 3 3 3 3 10 7 8 6 9
Rank-Count 50.5 63.5 68.5 52.5 70.5 112 118.5 114.5 140 144.5
Ave-Rank 2.1957 2.7609 2.9783 2.2826 3.0652 4.8696 5.1522 4.9783 6.0870 6.2826
Overall-Rank 1 3 4 2 5 6 8 7 9 10
All the convergence curves of the single-peak functions are shown in Figure 5. The initial solution of MHO is always the lowest among the convergence curves on these seven functions, indicating that it is able to find a good-quality solution at the initial stage. Among them, except for $f_7$, the variant HO2 has curves similar to those of MHO, and the convergence speed as well as the accuracy of MHO is optimal, which reflects the effectiveness of the small-hole imaging reverse learning strategy. Except for PSO on $f_2$ and WOA on $f_3$, all curves tend to converge to the same level. On the $f_7$ function, the convergence behavior of MHO is not similar to that of the other algorithms, but the value of its optimal solution is the smallest, so its overall performance is better than that of the other algorithms.

Figure 6. Convergence plots of multi-peak function.
All convergence curves for the multi-peak function are shown in Figure 6. Again,
the curves of MHO and HO2 are similar on the six functions on the graphs. On the f 8
function, the fitness values of MHO and the other algorithms are significantly lower than those of
HHO, but on all functions other than that, the dominance of MHO is similar to that on the
single-peak function. On f 9 , it can be seen that MHO and the green line of HO2 converge
preferentially, followed by four similar lines for HO, HO1, HO3, and HHO converging one
after another; PSO shows the worst convergence rate and fitness values on f 8 ∼ f 11 .
Figure 7 shows the multimodal function with fixed dimensions. There are few overall
differences between all the algorithms in functions f 14 ∼ f 19 , but there are noticeable
differences between the curves in the detailed presentation. The same characteristics of
MHO are exhibited in all these functions—a rapid decline in the initial period, showing a
fast rate of convergence—and the other algorithms also show faster convergence on specific
functions, but with lower fitness values for MHO. Among the functions f 20 ∼ f 23 , HHO
has the worst overall performance, and MHO shows good optimization performance with
fast convergence speed and optimal solutions on all functions. The other algorithms also
perform well on specific functions but, overall, the MHO algorithm shows competitiveness
in these tests.
Figure 7. Convergence plots of fixed-dimensional multimodal function.
3.7. Stability Analysis

In this section, box-and-line plots are used to analyze the stability of all the algorithms, which are run independently 50 times, again using the experimental results for the 23 benchmark functions. Figures 8–10 show the box-and-line plots for the single-peak functions, the multi-peak functions, and the fixed-dimension multimodal functions, respectively.

In the box-and-line plots, the red horizontal line represents the median; lower medians and fewer outliers indicate better performance of the algorithm on the test function, and the median is the primary metric for evaluating the performance of the algorithms. The blue boxes show the interquartile range (IQR), where a smaller IQR indicates more stable algorithmic performance; thus, MHO, HO, HO1, HO2, HO3, HHO, HBA, and WOA all show better stability. The red crosses represent outliers, and the smaller their number, the better the stability; here, only HHO, HBA, and PSO have outliers, implying that the other algorithms are more stable. The gray dotted line represents the whisker; the longer the whisker, the more discrete the data are, and the longer whisker of DBO shows that it performs poorly. The stability of the algorithms can be analyzed by combining the above evaluation parameters. As a side note, we have chosen some representative examples to keep the data concise.
Figure 8. Boxplots of single-peak function.
Among the single-peak functions shown in Figure 8, MHO has the lowest median, the smallest outliers, and the smallest interquartile spacing, showing better stability, while PSO is the least stable; the other four single-peak functions are not shown, but they are consistent in their general trend with the representative cases shown.
Figure 9. Boxplots of multi‐peak function.
Among the multi-peak functions presented in Figure 9, MHO has the lowest median and performs better in terms of stability, with only slightly more outliers on $f_{11}$ and $f_{13}$ than HHO; among the functions not presented, the overall trend is consistent with the representative cases presented, with MHO being slightly weaker in terms of stability than HHO and with PSO performing the worst.
Figure 10. Boxplots of multimodal functions with fixed dimensions.
Among the multimodal functions shown in Figure 10, MHO has the lowest median and outliers and the best stability, while HHO performs the worst. The general trend in the performance of the algorithms on the non-shown functions is consistent with the shown functions. The combined boxplot analysis of the above algorithms leads to the conclusion that MHO has the best stability.

4. Application to Engineering Design Problems

Three typical engineering constraint problems—reducer design [54], gear train design [55], and step taper pulley design [56]—are chosen for examination in this section in order to further confirm the efficacy of MHO in resolving global optimization issues. Because of their intricate restrictions and multi-objective optimization features, these issues are not only significant in engineering practice but also make excellent examples for evaluating the effectiveness of optimization methods. The trials are set up as 50 rounds of cycles with a maximum number of iterations per round of 50, and we will compare MHO's performance with that of other algorithms to confirm its effectiveness.

4.1. Speed Reducer Design Problem
Table 5. Comparison of the results for the speed reducer design problem.
Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Optimal Cost
MHO | 3.5999 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 8.3000 | 7.7978 | 3.3985 | 5.2935 | 3.0614 × 10^3
HO | 3.5145 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 7.4885 | 7.9394 | 3.3538 | 5.4191 | 3.0942 × 10^3
HO1 | 3.5473 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 7.6056 | 7.9826 | 3.7487 | 5.2885 | 3.1373 × 10^3
HO2 | 3.5747 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 7.3000 | 7.8843 | 3.3548 | 5.4199 | 3.1155 × 10^3
HO3 | 3.5147 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 7.8114 | 8.1951 | 3.3515 | 5.4857 | 3.1476 × 10^3
HHO | 3.5034 | 7.0000 × 10^−1 | 1.8508 × 10^1 | 8.0993 | 7.8316 | 3.7984 | 5.3924 | 3.4771 × 10^3
HBA | 3.5000 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 8.2194 | 7.9955 | 3.5891 | 5.2869 | 3.0754 × 10^3
DBO | 3.6000 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 8.3000 | 7.7154 | 3.9000 | 5.2867 | 3.2093 × 10^3
PSO | 3.6000 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 7.8063 | 8.3000 | 3.9000 | 5.2869 | 3.2164 × 10^3
WOA | 3.6000 | 7.1931 × 10^−1 | 1.7000 × 10^1 | 8.2999 | 7.8366 | 3.3518 | 5.2903 | 3.1389 × 10^3
Seven design variables are involved: face width ($x_1$), module of teeth ($x_2$), number of teeth on the pinion ($x_3$), length of the first shaft between bearings ($x_4$), length of the second shaft between bearings ($x_5$), diameter of the first shaft ($x_6$), and diameter of the second shaft ($x_7$).

Consider $x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7] = [b, m, p, l_1, l_2, d_1, d_2]$.

Minimize
$$f(x) = 0.7854 x_1 x_2^2 \left(3.3333 x_3^2 + 14.9334 x_3 - 43.0934\right) - 1.508 x_1 \left(x_6^2 + x_7^2\right) + 7.4777 \left(x_6^3 + x_7^3\right) + 0.7854 \left(x_4 x_6^2 + x_5 x_7^2\right)$$

Subject to
$$g_1(x) = \frac{27}{x_1 x_2^2 x_3} - 1 \leq 0, \qquad g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \leq 0,$$
$$g_3(x) = \frac{1.93 x_4^3}{x_2 x_6^4 x_3} - 1 \leq 0, \qquad g_4(x) = \frac{1.93 x_5^3}{x_2 x_7^4 x_3} - 1 \leq 0,$$
$$g_5(x) = \frac{\left[\left(745 x_4 / (x_2 x_3)\right)^2 + 16.9 \times 10^6\right]^{1/2}}{110 x_6^3} - 1 \leq 0, \qquad g_6(x) = \frac{\left[\left(745 x_5 / (x_2 x_3)\right)^2 + 157.9 \times 10^6\right]^{1/2}}{85 x_7^3} - 1 \leq 0, \tag{25}$$
$$g_7(x) = \frac{x_2 x_3}{40} - 1 \leq 0, \qquad g_8(x) = \frac{5 x_2}{x_1} - 1 \leq 0, \qquad g_9(x) = \frac{x_1}{12 x_2} - 1 \leq 0,$$
$$g_{10}(x) = \frac{1.5 x_6 + 1.9}{x_4} - 1 \leq 0, \qquad g_{11}(x) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \leq 0$$
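For illustration, the following Python sketch evaluates a penalized version of the speed reducer objective; only a subset of the constraints of Equation (25) is written out (g1–g4 and g7–g9), and the static penalty scheme with its coefficient is an assumption, not the paper's constraint-handling method.

```python
import numpy as np

def speed_reducer_penalized(x, penalty=1e12):
    """Penalized objective for the speed reducer problem (sketch of Equation (25))."""
    x1, x2, x3, x4, x5, x6, x7 = x
    f = (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
         - 1.508 * x1 * (x6**2 + x7**2)
         + 7.4777 * (x6**3 + x7**3)
         + 0.7854 * (x4 * x6**2 + x5 * x7**2))
    g = [
        27.0 / (x1 * x2**2 * x3) - 1,           # g1
        397.5 / (x1 * x2**2 * x3**2) - 1,       # g2
        1.93 * x4**3 / (x2 * x6**4 * x3) - 1,   # g3
        1.93 * x5**3 / (x2 * x7**4 * x3) - 1,   # g4
        x2 * x3 / 40.0 - 1,                     # g7
        5.0 * x2 / x1 - 1,                      # g8
        x1 / (12.0 * x2) - 1,                   # g9
    ]
    violation = sum(max(0.0, gi) ** 2 for gi in g)
    return f + penalty * violation

# A feasible design near the costs reported in Table 5 (objective ~3.0e3):
print(speed_reducer_penalized([3.5, 0.7, 17, 7.3, 7.8, 3.35, 5.29]))
```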
Figure 12. Gear system design problem diagram.
nine algorithms, indicating that MHO achieved a better value and superior performance in
shown in Table 6. The optimal value obtained by MHO is lower than that of the other nine
this problem.
algorithms, indicating that MHO achieved a better value and superior performance in this
problem.
Table 6. Comparison of the results of gear train design problem.
$$h_3(x) = C_1 - C_4 = 0,$$
$$g_{i=1,2,3,4}(x) = -R_i + 2 \leq 0,$$
$$g_{i=1,2,3,4}(x) = (0.75 \times 745.6998) - P_i \leq 0 \tag{27}$$

where
$$C_i = \frac{\pi x_i}{2}\left(1 + \frac{N_i}{N}\right) + \frac{\left(\frac{N_i}{N} - 1\right)^2 x_i^2}{4a} + 2a, \quad i = 1, 2, 3, 4,$$
$$R_i = \exp\left(\mu \left[\pi - 2 \sin^{-1}\left(\left(\frac{N_i}{N} - 1\right) \frac{x_i}{2a}\right)\right]\right), \quad i = 1, 2, 3, 4,$$
$$P_i = s t x_i (1 - R_i) \frac{\pi x_i N_i}{60}, \quad i = 1, 2, 3, 4,$$
$$t = 8~\mathrm{mm}, \quad s = 1.75~\mathrm{MPa}, \quad \mu = 0.35, \quad \rho = 7200~\mathrm{kg/m^3}, \quad a = 3~\mathrm{mm}.$$
Algorithm | d1 | d2 | d3 | d4 | ω | Optimal Cost
MHO | 3.9835 × 10^1 | 5.4824 × 10^1 | 7.3067 × 10^1 | 8.7626 × 10^1 | 8.8851 × 10^1 | 2.7377 × 10^92
HO | 4.0922 × 10^3 | 5.6309 × 10^1 | 7.5110 × 10^1 | 8.9975 × 10^1 | 8.6176 × 10^1 | 1.4460 × 10^93
HO1 | 4.0863 × 10^1 | 5.6226 × 10^1 | 7.4988 × 10^1 | 8.9873 × 10^1 | 8.8590 × 10^1 | 3.3731 × 10^92
HO2 | 4.0683 × 10^1 | 5.5973 × 10^1 | 7.4724 × 10^1 | 8.9455 × 10^1 | 8.9458 × 10^1 | 5.9786 × 10^93
HO3 | 4.0427 × 10^1 | 5.5560 × 10^1 | 7.4205 × 10^1 | 8.8981 × 10^1 | 8.9309 × 10^1 | 9.2168 × 10^93
HHO | 4.1957 × 10^1 | 5.5823 × 10^1 | 8.3645 × 10^1 | 8.6757 × 10^1 | 8.8616 × 10^1 | 5.2464 × 10^97
HBA | 4.0818 × 10^1 | 5.6155 × 10^1 | 7.4881 × 10^1 | 8.9761 × 10^1 | 8.6001 × 10^1 | 4.8097 × 10^92
DBO | 4.0928 × 10^1 | 5.6330 × 10^1 | 7.5113 × 10^1 | 9.0000 × 10^1 | 9.0000 × 10^1 | 8.5877 × 10^92
PSO | 4.0147 × 10^1 | 5.5184 × 10^1 | 7.3648 × 10^1 | 8.8431 × 10^1 | 9.0000 × 10^1 | 1.2714 × 10^94
WOA | 4.0969 × 10^1 | 5.8129 × 10^1 | 7.5737 × 10^1 | 8.7173 × 10^1 | 8.7229 × 10^1 | 8.5726 × 10^96
Author Contributions: T.H.: writing—review and editing, software, formal analysis, and conceptu-
alization. H.W.: writing—review and editing, writing—original draft, software, and methodology.
T.L.: visualization, supervision, resources, and data curation. Q.L.: writing—review and editing,
visualization, funding acquisition, methodology, and conceptualization. Y.H.: supervision, resources,
validation, and funding acquisition. All authors have read and agreed to the published version of
the manuscript.
Funding: This work was supported by the Anhui Provincial Colleges and Universities Collaborative
Innovation Project (GXXT-2023-068), and the Anhui University of Science and Technology Graduate
Innovation Fund Project (2023CX2086).
Data Availability Statement: The data generated from the analysis in this study can be found in this
article. This study does not report the original code, which is available for academic purposes from
the lead contact. Any additional information required to reanalyze the data reported in this paper is
available from the lead contact upon request.
Acknowledgments: We would like to thank the School of Electrical and Information Engineering at
Anhui University of Science and Technology for providing the laboratory.
References
1. Dhiman, G.; Garg, M.; Nagar, A.; Kumar, V.; Dehghani, M. A novel algorithm for global optimization: Rat swarm optimizer.
J. Ambient Intell. Humaniz. Comput. 2021, 12, 8457–8482. [CrossRef]
2. Gharaei, A.; Shekarabi, S.; Karimi, M. Modelling and optimal lot-sizing of the replenishments in constrained, multi-product and
bi-objective EPQ models with defective products: Generalised cross decomposition. Int. J. Syst. Sci. 2020, 7, 262–274. [CrossRef]
3. Sun, Y.; Chen, Y. Multi-population improved whale optimization algorithm for high dimensional optimization. Appl. Soft Comput.
2024, 112, 107854. [CrossRef]
4. Shen, Y.; Zhang, C.; Gharehchopogh, F.; Mirjalili, S. An improved whale optimization algorithm based on multi-population
evolution for global optimization and engineering design problems. Expert Syst. Appl. 2023, 215, 119269. [CrossRef]
5. Slowik, A.; Kwasnicka, H. Evolutionary algorithms and their applications to engineering problems. Neural Comput. Appl. 2020,
32, 12363–12379. [CrossRef]
6. Baluja, S.; Caruana, R. Removing the Genetics from the Standard Genetic Algorithm. In Proceedings of the Twelfth International
Conference on Machine Learning, Tahoe City, CA, USA, 9–12 July 1995.
7. Coelho, L.; Mariani, V. Improved differential evolution algorithms for handling economic dispatch optimization with generator
constraints. Energy Convers. Manag. 2006, 48, 1631–1639. [CrossRef]
8. Ma, H.; Ye, S.; Simon, D.; Fei, M. Conceptual and numerical comparisons of swarm intelligence optimization algorithms. Soft
Comput. 2017, 21, 3081–3100. [CrossRef]
9. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems:
Applications and Trends. IEEE-CAA J. Autom. Sin. 2021, 8, 1627–1643. [CrossRef]
10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural
Networks, Perth, WA, Australia, 27 November–1 December 1995.
11. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [CrossRef]
12. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report-TR06; Technical Report; Erciyes
University: Kayseri, Türkiye, 2005. Available online: https://abc.erciyes.edu.tr/pub/tr06_2005.pdf (accessed on 15 July 2024).
13. Yang, X.; Hossein Gandomi, A. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29,
464–483. [CrossRef]
14. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [CrossRef]
15. Yu, V.F.; Jewpanya, P.; Redi, A.; Tsao, Y.C. Adaptive neighborhood simulated annealing for the heterogeneous fleet vehicle routing
problem with multiple crossdocks. Comput. Oper. Res. 2021, 129, 105205. [CrossRef]
16. Rashedi, E.; Nezamabadipour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [CrossRef]
17. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for
solving constrained engineering optimization problems. Comput. Struct. 2012, 110–111, 151–166. [CrossRef]
18. Wang, S.H.; Li, Y.Z.; Yang, H.Y. Self-adaptive mutation differential evolution algorithm based on particle swarm optimization.
Appl. Soft Comput. 2019, 81, 105496. [CrossRef]
19. Amiri, M.H.; Hashjin, N.M.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032. Available online: https://api.semanticscholar.org/CorpusID:268083241 (accessed on 15 July 2024). [CrossRef] [PubMed]
20. Hu, S.J.; Kong, G.Q.; Zhang, C.S.; Fu, J.H.; Li, S.Y.; Yang, Q. Data-driven models for the steady thermal performance prediction of
energy piles optimized by metaheuristic algorithms. Energy 2024, 313, 134000. [CrossRef]
21. Sun, B.; Peng, P.; Tan, G.; Pan, M.; Li, L.; Tian, Y. A fuzzy logic constrained particle swarm optimization algorithm for industrial
design problems. Appl. Soft Comput. 2024, 167, 112456. Available online: https://api.semanticscholar.org/CorpusID:274134625
(accessed on 15 July 2024). [CrossRef]
22. Wu, S.; Dong, A.; Li, Q.; Wei, W.; Zhang, Y.; Ye, Z. Application of ant colony optimization algorithm based on farthest point
optimization and multi-objective strategy in robot path planning. Appl. Soft Comput. 2024, 167, 112433. [CrossRef]
23. Palanisamy, S.K.; Krishnaswamy, M. Optimization and forecasting of reinforced wire ropes for tower crane by using hybrid
HHO-PSO and ANN-HHO algorithms. Int. J. Fatigue 2024, 190, 108663. [CrossRef]
Biomimetics 2025, 10, 90 30 of 31
24. Liu, J.; Zhao, J.; Li, Y.; Zhou, H. HSMAOA: An enhanced arithmetic optimization algorithm with an adaptive hierarchical
structure for its solution analysis and application in optimization problems. Thin-Walled Struct. 2025, 206, 112631. [CrossRef]
25. Cui, X.; Zhu, J.; Jia, L.; Wang, J.; Wu, Y. A novel heat load prediction model of district heating system based on hybrid whale
optimization algorithm (WOA) and CNN-LSTM with attention mechanism. Energy 2024, 312, 133536. [CrossRef]
26. Che, Z.; Peng, C.; Yue, C. Optimizing LSTM with multi-strategy improved WOA for robust prediction of high-speed machine
tests data. Chaos Soliton Fract. 2024, 178, 114394. [CrossRef]
27. Elsisi, M. Optimal design of adaptive model predictive control based on improved GWO for autonomous vehicle considering
system vision uncertainty. Appl. Soft Comput. 2024, 158, 111581. [CrossRef]
28. Karaman, A.; Pacal, I.; Basturk, A.; Akay, B.; Nalbantoglu, U.; Coskun, S.; Sahin, O.; Karaboga, D. Robust real-time polyp detection
system design based on YOLO algorithms by optimizing activation functions and hyper-parameters with artificial bee colony
(ABC). Expert Syst. Appl. 2023, 221, 119741. [CrossRef]
29. Yu, X.; Zhang, W. Address wind farm layout problems by an adaptive Moth-flame Optimization Algorithm. Appl. Soft. Comput.
2024, 167, 112462. [CrossRef]
30. Dong, K.; Yang, D.W.; Sheng, J.B.; Zhang, W.D.; Jing, P.R. Dynamic planning method of evacuation route in dam-break flood
scenario based on the ACO-GA hybrid algorithm. Int. J. Disaster Risk Reduct. 2024, 100, 104219. [CrossRef]
31. Shanmugapriya, P.; Kumar, T.S.; Kirubadevi, S.; Prasad, P.V. IoT based energy management strategy for hybrid electric storage system in EV using SAGAN-COA approach. J. Energy Storage 2024, 104, 114315. [CrossRef]
32. Beşkirli, A.; Dağ, İ. I-CPA: An Improved Carnivorous Plant Algorithm for Solar Photovoltaic Parameter Identification Problem.
Biomimetics 2023, 8, 569. [CrossRef]
33. Beşkirli, A.; Dağ, İ.; Kiran, M.S. A tree seed algorithm with multi-strategy for parameter estimation of solar photovoltaic models. Appl. Soft Comput. 2024, 167, 112220. [CrossRef]
34. Liu, X.; Wang, J.S.; Zhang, S.B.; Guan, X.Y.; Gao, Y.Z. Optimization scheduling of off-grid hybrid renewable energy systems based on dung beetle optimizer with convergence factor and mathematical spiral. Renew. Energy 2024, 237, 121874. [CrossRef]
35. Maurya, P.; Tiwari, P.; Pratap, A. Application of the hippopotamus optimization algorithm for distribution network reconfigura-
tion with distributed generation considering different load models for enhancement of power system performance. Electr. Eng.
2024, SN-1432-0487. [CrossRef]
36. Chen, Y.; Wu, F.; Shi, L.; Li, Y.; Qi, P.; Guo, X. Identification of Sub-Synchronous Oscillation Mode Based on HO-VMD and
SVD-Regularized TLS-Prony Methods. Energies 2024, 17, 5067. [CrossRef]
37. Ribeiro, A.N.; Muñoz, D.M. Neural network controller for hybrid energy management system applied to electric vehicles.
J. Energy Storage 2024, 104, 114502. [CrossRef]
38. Wang, H.; Binti Mansor, N.N.; Mokhlis, H.B. Novel Hybrid Optimization Technique for Solar Photovoltaic Output Prediction
Using Improved Hippopotamus Algorithm. Appl. Sci. 2024, 14, 7803. [CrossRef]
39. Mashru, N.; Tejani, G.G.; Patel, P.; Khishe, M. Optimal truss design with MOHO: A multi-objective optimization perspective.
PLoS ONE. 2024, 19, e0308474. Available online: https://api.semanticscholar.org/CorpusID:271905232 (accessed on 15 July 2024).
[CrossRef]
40. Abdelaziz, M.A.; Ali, A.A.; Swief, R.A.; Elazab, R. Optimizing energy-efficient grid performance: Integrating electric vehicles,
DSTATCOM, and renewable sources using the Hippopotamus Optimization Algorithm. Sci. Rep. 2024, 14, 28974. [CrossRef]
41. Baihan, A.; Alutaibi, A.I.; Alshehri, M.; Sharma, S.K. Sign language recognition using modified deep learning network and hybrid
optimization: A hybrid optimizer (HO) based optimized CNNSa-LSTM approach. Sci. Rep. 2024, 14, 26111. [CrossRef]
42. Amiri, M.H.; Hashjin, N.M.; Najafabadi, M.K.; Beheshti, A.; Khodadadi, N. An innovative data-driven AI approach for detecting
and isolating faults in gas turbines at power plants. Expert Syst. Appl. 2025, 263, 125497. [CrossRef]
43. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evolut. Comput. 1997, 1, 67–82. Available
online: https://dl.acm.org/doi/10.1109/4235.585893 (accessed on 15 July 2024). [CrossRef]
44. Wang, M.M.; Song, X.G.; Liu, S.H.; Zhao, X.Q.; Zhou, N.R. A novel 2D Log-Logistic–Sine chaotic map for image encryption.
Nonlinear Dyn. 2025, 113, 2867–2896. [CrossRef]
45. Mahdavi, S.; Rahnamayan, S.; Deb, K. Opposition based learning: A literature review. Swarm Evol. Comput. 2018, 39, 1–23. [CrossRef]
46. Jiao, J.; Li, J. Enhanced fireworks algorithm based on particle swarm optimization and reverse learning of small-hole imaging
experiment. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech
Republic, 9–12 October 2022.
47. Yu, F.; Guan, J.; Wu, H.; Chen, Y.; Xia, X. Lens imaging opposition-based learning for differential evolution with cauchy
perturbation. Appl. Soft Comput. 2024, 152, 111211. [CrossRef]
48. Phalke, S.; Vaidya, Y.; Metkar, S. Big-O Time Complexity Analysis Of Algorithm. In Proceedings of the International Conference
on Signal and Information Processing (IConSIP), Pune, India, 26–27 August 2022.
Biomimetics 2025, 10, 90 31 of 31
49. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H.L. Harris hawks optimization: Algorithm and applications.
Future Gener. Comput. Syst. 2019, 97, 849–872. [CrossRef]
50. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic
algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [CrossRef]
51. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J Supercomput. 2022, 79,
7305–7336. [CrossRef]
52. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [CrossRef]
53. Röhmel, J. The permutation distribution of the Friedman test. Comput. Stat. Data. Anal. 1997, 26, 83–99. [CrossRef]
54. Golinski, J. Optimal synthesis problems solved by meansof nonlinear programming and random methods. J. Mech. 1970, 5,
287–309. [CrossRef]
55. Huang, Y.; Liu, Q.; Song, H.; Han, T.; Li, T. CMGWO: Grey wolf optimizer for fusion cell-like P systems. Heliyon 2024, 10, e34496.
[CrossRef]
56. Han, T.; Li, T.; Liu, Q.; Huang, Y.; Song, H. A Multi-Strategy Improved Honey Badger Algorithm for Engineering Design Problems.
Algorithms 2024, 17, 573. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.