
Article

MHO: A Modified Hippopotamus Optimization Algorithm for Global Optimization and Engineering Design Problems
Tao Han , Haiyan Wang, Tingting Li *, Quanzeng Liu and Yourui Huang

School of Electrical & Information Engineering, Anhui University of Science and Technology,
Huainan 232001, China; than@aust.edu.cn (T.H.); 2023200859@aust.edu.cn (H.W.); lqz990709@163.com (Q.L.);
hyr628@163.com (Y.H.)
* Correspondence: 2023200766@aust.edu.cn

Abstract: The hippopotamus optimization algorithm (HO) is a novel metaheuristic algorithm that solves optimization problems by simulating the behavior of hippopotamuses. However, the traditional HO algorithm may encounter performance degradation and fall into local optima when dealing with complex global optimization and engineering design problems. To address these problems, this paper proposes a modified hippopotamus optimization algorithm (MHO) that enhances the convergence speed and solution accuracy of the HO algorithm by introducing a sine chaotic map to initialize the population, changing the convergence factor in the growth mechanism, and incorporating the small-hole imaging reverse learning strategy. The MHO algorithm is tested on 23 benchmark functions and successfully solves three engineering design problems. According to the experimental data, the MHO algorithm obtains the best performance on 13 of these functions and on the three design problems, escapes local optima faster, and shows better ranking and stability than the other nine metaheuristics. The proposed MHO algorithm offers fresh insights into practical engineering problems and parameter optimization.

Keywords: metaheuristic algorithms; hippopotamus optimization; global optimization; engineering design problems

Academic Editor: Heming Jia

Received: 7 January 2025; Revised: 3 February 2025; Accepted: 3 February 2025; Published: 5 February 2025

Citation: Han, T.; Wang, H.; Li, T.; Liu, Q.; Huang, Y. MHO: A Modified Hippopotamus Optimization Algorithm for Global Optimization and Engineering Design Problems. Biomimetics 2025, 10, 90. https://doi.org/10.3390/biomimetics10020090

Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Finding the optimal value of a given objective function under specified constraints is the goal of optimization problems, which are found in a variety of disciplines, including computer science, mathematics, engineering, and economics. All optimization problems consist of three components: the objective function, constraints, and decision variables [1]. Traditional optimization algorithms, such as linear programming, quadratic programming, and dynamic programming, provide a solid mathematical foundation and efficient solutions for deterministic, convex, and well-structured optimization problems. However, they usually require the problem to have a specific mathematical structure, are prone to falling into locally optimal solutions, especially for multi-peak problems where the globally optimal solution is hard to find, and their results strongly depend on the initial values [2]. The rise of metaheuristic algorithms compensates for the limitations of conventional optimization algorithms, as they are very flexible and adaptable and offer new tools and techniques for resolving challenging optimization issues in the real world. These algorithms are independent of the problem's form and do not require knowledge of the objective function's derivatives [3].

Metaheuristics are high-level algorithms that model social behaviors or natural phenomena to discover an approximate optimal solution to complex optimization problems.

There are a wide variety of metaheuristic algorithms, which can be categorized into three
groups based on their inspiration and working principles: evolution-based algorithms,
group intelligence-based algorithms, and algorithms based on physical principles [4].
Evolution-based algorithms are mainly used to realize the overall progress of the popu-
lation and finally complete the optimal solution by simulating the evolutionary law of
superiority and inferiority in nature (Darwin’s law) [5]. Among the most prominent exam-
ples of these are genetic algorithms (GA) [6] and differential evolution (DE) [7]. Genetic
algorithms simulate the process of biological evolution and optimize the solution through
selection, crossover and mutation operations, with strong global search abilities which are
suitable for discrete optimization problems. Differential evolution algorithms generate new solutions through difference operations between individuals in a population and excel at nonlinear and multimodal optimization problems. By simulating
a group’s intelligence, group intelligence-based algorithms [8,9] aim to produce a globally
optimal solution. Each group in this algorithm is a biological population, and the most rep-
resentative examples are the particle swarm optimization algorithm [10] and the ant colony
algorithm [11], which use the cooperative behavior of a population to accomplish tasks that
individuals are unable to complete. The PSO simulates the social behavior of bird or fish
flocks and achieves global optimization through collaboration among individuals, which is
simple, efficient, and suitable for continuous optimization problems. The ACO simulates
the foraging behavior of ants and optimizes the paths through a pheromone mechanism,
which is excellent in path optimization problems. There are also many other popular
algorithms, such as the artificial bee colony algorithm [12], which simulates the foraging be-
havior of bees to optimize solutions through information sharing and collaboration, the bat
optimization algorithm [13], which simulates the echolocation behavior of bats to optimize
solutions through frequency and amplitude adjustment, and the gray wolf optimization
algorithm [14], which simulates the collaboration and competition between leaders and
followers in gray wolf packs. All of these algorithms have strong global search capabilities.
The firefly algorithm (FA), which simulates the behavior of fireflies glowing to attract mates,
optimizes solutions through light intensity and movement rules for multi-peak optimiza-
tion problems. The fundamental concept of physical principle-based algorithms, of which
simulated annealing (SA) [15] is the best example, is to use natural processes or physics
principles as the basis for search techniques used to solve complex optimization problems.
It mimics the annealing process of solids and performs well in combinatorial optimization
problems by controlling the “temperature” parameter to balance global exploration and
local exploitation in the search process. In addition to the above algorithms, others include
the gravitational search algorithm (GSA) [16] and the water cycle algorithm (WCA) [17].
The GSA optimizes the solution by simulating gravitational interactions between celestial
bodies and using mutual attraction between masses, demonstrating a strong global search
capability. The WCA, on the other hand, simulates water cycle processes in nature and uses
the convergence and dispersion mechanism of water flow to optimize the solution, which
also has excellent global search performance. In addition, there are special types of hybrid
optimization algorithms, which combine the features of two or more metaheuristics to
enhance the performance of the algorithms by incorporating different search mechanisms.
For example, the hybrid particle swarm optimization algorithm with differential evolution
(DEPSO [18]) combines the population intelligence of the particle swarm optimization
algorithm and the variability capability of differential evolution, which enables DEPSO to
efficiently balance global and local searches and to improve the efficiency and effectiveness
of the optimization process, especially for global optimization problems in continuous
space. Based on a three-phase model that includes hippopotamus positioning in rivers and
ponds, defense strategies against predators, and escape strategies, the HO is a new algorithm inspired by hippopotamus population behaviors, proposed by Amiri et al. [19] in 2024. In the optimization sector, the hippopotamus optimization (HO) algorithm
stands out for its excellent performance, which is able to quickly identify and converge
to the optimal solution and effectively avoid falling into local minima. The algorithm’s
efficient local search strategy and fast optimality-finding speed enable it to excel in solving
complex problems. It effectively balances global exploration and local exploitation and is
able to quickly find high-quality solutions, making it an effective tool for solving complex
optimization problems.
Currently, metaheuristic algorithms have a wide range of application prospects in
the field of engineering optimization. Hu [20] et al. used four metaheuristic algorithms,
namely, the African vulture optimization algorithm (AVOA), the teaching–learning-based
optimization algorithm (TLBO), the sparrow search algorithm (SSA), and the gray wolf
optimization algorithm (GWO), to optimize a hybrid model and proposed integrated
prediction of steady-state thermal performance prediction data for an energy pile-driven
model. Sun [21] et al. responded to most of the industrial design problems and proposed
a fuzzy logic particle swarm optimization algorithm based on the associative constraints
processing method. A particle swarm optimization algorithm was used as a searcher, and a
set of fuzzy logic rules integrating the feasibility of the individual was designed to enhance
its searching ability. Wu [22] et al. addressed the ant colony optimization algorithm's limitations, such as early blind searching, slow convergence, and low path smoothness, and proposed an ant colony optimization algorithm based on farthest
point optimization and a multi-objective strategy. Palanisamy and Krishnaswamy [23] used hybrid HHO-PSO (hybrid particle swarm optimization) for failure testing of wire ropes, covering hardness, wear analysis, tensile strength, and fatigue life, and adopted a hybrid HHO-based artificial neural network (Hybrid ANN-HHO) to predict
the performance of the experimental wire ropes. Liu [24] et al. proposed an improved
adaptive hierarchical optimization algorithm (HSMAOA) in response to problems such
as premature convergence and falling into local optimization when dealing with complex
optimization problems in arithmetic optimization algorithms. Cui [25] et al. combined the whale optimization algorithm (WOA) with an attention mechanism (ATT) and convolutional neural networks (CNNs) to optimize the hyperparameters of the LSTM
model and proposed a new load prediction model to address the over-reliance of most
methods on the default hyperparameter settings. Che [26] et al. used a circular chaotic map
as well as a nonlinear function for multi-strategy improvement of the whale optimization
algorithm (WOA) and used the improved WOA to optimize the key parameters of the LSTM
to improve its performance and modeling time. Elsisi [27] used a different learning process
based on the improved gray wolf optimizer (IGWO) and fitness–distance balancing (FDB)
methodology to balance the original gray wolf optimizer’s exploration and development
approach and design a new automated adaptive model predictive control (AMPC) for
self-driving cars to solve the rectification problem of self-driving car parameters and the
uncertainty of the vision system. Karaman [28] et al. used the artificial bee colony (ABC)
optimization algorithm to search for the optimal values of the hyperparameters and
activation function of the YOLOv5 algorithm and enhance the accuracy of colonoscopy.
Yu and Zhang [29], in order to minimize the wake flow effect, proposed an adaptive moth
flame optimization algorithm with enhanced detection exploitation capability (MFOEE)
to optimize the turbine layout of wind farms. Dong [30] et al. optimized the genetic
algorithm (GA) based on the characteristics of flood avoidance path planning and proposed
an improved ant colony genetic optimization hybrid algorithm (ACO-GA) to achieve
dynamic planning of evacuation paths for dam-breaking floods. Shanmugapriya [31]
et al. proposed an IoT-based HESS energy management strategy for electric vehicles by
optimizing the weight parameters of a neural network using the COA technique to improve
the SAGAN algorithm in order to improve the battery life of electric vehicles. Beşkirli
and Dağ [32] proposed an improved CPA algorithm (I-CPA) based on the instructional
factor strategy and applied it to the problem of solar photovoltaic (PV) module parameter
identification in order to improve the accuracy and efficiency of PV model parameter
estimation. Beşkirli and Dağ [33] proposed a multi-strategy-based tree seed algorithm (MS-
TSA) which effectively improves the global search capability and convergence performance
of the algorithm by introducing an adaptive weighting mechanism, a chaotic elite learning
method, and an experience-based learning strategy. It performs well in both CEC2017 and
CEC2020 benchmark tests and achieves significant optimization results in solar PV model
parameter estimation. Liu [34] et al. proposed an improved DBO algorithm and applied it
to the optimal design of off-grid hybrid renewable energy systems to evaluate the energy
cost with life cycle cost as the objective function. However, the above algorithms face the
challenges of data size and complexity in practical applications and still suffer from the
problem of easily falling into local optima, low efficiency, and insufficient robustness, which
limit the performance and applicability of the algorithms.
When solving real-world problems, the HO algorithm excels due to its adaptability and
robustness and is able to maintain stable performance in a wide range of optimization prob-
lems, making it an ideal choice for fast and efficient optimization problems. Maurya [35]
et al. used the hippopotamus optimization algorithm (HO) to optimize distributed genera-
tion planning and network reconfiguration in the consideration of different loading models
in order to improve the performance of a power grid. Chen [36] et al. addressed the limita-
tions of the VMD algorithm and improved it by using the excellent optimization capability
of the HO algorithm to achieve preliminary denoising, and in doing so, proposed a single-
sign-on modal identification method based on hippopotamus optimization-variational
modal decomposition (HO-VMD) and singular value decomposition-regularized total least
squares-Prony (SVD-RTLS-Prony) algorithms. Ribeiro and Muñoz [37] used particle swarm
optimization, hippopotamus optimization, and differential evolution algorithms to tune a
controller with the aim of minimizing the root mean square (RMS) current of the batteries
in an integrated vehicle simulation, thus mitigating battery stress events and prolonging
its lifetime. Wang [38] et al. used an improved hippopotamus optimization algorithm
(IHO) to improve solar photovoltaic (PV) output prediction accuracy. The IHO algorithm
addresses the limitations of traditional algorithms in terms of search efficiency, convergence
speed, and global searching. Mashru [39] et al. proposed the multi-objective hippopotamus
optimizer (MOHO), which is a unique approach that excels in solving complex structural
optimization problems. Abdelaziz [40] et al. used the hippopotamus optimization algo-
rithm (HO) to optimize two key metrics and proposed a new optimization framework to
cope with the problem of the volatility of renewable energy generation and unpredictable
electric vehicle charging demand to enhance the performance of the grid. Baihan [41] et al.
proposed an optimizer-optimized CNN-LSTM approach that hybridizes the hippopotamus
optimization algorithm (HOA) and the pathfinder algorithm (PFA) with the aim of improv-
ing the accuracy of sign language recognition. Amiri [42] et al. designed and trained two
new neuro-fuzzy networks using the hippopotamus optimization algorithm with the aim of
creating an anti-noise network with high accuracy and low parameter counts for detecting
and isolating faults in gas turbines in power plants. In addition to the above applications,
there are many global optimization and engineering design problems. However, the “no-free-lunch” (NFL) theorem states that no single optimization algorithm performs best on all problems [43],
and each existing optimization algorithm can only achieve the expected results on certain
types of problems, so improvement of the HO algorithm is still necessary. Although the
HO algorithm has many advantages, its performance level decreases when dealing with
complex global optimization and engineering design problems, and it cannot avoid falling
into local optima. It is still necessary to adjust the algorithm parameters and strategies
according to specific problems in practical applications in order to fully utilize its potential.
Therefore, we propose the MHO algorithm to enhance the ability of HO to solve these
problems. The main contributions of this paper are as follows:
• Use the method of the sine chaotic map to replace the original population initialization
method in order to prevent the HO algorithm from settling into local optimal solutions
and to produce high-quality starting solutions.
• Introduce a new convergence factor to alter the growth mechanism of hippopotamus populations during the exploration phase, which improves the global search capability of HO.
• Incorporate a small-hole imaging reverse learning strategy into the hippopotamus
escaping predator stage to avoid interference between dimensions, expand the search
range of the algorithm to avoid falling into a local optimum, and thus improve the
performance of the algorithm.
• The MHO model is tested on 23 benchmark functions, the optimization ability of the
model is tested by comparing it with other algorithms, and three engineering design
problems are successfully solved.
The structure of this paper is as follows: Section 2 presents the hippopotamus al-
gorithm and three methods for enhancing the hippopotamus optimization algorithm;
Section 3 presents experiments and analysis, including evaluating the experimental results
and comparing the MHO algorithm with other algorithms; Section 4 applies MHO to three
engineering design problems; and Section 5 provides a summary of the entire work.

2. Improved Algorithm
2.1. Sine Chaotic Map
A sine chaotic map [44] is a kind of chaotic system that generates chaotic sequences by
nonlinear transformation of a sinusoidal function, which becomes a typical representative
of a chaotic map due to the advantages of simple structure and high efficiency, and its
mathematical expression is
$$x_{k+1} = \alpha \sin(\pi x_k) \quad (1)$$

where k is a non-negative integer; $x_k \in [0, 1]$ denotes the value of the current iteration step; and $\alpha \in [0, 1]$ is the chaos coefficient control parameter.
The sine map starts chaotic behavior when the parameter α is close to 0.87, and
superior chaotic properties can be observed when α is close to 1. Therefore, the intro-
duction of the sine chaotic map into the random initialization of the initial value of the
hippopotamus optimization (HO) algorithm can make the hippopotamus population uni-
formly distributed throughout the search space, which improves the diversity of the initial
population, enhances the global search capability of the HO algorithm, and effectively
avoids falling into the local optimal solution. Figure 1 shows the population distribution
initialized by the algorithm:
In the HO algorithm, a hippopotamus is a candidate solution to the optimization
problem, which means that each hippopotamus’ position in the search space is updated to
represent the values of the decision variables. Thus, each hippopotamus is represented as a
vector and the population of hippopotamuses is mathematically characterized by a matrix.
Similar to traditional optimization algorithms, the initialization phase of HO involves the
generation of a random initial solution, and the vector of decision variables is generated
as follows:

Xi : xi,j = lb j + r × ub j − lb j , i = 1, 2, . . . , N; j = 1, 2, . . . , m (2)
where Xi denotes the location of the ith candidate solution, r is a random number in the
range of 0~1, and lb and ub represent the lower and upper limits of the jth decision variable,
respectively. Let N denote the population size of hippopotamus in the herd, while m
denotes the number of decision variables in the problem; the population matrix is given by Equation (3).

$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,m} \end{bmatrix}_{N \times m} \quad (3)$$

Consequently, the improved expression for the initialization phase is

$$X_i : x_{i,j} = lb_j + \text{Sine\_chaos} \times (ub_j - lb_j) \quad (4)$$

and furthermore,

$$\text{Sine\_chaos} = \alpha \sin(k \pi x) \quad (5)$$

where k is a parameter that controls the chaotic behavior and x is an initial value.
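As an illustration of this initialization step, the following minimal Python sketch (our own assumption-based example, not the authors' MATLAB implementation; the function name sine_chaos_init is hypothetical) maps the chaotic sequence of Equation (5) onto the decision-variable bounds as in Equation (4):

```python
import numpy as np

def sine_chaos_init(pop_size, dim, lb, ub, alpha=0.99, k=1.0, x0=0.7):
    """Generate an initial population with a sine chaotic map (Equations (4)-(5))."""
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    x = x0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = alpha * np.sin(k * np.pi * x)             # Equation (5): chaotic update
            pop[i, j] = lb[j] + abs(x) * (ub[j] - lb[j])  # Equation (4): map onto [lb_j, ub_j]
    return pop

# Example: 30 hippopotamuses in a 5-dimensional search space bounded by [-100, 100]
population = sine_chaos_init(30, 5, lb=[-100] * 5, ub=[100] * 5)
print(population.shape)  # (30, 5)
```

The absolute value is only a safeguard that keeps the chaotic value in [0, 1] before scaling; any bounded normalization would serve the same purpose.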

Figure 1. Comparison of the distribution of algorithmic initialization: (a) histogram of the frequency distribution of conventional random initialization; (b) scatter plot of the distribution of conventional random initialization in two-dimensional space; (c) histogram of the frequency distribution of sinusoidal chaotic map initialization; and (d) scatter plot of the distribution of sinusoidal chaotic map initialization in two-dimensional space.

2.2. Change Growth Mechanism


The growth mechanism is a key component of the hippopotamus optimization algo-
rithm that determines how the search strategy is updated to find better solutions based on
current information.
In the original growth mechanism, the exploration phase of the HO algorithm models
the activity of the hippopotamus itself in the entire herd. The authors subdivided the whole
population into four segments, i.e., adult female hippopotamus, young hippopotamus,
adult male hippopotamus, and the dominant male hippopotamus (the leader of the herd).
The dominant hippopotamus is determined iteratively based on the value of the objective function (the minimum value for minimization problems and the maximum value for maximization problems).
In a typical hippopotamus herd, several females are positioned around the males, and
the herd leader defends the herd and territory from possible attacks. When hippopotamus
calves reach adulthood, the dominant male ejects them from the herd. Subsequently,
these expelled males are asked to attract females or compete for dominance with other
established male members. The location of the herd’s male hippopotamus in a lake or pond
is represented mathematically by Equation (6).

$$X_i^{Mhippo} : x_{i,j}^{Mhippo} = x_{i,j} + y_1 \cdot \left( D_{hippo} - I_1 x_{i,j} \right) \quad (6)$$

In Equation (6), $X_i^{Mhippo}$ denotes the position of the male hippopotamus and $D_{hippo}$ indicates the location of the dominant hippopotamus. As shown in Equation (7), $\vec{r}_{1,\ldots,4}$ are random vectors between 0 and 1, $r_5$ is a random number between 0 and 1, and $I_1$ and $I_2$ are integers between 1 and 2. $MG_i$ is the average of a number of randomly selected hippopotamuses, which includes the currently considered hippopotamus with equal probability, $y_1$ is a random number between 0 and 1, and $e_1$ and $e_2$ are random integers that can be either 1 or 0.

$$h = \begin{cases} I_2 \times \vec{r}_1 + (\sim e_1) \\ 2 \times \vec{r}_2 - 1 \\ \vec{r}_3 \\ I_1 \times \vec{r}_4 + (\sim e_2) \\ r_5 \end{cases} \quad (7)$$

 
$$T = \exp\left( -\frac{t}{\text{Max\_iterations}} \right) \quad (8)$$
$$X_i^{FBhippo} : x_{i,j}^{FBhippo} = \begin{cases} x_{i,j} + h_1 \cdot \left( D_{hippo} - I_2\, MG_i \right), & T > 0.6 \\ \Xi, & \text{else} \end{cases} \quad (9)$$

$$\Xi = \begin{cases} x_{i,j} + h_2 \cdot \left( MG_i - D_{hippo} \right), & r_6 > 0.5 \\ lb_j + r_7 \cdot \left( ub_j - lb_j \right), & \text{else} \end{cases} \quad (10)$$

$$\text{for } i = 1, 2, \ldots, \lfloor N/2 \rfloor \text{ and } j = 1, 2, \ldots, m$$

Equations (9) and (10) describe the position of the female or immature hippopotamus in the herd ($X_i^{FBhippo}$). The majority of immature hippos are with their mothers, but due
to curiosity, sometimes immature hippos are separated from the herd or stay away from
their mothers.
If the convergence factor T is greater than 0.6, this means that the immature hippo has
distanced itself from its mother (Equation (9)). If $r_6$ is greater than 0.5, this means that the immature hippopotamus has distanced itself from its mother but is still in or near the herd; otherwise, it has left the herd. Equations (9) and (10) are based on modeling this behavior for immature and female hippos. Randomly chosen numbers or vectors, denoted as $h_1$ and $h_2$, are extracted from the set of five scenarios outlined in Equation (7). In Equation (10), $r_7$ is a random number between 0 and 1. Equations (11) and (12) describe the position updates of the male and of the female or immature hippos, respectively. The objective function value is denoted by $F_i$:
$$X_i = \begin{cases} X_i^{Mhippo}, & F_i^{Mhippo} < F_i \\ X_i, & \text{else} \end{cases} \quad (11)$$

$$X_i = \begin{cases} X_i^{FBhippo}, & F_i^{FBhippo} < F_i \\ X_i, & \text{else} \end{cases} \quad (12)$$
Using the h vectors and the $I_1$ and $I_2$ scenarios enhances the algorithm's global search and improves its exploration capabilities.
The growth mechanism is improved by introducing a new convergence factor T, which
is specifically designed to dynamically adjust the behavioral patterns of immature hippos,
and the following equation is an improved formulation of T:

$$T = 1 - \left( \frac{t}{\text{Max\_iterations}} \right)^6 \quad (13)$$
$$X_i^{FBhippo} : x_{i,j}^{FBhippo} = \begin{cases} x_{i,j} + h_1 \cdot \left( D_{hippo} - I_2\, MG_i \right), & T > 0.95 \\ \Xi, & \text{else} \end{cases} \quad (14)$$

$$\Xi = \begin{cases} x_{i,j} + h_2 \cdot \left( MG_i - D_{hippo} \right), & r_6 > 0.5 \\ lb_j + r_7 \cdot \left( ub_j - lb_j \right), & \text{else} \end{cases} \quad (15)$$

$$\text{for } i = 1, 2, \ldots, \lfloor N/2 \rfloor \text{ and } j = 1, 2, \ldots, m$$

where t is the current iteration number and Max_iterations is the maximum number of iterations.
Plots of the functions of Equations (8) and (13) before and after the improvement are
shown in Figure 2. The simulated immature hippopotamus individuals will show a higher
propensity to explore within the hippopotamus population or within the surrounding area
when T > 0.95 (Equation (14)). This behavior drives the algorithm to refine its search in
a local region close to the current optimal solution, thus enhancing the algorithm’s search
accuracy and efficiency in that region. The immature hippo attempts to move away from
the present optimal solution when T ≤ 0.95 and r6 > 0.5. This is a method intended to
prolong the search in order to lower the possibility that the algorithm would fall into a
local optimum and to enable a more thorough investigation of the global solution space
(Equation (15)). The algorithm is able to identify and escape potential local optimality
traps more efficiently this way, thus increasing the probability of finding a globally optimal
solution. When r6 ≤ 0.5, immature hippos perform random exploration, allowing the
algorithm to maintain diversity and avoid premature convergence. This improvement
enhances the HO algorithm’s search capability and adaptability by better simulating the
natural behavior of hippos.
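As a concrete illustration, the following Python sketch (based on our reading of Equations (13)-(15), with hypothetical function names, not the authors' implementation) shows the improved convergence factor and the resulting branching of the immature-hippo update:

```python
import numpy as np

def convergence_factor(t, max_iterations):
    """Improved convergence factor of Equation (13): T = 1 - (t / Max_iterations)^6."""
    return 1.0 - (t / max_iterations) ** 6

def update_immature(x_ij, d_hippo, mg_i, lb_j, ub_j, t, max_iterations, h1, h2):
    """One-dimensional sketch of the female/immature hippo update, Equations (14)-(15)."""
    T = convergence_factor(t, max_iterations)
    I2 = np.random.randint(1, 3)                   # integer in {1, 2}
    if T > 0.95:                                   # Equation (14): refine near the dominant hippo
        return x_ij + h1 * (d_hippo - I2 * mg_i)
    r6, r7 = np.random.rand(), np.random.rand()
    if r6 > 0.5:                                   # Equation (15): stay in or near the herd
        return x_ij + h2 * (mg_i - d_hippo)
    return lb_j + r7 * (ub_j - lb_j)               # otherwise: random re-exploration of the bounds
```

With this definition, T stays above 0.95 for roughly the first 60% of the iterations (until t/Max_iterations is about 0.6), after which the other two branches take over.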

2.3. Small-Hole Imaging Reverse Learning Strategy


Many academics have proposed the reverse learning strategy to address the issue
that most intelligent optimization algorithms are prone to local extremes [45]. The core
idea behind this strategy is to create a corresponding reverse solution for the current
solution during population optimization, compare the objective function values of these
two solutions, and choose the better solution to move on to the next iteration. Based on this approach, this study presents the small-hole imaging reverse learning [46] technique to enhance population variety, which enhances the algorithm's global search capability and more accurately approximates the global optimal solution.

Figure 2. Plots of convergence factor T before and after improvement.

The principle of small-hole imaging is shown in Figure 3, which is a combined method of pinhole imaging and dimension-by-dimension inverse learning derived from LensOBL [47]. The aim is to find an inverse solution for each dimension of the feasible solution, thus reducing the risk of the algorithm falling into a local optimum.

Figure 3. Schematic diagram of small-hole imaging reverse learning.

Assume that in a certain space, there is a flame p with height h whose projection on the X-axis is $X_{best}^{j}$ (the jth dimensional optimal solution), the upper and lower bounds of the coordinate axes are $a_j$ and $b_j$ (the upper and lower bounds of the jth dimensional solution), and a screen with a small hole is placed on the base O. The flame passing through the small hole produces an inverted image p' of height h' on the receiving screen, and then a reversed point $X'_{best}$ (the reversed solution of the jth dimensional solution) is obtained on the X-axis through small-hole imaging. Therefore, from the principle of small-hole imaging, Equation (16) can be derived.

$$\frac{\dfrac{a_j + b_j}{2} - X_{best}}{X'_{best} - \dfrac{a_j + b_j}{2}} = \frac{h}{h'} \quad (16)$$
Let $h/h' = n$; through this transformation we obtain $X'_{best}$, whose expression is Equation (17), and Equation (18) is obtained when n = 1.

$$X'_{best} = \frac{a_j + b_j}{2} + \frac{a_j + b_j}{2n} - \frac{X_{best}}{n} \quad (17)$$

$$X'_{best} = a_j + b_j - X_{best} \quad (18)$$

As can be seen from Equation (18), when n = 1, small-hole imaging reverse learning reduces to the conventional general reverse learning strategy; in that case, it merely applies general reverse learning to the current optimal position to obtain a fixed reverse point, and this fixed position is frequently far away from the global optimal position. Therefore, by adjusting the distance between the receiving screen and the small-hole screen to change the adjustment factor n, the algorithm can obtain a reverse solution closer to the global optimum, helping it jump out of the local optimal region and move closer to the global optimal region.
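The per-dimension reverse step of Equation (17) is simple to express in code. The following Python sketch is our own hedged illustration; the helper names are hypothetical, and the greedy selection mirrors the usual reverse-learning practice of keeping the better of the two candidates:

```python
import numpy as np

def small_hole_imaging_reverse(x_best, lb, ub, n=2.0):
    """Dimension-by-dimension reverse solution of Equation (17).

    With the adjustment factor n = h / h', every dimension of the current best
    solution is mirrored through the midpoint of its bounds; n = 1 reduces to
    the general reverse learning of Equation (18).
    """
    x_best = np.asarray(x_best, dtype=float)
    mid = (np.asarray(lb, dtype=float) + np.asarray(ub, dtype=float)) / 2.0
    return mid + mid / n - x_best / n            # Equation (17)

def keep_better(x_current, x_reverse, objective):
    """Keep whichever of the two candidate solutions has the lower objective value."""
    return x_reverse if objective(x_reverse) < objective(x_current) else x_current

# Example on a 3-dimensional sphere objective (illustrative only)
lb, ub = [-100.0] * 3, [100.0] * 3
x_best = np.array([40.0, -15.0, 60.0])
x_rev = small_hole_imaging_reverse(x_best, lb, ub, n=2.0)
x_best = keep_better(x_best, x_rev, objective=lambda x: float(np.sum(np.asarray(x) ** 2)))
```

Varying n moves the reverse point along the axis, which is exactly the lever the text above uses to escape a fixed, possibly unhelpful, mirror position.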
The development phase of the original hippopotamus algorithm describes a hip-
popotamus fleeing from a predator. Another behavior of a hippopotamus facing a predator
occurs when a hippopotamus is unable to repel a predator with its defensive behaviors, so
the hippopotamus tries to get out of the area in order to avoid the predator. This strategy
causes the hippo to find a safe location close to its current position. In the third phase,
the authors simulate this behavior, which improves the algorithm’s local search capabili-
ties. Random places are created close to the hippo’s present location in order to simulate
this behavior.
  
$$X_i^{HippoE} : x_{i,j}^{HippoE} = x_{i,j} + r_{10} \cdot \left( lb_j^{local} + s_1 \cdot \left( ub_j^{local} - lb_j^{local} \right) \right) \quad (19)$$

$$(i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, m)$$

$$lb_j^{local} = \frac{lb_j}{t}, \quad ub_j^{local} = \frac{ub_j}{t}, \quad t = 1, 2, \ldots, \tau \quad (20)$$
 →
$$s = \begin{cases} 2 \times \vec{r}_{11} - 1 \\ r_{12} \\ r_{13} \end{cases} \quad (21)$$

where $X_i^{HippoE}$ is the position of the hippo when it escaped from the predator, and it is
searched to find the closest safe position. Out of the three s situations, s1 is a randomly
selected vector or number (Equation (21)). Better localized search is encouraged by the
possibilities that the s equations take into account, and r11 represents a random vector
between 0 and 1, while r10 and r13 denote random numbers generated in the range of 0 to 1.
In addition, r12 is a normally distributed random number. t denotes the current iteration
number, while τ denotes the highest iteration number.

$$X_i = \begin{cases} X_i^{HippoE}, & F_i^{HippoE} < F_i \\ X_i, & F_i^{HippoE} \ge F_i \end{cases} \quad (22)$$
 Xi ,F ≥ Fi i

The fact that the fitness value improved at the new position suggested that the hip-
popotamus had relocated to a safer area close to its original location.
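A hedged Python sketch of this escape step follows (Equations (19)-(21), with the acceptance rule of Equation (22) applied by the caller); the function name and the handling of the random scenarios are our own illustrative choices:

```python
import numpy as np

def escape_update(x_i, lb, ub, t, rng=None):
    """Generate a candidate safe position near the current hippo (Equations (19)-(21))."""
    rng = rng or np.random.default_rng()
    x_i = np.asarray(x_i, dtype=float)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    lb_local, ub_local = lb / t, ub / t                 # Equation (20): bounds shrink as t grows
    scenarios = [2.0 * rng.random(x_i.size) - 1.0,      # Equation (21): three candidate s values
                 rng.normal(),
                 rng.random()]
    s1 = scenarios[rng.integers(0, 3)]                  # pick one scenario at random
    r10 = rng.random()
    return x_i + r10 * (lb_local + s1 * (ub_local - lb_local))   # Equation (19)

# Candidate position for a hippo at iteration t = 4; keep it only if it improves the fitness (Eq. (22))
candidate = escape_update(np.array([5.0, -3.0]), lb=[-10, -10], ub=[10, 10], t=4)
```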
Incorporating the small-hole imaging reverse learning strategy into the HO algorithm
can effectively improve the diversity and optimization efficiency of the algorithm. This
strategy enhances population diversity and expands the search range through chaotic
sequences while mapping the optimal solution dimension by dimension to reduce inter-
dimensional interference and improve global search capability. Additionally, it enhances
stability, lowers the possibility of a local optimum, dynamically modifies the search range,
and synchronizes the global search with the local exploitation capabilities, all of which help
the algorithm to find a better solution with each iteration.

2.4. Algorithmic Process


The program details of MHO are shown in the flowchart in Figure 4. Firstly, create an
initial population using the sine chaotic map, and set the iteration counter to i = 1 and the
time counter t = 1. Secondly, it is divided into three phases: When i ≤ N/2, enter the first
phase (Phase 1), which is the position update of the hippopotamus in the river or pond
(exploration phase). Use Equations (9) and (14) to calculate the positions of male and female
hippos, respectively, and update the positions of the hippos using Equations (11) and (12).
When i > N/2, it enters the second phase (phase 2), i.e., hippopotamus defense against
predators, which is consistent with the original hippopotamus algorithm. The third phase
begins when i > N, where the hippopotamus escapes from the predator, and the final
position of the hippopotamus is calculated and the hippo’s nearest safe position is updated
using Equation (17). Finally, if the time counter is at t < T, increase t and reset the iteration
counter i = 1 to continue the iteration. The optimal objective function solution discovered
by the MHO algorithm is output once the maximum number of iterations T has been
reached.
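To make the overall flow concrete, the following self-contained Python sketch mirrors the three-phase loop described above with deliberately simplified update rules; it is an assumption-laden illustration of the control flow (names such as mho_sketch are hypothetical), not the authors' MATLAB implementation:

```python
import numpy as np

def mho_sketch(objective, lb, ub, pop_size=30, max_iterations=500, alpha=0.99):
    """Simplified three-phase MHO-style loop: chaotic initialization, exploration,
    defense, and escape with an Equation (17)-style reverse-learning step."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    rng = np.random.default_rng(0)

    # Sine chaotic map initialization (Equations (4)-(5), simplified)
    x, pop = 0.7, np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = alpha * np.sin(np.pi * x)
            pop[i, j] = lb[j] + abs(x) * (ub[j] - lb[j])
    fit = np.array([objective(p) for p in pop])

    for t in range(1, max_iterations + 1):
        best = pop[np.argmin(fit)].copy()                  # dominant hippopotamus
        T = 1.0 - (t / max_iterations) ** 6                # improved convergence factor (Eq. (13))
        for i in range(pop_size):
            if i < pop_size // 2:                          # phase 1: exploration in the river/pond
                if T > 0.95:
                    cand = pop[i] + rng.random(dim) * (best - rng.integers(1, 3) * pop[i])
                else:
                    cand = pop[i] + rng.random(dim) * (pop.mean(axis=0) - best)
            else:                                          # phase 2: defense against a predator
                predator = lb + rng.random(dim) * (ub - lb)
                cand = pop[i] + rng.random(dim) * (predator - pop[i])
            cand = np.clip(cand, lb, ub)
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
        for i in range(pop_size):                          # phase 3: escape plus reverse learning
            lb_local, ub_local = lb / t, ub / t            # Equation (20)
            cand = pop[i] + rng.random() * (lb_local + rng.random(dim) * (ub_local - lb_local))
            mid = (lb + ub) / 2.0
            reverse = mid + mid / 2.0 - cand / 2.0         # Equation (17) with adjustment factor n = 2
            for c in (np.clip(cand, lb, ub), np.clip(reverse, lb, ub)):
                f = objective(c)
                if f < fit[i]:
                    pop[i], fit[i] = c, f
    return pop[np.argmin(fit)], float(fit.min())

# Example: minimize the 5-dimensional sphere function
best_x, best_f = mho_sketch(lambda v: float(np.sum(v ** 2)), [-100] * 5, [100] * 5,
                            pop_size=20, max_iterations=100)
```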

2.5. Computational Complexity


Time complexity is a basic index to evaluate the efficiency of algorithms, which is
analyzed in this paper by using the method of BIG-O [48]. Assuming that the population
size is P, the dimension is D, and the number of iterations is T, the time complexities of the
HO algorithm and the MHO algorithm are analyzed as follows:
The standard HO algorithm consists of two phases: a random population initializa-
tion phase and a subsequent hippo position update. In the initialization phase, the time
complexity of HO can be expressed as T1 = O( P × D ). In the position updating phase,
the hippopotamus employs position updating in rivers or ponds for defense and escape
from predatory mechanisms. The computational complexity of each iteration is O( P × D ),
and after T iterations, the computational complexity accumulates as T2 = O( T × P × D ).
Therefore, the total time complexity of HO can be expressed as $T_{HO} = T_1 + T_2 = O(T \times P \times D)$.
The proposed MHO algorithm consists of three phases: population initialization
based on sine chaotic mapping, hippopotamus position updating, and the small-hole
imaging reverse learning phase. In the sine chaotic mapping-based population initialization
phase, the time complexity of the MHO initialization is denoted as T1′ and is expressed as
T1′ = O( P × D ). The hippopotamus position update phase is very similar to the HO phase,
with a time complexity which is consistent with HO. In the small-hole imaging reverse
learning phase, the time complexity of this phase, denoted as T3′ = O( P × D ), is executed
in each iteration, resulting in a total complexity of O( T × P × D ). Thus, the overall time
complexity of the MHO algorithm can be summarized as $T_{MHO} = T_1' + T_2 + T_3' = O(T \times P \times D)$. It is worth noting that the time complexity of MHO is comparable
to HO, which indicates that the enhancement strategy proposed in this study does not
affect the solution efficiency of the algorithm.
Figure 4. Flowchart of the MHO algorithm.

3. Experiment

In this section, a series of experiments are designed to validate the effectiveness of the improved HO algorithm; we have chosen 23 benchmark test functions to evaluate the MHO algorithm and to perform comparison experiments with nine other meta-heuristic
algorithms. In addition, ablation experiments of the algorithm were conducted to explore


the contribution and impact of different components in the MHO algorithm.

3.1. Experimental Setup and Evaluation Criteria


To ensure the fairness and validity of the experiments, the HO-based improved algorithm MHO, as well as the other nature-inspired algorithms, were programmed and run in the same environment: Windows 10 on a computer configured with a 12th Gen Intel(R) Core(TM) i5-12600KF 3.70 GHz processor and 16 GB RAM, using MATLAB 2019b. The performance of
the algorithms is evaluated using the following evaluation criteria:
Mean: the average value computed by the algorithm after executing it several times
for the benchmark function tested. The mean value indicates the general effectiveness of
the algorithm in finding the optimal solution, i.e., the desired performance of the algorithm.
A lower mean value indicates that the algorithm is able to find a better solution on average
over multiple runs. The formula is calculated as in Equation (23):

$$\text{Mean} = \frac{1}{S} \sum_{i=1}^{S} F_i \quad (23)$$

where S is the number of executions and Fi denotes the result of the ith execution.
Standard deviation: the standard deviation calculated by the algorithm after executing
the test functions many times. The smaller the standard deviation, the more stable the
performance of the algorithm, which usually means that the algorithm has better robustness.
The formula is shown in Equation (24):
$$\text{Std} = \sqrt{ \frac{1}{S} \sum_{i=1}^{S} \left( F_i - \frac{1}{S} \sum_{i=1}^{S} F_i \right)^2 } \quad (24)$$

Rank: ranks the results of the Friedman test for all algorithms; the lower the mean
and Std, the higher the rank. Algorithms with the same result are given comparative
ranks to each other. “Rank-Count” represents the cumulative sum of the ranks, “Ave-Rank” represents the average of the ranks, and “Overall-Rank” is the final ranking of the
algorithms in all comparisons.
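These criteria are straightforward to compute; the sketch below (our own illustration, assuming NumPy and SciPy are available) reproduces Equations (23) and (24) and the average-rank bookkeeping used for the Friedman-style comparison:

```python
import numpy as np
from scipy.stats import rankdata

def summarize_runs(results):
    """Mean and standard deviation over S independent runs (Equations (23)-(24))."""
    results = np.asarray(results, dtype=float)
    return results.mean(), results.std()

def friedman_ranks(score_table):
    """Rank-Count and Ave-Rank per algorithm.

    score_table has one row per benchmark function and one column per algorithm
    (lower scores are better); ties get the average of the tied ranks.
    """
    ranks = np.vstack([rankdata(row) for row in np.asarray(score_table, dtype=float)])
    return ranks.sum(axis=0), ranks.mean(axis=0)

# Hypothetical example: three algorithms on two functions
rank_count, ave_rank = friedman_ranks([[1e-8, 3e-7, 2e-8],
                                       [5e-4, 5e-4, 9e-4]])
```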

3.2. Test Function


In order to test the improved performance of the MHO algorithm, 23 benchmark
functions with different characteristics are used for testing and the specific function infor-
mation is shown in Table 1, which contains the dimensionality (Dim), the domain, and
the known theoretical optimum of the function. These test functions are grouped into
three categories: single-peak test functions for f 1 ∼ f 7 , multimodal functions for f 8 ∼ f 13 ,
and fixed-dimension functions for f 14 ∼ f 23 . The single-peak benchmark function is char-
acterized by the existence of only one global optimum solution and is monotonic and
deterministic, so it is suitable for evaluating the speed of convergence and the development
capability of optimization algorithms. The multimodal function has multiple local optimal
solutions but only one global optimal solution, which makes it commonly used to test
the global search capability of optimization algorithms and their ability to avoid falling
into local optima. Fixed-dimension multimodal functions, on the other hand, are usually
defined in a specific dimension, meaning that their complexity and difficulty are fixed
and do not change as the dimension changes. This ensures consistency and comparability
of tests.
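For reference, the first function of each of the first two categories can be written directly from Table 1; the short Python sketch below (an illustration we add, not part of any benchmark suite's code) shows the single-peak sphere function f1 and the multimodal Rastrigin function f9:

```python
import numpy as np

def f1_sphere(x):
    """Single-peak benchmark f1: sum of squares, global optimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def f9_rastrigin(x):
    """Multimodal benchmark f9: many local optima, global optimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

print(f1_sphere(np.zeros(30)), f9_rastrigin(np.zeros(30)))  # 0.0 0.0
```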
Table 1. Benchmark functions.

Function | Dimension | Domain | Theoretical Optimum
$f_1(x)=\sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
$f_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 30 | [−10, 10] | 0
$f_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$ | 30 | [−100, 100] | 0
$f_4(x)=\max_i\{|x_i|,\ 1\le i\le n\}$ | 30 | [−100, 100] | 0
$f_5(x)=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$ | 30 | [−30, 30] | 0
$f_6(x)=\sum_{i=1}^{n}\left(\left[x_i+0.5\right]\right)^2$ | 30 | [−100, 100] | 0
$f_7(x)=\sum_{i=1}^{n} i x_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
$f_8(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −12,569.4
$f_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30 | [−5.12, 5.12] | 0
$f_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 30 | [−32, 32] | 0
$f_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30 | [−600, 600] | 0
$f_{12}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$ | 30 | [−50, 50] | 0
with $y_i=1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a<x_i<a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$
$f_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\left[1+\sin^2(3\pi x_i+1)\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0
$f_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1\left(b_i^2+b_i x_2\right)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.00003075
$f_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316285
$f_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398
$f_{18}(x)=\left[1+(x_1+x_2+1)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\times\left[30+(2x_1-3x_2)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−2, 2] | 3
$f_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}\left(x_j-p_{ij}\right)^2\right)$ | 3 | [0, 1] | −3.86
$f_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}\left(x_j-p_{ij}\right)^2\right)$ | 6 | [0, 1] | −3.32
$f_{21}(x)=-\sum_{i=1}^{5}\left[(x-a_i)(x-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10
$f_{22}(x)=-\sum_{i=1}^{7}\left[(x-a_i)(x-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10
$f_{23}(x)=-\sum_{i=1}^{10}\left[(x-a_i)(x-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10
3.3. Sensitivity Analysis


MHO is a population-based optimizer that performs the optimization process through
iterative computation. Therefore, it can be expected that the experimental results are
usually influenced by the number of fitness evaluations (FEs = P ∗ t), where P is the
population size and t is the number of iterations. Most of the studies in the literature fix
FEs at 15,000 iterations, i.e., when P = 30 and t = 500. However, different P and t settings
can have an impact on the algorithm’s performance. Therefore, we chose three different
p/t combinations (20/75000, 30/500, and 60/250) to analyze their effects on the MHO
algorithm. Seventeen test functions were randomly selected for sensitivity analysis, and
the experimental results are shown in Table 2.

Table 2. Sensitivity analysis of p/t.

Function Criterion p/t = 20/750 p/t = 30/500 p/t = 60/250
Mean 0.0000 0.0000 0.0000
f1 Std 0.0000 0.0000 0.0000
Rank 2 2 2
Mean 0.0000 0.0000 0.0000
f2 Std 0.0000 0.0000 0.0000
Rank 2 2 2
Mean 0.0000 0.0000 0.0000
f3 Std 0.0000 0.0000 0.0000
Rank 2 2 2
Mean 0.0000 0.0000 0.0000
f4 Std 0.0000 0.0000 0.0000
Rank 2 2 2
Mean 3.2851 × 10−5 2.8911 × 10−5 2.7563 × 10−5
f7 Std 3.4492 × 10−5 3.1692 × 10−5 2.6425 × 10−5
Rank 3 2 1
Mean 0.0000 0.0000 0.0000
f9 Std 0.0000 0.0000 0.0000
Rank 2 2 2
Mean 9.5923 × 10−16 8.8818 × 10−16 8.8818 × 10−16
f 10 Std 5.0243 × 10−16 0.0000 0.0000
Rank 3 1.5 1.5
Mean 0.0000 0.0000 0.0000
f 11 Std 0.0000 0.0000 0.0000
Rank 2 2 2
Mean 3.6758 × 10−4 1.5255 × 10−4 2.9019 × 10−4
f 12 Std 5.1790 × 10−4 3.5965 × 10−4 4.3157 × 10−4
Rank 3 1 2
Mean 9.9800 × 10−1 9.9800 × 10−1 9.9800 × 10−1
f 14 Std 2.7532 × 10−13 1.5468 × 10−13 5.5052 × 10−13
Rank 2 2 2
Mean −1.0316 −1.0316 −1.0316
f 16 Std 3.2394 × 10−11 3.6381 × 10−10 1.5708 × 10−9
Rank 2 2 2
Mean 3.9789 × 10−1 3.9789 × 10−1 3.9789 × 10−1
f 17 Std 4.4950 × 10−10 3.9781 × 10−10 2.7622 × 10−9
Rank 2 2 2
Mean −3.8628 −3.8628 −3.8628
f 19 Std 1.1802 × 10−8 5.0575 × 10−8 6.2302 × 10−8
Rank 2 2 2
Table 2. Cont.

Function Criterion p/t = 20/750 p/t = 30/500 p/t = 60/250
Mean −3.2904 −3.2849 −3.2794
f 20 Std 5.4536 × 10−2 5.8998 × 10−2 6.7029 × 10−2
Rank 1 2 3
Mean −1.0153 × 101 −1.0153 × 101 −1.0153 × 101
f 21 Std 8.8671 × 10−7 1.7912 × 10−6 3.8080 × 10−6
Rank 2 2 2
Mean −1.0403 × 101 −1.0403 × 101 −1.0403 × 101
f 22 Std 6.7735 × 10−7 1.5216 × 10−6 6.3626 × 10−6
Rank 2 2 2
Mean −1.053 × 101 −1.056 × 101 −1.056 × 101
f 23 Std 7.0815 × 10−7 1.5856 × 10−6 9.3393 × 10−6
Rank 2 2 2
Rank-Count 36 32.5 33.5
Ave-Rank 2.11 1.91 1.97
Overall-Rank 3 1 2

As can be seen in Table 2, for f 7 , f 12 , and f 20 , the best results are achieved at different p/t settings. For the six functions f 16 , f 19 , f 20 , f 21 , f 22 , and f 23 , the p/t setting of 20/750 has the smallest standard deviation, while the p/t setting of 30/500 exhibits smaller Std values for the functions f 14 and f 17 . Rank-Count is the sum of the rank values of all
functions for the same set of p/t, where the Rank-Count value of 32.5 for p/t of 30/500 is
the smallest. After the Friedman test, it can be seen that the first place on the final ranking
(Overall-Rank) is p/t of 30/500, so it can be concluded that this experimental result is the
best and is set as a fixed parameter for the experiment in this paper.

3.4. Experimental Results


Comparative experiments were conducted on the above twenty-three test functions for HO as well as its variants (HO1, HO2, and HO3), comparing them with the Harris hawks optimization algorithm (HHO) [49], honey badger algorithm (HBA) [50], dung beetle optimization algorithm (DBO) [51], particle swarm optimization algorithm (PSO), and whale optimization algorithm (WOA) [52]. Here, HO1 introduces the sine chaotic map into HO, using Equation (4) to replace the population initialization method; HO2 incorporates the small-hole imaging reverse learning strategy, using Equation (17) to add a reverse learning process; and HO3 improves the growth mechanism of HO (Equation (13)). The evaluation process uses uniform parameter settings to ensure fairness, and each algorithm is run 50 times. The experimental results are shown in Table 3.
Observing the data in Table 3 reveals the performance of the MHO algorithm and its comparison algorithms on several benchmark functions. The MHO algorithm outperforms
the other algorithms in terms of mean and standard deviation on the functions f 1 ∼ f 4 and
is slightly inferior to the HHO algorithm on the functions f 5 ∼ f 6 , but the MHO algorithm
is second only to the HHO algorithm in terms of mean and standard deviation on the
function f 6 . The MHO algorithm is superior to all other algorithms except the variant HO2
on the function f 7 . For the multimodal benchmark function f 8 ∼ f 13 , the MHO algorithm
performs optimally on the function f 9 , f 10 , f 11 . It is inferior to the HHO on both the f 12
and f 13 functions, but the mean value of f 13 is second only to the HHO and is superior to
the other algorithms. For the fixed-dimensional test functions f 14 ∼ f 23 , it outperforms
the other algorithms in terms of mean and standard deviation for the six test functions
Biomimetics 2025, 10, 90 17 of 31

f 14 , f 15 and f 20 ∼ f 23 , and while the standard deviation is slightly worse than the other
algorithms for the four functions, the mean values are optimal. For the fixed-dimensional
test functions f 14 ∼ f 23 , MHO outperforms the other algorithms in terms of mean and
standard deviation for the six test functions, while the standard deviation is slightly worse
than the other algorithms for the four functions f 16 ∼ f 19 , but the mean values are optimal.

Table 3. Experimental results of MHO and its comparison algorithms.

Function Algorithm Mean Std Function Algorithm Mean Std


MHO 0.0000 0.0000 MHO 5.2459 × 10−3 1.2318 × 10−2
HO 0.0000 0.0000 HO 4.0138 × 10−3 9.4581 × 10−3
HO1 0.0000 0.0000 HO1 1.7004 × 10−3 4.5484 × 10−3
HO2 0.0000 0.0000 HO2 4.3263 × 10−3 9.7711 × 10−3
f1 HO3 0.0000 0.0000 f 13 HO3 1.0953 × 10−2 1.4091 × 10−2
HHO 3.3722 × 10−93 2.3825 × 10−92 HHO 9.9801 × 10−5 1.3388 × 10−4
HBA 6.9026 × 10−72 3.3528 × 10−71 HBA 4.6997 × 10−1 3.3318 × 10−1
DBO 5.1200 × 10−113 2.6559 × 10−112 DBO 5.9622 × 10−1 4.7776 × 10−1
PSO 2.5274 1.0479 PSO 6.1253 × 10−1 2.3179 × 10−1
WOA 1.0234 × 10−72 6.5245 × 10−72 WOA 5.7769 × 10−1 3.1337 × 10−1
MHO 0.0000 0.0000 MHO 9.9800 × 10−1 1.5468 × 10−13
HO 4.0358 × 10−182 0.0000 HO 9.9800 × 10−1 3.0556 × 10−13
HO1 2.8707 × 10−182 0.0000 HO1 9.9800 × 10−1 2.4026 × 10−13
HO2 0.0000 0.0000 HO2 9.9800 × 10−1 1.2184 × 10−12
f2 HO3 1.1752 × 10−180 0.0000 f 14 HO3 9.9800 × 10−1 8.5786 × 10−12
HHO 9.2603 × 10−51 4.8939 × 10−50 HHO 1.6880 1.7750
HBA 2.9432 × 10−72 1.2928 × 10−71 HBA 1.3517 1.5159
DBO 1.5566 × 10−54 1.1007 × 10−53 DBO 1.4741 8.7881 × 10−1
PSO 4.5005 1.4881 PSO 3.3460 2.3845
WOA 8.9092 × 10−50 5.0084 × 10−49 WOA 2.1392 2.3978
MHO 0.0000 0.0000 MHO 3.0751 × 10−4 4.0690 × 10−8
HO 0.0000 0.0000 HO 3.0752 × 10−4 1.0325 × 10−7
HO1 0.0000 0.0000 HO1 3.0755 × 10−4 3.2657 × 10−7
HO2 0.0000 0.0000 HO2 3.0752 × 10−4 4.7547 × 10−8
f3 HO3 0.0000 0.0000 f 15 HO3 3.0753 × 10−4 1.1442 × 10−7
HHO 3.1810 × 10−72 2.2493 × 10−71 HHO 3.7963 × 10−4 1.8487 × 10−4
HBA 3.2286 × 10−93 2.2092 × 10−92 HBA 4.9321 × 10−3 8.4966 × 10−3
DBO 6.6703 × 10−40 4.7166 × 10−39 DBO 7.2540 × 10−4 3.1412 × 10−4
PSO 1.7653 × 102 4.5978 × 101 PSO 8.9744 × 10−4 2.2012 × 10−4
WOA 4.1941 × 104 1.5808 × 104 WOA 6.9236 × 10−4 4.7036 × 10−4
MHO 0.0000 0.0000 MHO −1.0316 3.6381 × 10−10
HO 2.8900 × 10−184 0.0000 HO −1.0316 2.6830 × 10−10
HO1 6.8888 × 10−181 0.0000 HO1 −1.0316 1.5659 × 10−10
HO2 0.0000 0.0000 HO2 −1.0316 6.7215 × 10−11
f4 HO3 5.9113 × 10−179 0.0000 f 16 HO3 −1.0316 6.3616 × 10−11
HHO 8.0996 × 10−49 5.6060 × 10−48 HHO −1.0316 4.6598 × 10−9
HBA 1.9110 × 10−56 7.6981 × 10−56 HBA −1.0316 3.2812 × 10−16
DBO 1.2250 × 10−49 8.6389 × 10−49 DBO −1.0316 3.3269 × 10−16
PSO 1.9645 2.5026 × 10−1 PSO −1.0316 4.2439 × 10−16
WOA 5.2418 × 101 2.5506 × 101 WOA −1.0316 7.1329 × 10−10
MHO 1.1509 × 10−1 1.7645 × 10−1 MHO 3.9789 × 10−1 3.9781 × 10−10
HO 5.4281 × 10−2 8.3571 × 10−2 HO 3.9789 × 10−1 2.3634 × 10−9
HO1 3.4397 × 10−2 6.5186 × 10−2 HO1 3.9789 × 10−1 2.4502 × 10−9
HO2 6.2677 × 10−2 1.0042 × 10−1 HO2 3.9789 × 10−1 1.6152 × 10−9
f5 HO3 9.1590 × 10−2 1.1923 × 10−1 f 17 HO3 3.9789 × 10−1 1.8410 × 10−9
HHO 1.2253 × 10−2 1.8448 × 10−2 HHO 3.9790 × 10−1 2.4452 × 10−5
HBA 2.4033 × 101 8.3381 × 10−1 HBA 3.9789 × 10−1 3.3645 × 10−16
DBO 2.5734 × 101 2.7141 × 10−1 DBO 3.9789 × 10−1 3.3645 × 10−16
PSO 9.2719 × 102 4.7898 × 102 PSO 3.9789 × 10−1 3.3645 × 10−16
WOA 2.7988 × 101 4.9259 × 10−1 WOA 3.9790 × 10−1 1.2215 × 10−5

Table 3. Cont.

Function Algorithm Mean Std Function Algorithm Mean Std


MHO 9.4915 × 10−3 9.7607 × 10−3 MHO 3.0000 8.0461 × 10−9
HO 8.1739 × 10−3 1.1091 × 10−2 HO 3.0000 6.2997 × 10−9
HO1 1.1798 × 10−2 1.3268 × 10−2 HO1 3.0000 1.4408 × 10−9
HO2 8.6772 × 10−3 9.5263 × 10−3 HO2 3.0000 7.5526 × 10−8
f6 HO3 2.2698 × 10−2 1.0268 × 10−2 f 18 HO3 3.0000 2.8452 × 10−9
HHO 1.5709 × 10−4 2.4626 × 10−4 HHO 3.0000 6.7092 × 10−7
HBA 4.7001 × 10−2 9.6949 × 10−2 HBA 4.0800 5.3446 × 10+0
DBO 6.2521 × 10−3 2.9742 × 10−2 DBO 3.0000 2.4628 × 10−15
PSO 2.3410 × 10+0 1.1406 × 10+0 PSO 3.0000 5.3245 × 10−15
WOA 4.1539 × 10−1 2.5095 × 10−1 WOA 3.0001 3.0702 × 10−4
MHO 2.8911 × 10−5 3.1692 × 10−5 MHO −3.8628 5.0575 × 10−8
HO 3.2820 × 10−5 3.3002 × 10−5 HO −3.8628 4.5360 × 10−8
HO1 3.5287 × 10−5 3.9834 × 10−5 HO1 −3.8628 1.2794 × 10−8
HO2 2.5891 × 10−5 2.8914 × 10−5 HO2 −3.8628 3.8218 × 10−8
f7 HO3 3.6700 × 10−5 3.4275 × 10−5 f 19 HO3 −3.8628 2.8635 × 10−8
HHO 6.9272 × 10−5 6.3855 × 10−5 HHO −3.8607 2.8916 × 10−3
HBA 8.5039 × 10−5 9.4110 × 10−5 HBA −3.8615 2.9188 × 10−3
DBO 8.2668 × 10−5 8.4546 × 10−5 DBO −3.8615 2.9188 × 10−3
PSO 1.2883 × 10−4 1.7180 × 10−4 PSO −3.8628 2.4066 × 10−15
WOA 6.6807 × 10−5 7.6012 × 10−5 WOA −3.8564 1.1435 × 10−2
MHO −8.9787 × 103 2.1339 × 103 MHO −3.2849 5.8998 × 10−2
HO −7.9594 × 103 1.5232 × 103 HO −3.2687 6.6724 × 10−2
HO1 −1.9759 × 104 2.9040 × 103 HO1 −3.2230 6.0775 × 10−2
HO2 −8.9256 × 103 2.0041 × 103 HO2 −3.2752 6.3694 × 10−2
f8 HO3 −7.9414 × 103 1.5311 × 103 f 20 HO3 −3.2722 6.5502 × 10−2
HHO −1.2553 × 104 8.7389 × 101 HHO −3.1380 9.6909 × 10−2
HBA −8.6957 × 103 8.9116 × 102 HBA −3.2458 1.4477 × 10−1
DBO −8.8398 × 103 1.7724 × 103 DBO −3.2273 1.2686 × 10−1
PSO −6.0159 × 103 1.3308 × 103 PSO −3.2673 5.9858 × 10−2
WOA −1.0110 × 104 1.7827 × 103 WOA −3.2425 9.8594 × 10−2
MHO 0.0000 0.0000 MHO −1.0153 × 101 1.7912 × 10−6
HO 0.0000 0.0000 HO −1.0153 × 101 3.3627 × 10−6
HO1 0.0000 0.0000 HO1 −1.0153 × 101 2.4596 × 10−6
HO2 0.0000 0.0000 HO2 −1.0153 × 101 2.7432 × 10−6
f9 HO3 0.0000 0.0000 f 21 HO3 −1.0153 × 101 1.1416 × 10−5
HHO 0.0000 0.0000 HHO −5.4448 1.3442
HBA 0.0000 0.0000 HBA −9.6319 2.0943
DBO 3.1725 × 10−1 1.9833 DBO −8.0235 2.6724
PSO 1.6470 × 10+2 3.3948 × 101 PSO −6.5915 3.2019
WOA 2.2737 × 10−15 1.1252 × 10−14 WOA −8.3749 2.6669
MHO 8.8818 × 10−16 0.0000 MHO −1.0403 × 101 1.5216 × 10−6
HO 8.8818 × 10−16 0.0000 HO −1.0403 × 101 1.6082 × 10−6
HO1 8.8818 × 10−16 0.0000 HO1 −1.0403 × 101 2.2791 × 10−6
HO2 8.8818 × 10−16 0.0000 HO2 −1.0403 × 101 1.7314 × 10−6
f 10 HO3 8.8818 × 10−16 0.0000 f 22 HO3 −1.0403 × 101 2.3335 × 10−5
HHO 8.8818 × 10−16 0.0000 HHO −5.1867 7.2997 × 10−1
HBA 3.9818 × 10−1 2.8156 HBA −9.5440 2.3556
DBO 8.8818 × 10−16 0.0000 DBO −8.0311 2.7098
PSO 2.6298 4.0400 × 10−1 PSO −8.9425 2.5016
WOA 4.9383 × 10−15 2.3816 × 10−15 WOA −8.3157 2.8065
MHO 0.0000 0.0000 MHO −1.0536 × 101 1.5856 × 10−6
HO 0.0000 0.0000 HO −1.0536 × 101 1.9358 × 10−6
HO1 0.0000 0.0000 HO1 −1.0536 × 101 1.6338 × 10−6
f 11 HO2 0.0000 0.0000 HO2 −1.0536 × 101 2.1182 × 10−6
HO3 0.0000 0.0000 f 23 HO3 −1.0536 × 101 2.4098 × 10−6
HHO 0.0000 0.0000 HHO −5.2693 1.1627
HBA 0.0000 0.0000 HBA −9.2044 2.8850
DBO 0.0000 0.0000 DBO −8.8744 2.7522

Table 3. Cont.

Function Algorithm Mean Std Function Algorithm Mean Std


PSO 1.2258 × 10−1 4.9294 × 10−2 PSO −9.6646 2.2171
f 11
WOA 5.7736 × 10−3 2.8737 × 10−2 WOA −7.5317 3.3854
MHO 1.5255 × 10−4 3.5965 × 10−4
HO 3.1657 × 10−4 6.1589 × 10−4
HO1 3.9595 × 10−4 6.2673 × 10−4
HO2 2.5974 × 10−4 4.5742 × 10−4
f 12 HO3 7.4742 × 10−4 6.2937 × 10−4
HHO 6.7478 × 10−6 1.0577 × 10−5
HBA 2.3525 × 10−3 1.4694 × 10−2
DBO 4.4967 × 10−5 1.3964 × 10−4
PSO 4.0556 × 10−2 3.2624 × 10−2
WOA 2.6887 × 10−2 2.8830 × 10−2

Summarizing the above results, it can be seen that the MHO algorithm shows a clear advantage on the benchmark functions. Whether on single-peak, multi-peak, or fixed-dimension multimodal functions, it delivers excellent optimization performance and stability. These results demonstrate the effectiveness and superiority of the MHO algorithm in solving complex optimization problems.

3.5. Friedman Test


The Friedman test [53] provides an effective tool for performance comparison of
optimization algorithms, statistical significance analysis, robustness assessment, and multi-
objective optimization, which allows us to make scientific and reasonable algorithm selec-
tions and applications. Through the Friedman test, we can fairly compare the performance
of different algorithms and reduce the bias caused by the selection of specific problems, so
as to make objective evaluations and scientific decisions.
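To make the ranking procedure concrete, the following Python sketch shows how per-function ranks and average ranks can be computed from a results matrix; the result values are invented for illustration, and scipy's friedmanchisquare is used only as one possible way to attach a significance test, not as the authors' exact procedure.

import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical mean errors: rows = benchmark functions, columns = algorithms.
results = np.array([
    [1.2e-8, 3.5e-6, 2.0e-5],
    [4.1e-3, 9.3e-3, 1.0e-2],
    [2.2e-1, 5.0e-1, 3.1e-1],
    [7.4e-5, 6.0e-4, 9.8e-4],
    [1.5e-2, 2.5e-2, 2.1e-2],
])

# Rank the algorithms on each function (1 = best, i.e., lowest error),
# then average the ranks over all functions.
ranks = np.vstack([rankdata(row) for row in results])
avg_rank = ranks.mean(axis=0)
print("average ranks (lower is better):", avg_rank)

# Optional significance check on the paired samples.
stat, p = friedmanchisquare(*results.T)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")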
Therefore, in order to further compare the overall performance of these 10 algorithms,
the algorithms are ranked using the Friedman test, and Table 4 shows the performance
rankings of the 10 algorithms, including MHO, on 17 randomly selected functions out of the
23 benchmark functions mentioned above. From the table, it can be concluded that MHO
has a rank sum of 50.5, an average ranking of 2.1957, and a final overall ranking of 1,
which indicates that MHO has the best overall performance. The results of the Friedman
test once again prove that MHO performs better than the other algorithms.

Table 4. Performance ratings of MHO and its comparative algorithms.

Function MHO HO HO1 HO2 HO3 HHO HBA DBO PSO WOA
f1 3 3 3 3 3 7 9 6 10 8
f2 1.5 4 3 1.5 5 8 6 7 10 9
f3 3 3 3 3 3 7 6 8 9 10
f4 1.5 3 4 1.5 5 8 6 7 9 10
f6 3 4 5 2 6 1 8 7 10 9
f9 4 4 4 4 4 4 4 9 10 8
f 10 4 4 4 4 4 4 10 4 9 8
f 11 4.5 4.5 4.5 4.5 4.5 4.5 4.5 4.5 10 9
f 12 3 5 6 4 7 1 8 2 10 9
f 14 3 3 3 3 3 8 7 6 10 9
f 15 1 3 5 2 4 6 10 7 8 9
f 16 5.5 5.5 5.5 5.5 5.5 5.5 5.5 5.5 5.5 5.5
f 19 3.5 3.5 3.5 3.5 3.5 9 7.5 7.5 3.5 10
f 20 1 5 6 2 4 10 8 9 3 7
f 21 3 3 3 3 3 9 6 8 10 7

Table 4. Cont.

Function MHO HO HO1 HO2 HO3 HHO HBA DBO PSO WOA
f 22 3 3 3 3 3 10 6 9 7 8
f 23 3 3 3 3 3 10 7 8 6 9
Rank-Count 50.5 63.5 68.5 52.5 70.5 112 118.5 114.5 140 144.5
Ave-Rank 2.1957 2.7609 2.9783 2.2826 3.0652 4.8696 5.1522 4.9783 6.0870 6.2826
Overall-Rank 1 3 4 2 5 6 8 7 9 10

3.6. Convergence Analysis


The convergence curve indicates how an algorithm gradually approaches the optimal solution over the iterations, and it shows the strengths and weaknesses of one or more algorithms simply and intuitively. Convergence analysis is therefore a key step in verifying whether the MHO algorithm can stably find the optimal solution, or a near-optimal solution, to the optimization problem.
In this study, the average fitness value of the objective function is used as the criterion
for evaluating the convergence of the algorithms, and each algorithm is iterated up to
500 times. We visualize the experimental results of ten algorithms, including MHO, on
23 benchmark functions, and the obtained convergence curves are subjected to convergence
analysis. As shown in Figures 5–7, the convergence plots are shown for the single-peak
function, the multimodal function, and the fixed-dimension function, respectively.
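For readers reproducing this analysis, the sketch below illustrates how convergence curves of this kind can be plotted from recorded best-fitness histories; the example histories and algorithm labels are placeholders, since the actual curves in Figures 5–7 come from the authors' runs.

import numpy as np
import matplotlib.pyplot as plt

def plot_convergence(histories, title):
    # histories: dict mapping algorithm name -> per-iteration best fitness (averaged over runs).
    for name, curve in histories.items():
        plt.plot(np.arange(1, len(curve) + 1), curve, label=name)
    plt.xlabel("Iteration")
    plt.ylabel("Average best fitness")
    plt.yscale("log")          # log scale makes differences near zero visible
    plt.title(title)
    plt.legend()
    plt.show()

# Placeholder curves over 500 iterations for two hypothetical algorithms.
iters = np.arange(500)
histories = {
    "MHO": 1e2 * np.exp(-0.05 * iters),
    "HO": 1e2 * np.exp(-0.03 * iters),
}
plot_convergence(histories, "f1 convergence (illustrative data)")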

Figure 5. Convergence plots of single-peak function.

All the convergence curves for the single-peak functions are shown in Figure 5. The initial solution of MHO is always the lowest among the convergence curves on these seven functions, indicating that it is able to find a good-quality solution at the initial stage. Among them, except for f 7 , the variant HO2 has curves similar to those of MHO, and the convergence speed as well as the accuracy of MHO is optimal, which reflects the effectiveness of the small-hole imaging reverse learning strategy. Except for PSO on f 2 and WOA on f 3 , all curves converge to the same level. On the f 7 function, the convergence speed of MHO is not similar to that of the other algorithms, but the value of its optimal solution is the smallest, so its overall performance is better than that of the other algorithms.

Figure 6. Convergence plots of multi-peak function.
All convergence curves for the multi-peak function are shown in Figure 6. Again,
the curves of MHO and HO2 are similar on the six functions on the graphs. On the f 8
function, the fitness values of MHO and the other algorithms are significantly lower than those of
HHO, but on all functions other than that, the dominance of MHO is similar to that on the
single-peak function. On f 9 , it can be seen that MHO and the green line of HO2 converge
preferentially, followed by four similar lines for HO, HO1, HO3, and HHO converging one
after another; PSO shows the worst convergence rate and fitness values on f 8 ∼ f 11 .
Figure 7 shows the convergence curves for the multimodal functions with fixed dimensions. There are few overall differences between the algorithms on the functions f 14 ∼ f 19 , but noticeable differences appear between the curves in the detailed views. MHO exhibits the same characteristic on all of these functions: a rapid decline in the initial period, indicating a fast convergence rate. The other algorithms also converge quickly on specific functions, but MHO reaches lower fitness values. Among the functions f 20 ∼ f 23 , HHO has the worst overall performance, while MHO shows good optimization performance, with fast convergence and optimal solutions on all of these functions. The other algorithms also perform well on specific functions but, overall, the MHO algorithm is the most competitive in these tests.
Figure 7. Convergence plots of fixed-dimensional multimodal function.

3.7. Stability Analysis

In this section, box-and-line plots are used to analyze the stability of all the algorithms, which are run independently 50 times, again using the experimental results for the 23 benchmark functions. Figures 8–10 show the box-and-line plots for the single-peak functions, the multi-peak functions, and the fixed-dimension multimodal functions, respectively.
In the box-and-line plot, the red horizontal line represents the median, and a lower median indicates better performance of the algorithm on the test function; it is the primary metric for evaluating the algorithms, and MHO has a low median on the plotted functions. The blue box shows the interquartile range (IQR), where a smaller IQR indicates more stable algorithmic performance; by this measure, MHO, HO, HO1, HO2, HO3, HHO, HBA, and WOA all show good stability. The red crosses represent outliers, and the fewer they are, the better the stability; only HHO, HBA, and PSO produce outliers, implying that the other algorithms are more stable. The gray dotted line represents the whisker; the longer the whisker, the more dispersed the data are, and the longer whiskers of DBO show that it performs poorly. The stability of each algorithm can be analyzed by combining the above evaluation parameters. As a side note, we have chosen some representative examples to keep the data concise.

Figure 8. Boxplots of single-peak functions (f1, f2, f7).

Figure 9. Boxplots of multi-peak functions (f10, f11, f13).

Figure 10. Boxplots of multimodal functions with fixed dimensions (f15, f19, f23).

Among the single-peak functions shown in Figure 8, MHO has the lowest median, the
smallest outliers, the smallest interquartile spacing, and shows better stability, while PSO is
the least stable; the other four single-peak functions are consistent in their general trend
with the representative cases shown.
Among the multi-peak functions presented in Figure 9, MHO has the lowest median
and performs better in terms of stability, with only slightly more outliers on f 11 , f 13
than HHO; among the functions not presented, the overall trend is consistent with the
representative cases presented, with MHO being slightly weaker in terms of stability than
HHO as well as PSO performing the worst.
Among the multimodal functions shown in Figure 10, MHO has the lowest median
and outliers and the best stability, while HHO performs the worst. The general trend in the
performance of the algorithms in the non-shown functions is consistent with the shown
functions. The combined boxplot analysis of the above algorithms leads to the conclusion
that MHO has the best stability.
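As an aside for reproducing the stability analysis, the following sketch draws box-and-line plots from 50 run results per algorithm; the run data here are randomly generated stand-ins rather than the paper's results, and the algorithm subset is chosen only for illustration.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Stand-in data: final best fitness of 50 independent runs per algorithm.
runs = {
    "MHO": rng.normal(3.08e-4, 4e-8, size=50),
    "HO": rng.normal(3.08e-4, 1e-7, size=50),
    "PSO": rng.normal(9.0e-4, 2e-4, size=50),
}

fig, ax = plt.subplots()
ax.boxplot(list(runs.values()))                 # one box per algorithm
ax.set_xticks(range(1, len(runs) + 1))
ax.set_xticklabels(list(runs.keys()))
ax.set_ylabel("Best fitness over 50 runs")
ax.set_title("f15 stability comparison (illustrative data)")
plt.show()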

4. Application to Engineering Design Problems


Three typical engineering constraint problems—reducer design [54], gear train de-
sign [55], and step taper pulley design [56]—are chosen for examination in this section
in order to further confirm the efficacy of MHO in resolving global optimization issues.
Because of their intricate restrictions and multi-objective optimization features, these is-
sues are not only significant in engineering practice but also make excellent examples for
evaluating the effectiveness of optimization methods. The trials are set up as 50 rounds of
cycles with a maximum number of iterations per round of 50, and we will compare MHO’s
performance with that of other algorithms to confirm its effectiveness.

4.1. Speed Reducer Design Problem


Reducers are key components in mechanical drive systems. As shown in Figure 11,
the design of a speed reducer is challenging. This is because seven design variables are
involved: face width (x1 ), module of teeth (x2 ), number of teeth on the pinion (x3 ), length
of the first shaft between the bearings (x4 ), length of the second shaft between the bearings
(x5 ), diameter of the first shaft (x6 ), and diameter of the second shaft (x7 ). The objective is
to minimize the total weight of the gearbox while satisfying 11 constraints. The constraints
include bending stresses in the gear teeth, surface stresses, lateral deflections of shaft 1 and
shaft 2 due to transmitted forces, and stresses in shaft 1 and shaft 2. The mathematical
model is shown in Equation (25):
MHO is compared to nine other algorithms in order to address the speed reducer
design problem. Table 5 displays the results of the experiment, and it is evident that MHO
achieves the lowest cost.

Table 5. Comparison of the results for the speed reducer design problem.

Optimal Value
Algorithm Optimal Cost
x1 x2 x3 x4 x5 x6 x7
MHO 3.5999 7.0000 × 10−1 1.7000 × 101 8.3000 7.7978 3.3985 5.2935 3.0614 × 103
HO 3.5145 7.0000 × 10−1 1.7000 × 101 7.4885 7.9394 3.3538 5.4191 3.0942 × 103
HO1 3.5473 7.0000 × 10−1 1.7000 × 101 7.6056 7.9826 3.7487 5.2885 3.1373 × 103
HO2 3.5747 7.0000 × 10−1 1.7000 × 101 7.3000 7.8843 3.3548 5.4199 3.1155 × 103
HO3 3.5147 7.0000 × 10−1 1.7000 × 101 7.8114 8.1951 3.3515 5.4857 3.1476 × 103
HHO 3.5034 7.0000 × 10−1 1.8508 × 101 8.0993 7.8316 3.7984 5.3924 3.4771 × 103
HBA 3.5000 7.0000 × 10−1 1.7000 × 101 8.2194 7.9955 3.5891 5.2869 3.0754 × 103
DBO 3.6000 7.0000 × 10−1 1.7000 × 101 8.3000 7.7154 3.9000 5.2867 3.2093 × 103
PSO 3.6000 7.0000 × 10−1 1.7000 × 101 7.8063 8.3000 3.9000 5.2869 3.2164 × 103
WOA 3.6000 7.1931 × 10−1 1.7000 × 101 8.2999 7.8366 3.3518 5.2903 3.1389 × 103
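To illustrate how a constrained model such as Equation (25) below can be handed to a metaheuristic, the sketch that follows evaluates the speed reducer objective with a simple static penalty for constraint violations; the penalty scheme and its coefficient are illustrative choices rather than the authors' implementation, and the last constraint uses the coefficient 1.1 from the standard Golinski formulation, under which the designs reported in Table 5 are feasible.

import numpy as np

def speed_reducer(x, penalty=1e6):
    # Penalized objective for the speed reducer problem (see Equation (25)).
    x1, x2, x3, x4, x5, x6, x7 = x
    f = (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
         - 1.508 * x1 * (x6**2 + x7**2)
         + 7.4777 * (x6**3 + x7**3)
         + 0.7854 * (x4 * x6**2 + x5 * x7**2))
    g = [
        27.0 / (x1 * x2**2 * x3) - 1,
        397.0 / (x1 * x2**2 * x3**2) - 1,
        1.93 * x4**3 / (x2 * x6**4 * x3) - 1,
        1.93 * x5**3 / (x2 * x7**4 * x3) - 1,
        np.sqrt((745 * x4 / (x2 * x3))**2 + 16.9e6) / (110 * x6**3) - 1,
        np.sqrt((745 * x5 / (x2 * x3))**2 + 157.9e6) / (85 * x7**3) - 1,
        x2 * x3 / 40 - 1,
        5 * x2 / x1 - 1,
        x1 / (12 * x2) - 1,
        (1.5 * x6 + 1.9) / x4 - 1,
        (1.1 * x7 + 1.9) / x5 - 1,   # 1.1 is the coefficient in the standard Golinski model
    ]
    violation = sum(max(0.0, gi) for gi in g)   # total constraint violation
    return f + penalty * violation

# Evaluate the MHO design reported in Table 5.
x_mho = np.array([3.5999, 0.7, 17.0, 8.3, 7.7978, 3.3985, 5.2935])
print(speed_reducer(x_mho))   # roughly 3.06e3, consistent with the cost in Table 5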

Consider x = [x1, x2, x3, x4, x5, x6, x7] = [b, m, p, l1, l2, d1, d2].

Minimize
f(x) = 0.7854 x1 x2² (3.3333 x3² + 14.9334 x3 − 43.0934) − 1.508 x1 (x6² + x7²) + 7.4777 (x6³ + x7³) + 0.7854 (x4 x6² + x5 x7²)

Subject to
g1(x) = 27/(x1 x2² x3) − 1 ≤ 0,
g2(x) = 397/(x1 x2² x3²) − 1 ≤ 0,
g3(x) = 1.93 x4³/(x2 x6⁴ x3) − 1 ≤ 0,
g4(x) = 1.93 x5³/(x2 x7⁴ x3) − 1 ≤ 0,
g5(x) = [(745 x4/(x2 x3))² + 16.9 × 10⁶]^(1/2)/(110 x6³) − 1 ≤ 0,
g6(x) = [(745 x5/(x2 x3))² + 157.9 × 10⁶]^(1/2)/(85 x7³) − 1 ≤ 0,
g7(x) = x2 x3/40 − 1 ≤ 0,
g8(x) = 5 x2/x1 − 1 ≤ 0,
g9(x) = x1/(12 x2) − 1 ≤ 0,
g10(x) = (1.5 x6 + 1.9)/x4 − 1 ≤ 0,
g11(x) = (1.5 x7 + 1.9)/x5 − 1 ≤ 0,   (25)

where 2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.3 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5.0 ≤ x7 ≤ 5.5.

x2 : tooth model

Gear 1
x4

Bearing 1
x5 x1

Gear 1 Bearing 2
Shaft 1
Shaft 2

Figure 11. Speed reducer design


Figure problem
11. Speed diagram.
reducer design problem diagram.
4.2. Gear Train Design Problem
Biomimetics 2025, 10, 90 26 of 31
The gear train design problem aims to minimize the cost of the gear ratios shown in
Figure 12. This problem has four integer decision variables, where N 1 , N 2 , N 3 , and N 4
represent theTrain
4.2. Gear number of Problem
Design teeth of four different gears. The mathematical model is shown
in Equation (26):
The gear train design problem aims to minimize the cost of the gear ratios shown

Consider
in Figure 12. This problem x   xinteger
has four 1 , x2 , x3 , x4    N1 , N 2 , N 3 , N 4 
decision variables, where N1 , N2 , N3 , and N4
represent the number of teeth of four different gears. 2The mathematical model is shown in
  1 x x 
Equation (26): Minimize f  x     2 3 , (26)
→  6.931 x1 x4 
Consider x = [ x1 , x2 , x3 , x4 ] = [ N1 , N2 , N3 , N4 ]
→ 12  xi  60, i = 1,2,3,
Variable range 2 4.
Minimize f x = 6.931 1
− xx12 xx43 , (26)
Variable range 12 ≤ xi ≤ 60, i = 1, 2, 3, 4.

N1 N4
N3 N2

Figure 12. Gear system design problem diagram.

Figure 12.
TheGear
MHOsystem design problem
algorithm diagram.to optimize the design of gear systems, and its
is employed
results are compared with those of nine other algorithms. The experimental results are
The MHO algorithm is employed to optimize the design of gear systems, and its re‐
shown in Table 6. The optimal value obtained by MHO is lower than that of the other
sults are compared with those of nine other algorithms. The experimental results are
nine algorithms, indicating that MHO achieved a better value and superior performance in
shown in Table 6. The optimal value obtained by MHO is lower than that of the other nine
this problem.
algorithms, indicating that MHO achieved a better value and superior performance in this
problem.
Table 6. Comparison of the results of gear train design problem.

Table 6. Comparison of the results of gearOptimal Value


train design problem. Optimal Cost
Algorithm
N1 N2 N3 N4
Optimal Value
Algorithm
MHO 44 13 21 43 Optimal
1.5450 × Cost
10−10
N1 N2 N3 N4
8.8876 × 10−10−10
MHOHO 44 57
13 12
2137 54
43 1.5450 × 10 −10
HO1 59 15 21 37 3.0676 × 10−10
HO 57 12 37 54 8.8876 × 10 −10
HO2 56 23 13 37 6.6021 × 10−10
HO1 59 15 21 37 3.0676 × 10 −8
HO3 55 12 37 56 1.5247 × 10
HO2 56 23 13 37 6.6021 × 10−10
HHO 47 12 26 46 9.9216 × 10−10
HO3HBA 55 34 12 15 3717 56
52 1.5247 ×
2.3576 × 10 −8
10−9
HHODBO 47 60 12 15 2615 46
26 9.9216 × 10 −10
2.3576 × 10−9
HBAPSO 34 57 15 37 1712 52
54 2.3576×
8.8876 × 10 −9
10−10
DBOWOA 60 52 15 35 1512 26
56 2.3576 × 10−9−9
× 10
PSO 57 37 12 54 8.8876 × 10−10
WOA 52 35 12 56 2.3576 × 10−9
4.3. Step-Cone Pulley Problem
A stepped conical pulley is a pulley consisting of two or more conical pulleys con-
nected as shown in Figure 13 with five design variables: di , the diameter of the pulley at step
i ∈ [1, 4], and ω, the width of the belt and the pulleys at each step. The goal of the system is
to minimize the weight of the step conical pulley, and the problem contains 11 nonlinear
constraints to ensure that the transmission power is at least 0.75 hp. Equation (27) is the
mathematical model of the stepped conical pulley problem:
The stepped tapered pulley problem is solved using MHO, and it is compared to nine
alternative techniques. A maximum of 50 iterations and 50 training rounds were used in
Biomimetics 2025, 10, x FOR PEER REVIEW 29 of 34

Biomimetics 2025, 10, 90 4.3. Step‐Cone Pulley Problem 27 of 31


A stepped conical pulley is a pulley consisting of two or more conical pulleys con‐
nected as shown in Figure 13 with five design variables: d i , the diameter of the pulley at
 best,
step i is
each experiment. MHO 1, 4 , and  , the width
according to theofexperiment results,
the belt and the pulleyswhich
at each are
step.displayed
The goal of in
the
Table 7. system is to minimize the weight of the step conical pulley, and the problem contains 11
nonlinear constraints to ensure that the transmission power is at least 0.75 hp. Equation
→ (27) is the mathematical model of the stepped conical pulley problem:
Consider x = [ x1 , x2 , x3 , x4 , x5 ] = [d1 , d2 , d3 , d4 , ω ]
  Consider   2
x   x1 , x2 , x3
, x 4 , x5   d1 
2
, d3 , d 4 ,  
, d2 
2        2
N1
Minimize f ( x ) = ρx5 x12 11 + N + x22 2 1 +  NNN122  +  x32 N12 +
2 
2 N
 N23  N+ x2 42 12 +  NNN44 2 
3  
Minimize f ( x )   x5  x1 11      x2 1      x3 1      x4 1    
   N     N     N     N   
Subject to h1 ( x ) = C1 − C2 = 0,
Subject to h1 ( x )  C1  C2  0,
h2 ( x ) = C1 − C3 = 0, h2 ( x )  C1  C3  0,

h3 ( x ) = C1 − C4 = 0, h3 ( x )  C1  C4  0,
g i 1,2,3.4 ( x )   Ri2,
gi=1,2,3.4 ( x ) = − Ri ⩽ 2, g i 1,2,3.4 ( x )  (0.75  745.6998)  Pi0
(27)
(27)
gi=1,2,3.4 ( x ) = where
(0.75 ,
× 745.6998 ) − Pi ⩽0
2
 Ni 
where,   N  1
x  N     2a, i  (1, 2,3, 4)
Ni Ci 2 1  N  
  i i
  − 1 2   4 a
1+ N
N
Ci = πx i i
N + + 2a, i = (1, 2, 3, 4)
2 4a   oo1  N i  x  
Ri Nexp      2sin   1 i   , i  (1, 2,3, 4),
  xi
,i = (1, 22,a 3, 4),
 n n
Ri = exp µ π − 2 sin − 1 N
N − 1  2a
i

 xi N i
Pi = stx5 (1 − Ri ) πx60i Ni , i =Pi (1, 5 13,
stx2, R4i), 60 , i  (1, 2,3, 4),
t = 8 mm, s = 1.75 MPa, tµ = 8mm , s ρ
0.35, 1.75
=MPa7200 , kg/m 0.35,3, a 7200 / m3 , a  3mm.
= 3kgmm.



 
NO5  850

d1 d 2 d3 d4 d5

NO1  150
NO 2  250
NO3  450
NO 4  650
a T1 T2

C5

din 5 d in 4 din 3 d in 2 d in1

N i  350

Figure 13. Step-cone Figure


pulley13.problem
Step‐conediagram.
pulley problem diagram.

Table 7. Comparison of the results of step-cone pulley problem.

Optimal Value
Algorithm Optimal Cost
d1 d2 d3 d4 ω
MHO 3.9835 × 101 5.4824 × 101 7.3067 × 101 8.7626 × 101 8.8851 × 101 2.7377 × 1092
HO 4.0922 × 103 5.6309 × 101 7.5110 × 101 8.9975 × 101 8.6176 × 101 1.4460 × 1093
HO1 4.0863 × 101 5.6226 × 101 7.4988 × 101 8.9873 × 101 8.8590 × 101 3.3731 × 1092
HO2 4.0683 × 101 5.5973 × 101 7.4724 × 101 8.9455 × 101 8.9458 × 101 5.9786 × 1093
HO3 4.0427 × 101 5.5560 × 101 7.4205 × 101 8.8981 × 101 8.9309 × 101 9.2168 × 1093
HHO 4.1957 × 101 5.5823 × 101 8.3645 × 101 8.6757 × 101 8.8616 × 101 5.2464 × 1097
HBA 4.0818 × 101 5.6155 × 101 7.4881 × 101 8.9761 × 101 8.6001 × 101 4.8097 × 1092
DBO 4.0928 × 101 5.6330 × 101 7.5113 × 101 9.0000 × 101 9.0000 × 101 8.5877 × 1092
PSO 4.0147 × 101 5.5184 × 101 7.3648 × 101 8.8431 × 101 9.0000 × 101 1.2714 × 1094
WOA 4.0969 × 101 5.8129 × 101 7.5737 × 101 8.7173 × 101 8.7229 × 101 8.5726 × 1096

5. Conclusions and Outlook


In this paper, we propose a modified hippopotamus optimization algorithm that aims
to further improve the algorithm’s performance and address the issue of the algorithm’s
easy descent into local optima.
The introduction of the sine chaotic map to initialize the population improves the
diversity and randomness of the hippopotamus population, which enables the hippopota-
mus optimization algorithm to achieve a better balance between global and local searching,
thus improving the initial solution quality as well as the convergence speed.
Premature convergence can be avoided by optimizing the hippo’s position update
technique with a new convergence factor. In addition, a small-hole imaging reverse learning
strategy is incorporated to improve the performance of the algorithm by mapping the
current optimal solution of the algorithm dimension by dimension, avoiding interference
between the dimensions, and at the same time expanding the search range of the algorithm.
Also, the proposed algorithm was experimented on with 23 test functions, and the per-
formance of MHO was compared with HO and its variants as well as other metaheuristics,
and the mean and standard deviation of the algorithm’s optimized search were calculated.
The experimental results show that MHO is optimal in terms of mean and standard devi-
ation for thirteen test functions, while failing to optimize in terms of mean and standard
deviation for only five test functions. After analyzing the experimental results by using
sensitivity analysis and the Friedman test for stability and convergence, respectively, it is
concluded that MHO has a higher ranking and stability and can jump out of local optima
faster. In order to further verify the ability of MHO in solving global optimization problems,
it is applied to three engineering design problems and compared with other algorithms,
and the results show that MHO obtains very impressive outcomes. The above experi-
ments fully demonstrate that compared with other existing algorithms, MHO possesses a
stronger global search capability and is able to explore the solution space more efficiently,
thus searching for potential optimal solutions more comprehensively. In addition, MHO
significantly improves its adaptability to complex optimization problems by dynamically
adjusting the search direction and step size, thus achieving faster convergence. In terms of
local searching, MHO is able to locate the optimal solution more accurately, especially for
high-dimensional complex optimization problems, and its unique mechanism enables it to
avoid falling into the local optimal trap. MHO also demonstrates higher robustness and
outperforms the other nine compared algorithms in both parameter optimization and real
engineering problems.
Nevertheless, MHO still has a tendency to converge to locally optimal solutions for
certain functions when working with global optimization issues. On the complicated
reducer design challenges, MHO’s solution performance is also not very steady. Therefore,
we will continue to improve the exploration and exploitation capability of MHO in our
future research. Meanwhile, we will apply MHO to a wider range of problems, such as
multi-objective optimization and current popular neural networks.

Author Contributions: T.H.: writing—review and editing, software, formal analysis, and conceptu-
alization. H.W.: writing—review and editing, writing—original draft, software, and methodology.
T.L.: visualization, supervision, resources, and data curation. Q.L.: writing—review and editing,
visualization, funding acquisition, methodology, and conceptualization. Y.H.: supervision, resources,
validation, and funding acquisition. All authors have read and agreed to the published version of
the manuscript.

Funding: This work was supported by the Anhui Provincial Colleges and Universities Collaborative
Innovation Project (GXXT-2023-068), and the Anhui University of Science and Technology Graduate
Innovation Fund Project (2023CX2086).
Biomimetics 2025, 10, 90 29 of 31

Data Availability Statement: The data generated from the analysis in this study can be found in this
article. This study does not report the original code, which is available for academic purposes from
the lead contact. Any additional information required to reanalyze the data reported in this paper is
available from the lead contact upon request.

Acknowledgments: We would like to thank the School of Electrical and Information Engineering at
Anhui University of Science and Technology for providing the laboratory.

Conflicts of Interest: The authors declare no conflicts of interest.

References
1. Dhiman, G.; Garg, M.; Nagar, A.; Kumar, V.; Dehghani, M. A novel algorithm for global optimization: Rat swarm optimizer.
J. Ambient Intell. Humaniz. Comput. 2021, 12, 8457–8482. [CrossRef]
2. Gharaei, A.; Shekarabi, S.; Karimi, M. Modelling and optimal lot-sizing of the replenishments in constrained, multi-product and
bi-objective EPQ models with defective products: Generalised cross decomposition. Int. J. Syst. Sci. 2020, 7, 262–274. [CrossRef]
3. Sun, Y.; Chen, Y. Multi-population improved whale optimization algorithm for high dimensional optimization. Appl. Soft Comput.
2024, 112, 107854. [CrossRef]
4. Shen, Y.; Zhang, C.; Gharehchopogh, F.; Mirjalili, S. An improved whale optimization algorithm based on multi-population
evolution for global optimization and engineering design problems. Expert Syst. Appl. 2023, 215, 119269. [CrossRef]
5. Slowik, A.; Kwasnicka, H. Evolutionary algorithms and their applications to engineering problems. Neural Comput. Appl. 2020,
32, 12363–12379. [CrossRef]
6. Baluja, S.; Caruana, R. Removing the Genetics from the Standard Genetic Algorithm. In Proceedings of the Twelfth International
Conference on Machine Learning, Tahoe City, CA, USA, 9–12 July 1995.
7. Coelho, L.; Mariani, V. Improved differential evolution algorithms for handling economic dispatch optimization with generator
constraints. Energy Convers. Manag. 2006, 48, 1631–1639. [CrossRef]
8. Ma, H.; Ye, S.; Simon, D.; Fei, M. Conceptual and numerical comparisons of swarm intelligence optimization algorithms. Soft
Comput. 2017, 21, 3081–3100. [CrossRef]
9. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems:
Applications and Trends. IEEE-CAA J. Autom. Sin. 2021, 8, 1627–1643. [CrossRef]
10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural
Networks, Perth, WA, Australia, 27 November–1 December 1995.
11. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [CrossRef]
12. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report-TR06; Technical Report; Erciyes
University: Kayseri, Türkiye, 2005. Available online: https://abc.erciyes.edu.tr/pub/tr06_2005.pdf (accessed on 15 July 2024).
13. Yang, X.; Hossein Gandomi, A. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29,
464–483. [CrossRef]
14. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [CrossRef]
15. Yu, V.F.; Jewpanya, P.; Redi, A.; Tsao, Y.C. Adaptive neighborhood simulated annealing for the heterogeneous fleet vehicle routing
problem with multiple crossdocks. Comput. Oper. Res. 2021, 129, 105205. [CrossRef]
16. Rashedi, E.; Nezamabadipour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [CrossRef]
17. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for
solving constrained engineering optimization problems. Comput. Struct. 2012, 110–111, 151–166. [CrossRef]
18. Wang, S.H.; Li, Y.Z.; Yang, H.Y. Self-adaptive mutation differential evolution algorithm based on particle swarm optimization.
Appl. Soft Comput. 2019, 81, 105496. [CrossRef]
19. Shanmugapriya, P.; Kumar, T.S.; Kirubadevi, S.; Prasad, P.V. IoT based energy management strategy for hybrid electric storage
system in EV using SAGAN-COA approach. J. Energy Storage 2024, 104, 114315. [CrossRef]
20. Hu, S.J.; Kong, G.Q.; Zhang, C.S.; Fu, J.H.; Li, S.Y.; Yang, Q. Data-driven models for the steady thermal performance prediction of
energy piles optimized by metaheuristic algorithms. Energy 2024, 313, 134000. [CrossRef]
21. Sun, B.; Peng, P.; Tan, G.; Pan, M.; Li, L.; Tian, Y. A fuzzy logic constrained particle swarm optimization algorithm for industrial
design problems. Appl. Soft Comput. 2024, 167, 112456. Available online: https://api.semanticscholar.org/CorpusID:274134625
(accessed on 15 July 2024). [CrossRef]
22. Wu, S.; Dong, A.; Li, Q.; Wei, W.; Zhang, Y.; Ye, Z. Application of ant colony optimization algorithm based on farthest point
optimization and multi-objective strategy in robot path planning. Appl. Soft Comput. 2024, 167, 112433. [CrossRef]
23. Palanisamy, S.K.; Krishnaswamy, M. Optimization and forecasting of reinforced wire ropes for tower crane by using hybrid
HHO-PSO and ANN-HHO algorithms. Int. J. Fatigue 2024, 190, 108663. [CrossRef]
Biomimetics 2025, 10, 90 30 of 31

24. Liu, J.; Zhao, J.; Li, Y.; Zhou, H. HSMAOA: An enhanced arithmetic optimization algorithm with an adaptive hierarchical
structure for its solution analysis and application in optimization problems. Thin-Walled Struct. 2025, 206, 112631. [CrossRef]
25. Cui, X.; Zhu, J.; Jia, L.; Wang, J.; Wu, Y. A novel heat load prediction model of district heating system based on hybrid whale
optimization algorithm (WOA) and CNN-LSTM with attention mechanism. Energy 2024, 312, 133536. [CrossRef]
26. Che, Z.; Peng, C.; Yue, C. Optimizing LSTM with multi-strategy improved WOA for robust prediction of high-speed machine
tests data. Chaos Soliton Fract. 2024, 178, 114394. [CrossRef]
27. Elsisi, M. Optimal design of adaptive model predictive control based on improved GWO for autonomous vehicle considering
system vision uncertainty. Appl. Soft Comput. 2024, 158, 111581. [CrossRef]
28. Karaman, A.; Pacal, I.; Basturk, A.; Akay, B.; Nalbantoglu, U.; Coskun, S.; Sahin, O.; Karaboga, D. Robust real-time polyp detection
system design based on YOLO algorithms by optimizing activation functions and hyper-parameters with artificial bee colony
(ABC). Expert Syst. Appl. 2023, 221, 119741. [CrossRef]
29. Yu, X.; Zhang, W. Address wind farm layout problems by an adaptive Moth-flame Optimization Algorithm. Appl. Soft. Comput.
2024, 167, 112462. [CrossRef]
30. Dong, K.; Yang, D.W.; Sheng, J.B.; Zhang, W.D.; Jing, P.R. Dynamic planning method of evacuation route in dam-break flood
scenario based on the ACO-GA hybrid algorithm. Int. J. Disaster Risk Reduct. 2024, 100, 104219. [CrossRef]
31. Liu, X.; Wang, J.S.; Zhang, S.B.; Guan, X.Y.; Gao, Y.Z. Optimization scheduling of off-grid hybrid renewable energy systems based
on dung beetle optimizer with convergence factor and mathematical spiral. Renew. Energy 2024, 237, 121874. [CrossRef]
32. Beşkirli, A.; Dağ, İ. I-CPA: An Improved Carnivorous Plant Algorithm for Solar Photovoltaic Parameter Identification Problem.
Biomimetics 2023, 8, 569. [CrossRef]
33. Beşkirli, A.; Dağ, İ.; Kiran, M.S. A tree seed algorithm with multi-strategy for parameter estimation of solar photovoltaic models. Appl. Soft Comput. 2024, 167, 112220. [CrossRef]
34. Amiri, M.H.; Hashjin, N.M.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-
inspired optimization algorithm. Sci. Rep. 2024, 14, 5032. Available online: https://api.semanticscholar.org/CorpusID:268083241
(accessed on 15 July 2024). [CrossRef] [PubMed]
35. Maurya, P.; Tiwari, P.; Pratap, A. Application of the hippopotamus optimization algorithm for distribution network reconfigura-
tion with distributed generation considering different load models for enhancement of power system performance. Electr. Eng.
2024, SN-1432-0487. [CrossRef]
36. Chen, Y.; Wu, F.; Shi, L.; Li, Y.; Qi, P.; Guo, X. Identification of Sub-Synchronous Oscillation Mode Based on HO-VMD and
SVD-Regularized TLS-Prony Methods. Energies 2024, 17, 5067. [CrossRef]
37. Ribeiro, A.N.; Muñoz, D.M. Neural network controller for hybrid energy management system applied to electric vehicles.
J. Energy Storage 2024, 104, 114502. [CrossRef]
38. Wang, H.; Binti Mansor, N.N.; Mokhlis, H.B. Novel Hybrid Optimization Technique for Solar Photovoltaic Output Prediction
Using Improved Hippopotamus Algorithm. Appl. Sci. 2024, 14, 7803. [CrossRef]
39. Mashru, N.; Tejani, G.G.; Patel, P.; Khishe, M. Optimal truss design with MOHO: A multi-objective optimization perspective.
PLoS ONE. 2024, 19, e0308474. Available online: https://api.semanticscholar.org/CorpusID:271905232 (accessed on 15 July 2024).
[CrossRef]
40. Abdelaziz, M.A.; Ali, A.A.; Swief, R.A.; Elazab, R. Optimizing energy-efficient grid performance: Integrating electric vehicles,
DSTATCOM, and renewable sources using the Hippopotamus Optimization Algorithm. Sci. Rep. 2024, 14, 28974. [CrossRef]
41. Baihan, A.; Alutaibi, A.I.; Alshehri, M.; Sharma, S.K. Sign language recognition using modified deep learning network and hybrid
optimization: A hybrid optimizer (HO) based optimized CNNSa-LSTM approach. Sci. Rep. 2024, 14, 26111. [CrossRef]
42. Amiri, M.H.; Hashjin, N.M.; Najafabadi, M.K.; Beheshti, A.; Khodadadi, N. An innovative data-driven AI approach for detecting
and isolating faults in gas turbines at power plants. Expert Syst. Appl. 2025, 263, 125497. [CrossRef]
43. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evolut. Comput. 1997, 1, 67–82. Available
online: https://dl.acm.org/doi/10.1109/4235.585893 (accessed on 15 July 2024). [CrossRef]
44. Wang, M.M.; Song, X.G.; Liu, S.H.; Zhao, X.Q.; Zhou, N.R. A novel 2D Log-Logistic–Sine chaotic map for image encryption.
Nonlinear Dyn. 2025, 113, 2867–2896. [CrossRef]
45. Mahdavi, S.; Rahnamayan, S.; Deb, K. Opposition based learning: A literature review. Swarm Evol. Comput. 2018, 39, 1–23. [CrossRef]
46. Jiao, J.; Li, J. Enhanced fireworks algorithm based on particle swarm optimization and reverse learning of small-hole imaging
experiment. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech
Republic, 9–12 October 2022.
47. Yu, F.; Guan, J.; Wu, H.; Chen, Y.; Xia, X. Lens imaging opposition-based learning for differential evolution with cauchy
perturbation. Appl. Soft Comput. 2024, 152, 111211. [CrossRef]
48. Phalke, S.; Vaidya, Y.; Metkar, S. Big-O Time Complexity Analysis Of Algorithm. In Proceedings of the International Conference
on Signal and Information Processing (IConSIP), Pune, India, 26–27 August 2022.
Biomimetics 2025, 10, 90 31 of 31

49. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H.L. Harris hawks optimization: Algorithm and applications.
Future Gener. Comput. Syst. 2019, 97, 849–872. [CrossRef]
50. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic
algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [CrossRef]
51. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J Supercomput. 2022, 79,
7305–7336. [CrossRef]
52. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [CrossRef]
53. Röhmel, J. The permutation distribution of the Friedman test. Comput. Stat. Data. Anal. 1997, 26, 83–99. [CrossRef]
54. Golinski, J. Optimal synthesis problems solved by meansof nonlinear programming and random methods. J. Mech. 1970, 5,
287–309. [CrossRef]
55. Huang, Y.; Liu, Q.; Song, H.; Han, T.; Li, T. CMGWO: Grey wolf optimizer for fusion cell-like P systems. Heliyon 2024, 10, e34496.
[CrossRef]
56. Han, T.; Li, T.; Liu, Q.; Huang, Y.; Song, H. A Multi-Strategy Improved Honey Badger Algorithm for Engineering Design Problems.
Algorithms 2024, 17, 573. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
