
International Journal of Computational Intelligence Systems (2023) 16:114
https://doi.org/10.1007/s44196-023-00289-4

RESEARCH ARTICLE

Hierarchical Manta Ray Foraging Optimization with Weighted Fitness-Distance Balance Selection

Zhentao Tang1 · Kaiyu Wang2 · Sichen Tao2 · Yuki Todo3 · Rong-Long Wang4 · Shangce Gao2

Received: 19 March 2023 / Accepted: 17 June 2023

© The Author(s) 2023

Abstract
Manta ray foraging optimization (MRFO) tends to get trapped in local optima as it relies on the direction provided by the
previous individual and the best individual as guidance to search for the optimal solution. As enriching population diversity
can effectively solve this problem, in this paper, we introduce a hierarchical structure and weighted fitness-distance balance
selection to improve the population diversity of the algorithm. The hierarchical structure allows individuals in different
groups of the population to search for optimal solutions in different places, expanding the diversity of solutions. In MRFO,
greedy selection based solely on fitness can lead to local solutions. We innovatively incorporate a distance metric into the
selection strategy to increase selection diversity and find better solutions. A hierarchical manta ray foraging optimization with
weighted fitness-distance balance selection (HMRFO) is proposed. Experimental results on IEEE Congress on Evolutionary
Computation 2017 (CEC2017) functions show the effectiveness of the proposed method compared to seven competitive
algorithms, and the proposed method has little effect on the algorithm complexity of MRFO. The application of HMRFO
to optimize real-world problems with large dimensions has also obtained good results, and the computational time is very
short, making it a powerful alternative for very high-dimensional problems. Finally, the effectiveness of this method is further
verified by analyzing the population diversity of HMRFO.

Keywords Manta ray foraging optimization · Local optima · Population diversity · Hierarchical structure · Greedy selection ·
Weighted fitness-distance balance selection · Algorithm complexity

Abbreviations
MRFO	Manta ray foraging optimization
HMRFO	Hierarchical manta ray foraging optimization with weighted fitness-distance balance selection
EA	Evolution-based algorithms
SI	Swarm-based intelligence
PM	Physics-based methods
HB	Human-based behaviors
OBL	Opposition-based learning
FO	Fractional-order
GBO	Gradient-based optimizer
FW	Fitness-distance balance selection method with functional weight
CEC	Congress on Evolutionary Computation
NFEs	The maximum number of function evaluations

Corresponding author: Shangce Gao, gaosc@eng.u-toyama.ac.jp
Zhentao Tang, 18305263456@163.com · Kaiyu Wang, greskofairy@gmail.com · Sichen Tao, terrysc777@gmail.com · Yuki Todo, yktodo@ec.t.kanazawa-u.ac.jp · Rong-Long Wang, wang@u-fukui.ac.jp

1 Jiangsu Agri-animal Husbandry Vocational College, Taizhou 225300, China
2 Faculty of Engineering, University of Toyama, Toyama 930-8555, Japan
3 Faculty of Electrical, Information and Communication Engineering, Kanazawa University, Kanazawa 920-1192, Japan
4 Faculty of Engineering, University of Fukui, Fukui 910-8507, Japan




AWDO	Adaptive wind-driven optimization
CMAES	Covariance matrix adaptive evolutionary strategy
PSOGSA	A hybrid algorithm that combines particle swarm optimization and the gravitational search algorithm
CPU	Central processing unit
HSP	Hydrothermal scheduling problem
DED	Dynamic economic dispatch problem
LSTPP	Large-scale transmission pricing problem
ELD	Static economic load dispatch problem
FCMRFO	Fractional-order Caputo manta ray foraging optimization

1 Introduction

Meta-heuristic algorithms [37] are inspired by nature [38]. Based on their sources of inspiration, these algorithms can be classified into five categories [39, 40]: evolution-based algorithms (EA) [41], swarm-based intelligence (SI) [42], physics-based methods (PM) [43], human-based behaviors (HB) [44], and other optimization algorithms, as shown in Table 1. They mainly simulate physical or biological phenomena in nature and establish mathematical models to solve optimization problems [45, 46]. These algorithms possess the characteristics of self-organization, self-adaptation, and self-learning, and have been widely used in many fields, such as biology [47, 48], feature selection [49], optimization computing [50], image classification [51], and artificial intelligence [52, 53].

There are many improved meta-heuristic methods, such as incorporating competitive memory and a dynamic strategy into the mean shift algorithm to optimize dynamic multimodal functions [54], balancing exploration and exploitation by adding a synchronous–asynchronous strategy to the grey wolf optimizer [55], and reformulating the search factor in the algorithm [56]. Manta ray foraging optimization (MRFO) [11] is a recent swarm-based intelligence algorithm proposed in 2020. It has few adjustable parameters, is easy to implement, and can find solutions of specified precision at low computational cost [11]. Therefore, it has great research potential. In [57], the opposition-based learning (OBL) method was introduced into MRFO to achieve an effective structure for the optimization problem. In [58], both OBL and self-adaptive methods were applied to MRFO to optimize energy consumption in residential buildings. In [59], the Lévy flight mechanism and chaos theory were added to solve the problem of premature convergence of MRFO, and the improved algorithm was applied to the proton exchange membrane fuel cell system. In [60], hybrid simulated annealing and MRFO were utilized to optimize the parameters of the proportional–integral–derivative controller. In [61], fractional-order (FO) calculus was utilized in MRFO to escape from local optima, and the proposed algorithm was applied to image segmentation. In [62], a hybrid algorithm based on MRFO and the gradient-based optimizer (GBO) was adopted to solve economic emission dispatch problems. In [63], the global exploration ability of MRFO was enhanced by combining control parameter adjustment, wavelet mutation, and a quadratic interpolation strategy, and the improved algorithm was used to optimize complex curve shapes. From these references, it can be concluded that MRFO tends to converge prematurely and fall into local optima.

The search operators [64] of meta-heuristic algorithms include exploration and exploitation [65, 66]. Exploration uses randomly generated individuals to produce different solutions in the search space, which increases the diversity of the population and improves the quality of the solution. Exploitation conducts a local search around the best individual, relying on the advantages of the current optimal solution to accelerate the convergence of the algorithm. Meta-heuristic algorithms based on swarm intelligence are prone to falling into local optima and premature convergence. Solving this problem is our motivation for improving the algorithm.

To improve MRFO, it is necessary to maintain population diversity. Population diversity refers to having many non-neighboring individuals in the search space that can generate different solutions. By maintaining population diversity, individuals can be dispersed instead of gathering around the local optimal solution, thus escaping local optima and generating better solutions that improve solution quality.

This paper proposes adding a hierarchical structure [67] and a fitness-distance balance selection method with functional weight (FW) [68] to increase population diversity. To verify the performance of the proposed algorithm, we compared the hierarchical manta ray foraging optimization with weighted fitness-distance balance selection (HMRFO) with seven competitive algorithms on the IEEE CEC2017 benchmark functions. The results show that HMRFO has superior performance and fast convergence speed. Additionally, HMRFO has the same time complexity as MRFO. The performance of HMRFO on four large-dimensional real-world problems illustrates its practicality for solving such problems. By comparing the population diversity of HMRFO, MRFO, and the latest variant of MRFO on different types of functions from the IEEE CEC2017 benchmark suite, the effectiveness of the proposed method is visually verified.

The contributions of the present study can be summarized as follows: (1) The hierarchical structure and FW selection method are effective in improving population diversity and avoiding falling into local optima. (2) The hierarchical structure and FW selection method have little effect on the


Table 1 A list of types of meta-heuristic algorithms

Evolution-inspired
Genetic algorithm (GA) [1] Evolutionary strategies (ES) [2]
Differential evolution (DE) [3] Memetic algorithm (MA) [4]
Evolutionary programming (EP) [5] Bacterial foraging optimization (BFO) [6]
Monkey king evolutionary (MKE) [7] Artificial algae algorithm (AAA) [8]
Bat algorithm (BA) [9] Artificial immune system (AIS) [10]
Swarm-inspired
Manta ray foraging optimization (MRFO) [11] Reptile search algorithm (RSA) [12]
Grey wolf optimizer (GWO) [13] Particle swarm optimization (PSO) [14]
Brain storm optimization (BSO) [15] Sailfish optimizer (SFO) [16]
Whale optimization algorithm (WOA) [17] Ant colony optimization (ACO) [18]
Spotted hyena optimization (SHO) [19] Firefly algorithm (FA) [20]
Red fox optimization algorithm (RFO) [21] Salp swarm algorithm (SSA) [22]
Social spider optimization (SSO) [23] Selfish herd optimizer (SHO) [24]
Physics-inspired
Gravitational search algorithm (GSA) [25] Wind driven optimization (WDO) [26]
Water evaporation optimization (WEO) [27] Atom search optimization (ASO) [28]
Henry gas solubility optimization (HGSO) [29] Vortex search algorithm (VS) [30]
Human-inspired
Teaching–learning-based optimization (TLBO) [31] Artificial human optimization (AHO) [32]
Poor and rich optimization (PRO) [33] Ideology algorithm (IA) [34]
Other optimization algorithms
Yin-Yang-pair optimization (YYPO) [35] I-Ching divination evolutionary algorithm (IDEA) [36]

algorithm's complexity. (3) HMRFO demonstrates superior search performance, fast convergence speed, and high computational efficiency when dealing with large-dimensional problems, making it applicable to such problems.

The remaining sections of this paper are organized as follows: Section 2 introduces the original MRFO and some selection methods. Section 3 proposes HMRFO. Section 4 presents the experimental results and analysis. Section 5 discusses the parameters and population diversity of HMRFO. Section 6 concludes the paper and suggests future research directions.

2 Preliminaries

2.1 Manta Ray Foraging Optimization

A manta ray is defined as $X_i = (x_i^1, x_i^2, \ldots, x_i^d)$, where $i \in \{1, 2, \ldots, N\}$ and $x_i^d$ represents the position of the $i$th individual in the $d$th dimension. Here, $N$ is the total number of manta rays. MRFO establishes mathematical models based on the three foraging behaviors of the manta ray population. The specific behavioral models are as follows.

2.1.1 Chain Foraging

When manta rays find food, each manta ray follows the previous manta ray in a row and swims towards the food location. Therefore, except for the first manta ray, each manta ray moves not only towards the food but also towards the manta ray in front of it, forming a chain foraging behavior. The mathematical model is expressed as follows:

$$
x_i^d(t+1) =
\begin{cases}
x_i^d(t) + r \cdot \left(x_{best}^d(t) - x_i^d(t)\right) + \alpha \cdot \left(x_{best}^d(t) - x_i^d(t)\right), & i = 1 \\
x_i^d(t) + r \cdot \left(x_{i-1}^d(t) - x_i^d(t)\right) + \alpha \cdot \left(x_{best}^d(t) - x_i^d(t)\right), & i = 2, \ldots, N,
\end{cases}
\tag{1}
$$

$$
\alpha = 2 \cdot r \cdot \sqrt{|\log(r)|},
\tag{2}
$$

where the best individual is defined as $X_{best} = (x_{best}^1, x_{best}^2, \ldots, x_{best}^d)$, $x_{best}^d(t)$ indicates the position of the best individual in the $d$th dimension at time $t$, $r$ is a random vector within $[0, 1]$, and $\alpha$ is the weight coefficient.


Fig. 1 Flowchart of MRFO
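As a rough stand-in for the flowchart, the overall MRFO loop can be sketched in Python with NumPy. This is an illustrative sketch, not the authors' implementation: all function and variable names are invented, boundary handling by clipping is an assumption, and positions are evaluated generation-wise rather than one-by-one. It combines the chain-, cyclone-, and somersault-foraging updates of Sects. 2.1.1-2.1.3:

```python
import numpy as np

def mrfo(f, lb, ub, dim, n=20, max_iter=100, seed=1):
    """Schematic MRFO loop following Fig. 1: with probability 0.5 each
    individual performs cyclone foraging (Eqs. (3)-(6)), otherwise chain
    foraging (Eqs. (1)-(2)); every iteration ends with somersault foraging
    (Eq. (7)) around the best solution found so far (which is tracked
    separately instead of via per-step greedy replacement)."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n, dim)) * (ub - lb)
    fit = np.array([f(x) for x in X])
    best, best_f = X[fit.argmin()].copy(), float(fit.min())

    def keep_best(pop):
        nonlocal best, best_f
        fit = np.array([f(x) for x in pop])
        if fit.min() < best_f:
            best, best_f = pop[fit.argmin()].copy(), float(fit.min())

    S = 2.0  # somersault factor
    for t in range(1, max_iter + 1):
        X_new = np.empty_like(X)
        for i in range(n):
            r = rng.random(dim)
            if rng.random() < 0.5:                       # cyclone foraging
                r1 = rng.random()
                beta = 2.0 * np.exp(r1 * (max_iter - t + 1) / max_iter) \
                       * np.sin(2.0 * np.pi * r1)        # Eq. (4)
                if t / max_iter > rng.random():          # exploit around best
                    ref = best
                else:                                    # explore, Eq. (5)
                    ref = lb + rng.random(dim) * (ub - lb)
                front = ref if i == 0 else X[i - 1]
                X_new[i] = ref + r * (front - X[i]) + beta * (ref - X[i])
            else:                                        # chain foraging
                alpha = 2.0 * r * np.sqrt(np.abs(np.log(r)))  # Eq. (2)
                front = best if i == 0 else X[i - 1]
                X_new[i] = X[i] + r * (front - X[i]) + alpha * (best - X[i])
        X = np.clip(X_new, lb, ub)
        keep_best(X)
        r2, r3 = rng.random((n, dim)), rng.random((n, dim))
        X = np.clip(X + S * (r2 * best - r3 * X), lb, ub)  # somersault, Eq. (7)
        keep_best(X)
    return best, best_f
```

For example, `mrfo(lambda x: float(np.sum(x * x)), -100.0, 100.0, dim=5)` minimizes a 5-dimensional sphere function within the search range used in the experiments of this paper.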

2.1.2 Cyclone Foraging

Manta rays not only follow the manta ray in front of them but also move spirally towards the food. This foraging behavior is called cyclone foraging, and its mathematical equations are expressed as follows:

$$
x_i^d(t+1) =
\begin{cases}
x_{best}^d + r \cdot \left(x_{best}^d(t) - x_i^d(t)\right) + \beta \cdot \left(x_{best}^d(t) - x_i^d(t)\right), & i = 1 \\
x_{best}^d + r \cdot \left(x_{i-1}^d(t) - x_i^d(t)\right) + \beta \cdot \left(x_{best}^d(t) - x_i^d(t)\right), & i = 2, \ldots, N,
\end{cases}
\tag{3}
$$

$$
\beta = 2 e^{r_1 \frac{T-t+1}{T}} \cdot \sin(2\pi r_1),
\tag{4}
$$

where $r_1$ is a random number in the range of $[0, 1]$, $T$ represents the maximum number of iterations, and $\beta$ denotes the weight coefficient.

This process iterates around the position of the best individual. To avoid getting trapped in local optima, a new position is randomly selected as the best position in the search space to update the next generation. This improves the algorithm's ability to explore the global search space and increases the diversity of solutions. The update equations are as follows:

$$
x_{rand}^d = Lb^d + r \cdot \left(Ub^d - Lb^d\right),
\tag{5}
$$

$$
x_i^d(t+1) =
\begin{cases}
x_{rand}^d + r \cdot \left(x_{rand}^d(t) - x_i^d(t)\right) + \beta \cdot \left(x_{rand}^d(t) - x_i^d(t)\right), & i = 1 \\
x_{rand}^d + r \cdot \left(x_{i-1}^d(t) - x_i^d(t)\right) + \beta \cdot \left(x_{rand}^d(t) - x_i^d(t)\right), & i = 2, \ldots, N,
\end{cases}
\tag{6}
$$

where $r$ is a random vector in $[0, 1]$, $x_{rand}^d$ represents a random position, and $Lb^d$ and $Ub^d$ denote the lower and upper limits of the $d$th dimension, respectively.

2.1.3 Somersault Foraging

When manta rays approach food, they perform somersaults and circle around the food to pull it towards themselves. This


Fig. 2 Schematic diagram of the hierarchical structure

foraging behavior takes the food (i.e., the best individual) as a pivot, and each individual swims back and forth around the pivot. In other words, the search space is limited between the current position and its symmetrical position with respect to the best individual. As the distance between the individual and the best individual decreases, the search space is also reduced, and the individual gradually approaches the best individual. Therefore, in the later stages of iteration, the range of somersault foraging is adaptively reduced. The expression is given below:

$$
x_i^d(t+1) = x_i^d(t) + S \cdot \left(r_2 \cdot x_{best}^d - r_3 \cdot x_i^d(t)\right), \quad i = 1, \ldots, N,
\tag{7}
$$

where $r_2$ and $r_3$ are two random numbers in $[0, 1]$, and $S$ represents the somersault factor, which determines the somersault range and is set to 2.

The workflow of MRFO is presented in Fig. 1. The value of rand is used to switch between cyclone foraging and chain foraging. In cyclone foraging, the value of $t/T$ is employed to determine whether to use the best individual's position (for local exploitation) or a random position (for global exploration) as the reference position. As the number of iterations $t$ increases, the value of $t/T$ gradually rises, and the algorithm shifts from exploration to exploitation. Then, each individual updates its position through somersault foraging, and the algorithm ultimately returns the best solution.

MRFO has few adjustable parameters and low computational cost, and is less affected by the increase in problem size, making it a powerful alternative for solving very high-dimensional problems [69].

2.2 Selection Methods and Discussion

There are currently six basic selection methods:

1. Random selection. In MRFO, for example, an individual is randomly generated in the search space and used as the reference position for other individuals in cyclone foraging.
2. Greedy selection based on fitness value. In MRFO, $X_{best}$ represents the best individual based on fitness.
3. Adaptive selection based on ordinal. In wind-driven optimization (WDO) [70], $i$ denotes the rank of an individual among all population members based on the pressure value.
4. Probabilistic selection. Both roulette wheel and tournament strategies are probabilistic selection methods. For example, in ant colony optimization (ACO) [18], the next city for the ants to visit is chosen through roulette wheel selection.


Fig. 3 Flowchart of HMRFO

5. Fitness-distance balance selection [64]. This is the latest selection method and has been successfully applied in several algorithms [64, 69, 71, 72]. The fitness-distance balance with functional weight (FW) selection adopted in this paper is an improved variant of this selection method [68].
6. Combined selection. This is a combination of at least two of the other selection methods.

The traditional selection methods evaluate the quality of individuals based on the magnitude of their fitness, which can improve the convergence speed of the algorithm but also easily leads to local optima. Therefore, an increasing number of alternative selection methods are being used to replace traditional ones. In [73], maximum clique and edge centrality are used to select genes with maximum relevancy and minimum redundancy. In [74], users are clustered into different groups using graph clustering, food ingredients are


Table 2 Parameter settings of HMRFO and other algorithms

Algorithms	Parameters
HMRFO	S = 2, μ = 3/4, σ = 1/12, PR1 = 0.8, PR2 = 0.6
MRFO	S = 2
AWDO	u_max = 0.3
PSOGSA	G_0 = 100, α = 20, w_1(t) = 0.5, w_2(t) = 1.5
RSA	α = 0.1, β = 0.005
GWO	α linearly decreases from 2 to 0
WOA	α linearly decreases from 2 to 0
BSO	m = 5, p_5a = 0.2, p_6b = 0.8, p_6biii = 0.4, p_6c = 0.5

Table 3 Friedman ranks of HMRFO and seven competitive algorithms on IEEE CEC2017

Algorithms	Dimension=10	Dimension=30	Dimension=50	Dimension=100
HMRFO	1.2586	1.2069	1.3793	1.3793
MRFO	2.2931	2.6552	2.6207	2.4483
AWDO	6.069	6.5172	6.6897	6.6897
PSOGSA	4.1552	4.4483	4.6897	5.2069
RSA	7.8276	7.931	7.8621	7.7586
GWO	3.931	3.2069	3.2414	3.3793
WOA	5.8621	5.5862	5.1724	4.9655
BSO	4.6034	4.4483	4.3448	4.1724

embedded using deep learning techniques, and based on user and food information, the top few foods are recommended to the target customers. Adding other selection techniques and criteria to select more accurate individuals has become increasingly common. Similarly, the FW method in this article also adds a distance metric. With only a single fitness evaluation, it is easy to find a local solution. The introduction of a distance metric enables the algorithm to escape local solutions and find better solutions with higher fitness values farther away, thus increasing the diversity of the solutions obtained.

3 Proposed HMRFO

3.1 Motivation

According to the No-Free-Lunch theorem [75, 76], no single algorithm can find the best global optimal solution for all optimization problems. Similarly, MRFO has its own limitations in optimization. In the case of the swarm-based MRFO, since the back manta rays are influenced by most of the front manta rays, the swarm is prone to being attracted by local points, leading to falling into local optima. To overcome this problem, enriching the population diversity is an effective solution. Therefore, this paper proposes a new selection method, fitness-distance balance with functional weight (FW) [68], and employs a hierarchical structure [67] to update the population, resulting in the proposed algorithm, HMRFO.

3.2 Description of HMRFO

In MRFO, the fitness value of each individual is calculated using the following equations:

$$
G_i = f\left(x_i^1, x_i^2, \ldots, x_i^d\right),
\tag{8}
$$

$$
\forall_{i=1}^{N} \quad F_i =
\begin{cases}
1 - normG_i, & \text{if the goal is minimization} \\
normG_i, & \text{if the goal is maximization,}
\end{cases}
\tag{9}
$$

where $G_i$ represents the objective function value of the $i$th individual, $normG_i$ is the normalized value of $G_i$, and $F_i$ is the fitness value of the $i$th individual.

In MRFO, only the position of the best individual based on fitness is obtained, which can easily lead to falling into a local solution. To address this issue, the fitness-distance balance with functional weight (FW) selection method is added to HMRFO. This selection method considers both fitness and the distance between each individual and the best individual, which effectively maintains population diversity, increases the number of solutions, and improves solution quality. As a result, the algorithm can escape local optima and enhance its exploration ability. The mathematical equations for this selection method are as follows:

$$
\forall_{i=1}^{N},\ i \neq best: \quad
D_i = \sqrt{\left(x_i^1 - x_{best}^1\right)^2 + \left(x_i^2 - x_{best}^2\right)^2 + \cdots + \left(x_i^d - x_{best}^d\right)^2},
\tag{10}
$$

$$
\forall_{i=1}^{N} \quad S_i = \omega \cdot F_i + (1 - \omega) \cdot D_i,
\tag{11}
$$

$$
\omega \sim N\left(\mu, \sigma^2\right),
\tag{12}
$$

where $D_i$ represents the distance between the $i$th individual and the best individual, $F_i$ is the fitness value, $S_i$ denotes the score of the $i$th individual, and $\omega$ is the functional weight, randomly generated from a Gaussian distribution with, according to [68], $\mu = 3/4$ and $\sigma = 1/12$.

After obtaining the score $S_i$, the population is sorted by $S_i$. The higher the score, the greater the contribution of the individual to the optimization problem, the higher its rank, and the more likely it is to be the optimal individual. This approach overcomes the disadvantage of relying solely on fitness to obtain the optimal individual, improves population diversity, and prevents the algorithm from being trapped in local optima. The effectiveness of this method has been demonstrated in [64, 68, 69, 71].

MRFO updates the population only based on $X_{best}$ in somersault foraging. To enhance population diversity, a hierarchical structure is added to the somersault foraging of


Table 4 Experimental and statistical results of HMRFO and seven competitive algorithms on IEEE CEC2017 benchmark functions with 10
dimensions
HMRFO MRFO AWDO PSOGSA
Mean Std Mean Std Mean Std Mean Std

F1 9.994E+02 1.387E+03 2.139E+03 2.261E+03 + 3.236E+08 7.511E+08 + 2.030E+03 2.345E+03 +


F3 0.000E+00 0.000E+00 0.000E+00 0.000E+00 ≈ 2.494E+03 3.883E+03 + 0.000E+00 2.189E–14 +
F4 1.061E+00 3.814E–01 4.596E–01 3.096E–01 − 3.754E+01 6.817E+01 ≈ 5.577E+00 1.581E+01 +
F5 9.198E+00 4.198E+00 1.898E+01 9.286E+00 + 5.035E+01 1.931E+01 + 2.351E+01 9.815E+00 +
F6 9.612E–06 5.243E–05 1.374E–01 5.294E–01 + 1.969E+01 1.103E+01 + 2.075E+00 3.470E+00 +
F7 2.033E+01 4.924E+00 3.827E+01 1.323E+01 + 6.382E+01 2.663E+01 + 3.597E+01 1.235E+01 +
F8 1.371E+01 6.309E+00 2.092E+01 7.834E+00 + 3.851E+01 1.753E+01 + 2.372E+01 1.013E+01 +
F9 1.066E–02 6.460E–02 7.377E–01 1.624E+00 + 2.488E+02 2.304E+02 + 8.135E+01 1.599E+02 +
F10 4.542E+02 2.673E+02 6.906E+02 2.855E+02 + 1.342E+03 3.245E+02 + 8.304E+02 2.990E+02 +
F11 5.584E+00 3.326E+00 9.063E+00 7.264E+00 + 1.560E+02 2.023E+02 + 2.032E+01 1.621E+01 +
F12 8.951E+03 7.568E+03 1.401E+04 1.434E+04 ≈ 1.129E+07 2.258E+07 + 1.738E+05 7.861E+05 +
F13 8.656E+02 5.499E+02 5.536E+02 3.958E+02 − 1.743E+05 2.619E+05 + 7.546E+03 8.092E+03 +
F14 5.687E+01 1.275E+01 5.337E+01 1.475E+01 ≈ 4.350E+02 4.463E+02 + 7.486E+02 5.106E+02 +
F15 1.034E+02 5.146E+01 8.350E+01 4.771E+01 − 2.925E+03 2.307E+03 + 2.068E+03 2.814E+03 +
F16 3.490E+00 1.697E+01 4.455E+01 7.282E+01 + 2.398E+02 1.198E+02 + 1.573E+02 1.410E+02 +
F17 2.303E+01 1.107E+01 3.695E+01 2.699E+01 + 9.487E+01 3.726E+01 + 7.033E+01 4.737E+01 +
F18 3.253E+03 2.649E+03 3.851E+03 4.189E+03 ≈ 8.540E+05 1.227E+06 + 5.334E+03 5.265E+03 ≈
F19 5.647E+01 4.373E+01 6.837E+01 4.001E+01 + 6.282E+03 7.021E+03 + 3.799E+03 3.479E+03 +
F20 6.149E+00 7.765E+00 1.080E+01 1.035E+01 + 1.211E+02 4.644E+01 + 8.442E+01 5.644E+01 +
F21 1.001E+02 5.121E–01 1.122E+02 3.455E+01 + 1.263E+02 3.050E+01 + 1.914E+02 5.712E+01 +
F22 8.850E+01 3.062E+01 9.524E+01 2.067E+01 + 1.176E+02 4.185E+01 + 9.804E+01 1.823E+01 +
F23 3.108E+02 5.558E+00 3.237E+02 1.168E+01 + 3.553E+02 3.003E+01 + 3.299E+02 1.553E+01 +
F24 1.793E+02 1.151E+02 2.401E+02 1.252E+02 + 2.474E+02 1.109E+02 + 3.401E+02 6.193E+01 +
F25 4.249E+02 2.289E+01 4.269E+02 2.291E+01 ≈ 4.601E+02 8.386E+01 + 4.349E+02 3.315E+01 ≈
F26 2.844E+02 6.378E+01 3.215E+02 7.091E+01 + 4.113E+02 1.722E+02 + 3.428E+02 1.585E+02 +
F27 3.965E+02 3.165E+00 4.012E+02 1.347E+01 + 4.256E+02 2.194E+01 + 4.187E+02 2.736E+01 +
F28 3.293E+02 1.006E+02 4.090E+02 1.325E+02 + 4.583E+02 1.242E+02 + 5.078E+02 1.931E+02 +
F29 2.610E+02 1.527E+01 2.883E+02 3.422E+01 + 3.944E+02 6.209E+01 + 3.377E+02 7.040E+01 +
F30 6.655E+04 2.385E+05 2.633E+05 4.876E+05 ≈ 2.377E+06 2.314E+06 + 2.074E+06 2.343E+06 +
W/T/L –/–/– 20/6/3 28/1/0 27/2/0

RSA GWO WOA BSO


Mean Std Mean Std Mean Std Mean Std

F1 1.096E+10 3.584E+09 + 1.798E+06 5.512E+06 + 1.840E+05 3.947E+05 + 1.241E+03 1.912E+03 ≈


F3 8.621E+03 2.450E+03 + 5.673E+02 1.064E+03 + 2.547E+02 3.845E+02 + 0.000E+00 1.608E–14 +
F4 7.560E+02 4.851E+02 + 1.129E+01 1.254E+01 + 1.886E+01 3.417E+01 + 4.203E+00 8.820E+00 +
F5 9.056E+01 1.478E+01 + 1.271E+01 7.305E+00 + 4.583E+01 1.818E+01 + 3.842E+01 1.180E+01 +
F6 4.805E+01 8.060E+00 + 3.252E–01 5.226E–01 + 2.774E+01 1.221E+01 + 2.645E+01 9.167E+00 +
F7 1.074E+02 9.927E+00 + 2.777E+01 8.654E+00 + 7.444E+01 2.177E+01 + 5.797E+01 2.165E+01 +
F8 5.214E+01 6.716E+00 + 1.127E+01 4.280E+00 ≈ 3.558E+01 1.452E+01 + 2.111E+01 9.926E+00 +
F9 6.450E+02 3.000E+02 + 5.574E+00 1.421E+01 + 4.522E+02 4.465E+02 + 2.422E+02 1.148E+02 +
F10 1.539E+03 2.300E+02 + 5.324E+02 2.803E+02 ≈ 9.258E+02 3.849E+02 + 1.089E+03 2.807E+02 +
F11 2.921E+03 2.041E+03 + 2.988E+01 2.595E+01 + 6.956E+01 5.300E+01 + 5.599E+01 3.521E+01 +
F12 5.453E+08 4.599E+08 + 5.416E+05 7.734E+05 + 2.654E+06 2.894E+06 + 4.727E+04 5.382E+04 +


Table 4 continued
RSA GWO WOA BSO
Mean Std Mean Std Mean Std Mean Std

F13 3.152E+07 3.915E+07 + 9.294E+03 6.154E+03 + 1.823E+04 1.362E+04 + 6.318E+03 3.702E+03 +


F14 3.312E+03 6.809E+03 + 9.556E+02 1.530E+03 + 1.595E+02 1.990E+02 + 1.621E+02 2.234E+02 +
F15 7.823E+03 8.698E+03 + 1.239E+03 1.376E+03 + 2.304E+03 2.546E+03 + 1.769E+03 1.805E+03 +
F16 4.787E+02 1.095E+02 + 8.239E+01 7.286E+01 + 2.310E+02 1.091E+02 + 2.623E+02 1.287E+02 +
F17 1.414E+02 4.512E+01 + 4.507E+01 2.660E+01 + 9.369E+01 5.254E+01 + 6.817E+01 3.089E+01 +
F18 1.857E+07 2.179E+07 + 2.376E+04 1.443E+04 + 1.483E+04 1.247E+04 + 6.144E+03 6.905E+03 ≈
F19 5.402E+05 1.202E+06 + 2.820E+03 4.796E+03 ≈ 1.588E+04 1.700E+04 + 1.713E+03 2.340E+03 +
F20 2.746E+02 6.186E+01 + 5.589E+01 3.957E+01 + 1.562E+02 7.265E+01 + 1.282E+02 6.811E+01 +
F21 1.640E+02 4.028E+01 + 1.919E+02 4.524E+01 + 2.042E+02 6.092E+01 + 1.966E+02 6.176E+01 +
F22 9.229E+02 3.166E+02 + 1.049E+02 1.425E+01 + 1.144E+02 7.721E+00 + 9.949E+01 8.500E+00 ≈
F23 3.934E+02 1.162E+01 + 3.167E+02 9.004E+00 + 3.475E+02 1.769E+01 + 3.858E+02 3.915E+01 +
F24 4.343E+02 3.514E+01 + 3.428E+02 1.179E+01 + 3.545E+02 8.024E+01 + 3.583E+02 1.348E+02 +
F25 9.865E+02 2.771E+02 + 4.334E+02 1.646E+01 + 4.349E+02 5.103E+01 + 4.222E+02 5.061E+01 −
F26 1.337E+03 3.167E+02 + 3.502E+02 1.941E+02 + 6.901E+02 4.308E+02 + 7.219E+02 3.452E+02 +
F27 4.328E+02 1.059E+01 + 3.975E+02 1.757E+01 − 4.191E+02 3.051E+01 + 4.898E+02 4.448E+01 +
F28 8.923E+02 1.795E+02 + 5.558E+02 8.894E+01 + 5.562E+02 1.602E+02 + 4.698E+02 1.563E+02 +
F29 4.640E+02 8.091E+01 + 2.758E+02 3.608E+01 ≈ 4.052E+02 8.460E+01 + 3.458E+02 6.359E+01 +
F30 3.938E+06 7.758E+06 + 7.147E+05 7.529E+05 + 4.692E+05 8.845E+05 + 2.278E+05 5.087E+05 +
W/T/L 29/0/0 24/4/1 29/0/0 25/3/1
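For reference, the average ranks reported in Table 3 come from ranking the algorithms on each benchmark function and averaging the ranks across functions. A minimal sketch of that computation follows (illustrative only; the function name is invented, and the real computation uses the 29 per-function results of Tables 4-5 rather than the toy matrix shown in the usage example):

```python
import numpy as np

def average_ranks(results):
    """results: (n_functions, n_algorithms) matrix of mean errors, lower is
    better. Rank the algorithms on each function (1 = best, ties receive the
    averaged rank), then average the ranks over all functions, as in a
    Friedman-style comparison."""
    results = np.asarray(results, dtype=float)
    ranks = np.empty_like(results)
    for f, row in enumerate(results):
        order = np.argsort(np.argsort(row)) + 1.0   # 1 = smallest error
        for v in np.unique(row):                    # average ranks over ties
            mask = row == v
            order[mask] = order[mask].mean()
        ranks[f] = order
    return ranks.mean(axis=0)
```

For instance, `average_ranks([[1.0, 2.0, 3.0], [1.0, 3.0, 2.0]])` gives `[1.0, 2.5, 2.5]`: the first algorithm is best on both functions, while the other two each win once against the other.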

MRFO. This hierarchical structure is divided into three layers, as described below:

• 60% of the population in the next generation is updated using Eq. (13), where $x_{PR1}^d$ is the position of an individual randomly selected from the top $PR1$ individuals sorted by FW.
• 30% of the population in the next generation is updated using Eq. (14), where $x_{PR2}^d$ is the position of an individual randomly selected from the top $PR2$ individuals sorted by FW.
• The remaining 10% of the population in the next generation is updated using Eq. (7), where $x_{best}^d$ still represents the position of the individual with the best fitness value.

The schematic diagram of this hierarchical structure is shown in Fig. 2. Equations (13) and (14) are as follows:

$$
x_i^d(t+1) = x_i^d(t) + S \cdot \left(r_2 \cdot x_{PR1}^d - r_3 \cdot x_i^d(t)\right),
\tag{13}
$$

$$
x_i^d(t+1) = x_i^d(t) + S \cdot \left(r_2 \cdot x_{PR2}^d - r_3 \cdot x_i^d(t)\right).
\tag{14}
$$

Using a hierarchical structure to update the population is more effective in maintaining population diversity. Updating the population only with $X_{best}$ in somersault foraging is too simplistic and may lead to local optima, which defeats the goal of global optimization. The workflow of HMRFO is shown in Fig. 3, and Algorithm 1 presents its pseudocode.

4 Experimental Results and Analysis

4.1 Experimental Settings

In this study, the proposed algorithm's performance is verified using the IEEE Congress on Evolutionary Computation 2017 (CEC2017) benchmark functions [77]. The IEEE CEC2017 test suite includes 30 functions; however, F2 is excluded due to instability. Among the 29 benchmark functions, there are two unimodal functions (F1 and F3), seven simple multimodal functions (F4–F10), ten hybrid functions (F11–F20), and ten composition functions (F21–F30).

In all experiments, the population size ($N$) was set to 100. The maximum number of function evaluations (NFEs) was set to $10{,}000 \times D$, where $D$ represents the dimension of the problem. The search range was set to $[-100, 100]$. Each algorithm was run 51 times on each function. The best values among all compared algorithms in the following tables are shown in bold. All algorithms were implemented in MATLAB R2021b on a PC with a 2.60 GHz Intel(R) Core(TM) i7-9750H CPU and 16 GB RAM. The source code of HMRFO is released at https://toyamaailab.github.io/sourcedata.html.
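Putting the pieces together, the FW scoring of Eqs. (8)-(12) and the three-layer update of Eqs. (7), (13), and (14) can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' code: names are invented, minimization is assumed, normalizing $D_i$ so fitness and distance share a scale is an implementation assumption, and assigning individuals to the 60/30/10 layers by index is one possible choice the paper does not fix:

```python
import numpy as np

def fw_scores(X, G, rng):
    """Weighted fitness-distance balance, Eqs. (8)-(12): score each
    individual by a Gaussian-weighted blend of normalized fitness and
    distance to the current fitness-best individual (minimization)."""
    best = X[G.argmin()]
    normG = (G - G.min()) / (G.max() - G.min() + 1e-12)
    F = 1.0 - normG                          # Eq. (9), minimization case
    D = np.linalg.norm(X - best, axis=1)     # Eq. (10)
    D = D / (D.max() + 1e-12)                # scale assumption, see lead-in
    w = rng.normal(3 / 4, 1 / 12)            # Eq. (12): omega ~ N(mu, sigma^2)
    return w * F + (1.0 - w) * D             # Eq. (11)

def hierarchical_somersault(X, G, rng, S=2.0, PR1=0.8, PR2=0.6):
    """Three-layer somersault update: 60% of individuals use a pivot drawn
    from the top-PR1 FW-ranked individuals (Eq. (13)), 30% from the top-PR2
    (Eq. (14)), and the remaining 10% the fitness-best (Eq. (7))."""
    N, dim = X.shape
    order = np.argsort(-fw_scores(X, G, rng))   # high FW score = high rank
    X_new = np.empty_like(X)
    for i in range(N):
        u = i / N                               # index-based layer split
        if u < 0.6:                             # layer 1, Eq. (13)
            pivot = X[rng.choice(order[: max(1, int(PR1 * N))])]
        elif u < 0.9:                           # layer 2, Eq. (14)
            pivot = X[rng.choice(order[: max(1, int(PR2 * N))])]
        else:                                   # layer 3, Eq. (7)
            pivot = X[G.argmin()]
        r2, r3 = rng.random(dim), rng.random(dim)
        X_new[i] = X[i] + S * (r2 * pivot - r3 * X[i])
    return X_new
```

Because the pivot is no longer always $X_{best}$, individuals somersault around different well-scored positions, which is exactly the diversity-preserving effect the hierarchical structure is designed to provide.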


Table 5 Experimental and statistical results of HMRFO and seven competitive algorithms on IEEE CEC2017 benchmark functions with 30 dimensions
HMRFO MRFO AWDO PSOGSA
Mean Std Mean Std Mean Std Mean Std

F1 2.602E+03 3.305E+03 3.738E+03 4.479E+03 ≈ 1.462E+09 7.671E+09 ≈ 1.661E+07 8.017E+07 +


F3 5.474E+01 4.114E+01 4.774E+00 1.034E+01 − 5.826E+04 3.786E+04 + 2.429E+03 4.748E+03 +
F4 3.092E+01 3.623E+01 4.668E+01 3.728E+01 + 6.177E+02 1.342E+03 + 4.581E+02 3.170E+02 +
F5 6.126E+01 1.628E+01 1.535E+02 4.344E+01 + 2.862E+02 8.823E+01 + 1.457E+02 4.729E+01 +
F6 2.106E–01 2.939E–01 1.782E+01 1.013E+01 + 6.337E+01 1.764E+01 + 2.502E+01 8.602E+00 +
F7 1.001E+02 2.681E+01 2.562E+02 6.989E+01 + 4.812E+02 1.525E+02 + 2.569E+02 5.750E+01 +
F8 6.723E+01 2.349E+01 1.289E+02 3.559E+01 + 2.131E+02 8.869E+01 + 1.273E+02 2.980E+01 +
F9 4.386E+01 4.830E+01 2.295E+03 1.048E+03 + 7.755E+03 3.281E+03 + 3.154E+03 1.325E+03 +
F10 3.267E+03 5.664E+02 3.660E+03 6.414E+02 + 6.321E+03 1.655E+03 + 3.524E+03 7.312E+02 ≈
F11 5.858E+01 2.750E+01 8.667E+01 3.654E+01 + 2.128E+03 2.426E+03 + 3.566E+02 2.507E+02 +
F12 4.639E+04 2.344E+04 5.969E+04 2.759E+04 + 3.675E+08 9.400E+08 + 8.585E+07 1.659E+08 +
F13 8.655E+03 8.503E+03 1.270E+04 1.528E+04 ≈ 4.914E+08 1.007E+09 + 1.201E+07 4.615E+07 +
F14 1.377E+03 1.495E+03 2.781E+03 3.262E+03 + 5.991E+05 6.454E+05 + 1.356E+05 2.768E+05 +
F15 2.870E+03 3.218E+03 7.138E+03 9.188E+03 + 2.385E+07 5.110E+07 + 2.104E+05 1.366E+06 +
F16 7.597E+02 2.849E+02 9.020E+02 2.411E+02 + 2.108E+03 8.925E+02 + 1.328E+03 3.729E+02 +
F17 1.753E+02 1.090E+02 4.656E+02 2.195E+02 + 8.931E+02 3.991E+02 + 5.103E+02 2.027E+02 +
F18 8.106E+04 4.836E+04 8.554E+04 3.711E+04 ≈ 6.744E+06 9.174E+06 + 6.259E+05 2.004E+06 ≈
F19 5.560E+03 6.391E+03 8.213E+03 9.996E+03 ≈ 3.485E+07 7.881E+07 + 1.493E+04 1.536E+04 +
F20 2.041E+02 8.186E+01 4.444E+02 1.743E+02 + 7.680E+02 1.893E+02 + 6.274E+02 2.118E+02 +
F21 2.558E+02 1.590E+01 3.120E+02 3.436E+01 + 4.574E+02 1.045E+02 + 3.351E+02 4.075E+01 +
F22 1.002E+02 6.694E-01 1.004E+02 1.204E+00 + 5.402E+02 1.244E+03 + 2.591E+03 2.065E+03 +
F23 4.197E+02 2.649E+01 4.874E+02 3.666E+01 + 8.229E+02 1.483E+02 + 6.291E+02 1.227E+02 +
F24 4.838E+02 3.014E+01 5.773E+02 5.378E+01 + 8.634E+02 1.680E+02 + 7.538E+02 1.073E+02 +
F25 3.875E+02 3.210E+00 3.944E+02 1.861E+01 ≈ 6.845E+02 5.473E+02 + 5.397E+02 1.008E+02 +
F26 1.284E+03 9.921E+02 1.937E+03 1.464E+03 + 3.435E+03 2.356E+03 + 3.121E+03 1.143E+03 +
F27 5.413E+02 1.867E+01 5.703E+02 2.719E+01 + 9.095E+02 1.987E+02 + 8.132E+02 9.760E+01 +
F28 3.210E+02 4.581E+01 3.397E+02 5.776E+01 + 7.549E+02 7.936E+02 + 7.495E+02 2.564E+02 +
F29 6.762E+02 1.605E+02 9.128E+02 2.067E+02 + 2.137E+03 6.002E+02 + 1.322E+03 4.000E+02 +
F30 4.920E+03 2.032E+03 5.900E+03 2.593E+03 + 5.599E+07 8.321E+07 + 1.681E+06 4.255E+06 +
W/T/L –/–/– 23/5/1 28/1/0 27/2/0

RSA GWO WOA BSO


Mean Std Mean Std Mean Std Mean Std

F1 5.384E+10 9.292E+09 + 9.328E+08 8.157E+08 + 2.746E+06 1.911E+06 + 2.330E+03 2.461E+03 ≈


F3 7.423E+04 5.505E+03 + 2.803E+04 8.092E+03 + 1.572E+05 6.036E+04 + 3.010E+01 1.671E+02 −
F4 1.455E+04 4.561E+03 + 1.442E+02 3.276E+01 + 1.478E+02 4.097E+01 + 9.504E+01 1.982E+01 +
F5 3.891E+02 3.309E+01 + 9.191E+01 2.354E+01 + 2.652E+02 5.322E+01 + 1.938E+02 4.090E+01 +
F6 8.634E+01 7.461E+00 + 4.693E+00 3.004E+00 + 6.718E+01 9.739E+00 + 5.287E+01 6.375E+00 +
F7 6.725E+02 6.730E+01 + 1.474E+02 3.596E+01 + 5.369E+02 9.060E+01 + 5.065E+02 1.110E+02 +
F8 3.116E+02 2.206E+01 + 8.182E+01 2.299E+01 + 2.046E+02 5.146E+01 + 1.446E+02 3.197E+01 +
F9 8.533E+03 1.196E+03 + 4.242E+02 2.347E+02 + 6.142E+03 2.159E+03 + 3.353E+03 6.755E+02 +
F10 7.021E+03 3.593E+02 + 2.909E+03 7.536E+02 − 5.044E+03 8.331E+02 + 4.300E+03 6.155E+02 +
F11 7.770E+03 2.806E+03 + 3.375E+02 3.685E+02 + 4.008E+02 1.325E+02 + 1.434E+02 4.700E+01 +
F12 1.703E+10 4.363E+09 + 3.088E+07 5.277E+07 + 3.873E+07 3.200E+07 + 1.565E+06 8.133E+05 +
F13 1.187E+10 4.906E+09 + 6.233E+05 3.575E+06 + 1.320E+05 1.024E+05 + 5.135E+04 2.439E+04 +
F14 3.074E+06 3.588E+06 + 1.484E+05 2.439E+05 + 6.977E+05 6.893E+05 + 4.069E+03 3.243E+03 +
F15 6.736E+08 5.747E+08 + 9.164E+04 2.899E+05 + 7.679E+04 5.329E+04 + 3.132E+04 2.033E+04 +
F16 3.898E+03 6.862E+02 + 7.493E+02 2.635E+02 ≈ 1.905E+03 4.504E+02 + 1.485E+03 3.277E+02 +
F17 5.306E+03 6.866E+03 + 2.779E+02 1.651E+02 + 7.679E+02 2.678E+02 + 8.051E+02 2.388E+02 +
F18 3.277E+07 3.071E+07 + 6.120E+05 5.727E+05 + 2.939E+06 2.656E+06 + 1.218E+05 1.110E+05 +


Table 5 continued
RSA GWO WOA BSO
Mean Std Mean Std Mean Std Mean Std

F19 2.323E+09 1.694E+09 + 8.827E+05 1.949E+06 + 2.711E+06 2.059E+06 + 1.299E+05 5.256E+04 +


F20 8.636E+02 1.426E+02 + 3.404E+02 1.337E+02 + 7.124E+02 2.021E+02 + 7.168E+02 1.907E+02 +
F21 6.431E+02 4.269E+01 + 2.834E+02 2.986E+01 + 4.510E+02 5.528E+01 + 3.987E+02 3.959E+01 +
F22 5.253E+03 1.008E+03 + 1.762E+03 1.485E+03 + 4.401E+03 2.062E+03 + 4.112E+03 1.624E+03 +
F23 1.039E+03 1.089E+02 + 4.321E+02 2.277E+01 + 7.198E+02 9.751E+01 + 9.984E+02 1.115E+02 +
F24 1.177E+03 2.453E+02 + 5.032E+02 4.536E+01 + 7.705E+02 8.819E+01 + 1.103E+03 9.604E+01 +
F25 2.224E+03 8.605E+02 + 4.570E+02 2.613E+01 + 4.454E+02 2.942E+01 + 3.903E+02 9.092E+00 ≈
F26 7.933E+03 1.124E+03 + 1.837E+03 3.100E+02 + 4.895E+03 9.781E+02 + 5.827E+03 1.109E+03 +
F27 9.409E+02 2.313E+02 + 5.326E+02 1.520E+01 − 6.501E+02 6.477E+01 + 1.193E+03 2.609E+02 +
F28 3.985E+03 8.850E+02 + 5.472E+02 5.758E+01 + 4.988E+02 3.214E+01 + 3.765E+02 5.041E+01 +
F29 4.146E+03 1.609E+03 + 7.531E+02 1.339E+02 + 1.885E+03 4.113E+02 + 1.619E+03 3.690E+02 +
F30 2.239E+09 9.259E+08 + 5.504E+06 5.643E+06 + 9.920E+06 6.755E+06 + 5.262E+05 3.095E+05 +
W/T/L 29/0/0 26/1/2 29/0/0 26/2/1

Table 6 Experimental and statistical results of HMRFO and seven competitive algorithms on IEEE CEC2017 benchmark functions with 50
dimensions
HMRFO MRFO AWDO PSOGSA
Mean Std Mean Std Mean Std Mean Std

F1 3.470E+03 4.390E+03 5.278E+03 5.292E+03 + 7.360E+09 2.171E+10 ≈ 9.919E+08 3.411E+09 +


F3 2.904E+04 6.447E+03 1.624E+04 4.623E+03 − 2.041E+05 4.305E+04 + 4.166E+04 5.777E+04 −
F4 9.640E+01 5.786E+01 8.911E+01 5.231E+01 ≈ 1.528E+03 4.977E+03 + 3.289E+03 1.935E+03 +
F5 1.710E+02 3.890E+01 3.250E+02 3.837E+01 + 4.251E+02 1.586E+02 + 3.091E+02 6.603E+01 +
F6 4.248E+00 4.477E+00 3.900E+01 1.227E+01 + 8.072E+01 1.928E+01 + 3.588E+01 8.211E+00 +
F7 2.393E+02 5.414E+01 6.367E+02 1.526E+02 + 9.171E+02 1.878E+02 + 6.691E+02 1.175E+02 +
F8 1.668E+02 4.674E+01 3.263E+02 5.333E+01 + 5.012E+02 1.882E+02 + 3.035E+02 6.896E+01 +
F9 8.124E+02 6.671E+02 8.282E+03 2.568E+03 + 3.127E+04 1.240E+04 + 1.151E+04 3.209E+03 +
F10 6.208E+03 1.003E+03 6.575E+03 8.382E+02 + 1.173E+04 3.014E+03 + 5.973E+03 1.019E+03 ≈
F11 1.276E+02 3.428E+01 1.568E+02 3.970E+01 + 6.883E+03 8.539E+03 + 4.105E+03 4.079E+03 +
F12 5.458E+05 3.348E+05 7.625E+05 4.837E+05 + 4.109E+09 1.312E+10 + 1.296E+09 2.252E+09 +
F13 2.634E+03 3.545E+03 4.208E+03 4.911E+03 ≈ 1.521E+09 3.723E+09 + 2.332E+06 1.638E+07 +
F14 2.226E+04 1.446E+04 2.091E+04 2.042E+04 − 6.639E+06 7.101E+06 + 4.044E+06 6.734E+06 +
F15 4.370E+03 4.208E+03 7.108E+03 6.514E+03 + 3.622E+08 6.853E+08 + 3.162E+05 1.642E+06 +
F16 1.357E+03 4.273E+02 1.647E+03 4.693E+02 + 3.691E+03 1.479E+03 + 2.631E+03 7.796E+02 +
F17 1.073E+03 2.865E+02 1.350E+03 3.414E+02 + 2.407E+03 6.650E+02 + 1.516E+03 3.538E+02 +
F18 2.096E+05 1.349E+05 1.471E+05 8.659E+04 − 2.837E+07 3.638E+07 + 1.048E+07 1.422E+07 +
F19 1.380E+04 8.170E+03 1.455E+04 1.033E+04 ≈ 1.635E+08 3.962E+08 + 1.283E+04 1.670E+04 −
F20 8.284E+02 2.850E+02 1.099E+03 3.582E+02 + 1.797E+03 4.564E+02 + 1.230E+03 3.499E+02 +
F21 3.278E+02 2.320E+01 4.489E+02 5.186E+01 + 6.837E+02 1.790E+02 + 5.105E+02 8.051E+01 +
F22 9.087E+02 2.252E+03 6.928E+03 1.683E+03 + 1.255E+04 3.307E+03 + 7.477E+03 1.474E+03 +
F23 6.142E+02 5.501E+01 7.939E+02 9.826E+01 + 1.463E+03 2.659E+02 + 1.078E+03 2.207E+02 +
F24 6.561E+02 4.957E+01 8.630E+02 9.604E+01 + 1.548E+03 2.956E+02 + 1.423E+03 3.037E+02 +
F25 5.518E+02 3.455E+01 5.554E+02 3.685E+01 ≈ 1.465E+03 2.862E+03 + 2.082E+03 7.709E+02 +
F26 2.151E+03 2.426E+03 3.876E+03 3.799E+03 + 7.648E+03 4.114E+03 + 7.099E+03 1.543E+03 +
F27 7.763E+02 9.264E+01 9.031E+02 1.534E+02 + 2.238E+03 6.243E+02 + 2.078E+03 3.871E+02 +


Table 6 continued
HMRFO MRFO AWDO PSOGSA
Mean Std Mean Std Mean Std Mean Std

F28 4.895E+02 2.785E+01 4.991E+02 3.291E+01 + 1.290E+03 1.762E+03 + 2.944E+03 8.029E+02 +


F29 1.132E+03 3.195E+02 1.554E+03 3.070E+02 + 4.766E+03 3.884E+03 + 3.052E+03 8.637E+02 +
F30 8.735E+05 1.566E+05 9.563E+05 2.124E+05 + 4.685E+08 9.958E+08 + 9.298E+07 6.349E+07 +
W/T/L –/–/– 22/4/3 28/1/0 26/1/2
RSA GWO WOA BSO
Mean Std Mean Std Mean Std Mean Std
F1 9.635E+10 9.884E+09 + 4.493E+09 2.326E+09 + 8.759E+06 8.772E+06 + 6.160E+03 5.302E+03 +
F3 1.487E+05 1.042E+04 + 7.261E+04 1.523E+04 + 6.658E+04 3.400E+04 + 5.502E+02 7.789E+02 −
F4 2.710E+04 6.760E+03 + 4.341E+02 1.771E+02 + 2.911E+02 7.054E+01 + 1.690E+02 4.716E+01 +
F5 6.336E+02 2.898E+01 + 1.922E+02 5.205E+01 + 4.247E+02 8.710E+01 + 3.259E+02 4.164E+01 +
F6 9.827E+01 4.680E+00 + 1.019E+01 3.623E+00 + 7.817E+01 1.074E+01 + 6.019E+01 4.429E+00 +
F7 1.302E+03 5.902E+01 + 3.037E+02 6.764E+01 + 1.001E+03 9.285E+01 + 1.043E+03 1.452E+02 +
F8 6.791E+02 2.650E+01 + 1.852E+02 3.069E+01 + 4.346E+02 9.597E+01 + 3.412E+02 4.684E+01 +
F9 3.204E+04 2.580E+03 + 3.551E+03 2.143E+03 + 1.934E+04 4.682E+03 + 1.070E+04 1.350E+03 +
F10 1.315E+04 4.800E+02 + 5.519E+03 8.486E+02 − 8.766E+03 1.263E+03 + 7.350E+03 8.218E+02 +
F11 1.682E+04 2.846E+03 + 1.789E+03 1.152E+03 + 4.985E+02 1.057E+02 + 2.030E+02 4.325E+01 +
F12 7.667E+10 1.708E+10 + 5.809E+08 6.600E+08 + 2.109E+08 1.138E+08 + 1.276E+07 6.911E+06 +
F13 4.590E+10 1.270E+10 + 1.168E+08 1.135E+08 + 2.718E+05 2.331E+05 + 6.146E+04 3.237E+04 +
F14 3.605E+07 2.864E+07 + 3.512E+05 3.476E+05 + 6.736E+05 4.185E+05 + 3.104E+04 2.546E+04 ≈
F15 6.628E+09 4.800E+09 + 4.375E+06 9.979E+06 + 7.757E+04 5.104E+04 + 2.668E+04 1.300E+04 +
F16 7.172E+03 1.378E+03 + 1.280E+03 3.636E+02 ≈ 3.095E+03 6.584E+02 + 2.266E+03 5.274E+02 +
F17 1.819E+04 3.409E+04 + 9.308E+02 2.273E+02 − 2.284E+03 4.477E+02 + 1.973E+03 4.025E+02 +
F18 9.345E+07 5.237E+07 + 4.024E+06 4.810E+06 + 4.979E+06 4.231E+06 + 2.541E+05 1.010E+05 +
F19 6.888E+09 2.802E+09 + 1.429E+06 2.421E+06 + 1.866E+06 1.606E+06 + 4.780E+05 2.487E+05 +
F20 1.825E+03 1.962E+02 + 7.867E+02 3.287E+02 ≈ 1.613E+03 3.023E+02 + 1.534E+03 2.933E+02 +
F21 1.032E+03 9.733E+01 + 3.815E+02 2.742E+01 + 7.630E+02 1.077E+02 + 6.369E+02 6.712E+01 +
F22 1.408E+04 5.151E+02 + 6.136E+03 1.341E+03 + 9.211E+03 1.176E+03 + 8.315E+03 8.569E+02 +
F23 1.654E+03 1.582E+02 + 6.181E+02 5.885E+01 ≈ 1.265E+03 1.165E+02 + 1.650E+03 2.050E+02 +
F24 2.022E+03 4.130E+02 + 7.142E+02 9.997E+01 + 1.258E+03 1.577E+02 + 1.697E+03 2.054E+02 +
F25 1.031E+04 1.269E+03 + 8.599E+02 1.553E+02 + 6.461E+02 3.385E+01 + 5.665E+02 3.381E+01 +
F26 1.341E+04 1.036E+03 + 3.201E+03 5.942E+02 + 1.022E+04 1.555E+03 + 1.053E+04 7.705E+02 +
F27 1.905E+03 3.265E+02 + 7.910E+02 7.505E+01 ≈ 1.429E+03 3.336E+02 + 2.651E+03 4.883E+02 +
F28 8.950E+03 1.133E+03 + 1.032E+03 2.321E+02 + 6.280E+02 5.184E+01 + 5.126E+02 2.997E+01 +
F29 6.189E+04 4.451E+04 + 1.316E+03 2.674E+02 + 4.191E+03 9.273E+02 + 2.751E+03 4.849E+02 +
F30 8.605E+09 2.718E+09 + 7.309E+07 3.310E+07 + 8.594E+07 2.916E+07 + 1.572E+07 1.859E+06 +
W/T/L 29/0/0 23/4/2 29/0/0 27/1/1

4.2 Performance Evaluation Criteria

The performance of HMRFO is evaluated using the following four criteria:

(1) Mean and standard deviation (std) of the optimization errors between the obtained optimal values and the known real optimal values. Since all objective functions are minimization problems, their minimum mean values (i.e., the best values) are highlighted in boldface.
(2) Non-parametric statistical tests, including the Wilcoxon rank-sum test [78-80], which compares the obtained p-value against the significance level α = 0.05 between the proposed algorithm and the compared algorithm. A p-value ≤ 0.05 indicates a significant difference between the two algorithms. The symbol "+" denotes that the proposed

Table 7 Experimental and statistical results of HMRFO and seven competitive algorithms on IEEE CEC2017 benchmark functions with 100
dimensions
HMRFO MRFO AWDO PSOGSA
Mean Std Mean Std Mean Std Mean Std

F1 7.041E+03 1.070E+04 4.745E+03 7.020E+03 ≈ 2.979E+10 8.275E+10 + 9.562E+09 2.731E+10 +


F3 1.282E+05 1.850E+04 8.837E+04 1.436E+04 − 4.102E+05 1.181E+05 + 1.686E+05 1.181E+05 ≈
F4 2.260E+02 4.708E+01 2.441E+02 4.546E+01 + 3.349E+03 1.154E+04 + 2.435E+04 9.289E+03 +
F5 5.001E+02 9.029E+01 7.939E+02 5.383E+01 + 1.123E+03 3.860E+02 + 7.885E+02 1.186E+02 +
F6 2.329E+01 1.013E+01 5.288E+01 6.535E+00 + 9.399E+01 2.316E+01 + 4.612E+01 5.629E+00 +
F7 7.957E+02 1.750E+02 2.044E+03 3.755E+02 + 2.650E+03 4.840E+02 + 2.523E+03 3.470E+02 +
F8 4.760E+02 8.640E+01 8.795E+02 9.688E+01 + 1.254E+03 3.895E+02 + 8.297E+02 1.428E+02 +
F9 7.764E+03 3.297E+03 2.095E+04 1.522E+03 + 6.267E+04 2.732E+04 + 2.651E+04 4.698E+03 +
F10 1.380E+04 1.381E+03 1.400E+04 1.143E+03 ≈ 2.638E+04 6.757E+03 + 1.477E+04 4.098E+03 ≈
F11 5.382E+02 1.223E+02 5.815E+02 9.974E+01 + 1.955E+05 8.247E+04 + 1.325E+05 5.401E+04 +
F12 9.133E+05 3.326E+05 1.114E+06 4.956E+05 + 7.813E+09 2.898E+10 + 2.861E+10 1.978E+10 +
F13 3.086E+03 3.114E+03 5.176E+03 5.464E+03 + 1.657E+09 5.891E+09 + 9.359E+07 5.607E+08 +
F14 1.574E+05 8.049E+04 1.210E+05 5.954E+04 − 1.730E+07 3.050E+07 + 5.640E+06 7.371E+06 +
F15 1.888E+03 2.126E+03 2.614E+03 2.981E+03 ≈ 4.192E+08 2.183E+09 + 5.979E+07 1.838E+08 +
F16 3.711E+03 7.617E+02 4.035E+03 6.447E+02 + 1.029E+04 4.420E+03 + 8.338E+03 1.984E+03 +
F17 2.740E+03 5.377E+02 3.069E+03 5.960E+02 + 3.464E+04 1.336E+05 + 3.701E+03 1.227E+03 +
F18 4.550E+05 1.976E+05 3.141E+05 1.330E+05 − 2.227E+07 4.413E+07 ≈ 1.566E+07 3.413E+07 +
F19 2.985E+03 3.830E+03 4.447E+03 5.895E+03 ≈ 5.661E+08 2.284E+09 + 1.746E+08 7.331E+08 +
F20 2.778E+03 4.567E+02 3.062E+03 5.765E+02 + 5.173E+03 9.592E+02 + 3.409E+03 7.814E+02 +
F21 6.334E+02 8.106E+01 9.622E+02 1.082E+02 + 1.682E+03 4.596E+02 + 1.331E+03 3.258E+02 +
F22 1.200E+04 7.819E+03 1.672E+04 1.765E+03 + 2.755E+04 5.569E+03 + 1.770E+04 4.489E+03 ≈
F23 1.042E+03 8.240E+01 1.380E+03 1.284E+02 + 3.070E+03 7.959E+02 + 2.974E+03 4.487E+02 +
F24 1.564E+03 1.808E+02 2.155E+03 2.506E+02 + 5.138E+03 1.302E+03 + 5.056E+03 8.685E+02 +
F25 7.874E+02 6.025E+01 7.732E+02 6.900E+01 ≈ 3.453E+03 6.374E+03 + 9.653E+03 3.146E+03 +
F26 1.195E+04 6.118E+03 1.620E+04 7.738E+03 + 2.558E+04 1.188E+04 + 2.842E+04 6.302E+03 +
F27 9.736E+02 1.126E+02 1.240E+03 2.246E+02 + 5.039E+03 1.892E+03 + 4.830E+03 1.014E+03 +
F28 5.620E+02 2.443E+01 5.728E+02 3.129E+01 + 4.755E+03 9.010E+03 + 1.558E+04 3.032E+03 +
F29 3.568E+03 5.398E+02 3.789E+03 5.135E+02 + 1.877E+04 2.814E+04 + 9.212E+03 2.101E+03 +
F30 5.998E+03 2.866E+03 7.896E+03 4.135E+03 + 1.806E+09 5.231E+09 + 8.556E+08 1.422E+09 +
W/T/L –/–/– 21/5/3 28/1/0 26/3/0

RSA GWO WOA BSO


Mean Std Mean Std Mean Std Mean Std

F1 2.435E+11 9.446E+09 + 3.177E+10 6.702E+09 + 3.994E+07 1.833E+07 + 4.694E+06 1.271E+06 +


F3 3.155E+05 1.683E+04 + 2.040E+05 2.320E+04 + 5.596E+05 1.836E+05 + 2.456E+04 1.908E+04 −
F4 7.517E+04 9.738E+03 + 2.445E+03 6.468E+02 + 6.117E+02 9.430E+01 + 2.820E+02 5.803E+01 +
F5 1.483E+03 4.743E+01 + 5.426E+02 5.583E+01 + 9.246E+02 9.402E+01 + 8.078E+02 7.808E+01 +
F6 1.072E+02 3.562E+00 + 2.800E+01 4.704E+00 + 7.966E+01 9.465E+00 + 6.510E+01 3.407E+00 +
F7 3.361E+03 1.066E+02 + 1.038E+03 1.046E+02 + 2.514E+03 1.719E+02 + 2.689E+03 2.749E+02 +
F8 1.636E+03 4.052E+01 + 5.462E+02 4.823E+01 + 1.091E+03 1.325E+02 + 9.136E+02 7.880E+01 +
F9 7.459E+04 6.599E+03 + 2.317E+04 9.458E+03 + 3.608E+04 9.728E+03 + 2.749E+04 2.976E+03 +
F10 2.976E+04 7.076E+02 + 1.345E+04 3.303E+03 − 1.982E+04 2.759E+03 + 1.546E+04 1.501E+03 +
F11 1.577E+05 2.500E+04 + 3.520E+04 1.069E+04 + 6.800E+03 2.349E+03 + 1.227E+03 1.384E+02 +
F12 1.670E+11 2.323E+10 + 4.837E+09 2.572E+09 + 6.098E+08 1.654E+08 + 7.418E+07 1.494E+07 +


Table 7 continued
RSA GWO WOA BSO
Mean Std Mean Std Mean Std Mean Std

F13 4.117E+10 8.719E+09 + 3.962E+08 3.537E+08 + 8.856E+04 3.315E+04 + 4.073E+04 1.085E+04 +


F14 6.200E+07 2.371E+07 + 3.597E+06 2.453E+06 + 1.732E+06 6.889E+05 + 3.031E+05 1.306E+05 +
F15 2.296E+10 6.884E+09 + 9.266E+07 2.321E+08 + 1.382E+05 3.507E+05 + 3.105E+04 1.328E+04 +
F16 1.972E+04 3.328E+03 + 3.721E+03 5.503E+02 ≈ 7.779E+03 1.542E+03 + 5.463E+03 7.726E+02 +
F17 6.727E+06 5.873E+06 + 2.708E+03 5.249E+02 ≈ 5.451E+03 1.004E+03 + 3.867E+03 4.811E+02 +
F18 9.293E+07 3.879E+07 + 3.226E+06 2.550E+06 + 2.322E+06 1.097E+06 + 4.686E+05 1.887E+05 ≈
F19 2.305E+10 7.182E+09 + 1.146E+08 1.809E+08 + 1.534E+07 6.739E+06 + 2.659E+06 1.254E+06 +
F20 5.128E+03 2.153E+02 + 2.374E+03 7.104E+02 − 4.347E+03 7.279E+02 + 3.724E+03 5.035E+02 +
F21 3.014E+03 2.490E+02 + 7.437E+02 5.625E+01 + 1.817E+03 1.908E+02 + 1.850E+03 2.182E+02 +
F22 3.130E+04 5.639E+02 + 1.566E+04 2.528E+03 ≈ 2.128E+04 2.443E+03 + 1.709E+04 1.272E+03 +
F23 2.962E+03 1.560E+02 + 1.109E+03 7.751E+01 + 2.431E+03 2.301E+02 + 3.246E+03 3.398E+02 +
F24 6.766E+03 2.287E+03 + 1.534E+03 8.054E+01 ≈ 3.460E+03 3.756E+02 + 3.615E+03 6.558E+02 +
F25 2.205E+04 2.229E+03 + 2.828E+03 5.136E+02 + 1.122E+03 7.378E+01 + 8.012E+02 4.237E+01 ≈
F26 4.470E+04 3.909E+03 + 9.794E+03 9.008E+02 − 2.894E+04 3.878E+03 + 2.629E+04 2.086E+03 +
F27 6.058E+03 1.730E+03 + 1.128E+03 8.608E+01 + 2.590E+03 7.884E+02 + 4.424E+03 1.269E+03 +
F28 2.727E+04 2.445E+03 + 4.041E+03 1.144E+03 + 9.169E+02 6.866E+01 + 6.088E+02 3.851E+01 +
F29 5.596E+05 4.382E+05 + 4.277E+03 5.183E+02 + 1.120E+04 1.728E+03 + 6.137E+03 6.884E+02 +
F30 3.889E+10 6.251E+09 + 4.819E+08 3.972E+08 + 1.826E+08 7.993E+07 + 1.187E+07 2.936E+06 +
W/T/L 29/0/0 22/4/3 29/0/0 26/2/1

algorithm is superior to its competitor, while the symbol "−" represents that the proposed algorithm is significantly worse than its competitor. A p-value > 0.05 indicates no significant difference between the two algorithms, which is recorded as the symbol "≈". "W/T/L" indicates how many times the proposed algorithm has won, tied and lost against its competitor, respectively. The Friedman test [81], another non-parametric statistical test, is also used. The mean values of the optimization errors are used as the test data. The smaller the Friedman rank, the better the performance of the algorithm. The minimum value is highlighted in boldface.
(3) Box-and-whisker diagrams to show the robustness and accuracy of the algorithm's solutions. The blue box's lower edge, red line, and upper edge indicate the first quartile, the second quartile (median), and the third quartile, respectively. The lines above and below the blue box indicate the maximum and minimum non-outliers, respectively. The red symbol "+" displays outliers. The height of the box represents the solution's fluctuation, and the median represents the average level of the solution.
(4) Convergence graphs to intuitively show the convergence speed and accuracy of the algorithm, and whether the improved algorithm jumps out of local optima.

4.3 Comparison for Competitive Algorithms

To evaluate the effectiveness and search performance of HMRFO, seven competitive algorithms are compared: MRFO [11], adaptive wind driven optimization (AWDO) [70], a hybrid algorithm that combines particle swarm optimization and the gravitational search algorithm (PSOGSA) [82], the reptile search algorithm (RSA) [12], the grey wolf optimizer (GWO) [13], the whale optimization algorithm (WOA) [17], and brain storm optimization (BSO) [15].
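The rank-sum classification in criterion (2) above, together with the W/T/L tally, can be sketched as below. This is a stdlib-only sketch using the normal approximation of the Wilcoxon rank-sum test, not the implementation used in the paper; `rank_sum_compare` is a hypothetical helper name.

```python
import math

def _average_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def rank_sum_compare(x, y, alpha=0.05):
    """Classify two error samples in the paper's convention: '+' means x
    (the proposed algorithm) is significantly better (smaller error),
    '−' significantly worse, '≈' no significant difference.
    Uses the normal approximation of the Wilcoxon rank-sum test."""
    n1, n2 = len(x), len(y)
    ranks = _average_ranks(list(x) + list(y))
    w = sum(ranks[:n1])                                # rank sum of sample x
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    if p > alpha:
        return "≈"
    return "+" if sum(x) / n1 < sum(y) / n2 else "−"

# One mark per benchmark function (error samples over repeated runs);
# W/T/L is simply the tally of the marks.
marks = [rank_sum_compare([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])]
wtl = (marks.count("+"), marks.count("≈"), marks.count("−"))
print(wtl)
```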


Among them, RSA, GWO, WOA and BSO are meta-heuristic algorithms based on swarm intelligence. The comparison is made on 29 benchmark functions from IEEE CEC2017 with dimensions of 10, 30, 50, and 100. AWDO is a variant of WDO that uses the covariance matrix adaptive evolutionary strategy (CMAES) to update its parameters adaptively. PSOGSA is a hybrid algorithm that combines the exploration of GSA with the exploitation of PSO. RSA is a meta-heuristic algorithm inspired by the predatory behavior of crocodiles. GWO, WOA, and BSO are swarm intelligence algorithms inspired by wolves, whales, and brainstorming, respectively. The parameter settings of these algorithms are shown in Table 2. The comparative experimental results are presented in Tables 4, 5, 6, 7, and 3.

In Table 4, HMRFO, MRFO, AWDO, PSOGSA, RSA, GWO, WOA and BSO achieved the best mean values on 23, 5, 0, 1, 0, 1, 0 and 2 functions, respectively. According to the W/T/L metric, HMRFO outperformed MRFO, AWDO, PSOGSA, RSA, GWO, WOA and BSO on 20, 28, 27, 29, 24, 29 and 25 functions, respectively, indicating that HMRFO performs well on functions with 10 dimensions.

In Table 5, HMRFO obtained the best mean values on 24 functions, more than any other algorithm. According to W/T/L, HMRFO surpassed MRFO, AWDO, PSOGSA, RSA, GWO, WOA and BSO on 23, 28, 27, 29, 26, 29 and 26 functions, respectively. This shows that HMRFO performs well on functions with 30 dimensions.

In Table 6, HMRFO, MRFO, AWDO, PSOGSA, RSA, GWO, WOA and BSO achieved the best mean values on 20, 3, 0, 1, 0, 4, 0 and 1 functions, respectively. W/T/L demonstrates that HMRFO significantly outperformed the others on 22, 28, 26, 29, 23, 29 and 27 functions, respectively, indicating that HMRFO maintains superior performance on functions with 50 dimensions.

In Table 7, HMRFO obtained the best mean values on 19 functions. According to W/T/L, HMRFO outperformed MRFO, AWDO, PSOGSA, RSA, GWO, WOA and BSO on 21, 28, 26, 29, 22, 29 and 26 functions, respectively. This result shows that HMRFO can also optimize functions with 100 dimensions. Therefore, HMRFO performs remarkably well across low-, medium-, high- and large-dimensional functions.

The Friedman test results in Table 3 further confirm that HMRFO has superior performance: HMRFO ranks best on the IEEE CEC2017 functions with 10, 30, 50 and 100 dimensions.

Box-and-whisker diagrams and convergence graphs are provided for four different types of functions from the IEEE CEC2017 benchmark suite. Figures 4 and 5 show box-and-whisker diagrams of the errors obtained by the eight algorithms on IEEE CEC2017 functions with 10, 30, 50, and 100 dimensions. The horizontal axis represents the eight algorithms, and the vertical axis represents the error values. From Figs. 4 and 5, it can be observed that the blue box of HMRFO is the flattest and its red median line is the lowest, indicating that HMRFO has superior and stable performance.

Figures 6 and 7 show convergence graphs of the average optimizations obtained by the eight algorithms on IEEE CEC2017 functions with 10, 30, 50, and 100 dimensions. The horizontal axis represents the number of iterations, and the vertical axis represents the log value of the average optimizations. From Figs. 6 and 7, it is clear that the curves of HMRFO are the lowest and its convergence speed is fast. Compared to the original MRFO, HMRFO finds better solutions, jumps out of local optima, avoids premature convergence, and improves solution quality with high optimization efficiency. This demonstrates that the improved method in this paper is effective and that population diversity has indeed been improved. The advantages of HMRFO are not fully reflected on unimodal functions; on the more complex multimodal, hybrid, and composition functions, HMRFO searches for smaller values and converges quickly, showing strong competitiveness.
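The Friedman ranking reported in Table 3 can be sketched as follows: on each benchmark function the algorithms are ranked by mean error (smaller is better, with ties sharing the average rank), and the ranks are then averaged per algorithm. The error matrix below is hypothetical, purely for illustration.

```python
def friedman_mean_ranks(errors):
    """errors[p][a] = mean error of algorithm a on problem p.
    Returns the average Friedman rank of each algorithm (smaller is better),
    using average ranks for ties."""
    n_alg = len(errors[0])
    totals = [0.0] * n_alg
    for row in errors:
        order = sorted(range(n_alg), key=lambda a: row[a])
        i = 0
        while i < n_alg:
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            for k in range(i, j + 1):
                totals[order[k]] += (i + j) / 2 + 1
            i = j + 1
    return [t / len(errors) for t in totals]

# Hypothetical mean errors of three algorithms on four benchmark functions
errors = [[1.0, 2.0, 3.0],
          [0.5, 0.6, 0.9],
          [2.0, 1.0, 3.0],
          [1.0, 4.0, 2.0]]
print(friedman_mean_ranks(errors))
```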


Fig. 4 Box-and-whisker diagrams of errors obtained by eight algorithms on IEEE CEC2017 functions (D = 10, 30)


Fig. 5 Box-and-whisker diagrams of errors obtained by eight algorithms on IEEE CEC2017 functions (D = 50, 100)


Fig. 6 Convergence graphs of average optimizations obtained by eight algorithms on IEEE CEC2017 functions (D = 10, 30)


Fig. 7 Convergence graphs of average optimizations obtained by eight algorithms on IEEE CEC2017 functions (D = 50, 100)


Table 8 CPU running time consumed by all tested algorithms on IEEE CEC2017 functions with 10, 30, 50 and 100 dimensions

Algorithms Dimension=10 Dimension=30 Dimension=50 Dimension=100
HMRFO 6.17E+02 4.17E+03 1.23E+04 6.59E+04
MRFO 5.86E+02 4.12E+03 1.22E+04 6.56E+04
AWDO 3.90E+03 1.25E+04 2.78E+04 1.04E+05
PSOGSA 6.42E+03 2.72E+04 5.68E+04 2.18E+05
RSA 2.93E+03 2.33E+04 6.59E+04 2.78E+05
GWO 6.60E+02 5.34E+03 1.57E+04 8.00E+04
WOA 4.80E+02 3.72E+03 1.10E+04 6.25E+04
BSO 3.54E+03 1.31E+04 2.81E+04 9.87E+04

4.4 Algorithm Complexity

The algorithm's complexity validates the usability and functionality of the proposed algorithm. Algorithms with high computational complexity are usually not studied further due to their high computation cost. Hence, an effective algorithm should not only possess excellent optimization ability but also exhibit fast convergence speed and low computational complexity. In this subsection, we report the central processing unit (CPU) running time consumed by all tested algorithms on the IEEE CEC2017 functions with 10, 30, 50, and 100 dimensions, and discuss the impact of the improved method in this paper on the algorithm complexity of MRFO. The maximum number of function evaluations is set to be the same for all algorithms. The CPU running time results are presented in Table 8. From the table, it can be observed that the computational time of WOA is the minimum. HMRFO and MRFO have similar computational time and require very little time. On the other hand, PSOGSA and RSA are the most complex algorithms and take the longest time.

Figure 8 is plotted to display the computational time more visibly. The horizontal axis represents the eight algorithms, and the vertical axis represents the log value of CPU time. From the bar graph, it is evident that the computational complexities of HMRFO and MRFO are close and reasonable; the effect of the improved method in this paper on the algorithm complexity of MRFO is negligible. The low computational complexity and minimal computation cost make HMRFO suitable for high-dimensional problems and engineering problems in the future.

Fig. 8 Bar graph of CPU running time consumed by all tested algorithms on IEEE CEC2017 functions with 10, 30, 50 and 100 dimensions

4.5 Real-World Optimization Problems with Large Dimensions

Four real-world optimization problems with large dimensions from the IEEE Congress on Evolutionary Computation 2011 (CEC2011) [83] were used to evaluate the practicality of HMRFO. These problems are the hydrothermal scheduling problem (HSP), the dynamic economic dispatch (DED) problem, the large-scale transmission pricing problem (LSTPP), and the static economic load dispatch (ELD) problem. The objective of HSP is to minimize the total fuel cost of thermal system operation, while DED is to determine the 24-h power generation schedule. LSTPP is a large-dimensional problem, as transmission pricing is affected by many factors. The objective function of ELD is to minimize the fuel cost of the generating units during a specific period of operation. The dimensions of HSP, DED, LSTPP, and ELD are 96, 120, 126, and 140, respectively; all of them are large-dimensional problems. HMRFO and MRFO were each run 51 times on each problem. The experimental results and running times are listed in Tables 9 and 10, respectively.

In Table 9, on the HSP, DED, and ELD problems, both HMRFO and MRFO found similar Best values. On the LSTPP problem, MRFO explored a better Best value, but over the 51 independent runs on each problem, HMRFO obtained smaller Mean values on all four problems. These …
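As an illustration of the kind of objective behind the ELD problem, a generic static economic load dispatch cost can be written as a quadratic fuel cost per generating unit plus a penalty for violating the power-balance constraint. This is a textbook-style sketch, not the exact CEC2011 formulation; all coefficients and the penalty weight below are hypothetical.

```python
def eld_cost(p, a, b, c, demand, penalty=1e6):
    """Static economic load dispatch objective (generic sketch):
    quadratic fuel cost sum(a_i + b_i*P_i + c_i*P_i^2) over the units,
    plus a penalty proportional to the power-balance violation."""
    fuel = sum(ai + bi * pi + ci * pi * pi for ai, bi, ci, pi in zip(a, b, c, p))
    imbalance = abs(sum(p) - demand)
    return fuel + penalty * imbalance

# Three hypothetical units meeting a 300 MW demand exactly
p = [100.0, 120.0, 80.0]
a, b, c = [10.0, 12.0, 8.0], [2.0, 1.8, 2.2], [0.01, 0.012, 0.009]
print(eld_cost(p, a, b, c, demand=300.0))
```

A metaheuristic such as HMRFO would search over the power outputs `p`; the penalty term steers infeasible schedules back toward the power-balance constraint.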


Table 9 Experimental results of HMRFO and MRFO on real-world optimization problems with large dimensions
HSP DED
Algorithms Mean Std Best Worst Mean Std Best Worst

HMRFO 9.62E+05 2.24E+04 9.41E+05 1.07E+06 5.11E+04 4.52E+02 5.01E+04 5.21E+04
MRFO 9.63E+05 6.37E+04 9.41E+05 1.37E+06 5.12E+04 4.96E+02 5.01E+04 5.22E+04
LSTPP ELD
Algorithms Mean Std Best Worst Mean Std Best Worst
HMRFO 3.20E+03 4.18E+03 8.97E+02 3.06E+04 1.99E+06 7.23E+04 1.92E+06 2.27E+06
MRFO 3.30E+03 1.87E+03 0.00E+00 8.16E+03 2.00E+06 1.73E+05 1.92E+06 2.78E+06

Table 10 CPU running time consumed by HMRFO and MRFO on real-world optimization problems with large dimensions
Algorithms HSP (Dimension = 96) DED (Dimension = 120) LSTPP (Dimension = 126) ELD (Dimension = 140)

HMRFO 3.05E+03 4.71E+03 1.81E+04 5.86E+03


MRFO 3.07E+03 4.60E+03 1.79E+04 5.82E+03
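CPU-time comparisons like those in Tables 8 and 10 can be collected with a simple process-time harness under an equal function-evaluation budget. The sketch below is illustrative; `dummy_search` is a stand-in callable, not one of the compared algorithms.

```python
import time

def cpu_seconds(optimizer, budget_evals, runs=3):
    """Average CPU seconds consumed by optimizer(budget_evals) over several
    runs, measured with process time so wall-clock noise is excluded."""
    total = 0.0
    for _ in range(runs):
        start = time.process_time()
        optimizer(budget_evals)
        total += time.process_time() - start
    return total / runs

def dummy_search(budget_evals):
    # Stand-in "algorithm": spends its evaluation budget on a sphere function.
    best = float("inf")
    for i in range(budget_evals):
        x = (i % 100) / 100.0
        best = min(best, x * x)
    return best

t = cpu_seconds(dummy_search, budget_evals=10_000)
print(f"{t:.4f} CPU seconds")
```

Fixing the evaluation budget rather than the iteration count is what makes such timings comparable across algorithms with different per-iteration costs.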

Table 11 Experimental and statistical results of HMRFO with PR1 and PR2 on IEEE CEC2017 benchmark functions with 30 dimensions, where PR1 = 0.8, PR2 = 0.6 is the main algorithm in the statistical results (W/T/L)
PR1 = 0.1, PR2 = 0.2 PR1 = 0.2, PR2 = 0.1 PR1 = 0.2, PR2 = 0.5 PR1 = 0.5, PR2 = 0.2
Mean Std Mean Std Mean Std Mean Std

F1 3.056E+03 3.654E+03 ≈ 3.298E+03 3.903E+03 ≈ 3.445E+03 3.433E+03 ≈ 3.279E+03 2.981E+03 ≈


F3 4.855E+01 4.104E+01 ≈ 7.150E+01 1.727E+02 ≈ 4.659E+01 3.560E+01 ≈ 6.950E+01 5.761E+01 ≈
F4 4.287E+01 3.908E+01 + 4.961E+01 3.671E+01 + 4.622E+01 3.542E+01 + 4.541E+01 3.881E+01 +
F5 6.947E+01 1.979E+01 + 6.588E+01 2.173E+01 ≈ 6.633E+01 2.252E+01 ≈ 6.692E+01 1.723E+01 +
F6 3.356E-01 4.296E-01 ≈ 3.603E-01 5.395E-01 ≈ 4.131E-01 8.427E-01 ≈ 2.582E-01 7.995E-01 ≈
F7 1.102E+02 2.608E+01 + 1.104E+02 2.654E+01 + 9.892E+01 2.197E+01 ≈ 1.057E+02 2.258E+01 +
F8 6.656E+01 2.312E+01 ≈ 6.715E+01 1.955E+01 ≈ 7.181E+01 1.839E+01 + 6.629E+01 1.964E+01 ≈
F9 5.833E+01 5.683E+01 + 4.803E+01 4.720E+01 ≈ 4.582E+01 5.827E+01 ≈ 5.172E+01 4.678E+01 ≈
F10 3.496E+03 7.011E+02 ≈ 3.607E+03 6.429E+02 + 3.555E+03 6.053E+02 + 3.451E+03 5.576E+02 ≈
F11 6.312E+01 3.732E+01 ≈ 6.238E+01 2.819E+01 ≈ 6.190E+01 2.651E+01 ≈ 5.939E+01 2.800E+01 ≈
F12 4.759E+04 2.632E+04 ≈ 4.173E+04 2.346E+04 ≈ 4.704E+04 4.220E+04 ≈ 5.166E+04 3.457E+04 ≈
F13 1.088E+04 1.089E+04 ≈ 1.070E+04 1.173E+04 ≈ 8.652E+03 9.860E+03 ≈ 1.479E+04 1.527E+04 +
F14 2.166E+03 2.577E+03 ≈ 1.995E+03 2.057E+03 + 1.849E+03 2.237E+03 ≈ 2.390E+03 2.853E+03 +
F15 2.943E+03 4.519E+03 ≈ 3.174E+03 4.003E+03 ≈ 3.980E+03 6.320E+03 ≈ 3.077E+03 4.658E+03 ≈
F16 6.911E+02 2.683E+02 ≈ 7.195E+02 2.878E+02 ≈ 7.331E+02 2.439E+02 ≈ 7.276E+02 3.106E+02 ≈
F17 2.141E+02 1.271E+02 ≈ 2.212E+02 1.419E+02 + 1.966E+02 1.197E+02 ≈ 2.457E+02 1.343E+02 +
F18 8.320E+04 3.672E+04 ≈ 9.168E+04 5.083E+04 ≈ 8.883E+04 6.425E+04 ≈ 8.332E+04 5.549E+04 ≈
F19 3.846E+03 3.297E+03 ≈ 4.486E+03 5.224E+03 ≈ 5.049E+03 5.319E+03 ≈ 4.434E+03 4.810E+03 ≈
F20 2.151E+02 9.852E+01 ≈ 2.331E+02 1.171E+02 ≈ 2.371E+02 1.155E+02 ≈ 2.669E+02 1.232E+02 +
F21 2.575E+02 1.923E+01 ≈ 2.564E+02 1.515E+01 ≈ 2.504E+02 1.438E+01 − 2.565E+02 1.808E+01 ≈
F22 1.004E+02 9.422E-01 ≈ 1.005E+02 1.219E+00 ≈ 1.003E+02 1.024E+00 ≈ 1.003E+02 9.447E-01 ≈
F23 4.259E+02 2.560E+01 + 4.218E+02 2.329E+01 ≈ 4.276E+02 2.718E+01 + 4.228E+02 3.054E+01 ≈
F24 4.891E+02 2.176E+01 + 4.900E+02 2.101E+01 + 4.882E+02 2.020E+01 + 4.803E+02 2.102E+01 ≈
F25 3.928E+02 1.556E+01 + 3.916E+02 1.199E+01 + 3.942E+02 1.545E+01 + 3.909E+02 1.251E+01 ≈
F26 1.360E+03 9.732E+02 ≈ 1.542E+03 9.596E+02 ≈ 1.479E+03 9.385E+02 ≈ 1.364E+03 8.864E+02 ≈
F27 5.397E+02 1.695E+01 ≈ 5.472E+02 1.777E+01 + 5.384E+02 1.572E+01 ≈ 5.411E+02 1.318E+01 ≈


Table 11 continued
PR1 = 0.1, PR2 = 0.2 PR1 = 0.2, PR2 = 0.1 PR1 = 0.2, PR2 = 0.5 PR1 = 0.5, PR2 = 0.2
Mean Std Mean Std Mean Std Mean Std

F28 3.331E+02 5.256E+01 ≈ 3.285E+02 4.887E+01 + 3.376E+02 5.510E+01 + 3.418E+02 5.599E+01 +


F29 7.083E+02 1.508E+02 ≈ 7.151E+02 1.928E+02 ≈ 7.138E+02 1.842E+02 ≈ 6.984E+02 1.596E+02 ≈
F30 5.275E+03 1.934E+03 ≈ 4.573E+03 1.465E+03 ≈ 4.869E+03 1.993E+03 ≈ 4.688E+03 2.070E+03 ≈
W/T/L 7/22/0 9/20/0 7/21/1 8/21/0
PR1 = 0.4, PR2 = 0.8 PR1 = 0.8, PR2 = 0.4 PR1 = 0.6, PR2 = 0.8 PR1 = 0.8, PR2 = 0.6
Mean Std Mean Std Mean Std Mean Std
F1 2.896E+03 3.527E+03 ≈ 3.451E+03 3.042E+03 + 4.096E+03 4.234E+03 + 2.602E+03 3.305E+03
F3 5.305E+01 4.718E+01 ≈ 6.186E+01 7.582E+01 ≈ 6.188E+01 5.009E+01 ≈ 5.474E+01 4.114E+01
F4 4.396E+01 4.006E+01 + 5.680E+01 3.731E+01 + 4.148E+01 3.476E+01 ≈ 3.092E+01 3.623E+01
F5 7.396E+01 2.337E+01 + 7.577E+01 2.653E+01 + 6.645E+01 2.251E+01 ≈ 6.126E+01 1.628E+01
F6 4.480E-01 8.515E-01 + 5.451E-01 9.480E-01 + 4.944E-01 9.082E-01 ≈ 2.106E-01 2.939E-01
F7 1.088E+02 3.124E+01 + 1.005E+02 2.778E+01 ≈ 1.031E+02 2.614E+01 ≈ 1.001E+02 2.681E+01
F8 6.625E+01 2.370E+01 ≈ 7.232E+01 2.111E+01 ≈ 6.828E+01 2.369E+01 ≈ 6.723E+01 2.349E+01
F9 5.782E+01 6.387E+01 ≈ 5.234E+01 7.442E+01 ≈ 5.948E+01 7.229E+01 ≈ 4.386E+01 4.830E+01
F10 3.484E+03 5.620E+02 + 3.513E+03 7.484E+02 + 3.339E+03 6.827E+02 ≈ 3.267E+03 5.664E+02
F11 6.417E+01 3.411E+01 ≈ 6.040E+01 3.058E+01 ≈ 5.890E+01 3.085E+01 ≈ 5.858E+01 2.750E+01
F12 3.931E+04 3.709E+04 − 5.194E+04 4.409E+04 ≈ 5.508E+04 4.321E+04 ≈ 4.639E+04 2.344E+04
F13 1.298E+04 1.265E+04 ≈ 1.180E+04 1.255E+04 ≈ 1.124E+04 1.310E+04 ≈ 8.655E+03 8.503E+03
F14 1.867E+03 2.008E+03 ≈ 2.390E+03 2.836E+03 + 2.633E+03 2.317E+03 + 1.377E+03 1.495E+03
F15 2.709E+03 4.113E+03 ≈ 3.419E+03 4.711E+03 ≈ 4.734E+03 6.682E+03 ≈ 2.870E+03 3.218E+03
F16 6.644E+02 2.217E+02 ≈ 7.067E+02 3.105E+02 ≈ 7.073E+02 2.797E+02 ≈ 7.597E+02 2.849E+02
F17 1.997E+02 1.338E+02 ≈ 2.238E+02 1.357E+02 + 1.960E+02 1.200E+02 ≈ 1.753E+02 1.090E+02
F18 9.869E+04 6.865E+04 ≈ 1.019E+05 6.833E+04 ≈ 8.999E+04 5.150E+04 ≈ 8.106E+04 4.836E+04
F19 8.120E+03 8.761E+03 ≈ 3.519E+03 5.308E+03 − 4.679E+03 5.585E+03 ≈ 5.560E+03 6.391E+03
F20 2.574E+02 1.063E+02 + 2.387E+02 1.102E+02 ≈ 2.637E+02 9.107E+01 + 2.041E+02 8.186E+01
F21 2.537E+02 1.594E+01 ≈ 2.604E+02 1.826E+01 ≈ 2.561E+02 1.547E+01 ≈ 2.558E+02 1.590E+01
F22 1.004E+02 1.024E+00 + 1.006E+02 1.386E+00 + 1.006E+02 1.265E+00 + 1.002E+02 6.694E–01
F23 4.247E+02 2.682E+01 ≈ 4.187E+02 2.719E+01 ≈ 4.177E+02 2.229E+01 ≈ 4.197E+02 2.649E+01
F24 4.813E+02 2.095E+01 ≈ 4.866E+02 2.112E+01 ≈ 4.946E+02 2.747E+01 + 4.838E+02 3.014E+01
F25 3.899E+02 1.071E+01 ≈ 3.883E+02 8.320E+00 ≈ 3.906E+02 9.752E+00 ≈ 3.875E+02 3.210E+00
F26 1.362E+03 9.054E+02 ≈ 1.446E+03 9.535E+02 ≈ 1.521E+03 9.356E+02 ≈ 1.284E+03 9.921E+02
F27 5.382E+02 1.723E+01 ≈ 5.397E+02 1.510E+01 ≈ 5.443E+02 1.847E+01 ≈ 5.413E+02 1.867E+01
F28 3.346E+02 5.182E+01 ≈ 3.344E+02 5.720E+01 + 3.274E+02 5.016E+01 ≈ 3.210E+02 4.581E+01
F29 7.303E+02 1.871E+02 ≈ 6.710E+02 1.543E+02 ≈ 6.734E+02 1.709E+02 ≈ 6.762E+02 1.605E+02
F30 4.807E+03 1.617E+03 ≈ 4.317E+03 1.602E+03 ≈ 4.840E+03 1.911E+03 ≈ 4.920E+03 2.032E+03
W/T/L 7/21/1 9/19/1 5/24/0 –/–/–

results demonstrate that HMRFO is practical for optimizing real-world problems with large dimensions. In Table 10, although all four problems are large-dimensional, both HMRFO and MRFO require minimal computational time to complete the optimization, illustrating the great potential of HMRFO for engineering problems and large-dimensional optimization problems.

5 Discussions

5.1 Analysis for Parameters of HMRFO

The parameters μ and σ used in the FW selection method of HMRFO were validated in [68]: the algorithm performs best when μ = 3/4 and σ = 1/12, and these values are still used in this paper.
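To make the role of these parameters concrete, the following is a minimal sketch of a weighted fitness-distance balance selection in the spirit of the FW method. The Gaussian form of the functional weight, the rank normalization, and the use of the distance to the current best individual are illustrative assumptions here; the exact formulation is given in [68].

```python
import math

def fw_select(population, fitnesses, mu=0.75, sigma=1/12):
    """Select a guide individual by a fitness-distance balance (FDB) score
    with a Gaussian functional weight over fitness ranks (minimization).

    Sketch only: the exact functional-weight form of [68] may differ;
    mu = 3/4 and sigma = 1/12 follow the best setting reported above.
    """
    n = len(population)
    best = min(range(n), key=lambda i: fitnesses[i])
    f_lo, f_hi = min(fitnesses), max(fitnesses)
    # Normalized fitness: 1 for the best individual, 0 for the worst.
    norm_f = [(f_hi - f) / (f_hi - f_lo + 1e-12) for f in fitnesses]
    # Normalized Euclidean distance to the current best solution.
    dist = [math.dist(x, population[best]) for x in population]
    d_hi = max(dist) + 1e-12
    norm_d = [d / d_hi for d in dist]
    # Rank of each individual by fitness (0 = best).
    order = sorted(range(n), key=lambda i: fitnesses[i])
    rank = {idx: k for k, idx in enumerate(order)}
    scores = []
    for i in range(n):
        r = rank[i] / (n - 1)  # normalized rank in [0, 1]
        w = math.exp(-0.5 * ((r - mu) / sigma) ** 2)  # functional weight (assumed Gaussian)
        scores.append(w * norm_f[i] + (1 - w) * norm_d[i])
    # The guide is the individual with the highest balanced score.
    return max(range(n), key=lambda i: scores[i])
```

The functional weight shifts the balance between the fitness term and the distance term across the ranked population, so the selected guide is not simply the greedy best individual.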


Table 12 Experimental and statistical results of HMRFO with PR1 and PR2 on IEEE CEC2017 benchmark functions with 30 dimensions, where PR1 = 0.8, PR2 = 0.6 is the main algorithm in statistical results (W/T/L) (continued)

PR1 = 0.8, PR2 = 0.9    PR1 = 0.9, PR2 = 0.8    PR1 = 1, PR2 = 1
Mean Std    Mean Std    Mean Std
F1 4.009E+03 3.921E+03 + 2.431E+03 3.074E+03 ≈ 3.085E+03 3.341E+03 ≈
F3 6.710E+01 7.916E+01 ≈ 6.477E+01 7.679E+01 ≈ 6.122E+01 5.178E+01 ≈
F4 5.843E+01 3.811E+01 + 4.271E+01 4.230E+01 ≈ 5.022E+01 3.770E+01 +
F5 6.953E+01 2.349E+01 + 6.251E+01 1.995E+01 ≈ 6.926E+01 2.211E+01 +
F6 3.681E–01 5.386E–01 ≈ 4.418E–01 8.583E–01 ≈ 3.715E–01 5.651E–01 ≈
F7 1.008E+02 2.130E+01 ≈ 1.030E+02 1.787E+01 ≈ 1.076E+02 2.763E+01 +
F8 6.972E+01 2.333E+01 ≈ 6.861E+01 2.076E+01 ≈ 6.592E+01 2.180E+01 ≈
F9 4.482E+01 5.930E+01 ≈ 4.631E+01 4.666E+01 ≈ 7.413E+01 9.869E+01 +
F10 3.432E+03 5.389E+02 ≈ 3.477E+03 6.161E+02 + 3.418E+03 6.343E+02 ≈
F11 6.873E+01 3.254E+01 ≈ 6.084E+01 3.037E+01 ≈ 6.591E+01 3.617E+01 ≈
F12 4.498E+04 2.755E+04 ≈ 4.578E+04 2.589E+04 ≈ 4.820E+04 2.502E+04 ≈
F13 9.086E+03 9.633E+03 ≈ 9.890E+03 1.031E+04 ≈ 1.205E+04 1.244E+04 ≈
F14 1.956E+03 2.015E+03 + 1.619E+03 1.810E+03 ≈ 2.554E+03 2.742E+03 +
F15 3.589E+03 4.950E+03 ≈ 3.113E+03 3.659E+03 ≈ 3.554E+03 5.830E+03 ≈
F16 7.081E+02 2.767E+02 ≈ 7.185E+02 2.378E+02 ≈ 7.284E+02 2.410E+02 ≈
F17 2.025E+02 1.244E+02 ≈ 1.867E+02 1.253E+02 ≈ 2.001E+02 1.422E+02 ≈
F18 9.391E+04 6.332E+04 ≈ 8.667E+04 4.880E+04 ≈ 9.631E+04 6.460E+04 ≈
F19 6.859E+03 6.109E+03 ≈ 4.658E+03 5.353E+03 ≈ 4.310E+03 5.152E+03 ≈
F20 2.443E+02 9.517E+01 + 2.657E+02 1.112E+02 + 2.535E+02 9.634E+01 +
F21 2.550E+02 1.468E+01 ≈ 2.582E+02 1.795E+01 ≈ 2.582E+02 2.021E+01 ≈
F22 1.009E+02 1.409E+00 + 1.004E+02 1.010E+00 ≈ 1.002E+02 8.361E–01 ≈
F23 4.239E+02 1.884E+01 + 4.221E+02 2.285E+01 ≈ 4.283E+02 2.869E+01 +
F24 4.816E+02 2.086E+01 ≈ 4.802E+02 2.444E+01 ≈ 4.897E+02 2.472E+01 +
F25 3.903E+02 1.189E+01 ≈ 3.896E+02 9.909E+00 ≈ 3.918E+02 1.369E+01 +
F26 1.286E+03 9.334E+02 ≈ 1.368E+03 8.819E+02 ≈ 1.476E+03 9.900E+02 ≈
F27 5.396E+02 1.350E+01 ≈ 5.451E+02 1.623E+01 ≈ 5.389E+02 1.427E+01 ≈
F28 3.243E+02 4.985E+01 + 3.366E+02 5.684E+01 + 3.379E+02 5.363E+01 +
F29 6.848E+02 1.582E+02 ≈ 7.280E+02 1.831E+02 ≈ 6.557E+02 1.640E+02 ≈
F30 4.620E+03 1.557E+03 ≈ 4.813E+03 2.347E+03 ≈ 4.726E+03 1.742E+03 ≈
W/T/L 8/21/0 3/26/0 10/19/0
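The +, ≈, and − marks and the W/T/L rows in these tables summarize pairwise comparisons on each function. A minimal sketch of producing such a tally, assuming a two-sided Wilcoxon rank-sum test (normal approximation, tie-averaged ranks) at a 0.05 significance level; the paper's exact statistical procedure may differ:

```python
import math

def ranksum_p(a, b):
    """Two-sided p-value of the Wilcoxon rank-sum test (normal approximation)."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + j + 1) / 2.0  # average of the 1-based ranks i+1 .. j (ties)
        for k in range(i, j):
            ranks[k] = avg
        i = j
    n1, n2 = len(a), len(b)
    r1 = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2.0))

def wtl(baseline, competitor, alpha=0.05):
    """Tally wins/ties/losses of the baseline against one competitor.

    Both arguments map a function name to the list of final error values
    over independent runs (minimization assumed).
    """
    w = t = l = 0
    for fn, a in baseline.items():
        b = competitor[fn]
        if ranksum_p(a, b) >= alpha:
            t += 1  # '≈': no statistically significant difference
        elif sum(a) / len(a) < sum(b) / len(b):
            w += 1  # baseline significantly better on this function
        else:
            l += 1  # baseline significantly worse
    return w, t, l
```

Running `wtl` over all 29 functions for each parameter combination reproduces a row of the kind reported as W/T/L in Tables 11 and 12.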

In the hierarchical structure of HMRFO, there are two layers that randomly select an individual from the FW to update the population. The selection of an individual from the FW is crucial for the overall performance of the algorithm. Specifically, the values of PR1 and PR2 can significantly affect the performance of HMRFO, and the optimal combination of these parameters can maximize its performance. As PR1 and PR2 both range from 0 to 1, eleven combinations of PR1 and PR2 are tested on the IEEE CEC2017 benchmark functions with 30 dimensions, displayed in Tables 11 and 12, where PR1 = 0.8, PR2 = 0.6 is the main algorithm in the statistical results (W/T/L). From the tables, according to W/T/L, HMRFO with PR1 = 0.8 and PR2 = 0.6 is better than the other ten combinations, and it has fourteen minimum mean values, the most of all parameter combinations. Therefore, HMRFO with PR1 = 0.8 and PR2 = 0.6 performs the best.

5.2 Analysis for Individuals per Layer of HMRFO

In the hierarchical structure of HMRFO introduced earlier, the first layer (L1) contains 60% of individuals, the second layer (L2) contains 30% of individuals, and the third layer (L3) contains 10% of individuals. In this section, we analyze the reasons behind this allocation. First, we set the second layer (L2) to 20%, 30%, and 40%, and the third layer (L3) to 5%, 10%, and 15%, respectively, so that L1 = 100% − L2 − L3, resulting in a total of nine combinations tested on the IEEE CEC2017 benchmark functions with 30 dimensions, presented in Tables 13 and 14. According to


Table 13 Experimental and statistical results of HMRFO with L1, L2 and L3 on IEEE CEC2017 benchmark functions with 30 dimensions, where L1 = 60%, L2 = 30% and L3 = 10% is the main algorithm in statistical results (W/T/L)

L1 = 75%, L2 = 20%, L3 = 5%    L1 = 70%, L2 = 20%, L3 = 10%    L1 = 65%, L2 = 20%, L3 = 15%    L1 = 65%, L2 = 30%, L3 = 5%
Mean Std    Mean Std    Mean Std    Mean Std

F1 2.974E+03 3.122E+03 ≈ 2.929E+03 3.313E+03 ≈ 3.041E+03 2.967E+03 ≈ 2.579E+03 3.007E+03 ≈


F3 8.123E+01 7.270E+01 + 5.866E+01 4.610E+01 ≈ 4.423E+01 4.448E+01 − 7.777E+01 6.171E+01 +
F4 4.870E+01 4.003E+01 + 5.072E+01 3.912E+01 + 4.211E+01 3.626E+01 ≈ 4.089E+01 4.247E+01 ≈
F5 6.284E+01 1.880E+01 ≈ 6.426E+01 1.892E+01 ≈ 7.503E+01 2.217E+01 + 6.452E+01 2.085E+01 ≈
F6 3.235E–01 5.977E–01 ≈ 3.325E–01 6.446E–01 ≈ 5.696E–01 9.779E–01 + 2.815E–01 3.996E–01 ≈
F7 9.428E+01 1.918E+01 ≈ 1.050E+02 2.519E+01 ≈ 1.091E+02 2.539E+01 + 9.175E+01 2.063E+01 ≈
F8 5.909E+01 1.842E+01 − 6.883E+01 2.261E+01 ≈ 7.478E+01 2.414E+01 + 5.974E+01 1.987E+01 ≈
F9 2.936E+01 2.457E+01 ≈ 4.105E+01 4.397E+01 ≈ 7.628E+01 1.033E+02 ≈ 3.240E+01 3.548E+01 ≈
F10 3.328E+03 5.753E+02 ≈ 3.416E+03 6.666E+02 ≈ 3.493E+03 5.344E+02 + 3.271E+03 6.757E+02 ≈
F11 6.817E+01 3.057E+01 ≈ 6.357E+01 3.089E+01 ≈ 6.223E+01 2.712E+01 ≈ 5.410E+01 2.512E+01 ≈
F12 3.930E+04 2.888E+04 − 4.615E+04 3.506E+04 ≈ 4.741E+04 2.888E+04 ≈ 4.071E+04 2.419E+04 ≈
F13 1.061E+04 1.146E+04 ≈ 1.298E+04 1.091E+04 + 1.202E+04 1.213E+04 ≈ 9.950E+03 1.044E+04 ≈
F14 2.533E+03 3.026E+03 + 2.198E+03 2.231E+03 + 2.572E+03 2.790E+03 + 1.715E+03 1.971E+03 ≈
F15 2.996E+03 4.015E+03 ≈ 4.131E+03 5.075E+03 ≈ 3.293E+03 4.540E+03 ≈ 2.086E+03 2.999E+03 ≈
F16 7.404E+02 2.609E+02 ≈ 7.079E+02 2.479E+02 ≈ 7.559E+02 2.660E+02 ≈ 6.750E+02 2.813E+02 ≈
F17 1.574E+02 1.161E+02 ≈ 2.103E+02 1.219E+02 + 2.146E+02 1.394E+02 ≈ 1.739E+02 1.115E+02 ≈
F18 9.381E+04 6.022E+04 ≈ 9.711E+04 5.288E+04 + 9.606E+04 7.198E+04 ≈ 7.572E+04 4.438E+04 ≈
F19 4.951E+03 4.116E+03 ≈ 5.916E+03 5.958E+03 ≈ 4.674E+03 4.966E+03 ≈ 4.430E+03 5.413E+03 ≈
F20 2.146E+02 1.094E+02 ≈ 2.395E+02 1.129E+02 ≈ 2.697E+02 1.238E+02 + 2.033E+02 7.803E+01 ≈
F21 2.476E+02 1.487E+01 − 2.565E+02 1.880E+01 ≈ 2.611E+02 1.981E+01 ≈ 2.504E+02 1.493E+01 ≈
F22 1.004E+02 1.039E+00 ≈ 1.004E+02 9.070E–01 ≈ 1.006E+02 1.317E+00 + 1.006E+02 1.272E+00 ≈
F23 4.130E+02 1.836E+01 ≈ 4.242E+02 2.278E+01 ≈ 4.257E+02 2.569E+01 + 4.152E+02 2.060E+01 ≈
F24 4.766E+02 2.031E+01 ≈ 4.781E+02 1.720E+01 ≈ 4.895E+02 2.264E+01 + 4.808E+02 2.358E+01 ≈
F25 3.909E+02 1.067E+01 + 3.895E+02 8.475E+00 ≈ 3.909E+02 1.324E+01 ≈ 3.892E+02 6.648E+00 ≈

F26 1.298E+03 8.100E+02 ≈ 1.475E+03 9.145E+02 ≈ 1.321E+03 9.832E+02 ≈ 1.305E+03 9.377E+02 ≈


F27 5.380E+02 1.656E+01 ≈ 5.361E+02 1.539E+01 ≈ 5.455E+02 1.962E+01 ≈ 5.422E+02 1.992E+01 ≈
F28 3.381E+02 5.839E+01 + 3.346E+02 5.533E+01 + 3.274E+02 5.192E+01 ≈ 3.425E+02 5.901E+01 +
F29 6.772E+02 1.635E+02 ≈ 7.156E+02 1.838E+02 ≈ 7.253E+02 1.793E+02 ≈ 6.534E+02 1.554E+02 ≈
F30 4.787E+03 2.050E+03 ≈ 4.708E+03 1.801E+03 ≈ 4.782E+03 2.236E+03 ≈ 5.000E+03 2.295E+03 ≈

W/T/L 5/21/3 6/23/0 10/18/1 2/27/0


Table 14 Experimental and statistical results of HMRFO with L1, L2 and L3 on IEEE CEC2017 benchmark functions with 30 dimensions, where L1 = 60%, L2 = 30% and L3 = 10% is the main algorithm in statistical results (W/T/L) (continued)

L1 = 60%, L2 = 30%, L3 = 10%    L1 = 55%, L2 = 30%, L3 = 15%    L1 = 55%, L2 = 40%, L3 = 5%    L1 = 50%, L2 = 40%, L3 = 10%    L1 = 45%, L2 = 40%, L3 = 15%
Mean Std    Mean Std    Mean Std    Mean Std    Mean Std

F1 2.602E+03 3.305E+03 2.920E+03 3.473E+03 ≈ 3.197E+03 4.225E+03 ≈ 3.276E+03 3.671E+03 ≈ 2.585E+03 3.123E+03 ≈
F3 5.474E+01 4.114E+01 4.888E+01 6.354E+01 − 8.377E+01 6.731E+01 + 5.501E+01 4.339E+01 ≈ 4.106E+01 5.449E+01 −
F4 3.092E+01 3.623E+01 3.956E+01 3.779E+01 ≈ 5.684E+01 3.657E+01 + 3.786E+01 3.842E+01 ≈ 4.065E+01 3.671E+01 ≈
F5 6.126E+01 1.628E+01 7.669E+01 2.107E+01 + 5.785E+01 1.283E+01 ≈ 6.596E+01 2.381E+01 ≈ 7.811E+01 2.661E+01 +
F6 2.106E–01 2.939E–01 5.930E–01 7.819E–01 + 3.286E–01 6.273E–01 ≈ 4.189E–01 5.528E–01 + 4.019E–01 1.244E+00 ≈
F7 1.001E+02 2.681E+01 1.116E+02 2.981E+01 + 9.590E+01 2.008E+01 ≈ 1.043E+02 2.315E+01 ≈ 1.076E+02 2.134E+01 +
F8 6.723E+01 2.349E+01 8.005E+01 2.812E+01 + 6.219E+01 2.142E+01 ≈ 6.772E+01 2.478E+01 ≈ 7.642E+01 2.365E+01 +
F9 4.386E+01 4.830E+01 7.456E+01 9.387E+01 ≈ 4.425E+01 7.384E+01 ≈ 5.284E+01 5.385E+01 ≈ 8.721E+01 1.308E+02 ≈
F10 3.267E+03 5.664E+02 3.364E+03 6.156E+02 ≈ 3.403E+03 6.650E+02 ≈ 3.510E+03 6.165E+02 + 3.653E+03 5.328E+02 +

F11 5.858E+01 2.750E+01 7.072E+01 3.444E+01 + 7.149E+01 3.256E+01 + 6.081E+01 2.967E+01 ≈ 6.975E+01 3.149E+01 +
F12 4.639E+04 2.344E+04 4.700E+04 2.177E+04 ≈ 4.734E+04 3.081E+04 ≈ 5.107E+04 2.793E+04 ≈ 4.442E+04 3.261E+04 ≈
F13 8.655E+03 8.503E+03 1.266E+04 1.331E+04 ≈ 1.082E+04 9.692E+03 ≈ 1.101E+04 1.124E+04 ≈ 1.286E+04 1.224E+04 ≈
F14 1.377E+03 1.495E+03 1.936E+03 2.010E+03 + 1.369E+03 1.687E+03 ≈ 1.419E+03 1.263E+03 ≈ 1.856E+03 2.396E+03 ≈
F15 2.870E+03 3.218E+03 3.752E+03 4.701E+03 ≈ 1.762E+03 2.105E+03 ≈ 2.555E+03 4.016E+03 ≈ 4.185E+03 6.284E+03 ≈


F16 7.597E+02 2.849E+02 7.806E+02 2.937E+02 ≈ 6.715E+02 2.768E+02 ≈ 7.019E+02 2.480E+02 ≈ 7.868E+02 2.638E+02 ≈
F17 1.753E+02 1.090E+02 2.330E+02 1.410E+02 + 1.917E+02 1.286E+02 ≈ 2.193E+02 1.359E+02 + 2.344E+02 1.400E+02 +
F18 8.106E+04 4.836E+04 9.746E+04 6.031E+04 + 7.702E+04 3.208E+04 ≈ 9.235E+04 6.389E+04 ≈ 1.150E+05 1.073E+05 ≈
F19 5.560E+03 6.391E+03 7.012E+03 6.703E+03 ≈ 4.307E+03 5.042E+03 ≈ 4.549E+03 4.704E+03 ≈ 5.373E+03 6.429E+03 ≈
F20 2.041E+02 8.186E+01 2.732E+02 1.267E+02 + 2.072E+02 1.060E+02 ≈ 2.447E+02 1.056E+02 ≈ 2.769E+02 1.103E+02 +
F21 2.558E+02 1.590E+01 2.645E+02 1.986E+01 + 2.507E+02 1.607E+01 − 2.565E+02 1.907E+01 ≈ 2.564E+02 1.625E+01 ≈
F22 1.002E+02 6.694E–01 1.005E+02 1.124E+00 ≈ 1.001E+02 4.816E–01 ≈ 1.005E+02 1.124E+00 ≈ 1.005E+02 1.222E+00 ≈
F23 4.197E+02 2.649E+01 4.336E+02 3.233E+01 + 4.197E+02 2.789E+01 ≈ 4.179E+02 2.566E+01 ≈ 4.281E+02 2.645E+01 +
F24 4.838E+02 3.014E+01 4.881E+02 2.465E+01 ≈ 4.732E+02 1.900E+01 ≈ 4.842E+02 2.604E+01 ≈ 4.936E+02 2.971E+01 +
F25 3.875E+02 3.210E+00 3.933E+02 1.413E+01 + 3.899E+02 8.447E+00 + 3.895E+02 8.682E+00 ≈ 3.912E+02 1.184E+01 ≈
F26 1.284E+03 9.921E+02 1.560E+03 9.796E+02 ≈ 1.162E+03 8.866E+02 ≈ 1.329E+03 9.489E+02 ≈ 1.760E+03 1.124E+03 +
F27 5.413E+02 1.867E+01 5.417E+02 1.560E+01 ≈ 5.392E+02 1.545E+01 ≈ 5.375E+02 1.621E+01 ≈ 5.421E+02 1.861E+01 ≈
F28 3.210E+02 4.581E+01 3.246E+02 4.613E+01 ≈ 3.284E+02 4.759E+01 + 3.369E+02 5.545E+01 + 3.435E+02 5.873E+01 +

F29 6.762E+02 1.605E+02 7.055E+02 1.386E+02 ≈ 6.668E+02 1.671E+02 ≈ 6.778E+02 1.396E+02 ≈ 7.629E+02 1.987E+02 +
F30 4.920E+03 2.032E+03 4.897E+03 1.914E+03 ≈ 4.640E+03 1.633E+03 ≈ 4.514E+03 1.405E+03 ≈ 4.439E+03 1.628E+03 ≈

W/T/L -/-/- 12/16/1 5/23/1 4/25/0 12/16/1
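The allocations compared in Tables 13 and 14 split the population into three layers. A minimal sketch of such a split, assuming (as an illustration, not the authors' exact scheme) that layers are filled in fitness-rank order for a minimization problem:

```python
def partition_layers(fitnesses, shares=(0.6, 0.3, 0.1)):
    """Split individuals into hierarchical layers L1/L2/L3.

    Returns three lists of population indices. Layer sizes follow the
    given shares (default 60%/30%/10%, the best setting in Table 14);
    rank-based filling is an illustrative assumption (minimization).
    """
    order = sorted(range(len(fitnesses)), key=lambda i: fitnesses[i])
    n = len(order)
    n1 = round(shares[0] * n)
    n2 = round(shares[1] * n)
    return order[:n1], order[n1:n1 + n2], order[n1 + n2:]
```

With the default shares, a population of 10 yields layers of 6, 3, and 1 individuals, matching the 60%/30%/10% allocation that performs best above.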


Fig. 9 Population diversity on F3, F4, F16 and F29 (D = 30)
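The curves in Fig. 9 are produced by the diversity measure of Eqs. (15) and (16): the average distance of the individuals to the mean point x̄, normalized by the largest pairwise distance in the population. A minimal sketch of that computation:

```python
import math
from itertools import combinations

def diversity(population):
    """Div(x) of Eq. (15): mean distance to the mean point (Eq. (16)),
    normalized by the largest pairwise distance in the population."""
    n = len(population)
    dim = len(population[0])
    # Eq. (16): mean point of the population.
    mean_point = [sum(x[d] for x in population) / n for d in range(dim)]
    # Normalizer: the largest pairwise distance (population diameter).
    diameter = max(math.dist(p, q) for p, q in combinations(population, 2))
    # Eq. (15): average normalized distance to the mean point.
    return sum(math.dist(x, mean_point) for x in population) / (n * diameter)
```

A larger value indicates a more spread-out population; evaluating it once per iteration yields curves like those in Fig. 9.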

the W/T/L results in the tables, HMRFO with L1 = 60%, L2 = 30%, and L3 = 10% performs the best.

5.3 Analysis of Population Diversity

The FW selection method and hierarchical structure presented in this paper can enhance the population diversity of the MRFO algorithm. To better visualize the population diversity of HMRFO and MRFO, the following equations, taken from [84], are used to calculate it:

\mathrm{Div}(x) = \frac{1}{N} \sum_{i=1}^{N} \frac{\lVert x_i - \bar{x} \rVert}{\max_{1 \le i, j \le N} \lVert x_i - x_j \rVert}, \quad (15)

\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i, \quad (16)

where N is the population size and x̄ is the mean point.

To provide a clear understanding of how the proposed method compares with existing methods in terms of diversity, we also included the latest variant of MRFO, fractional-order Caputo manta ray foraging optimization (FCMRFO) [85], and evaluated the diversity changes of unimodal function F3, multimodal function F4, hybrid function F16, and composition function F29. Figure 9 shows the population diversity of HMRFO, MRFO and FCMRFO on these four functions with 30 dimensions. The figure indicates that HMRFO has a higher population diversity than MRFO and FCMRFO on F3, F16, and F29, suggesting that the proposed method can effectively improve diversity. On F4, at the beginning of the iteration, the algorithm focuses on exploration, resulting in higher population diversity for HMRFO than MRFO and FCMRFO. In the late stage of the iteration, the algorithm focuses on exploitation, leading to lower population diversity of HMRFO than FCMRFO, but still higher than that of MRFO. Thus, HMRFO can perform effective search and avoid being trapped in local optima.

6 Conclusion and Future Work

In this paper, we propose a hierarchical manta ray foraging optimization with weighted fitness-distance balance selection (HMRFO) by combining a hierarchical structure with the latest improved selection method. The proposed method aims to increase population diversity to solve the problem of MRFO with premature convergence and trapping in local optima. To verify the performance of HMRFO, we compare it with MRFO and six state-of-the-art algorithms on IEEE CEC2017 functions. The experimental results demonstrate that HMRFO has superior performance and can find better solutions, escape local optima, and converge fast, indicating that the proposed method effectively increases population diversity. In terms of algorithm complexity, HMRFO and MRFO have similar computational time, suggesting that the added improved method has little effect on the algorithm complexity of MRFO. We also apply HMRFO and MRFO to optimize large-dimensional real-world problems and find that HMRFO has good practicality, especially for large-dimensional problems, as it takes less time and has low computation cost. This is valuable information for studying large-dimensional optimization problems. Finally, the curves of population diversity of HMRFO and MRFO on four different types of problems from IEEE CEC2017 further confirm that the improved method in this paper can successfully enrich population diversity.

After conducting experiments, we have discovered the following two advantages of HMRFO:

(1) The incorporation of FW and the hierarchical structure significantly enhances the population diversity of HMRFO. This results in the algorithm being able to escape local optima, avoid premature convergence, and improve the quality of solutions by considering different solutions in the search space.
(2) HMRFO has a fast convergence rate and low computational complexity, making it a cost-effective approach to optimize large-dimensional problems.

In future work, the following studies could be considered:

(1) The number of individuals per layer in this study is fixed and may have limitations as it may perform differently on different problems. In the future, the hierarchical structure could be further improved by incorporating mutual information interaction. For example, the number of individuals in each layer can be dynamically adjusted based on some evaluation metric.
(2) The FW selection method used in this paper could be applied to improve other meta-heuristic algorithms.
(3) HMRFO could be applied to tasks such as solar photovoltaic parameter estimation, dendritic neural models, and multi-objective optimization.

Acknowledgements This work was mainly supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant JP22H03643, Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) under Grant JPMJSP2145, and JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation under Grant JPMJFS2115.

Author Contributions ZT: methodology, software, writing—original draft preparation. KW: methodology, software. ST: methodology, software. YT: methodology, software, writing—reviewing and editing. RW: writing—reviewing and editing. SG: conceptualization, methodology, software, supervision, writing—review and editing. All authors read and approved the final manuscript.

Funding This research was partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant JP22H03643, Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) under Grant JPMJSP2145, and JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation under Grant JPMJFS2115.

Data Availability Related data and material can be found at https://toyamaailab.github.io.

Declarations

Conflict of Interest The authors declare no conflict of interest.

Ethics Approval and Consent to Participate Not applicable.

Consent for Publication Not applicable.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

1. Kramer, O.: Genetic Algorithm Essentials, vol. 679. Springer (2017)

2. Beyer, H.-G., Schwefel, H.-P.: Evolution strategies-a comprehensive introduction. Nat. Comput. 1(1), 3–52 (2002)
3. Price, K.V.: Differential evolution. In: Zelinka, I., Snášel, V., Abraham, A. (eds.) Handbook of Optimization. Intelligent Systems Reference Library, vol. 38. Springer, Berlin, Heidelberg (2013)
4. Moscato, P., Mendes, A., Berretta, R.: Benchmarking a memetic algorithm for ordering microarray data. Biosystems 88(1), 56–75 (2007)
5. De Jong, K.: Evolutionary computation: a unified approach. In: Proceedings of the 2016 Genetic and Evolutionary Computation Conference Companion, pp. 185–199 (2016)
6. Passino, K.M.: Bacterial foraging optimization. Int. J. Swarm Intell. Res. 1(1), 1–16 (2010)
7. Meng, Z., Pan, J.-S.: Monkey king evolution: a new memetic evolutionary algorithm and its application in vehicle fuel consumption optimization. Knowl.-Based Syst. 97, 144–157 (2016)
8. Uymaz, S.A., Tezel, G., Yel, E.: Artificial algae algorithm (AAA) for nonlinear global optimization. Appl. Soft Comput. 31, 153–171 (2015)
9. Yang, X.-S., Gandomi, A.H.: Bat algorithm: a novel approach for global engineering optimization. Eng. Comput. 29(5), 464–483 (2012)
10. Dasgupta, D.: Artificial Immune Systems and their Applications. Springer Science & Business Media (2012)
11. Zhao, W., Zhang, Z., Wang, L.: Manta ray foraging optimization: an effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 87, 103300 (2020)
12. Abualigah, L., Elaziz, M.A., Sumari, P., Geem, Z.W., Gandomi, A.H.: Reptile search algorithm (RSA): a nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 191, 116158 (2022)
13. Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014)
14. Wang, D., Tan, D., Liu, L.: Particle swarm optimization algorithm: an overview. Soft. Comput. 22(2), 387–408 (2018)
15. Shi, Y.: Brain storm optimization algorithm. In: International Conference in Swarm Intelligence, pp. 303–309. Springer (2011)
16. Shadravan, S., Naji, H.R., Bardsiri, V.K.: The sailfish optimizer: a novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng. Appl. Artif. Intell. 80, 20–34 (2019)
17. Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016)
18. Dorigo, M., Stützle, T.: Ant colony optimization: overview and recent advances. In: Gendreau, M., Potvin, J.Y. (eds.) Handbook of Metaheuristics. International Series in Operations Research & Management Science, vol. 146. Springer, Boston, MA (2019)
19. Dhiman, G., Kumar, V.: Spotted hyena optimizer: a novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 114, 48–70 (2017)
20. Yang, X.-S.: Firefly algorithm, Lévy flights and global optimization. In: Bramer, M., Ellis, R., Petridis, M. (eds.) Research and Development in Intelligent Systems XXVI. Springer, London (2010)
21. Połap, D., Woźniak, M.: Red fox optimization algorithm. Expert Syst. Appl. 166, 114107 (2021)
22. Abualigah, L., Shehab, M., Alshinwan, M., Alabool, H.: Salp swarm algorithm: a comprehensive survey. Neural Comput. Appl. 32(15), 11195–11215 (2020)
23. Cuevas, E., Cienfuegos, M., Zaldívar, D., Pérez-Cisneros, M.: A swarm optimization algorithm inspired in the behavior of the social-spider. Expert Syst. Appl. 40(16), 6374–6384 (2013)
24. Fausto, F., Cuevas, E., Valdivia, A., González, A.: A global optimization algorithm inspired in the behavior of selfish herds. Biosystems 160, 39–55 (2017)
25. Rashedi, E., Nezamabadi-Pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
26. Bayraktar, Z., Komurcu, M., Werner, D.H.: Wind driven optimization (WDO): a novel nature-inspired optimization algorithm and its application to electromagnetics. In: 2010 IEEE Antennas and Propagation Society International Symposium, pp. 1–4. IEEE (2010)
27. Kaveh, A., Bakhshpoori, T.: Water evaporation optimization: a novel physically inspired optimization algorithm. Comput. Struct. 167, 69–85 (2016)
28. Zhao, W., Wang, L., Zhang, Z.: A novel atom search optimization for dispersion coefficient estimation in groundwater. Futur. Gener. Comput. Syst. 91, 601–610 (2019)
29. Hashim, F.A., Houssein, E.H., Mabrouk, M.S., Al-Atabany, W., Mirjalili, S.: Henry gas solubility optimization: a novel physics-based algorithm. Futur. Gener. Comput. Syst. 101, 646–667 (2019)
30. Doğan, B., Ölmez, T.: A new metaheuristic for numerical function optimization: vortex search algorithm. Inf. Sci. 293, 125–145 (2015)
31. Rao, R.V., Savsani, V.J., Balic, J.: Teaching-learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems. Eng. Optim. 44(12), 1447–1462 (2012)
32. Gajawada, S.: Entrepreneur: artificial human optimization. Trans. Mach. Learn. Artif. Intell. 4(6), 64–70 (2016)
33. Moosavi, S.H.S., Bardsiri, V.K.: Poor and rich optimization algorithm: a new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 86, 165–181 (2019)
34. Huan, T.T., Kulkarni, A.J., Kanesan, J., Huang, C.J., Abraham, A.: Ideology algorithm: a socio-inspired optimization methodology. Neural Comput. Appl. 28(1), 845–876 (2017)
35. Punnathanam, V., Kotecha, P.: Yin-yang-pair optimization: a novel lightweight optimization algorithm. Eng. Appl. Artif. Intell. 54, 62–79 (2016)
36. Chen, C.L.P., Zhang, T., Chen, L., Tam, S.C.: I-ching divination evolutionary algorithm and its convergence analysis. IEEE Trans. Cybern. 47(1), 2–13 (2017)
37. Ezugwu, A.E., Shukla, A.K., Nath, R., Akinyelu, A.A., Agushaka, J.O., Chiroma, H., Muhuri, P.K.: Metaheuristics: a comprehensive overview and classification along with bibliometric analysis. Artif. Intell. Rev. 54(6), 4237–4316 (2021)
38. Tang, J., Liu, G., Pan, Q.: A review on representative swarm intelligence algorithms for solving optimization problems: applications and trends. IEEE/CAA J. Autom. Sin. 8(10), 1627–1643 (2021)
39. Hare, W., Nutini, J., Tesfamariam, S.: A survey of non-gradient optimization methods in structural engineering. Adv. Eng. Softw. 59, 19–28 (2013)
40. Abualigah, L., Diabat, A.: Advances in sine cosine algorithm: a comprehensive survey. Artif. Intell. Rev. 54(4), 2567–2608 (2021)
41. Fonseca, C.M., Fleming, P.J.: An overview of evolutionary algorithms in multiobjective optimization. Evol. Comput. 3(1), 1–16 (1995)
42. Krause, J., Cordeiro, J., Parpinelli, R.S., Lopes, H.S.: A survey of swarm algorithms applied to discrete optimization problems. In: Swarm Intelligence and Bio-Inspired Computation, pp. 169–191. Elsevier (2013)
43. Biswas, A., Mishra, K.K., Tiwari, S., Misra, A.K.: Physics-inspired optimization algorithms: a survey. J. Optim. 2013, Article ID 438152. https://doi.org/10.1155/2013/438152
44. Kosorukoff, A.: Human based genetic algorithm. In: 2001 IEEE International Conference on Systems, Man and Cybernetics. e-Systems and e-Man for Cybernetics in Cyberspace (Cat. No. 01CH37236), vol. 5, pp. 3464–3469. IEEE (2001)
45. Eiben, A.E., Smith, J.: From evolutionary computation to the evolution of things. Nature 521(7553), 476–482 (2015)
46. Boussaïd, I., Lepagnot, J., Siarry, P.: A survey on optimization metaheuristics. Inf. Sci. 237, 82–117 (2013)


47. Yang, Y., Lei, Z., Wang, Y., Zhang, T., Peng, C., Gao, S.: Improving dendritic neuron model with dynamic scale-free network-based differential evolution. IEEE/CAA J. Autom. Sin. 9(1), 99–110 (2022)
48. Hong, W.-J., Yang, P., Tang, K.: Evolutionary computation for large-scale multi-objective optimization: a decade of progresses. Int. J. Autom. Comput. 18, 155–169 (2021)
49. Jiang, Y., Luo, Q., Wei, Y., Abualigah, L., Zhou, Y.: An efficient binary gradient-based optimizer for feature selection. Math. Biosci. Eng. 18(4), 3813–3854 (2021)
50. Zhao, Z., Liu, S., Zhou, M.C., Abusorrah, A.: Dual-objective mixed integer linear program and memetic algorithm for an industrial group scheduling problem. IEEE/CAA J. Autom. Sin. 8(6), 1199–1209 (2020)
51. Yousri, D., Elaziz, M.A., Abualigah, L., Oliva, D., Al-qaness, M.A.A., Ewees, A.A.: COVID-19 X-ray images classification based on enhanced fractional-order cuckoo search optimizer using heavy-tailed distributions. Appl. Soft Comput. 101, 107052 (2021)
52. Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nat. Mach. Intell. 3(1), 9–15 (2021)
53. Ji, J., Gao, S., Cheng, J., Tang, Z., Todo, Y.: An approximate logic neuron model with a dendritic structure. Neurocomputing 173, 1775–1783 (2016)
54. Cuevas, E., Gálvez, J., Toski, M., Avila, K.: Evolutionary-mean shift algorithm for dynamic multimodal function optimization. Appl. Soft Comput. 113, 107880 (2021)
55. Rodríguez, A., Camarena, O., Cuevas, E., Aranguren, I., Valdivia-G, A., Morales-Castañeda, B., Zaldívar, D., Pérez-Cisneros, M.: Group-based synchronous-asynchronous grey wolf optimizer. Appl. Math. Model. 93, 226–243 (2021)
56. Díaz, P., Pérez-Cisneros, M., Cuevas, E., Avalos, O., Gálvez, J., Hinojosa, S., Zaldivar, D.: An improved crow search algorithm applied to energy problems. Energies 11(3), 571 (2018)
57. Izci, D., Ekinci, S., Eker, E., Kayri, M.: Improved manta ray foraging optimization using opposition-based learning for optimization problems. In: 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), pp. 1–6. IEEE (2020)
58. Feng, J., Luo, X., Gao, M., Abbas, A., Yi-Peng, X., Pouramini, S.: Minimization of energy consumption by building shape optimization using an improved manta-ray foraging optimization algorithm. Energy Rep. 7, 1068–1078 (2021)
59. Sheng, B., Pan, T., Luo, Y., Jermsittiparsert, K.: System identification of the PEMFCs based on balanced manta-ray foraging optimization algorithm. Energy Rep. 6, 2887–2896 (2020)
60. Micev, M., Ćalasan, M., Ali, Z.M., Hasanien, H.M., Abdel Aleem, S.H.E.: Optimal design of automatic voltage regulation controller using hybrid simulated annealing - manta ray foraging optimization algorithm. Ain Shams Eng. J. 12(1), 641–657 (2021)
61. Elaziz, M.A., Yousri, D., Al-qaness, M.A.A., AbdelAty, A.M.,
66. Lynn, N., Suganthan, P.N.: Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 24, 11–24 (2015)
67. Wang, Y., Gao, S., Yu, Y., Cai, Z., Wang, Z.: A gravitational search algorithm with hierarchy and distributed framework. Knowl.-Based Syst. 218, 106877 (2021)
68. Wang, K., Tao, S., Wang, R.-L., Todo, Y., Gao, S.: Fitness-distance balance with functional weights: a new selection method for evolutionary algorithms. IEICE Trans. Inform. Syst. E104-D(10), 1789–1792 (2021)
69. Aras, S., Gedikli, E., Kahraman, H.T.: A novel stochastic fractal search algorithm with fitness-distance balance for global numerical optimization. Swarm Evol. Comput. 61, 100821 (2021)
70. Bayraktar, Z., Komurcu, M.: Adaptive wind driven optimization. In: Proceedings of the 9th EAI International Conference on Bio-Inspired Information and Communications Technologies (Formerly BIONETICS), pp. 124–127. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering) (2016)
71. Tang, Z., Tao, S., Wang, K., Bo, L., Todo, Y., Gao, S.: Chaotic wind driven optimization with fitness distance balance strategy. Int. J. Comput. Intell. Syst. 15(1), 46 (2022)
72. Zhao, W., Zhang, H., Zhang, Z., Zhang, K., Wang, L.: Parameters tuning of fractional-order proportional integral derivative in water turbine governing system using an effective SDO with enhanced fitness-distance balance and adaptive local search. Water 14(19), 3035 (2022)
73. Azadifar, S., Rostami, M., Berahmand, K., Moradi, P., Oussalah, M.: Graph-based relevancy-redundancy gene selection method for cancer diagnosis. Comput. Biol. Med. 147, 105766 (2022)
74. Rostami, M., Oussalah, M., Farrahi, V.: A novel time-aware food recommender-system based on deep learning and graph clustering. IEEE Access 10, 52508–52524 (2022)
75. Abualigah, L., Diabat, A., Geem, Z.W.: A comprehensive survey of the harmony search algorithm in clustering applications. Appl. Sci. 10(11), 3827 (2020)
76. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997)
77. Awad, N.H., Ali, M.Z., Liang, J.J., Qu, B.Y., Suganthan, P.N.: Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Technical Report (2016)
78. García, S., Molina, D., Lozano, M., Herrera, F.: A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization. J. Heuristics 15(6), 617 (2008)
79. Luengo, J., García, S., Herrera, F.: A study on the use of statistical tests for experimentation with neural networks: Analysis of para-
Radwan, A.G., Ewees, A.A.: A Grunwald-Letnikov based manta metric test conditions and non-parametric tests. Expert Syst. Appl.
ray foraging optimizer for global optimization and image segmen- 36(4), 7798–7808 (2009)
tation. Eng. Appl. Artif. Intell. 98, 104105 (2021) 80. García, S., Fernández, A., Luengo, J., Herrera, F.: Advanced
62. Hassan, M.H., Houssein, E.H., Mahdy, M.A., Kamel, S.: An nonparametric tests for multiple comparisons in the design of
improved manta ray foraging optimizer for cost-effective emission experiments in computational intelligence and data mining: Exper-
dispatch problems. Eng. Appl. Artif. Intell. 100, 104155 (2021) imental analysis of power. Inf. Sci. 180(10), 2044–2064 (2010)
63. Gang, H., Li, M., Wang, X., Wei, G., Chang, C.-T.: An enhanced 81. Carrasco, J., García, S., Rueda, M.M., Das, S., Herrera, F.: Recent
manta ray foraging optimization algorithm for shape optimization trends in the use of statistical tests for comparing swarm and evo-
of complex CCG-Ball curves. Knowl.-Based Syst. 240, 108071 lutionary computing algorithms: Practical guidelines and a critical
(2022) review. Swarm Evol. Comput. 54, 100665 (2020)
64. Kahraman, H.T., Aras, S., Gedikli, E.: Fitness-distance balance 82. Mirjalili, S., Mohd Hashim, S.Z.: A new hybrid PSOGSA algorithm
(FDB): a new selection method for meta-heuristic search algo- for function optimization. In: 2010 International Conference on
rithms. Knowl.-Based Syst. 190, 105169 (2020) Computer and Information Application, pp. 374–377. IEEE (2010)
65. Alba, E., Dorronsoro, B.: The exploration/exploitation tradeoff in 83. Das, S., Suganthan, P.N.: Problem definitions and evaluation crite-
dynamic cellular genetic algorithms. IEEE Trans. Evol. Comput. ria for CEC 2011 competition on testing evolutionary algorithms
9(2), 126–142 (2005) on real world optimization problems. In: Jadavpur University,
Nanyang Technological University, Kolkata, pp. 341–359 (2010)

84. Wang, K., Wang, Y., Tao, S., Cai, Z., Lei, Z., Gao, S.: Spherical search algorithm with adaptive population control for global continuous optimization problems. Appl. Soft Comput. 132, 109845 (2023)
85. Yousri, D., AbdelAty, A.M., Al-qaness, M.A.A., Ewees, A.A., Radwan, A.G., Elaziz, M.A.: Discrete fractional-order Caputo method to overcome trapping in local optima: manta ray foraging optimizer as a case study. Expert Syst. Appl. 192, 116355 (2022)

Publisher’s Note Springer Nature remains neutral with regard to juris-


dictional claims in published maps and institutional affiliations.
