A Hierarchical Manta Ray Foraging Optimization with Weighted Fitness-Distance Balance Selection
https://doi.org/10.1007/s44196-023-00289-4
RESEARCH ARTICLE
Abstract
Manta ray foraging optimization (MRFO) tends to get trapped in local optima as it relies on the direction provided by the
previous individual and the best individual as guidance to search for the optimal solution. As enriching population diversity
can effectively solve this problem, in this paper, we introduce a hierarchical structure and weighted fitness-distance balance
selection to improve the population diversity of the algorithm. The hierarchical structure allows individuals in different
groups of the population to search for optimal solutions in different places, expanding the diversity of solutions. In MRFO,
greedy selection based solely on fitness can lead to local solutions. We innovatively incorporate a distance metric into the
selection strategy to increase selection diversity and find better solutions. A hierarchical manta ray foraging optimization with
weighted fitness-distance balance selection (HMRFO) is proposed. Experimental results on IEEE Congress on Evolutionary
Computation 2017 (CEC2017) functions show the effectiveness of the proposed method compared to seven competitive
algorithms, and the proposed method has little effect on the algorithm complexity of MRFO. The application of HMRFO
to optimize real-world problems with large dimensions has also obtained good results, and the computational time is very
short, making it a powerful alternative for very high-dimensional problems. Finally, the effectiveness of this method is further
verified by analyzing the population diversity of HMRFO.
Keywords Manta ray foraging optimization · Local optima · Population diversity · Hierarchical structure · Greedy selection ·
Weighted fitness-distance balance selection · Algorithm complexity
Abbreviations
MRFO   Manta ray foraging optimization
HMRFO  Hierarchical manta ray foraging optimization with weighted fitness-distance balance selection
EA     Evolution-based algorithms
SI     Swarm-based intelligence
PM     Physics-based methods
HB     Human-based behaviors
OBL    Opposition-based learning
FO     Fractional-order
GBO    Gradient-based optimizer
FW     Fitness-distance balance selection method with functional weight
CEC    Congress on Evolutionary Computation
NFEs   The maximum number of function evaluations

Corresponding author: Shangce Gao, gaosc@eng.u-toyama.ac.jp

Zhentao Tang, 18305263456@163.com
Kaiyu Wang, greskofairy@gmail.com
Sichen Tao, terrysc777@gmail.com
Yuki Todo, yktodo@ec.t.kanazawa-u.ac.jp
Rong-Long Wang, wang@u-fukui.ac.jp

1 Jiangsu Agri-animal Husbandry Vocational College, Taizhou 225300, China
2 Faculty of Engineering, University of Toyama, Toyama 930-8555, Japan
3 Faculty of Electrical, Information and Communication Engineering, Kanazawa University, Kanazawa 920-1192, Japan
4 Faculty of Engineering, University of Fukui, Fukui 910-8507, Japan
AWDO    Adaptive wind-driven optimization
CMAES   Covariance matrix adaptive evolutionary strategy
PSOGSA  A hybrid algorithm that combines particle swarm optimization and gravitational search algorithm
CPU     Central processing unit
HSP     Hydrothermal scheduling problem
DED     Dynamic economic dispatch problem
LSTPP   Large-scale transmission pricing problem
ELD     Static economic load dispatch problem
FCMRFO  Fractional-order Caputo manta ray foraging optimization

International Journal of Computational Intelligence Systems (2023) 16:114

1 Introduction

Meta-heuristic algorithms [37] are inspired by nature [38]. Based on their sources of inspiration, these algorithms can be classified into five categories [39, 40]: evolution-based algorithms (EA) [41], swarm-based intelligence (SI) [42], physics-based methods (PM) [43], human-based behaviors (HB) [44], and other optimization algorithms, as shown in Table 1. They mainly simulate physical or biological phenomena in nature and establish mathematical models to solve optimization problems [45, 46]. These algorithms possess the characteristics of self-organization, self-adaptation, and self-learning, and have been widely used in many fields, such as biology [47, 48], feature selection [49], optimization computing [50], image classification [51], and artificial intelligence [52, 53].

There are many improved meta-heuristic methods, such as incorporating competitive memory and a dynamic strategy into the mean shift algorithm to optimize dynamic multimodal functions [54], balancing exploration and exploitation by adding a synchronous–asynchronous strategy to the grey wolf optimizer [55], and reformulating the search factor of an algorithm [56]. Manta ray foraging optimization (MRFO) [11] is a recent swarm-based intelligence algorithm proposed in 2020. It has few adjustable parameters, is easy to implement, and can find solutions of specified precision at low computational cost [11]. Therefore, it has great research potential. In [57], the opposition-based learning (OBL) method was introduced into MRFO to achieve an effective structure for the optimization problem. In [58], both OBL and self-adaptive methods were applied to MRFO to optimize energy consumption in residential buildings. In [59], the Lévy flight mechanism and chaos theory were added to solve the problem of premature convergence of MRFO, and the improved algorithm was applied to the proton exchange membrane fuel cell system. In [60], hybrid simulated annealing and MRFO were utilized to optimize the parameters of the proportional–integral–derivative controller. In [61], a fractional-order (FO) mechanism was utilized in MRFO to escape from local optima, and the proposed algorithm was applied to image segmentation. In [62], a hybrid algorithm based on MRFO and the gradient-based optimizer (GBO) was adopted to solve economic emission dispatch problems. In [63], the global exploration ability of MRFO was enhanced by combining control parameter adjustment, wavelet mutation, and a quadratic interpolation strategy, and the improved algorithm was used to optimize complex curve shapes. From these references, it can be concluded that MRFO tends to converge prematurely and fall into local optimal solutions.

The search operators [64] of meta-heuristic algorithms include exploration and exploitation [65, 66]. Exploration uses randomly generated individuals to produce different solutions in the search space, which increases the diversity of the population and improves the quality of the solution. Exploitation conducts a local search around the best individual, relying on the advantages of the current optimal solution to accelerate the convergence of the algorithm. Meta-heuristic algorithms based on swarm intelligence are prone to falling into local optima and premature convergence. Solving this problem is our motivation for improving the algorithm.

To improve MRFO, it is necessary to maintain population diversity. Population diversity refers to having many non-neighboring individuals in the search space that can generate different solutions. By maintaining population diversity, individuals can be dispersed instead of gathered around the local optimal solution, thus escaping local optima and generating better solutions that improve solution quality.

This paper proposes adding a hierarchical structure [67] and a fitness-distance balance selection method with functional weight (FW) [68] to increase population diversity. To verify the performance of the proposed algorithm, we compared the hierarchical manta ray foraging optimization with weighted fitness-distance balance selection (HMRFO) with seven competitive algorithms on the IEEE CEC2017 benchmark functions. The results show that HMRFO has superior performance and fast convergence speed. Additionally, HMRFO has the same time complexity as MRFO. The performance of HMRFO on four large-dimensional real-world problems illustrates its practicality for such problems. By comparing the population diversity of HMRFO, MRFO, and the latest variant of MRFO on different types of functions from the IEEE CEC2017 benchmark suite, the effectiveness of the proposed method is visually verified.

The contributions of the present study can be summarized as follows: (1) The hierarchical structure and FW selection method are effective in improving population diversity and avoiding falling into local optima. (2) The hierarchical structure and FW selection method have little effect on the algorithm's complexity. (3) HMRFO demonstrates superior search performance, fast convergence speed, and high computational efficiency when dealing with large-dimensional problems, making it applicable to such problems.

The remaining sections of this paper are organized as follows: Section 2 introduces the original MRFO and some selection methods. Section 3 proposes HMRFO. Section 4 presents the experimental results and analysis. Section 5 discusses the parameters and population diversity of HMRFO. Section 6 concludes the paper and suggests future research directions.

2 Preliminaries

2.1 Manta Ray Foraging Optimization

A manta ray is defined as $X_i = \left(x_i^1, x_i^2, \dots, x_i^d\right)$, where $i \in \{1, 2, \dots, N\}$ and $x_i^d$ represents the position of the $i$th individual in the $d$th dimension. Here, $N$ is the total number of manta rays. MRFO establishes mathematical models based on the three foraging behaviors of the manta ray population. The specific behavioral models are as follows.

2.1.1 Chain Foraging

When manta rays find food, each manta ray follows the previous manta ray in a row and swims towards the food location. Therefore, except for the first manta ray, each manta ray moves not only towards the food but also towards the manta ray in front of it, forming a chain foraging behavior. The mathematical model is expressed as follows:

$$
x_i^d(t+1)=\begin{cases}
x_i^d(t)+r\cdot\left(x_{best}^d(t)-x_i^d(t)\right)+\alpha\cdot\left(x_{best}^d(t)-x_i^d(t)\right), & i=1\\
x_i^d(t)+r\cdot\left(x_{i-1}^d(t)-x_i^d(t)\right)+\alpha\cdot\left(x_{best}^d(t)-x_i^d(t)\right), & i=2,\dots,N
\end{cases}\tag{1}
$$

$$
\alpha = 2r\cdot\sqrt{\left|\log(r)\right|}\tag{2}
$$

where the best individual is defined as $X_{best} = \left(x_{best}^1, x_{best}^2, \dots, x_{best}^d\right)$, $x_{best}^d(t)$ indicates the position of the best individual in the $d$th dimension at time $t$, $r$ is a random vector within $[0, 1]$, and $\alpha$ is the weight coefficient.
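For illustration, Eqs. (1) and (2) translate into the following Python sketch of a single chain-foraging iteration. The function name and the sphere objective used in the demo are our own and not part of the original algorithm description:

```python
import numpy as np

def chain_foraging(pop, best, rng):
    """One chain-foraging step (Eqs. (1)-(2)): every manta ray moves
    towards the best individual (the food) and towards the ray ahead
    of it; the first ray follows the food only."""
    n, dim = pop.shape
    new_pop = np.empty_like(pop)
    for i in range(n):
        r = rng.random(dim)                            # random vector in [0, 1]
        alpha = 2.0 * r * np.sqrt(np.abs(np.log(r)))   # Eq. (2)
        front = best if i == 0 else pop[i - 1]
        new_pop[i] = pop[i] + r * (front - pop[i]) + alpha * (best - pop[i])
    return new_pop

rng = np.random.default_rng(0)
pop = rng.uniform(-100.0, 100.0, size=(5, 10))   # 5 manta rays, 10 dimensions
best = pop[np.argmin((pop ** 2).sum(axis=1))]    # best ray on a sphere objective
print(chain_foraging(pop, best, rng).shape)      # (5, 10)
```

Note how the weight coefficient is drawn independently per dimension, so each coordinate mixes the food direction and the chain direction with its own strength.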
2.1.2 Cyclone Foraging

Manta rays not only follow the manta ray in front of them but also move spirally towards the food. This foraging behavior is called cyclone foraging, and its mathematical equations are expressed as follows:

$$
x_i^d(t+1)=\begin{cases}
x_{best}^d+r\cdot\left(x_{best}^d(t)-x_i^d(t)\right)+\beta\cdot\left(x_{best}^d(t)-x_i^d(t)\right), & i=1\\
x_{best}^d+r\cdot\left(x_{i-1}^d(t)-x_i^d(t)\right)+\beta\cdot\left(x_{best}^d(t)-x_i^d(t)\right), & i=2,\dots,N
\end{cases}\tag{3}
$$

$$
\beta = 2e^{r_1\frac{T-t+1}{T}}\cdot\sin\left(2\pi r_1\right)\tag{4}
$$

where $r_1$ is a random number in the range $[0, 1]$, $T$ represents the maximum number of iterations, and $\beta$ denotes the weight coefficient.

This process iterates around the position of the best individual. To avoid getting trapped in local optima, a new position is randomly selected as the best position in the search space to update the next generation. This improves the algorithm's ability to explore the global search space and increases the diversity of solutions. The update equations are as follows:

$$
x_{rand}^d = Lb^d + r\cdot\left(Ub^d - Lb^d\right)\tag{5}
$$

$$
x_i^d(t+1)=\begin{cases}
x_{rand}^d+r\cdot\left(x_{rand}^d(t)-x_i^d(t)\right)+\beta\cdot\left(x_{rand}^d(t)-x_i^d(t)\right), & i=1\\
x_{rand}^d+r\cdot\left(x_{i-1}^d(t)-x_i^d(t)\right)+\beta\cdot\left(x_{rand}^d(t)-x_i^d(t)\right), & i=2,\dots,N
\end{cases}\tag{6}
$$

where $r$ is a random vector in $[0, 1]$, $x_{rand}^d$ represents a random position, and $Lb^d$ and $Ub^d$ denote the lower and upper limits of the $d$th dimension, respectively.
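The two cyclone-foraging variants differ only in the anchor of the spiral: the best individual (Eq. (3)) for exploitation, or a random position (Eqs. (5)-(6)) for exploration. A minimal Python sketch with illustrative names (the rule that decides when to explore is not shown here):

```python
import numpy as np

def cyclone_foraging(pop, best, t, T, lb, ub, rng, explore):
    """One cyclone-foraging step: spiral around a random anchor
    (Eqs. (5)-(6), exploration) or around the best individual
    (Eq. (3), exploitation); beta follows Eq. (4)."""
    n, dim = pop.shape
    anchor = lb + rng.random(dim) * (ub - lb) if explore else best  # Eq. (5)
    new_pop = np.empty_like(pop)
    for i in range(n):
        r = rng.random(dim)
        r1 = rng.random()
        beta = 2.0 * np.exp(r1 * (T - t + 1) / T) * np.sin(2 * np.pi * r1)  # Eq. (4)
        front = anchor if i == 0 else pop[i - 1]
        new_pop[i] = anchor + r * (front - pop[i]) + beta * (anchor - pop[i])
    return new_pop

rng = np.random.default_rng(1)
pop = rng.uniform(-100.0, 100.0, size=(5, 10))
best = pop[0]
out = cyclone_foraging(pop, best, t=1, T=100, lb=-100.0, ub=100.0, rng=rng, explore=True)
print(out.shape)   # (5, 10)
```

Because the exponent $(T-t+1)/T$ shrinks as $t$ grows, the spiral's reach contracts over the run, shifting the operator from exploration towards exploitation.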
2.1.3 Somersault Foraging

When manta rays approach food, they perform somersaults and circle around the food to pull it towards themselves. This foraging behavior takes the food (i.e., the best individual) as a pivot, and each individual swims back and forth around the pivot. In other words, the search space is limited between the current position and its symmetrical position with respect to the best individual. As the distance between the individual and the best individual decreases, the search space is also reduced, and the individual gradually approaches the best individual. Therefore, in the later stages of iteration, the range of somersault foraging is adaptively reduced. The expression is given below:

$$
x_i^d(t+1) = x_i^d(t) + S\cdot\left(r_2\, x_{best}^d - r_3\, x_i^d(t)\right),\quad i=1,\dots,N\tag{7}
$$

where $S$ is the somersault factor that decides the somersault range of manta rays ($S = 2$), and $r_2$ and $r_3$ are two random numbers in $[0, 1]$. In each iteration, MRFO refines the population through somersault foraging and ultimately returns the best solution.

MRFO has few adjustable parameters, low computational cost, and is less affected by the increase in problem size, making it a powerful alternative for solving very high-dimensional problems [69].
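Somersault foraging vectorizes naturally. The sketch below follows the expression above with $S = 2$ as in Table 2; the function name and demo data are illustrative:

```python
import numpy as np

def somersault_foraging(pop, best, rng, S=2.0):
    """Somersault step (Eq. (7)): each ray jumps towards the other side
    of the pivot (the best individual); the jump range shrinks as the
    rays approach the pivot."""
    n, dim = pop.shape
    r2 = rng.random((n, dim))
    r3 = rng.random((n, dim))
    return pop + S * (r2 * best - r3 * pop)

rng = np.random.default_rng(2)
pop = rng.uniform(-100.0, 100.0, size=(5, 10))
best = pop[0]
print(somersault_foraging(pop, best, rng).shape)   # (5, 10)
```

Setting `S=0.0` leaves the population unchanged, which makes the role of the somersault factor as a range controller easy to see.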
2.2 Selection Methods and Discussion

There are currently six basic selection methods, which are:
5. Fitness-distance balance selection [64] This is the latest selection method and has been successfully applied in several algorithms [64, 69, 71, 72]. The fitness-distance balance with functional weight (FW) selection adopted in this paper is an improved variant of this selection method [68].
6. Combined selection This is a combination of at least two of the other selection methods.

The traditional selection methods evaluate the quality of individuals based on the magnitude of their fitness, which can improve the convergence speed of the algorithm but also easily leads to local optima. Therefore, an increasing number of alternative selection methods are being used to replace traditional ones. In [73], maximum clique and edge centrality are used to select genes with maximum relevancy and minimum redundancy. In [74], users are clustered into different groups using graph clustering, food ingredients are
embedded using deep learning techniques, and, based on user and food information, the top few foods are recommended to the target customers. Adding other selection techniques and criteria to select more accurate individuals has become increasingly common. Similarly, the FW method in this paper also adds a distance metric. With only a single fitness evaluation, it is easy to find a local solution. The introduction of distance metrics enables the algorithm to escape local solutions and find better solutions with higher fitness values farther away, thus increasing the diversity of the solutions obtained.

Table 2 Parameter settings of HMRFO and other algorithms

HMRFO: S = 2, μ = 3/4, σ = 1/12, PR1 = 0.8, PR2 = 0.6
MRFO: S = 2
AWDO: u_max = 0.3
PSOGSA: G0 = 100, α = 20, w1(t) = 0.5, w2(t) = 1.5
RSA: α = 0.1, β = 0.005
GWO: α linearly decreases from 2 to 0
WOA: α linearly decreases from 2 to 0
BSO: m = 5, p5a = 0.2, p6b = 0.8, p6biii = 0.4, p6c = 0.5

Table 3 Friedman ranks of HMRFO and seven competitive algorithms on IEEE CEC2017

Algorithm   D = 10    D = 30    D = 50    D = 100
HMRFO       1.2586    1.2069    1.3793    1.3793
MRFO        2.2931    2.6552    2.6207    2.4483
AWDO        6.0690    6.5172    6.6897    6.6897
PSOGSA      4.1552    4.4483    4.6897    5.2069
RSA         7.8276    7.9310    7.8621    7.7586
GWO         3.9310    3.2069    3.2414    3.3793
WOA         5.8621    5.5862    5.1724    4.9655
BSO         4.6034    4.4483    4.3448    4.1724

3 Proposed HMRFO

3.1 Motivation

According to the No-Free-Lunch theorem [75, 76], no single algorithm can find the best global optimal solution for all optimization problems. Similarly, MRFO has its own limitations in optimization. Since the rear manta rays are influenced by most of the front manta rays, the swarm is prone to being attracted by local points, leading to falling into local optima. To overcome this problem, enriching the population diversity is an effective solution. Therefore, this paper introduces a new selection method, fitness-distance balance with functional weight (FW) [68], and employs a hierarchical structure [67] to update the population, resulting in the proposed algorithm, HMRFO.

3.2 Description of HMRFO

In MRFO, the fitness value of each individual is calculated using the following equations:

$$
G_i = f\left(x_i^1, x_i^2, \dots, x_i^d\right)\tag{8}
$$

$$
F_i=\begin{cases}1-\mathrm{norm}G_i, & \text{if the goal is minimization}\\ \mathrm{norm}G_i, & \text{if the goal is maximization}\end{cases}\quad \forall\, i=1,\dots,N\tag{9}
$$

where $G_i$ represents the objective function value of the $i$th individual, $\mathrm{norm}G_i$ is the normalized value of $G_i$, and $F_i$ is the fitness value of the $i$th individual.

In MRFO, only the position of the best individual based on fitness is obtained, which can easily lead to falling into a local solution. To address this issue, the fitness-distance balance with functional weight (FW) selection method is added to HMRFO. This selection method considers both fitness and the distance between each individual and the best individual, which effectively maintains population diversity, increases the number of solutions, and improves solution quality. As a result, the algorithm can escape local optima and enhance its exploration ability. The mathematical equations for this selection method are as follows:

$$
D_i=\sqrt{\left(x_i^1-x_{best}^1\right)^2+\left(x_i^2-x_{best}^2\right)^2+\cdots+\left(x_i^d-x_{best}^d\right)^2},\quad \forall\, i=1,\dots,N,\ i\neq best\tag{10}
$$

$$
S_i=\omega\cdot F_i+(1-\omega)\cdot D_i,\quad \forall\, i=1,\dots,N\tag{11}
$$

$$
\omega\sim N\!\left(\mu,\sigma^2\right)\tag{12}
$$

where $D_i$ represents the distance between the $i$th individual and the best individual, $F_i$ is the fitness value, $S_i$ denotes the score of the $i$th individual, and $\omega$ is the functional weight, randomly generated from a Gaussian distribution with $\mu = 3/4$ and $\sigma = 1/12$, following [68].

After obtaining the scores, the population is sorted based on $S_i$. The higher the score, the greater the contribution of the individual to the optimization problem, the higher its rank, and the more likely it is to be selected as the optimal individual. This approach overcomes the disadvantage of relying solely on fitness to obtain the optimal individual, improves population diversity, and prevents the algorithm from being trapped in local optima. The effectiveness of this method has been demonstrated in [64, 68, 69, 71].
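A minimal Python sketch of the FW scoring in Eqs. (8)-(12). The min-max normalization of $G_i$ and the scaling of $D_i$ to a range comparable with $F_i$ are our assumptions for a self-contained demo; the text above specifies only that $\mathrm{norm}G_i$ is a normalized objective value:

```python
import numpy as np

def fw_scores(pop, obj_vals, best_idx, rng, mu=3/4, sigma=1/12):
    """FW selection scores (Eqs. (8)-(12)) for a minimization problem:
    S_i = w*F_i + (1-w)*D_i with a Gaussian functional weight w."""
    g = np.asarray(obj_vals, dtype=float)
    norm_g = (g - g.min()) / (g.max() - g.min() + 1e-12)  # assumed min-max normalization
    fit = 1.0 - norm_g                                    # Eq. (9), minimization
    dist = np.linalg.norm(pop - pop[best_idx], axis=1)    # Eq. (10)
    dist = dist / (dist.max() + 1e-12)                    # assumed scaling to [0, 1]
    w = rng.normal(mu, sigma)                             # Eq. (12)
    return w * fit + (1.0 - w) * dist                     # Eq. (11)

rng = np.random.default_rng(3)
pop = rng.uniform(-10.0, 10.0, size=(6, 4))
obj = (pop ** 2).sum(axis=1)             # sphere objective, minimization
best = int(np.argmin(obj))
scores = fw_scores(pop, obj, best, rng)
scores[best] = -np.inf                   # Eq. (10) excludes i = best
guide = int(np.argmax(scores))           # highest-scoring individual guides the update
print(guide != best)                     # True
```

The distance term rewards individuals that are both fit and far from the current best, which is exactly how the method keeps the swarm from collapsing onto one point.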
MRFO updates the population only based on $X_{best}$ in somersault foraging. To enhance population diversity, a hierarchical structure is added to the somersault foraging of MRFO. This hierarchical structure is divided into three layers, as described below. The overall procedure is shown in Fig. 3, and Algorithm 1 presents the pseudocode of HMRFO.

Table 4 Experimental and statistical results of HMRFO and seven competitive algorithms on IEEE CEC2017 benchmark functions with 10 dimensions
Table 5 Experimental and statistical results of HMRFO and seven competitive algorithms on IEEE CEC2017 benchmark functions with 30 dimensions

Table 6 Experimental and statistical results of HMRFO and seven competitive algorithms on IEEE CEC2017 benchmark functions with 50 dimensions
4.2 Performance Evaluation Criteria

The performance of HMRFO is evaluated using the following four criteria:

(1) Mean and standard deviation (std) of the optimization errors between the obtained optimal values and the known real optimal values. Since all objective functions are minimization problems, the minimum mean values (i.e., the best values) are highlighted in boldface.
(2) Non-parametric statistical tests, including the Wilcoxon rank-sum test [78–80], which compares the obtained p-value with the significance level α = 0.05 between the proposed algorithm and a compared algorithm. A p-value ≤ 0.05 indicates a significant difference between the two algorithms. The symbol "+" denotes that the proposed algorithm is superior to its competitor, while the symbol "−" denotes that the proposed algorithm is significantly worse than its competitor. A p-value > 0.05 indicates no significant difference between the two algorithms, which is recorded as the symbol "≈". "W/T/L" indicates how many times the proposed algorithm has won, tied, and lost against its competitor, respectively. The Friedman test [81], another non-parametric statistical test, is also used, with the mean values of the optimization errors as test data. The smaller the Friedman rank, the better the performance of the algorithm. The minimum value is highlighted in boldface.
(3) Box-and-whisker diagrams, which show the robustness and accuracy of the algorithm's solutions. The blue box's lower edge, red line, and upper edge indicate the first quartile, the second quartile (median), and the third quartile, respectively. The lines above and below the blue box indicate the maximum and minimum non-outliers, respectively. The red symbol "+" displays outliers. The height of the box represents the solution's fluctuation, and the median represents the average level of the solution.
(4) Convergence graphs, which intuitively show the convergence speed and accuracy of the algorithm and indicate whether the improved algorithm jumps out of local solutions.

4.3 Comparison with Competitive Algorithms

To evaluate the effectiveness and search performance of HMRFO, seven competitive algorithms are compared: MRFO [11], adaptive wind-driven optimization (AWDO) [70], a hybrid algorithm that combines particle swarm optimization and gravitational search algorithm (PSOGSA) [82], reptile search algorithm (RSA) [12], grey wolf optimizer (GWO) [13], whale optimization algorithm (WOA) [17], and brain storm optimization (BSO) [15].

Table 7 Experimental and statistical results of HMRFO and seven competitive algorithms on IEEE CEC2017 benchmark functions with 100 dimensions
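The Friedman ranking reported in Table 3 can be reproduced by ranking the algorithms' mean errors per function (1 = best) and averaging the ranks across functions. A small self-contained sketch with made-up error values (not data from the paper):

```python
import numpy as np

def friedman_ranks(mean_errors):
    """mean_errors: (n_functions, n_algorithms) array of mean errors.
    Returns each algorithm's average rank (1 = best; ties share the
    average of their ranks)."""
    n_f, n_a = mean_errors.shape
    ranks = np.empty((n_f, n_a))
    for f in range(n_f):
        row = mean_errors[f]
        order = row.argsort()
        r = np.empty(n_a)
        r[order] = np.arange(1, n_a + 1)
        for v in np.unique(row):          # average the ranks of tied entries
            mask = row == v
            r[mask] = r[mask].mean()
        ranks[f] = r
    return ranks.mean(axis=0)

errors = np.array([[0.1, 0.5, 0.3],
                   [0.2, 0.4, 0.9],
                   [0.3, 0.1, 0.2]])      # 3 functions x 3 algorithms (made up)
print(friedman_ranks(errors))             # [1.66666667 2.         2.33333333]
```

In Table 3, this averaging is done over all CEC2017 functions for each dimension, which is why the best algorithm's rank can be close to, but above, 1.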
Fig. 4 Box-and-whisker diagrams of errors obtained by eight algorithms on IEEE CEC2017 functions (D = 10, 30)

Fig. 5 Box-and-whisker diagrams of errors obtained by eight algorithms on IEEE CEC2017 functions (D = 50, 100)

Fig. 6 Convergence graphs of average optimizations obtained by eight algorithms on IEEE CEC2017 functions (D = 10, 30)

Fig. 7 Convergence graphs of average optimizations obtained by eight algorithms on IEEE CEC2017 functions (D = 50, 100)
Table 9 Experimental results (Mean, Std, Best, and Worst) of HMRFO and MRFO on real-world optimization problems with large dimensions (HSP and DED)

Table 10 CPU running time consumed by HMRFO and MRFO on real-world optimization problems with large dimensions: HSP (Dimension = 96), DED (Dimension = 120), LSTPP (Dimension = 126), ELD (Dimension = 140)

Table 11 Experimental and statistical results of HMRFO with PR1 and PR2 on IEEE CEC2017 benchmark functions with 30 dimensions, where PR1 = 0.8, PR2 = 0.6 is the main algorithm in statistical results (W/T/L); the compared combinations include PR1 = 0.1, PR2 = 0.2; PR1 = 0.2, PR2 = 0.1; PR1 = 0.2, PR2 = 0.5; and PR1 = 0.5, PR2 = 0.2
In the hierarchical structure of HMRFO, there are two layers that randomly select an individual from the FW to update the population. The selection of an individual from the FW is crucial for the overall performance of the algorithm. Specifically, the values of PR1 and PR2 can significantly affect the performance of HMRFO, and the optimal combination of these parameters can maximize its performance. As PR1 and PR2 both range from 0 to 1, eleven combinations of PR1 and PR2 are tested on the IEEE CEC2017 benchmark functions with 30 dimensions, displayed in Tables 11 and 12, where PR1 = 0.8, PR2 = 0.6 is the main algorithm in the statistical results (W/T/L). From the tables, according to W/T/L, HMRFO with PR1 = 0.8 and PR2 = 0.6 is better than the other ten combinations, and it has fourteen minimum mean values, the most of all parameter combinations. Therefore, HMRFO with PR1 = 0.8 and PR2 = 0.6 performs the best.

5.2 Analysis for Individuals per Layer of HMRFO

In the hierarchical structure of HMRFO introduced earlier, the first layer (L1) contains 60% of individuals, the second layer (L2) contains 30% of individuals, and the third layer (L3) contains 10% of individuals. In this section, we analyze the reasons behind this allocation. First, we set the second layer (L2) to be 20%, 30%, and 40%, and the third layer (L3) to be 5%, 10%, and 15%, respectively, so that L1 = 100% − L2 − L3, resulting in a total of nine combinations tested on the IEEE CEC2017 benchmark functions with 30 dimensions, presented in Tables 13 and 14.
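Under this allocation, a population can be partitioned into the three layers by sorting on the FW score. The sketch below assumes higher-scoring individuals fill the first layer, which is our reading of the ranking described in Sect. 3.2; the function name is illustrative:

```python
import numpy as np

def partition_layers(scores, fractions=(0.6, 0.3, 0.1)):
    """Split individual indices into three layers by descending score:
    L1 gets the top 60%, L2 the next 30%, L3 the remaining 10%."""
    order = np.argsort(scores)[::-1]       # best score first
    n = len(scores)
    n1 = int(round(fractions[0] * n))
    n2 = int(round(fractions[1] * n))
    return order[:n1], order[n1:n1 + n2], order[n1 + n2:]

l1, l2, l3 = partition_layers(np.arange(10.0))
print(len(l1), len(l2), len(l3))           # 6 3 1
```

Changing `fractions` reproduces the alternative allocations compared in Tables 13 and 14, e.g. `(0.75, 0.2, 0.05)`.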
Table 13 Experimental and statistical results of HMRFO with L1, L2 and L3 on IEEE CEC2017 benchmark functions with 30 dimensions, where L1 = 60%, L2 = 30% and L3 = 10% is the main algorithm in statistical results (W/T/L). The first column pair gives the Mean and Std of the main setting; each following group gives the Mean, Std, and Wilcoxon symbol for L1 = 75%, L2 = 20%, L3 = 5%; L1 = 70%, L2 = 20%, L3 = 10%; L1 = 65%, L2 = 20%, L3 = 15%; and L1 = 65%, L2 = 30%, L3 = 5%, respectively.

F1  2.602E+03 3.305E+03 | 2.920E+03 3.473E+03 ≈ | 3.197E+03 4.225E+03 ≈ | 3.276E+03 3.671E+03 ≈ | 2.585E+03 3.123E+03 ≈
F3  5.474E+01 4.114E+01 | 4.888E+01 6.354E+01 − | 8.377E+01 6.731E+01 + | 5.501E+01 4.339E+01 ≈ | 4.106E+01 5.449E+01 −
F4  3.092E+01 3.623E+01 | 3.956E+01 3.779E+01 ≈ | 5.684E+01 3.657E+01 + | 3.786E+01 3.842E+01 ≈ | 4.065E+01 3.671E+01 ≈
F5  6.126E+01 1.628E+01 | 7.669E+01 2.107E+01 + | 5.785E+01 1.283E+01 ≈ | 6.596E+01 2.381E+01 ≈ | 7.811E+01 2.661E+01 +
F6  2.106E-01 2.939E-01 | 5.930E-01 7.819E-01 + | 3.286E-01 6.273E-01 ≈ | 4.189E-01 5.528E-01 + | 4.019E-01 1.244E+00 ≈
F7  1.001E+02 2.681E+01 | 1.116E+02 2.981E+01 + | 9.590E+01 2.008E+01 ≈ | 1.043E+02 2.315E+01 ≈ | 1.076E+02 2.134E+01 +
F8  6.723E+01 2.349E+01 | 8.005E+01 2.812E+01 + | 6.219E+01 2.142E+01 ≈ | 6.772E+01 2.478E+01 ≈ | 7.642E+01 2.365E+01 +
F9  4.386E+01 4.830E+01 | 7.456E+01 9.387E+01 ≈ | 4.425E+01 7.384E+01 ≈ | 5.284E+01 5.385E+01 ≈ | 8.721E+01 1.308E+02 ≈
F10 3.267E+03 5.664E+02 | 3.364E+03 6.156E+02 ≈ | 3.403E+03 6.650E+02 ≈ | 3.510E+03 6.165E+02 + | 3.653E+03 5.328E+02 +
F11 5.858E+01 2.750E+01 | 7.072E+01 3.444E+01 + | 7.149E+01 3.256E+01 + | 6.081E+01 2.967E+01 ≈ | 6.975E+01 3.149E+01 +
F12 4.639E+04 2.344E+04 | 4.700E+04 2.177E+04 ≈ | 4.734E+04 3.081E+04 ≈ | 5.107E+04 2.793E+04 ≈ | 4.442E+04 3.261E+04 ≈
F13 8.655E+03 8.503E+03 | 1.266E+04 1.331E+04 ≈ | 1.082E+04 9.692E+03 ≈ | 1.101E+04 1.124E+04 ≈ | 1.286E+04 1.224E+04 ≈
F14 1.377E+03 1.495E+03 | 1.936E+03 2.010E+03 + | 1.369E+03 1.687E+03 ≈ | 1.419E+03 1.263E+03 ≈ | 1.856E+03 2.396E+03 ≈
F15 2.870E+03 3.218E+03
F29 6.762E+02 1.605E+02 | 7.055E+02 1.386E+02 ≈ | 6.668E+02 1.671E+02 ≈ | 6.778E+02 1.396E+02 ≈ | 7.629E+02 1.987E+02 +
F30 4.920E+03 2.032E+03 | 4.897E+03 1.914E+03 ≈ | 4.640E+03 1.633E+03 ≈ | 4.514E+03 1.405E+03 ≈ | 4.439E+03 1.628E+03 ≈
According to the W/T/L results in the tables, HMRFO with L1 = 60%, L2 = 30%, and L3 = 10% performs the best.

5.3 Analysis of Population Diversity

The FW selection method and hierarchical structure presented in this paper can enhance the population diversity of the MRFO algorithm. To better visualize the population diversity of HMRFO and MRFO, the following equations, taken from [84], are used to calculate it:

$$
\mathrm{Div}(x)=\frac{1}{N}\sum_{i=1}^{N}\frac{\left\|x_i-\bar{x}\right\|}{\max_{1\le i,j\le N}\left\|x_i-x_j\right\|}\tag{15}
$$

$$
\bar{x}=\frac{1}{N}\sum_{i=1}^{N}x_i\tag{16}
$$

where $N$ is the population size and $\bar{x}$ is the mean point.
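Eqs. (15)-(16) translate directly into a few lines of Python; the sketch below is illustrative, and the 2-D demo points are our own:

```python
import numpy as np

def population_diversity(pop):
    """Normalized diversity (Eqs. (15)-(16)): mean distance to the mean
    point, scaled by the largest pairwise distance in the population."""
    mean_pt = pop.mean(axis=0)                                            # Eq. (16)
    pairwise = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    return np.linalg.norm(pop - mean_pt, axis=1).mean() / pairwise.max()  # Eq. (15)

square = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
print(population_diversity(square))   # 0.5 (corners of a square)
```

Because the measure is scaled by the largest pairwise distance, it stays comparable across functions with very different search ranges, which is what makes the curves in Fig. 9 directly comparable.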
FCMRFO, but still higher than that of MRFO. Thus, HMRFO viduals in each layer can be dynamically adjusted based
can perform effective search and avoid being trapped in local on some evaluation metric.
optima. (2) The FW selection method used in this paper could be
6 Conclusion and Future Work

In this paper, we propose a hierarchical manta ray foraging optimization with weighted fitness-distance balance selection (HMRFO) by combining a hierarchical structure with the latest improved selection method. The proposed method aims to increase population diversity to solve the problem of MRFO with premature convergence and trapping in local optima. To verify the performance of HMRFO, we compare it with MRFO and six state-of-the-art algorithms on IEEE CEC2017 functions. The experimental results demonstrate that HMRFO has superior performance and can find better solutions, escape local optima, and converge fast, indicating that the proposed method effectively increases population diversity. In terms of algorithm complexity, HMRFO and MRFO have similar computational time, suggesting that the added improvement has little effect on the algorithm complexity of MRFO. We also apply HMRFO and MRFO to optimize large-dimensional real-world problems and find that HMRFO has good practicality, especially for large-dimensional problems, as it takes less time and has low computation cost. This is valuable information for studying large-dimensional optimization problems. Finally, the curves of population diversity of HMRFO and MRFO on four different types of problems from IEEE CEC2017 further confirm that the improved method in this paper can successfully enrich population diversity.

After conducting experiments, we have discovered the following two advantages of HMRFO:

(1) The incorporation of FW and the hierarchical structure significantly enhances the population diversity of HMRFO. This enables the algorithm to escape local optima, avoid premature convergence, and improve the quality of solutions by considering different solutions in the search space.
(2) HMRFO has a fast convergence rate and low computational complexity, making it a cost-effective approach to optimizing large-dimensional problems.

In future work, the following studies could be considered:

(1) The number of individuals in each layer of the hierarchical structure can be dynamically adjusted based on some evaluation metric.
(2) The FW selection method used in this paper could be applied to improve other meta-heuristic algorithms.
(3) HMRFO could be applied to tasks such as solar photovoltaic parameter estimation, dendritic neural models, and multi-objective optimization.

Acknowledgements This work was mainly supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant JP22H03643, Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) under Grant JPMJSP2145, and JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation under Grant JPMJFS2115.

Author Contributions ZT: methodology, software, writing - original draft preparation. KW: methodology, software. ST: methodology, software. YT: methodology, software, writing - reviewing and editing. RW: writing - reviewing and editing. SG: conceptualization, methodology, software, supervision, writing - review and editing. All authors read and approved the final manuscript.

Funding This research was partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant JP22H03643, Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) under Grant JPMJSP2145, and JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation under Grant JPMJFS2115.

Data Availability Related data and material can be found at https://toyamaailab.github.io.

Declarations

Conflict of Interest The authors declare no conflict of interest.

Ethics Approval and Consent to Participate Not applicable.

Consent for Publication Not applicable.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.