Received: 10 February 2021 | Revised: 12 April 2021 | Accepted: 28 May 2021

DOI: 10.1002/int.22535

RESEARCH ARTICLE

Artificial gorilla troops optimizer: A new nature‐inspired metaheuristic algorithm for global optimization problems

Benyamin Abdollahzadeh1 | Farhad Soleimanian Gharehchopogh1 | Seyedali Mirjalili2,3

1 Department of Computer Engineering, Urmia Branch, Islamic Azad University, Urmia, Iran
2 Centre for Artificial Intelligence Research and Optimisation, Torrens University Australia, Fortitude Valley, Brisbane, Queensland, Australia
3 YFL (Yonsei Frontier Lab), Yonsei University, Seoul, Korea

Correspondence
Farhad Soleimanian Gharehchopogh, Department of Computer Engineering, Urmia Branch, Islamic Azad University, Urmia 44867‐57159, Iran.
Email: bonab.farhad@gmail.com

Abstract
Metaheuristics play a critical role in solving optimization problems, and most of them have been inspired by the collective intelligence of natural organisms in nature. This paper proposes a new metaheuristic algorithm inspired by gorilla troops' social intelligence in nature, called Artificial Gorilla Troops Optimizer (GTO). In this algorithm, gorillas' collective life is mathematically formulated, and new mechanisms are designed to perform exploration and exploitation. To evaluate the GTO, we apply it to 52 standard benchmark functions and seven engineering problems. Friedman's test and the Wilcoxon rank‐sum test are used to statistically compare the proposed method with several existing metaheuristics. The results demonstrate that the GTO outperforms the comparative algorithms on most benchmark functions, particularly on high‐dimensional problems, and can provide superior results compared with other metaheuristics.

KEYWORDS
gorilla troops optimizer, metaheuristic algorithms, optimization




1 | INTRODUCTION

Optimization denotes finding the best possible or desirable solution(s) to a problem, a task commonly encountered in a wide range of fields. Optimization algorithms may show two types of behavior when optimizing problems: deterministic and stochastic.1 Deterministic algorithms typically require complicated calculations, which makes them less practical and applicable. The performance of such methods also degrades substantially in proportion to the size of an optimization problem. Stochastic or randomized algorithms show stochastic behavior and make educated decisions to search "wise regions" of a search space in an optimization problem. This allows them to better cope with the difficulties of challenging optimization problems.
Using nature‐inspired, stochastic algorithms with efficient computations rather than deterministic methods has therefore been suggested.1,2 Heuristic and metaheuristic algorithms are approximate methods that solve optimization problems by seeking near‐optimal solutions in a reasonable time with appropriate computational resources. However, these algorithms do not guarantee finding the best possible solution in a single run; this stems from their stochastic search mechanism.
Metaheuristics have become very popular in engineering applications3,4 for several reasons. First, they rely on relatively simple concepts and are easy to implement. Second, they outperform local search algorithms. Third, they can be used in a wide range of applications. Finally, they do not require derivative information. Nature‐inspired metaheuristic algorithms solve optimization problems by imitating biological or physical phenomena. These algorithms can be categorized into five main categories: nature‐based, physics‐based, swarm‐based, human‐based, and animal‐based methods. All metaheuristic optimization algorithms share these advantages despite their differences. Figure 1 shows the classification of metaheuristic algorithms.
Studies have demonstrated that most of the suggested metaheuristic algorithms have been inspired by animals' search and hunting behavior in nature. However, there is still no work that mimics gorilla troops' lifestyle to design and develop a metaheuristic algorithm. This motivated our attempt to provide a mathematical model of gorillas' behavior and to propose the Gorilla Troops Optimizer (GTO). We first investigate the unique aspects of gorilla troops and then provide several mathematical models that underpin the proposed GTO.

FIGURE 1 Classification of metaheuristic algorithms [Color figure can be viewed at wileyonlinelibrary.com]

In the rest of this study, Section 2 provides a literature review of nature‐inspired metaheuristic algorithms. Section 3 describes the biological principles and social lives of gorillas. Section 4 proposes the GTO algorithm, including its theory, flowchart, and formulation. Section 5 tests the GTO's performance using standard benchmark functions and presents the results in separate graphical diagrams. Conclusions and future works are given in Section 6.

2 | RELATED WORKS

There are various categories of metaheuristic algorithms in the literature. Despite the different categories, one could argue that most of these algorithms have been inspired by animals' collective behavior and hunting processes in nature. This section explores nature‐inspired metaheuristic algorithms and reviews the basic algorithms proposed to solve optimization problems.
The Genetic Algorithm (GA) is the first and most popular method for solving optimization problems; Holland proposed it in 1992, inspired by Darwinian evolutionary concepts. This algorithm, with its two operators of recombination and mutation, has been widely used in optimization problems and is seen as one of the successful algorithms,5 with various improved and hybrid versions already presented.6 In 2001, the Harmony Search (HS) algorithm, derived from the way musicians search for the best state of harmony, was introduced by Geem et al.7 After its initial version was introduced, it was applied to many optimization problems, mainly because of its simplicity. In the following, we describe several new metaheuristic algorithms inspired by nature. A new HS algorithm for discrete optimization was developed to study truss structures.8 This hybridized HS algorithm follows a new approach to improvisation: although it retains the harmony memory consideration and pitch adjustment functions, its randomization function is replaced with a neighborhood search and a global‐best particle swarm search. The efficiency of the HS algorithm was tested on six truss structure optimization problems under different loading conditions. The HS algorithm usually outperforms other optimization methods in terms of solution quality and convergence ability. It provides a good balance between exploration and exploitation, converges faster than other methods, and achieves much better results in almost all design samples while requiring less structural analysis.
Particle swarm optimization (PSO)9 was introduced in 1995 based on animals' swarming
behavior in nature, such as birds and fish. Since then, PSO has become the focus of attention,
forming an exciting research topic called swarm intelligence. It has been used in almost all
optimization areas, including computational intelligence and designing/planning applications.
In 2005, Karaboga proposed an algorithm based on the bees' collective behavior called the
artificial bee colony (ABC)10 algorithm. The ABC algorithm simulates employed bees, onlooker
bees, and scout bees and provides mathematical formulas for each step. This algorithm, like any
metaheuristic algorithm, had its weaknesses, and improved versions were introduced later. In 2008, Yang introduced an algorithm inspired by the luminosity of fireflies.11 In this algorithm, the light intensity and attractiveness of each firefly are formulated so that each firefly is compared with the others in terms of brightness, with dimmer fireflies moving towards brighter ones. Fireflies also sometimes fly randomly, which later led to an improved version of the algorithm.
Yang introduced an algorithm inspired by bat behavior in 2010,12 which is based on bats' echolocation at different pulse rates and loudness. In Reference [13], an optimization algorithm based on gravity and mass interactions, called the gravitational search algorithm (GSA), is proposed. The search agents are a set of masses that interact with each other based on the Newtonian laws of gravity and motion. Agents are seen as objects, and their masses measure their quality. All of these objects attract each other through gravitational forces, which causes all objects to move towards the heavier objects. Hence, the masses cooperate through gravity as a direct form of communication. Heavy masses, corresponding to good solutions, move more slowly than lighter ones, which ensures the exploitation step of the algorithm. In GSA, each mass (agent) is characterized by four attributes: position, inertial mass, active gravitational mass, and passive gravitational mass. The position of a mass corresponds to a solution of the problem, and its gravitational and inertial masses are determined using an appropriate fitness function.
In 2013, a new metaheuristic algorithm for optimization problems called the hunter–seeker algorithm was proposed.14 In this algorithm, randomly generated solutions serve as hunters, and seekers are assigned depending on their performance in the objective function; this performance can be determined numerically and is called the survival rate. The Spider Monkey Optimization (SMO) algorithm15 is proposed for numerical optimization, modeling the foraging behavior of spider monkeys. Spider monkeys are classified as animals with a fission–fusion social structure: they split from larger groups into smaller ones when food is scarce, and merge back again when food is abundant.
In 2014, Oveis Abedinia et al. proposed a new metaheuristic algorithm based on Shark Smell Optimization (SSO).16 This algorithm is based on the ability of the shark, a superior hunter in nature, to hunt prey using its sense of smell and its movement towards the smell source. The shark's various behaviors in the search area, that is, seawater, are mathematically modeled in the proposed optimization method. The Symbiotic Organisms Search (SOS) algorithm is one of the newest methods to solve optimization problems based on interacting organisms in nature. This algorithm considers three types of interaction found in nature, mutualism, commensalism, and parasitism, which may benefit or harm the organisms involved.17 Later, the chaos‐integrated SOS (CSOS) algorithm was developed for global optimization in Reference [18]. A chaotic local search is embedded in the proposed algorithm, strengthening the search process around the best solution as the most promising area of the search space. It increases the probability of retaining a better solution and ultimately improves the quality of the solution. Besides, the proposed algorithm performs better on multidimensional test functions, implying a proper balance between exploration and exploitation, so CSOS can be considered a promising optimization tool for solving complex nonlinear engineering optimization problems.
The Moth–flame optimization (MFO)19 algorithm is a new exploratory model inspired by the transverse orientation of moths. Moths fly at night at a constant angle to the moon, which is a very effective mechanism for traveling in a straight line over long distances. However, these insects get trapped in a useless and deadly spiral path around artificial lights, and this behavior is mathematically modeled for optimization. In the proposed MFO algorithm, it is assumed that the solutions to the problem are moths and the problem variables are the positions of the moths in the search space. Therefore, the moths can fly in one‐, two‐, three‐, or higher‐dimensional spaces by changing their position vectors.

A new metaheuristic algorithm based on the hierarchical behavior of gray wolves, named Gray Wolf Optimization (GWO), was introduced in 2014.3 In this algorithm, the ordinary wolves, named omega, follow three wolves: alpha, beta, and delta. In the simulation, the three best solutions are the alpha, beta, and delta wolves, respectively, and the remaining solutions are considered ordinary wolves.
A metaheuristic algorithm based on the life of butterflies, Artificial Butterfly Optimization (ABO), was proposed in 2017; it balances exploration and exploitation of the search space using two groups of butterflies.20 The authors of this algorithm provided two versions, ABO1 and ABO2, using three different flight types. A modified ABC algorithm, named IGAL‐ABC, was introduced based on an improved global approach and a limited adaptive strategy for global optimization.21 The exploration and exploitation capacities of the ABC algorithm were balanced and improved in this search process. The proposed algorithm was tested on single‐ and multimodal standard functions, and the comparison results suggested that it had better efficiency on multidimensional standard functions, generating better convergence and optimization features.
The Sine–Cosine Algorithm (SCA) for solving optimization problems was introduced in Reference [22]; it generates several initial random candidate solutions and seeks the best solution using a mathematical model based on sine and cosine functions. The efficiency of the proposed method was examined in three phases. The findings revealed that the proposed algorithm can explore different areas of the search space, avoid local optima, move towards the global optimum, exploit promising areas of the search space, and converge faster than other algorithms. This algorithm proves very useful in solving real problems with unknown and constrained search spaces.
The Teaching Learning Based Optimization (TLBO)26 algorithm was developed to solve optimization problems, inspired by the interaction between teachers and learners; TLBO uses a new inertia weight strategy in the learning phase to increase learning capacity. A random topological order is adopted using this weight so that the learners improve their search ability. This adaptive teaching–learning algorithm has been used for gene selection by proposing newly updated mechanisms to solve optimization problems in real‐world applications.
The Farmland Fertility Algorithm (FFA) was developed to solve continuous optimization problems.2 It is inspired by the division of farmland into several sections, with the solutions of each section being optimized efficiently using both internal and external memory. Simulation findings reveal that farmland fertility often performs better than other metaheuristic algorithms. In Reference [23], a self‐adaptive fruit fly optimization algorithm was developed for global optimization and, in particular, for high‐dimensional optimization problems. The proposed self‐adaptive method significantly improves the capability of the fruit flies to search in promising areas, depending on the search process and the problem, and is seen as a strong and good algorithm for global optimization.
We just reviewed only a few major metaheuristic algorithms; however, there are other meta-
heuristic optimization algorithms, such as Golden Ball (GB) algorithm,24 Cuckoo Search (CS),25
Simulated Annealing (SA) algorithm,26 Gravitational Optimization,27 Biogeography‐Based Optimi-
zation (BBO),28 Galaxy‐based Search Algorithm (GbSA),29 Group Counseling Optimizer (GCO),30
Clonal Selection Algorithm (CSA),31 Bird Mating Optimizer (BMO),32 Social Spider Optimization
(SSO),33 Imperialist Competitive Algorithm (ICA),34 Intelligent Water Drops (IWD) algorithm,35
Colliding Bodies Optimization (CBO),36 League Championship Algorithm (LCA),37 Differential
Evolution (DE),38 Charged System Search (CSS) algorithm,39 Ray Optimization algorithm (RO),40

Water Evaporation Optimization (WEO) algorithm,41 Glowworm Swarm Optimization (GSO),42


Dolphin echolocation optimization (DEO),43 and Water Cycle Algorithm (WCA).44
Generally, the No Free Lunch (NFL) theorems45,46 state that, averaged over all possible problems, all nonresampling optimization algorithms perform equally well.47 This result also states that all black‐box search and optimization algorithms have the same average performance over all possible objective functions in a fixed search space. In other words, there is no single algorithm that can solve all real‐world problems accurately and well.48 For this reason, under NFL theory, an algorithm has to be matched to the problem at hand. The NFL literature also describes scenarios in which an existing algorithm can still be better than random search: knowledge about the subset of problems being solved can be precious, and simple implementation with good practical performance is one of the most important reasons for preferring an algorithm. These are settings where the NFL result does not directly apply. However, when and why can researchers ignore the NFL? A researcher may intend to claim that an algorithm performs well on average over some set of problems, that is, the original NFL setting, a set closed under permutation (sharpened NFL, SNFL), a focused set (FNFL), or a restricted metric set (RMNFL). Under such a setting, one cannot simply ignore the NFL, and no improvement over random search can be guaranteed. A researcher may nevertheless try to create a super algorithm that is better than random search on all real‐world problems; this could yield promising results, and the NFL results cannot prevent such attempts. In the end, however, the hope of having a single super algorithm that is better than random search on all problems is neutralized by the NFL.49

3 | GORILLA TROOPS

Gorillas, like other apes, have feelings, make and use tools, establish strong family bonds, and
think about their past and future.50,51 Some researchers argue that gorillas also have inner
feelings or religious inclinations.50,51 On average, gorillas perform such activities as taking rest,
traveling, and eating during the day. Gorilla diets vary from species to species. Mountain
gorillas are primarily herbivores and feed on substances, such as leaves, stems, kernels, and
twigs, while fruits account for a tiny portion of their meals.52–54
Gorillas live in groups called troops, consisting of an adult male, or silverback, gorilla (see Figure 2), several adult female gorillas, and their offspring.55–57 However, there are also groups that include several male gorillas.56 A silverback is usually over 12 years old and takes its name from the silver‐colored hair that grows on its back during puberty.

FIGURE 2 Silverback gorilla [Color figure can be viewed at wileyonlinelibrary.com]



Moreover, silverbacks have enormous canines that appear during puberty. Both male and female gorillas tend to emigrate from their natal group,55–58 usually moving on to new groups.57 Also, male gorillas tend to leave their group and form new groups by attracting female gorillas who have migrated. However, male gorillas sometimes remain in the group they were born in and stay subordinate to the silverback. If the silverback gorilla dies, these gorillas may take over the group or fight for dominance of it.56,59,60
On the other hand, male gorillas engage in fierce competition for adult females. They get involved in violent fights with other males to form and expand their own groups of females, and these fights may sometimes last for several days.61–66 In groups with only one male, if the silverback dies, the female gorillas and their offspring disperse and seek new groups for themselves.58,67 Without a silverback to protect them, baby gorillas may fall victim to infanticide, so the females attempt to join new groups as a solution to this problem.58,67,68
The silverback is the focus of the group.69 It makes all the decisions, mediates fights, determines group movements, directs the gorillas to food sources, and takes responsibility for the group's safety and well‐being. Young male gorillas, called blackbacks, follow the silverback and serve as backup protectors for the group; blackbacks are between 8 and 12 years old and have no silver hair on their backs.69 The bond between the silverback and the female gorillas forms the heart of gorillas' social life. This bond is maintained by grooming and staying close to each other.57,70 Female gorillas form strong relationships with male gorillas to secure mating opportunities and protection from predators.57,70,71 Male gorillas have weak social relationships with each other, especially in groups containing several males with a clear hierarchy, and there is fierce rivalry in these groups to find a mate.
Although male gorillas in all‐male groups tend to form friendly relationships by playing, grooming, staying together, and sometimes engaging in homosexual interactions,58,59,67–71 extreme violence in stable groups is rare. However, when two mountain gorilla groups meet each other, the silverbacks can inflict deep wounds on their rivals' bodies with their canines.68 Male gorillas are not diligent in caring for newborns but play a role in socializing them with other gorillas.72 Silverback gorillas have a strong supportive relationship with the newborns and protect them from intragroup aggression.72 Gorillas are known to have 25 distinct vocalizations, many of which are used mainly for group communication. Sounds classified as grunts and barks are most often heard when traveling and indicate the presence of group members.73 These vocalizations may also be used during social interactions when discipline is required. Screams and roars are warning signs, most often produced by the silverback gorillas.
This paper provides a new algorithm called GTO, using the gorilla troop's behavior to solve
optimization problems; we will explain this in detail in Section 4.

3.1 | Artificial gorilla troops optimization algorithm (GTO)

In this section, inspired by gorillas' group behaviors, we propose a new metaheuristic algorithm called GTO, in which specific mathematical mechanisms are presented to fully describe the two phases of exploration and exploitation. In the GTO algorithm, five different operators, simulated from gorilla behaviors, are used for the optimization operations (exploration and exploitation).

FIGURE 3 Different phases of Gorilla Troops Optimizer [Color figure can be viewed at wileyonlinelibrary.com]

Three different operators are used in the exploration phase. The first operator, migration to an unknown place, increases the exploration of the GTO. The second operator, movement towards the other gorillas, increases the balance between exploration and exploitation. The third operator, migration towards a known location, significantly increases the capability of the GTO to search different optimization spaces. On the other hand, two operators are used in the exploitation phase, which significantly increase the search performance in exploitation. In GTO, a distinct method is used for switching between the exploration and exploitation phases, as shown in Figure 3, which gives an overview of the optimization procedure of the GTO algorithm.
GTO generally follows several rules to search for a solution:

1. The GTO algorithm's optimization space contains three types of solutions: X is the gorilla position vector, GX is the candidate gorilla position vector created in each phase, which replaces the current solution if it performs better, and the silverback is the best solution found in each iteration.
2. There is only one silverback in the entire population, regardless of the number of search agents selected for the optimization operation.
3. The three types of solutions, X, GX, and silverback, simulate the gorillas' social life in nature accurately.
4. Gorillas can increase their power by finding better food sources or by positioning themselves in a fair and strong group. In GTO, candidate solutions known as GX are created in each iteration. If the new solution (GX) is better than the current solution (X), it replaces it; otherwise, the current solution remains in memory.
5. The tendency towards communal life prevents gorillas from living individually. Thus they look for food as a group and continue to live under a silverback leader, who makes all the group decisions. In our formulation, assuming that the worst solution in the population is the weakest member of the gorilla group, the gorillas attempt to move away from the worst solution and get closer to the best solution (silverback), which improves all the gorillas' positions.

Considering these basic concepts of gorilla group life in finding food and living together, and given GTO's capabilities in many optimization problems, the algorithm can be widely used. For better understanding, the GTO flowchart is shown in Figure 4, and each step of the algorithm's formulation is fully introduced below. The GTO algorithm uses various mechanisms for the optimization operations, which are described in the following subsections.

3.1.1 | Exploration phase

In this subsection, the mechanisms applied in the exploration phase of GTO are examined. Considering the nature of gorillas' group life, they live in groups under the domination of a silverback, who is obeyed, but there are times when gorillas leave their group. Upon leaving the group, a gorilla migrates to different places in nature, which it may or may not have visited in the past. In the GTO algorithm, all gorillas are seen as candidate solutions, and the best candidate solution at each stage of the optimization operation is considered the silverback gorilla. We used three different mechanisms for the exploration phase: migration to an unknown location, migration towards a known location, and movement towards other gorillas. Each of these three mechanisms is selected according to a general procedure.
A parameter called p is used to select the mechanism of migration to an unknown location: this first mechanism is selected when rand < p. Otherwise, if rand ≥ 0.5, the mechanism of movement towards other gorillas is selected, and if rand < 0.5, the mechanism of migration towards a known location is selected. Each of these mechanisms gives a distinct ability to the GTO algorithm: the first mechanism makes it possible for the algorithm to monitor the entire problem space well, the second mechanism improves the GTO exploration performance, and the third mechanism reinforces the ability of the GTO to escape from local optima. Equation (1) is used to simulate the three mechanisms of the exploration phase.

$$GX(t+1)=\begin{cases}(UB-LB)\times r_{1}+LB, & \text{rand}<p,\\[2pt] (r_{2}-C)\times X_{r}(t)+L\times H, & \text{rand}\ge 0.5,\\[2pt] X(i)-L\times\big(L\times\big(X(t)-GX_{r}(t)\big)+r_{3}\times\big(X(t)-GX_{r}(t)\big)\big), & \text{rand}<0.5.\end{cases}\tag{1}$$

In Equation (1), GX(t + 1) is the candidate gorilla position vector in the next iteration, and X(t) is the current gorilla position vector. Moreover, r1, r2, r3, and rand are random values in the range 0 to 1, updated in each iteration. p is a parameter that must be given a value in the range 0–1 before the optimization operation; it determines the probability of selecting the migration‐to‐an‐unknown‐location mechanism. UB and LB represent the upper and lower bounds of the variables, respectively. Xr is one gorilla randomly selected from the entire population, and GXr is one of the candidate gorilla position vectors, randomly selected from the positions updated in each phase. Finally, C, L, and H are calculated using Equations (2), (4), and (5), respectively.

FIGURE 4 Flowchart of Gorilla Troops Optimizer [Color figure can be viewed at wileyonlinelibrary.com]

$$C = F \times \left(1 - \frac{It}{MaxIt}\right),\tag{2}$$

$$F = \cos(2 \times r_{4}) + 1,\tag{3}$$

$$L = C \times l.\tag{4}$$

In Equation (2), It is the current iteration number, MaxIt is the total number of iterations of the optimization operation, and F is calculated using Equation (3). In Equation (3), cos denotes the cosine function and r4 is a random value in the range 0 to 1, updated in each iteration. According to Figure 5, Equation (2) generates values with sudden changes over a large interval in the early stages of the optimization operation, and this interval of change decreases in the final stages. L is calculated using Equation (4), where l is a random value in the range −1 to 1. Equation (4) is used to simulate the silverback's leadership. In the real world, the silverback gorilla may not make the right decisions to find food or manage the group in the early stages of its leadership, due to a lack of experience; however, it gains enough experience over time and achieves good stability in its leadership. The changes in the values generated in two independent runs using Equations (2) and (4) are illustrated in Figure 5.
Also, in Equation (1), H is calculated using Equation (5), and Z in Equation (5) is calculated using Equation (6), where Z is a vector of random values in the problem dimensions, drawn from the range [−C, C]:

H = Z × X(t),  (5)

Z = [−C, C].  (6)

Figure 6 illustrates how the position vectors of the search agents change in the exploration phase. At the end of the exploration phase, a group formation operation is performed: the cost of all GX solutions is calculated, and if GX(t) < X(t), the GX(t) solution replaces the X(t) solution. The best solution generated in this phase is then also considered the silverback.
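To make the piecewise update concrete, the following is a minimal NumPy sketch of Equations (1)–(6); it is our reading of the formulas above rather than the authors' reference implementation, and the function name, array shapes, and the way fresh random numbers are drawn per gorilla are assumptions.

```python
import numpy as np

def exploration_step(X, GX, it, max_it, lb, ub, p=0.03):
    """One exploration update for all N gorillas (sketch of Eq. (1) with Eqs. (2)-(6)).
    X, GX: (N, D) arrays of current and candidate positions; lb, ub: variable bounds."""
    N, D = X.shape
    F = np.cos(2 * np.random.rand()) + 1          # Eq. (3)
    C = F * (1 - it / max_it)                     # Eq. (2)
    L = C * np.random.uniform(-1, 1)              # Eq. (4)
    new_GX = np.empty_like(X)
    for i in range(N):
        if np.random.rand() < p:
            # migration to an unknown place: re-sample uniformly inside the bounds
            new_GX[i] = (ub - lb) * np.random.rand(D) + lb
        elif np.random.rand() >= 0.5:
            # movement towards other gorillas
            Z = np.random.uniform(-C, C, D)       # Eq. (6)
            H = Z * X[i]                          # Eq. (5)
            Xr = X[np.random.randint(N)]          # randomly chosen gorilla
            new_GX[i] = (np.random.rand() - C) * Xr + L * H
        else:
            # migration towards a known location
            GXr = GX[np.random.randint(N)]        # randomly chosen candidate
            r3 = np.random.rand()
            new_GX[i] = X[i] - L * (L * (X[i] - GXr) + r3 * (X[i] - GXr))
    return new_GX
```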

FIGURE 5 Value of C, L during two runs and 500 iterations [Color figure can be viewed at wileyonlinelibrary.com]

FIGURE 6 Example of overall vectors in the case of an exploration phase [Color figure can be viewed at wileyonlinelibrary.com]

3.1.2 | Exploitation phase

In the exploitation phase of the GTO algorithm, two behaviors, following the silverback and competition for adult females, are applied. The silverback gorilla leads the group, makes all the decisions, determines the group's movements, and directs the gorillas towards food sources. It is also responsible for the group's safety and well‐being, and all gorillas in the group follow its decisions. On the other hand, the silverback may grow old and weak and eventually die; the blackback in the group may then become the group leader, or other male gorillas may fight the silverback and dominate the group. To select between the two mechanisms used in the exploitation phase, following the silverback or competition for adult females, the value of C in Equation (2) is compared with a parameter W, which must be set before the optimization operation: if C ≥ W, the follow‐the‐silverback mechanism is selected, and if C < W, the competition‐for‐adult‐females mechanism is selected.

Follow the silverback

When the group is newly formed, the silverback is young and healthy, and the other male gorillas in the group are also young and follow the silverback well. They follow all of the silverback's orders, moving to various areas in search of food sources, and each member's movement can affect the movement of the whole group. This strategy is selected when C ≥ W, and Equation (7) is used to simulate this behavior; Figure 7 illustrates this mechanism.

$$GX(t+1) = L \times M \times \big(X(t) - X_{silverback}\big) + X(t),\tag{7}$$

$$M = \left(\left|\frac{1}{N}\sum_{i=1}^{N} GX_{i}(t)\right|^{g}\right)^{\frac{1}{g}},\tag{8}$$

$$g = 2^{L}.\tag{9}$$

In Equation (7), X(t) is the gorilla position vector and Xsilverback is the silverback gorilla position vector (best solution). Moreover, L is calculated using Equation (4) and M using Equation (8). In Equation (8), GXi(t) is the position vector of each candidate gorilla in iteration t, and N is the total number of gorillas. g is estimated using Equation (9), in which L is again calculated using Equation (4).

FIGURE 7 Example of overall vectors following the silverback in 2D and 3D space [Color figure can be viewed at wileyonlinelibrary.com]
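A minimal NumPy sketch of Equations (7)–(9) follows; the function name and the vectorised update of all gorillas at once are our assumptions, and L is the scalar produced by Equation (4).

```python
import numpy as np

def follow_silverback(X, GX, silverback, L):
    """Exploitation update when following the silverback (sketch of Eqs. (7)-(9)).
    X, GX: (N, D) positions and candidates; silverback: (D,) best solution."""
    g = 2.0 ** L                                          # Eq. (9)
    M = (np.abs(GX.mean(axis=0)) ** g) ** (1.0 / g)       # Eq. (8)
    return L * M * (X - silverback) + X                   # Eq. (7) for every gorilla
```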

Competition for adult females

If C < W, the second mechanism is selected for the exploitation phase. When young gorillas reach puberty, they fight violently with other male gorillas over the choice of adult females in order to expand their own groups. These fights can last for several days and may involve other group members. Equation (10) is used to simulate this behavior.

$$GX(i) = X_{silverback} - \big(X_{silverback} \times Q - X(t) \times Q\big) \times A,\tag{10}$$

$$Q = 2 \times r_{5} - 1,\tag{11}$$

$$A = \beta \times E,\tag{12}$$

$$E=\begin{cases}N_{1}, & \text{rand}\ge 0.5,\\ N_{2}, & \text{rand}<0.5.\end{cases}\tag{13}$$

In Equation (10), Xsilverback is the silverback position vector (best solution) and X(t) is the current gorilla position vector. Q simulates the impact force and is calculated using Equation (11), in which r5 is a random value in the range 0 to 1. A is a coefficient vector that determines the degree of violence in conflicts and is calculated using Equation (12). In Equation (12), β is a parameter to be given a value before the optimization operation, and E is computed using Equation (13); it is used to simulate the effect of violence on the dimensions of the solutions. If rand ≥ 0.5, E is a vector of random values drawn from a normal distribution with the same dimensions as the problem, but if rand < 0.5, E is a single random value drawn from a normal distribution. Here, rand is a random value between 0 and 1. Figure 8 indicates how the solutions change.

FIGURE 8 Example of overall vectors in the competition for adult females [Color figure can be viewed at wileyonlinelibrary.com]
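The competition update of Equations (10)–(13) can be sketched as below; the function name and the per‐gorilla loop are assumptions, and β defaults to the value 3 later reported in Table 1.

```python
import numpy as np

def compete_for_females(X, silverback, beta=3.0):
    """Exploitation update for the competition mechanism (sketch of Eqs. (10)-(13))."""
    N, D = X.shape
    new_GX = np.empty_like(X)
    for i in range(N):
        Q = 2 * np.random.rand() - 1                      # Eq. (11): impact force
        if np.random.rand() >= 0.5:
            E = np.random.randn(D)                        # Eq. (13): normal vector
        else:
            E = np.random.randn()                         # Eq. (13): single normal value
        A = beta * E                                      # Eq. (12): degree of violence
        new_GX[i] = silverback - (silverback * Q - X[i] * Q) * A   # Eq. (10)
    return new_GX
```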
At the end of the exploitation phase, a group formation operation is conducted, in which the cost of all GX solutions is calculated; if GX(t) < X(t), the GX(t) solution replaces the X(t) solution, and the best solution obtained among the whole population is taken as the silverback. The pseudocode of the GTO is described in Algorithm 1.

Algorithm 1. Pseudocode of GTO


% GTO setting
Inputs: The population size N and maximum number of iterations T and parameters β and p
Outputs: The location of Gorilla and its fitness value
% initialization
Initialize the random population Xi (i = 1, 2, …, N )
Calculate the fitness values of Gorilla
% Main Loop
while (stopping condition is not met) do
Update the C using Equation (2)
Update the L using Equation (4)
% Exploration phase
for (each Gorilla ( Xi )) do
Update the location Gorilla using Equation (1)
end for
% Create group
Calculate the fitness values of Gorilla
if GX is better than X, replace them


Set Xsilverback as the location of silverback (best location)


% Exploitation phase
for (each Gorilla ( Xi )) do
if (|C| ≥ W) then
Update the location Gorilla using Equation (7)
Else
Update the location Gorilla using Equation (10)
End if
end for
% Create group
Calculate the fitness values of Gorilla
if New Solutions are better than previous solutions, replace them
Set Xsilverback as the location of silverback (best location)
end while
Return XBestGorilla , bestFitness
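Putting the phases together, a compact driver mirroring Algorithm 1 might look as follows. It reuses the phase sketches given earlier (exploration_step, follow_silverback, compete_for_females); the clipping of candidates to the bounds and other details beyond the pseudocode are our assumptions.

```python
import numpy as np

def gto(fitness, dim, lb, ub, N=30, T=500, p=0.03, beta=3.0, W=0.8):
    """Minimal GTO driver following Algorithm 1 (a sketch, not the reference code)."""
    X = lb + (ub - lb) * np.random.rand(N, dim)           # random initial troop
    fit = np.apply_along_axis(fitness, 1, X)
    GX = X.copy()
    silverback = X[fit.argmin()].copy()
    for it in range(T):
        C = (np.cos(2 * np.random.rand()) + 1) * (1 - it / T)   # Eqs. (2)-(3)
        L = C * np.random.uniform(-1, 1)                         # Eq. (4)
        # exploration phase followed by the greedy "create group" step
        GX = np.clip(exploration_step(X, GX, it, T, lb, ub, p), lb, ub)
        gfit = np.apply_along_axis(fitness, 1, GX)
        better = gfit < fit
        X[better], fit[better] = GX[better], gfit[better]
        silverback = X[fit.argmin()].copy()
        # exploitation phase followed by another greedy step
        if abs(C) >= W:
            GX = follow_silverback(X, GX, silverback, L)
        else:
            GX = compete_for_females(X, silverback, beta)
        GX = np.clip(GX, lb, ub)
        gfit = np.apply_along_axis(fitness, 1, GX)
        better = gfit < fit
        X[better], fit[better] = GX[better], gfit[better]
        silverback = X[fit.argmin()].copy()
    return silverback, fit.min()

# Example: minimising the sphere function (F1) in 30 dimensions.
best_x, best_f = gto(lambda x: np.sum(x ** 2), dim=30, lb=-100, ub=100)
```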

3.2 | Computational complexity

The GTO algorithm's computational complexity depends on three main processes: initialization, fitness evaluation, and the updating of the gorillas. Because there are N gorillas, the computational complexity of the initialization process is O(N). On the other hand, the complexity of the update mechanism is based on the two phases of exploration and exploitation. In each phase, an updating operation is performed on all solutions in the optimization space and the best solution is retained, which amounts to (O(T × N) + O(T × N × D)) × 2, where T represents the maximum number of iterations and D is the dimension of the problem. Therefore, the GTO algorithm's overall computational complexity is O(N × (1 + T + TD) × 2).
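As a rough, concrete illustration of this accounting, the snippet below counts fitness evaluations rather than asymptotic cost: N evaluations at initialisation plus 2N per iteration (one "create group" step after each phase). The helper name and the evaluation‐count view are our own simplification.

```python
def gto_fitness_evaluations(N: int, T: int) -> int:
    """N initial evaluations plus 2*N per iteration (exploration + exploitation)."""
    return N + 2 * N * T

# With the settings used in Section 4.2 (N = 30, T = 500) this gives 30,030 evaluations.
print(gto_fitness_evaluations(30, 500))
```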

4 | RESULTS AND DISCUSSION

4.1 | Benchmark set and compared algorithms

A set of diverse benchmark functions77,78 is used, comprising three groups: unimodal (UM), multimodal (MM), and composite (CM) functions. The UM benchmark functions (F1–F7), which have only one global optimum, reveal the exploitative capacity (intensification) of each optimization algorithm. The exploration capacity (diversification) of the optimization algorithms is revealed using the MM benchmark functions (F8–F23). The mathematical formulas and properties of the UM and MM standard functions are given in Tables A1–A3 in Appendix A.
For the third group, the benchmark functions (F24–F52) available in the CEC2017 competition are used, involving hybrid composite, rotated, and shifted MM cases. These benchmarks are used in many articles to appraise the performance of optimization algorithms because solving them requires a balance between exploration and exploitation and the ability to escape local optima. Details of these benchmark functions are given in Table A4 in Appendix A.

FIGURE 9 Parameter space representation of the benchmark functions C6, C8, and C9 [Color figure can be viewed at wileyonlinelibrary.com]
The results and performance of GTO are compared with other optimization algorithms: PSO,28 GWO,3 WOA,4 MFO,19 TSA,74 MVO,75 SCA,22 and GSA.13 This comparison is based on the best solution, the worst solution, the standard deviation (STD), and the average (AVG) of the results. The MFO, WOA, GWO, MVO, SCA, and TSA optimization algorithms are selected as powerful and novel optimization algorithms, while the PSO and GSA algorithms are chosen because they are widely used in the optimization context. Also, for better evaluation, as in Derrac et al.,76 a Wilcoxon statistical test with a significance level of 5% was performed to detect significant differences between the results of GTO and the other optimization methods (Figure 9).
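As an illustration of how such a pairwise test can be run, the short snippet below applies SciPy's Wilcoxon rank‐sum test to two sets of per‐run results; the arrays here are synthetic placeholders, not the recorded values from the tables.

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical best-fitness values from 30 independent runs of two algorithms
# on one benchmark function (placeholders, not the paper's data).
gto_runs = np.random.rand(30) * 1e-3
pso_runs = np.random.rand(30) * 1e-1

stat, p_value = ranksums(gto_runs, pso_runs)
print(f"rank-sum statistic = {stat:.3f}, p-value = {p_value:.4g}")
if p_value < 0.05:
    print("The difference is significant at the 5% level.")
```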

4.2 | Parameter settings

The GTO has been implemented in Matlab 9.2 (R2017a) and executed on a laptop running Windows 10 Enterprise 64‐bit with an Intel Core i7‐4510U 2.6 GHz processor and 8.00 GB of RAM. All tests performed to check the performance of the GTO were carried out using a population of 30 over a maximum of 500 iterations. All results are reported as the average of 30 independent runs and are compared using the obtained results. The settings of PSO,28 GWO,3 WOA,4 MFO,19 TSA,74 MVO,75 SCA,22 and GSA13 are taken from the settings presented in their original works. These algorithms cover both recently proposed techniques, such as SCA, MVO, TSA, MFO, WOA, and GWO, and the most utilized optimizers in the field, such as the PSO and GSA algorithms.
We set the three parameters of the GTO algorithm following the method presented in Reference [77], which has been used for parameter tuning in many studies.78–80 Each parameter is set according to three levels of values: low, medium, and high. A total of 3³ = 27 models are generated from the combinations of the parameter levels. For this evaluation, the benchmark functions F1–F13 with 30 dimensions were tested with the different parameter combinations, and the best‐performing combination, shown in Table 1, is used in the subsequent evaluations.
The parameter settings of the optimization algorithms are shown in Table 1.

TABLE 1 Parameter settings of optimization algorithms for comparison and evaluation of the GTO
Algorithm Parameter Value
GTO β 3

W 0.8
p 0.03
GWO Convergence constant a [2 0]
TSA Parameter Pmin 1
Parameter Pmax 4
PSO Inertia factor 0.3
vMax 6
c1 2
c2 2
MVO Existence probability [0.2 1]
Traveling distance rate [0.6 1]
MFO Convergence constant a [−2 −1]
Spiral factor b 1
WOA Convergence constant a [2, 0]
Spiral factor b 1
SCA A 2
GSA α 20
G0 100
Power of R 1
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO,
moth–flame optimization; MVO, multiverse optimizer; PSO, particle swarm optimization; SCA, sine–cosine algorithm; TSA,
tunicate swarm algorithm; WOA, whale optimization algorithm.

4.3 | Qualitative results of GTO

Evaluations of qualitative results of GTO nine standard unimodal and multimodal standard
functions are used. This evaluation is also done using four different criteria: search history,
convergence behavior, the average fitness of the population, and the trajectory of the first
Gorilla. The search history diagram illustrates some locations in the optimization space that
artificial gorillas would visit. The convergence behavior diagram demonstrates the best sil-
verback gorilla's fitness value as the best solution during the optimization process. The po-
pulation diagram's average fitness diagram shows how the average fitness of the whole
population changes in various optimization stages. Finally, the first gorilla diagram trajectory
shows how the first gorilla changes' first variable during the optimization process. Studying the
search history diagrams in Figure 10, one would argue that GTO has shown a similar pattern in
the wake of various optimization problems. According to the diagrams demonstrated in
Figure 10, the gorillas were found to have performed exploration operations in different

FIGURE 10 Qualitative results for the F1, F3, F4, F7, F8, F9, F10, F12, and F13 functions [Color figure can be viewed at wileyonlinelibrary.com]

Overall, the GTO algorithm covers almost the entire optimization space well and has an adequate capacity in the exploration and exploitation phases. The convergence diagrams illustrate the silverback gorilla's fitness at different optimization stages, as shown in Figure 10; a rapid and continuous descending pattern can be seen in all convergence diagrams, so it is easy to conclude that GTO has a rapid convergence trend.
According to the average‐fitness‐of‐the‐population criterion, which shows the average fitness of the entire population at various optimization stages and is shown in Figure 10, GTO finds various solutions by performing random movements of the search agents. Moreover, these solutions have progressively lower fitness values, resulting in descending trends. Looking at the diagrams in Figure 10, one can also conclude that GTO has an excellent ability to improve all gorillas in at least half of the iterations, and all the diagrams show a rapidly descending curve. As the iterations increase, the range of fitness changes decreases, but small changes still exist because of GTO's exploration at different optimization stages. Such movements result from the GTO focusing more on promising areas during the optimization process. Finally, the trajectory diagram of the first gorilla shows its behavior while searching different areas of the optimization space; the first gorilla is selected as representative of the other gorillas. This diagram provides a good understanding of the gorillas' behavior in exploring new solutions in the search space. Looking at the trajectory diagrams, it is evident that the first gorilla goes through sudden changes in the early stages, with minor changes and fluctuations in the later optimization stages. According to Reference [81] by Van den Bergh and Engelbrecht, such behavior causes a P‐metaheuristic to finally converge at one point and perform exploitation operations around that area.
On the other hand, fluctuations are still seen at different optimization stages. Such movements also give GTO an excellent ability to escape from local optima, because there is no guarantee that the search agents are located next to the global optimum in the premature optimization stages. In Figure 10, the first gorilla makes sudden and significant movements in the early stages of optimization, but in the later stages it only makes sudden movements at some stages, ranging over half of the search space. This feature indicates GTO's excellent exploration ability at different optimization stages. Also, in the later stages of the search operation, the range of the fluctuations changes only slightly and decreases. This indicates that GTO finally stabilizes the movement of the first gorilla, showing that GTO seeks to exploit promising areas.

4.4 | Quantitative results and discussion

In this subsection, the performance of GTO is compared with other optimization algorithms. Various benchmark functions are used: F1–F13 with 30, 100, 500, and 1000 dimensions are applied to evaluate the scalability of GTO, and the MM and CM benchmark functions (F14–F52) are used to appraise the GTO performance further. The scalability test aims to examine the GTO's ability to solve large problems and to determine how GTO performs, and with what solution quality, when facing problems of different dimensions. It also reveals whether GTO can retain its search features when encountering large‐scale problems. This test was evaluated using the results obtained from 30 independent runs of 500 iterations each and four different criteria: AVG, Worst, Best, and STD. Figure 11 and Tables 2–5 illustrate the results of the scalability test, while Tables 6 and 7 illustrate the GTO performance and the results obtained on the other benchmark functions (F14–F52) compared with the other optimizers.
According to the results in Tables 2–5 and Figure 11, GTO has an excellent ability to achieve good results and acceptable performance in all dimensions; moreover, GTO maintains an acceptable level of search capability as the dimensions increase. It also has a significant advantage over the other optimizers on F1–F13 in all compared dimensions, because the other optimizers significantly lose performance as the dimensions increase. On the basis of the scalability test, it is clear that GTO has an excellent ability to balance exploration and exploitation when facing large‐scale problems.
Table 2 shows that GTO obtains better and more significant results on the benchmark functions (F1–F13) than most of the optimization algorithms under comparison. Table 3 shows the results for the benchmark functions with 100 dimensions; in this test, GTO also produced significant performance compared with most of the compared optimization algorithms and obtained solutions of reasonable quality. On the other hand, Table 4 illustrates the results of evaluating F1–F13 with 500 dimensions; according to Table 4, GTO can find reasonable‐quality solutions and shows significant performance compared with the competing optimizers. Finally, Table 5 demonstrates the test results of GTO and the other optimizers on the benchmark functions (F1–F13) with 1000 dimensions. According to these results, GTO is still highly capable of finding high‐quality solutions and achieves results almost similar to those achieved at lower dimensions. GTO is then evaluated using the fixed‐dimension functions (F14–F23) and compared with the competing optimizers.
The results in Table 6 show that GTO also performs well on the fixed‐dimension benchmark functions (F14–F23), indicating a very competitive or better performance for GTO in solving some MM test cases. On the benchmark functions F16–F17, the PSO, GSA, and MFO algorithms were found to be very competitive, and these optimization algorithms had an excellent ability to obtain high‐quality results. On the other hand, in the GSA benchmark function,

FIGURE 11 Results of GTO scalability evaluation against other optimization algorithms on the F1–F13 functions in different dimensions. GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO, particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm [Color figure can be viewed at wileyonlinelibrary.com]
TABLE 2 Results of benchmark functions (F1–F13), with 30 dimensions
No. GTO TSA GWO SCA MVO PSO WOA GSA MFO
F1 Best 0.0000E+00 1.4368E−23 3.3533E−29 3.9005E−03 8.3647E−01 4.6711E−09 1.5095E−89 1.1440E−16 9.5597E−01
Worst 0.0000E+00 1.5016E−20 1.6202E−26 1.4236E+02 1.8630E+00 6.3732E−05 1.2874E−71 4.2911E−16 1.0005E+04
Mean 0.0000E+00 2.3061E−21 1.7091E−27 1.4523E+01 1.1070E+00 4.7058E−06 5.5002E−73 2.3532E−16 6.7289E+02
STD 0.0000E+00 4.3365E−21 3.2809E−27 3.0107E+01 2.7820E−01 1.1926E−05 2.3708E−72 9.1916E−17 2.5361E+03
F2 Best 8.0420E−220 5.3213E−15 1.3643E−17 2.7766E−04 5.4848E−01 3.7884E−06 1.0074E−58 5.4248E−08 5.2060E−01
Worst 2.3838E−202 7.8343E−13 4.3580E−16 2.1884E−01 2.3333E+00 1.8728E−02 2.6524E−49 2.4036E+00 9.0003E+01
Mean 1.0211E−203 1.1565E−13 1.2365E−16 2.5643E−02 8.4484E−01 3.6477E−03 9.6725E−51 1.2500E−01 3.1246E+01
STD 0.0000E+00 1.6225E−13 1.0931E−16 4.5382E−02 4.3868E−01 4.9417E−03 4.8401E−50 4.8061E−01 1.9907E+01
F3 Best 0.0000E+00 4.9700E−09 3.8126E−08 6.6862E+02 6.5143E+01 1.2301E+01 1.1829E+04 4.5802E+02 3.1181E+03
Worst 0.0000E+00 3.3065E−03 1.9122E−04 1.9865E+04 4.0876E+02 4.3861E+03 8.0571E+04 1.6428E+03 4.6219E+04
Mean 0.0000E+00 2.6060E−04 1.8580E−05 8.8051E+03 2.1883E+02 7.9186E+02 4.3601E+04 9.8887E+02 2.2994E+04
STD 0.0000E+00 6.5035E−04 4.1272E−05 5.2670E+03 9.9175E+01 1.1616E+03 1.4048E+04 3.2842E+02 1.2222E+04
F4 Best 2.1404E−210 1.5499E−02 8.1138E−08 1.1681E+01 1.1783E+00 2.7605E−01 8.0883E−03 2.5876E+00 4.6091E+01
Worst 2.7688E−196 1.9585E+00 7.9629E−06 7.4047E+01 2.9982E+00 2.5435E+00 8.5122E+01 1.2506E+01 8.4318E+01
Mean 1.0449E−197 3.6686E−01 1.1296E−06 3.8649E+01 1.9398E+00 5.7113E−01 4.9909E+01 6.7621E+00 6.6329E+01
STD 0.0000E+00 4.1234E−01 1.5496E−06 1.4645E+01 5.5471E−01 4.7769E−01 2.7878E+01 2.1010E+00 9.4383E+00
F5 Best 7.9224E−08 2.6213E+01 2.5787E+01 4.9381E+01 3.8449E+01 2.0183E+01 2.6923E+01 1.6046E+01 2.1360E+02
Worst 2.4608E+01 2.8902E+01 2.8748E+01 1.5111E+06 2.9294E+03 3.3744E+02 2.8757E+01 5.8272E+02 8.0033E+07
Mean 4.8493E+00 2.8359E+01 2.6993E+01 9.4594E+04 5.6504E+02 4.5709E+01 2.7935E+01 7.3227E+01 2.6909E+06
STD 9.8652E+00 7.4575E−01 8.0766E−01 2.8920E+05 8.8406E+02 5.9492E+01 5.3679E−01 1.1786E+02 1.4608E+07

F6 Best 4.6778E−12 1.7989E+00 7.7955E−05 4.1260E+00 8.0166E−01 6.3996E−09 7.7984E−02 1.0342E−16 1.1135E+00
Worst 3.4783E−07 5.5347E+00 1.2630E+00 1.4647E+02 1.7881E+00 2.1075E−05 9.9711E−01 1.3162E−02 1.0101E+04
Mean 3.4157E−08 3.6958E+00 6.8931E−01 2.0045E+01 1.2963E+00 2.1819E−06 4.3282E−01 4.3874E−04 1.0022E+03
STD 6.5676E−08 8.0031E−01 3.5603E−01 2.9239E+01 2.8293E−01 4.7464E−06 2.3830E−01 2.4031E−03 3.0397E+03
F7 Best 2.6158E−06 2.7697E−03 3.5203E−04 3.6638E−03 1.6213E−02 4.4480E−02 2.0010E−04 2.2990E−02 7.6178E−02
Worst 4.5937E−04 2.3775E−02 4.4746E−03 8.5922E−01 9.5213E−02 1.8912E−01 1.7971E−02 1.9154E−01 1.9040E+01
Mean 1.0512E−04 1.0797E−02 2.0936E−03 1.4483E−01 3.5576E−02 9.0402E−02 3.8493E−03 7.7579E−02 3.0785E+00
STD 1.0581E−04 5.3721E−03 1.0777E−03 1.9743E−01 1.9754E−02 3.4907E−02 4.3432E−03 4.5161E−02 5.4242E+00
F8 Best −1.2569E+04 −7.3162E+03 −7.2007E+03 −4.4667E+03 −9.5122E+03 −3.4688E+03 −1.2567E+04 −3.5277E+03 −1.0475E+04
Worst −1.2569E+04 −4.6233E+03 −3.4153E+03 −3.3228E+03 −7.2117E+03 −2.0919E+03 −7.0495E+03 −1.7384E+03 −6.9838E+03
Mean −1.2569E+04 −5.8828E+03 −5.9014E+03 −3.7930E+03 −8.0292E+03 −2.6741E+03 −1.0522E+04 −2.4884E+03 −8.8188E+03
STD 1.8775E−04 6.1166E+02 7.6410E+02 2.9567E+02 6.7539E+02 3.5924E+02 1.8162E+03 4.3288E+02 8.1531E+02
F9 Best 0.0000E+00 1.0304E+02 5.6843E−14 3.1313E−03 8.9042E+01 2.1889E+01 0.0000E+00 1.7909E+01 9.8520E+01
Worst 0.0000E+00 2.7225E+02 9.7855E+00 1.0631E+02 2.0363E+02 6.1687E+01 7.3952E+01 4.2783E+01 2.4718E+02
Mean 0.0000E+00 1.8400E+02 1.7682E+00 4.6123E+01 1.2356E+02 3.7709E+01 2.4651E+00 3.0280E+01 1.6932E+02
STD 0.0000E+00 4.4655E+01 3.0413E+00 3.4456E+01 3.0144E+01 1.0036E+01 1.3502E+01 6.3234E+00 4.2520E+01
F10 Best 8.8818E−16 1.3580E−12 7.1942E−14 1.3046E−01 1.1340E+00 3.0002E−06 8.8818E−16 8.6779E−09 1.4836E+00
Worst 8.8818E−16 3.7125E+00 1.3589E−13 2.0374E+01 2.9430E+00 2.6608E+00 7.9936E−15 1.3404E+00 1.9963E+01
Mean 8.8818E−16 1.5625E+00 1.0404E−13 1.4513E+01 1.8003E+00 4.1633E−01 4.7962E−15 4.4681E−02 1.5169E+01
STD 0.0000E+00 1.6076E+00 1.3791E−14 8.1063E+00 4.7857E−01 8.5605E−01 2.8529E−15 2.4473E−01 6.8139E+00
F11 Best 0.0000E+00 0.0000E+00 0.0000E+00 3.0943E−01 7.9821E−01 6.4046E+01 0.0000E+00 1.9133E+01 6.0102E−01
Worst 0.0000E+00 8.9366E−02 2.7677E−02 3.0726E+00 9.9129E−01 1.1028E+02 1.3763E−01 5.5310E+01 2.7093E+02
Mean 0.0000E+00 1.0987E−02 4.4610E−03 1.1399E+00 8.9914E−01 8.5935E+01 4.5878E−03 2.8310E+01 2.2089E+01
STD 0.0000E+00 1.7120E−02 8.7510E−03 5.2740E−01 5.3427E−02 8.9650E+00 2.5128E−02 7.6542E+00 5.6315E+01
F12 Best 2.7740E−11 3.4138E−01 1.4121E−02 9.1971E−01 6.7568E−02 5.7347E−11 6.5012E−03 5.2386E−01 2.1152E+00
Worst 2.6005E−07 1.8043E+01 1.5857E−01 5.3384E+05 4.1350E+00 1.2555E+00 6.1329E−02 3.4070E+00 2.5600E+08
Mean 3.6895E−08 7.5149E+00 4.7988E−02 2.8034E+04 1.8157E+00 2.2526E−01 2.2071E−02 1.9135E+00 8.5336E+06
STD 5.4173E−08 4.4181E+00 3.1123E−02 1.0072E+05 1.4051E+00 3.6070E−01 1.2612E−02 7.7171E−01 4.6739E+07
F13 Best 7.8752E−11 1.8834E+00 2.2815E−01 2.5119E+00 3.8911E−02 5.0062E−11 6.7885E−02 1.7135E−01 2.4142E+00
Worst 1.0988E−02 4.8076E+00 1.1927E+00 3.2766E+06 2.9693E−01 1.6087E+00 1.2649E+00 3.0698E+01 9.3419E+02
Mean 7.3420E−04 3.2360E+00 6.2308E−01 2.9465E+05 1.6759E−01 5.6211E−02 5.8120E−01 8.4765E+00 5.3251E+01
STD 2.7872E−03 6.1605E−01 2.3309E−01 7.7295E+05 7.0790E−02 2.9326E−01 2.8742E−01 6.5773E+00 1.6722E+02
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
TABLE 3 Results of benchmark functions (F1–F13), with 100 dimensions



No. GTO TSA GWO SCA MVO PSO WOA GSA MFO
F1 Best 0.0000E+00 4.3745E−11 3.2223E−13 7.6341E+02 1.2629E+02 3.4891E−01 6.6470E−88 2.6529E+03 3.4523E+04
Worst 0.0000E+00 1.5602E−09 5.2187E−12 3.3116E+04 2.1038E+02 1.3724E+01 7.0325E−72 5.5277E+03 1.2436E+05
Mean 0.0000E+00 3.8145E−10 1.5011E−12 1.2555E+04 1.8245E+02 1.9267E+00 3.5572E−73 4.0160E+03 6.0081E+04
STD 0.0000E+00 4.4179E−10 1.1956E−12 7.8229E+03 2.4141E+01 2.7296E+00 1.3165E−72 7.3515E+02 1.9837E+04
F2 Best 8.6580E−213 5.8471E−08 1.3117E−08 8.6795E−01 2.9588E+04 1.1179E+00 4.6167E−55 9.9745E+00 1.6953E+02
Worst 5.9582E−198 8.4608E−07 8.3604E−08 1.6535E+01 7.8941E+24 6.9342E+00 9.6057E−49 3.5919E+01 3.6393E+02
Mean 2.5890E−199 2.4816E−07 4.0127E−08 6.4652E+00 5.7119E+23 3.0410E+00 4.1365E−50 1.9359E+01 2.3980E+02
STD 0.0000E+00 2.0743E−07 1.4356E−08 4.5167E+00 2.0315E+24 1.1479E+00 1.7649E−49 5.4089E+00 4.2589E+01
F3 Best 0.0000E+00 1.7497E+03 5.4350E+01 1.5488E+05 5.2760E+04 4.3888E+03 6.7617E+05 9.3091E+03 1.5333E+05
Worst 0.0000E+00 2.7387E+04 3.5225E+03 3.4417E+05 7.4132E+04 7.8068E+04 1.8385E+06 3.2503E+04 3.6077E+05
Mean 0.0000E+00 1.2330E+04 6.3184E+02 2.6040E+05 6.4830E+04 2.0501E+04 1.1967E+06 1.5937E+04 2.2479E+05
STD 0.0000E+00 6.9693E+03 7.3824E+02 4.5331E+04 6.0225E+03 1.6123E+04 3.0569E+05 5.1043E+03 5.8877E+04
F4 Best 1.0044E−209 3.1582E+01 7.8209E−02 8.0178E+01 5.0232E+01 4.8039E+00 2.4640E+01 1.4425E+01 8.7002E+01
Worst 6.0946E−193 8.7166E+01 2.8610E+00 9.5109E+01 7.1352E+01 8.0438E+00 9.6764E+01 2.2952E+01 9.5947E+01
Mean 3.3959E−194 5.3289E+01 8.1032E−01 8.8694E+01 6.0339E+01 6.6532E+00 7.9260E+01 1.7738E+01 9.2932E+01
STD 0.0000E+00 1.5108E+01 7.6359E−01 3.1603E+00 6.2794E+00 8.7633E−01 2.0802E+01 1.8232E+00 2.0311E+00
F5 Best 3.6576E−07 9.6448E+01 9.6630E+01 4.0077E+07 3.2487E+03 2.7617E+02 9.7534E+01 3.1327E+04 3.5622E+07
Worst 9.4902E+01 9.8696E+01 9.8545E+01 2.4510E+08 3.1873E+04 6.5160E+02 9.8409E+01 2.3289E+05 2.6711E+08
Mean 9.5133E+00 9.8264E+01 9.7984E+01 1.2995E+08 1.0425E+04 4.5377E+02 9.8158E+01 1.0284E+05 1.4217E+08
STD 2.8905E+01 5.1945E−01 5.9335E−01 6.0439E+07 9.3257E+03 9.1540E+01 2.2169E−01 5.0943E+04 5.4204E+07
F6 Best 5.7588E−06 1.2171E+01 8.2118E+00 2.5497E+03 1.1761E+02 2.9039E−01 2.0601E+00 2.8476E+03 2.7690E+04
Worst 2.6158E−02 1.6824E+01 1.2106E+01 2.3493E+04 1.9653E+02 1.1422E+01 7.1318E+00 6.1903E+03 1.0361E+05
Mean 5.1516E−03 1.4264E+01 1.0252E+01 9.5915E+03 1.5338E+02 1.8094E+00 4.0446E+00 4.1147E+03 6.1069E+04
STD 6.3310E−03 1.2294E+00 1.1122E+00 6.0050E+03 2.3128E+01 2.4744E+00 1.1772E+00 8.7754E+02 1.7189E+04
F7 Best 7.6254E−06 2.3419E−02 1.6825E−03 2.3358E+01 2.6043E−01 2.2197E+00 4.6665E−05 1.6157E+00 5.4901E+01
Worst 4.5520E−04 9.2439E−02 1.3277E−02 3.2039E+02 9.4876E−01 1.9939E+01 2.5268E−02 7.9360E+00 6.3587E+02
Mean 1.0238E−04 4.9408E−02 6.4748E−03 1.4188E+02 6.4617E−01 5.6669E+00 5.4548E−03 3.9681E+00 2.6049E+02
STD 1.0619E−04 1.9827E−02 2.3799E−03 7.7993E+01 1.7818E−01 3.5655E+00 6.4975E−03 1.3930E+00 1.3063E+02
F8 Best −4.1898E+04 −1.5035E+04 −1.9513E+04 −7.9229E+03 −2.5696E+04 −6.6500E+03 −4.1898E+04 −6.5083E+03 −2.6234E+04
Worst −4.1895E+04 −1.1637E+04 −5.2781E+03 −5.6589E+03 −2.0415E+04 −3.0482E+03 −2.4827E+04 −3.5562E+03 −1.8669E+04
Mean −4.1898E+04 −1.3204E+04 −1.6025E+04 −6.9765E+03 −2.2881E+04 −4.9691E+03 −3.5969E+04 −4.7038E+03 −2.2385E+04
STD 8.3095E−01 1.0233E+03 2.4115E+03 5.8043E+02 1.6663E+03 7.6092E+02 5.6258E+03 8.0790E+02 1.8572E+03
F9 Best 0.0000E+00 7.2721E+02 7.4692E−11 6.0801E+01 5.3700E+02 1.1142E+02 0.0000E+00 1.2374E+02 7.5427E+02
Worst 0.0000E+00 1.1544E+03 2.9368E+01 4.8717E+02 8.7310E+02 2.3035E+02 1.1369E−13 2.2889E+02 9.9996E+02
Mean 0.0000E+00 9.9019E+02 9.4659E+00 2.5354E+02 7.1314E+02 1.5999E+02 3.7896E−15 1.7647E+02 8.7778E+02
STD 0.0000E+00 9.1744E+01 8.6083E+00 1.1244E+02 8.8002E+01 2.6130E+01 2.0756E−14 2.5022E+01 6.9373E+01
F10 Best 8.8818E−16 3.8914E−07 5.1723E−08 8.2704E+00 4.2388E+00 2.0932E+00 8.8818E−16 3.6880E+00 1.9575E+01
Worst 8.8818E−16 1.7234E−05 2.6265E−07 2.0686E+01 2.0249E+01 6.2428E+00 7.9936E−15 5.9216E+00 1.9963E+01
Mean 8.8818E−16 4.2338E−06 1.1814E−07 1.8848E+01 9.1430E+00 3.1808E+00 3.9672E−15 4.4641E+00 1.9903E+01
STD 0.0000E+00 3.5786E−06 5.0343E−08 4.0766E+00 6.8589E+00 9.2895E−01 2.7572E−15 5.4120E−01 8.9166E−02

F11 Best 0.0000E+00 6.2403E−12 1.2446E−13 1.9296E+01 1.9984E+00 2.7754E+02 0.0000E+00 5.9878E+02 2.8124E+02
Worst 0.0000E+00 3.9800E−02 4.4613E−02 2.7897E+02 3.0392E+00 3.6336E+02 0.0000E+00 7.6172E+02 8.1021E+02
Mean 0.0000E+00 5.6575E−03 5.5354E−03 1.0325E+02 2.5740E+00 3.3598E+02 0.0000E+00 6.7162E+02 5.5946E+02
STD 0.0000E+00 1.2978E−02 1.3163E−02 6.8312E+01 3.2546E−01 1.7042E+01 0.0000E+00 4.2890E+01 1.2589E+02
F12 Best 1.2274E−10 5.9311E+00 1.7035E−01 2.7122E+07 1.2233E+01 1.9822E−01 2.5426E−02 5.8755E+00 9.9734E+07
Worst 4.5135E−04 2.5840E+01 4.8755E−01 6.6708E+08 3.0268E+01 2.2760E+00 7.5791E−02 1.8677E+01 1.1297E+09
Mean 6.9514E−05 1.4256E+01 2.8446E−01 3.0081E+08 1.8116E+01 9.9461E−01 4.6775E−02 1.1223E+01 3.3380E+08
STD 1.1401E−04 5.3756E+00 7.8656E−02 1.6533E+08 5.3798E+00 5.6330E−01 1.5897E−02 3.4655E+00 1.9426E+08
F13 Best 5.2669E−07 9.0494E+00 5.8342E+00 1.2090E+08 1.4305E+02 4.9141E+00 1.3121E+00 1.6306E+02 1.5049E+08
Worst 3.7673E−02 1.8865E+01 7.6615E+00 8.9827E+08 2.2023E+02 8.6882E+01 4.8376E+00 2.4589E+04 1.6634E+09
Mean 2.8296E−03 1.3106E+01 6.8300E+00 5.1265E+08 1.7296E+02 4.5033E+01 2.9166E+00 2.6429E+03 6.3494E+08
STD 7.7068E−03 2.2122E+00 4.8020E−01 2.0445E+08 2.1961E+01 2.3441E+01 8.2925E−01 5.2405E+03 3.7801E+08
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
TABLE 4 Results of benchmark functions (F1–F13), with 500 dimensions
No. GTO TSA GWO SCA MVO PSO WOA GSA MFO
F1 Best 0.0000E+00 3.9560E−03 6.6965E−04 5.4429E+04 1.0108E+05 2.4798E+02 4.4134E−82 4.5957E+04 1.0806E+06
Worst 0.0000E+00 6.9541E−02 2.7993E−03 3.4888E+05 1.2842E+05 7.5508E+02 8.6466E−70 5.9391E+04 1.2190E+06
Mean 0.0000E+00 3.3903E−02 1.4509E−03 2.1389E+05 1.1899E+05 4.1854E+02 3.1481E−71 5.2706E+04 1.1477E+06
STD 0.0000E+00 1.7024E−02 5.1598E−04 7.8042E+04 7.2207E+03 1.1329E+02 1.5776E−70 3.6249E+03 3.4308E+04
F2 Best 2.6963E−214 2.5553E−03 7.4946E−03 3.9088E+01 1.4513E+146 1.4812E+02 1.5485E−57 2.6697E+02 8.8480E+75
Worst 4.4229E−197 1.3638E−02 1.5640E−02 2.1944E+02 8.0920E+207 1.9263E+02 1.6467E−46 3.1868E+269 2.3500E+131
Mean 2.5844E−198 7.0192E−03 1.0670E−02 1.0193E+02 5.9867E+206 1.6429E+02 8.7863E−48 1.2361E+268 7.8333E+129
STD 0.0000E+00 2.9923E−03 2.1801E−03 4.1878E+01 Inf 1.3636E+01 3.2049E−47 Inf 4.2905E+130
F3 Best 0.0000E+00 8.8185E+05 1.7206E+05 3.6517E+06 1.6141E+06 2.4182E+05 1.5733E+07 3.6984E+05 3.5091E+06
Worst 0.0000E+00 1.7466E+06 4.5015E+05 1.0427E+07 2.5157E+06 2.8206E+06 7.0624E+07 3.5203E+06 6.8987E+06
Mean 0.0000E+00 1.3570E+06 3.0508E+05 6.6787E+06 2.1117E+06 8.2904E+05 3.0098E+07 1.1329E+06 4.8240E+06
STD 0.0000E+00 2.3510E+05 6.3185E+04 1.6460E+06 2.4168E+05 5.7559E+05 1.1821E+07 6.2456E+05 7.0979E+05
F4 Best 5.7867E−206 9.8370E+01 5.2558E+01 9.8509E+01 9.2032E+01 1.1371E+01 3.4048E+01 2.4025E+01 9.7363E+01
Worst 4.9921E−189 9.9606E+01 7.8509E+01 9.9498E+01 9.6253E+01 1.4971E+01 9.9266E+01 3.5637E+01 9.9417E+01
Mean 1.6724E−190 9.9163E+01 6.6077E+01 9.9155E+01 9.4276E+01 1.3467E+01 8.5823E+01 2.7805E+01 9.8841E+01
STD 0.0000E+00 3.0766E−01 5.4071E+00 2.3316E−01 1.3414E+00 7.7605E−01 1.6408E+01 2.3633E+00 4.4627E−01
F5 Best 6.0311E−06 1.4303E+04 4.9753E+02 1.1139E+09 1.3713E+08 2.2554E+04 4.9544E+02 5.4336E+06 4.4020E+09
Worst 4.9340E+02 5.7146E+05 4.9888E+02 2.7805E+09 2.4214E+08 3.9044E+04 4.9712E+02 9.7750E+06 5.4984E+09
Mean 5.0118E+01 1.4598E+05 4.9809E+02 1.9327E+09 1.7223E+08 3.0855E+04 4.9617E+02 7.1942E+06 4.9881E+09
STD 1.5010E+02 1.1587E+05 3.5353E−01 4.8046E+08 3.3038E+07 4.4688E+03 3.9636E−01 1.1225E+06 2.5974E+08

F6 Best 1.3871E−03 9.8734E+01 8.8316E+01 7.4445E+04 1.0555E+05 2.3158E+02 1.9722E+01 4.2972E+04 1.0994E+06
Worst 1.8175E+00 1.0637E+02 9.6325E+01 3.0597E+05 1.3277E+05 5.5803E+02 4.7188E+01 5.9536E+04 1.2234E+06
Mean 4.9894E−01 1.0260E+02 9.1667E+01 2.0710E+05 1.1554E+05 3.8808E+02 3.2980E+01 5.3015E+04 1.1581E+06
STD 4.6877E−01 1.8822E+00 1.9816E+00 5.1880E+04 8.0632E+03 8.4844E+01 8.5784E+00 3.6469E+03 3.2364E+04
F7 Best 6.5992E−06 1.1804E+00 2.7350E−02 6.6861E+03 8.4147E+02 1.5408E+03 1.2734E−04 6.2024E+02 3.3078E+04
Worst 4.0011E−04 7.2526E+00 8.0187E−02 2.1078E+04 1.3956E+03 3.2398E+03 2.5790E−02 1.0682E+03 4.4810E+04
Mean 9.3845E−05 3.4194E+00 4.5572E−02 1.5259E+04 1.1371E+03 2.3926E+03 4.7984E−03 8.2011E+02 3.8656E+04
STD 1.0466E−04 1.5552E+00 1.3763E−02 3.2438E+03 1.4419E+02 3.7904E+02 5.9384E−03 1.2175E+02 2.3822E+03
F8 Best −2.0949E+05 −3.6264E+04 −6.8972E+04 −1.8235E+04 −8.3356E+04 −1.3589E+04 −2.0936E+05 −1.5317E+04 −7.1489E+04
Worst −2.0939E+05 −2.4922E+04 −4.9024E+04 −1.3277E+04 −6.5288E+04 −7.4799E+03 −1.0868E+05 −7.3612E+03 −5.7310E+04
Mean −2.0948E+05 −3.0578E+04 −5.6970E+04 −1.5521E+04 −7.4143E+04 −1.0613E+04 −1.7810E+05 −1.0173E+04 −6.2107E+04
STD 2.0361E+01 2.3541E+03 5.0704E+03 1.1725E+03 4.9435E+03 1.6249E+03 3.3168E+04 1.7550E+03 4.2467E+03
F9 Best 0.0000E+00 4.1472E+03 2.6623E+01 2.6720E+02 6.1693E+03 2.1598E+03 0.0000E+00 2.4150E+03 6.6583E+03
Worst 0.0000E+00 6.7721E+03 1.3167E+02 2.2344E+03 6.8161E+03 2.7581E+03 0.0000E+00 2.9123E+03 7.3658E+03
Mean 0.0000E+00 5.5376E+03 7.1224E+01 1.0817E+03 6.4058E+03 2.4376E+03 0.0000E+00 2.6954E+03 6.9741E+03
STD 0.0000E+00 6.2108E+02 2.4092E+01 4.4975E+02 1.7839E+02 1.3789E+02 0.0000E+00 1.2631E+02 1.7055E+02
F10 Best 8.8818E−16 4.8050E−03 1.2897E−03 9.7539E+00 2.0726E+01 6.8186E+00 8.8818E−16 9.7337E+00 2.0047E+01
Worst 8.8818E−16 2.8829E−02 2.5546E−03 2.0835E+01 2.0909E+01 8.8899E+00 7.9936E−15 1.0576E+01 2.0465E+01
Mean 8.8818E−16 1.1216E−02 1.9672E−03 1.8671E+01 2.0836E+01 7.7665E+00 5.2699E−15 1.0235E+01 2.0285E+01
STD 0.0000E+00 5.1528E−03 3.2084E−04 3.9157E+00 4.9099E−02 4.4145E−01 2.2242E−15 2.1885E−01 1.3999E−01
F11 Best 0.0000E+00 4.8899E−04 6.5815E−05 6.1575E+02 9.5641E+02 1.6806E+03 0.0000E+00 7.7653E+03 9.8341E+03
Worst 0.0000E+00 1.6651E−01 1.1380E−01 3.7589E+03 1.1843E+03 1.8915E+03 0.0000E+00 8.9367E+03 1.0979E+04
Mean 0.0000E+00 1.0639E−02 1.2159E−02 2.0122E+03 1.0719E+03 1.7776E+03 0.0000E+00 8.5757E+03 1.0355E+04
STD 0.0000E+00 2.9727E−02 3.1647E−02 7.7251E+02 7.5775E+01 4.6628E+01 0.0000E+00 2.3837E+02 3.3259E+02
F12 Best 9.8793E−12 5.7627E+05 6.6568E−01 3.9643E+09 9.4411E+07 2.2294E+00 2.8082E−02 3.8321E+01 1.0565E+10
Worst 4.6529E−03 1.3045E+07 9.2436E−01 7.7107E+09 2.3943E+08 5.7086E+00 1.7943E−01 1.8065E+04 1.3022E+10
Mean 3.7955E−04 3.8079E+06 7.6761E−01 5.9041E+09 1.7199E+08 3.9439E+00 9.1032E−02 5.7877E+03 1.1784E+10
STD 9.9661E−04 3.4247E+06 5.6579E−02 1.1005E+09 4.3412E+07 7.4974E−01 3.6962E−02 5.6640E+03 5.7768E+08
F13 Best 1.5099E−04 9.7440E+04 4.7168E+01 6.3484E+09 3.4209E+08 5.8809E+02 8.8398E+00 1.5441E+06 1.9839E+10
Worst 1.6580E+00 2.5135E+06 5.4158E+01 1.2178E+10 6.2665E+08 8.7066E+02 3.7606E+01 5.7846E+06 2.5181E+10
Mean 1.6720E−01 8.0758E+05 5.0275E+01 9.8697E+09 4.7357E+08 7.5158E+02 1.8407E+01 2.6717E+06 2.1917E+10
STD 3.3179E−01 7.7246E+05 1.3109E+00 1.5823E+09 8.1781E+07 6.5251E+01 6.5833E+00 8.7124E+05 1.2045E+09
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
TABLE 5 Results of benchmark functions (F1–F13), with 1000 dimensions



No. GTO TSA GWO SCA MVO PSO WOA GSA MFO
F1 Best 0.0000E+00 8.3482E−01 1.3068E−01 8.9858E+04 7.4867E+05 2.4635E+03 3.5027E−78 1.1278E+05 2.6694E+06
Worst 0.0000E+00 2.1216E+01 4.2039E−01 8.3308E+05 8.7248E+05 4.8032E+03 4.2387E−68 1.3478E+05 2.7967E+06
Mean 0.0000E+00 5.8069E+00 2.4126E−01 4.4114E+05 8.0535E+05 3.5154E+03 1.5101E−69 1.2511E+05 2.7334E+06
STD 0.0000E+00 4.9022E+00 6.9701E−02 1.6162E+05 3.3743E+04 6.1530E+02 7.7304E−69 5.1680E+03 3.4145E+04
F2 Best 2.4266E−209 5.8490E−03 2.2670E−01 1.0000E+300 4.9254E+203 7.1393E+02 9.6113E−55 1.8916E+258 1.0000E+300
Worst 1.8437E−196 7.8520E−02 2.1375E+00 1.0000E+300 3.8712E+271 1.0000E+300 6.5641E−47 2.1923E+299 1.0000E+300
Mean 1.0098E−197 2.7039E−02 6.4334E−01 1.0000E+300 2.6660E+270 1.0000E+300 5.5849E−48 7.3077E+297 1.0000E+300
STD 0.0000E+00 1.6976E−02 3.7341E−01 1.0000E+300 1.0000E+300 1.0000E+300 1.6168E−47 1.0000E+300 1.0000E+300
F3 Best 0.0000E+00 4.2454E+06 9.9816E+05 1.7727E+07 7.0988E+06 9.3878E+05 7.1384E+07 2.3625E+06 1.2164E+07
Worst 0.0000E+00 7.6531E+06 2.4071E+06 5.4917E+07 8.9303E+06 9.1543E+06 2.3891E+08 1.2445E+07 2.9589E+07
Mean 0.0000E+00 5.7375E+06 1.4940E+06 3.0052E+07 7.9011E+06 3.0005E+06 1.2016E+08 6.0920E+06 1.9612E+07
STD 0.0000E+00 9.7986E+05 3.5552E+05 7.6664E+06 5.7369E+05 2.0975E+06 3.6240E+07 2.1045E+06 3.7418E+06
F4 Best 1.0392E−204 9.9250E+01 7.2506E+01 9.9276E+01 9.5893E+01 1.3764E+01 1.5379E+00 2.8898E+01 9.9152E+01
Worst 2.5843E−189 9.9772E+01 8.5165E+01 9.9786E+01 9.8933E+01 1.7638E+01 9.9575E+01 3.5298E+01 9.9742E+01
Mean 1.2771E−190 9.9577E+01 7.8413E+01 9.9610E+01 9.7606E+01 1.5660E+01 7.8392E+01 3.2847E+01 9.9551E+01
STD 0.0000E+00 1.2110E−01 3.3976E+00 1.0754E−01 7.2257E−01 8.6534E−01 2.5060E+01 1.5336E+00 1.3934E−01
F5 Best 2.5289E−04 1.3523E+07 1.0197E+03 1.7727E+09 2.0251E+09 2.4927E+05 9.9262E+02 1.8336E+07 1.1612E+10
Worst 9.8841E+02 9.5928E+07 1.1400E+03 6.6390E+09 2.6641E+09 4.4661E+05 9.9675E+02 2.4859E+07 1.3278E+10
Mean 1.0147E+02 4.4624E+07 1.0555E+03 4.1389E+09 2.3725E+09 3.3299E+05 9.9406E+02 2.1407E+07 1.2499E+10
STD 3.0070E+02 2.1372E+07 3.1904E+01 9.9639E+08 1.9624E+08 5.0670E+04 9.1931E−01 1.7819E+06 4.0469E+08
F6 Best 3.7165E−03 2.2555E+02 1.9652E+02 2.9666E+05 7.2503E+05 2.7081E+03 3.3327E+01 1.1729E+05 2.6129E+06
Worst 3.3921E+00 2.4641E+02 2.0755E+02 8.9968E+05 8.5012E+05 4.9081E+03 9.9718E+01 1.3311E+05 2.8015E+06
Mean 1.1147E+00 2.3491E+02 2.0279E+02 5.2718E+05 7.8921E+05 3.6446E+03 6.6643E+01 1.2584E+05 2.7348E+06
STD 9.6315E−01 5.0550E+00 2.5997E+00 1.5724E+05 3.6771E+04 5.5579E+02 1.6649E+01 3.6788E+03 3.9399E+04
F7 Best 1.1982E−05 7.8081E+01 1.0699E−01 4.0231E+04 2.2217E+04 1.6648E+04 5.4700E−05 4.5809E+03 1.7760E+05
Worst 3.7004E−04 8.3688E+02 2.0425E−01 9.9845E+04 3.3152E+04 2.6003E+04 1.8211E−02 6.4409E+03 2.1363E+05
Mean 1.1591E−04 4.1201E+02 1.4151E−01 6.5161E+04 2.8074E+04 2.0704E+04 4.0616E−03 5.3877E+03 1.9613E+05
STD 8.7652E−05 2.1052E+02 2.5082E−02 1.3534E+04 2.8097E+03 2.5684E+03 4.9539E−03 4.6217E+02 7.2124E+03
F8 Best −4.1898E+05 −4.9824E+04 −9.9463E+04 −2.5086E+04 −1.1290E+05 −2.0619E+04 −4.1887E+05 −2.0305E+04 −1.0562E+05
Worst −4.1870E+05 −3.9010E+04 −1.9154E+04 −1.9437E+04 −9.9410E+04 −1.1943E+04 −2.3384E+05 −1.0929E+04 −8.1676E+04
Mean −4.1895E+05 −4.4899E+04 −8.7079E+04 −2.1762E+04 −1.0817E+05 −1.5677E+04 −3.4394E+05 −1.4485E+04 −8.9969E+04
STD 5.6761E+01 2.7841E+03 1.3376E+04 1.7287E+03 3.7498E+03 2.1340E+03 6.4239E+04 2.5838E+03 5.7312E+03
F9 Best 0.0000E+00 5.4952E+03 1.4223E+02 7.6823E+02 1.4247E+04 5.9056E+03 0.0000E+00 6.1952E+03 1.5163E+04
Worst 0.0000E+00 1.3116E+04 5.2886E+02 3.9540E+03 1.4955E+04 7.1027E+03 0.0000E+00 6.9938E+03 1.5837E+04
Mean 0.0000E+00 9.6317E+03 2.2177E+02 1.9316E+03 1.4595E+04 6.4470E+03 0.0000E+00 6.5838E+03 1.5535E+04
STD 0.0000E+00 2.1203E+03 7.5592E+01 8.0283E+02 2.4206E+02 3.0401E+02 0.0000E+00 1.7472E+02 1.7392E+02
F10 Best 8.8818E−16 2.6172E−02 1.2469E−02 9.4061E+00 2.0949E+01 7.8819E+00 8.8818E−16 1.0560E+01 2.0026E+01
Worst 8.8818E−16 8.7354E−01 2.4657E−02 2.0886E+01 2.1019E+01 9.5002E+00 7.9936E−15 1.1181E+01 2.0695E+01
Mean 8.8818E−16 1.2690E−01 1.8284E−02 1.9903E+01 2.0988E+01 8.6221E+00 4.5593E−15 1.0885E+01 2.0409E+01
STD 0.0000E+00 1.5112E−01 2.6578E−03 2.8383E+00 2.3613E−02 3.0715E−01 2.7174E−15 1.8625E−01 2.1664E−01

F11 Best 0.0000E+00 8.2300E−02 1.0870E−02 1.7364E+03 6.3239E+03 3.4947E+03 0.0000E+00 2.0001E+04 2.3799E+04
Worst 0.0000E+00 1.0574E+00 2.3040E−01 7.8590E+03 7.5367E+03 3.7675E+03 1.1102E−16 2.1149E+04 2.5467E+04
Mean 0.0000E+00 3.8167E−01 2.5458E−02 4.2058E+03 7.1021E+03 3.5881E+03 3.7007E−18 2.0550E+04 2.4492E+04
STD 0.0000E+00 2.6010E−01 3.9153E−02 1.5832E+03 3.0834E+02 5.4933E+01 2.0270E−17 3.1159E+02 3.8192E+02
F12 Best 5.3797E−09 1.4692E+08 9.7371E−01 9.4555E+09 3.5107E+09 4.9507E+00 3.2476E−02 2.1126E+04 2.8568E+10
Worst 2.7047E−03 2.0570E+09 2.2587E+00 1.9237E+10 5.5082E+09 8.3249E+00 2.3904E−01 4.2617E+05 3.3148E+10
Mean 2.7031E−04 6.9381E+08 1.2517E+00 1.4129E+10 4.1836E+09 6.4110E+00 1.0000E−01 1.2193E+05 3.0711E+10
STD 5.5534E−04 3.9588E+08 3.2471E−01 2.5907E+09 5.1906E+08 8.9298E−01 4.2739E−02 9.7186E+04 1.1132E+09
F13 Best 1.0353E−05 1.4507E+08 1.1022E+02 1.2010E+10 8.1387E+09 1.7132E+03 1.4103E+01 7.7569E+06 5.1672E+10
Worst 1.1775E+00 1.1303E+09 1.4460E+02 2.9456E+10 1.2072E+10 4.4767E+03 6.8057E+01 1.8448E+07 5.8610E+10
Mean 1.7370E−01 4.5791E+08 1.2083E+02 2.1265E+10 9.1269E+09 2.2499E+03 3.7591E+01 1.2277E+07 5.5609E+10
STD 2.8391E−01 2.4115E+08 7.2282E+00 4.4422E+09 1.0247E+09 5.2335E+02 1.5478E+01 2.8423E+06 1.7930E+09
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
TABLE 6 Results of benchmark functions (F14–F23)
No. GTO TSA GWO SCA MVO PSO WOA GSA MFO
F14 Best 9.9800E−01 9.9800E−01 9.9800E−01 9.9800E−01 9.9800E−01 9.9800E−01 9.9800E−01 9.9800E−01 9.9800E−01
Worst 9.9800E−01 1.8304E+01 1.2671E+01 2.9821E+00 9.9800E−01 5.9288E+00 1.0763E+01 1.2193E+01 7.8740E+00
Mean 9.9800E−01 8.1307E+00 4.2305E+00 2.0238E+00 9.9800E−01 1.6270E+00 2.6971E+00 4.8545E+00 2.3481E+00
STD 0.0000E+00 5.7947E+00 4.0855E+00 9.9084E−01 2.6200E−11 1.1164E+00 3.2933E+00 2.7988E+00 1.9439E+00
F15 Best 3.0749E−04 3.0790E−04 3.0810E−04 5.3498E−04 5.4649E−04 3.0749E−04 3.1170E−04 7.0949E−04 6.4325E−04
Worst 1.2232E−03 8.8541E−02 2.0363E−02 1.7269E−03 2.0363E−02 2.0363E−02 2.2519E−03 1.2866E−02 2.0363E−02
Mean 3.9905E−04 7.5060E−03 3.7814E−03 1.0261E−03 3.4291E−03 1.1453E−03 7.5887E−04 4.1780E−03 1.8103E−03
STD 2.7940E−04 1.7318E−02 7.5471E−03 3.7296E−04 6.8812E−03 3.6434E−03 4.8856E−04 3.1142E−03 3.5415E−03
F16 Best −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00
Worst −1.0316E+00 −1.0000E+00 −1.0316E+00 −1.0315E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00
Mean −1.0316E+00 −1.0285E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00 −1.0316E+00
STD 6.7752E−16 9.6515E−03 2.3057E−08 4.2030E−05 1.9966E−07 6.7752E−16 1.8661E−09 6.7752E−16 6.7752E−16
F17 Best 3.9789E−01 3.9789E−01 3.9789E−01 3.9797E−01 3.9789E−01 3.9789E−01 3.9789E−01 3.9789E−01 3.9789E−01
Worst 3.9789E−01 3.9822E−01 3.9821E−01 4.0861E−01 3.9789E−01 3.9789E−01 3.9795E−01 3.9789E−01 3.9789E−01
Mean 3.9789E−01 3.9794E−01 3.9790E−01 4.0085E−01 3.9789E−01 3.9789E−01 3.9789E−01 3.9789E−01 3.9789E−01
STD 0.0000E+00 6.7526E−05 5.7899E−05 2.8817E−03 3.8452E−07 0.0000E+00 1.1259E−05 0.0000E+00 0.0000E+00
F18 Best 3.0000E+00 3.0000E+00 3.0000E+00 3.0000E+00 3.0000E+00 3.0000E+00 3.0000E+00 3.0000E+00 3.0000E+00
Worst 3.0000E+00 8.4001E+01 8.4000E+01 3.0006E+00 3.0001E+00 3.0000E+00 3.0018E+00 4.2286E+00 3.0000E+00
Mean 3.0000E+00 2.4600E+01 5.7000E+00 3.0001E+00 3.0000E+00 3.0000E+00 3.0001E+00 3.0410E+00 3.0000E+00
STD 7.1892E−16 3.4300E+01 1.4789E+01 1.2351E−04 2.0705E−05 1.7337E−15 3.2667E−04 2.2431E−01 1.8895E−15

F19 Best −3.8628E+00 −3.8628E+00 −3.8628E+00 −3.8623E+00 −3.8628E+00 −3.8628E+00 −3.8628E+00 −3.8628E+00 −3.8628E+00
Worst −3.8628E+00 −3.8549E+00 −3.8609E+00 −3.8504E+00 −3.8628E+00 −3.8549E+00 −3.7435E+00 −3.8628E+00 −3.8549E+00
Mean −3.8628E+00 −3.8618E+00 −3.8626E+00 −3.8542E+00 −3.8628E+00 −3.8620E+00 −3.8529E+00 −3.8628E+00 −3.8625E+00
STD 2.7101E−15 2.3278E−03 4.3099E−04 2.6025E−03 1.3946E−06 2.4049E−03 2.1866E−02 2.2873E−06 1.4390E−03
F20 Best −3.3220E+00 −3.3208E+00 −3.3220E+00 −3.1214E+00 −3.3220E+00 −3.3220E+00 −3.3219E+00 −3.3220E+00 −3.3220E+00
Worst −3.2031E+00 −3.1350E+00 −3.0867E+00 −1.7911E+00 −3.1996E+00 −3.0867E+00 −2.8400E+00 −3.3220E+00 −3.1376E+00
Mean −3.2705E+00 −3.2601E+00 −3.2641E+00 −2.9024E+00 −3.2896E+00 −3.2753E+00 −3.2296E+00 −3.3220E+00 −3.2295E+00
STD 5.9923E−02 7.2163E−02 7.0831E−02 3.7287E−01 5.5667E−02 7.6210E−02 1.3905E−01 1.6840E−15 6.5249E−02
F21 Best −1.0153E+01 −1.0069E+01 −1.0153E+01 −8.2648E+00 −1.0153E+01 −1.0153E+01 −1.0153E+01 −1.0153E+01 −1.0153E+01
Worst −1.0153E+01 −2.6007E+00 −2.3316E+00 −4.9728E−01 −2.6304E+00 −2.6305E+00 −2.6248E+00 −2.6305E+00 −2.6305E+00
Mean −1.0153E+01 −6.3088E+00 −9.3828E+00 −2.7594E+00 −6.6184E+00 −5.3115E+00 −7.8525E+00 −5.2274E+00 −5.7977E+00
STD 6.5642E−15 3.1079E+00 2.0417E+00 2.1285E+00 3.1271E+00 3.3425E+00 2.9069E+00 3.5542E+00 3.2848E+00
F22 Best −1.0403E+01 −1.0293E+01 −1.0402E+01 −6.7740E+00 −1.0403E+01 −1.0403E+01 −1.0400E+01 −1.0403E+01 −1.0403E+01
Worst −1.0403E+01 −1.8302E+00 −5.0876E+00 −9.0261E−01 −2.7659E+00 −1.8376E+00 −2.7617E+00 −6.7587E+00 −2.7519E+00
Mean −1.0403E+01 −6.3959E+00 −1.0224E+01 −3.7007E+00 −7.6237E+00 −5.9104E+00 −7.1870E+00 −1.0182E+01 −6.4147E+00
STD 7.3759E−16 3.6566E+00 9.7016E−01 1.5485E+00 3.1657E+00 3.5717E+00 3.1381E+00 8.4434E−01 3.4084E+00
F23 Best −1.0536E+01 −1.0473E+01 −1.0536E+01 −8.1004E+00 −1.0536E+01 −1.0536E+01 −1.0535E+01 −1.0536E+01 −1.0536E+01
Worst −1.0536E+01 −2.4110E+00 −1.0533E+01 −9.4483E−01 −2.4273E+00 −1.6766E+00 −2.4158E+00 −2.4273E+00 −2.4217E+00
Mean −1.0536E+01 −7.6980E+00 −1.0535E+01 −4.0835E+00 −9.4802E+00 −7.3484E+00 −7.6994E+00 −9.9670E+00 −7.6010E+00
STD 1.2342E−15 3.4985E+00 8.5960E−04 1.7790E+00 2.7874E+00 3.7524E+00 3.3809E+00 1.9251E+00 3.6977E+00
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
TABLE 7 Results of benchmark functions (F24–F52)
No. GTO TSA GWO SCA MVO PSO WOA GSA MFO
F24 Best 1.0584E+02 6.3089E+06 1.8768E+04 4.2284E+08 6.5078E+03 1.0403E+02 1.8119E+06 5.5814E+03 1.1776E+02
Worst 1.3145E+04 7.1515E+09 5.6688E+08 2.5357E+09 7.8330E+04 3.4517E+09 5.7113E+07 7.6879E+04 2.1175E+09
Mean 2.3996E+03 2.5725E+09 1.8943E+07 1.2404E+09 3.4690E+04 2.7729E+08 1.7777E+07 2.9419E+04 2.8761E+08
STD 3.0872E+03 2.3675E+09 1.0349E+08 4.8379E+08 2.4736E+04 8.6181E+08 1.2834E+07 1.3362E+04 5.8274E+08
F25 Best 3.0000E+02 4.2713E+02 3.6514E+02 5.8872E+02 3.0016E+02 3.0000E+02 1.3525E+03 1.4805E+04 3.3001E+02
Worst 3.0001E+02 3.0971E+04 1.3714E+04 1.2232E+04 3.0065E+02 1.0168E+04 1.4469E+04 3.7765E+04 3.0950E+04
Mean 3.0000E+02 1.2595E+04 4.9310E+03 3.7761E+03 3.0035E+02 2.1079E+03 5.3221E+03 2.7146E+04 8.7341E+03
STD 1.0376E−03 8.0998E+03 3.4963E+03 1.9342E+03 1.4960E−01 2.7280E+03 3.0010E+03 5.6587E+03 8.9352E+03
F26 Best 4.0000E+02 4.0150E+02 4.0077E+02 4.1879E+02 4.0028E+02 4.0001E+02 4.0707E+02 4.0001E+02 4.0494E+02
Worst 4.3810E+02 2.7005E+03 4.4014E+02 5.8680E+02 4.3811E+02 2.4461E+03 5.9839E+02 4.1392E+02 5.1756E+02
Mean 4.1310E+02 8.0640E+02 4.2528E+02 4.8139E+02 4.1621E+02 6.1832E+02 4.7927E+02 4.0106E+02 4.4410E+02
STD 1.8021E+01 6.5143E+02 1.6946E+01 4.0557E+01 1.8567E+01 4.8112E+02 5.3013E+01 3.1886E+00 2.7976E+01
F27 Best 5.0597E+02 5.3249E+02 5.0302E+02 5.3007E+02 5.0996E+02 5.3144E+02 5.2992E+02 5.3482E+02 5.0796E+02
Worst 5.4079E+02 6.1590E+02 5.4212E+02 5.8668E+02 5.5673E+02 5.9213E+02 6.0367E+02 5.8159E+02 5.6200E+02
Mean 5.1912E+02 5.6824E+02 5.2117E+02 5.5406E+02 5.2301E+02 5.5373E+02 5.5534E+02 5.6096E+02 5.2936E+02
STD 8.4503E+00 1.9209E+01 1.0923E+01 1.1270E+01 1.3192E+01 1.4954E+01 2.0769E+01 1.3315E+01 1.2771E+01
F28 Best 6.0000E+02 6.0875E+02 6.0010E+02 6.1790E+02 6.0023E+02 6.1019E+02 6.2094E+02 6.2303E+02 6.0000E+02
Worst 6.0678E+02 6.8558E+02 6.0731E+02 6.3771E+02 6.2943E+02 6.4289E+02 6.7704E+02 6.5087E+02 6.1421E+02
Mean 6.0088E+02 6.3583E+02 6.0102E+02 6.2717E+02 6.0412E+02 6.2536E+02 6.4118E+02 6.3419E+02 6.0247E+02
STD 1.7453E+00 1.6244E+01 1.3902E+00 4.8424E+00 7.4697E+00 7.8174E+00 1.3859E+01 7.2540E+00 3.4067E+00

F29 Best 7.2142E+02 7.5562E+02 7.1554E+02 7.6677E+02 7.1725E+02 7.2226E+02 7.4840E+02 7.1699E+02 7.1118E+02
Worst 7.5250E+02 8.7913E+02 7.7412E+02 8.4006E+02 7.4184E+02 8.4773E+02 8.6015E+02 7.6508E+02 7.6507E+02
Mean 7.3320E+02 8.1258E+02 7.3993E+02 7.9711E+02 7.2772E+02 7.7544E+02 8.0093E+02 7.3280E+02 7.4003E+02
STD 8.6658E+00 3.2524E+01 1.6261E+01 1.7445E+01 7.7712E+00 3.1404E+01 2.8069E+01 1.1118E+01 1.2232E+01
F30 Best 8.0796E+02 8.2209E+02 8.0709E+02 8.3471E+02 8.0797E+02 8.2428E+02 8.1734E+02 8.2786E+02 8.1691E+02
Worst 8.4875E+02 8.9347E+02 8.4689E+02 8.8325E+02 8.4080E+02 8.8733E+02 8.8647E+02 8.7860E+02 8.5424E+02
Mean 8.2339E+02 8.6438E+02 8.2155E+02 8.5592E+02 8.2309E+02 8.4904E+02 8.4999E+02 8.5499E+02 8.2816E+02
STD 1.0463E+01 1.6646E+01 1.1402E+01 9.0694E+00 8.6113E+00 1.8164E+01 2.0530E+01 1.0902E+01 7.9486E+00
F31 Best 9.0000E+02 9.5355E+02 9.0002E+02 1.1124E+03 9.0001E+02 1.0203E+03 1.0234E+03 1.0027E+03 9.0000E+02
Worst 1.1661E+03 5.4453E+03 1.0391E+03 1.8136E+03 3.4924E+03 2.8990E+03 5.0582E+03 2.2357E+03 2.1432E+03
Mean 9.2701E+02 1.9449E+03 9.3495E+02 1.3403E+03 1.1136E+03 1.6064E+03 2.1755E+03 1.5848E+03 1.1207E+03
STD 6.1707E+01 1.0062E+03 3.7637E+01 1.7023E+02 6.7570E+02 4.3166E+02 8.3097E+02 3.4361E+02 2.7193E+02
F32 Best 1.1374E+03 1.4538E+03 1.0626E+03 2.1724E+03 1.2551E+03 1.6799E+03 1.7489E+03 1.3526E+03 1.1408E+03
Worst 2.3032E+03 2.6170E+03 2.5902E+03 2.7314E+03 2.0871E+03 3.0538E+03 3.1614E+03 2.5877E+03 2.8327E+03
Mean 1.6845E+03 2.0634E+03 1.6117E+03 2.4584E+03 1.6319E+03 2.4529E+03 2.2695E+03 1.9022E+03 1.9322E+03
STD 3.3734E+02 3.1149E+02 4.0326E+02 1.7544E+02 2.4605E+02 3.0102E+02 3.2707E+02 2.9607E+02 4.1286E+02
F33 Best 1.1010E+03 1.1303E+03 1.1176E+03 1.1998E+03 1.1066E+03 1.1160E+03 1.1991E+03 1.1683E+03 1.1013E+03
Worst 1.1385E+03 6.3816E+03 1.3505E+03 1.4800E+03 1.2196E+03 1.3052E+03 1.7010E+03 1.5848E+03 1.4925E+03
Mean 1.1101E+03 1.7907E+03 1.1839E+03 1.3042E+03 1.1626E+03 1.1788E+03 1.3845E+03 1.4258E+03 1.1611E+03
STD 1.0082E+01 1.2440E+03 5.7173E+01 8.3902E+01 3.2767E+01 5.3310E+01 1.4405E+02 8.2326E+01 8.7233E+01
F34 Best 1.6898E+03 1.8502E+04 8.1101E+03 1.6703E+06 1.6366E+05 1.6597E+03 8.8291E+04 6.8468E+04 1.5370E+03
Worst 2.3579E+04 3.4037E+08 3.6164E+06 3.3079E+07 3.0563E+06 3.2018E+04 1.4866E+07 2.4313E+06 9.8607E+06
Mean 9.5634E+03 4.6322E+07 5.9286E+05 8.4845E+06 9.6656E+05 1.2711E+04 2.9321E+06 1.0994E+06 8.3268E+05
STD 5.3047E+03 8.7422E+07 1.0730E+06 7.0682E+06 7.3858E+05 7.6356E+03 3.1526E+06 6.8897E+05 1.8411E+06
F35 Best 1.3240E+03 2.6863E+03 1.8704E+03 2.0275E+04 1.4357E+03 1.4233E+03 2.3708E+03 3.0453E+03 1.3494E+03
Worst 4.8401E+03 3.2656E+04 7.0840E+04 7.3188E+05 1.9949E+04 1.7429E+04 5.7614E+04 1.2726E+04 3.0492E+04
Mean 1.8270E+03 1.5708E+04 1.1700E+04 1.7975E+05 6.6548E+03 6.5754E+03 1.4664E+04 7.3242E+03 1.0554E+04
STD 6.7442E+02 8.8235E+03 1.2850E+04 1.8296E+05 6.0344E+03 4.9235E+03 1.3157E+04 2.1970E+03 7.9650E+03
F36 Best 1.4100E+03 1.4613E+03 1.4690E+03 1.6102E+03 1.4412E+03 1.4320E+03 1.5939E+03 3.5617E+03 1.4795E+03
Worst 1.7592E+03 8.5736E+03 8.0954E+03 7.6976E+03 2.8169E+04 7.6149E+03 9.6049E+03 1.1797E+04 1.6856E+04
Mean 1.4796E+03 3.5069E+03 4.1693E+03 3.5497E+03 6.4874E+03 2.7364E+03 3.5721E+03 6.2936E+03 4.2006E+03
STD 7.3138E+01 2.7583E+03 2.3604E+03 1.9726E+03 8.3434E+03 1.7338E+03 2.2551E+03 2.1628E+03 3.6721E+03
F37 Best 1.5074E+03 1.7343E+03 1.7424E+03 2.4373E+03 1.5523E+03 1.5438E+03 1.6640E+03 2.5084E+04 1.8108E+03
Worst 1.7375E+03 8.4734E+04 8.7138E+04 7.5449E+04 1.2105E+04 6.7568E+04 3.6357E+04 6.1849E+04 1.6771E+05
Mean 1.5721E+03 3.9358E+04 2.2007E+04 8.0718E+03 3.2544E+03 1.4031E+04 1.0719E+04 3.9068E+04 1.6030E+04
STD 5.3792E+01 3.6262E+04 2.3404E+04 1.4260E+04 3.2845E+03 1.8356E+04 1.1768E+04 9.4007E+03 3.2828E+04
F38 Best 1.6016E+03 1.6377E+03 1.6050E+03 1.6685E+03 1.6213E+03 1.6014E+03 1.6179E+03 1.7312E+03 1.6019E+03
Worst 1.8406E+03 2.1564E+03 1.8257E+03 2.0631E+03 1.8725E+03 2.1149E+03 2.5948E+03 2.4600E+03 2.0927E+03
Mean 1.6775E+03 1.8381E+03 1.7088E+03 1.8056E+03 1.7479E+03 1.8253E+03 1.8901E+03 2.0868E+03 1.7818E+03
STD 7.0705E+01 1.1504E+02 6.2221E+01 9.8415E+01 8.6990E+01 1.4774E+02 2.1005E+02 2.0805E+02 1.3603E+02

F39 Best 1.7086E+03 1.7324E+03 1.7190E+03 1.7669E+03 1.7262E+03 1.7262E+03 1.7809E+03 1.7431E+03 1.7203E+03
Worst 1.8416E+03 2.0074E+03 1.8951E+03 1.9358E+03 1.9044E+03 1.9870E+03 2.0226E+03 2.3540E+03 2.0041E+03
Mean 1.7442E+03 1.8716E+03 1.7621E+03 1.8186E+03 1.8072E+03 1.7873E+03 1.9086E+03 1.9952E+03 1.8105E+03
STD 2.9039E+01 8.0155E+01 3.4523E+01 3.8315E+01 5.6798E+01 5.3636E+01 6.9762E+01 1.8061E+02 8.2465E+01
F40 Best 1.8345E+03 7.5758E+03 2.4914E+03 1.6767E+04 4.5945E+03 2.0703E+03 3.0710E+03 3.9320E+03 3.3601E+03
Worst 2.3457E+03 7.1481E+08 9.1720E+04 1.1126E+06 9.5313E+04 8.0204E+04 1.0096E+05 2.4359E+04 6.5721E+06
Mean 1.9553E+03 2.4494E+07 2.7893E+04 2.3836E+05 2.7002E+04 2.4421E+04 5.2613E+04 1.1299E+04 4.8851E+05
STD 1.0014E+02 1.3042E+08 1.7539E+04 2.3865E+05 3.0200E+04 2.4474E+04 3.7991E+04 5.2008E+03 1.6531E+06
F41 Best 1.9055E+03 2.2795E+03 1.9713E+03 2.2105E+03 1.9064E+03 1.9201E+03 2.1985E+03 1.0553E+05 2.0445E+03
Worst 2.0449E+03 1.4130E+06 1.2518E+06 1.9760E+04 6.7077E+03 1.2805E+04 2.4005E+06 1.4838E+06 4.5944E+04
Mean 1.9576E+03 3.4726E+05 6.8507E+04 7.9774E+03 3.2506E+03 3.9208E+03 3.6259E+05 5.2067E+05 1.3363E+04
STD 4.1818E+01 5.0787E+05 2.2983E+05 4.2194E+03 1.3163E+03 2.4442E+03 6.9117E+05 4.0297E+05 1.3292E+04
F42 Best 2.0043E+03 2.0477E+03 2.0229E+03 2.0635E+03 2.0272E+03 2.0750E+03 2.0535E+03 2.2194E+03 2.0216E+03
Worst 2.0850E+03 2.4561E+03 2.2434E+03 2.2549E+03 2.2256E+03 2.3155E+03 2.3154E+03 2.5885E+03 2.2104E+03
Mean 2.0401E+03 2.2016E+03 2.0744E+03 2.1205E+03 2.0911E+03 2.1574E+03 2.1761E+03 2.4022E+03 2.0923E+03
STD 1.9792E+01 1.0099E+02 4.7632E+01 3.8545E+01 6.4332E+01 6.3037E+01 6.9976E+01 9.5365E+01 5.2051E+01
F43 Best 2.2000E+03 2.2164E+03 2.2149E+03 2.2142E+03 2.2000E+03 2.2030E+03 2.2115E+03 2.2171E+03 2.2000E+03
Worst 2.2096E+03 2.3907E+03 2.3441E+03 2.3571E+03 2.3473E+03 2.4033E+03 2.3874E+03 2.3613E+03 2.3634E+03
Mean 2.2003E+03 2.3349E+03 2.3115E+03 2.2391E+03 2.2908E+03 2.2973E+03 2.3166E+03 2.3107E+03 2.2943E+03
STD 1.7553E+00 4.9653E+01 2.7263E+01 3.5588E+01 5.7583E+01 5.5574E+01 6.4080E+01 2.9737E+01 5.6643E+01
F44 Best 2.3000E+03 2.3118E+03 2.3025E+03 2.3788E+03 2.3030E+03 2.3260E+03 2.3106E+03 2.3000E+03 2.3000E+03
Worst 2.3361E+03 3.7741E+03 3.2135E+03 2.5734E+03 3.3660E+03 4.4978E+03 3.7673E+03 3.8154E+03 2.6450E+03
Mean 2.3073E+03 2.8093E+03 2.3862E+03 2.4418E+03 2.4681E+03 3.1151E+03 2.6047E+03 2.7754E+03 2.3311E+03
STD 8.8348E+00 4.5365E+02 2.3356E+02 5.1593E+01 3.4503E+02 7.8387E+02 4.7085E+02 6.3360E+02 7.0478E+01
F45 Best 2.3000E+03 2.6414E+03 2.6113E+03 2.6482E+03 2.6168E+03 2.6331E+03 2.6291E+03 2.6610E+03 2.6144E+03
Worst 2.6377E+03 2.7596E+03 2.6438E+03 2.6739E+03 2.6730E+03 2.8430E+03 2.7049E+03 2.8399E+03 2.6542E+03
Mean 2.6133E+03 2.6885E+03 2.6252E+03 2.6605E+03 2.6347E+03 2.7133E+03 2.6612E+03 2.7506E+03 2.6350E+03
STD 5.9660E+01 3.4777E+01 9.8308E+00 6.3239E+00 1.5019E+01 4.9769E+01 1.8448E+01 5.2013E+01 1.0086E+01
F46 Best 2.5000E+03 2.7716E+03 2.7268E+03 2.7928E+03 2.7660E+03 2.6841E+03 2.7849E+03 2.5000E+03 2.5000E+03
Worst 2.8092E+03 2.9570E+03 2.7959E+03 2.8359E+03 2.7966E+03 2.9812E+03 2.8844E+03 3.0906E+03 2.8247E+03
Mean 2.7739E+03 2.8387E+03 2.7675E+03 2.8174E+03 2.7840E+03 2.8661E+03 2.8226E+03 2.8727E+03 2.7851E+03
STD 5.2575E+01 3.6083E+01 1.7976E+01 9.7396E+00 8.8582E+00 5.6766E+01 2.4696E+01 9.1783E+01 5.5270E+01
F47 Best 2.6002E+03 2.9309E+03 2.9272E+03 2.9449E+03 2.9266E+03 2.6030E+03 2.8826E+03 2.9995E+03 2.9264E+03
Worst 2.9806E+03 3.4103E+03 3.0030E+03 3.0628E+03 2.9494E+03 3.4386E+03 3.0815E+03 3.2074E+03 3.0214E+03
Mean 2.9225E+03 3.0874E+03 2.9408E+03 2.9951E+03 2.9356E+03 3.2037E+03 2.9852E+03 3.0529E+03 2.9440E+03
STD 8.9244E+01 1.5314E+02 1.6343E+01 2.5200E+01 1.0845E+01 1.5156E+02 5.1328E+01 4.7265E+01 2.0903E+01
F48 Best 2.6000E+03 2.9106E+03 2.6017E+03 3.0031E+03 2.6011E+03 2.8169E+03 2.8312E+03 2.8000E+03 2.8000E+03
Worst 3.4469E+03 4.2858E+03 3.5599E+03 3.8060E+03 3.4721E+03 3.9795E+03 4.2815E+03 4.1824E+03 3.6953E+03
Mean 2.8864E+03 3.5170E+03 3.1444E+03 3.4725E+03 3.1031E+03 3.4286E+03 3.4665E+03 3.9384E+03 3.1861E+03
STD 1.9924E+02 4.2678E+02 2.0195E+02 2.5753E+02 2.5678E+02 3.6440E+02 3.5397E+02 2.4755E+02 2.6585E+02

F49 Best 3.0612E+03 3.0729E+03 3.0604E+03 3.0664E+03 3.0596E+03 3.1039E+03 3.0671E+03 3.1155E+03 3.0602E+03
Worst 3.1225E+03 3.2695E+03 3.1219E+03 3.0865E+03 3.1138E+03 3.2724E+03 3.2828E+03 3.4901E+03 3.1143E+03
Mean 3.0651E+03 3.1449E+03 3.0801E+03 3.0767E+03 3.0664E+03 3.1661E+03 3.1472E+03 3.1763E+03 3.0699E+03
STD 1.0981E+01 4.8955E+01 2.0089E+01 5.4465E+00 1.3362E+01 4.8720E+01 4.8178E+01 7.7835E+01 1.3416E+01
F50 Best 3.0000E+03 3.1653E+03 3.1701E+03 3.1939E+03 3.1581E+03 3.0000E+03 3.0282E+03 3.1877E+03 3.1581E+03
Worst 3.1875E+03 3.5042E+03 3.2387E+03 3.3201E+03 3.1876E+03 3.3445E+03 3.2486E+03 3.2087E+03 3.3710E+03
Mean 3.1750E+03 3.2492E+03 3.1993E+03 3.2363E+03 3.1817E+03 3.1883E+03 3.2077E+03 3.1982E+03 3.2421E+03
STD 4.7578E+01 7.0163E+01 1.4271E+01 2.3818E+01 1.2210E+01 6.0068E+01 3.9678E+01 4.9328E+00 6.5196E+01
F51 Best 3.1453E+03 3.2436E+03 3.1542E+03 3.1870E+03 3.1651E+03 3.1728E+03 3.1818E+03 3.3145E+03 3.1517E+03
Worst 3.2484E+03 3.6724E+03 3.4666E+03 3.4169E+03 3.4572E+03 3.5514E+03 3.7038E+03 3.8365E+03 3.4199E+03
Mean 3.1857E+03 3.3945E+03 3.2472E+03 3.2749E+03 3.2893E+03 3.3546E+03 3.3403E+03 3.6195E+03 3.2386E+03
STD 2.7915E+01 1.0051E+02 8.6985E+01 4.4268E+01 1.0095E+02 1.0977E+02 1.4503E+02 1.2203E+02 8.0159E+01
F52 Best 3.2591E+03 6.4262E+03 3.3284E+03 4.9323E+04 3.3828E+03 3.8504E+03 4.9357E+03 3.6514E+03 3.5699E+03
Worst 8.4655E+03 7.6566E+06 1.0042E+06 2.0791E+06 8.4455E+05 2.6406E+07 2.1189E+06 1.2562E+06 3.9759E+05
Mean 4.1048E+03 1.4795E+06 4.8519E+04 4.2092E+05 7.9609E+04 2.0140E+06 4.6977E+05 8.9517E+04 4.6786E+04
STD 1.2665E+03 2.2684E+06 1.8127E+05 4.3817E+05 2.1475E+05 5.7538E+06 5.7150E+05 2.2891E+05 8.2842E+04
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
F20 managed to find a far better solution than all optimizers. Overall, GTO performed well on the benchmark functions (F14–F23) and, in almost all cases, obtained higher‐quality solutions than the other optimizers.
On the basis of the results for the CEC2017 benchmark functions (F24–F52) reported in Table 7 and Figure 12, GTO achieved excellent performance on 24 of the 29 functions in this subset and outperformed the other optimization algorithms. A closer examination shows that GTO produced better solutions than the other optimizers on all hybrid and composition benchmark functions; on the shifted and rotated benchmark functions, however, it was superior on only five of the nine functions. The convergence curves in Figure 12 show that GTO converges quickly while continuing to search for better solutions and to escape local optima throughout the optimization process, which indicates a strong capability in both the exploration and exploitation phases. Overall, GTO proved to be a robust, high‐performance algorithm, delivering competitive results across all of these evaluations.

4.5 | Running time analysis

This subsection compares the runtime of GTO with that of eight other optimization algorithms on the 52 standard benchmark functions. For each optimizer, the runtime on every benchmark function was averaged over 30 independent runs, and the results are summarized in Table 8. According to Table 8, GTO requires somewhat more runtime than most of the other algorithms because both the exploration and the exploitation operations are applied to the entire population in every iteration, which adds processing overhead. Nevertheless, on several benchmark functions GTO runs faster than the PSO and GSA algorithms. In general, even though GTO's runtime is longer than that of some optimizers, this extra cost is justified by the stronger search capability and better solution quality it has shown across a wide range of problems.
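For clarity, the runtime measurement described above can be expressed as a small timing harness. The sketch below only illustrates the procedure (averaging wall-clock time over 30 independent runs); the optimizer callable, benchmark name, and settings are hypothetical placeholders rather than the authors' actual implementation.

```python
import time
import numpy as np

def average_runtime(run_optimizer, benchmark, n_runs=30):
    """Average wall-clock runtime (seconds) of one optimizer on one benchmark."""
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()        # high-resolution timer
        run_optimizer(benchmark)           # hypothetical call to the optimizer under test
        times.append(time.perf_counter() - start)
    return float(np.mean(times))

# Hypothetical usage:
# avg = average_runtime(lambda f: gto(f, pop_size=30, max_iters=500), "F1")
```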

4.6 | Significance of superiority analysis

The Wilcoxon rank‐sum test at the 5% significance level was used to detect meaningful differences between the proposed method and the other optimization techniques.76 Tables 9–14 report the resulting p‐values. The ‘+’ and ‘−’ signs in Tables 9–14 indicate a significant difference in favor of or against the proposed method, respectively, whereas the ‘=’ sign indicates that there is no significant difference between the algorithms or that the difference cannot be determined with the Wilcoxon rank‐sum test. For multiple comparisons, it must first be established whether the optimizers' results differ at all; if they do, a post hoc analysis is performed to identify which algorithms differ. For this reason, the nonparametric Friedman's test82 is also applied.
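As a minimal sketch of how such a pairwise comparison can be carried out, the snippet below applies SciPy's rank-sum test to two sets of 30 final objective values and assigns the ‘+’, ‘−’, or ‘=’ label used in Tables 9–14. The variable names and example data are assumptions for illustration only and do not reproduce the paper's results.

```python
import numpy as np
from scipy.stats import ranksums

def compare(gto_results, other_results, alpha=0.05):
    """Return the p-value and a '+', '-', or '=' label for one benchmark function."""
    _, p = ranksums(gto_results, other_results)
    if p >= alpha:                  # no detectable difference at the 5% level
        return p, "="
    # minimization: the algorithm with the smaller median objective value wins
    return p, "+" if np.median(gto_results) < np.median(other_results) else "-"

# Hypothetical example: 30 final fitness values per algorithm on one function
rng = np.random.default_rng(0)
gto_vals = rng.normal(1e-6, 1e-7, 30)
pso_vals = rng.normal(1e-3, 1e-4, 30)
print(compare(gto_vals, pso_vals))  # expected: a very small p-value and '+'
```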

FIGURE 12 Convergence curves of various types of functions for different numbers of iterations. GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO, particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm [Color figure can be viewed at wileyonlinelibrary.com]

The results of this test are summarized in Tables 16–21, which report the average ranking of the optimization algorithms over the benchmark functions for six series of evaluations.
According to the p‐values in Table 9, statistically significant differences can be seen in
almost all results. According to the p‐values in Table 10, GTO has better solutions in almost all
TABLE 8 Comparison of average running time results (seconds) over 30 runs


No. GTO TSA GWO SCA MVO PSO WOA GSA MFO
F1 0.2522 0.1126 0.1257 0.1279 0.2532 0.4791 0.0886 0.8066 0.1269
F2 0.2479 0.1225 0.1335 0.1302 0.2062 0.5438 0.1111 0.7949 0.1304
F3 0.8129 0.4778 0.4923 0.4914 0.6117 1.2792 0.5002 1.2706 0.4956
F4 0.2239 0.1083 0.1288 0.1171 0.2154 0.5650 0.0868 0.9281 0.1154
F5 0.2490 0.1349 0.1391 0.1353 0.2353 0.5808 0.1050 0.8116 0.1351
F6 0.2116 0.1216 0.1218 0.1200 0.2160 0.5753 0.0827 0.7799 0.1150
F7 0.3472 0.1808 0.2123 0.1882 0.2906 0.5882 0.1563 0.8550 0.1895
F8 0.2749 0.1369 0.1471 0.1459 0.1736 0.5709 0.1073 0.8055 0.1427
F9 0.2787 0.1229 0.1284 0.1274 0.2358 0.5479 0.0889 0.7898 0.1261
F10 0.2586 0.1294 0.1439 0.1502 0.2551 0.5864 0.0962 0.7997 0.1410
F11 0.3397 0.1404 0.1506 0.1551 0.2711 0.7612 0.1098 0.8197 0.1559
F12 0.6852 0.3332 0.3481 0.3570 0.4480 1.0099 0.3203 1.0135 0.3517
F13 0.7076 0.3319 0.3456 0.3502 0.4595 0.9970 0.3286 1.0109 0.3491
F14 1.3056 0.6637 0.6649 0.7001 0.9096 1.7131 0.7044 0.9289 0.6932
F15 0.2028 0.0720 0.0673 0.0830 0.1730 0.6384 0.0667 0.3603 0.0865
F16 0.1723 0.0479 0.0482 0.0630 0.0931 0.4319 0.0512 0.3075 0.0693
F17 0.1597 0.0414 0.0417 0.0597 0.0910 0.4197 0.0444 0.3037 0.0630
F18 0.1357 0.0399 0.0401 0.0574 0.0869 0.4277 0.0434 0.2998 0.0613
F19 0.2280 0.0943 0.0938 0.1289 0.1450 0.5446 0.1024 0.3762 0.1193
F20 0.2594 0.1008 0.1020 0.1247 0.1504 0.5383 0.1084 0.4261 0.1236
F21 0.3893 0.1738 0.1709 0.1822 0.2245 0.6627 0.1769 0.5769 0.1896
F22 0.4355 0.2245 0.2179 0.2392 0.2840 0.7584 0.2304 0.5340 0.2429
F23 0.5809 0.3684 0.2906 0.3009 0.3546 0.8733 0.3234 0.6067 0.3131
F24 0.2390 0.0893 0.1022 0.1083 0.1640 0.5696 0.0829 0.5098 0.1097
F25 0.2322 0.0876 0.0944 0.1040 0.1485 0.6318 0.0803 0.4713 0.1060
F26 0.2303 0.0853 0.0946 0.1039 0.1558 0.6815 0.0798 0.4704 0.1102
F27 0.2443 0.0919 0.1009 0.1109 0.1620 0.9098 0.0869 0.4787 0.1130
F28 0.2798 0.1133 0.1261 0.1316 0.1824 0.8433 0.1131 0.5013 0.1414
F29 0.2611 0.0941 0.1054 0.1118 0.1618 0.6034 0.1010 0.4839 0.1181
F30 0.2476 0.0928 0.1030 0.1119 0.1603 0.6080 0.0882 0.4785 0.1181
F31 0.2501 0.0945 0.1045 0.1121 0.1613 0.5836 0.1011 0.4792 0.1159
F32 0.2691 0.0975 0.1084 0.1164 0.1654 0.5573 0.0941 0.4851 0.1203
F33 0.2530 0.0896 0.0990 0.1083 0.1465 0.7033 0.0844 0.4789 0.1138
F34 0.2548 0.0916 0.1012 0.1098 0.1556 0.6443 0.0868 0.4768 0.1128

F35 0.2551 0.0922 0.1022 0.1093 0.1560 0.7267 0.0871 0.4826 0.1199
F36 0.2647 0.0982 0.1094 0.1137 0.1604 0.6125 0.0920 0.4868 0.1182
F37 0.2582 0.0886 0.0991 0.1054 0.1570 0.6042 0.0839 0.4755 0.1104
F38 0.2471 0.0922 0.1067 0.1101 0.1636 0.6436 0.0874 0.4785 0.1174
F39 0.2896 0.1109 0.1222 0.1319 0.1812 0.6622 0.1081 0.4994 0.1363
F40 0.2569 0.0935 0.1036 0.1115 0.1573 0.6295 0.0880 0.4819 0.1211
F41 0.4466 0.2086 0.2254 0.2424 0.2766 0.8117 0.2118 0.5973 0.2411
F42 0.2877 0.1133 0.1251 0.1377 0.1868 0.5900 0.1103 0.6792 0.1417
F43 0.3002 0.1155 0.1299 0.1404 0.1872 0.6646 0.1109 0.4998 0.1477
F44 0.3026 0.1302 0.1449 0.1491 0.2051 0.6738 0.1239 0.5169 0.1541
F45 0.3202 0.1305 0.1441 0.1549 0.2058 0.6838 0.1286 0.5948 0.1583
F46 0.4118 0.1354 0.1502 0.1591 0.2082 0.7103 0.1335 0.5225 0.1626
F47 0.6292 0.1240 0.1352 0.1526 0.2014 0.7202 0.1232 0.5244 0.1512
F48 0.7058 0.1428 0.1551 0.1667 0.3133 0.7127 0.1410 0.5310 0.1691
F49 0.7161 0.1487 0.1610 0.1739 0.3009 0.7327 0.1478 0.5343 0.1763
F50 0.6389 0.1397 0.1525 0.1611 0.2099 0.7625 0.1365 0.5300 0.1658
F51 0.3278 0.1355 0.1482 0.1638 0.2098 0.6633 0.1325 0.5225 0.1654
F52 0.5031 0.2310 0.2533 0.2655 0.3001 0.8735 0.2350 0.6208 0.2642
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO,
moth–flame optimization; MVO, multiverse optimizer; PSO, particle swarm optimization; SCA, sine–cosine algorithm; TSA,
tunicate swarm algorithm; WOA, whale optimization algorithm.

cases. According to the p‐values in Table 11, GTO performs significantly better than the other optimization algorithms. The statistical results in Table 12 likewise confirm that, in almost all cases, there is a significant difference between the results obtained by GTO and those of the other optimization algorithms. Similarly, the p‐values in Tables 13 and 14 confirm that GTO performs significantly better than the other algorithms in most cases. Finally, according to Table 15, GTO has a significant advantage over almost all algorithms on the benchmark functions (F1–F13) in every dimension, outperforms all of the optimization algorithms on the functions (F14–F23), and on the functions (F24–F52) performs well on almost all functions and differs significantly from all of the other algorithms.
Tables 16–21 present Friedman's test results. These results show that GTO ranks first in every series of evaluations, reaffirming that GTO produces high‐quality solutions and is statistically superior to all of the compared algorithms. Overall, the evaluations indicate that GTO is a robust algorithm with strong potential for future applications.
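The average rankings reported in Tables 16–21 can, in principle, be reproduced by ranking the algorithms on each benchmark function and averaging the ranks per algorithm, which is also the quantity underlying Friedman's test. The sketch below is a generic illustration with a made-up results matrix and does not reproduce the paper's numbers.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical mean-fitness matrix: rows = benchmark functions, columns = algorithms
# (illustrative values only; columns could stand for, e.g., GTO, TSA, GWO)
results = np.array([
    [1e-8, 3e-2, 5e-4],   # F1
    [2e-7, 1e-1, 2e-3],   # F2
    [0.0,  4e+1, 6e-1],   # F3
    [3e-9, 2e-2, 8e-4],   # F4
])

# Rank the algorithms on every function (rank 1 = best, i.e., smallest mean fitness)
ranks = np.apply_along_axis(rankdata, 1, results)
print("average ranks:", ranks.mean(axis=0))

# Friedman test across the per-function results (one sample per algorithm)
stat, p = friedmanchisquare(*results.T)
print(f"Friedman statistic = {stat:.3f}, p-value = {p:.4f}")
```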
TABLE 9 p‐Values of the Wilcoxon rank‐sum test with 5% significance for F1–F13 with 30 dimensions (p‐values ≥0.05 are shown in boldface)
Proposed method versus: TSA, GWO, SCA, MVO, PSO, WOA, GSA, MFO
No. p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R
F1 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 +
F2 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 +
F3 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 +
F4 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 +
F5 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 5.0723E‐10 + 3.0199E‐11 + 9.9186E‐11 + 3.0199E‐11 +
F6 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 9.8329E‐08 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 +
F7 3.0199E‐11 + 4.5043E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 8.9934E‐11 + 3.0199E‐11 + 3.0199E‐11 +
F8 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 +
F9 1.2118E‐12 + 1.1537E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.6080E‐01 = 1.2108E‐12 + 1.2118E‐12 +
F10 1.2118E‐12 + 1.0671E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 1.2954E‐08 + 1.2118E‐12 + 1.2118E‐12 +
F11 3.4507E‐07 + 5.5843E‐03 + 1.2118E‐12 + 1.2118E‐12 + 1.2118E‐12 + 3.3371E‐01 = 1.2118E‐12 + 1.2118E‐12 +
F12 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 5.5611E‐04 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 +
F13 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 + 8.0727E‐01 = 3.0199E‐11 + 3.0199E‐11 + 3.0199E‐11 +
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
TABLE 10 p‐Values of the Wilcoxon rank‐sum test with 5% significance for F1–F13 with 100 dimensions (p‐values ≥0.05 are shown in boldface)
Proposed method versus: TSA, GWO, SCA, MVO, PSO, WOA, GSA, MFO
No. p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R
F1 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 +
F2 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F3 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 +
F4 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F5 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F6 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F7 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 1.5465E−09 + 3.0199E−11 + 3.0199E−11 +
F8 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.4742E−10 + 3.0199E−11 + 3.0199E−11 +
F9 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 3.3371E−01 = 1.2118E−12 + 1.2118E−12 +
F10 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 2.8541E−07 + 1.2118E−12 + 1.2118E−12 +
F11 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + NaN = 1.2118E−12 + 1.2118E−12 +
F12 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F13 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
TABLE 11 p‐Values of the Wilcoxon rank‐sum test with 5% significance for F1–F13 with 500 dimensions (p‐values ≥0.05 are shown in boldface)
Proposed method versus: TSA, GWO, SCA, MVO, PSO, WOA, GSA, MFO
No. p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R



F1 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 +


F2 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F3 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 +
F4 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F5 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F6 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F7 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 5.0723E−10 + 3.0199E−11 + 3.0199E−11 +
F8 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F9 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + NaN = 1.2118E−12 + 1.2118E−12 +
F10 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 2.7516E−11 + 1.2118E−12 + 1.2118E−12 +
F11 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + NaN = 1.2118E−12 + 1.2118E−12 +
F12 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F13 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
TABLE 12 p‐Values of the Wilcoxon rank‐sum test with 5% significance for F1–F13 with 1000 dimensions (p‐values ≥0.05 are shown in boldface)
Proposed method versus: TSA, GWO, SCA, MVO, PSO, WOA, GSA, MFO
No. p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R
F1 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 +
F2 3.0199E−11 + 3.0199E−11 + 1.2118E−12 + 3.0199E−11 + 5.2190E−12 + 3.0199E−11 + 3.0199E−11 + 1.2118E−12 +
F3 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 +
F4 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F5 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F6 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F7 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.6459E−08 + 3.0199E−11 + 3.0199E−11 +
F8 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.3384E−11 + 3.0199E−11 + 3.0199E−11 +
F9 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + NaN = 1.2118E−12 + 1.2118E−12 +
F10 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2599E−08 + 1.2118E−12 + 1.2118E−12 +
F11 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 3.3371E−01 + 1.2118E−12 + 1.2118E−12 +
F12 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F13 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
TABLE 13 p‐Values of the Wilcoxon rank‐sum test with 5% significance for F14–F23 problems (p‐values ≥0.05 are shown in boldface)
Proposed method versus: TSA, GWO, SCA, MVO, PSO, WOA, GSA, MFO
No. p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R
F14 1.2108E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 6.0793E−10 + 1.2118E−12 + 4.5736E−12 + 1.2656E−05 +
F15 8.2074E−09 + 1.3841E−08 + 4.8297E−09 + 4.8297E−09 + 4.9425E−08 + 2.3162E−08 + 5.2637E−11 + 3.3687E−09 +
F16 1.7203E−12 + 1.7203E−12 + 1.7203E−12 + 1.7203E−12 + NaN = 1.7203E−12 + NaN = NaN =
F17 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + NaN = 1.2118E−12 + NaN = NaN =
F18 1.7546E−11 + 1.7546E−11 + 1.7546E−11 + 1.7546E−11 + 5.4258E−01 = 1.7546E−11 + 2.8432E−11 + 3.2643E−03 +
F19 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 + 1.1002E−02 + 1.2118E−12 + 1.2118E−12 + 1.2118E−12 +
F20 5.9250E−04 + 5.9250E−04 + 1.4059E−11 + 5.9250E−04 + 4.2227E−01 + 2.3333E−03 + 7.9933E−01 = 7.3279E−03 +
F21 1.3369E−11 + 1.3369E−11 + 1.3369E−11 + 1.3369E−11 + 2.0611E−08 + 1.3369E−11 + 3.0888E−09 + 7.0284E−05 +
F22 5.1436E−12 + 5.1436E−12 + 5.1436E−12 + 5.1436E−12 + 3.8501E−06 + 5.1436E−12 + 1.9080E−08 + 2.9894E−04 +
F23 1.4488E−11 + 1.4488E−11 + 1.4488E−11 + 1.4488E−11 + 4.3452E−03 + 1.4488E−11 + 9.1144E−09 + 2.6270E−01 =
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
T A B L E 14 p‐Values of the Wilcoxon rank‐sum test with 5% significance for F24–F52 problems (p‐values ≥0.05 are shown in boldface)
Proposed method versus: TSA, GWO, SCA, MVO, PSO, WOA, GSA, MFO (each comparison reports a p‐value and a result R)
No. p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R p‐values R
F24 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 4.5043E−11 4.2039E−01 = 3.0199E−11 + 3.0199E−11 + 4.1178E−06 +
F25 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 7.3891E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F26 1.4046E−07 + 9.1929E−06 + 9.1929E−06 + 7.9575E−04 8.9538E−03 + 1.9088E−08 + 5.8737E−04 ‐ 2.4940E−08 +
F27 4.9508E−11 + 3.1825E−01 = 3.1825E−01 + 1.3821E−02 1.4575E−10 + 2.3607E−10 + 4.9447E−11 + 4.9719E−04 +
F28 3.0199E−11 + 2.3243E−02 + 2.3243E−02 + 5.4620E−06 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 1.2597E−01 =
F29 3.0199E−11 + 1.0869E−01 = 1.0869E−01 = 3.6439E−02 1.0105E−08 + 4.0772E−11 + 8.0727E−01 = 1.5638E−02 +
F30 3.8180E−10 + 3.8709E−01 = 3.8709E−01 = 7.8446E−01 4.6836E−08 + 2.5711E−07 + 4.1973E−10 + 2.5581E−02 +
F31 1.4617E−10 + 7.2434E−02 = 7.2434E−02 = 5.5536E−02 5.4840E−11 + 3.6829E−11 + 4.0696E−11 + 1.9947E−05 +
F32 8.1465E−05 + 3.1830E−01 = 3.1830E−01 + 4.2039E−01 1.8567E−09 + 1.3594E−07 + 1.6285E−02 + 2.4157E−02 +
F33 3.6897E−11 + 1.6132E−10 + 1.6132E−10 + 6.5183E−09 2.3715E−10 + 3.0199E−11 + 3.0199E−11 + 8.6634E−05 +
F34 4.5043E−11 + 1.6980E−08 + 1.6980E−08 + 3.0199E−11 1.2235E−01 = 3.0199E−11 + 3.0199E−11 + 1.1747E−04 +
F35 3.6897E−11 + 1.2057E−10 + 1.2057E−10 + 7.0430E−07 1.4733E−07 + 7.3891E−11 + 4.5043E−11 + 2.0152E−08 +
F36 1.3111E−08 + 1.1737E−09 + 1.1737E−09 + 5.4620E−06 1.8682E−05 + 8.1527E−11 + 3.0199E−11 + 1.8567E−09 +
F37 3.3384E−11 + 3.0199E−11 + 3.0199E−11 + 4.4205E−06 2.2273E−09 + 4.0772E−11 + 3.0199E−11 + 3.0199E−11 +
F38 3.3520E−08 + 9.3341E−02 = 9.3341E−02 = 5.8737E−04 5.5611E−04 + 2.4913E−06 + 1.9568E−10 + 1.1738E−03 +
F39 3.1967E−09 + 8.5641E−04 + 8.5641E−04 + 2.0023E−06 1.8682E−05 + 8.9934E−11 + 8.8910E−10 + 5.8737E−04 +
F40 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 4.0772E−11 + 3.0199E−11 + 3.0199E−11 + 3.0199E−11 +
F41 3.0199E−11 + 1.4643E−10 + 1.4643E−10 + 9.2603E−09 1.0666E−07 + 3.0199E−11 + 3.0199E−11 + 3.3384E−11 +
F42 1.7769E−10 + 1.4067E−04 + 1.4067E−04 + 1.0188E−05 4.5043E−11 + 1.3289E−10 + 3.0199E−11 + 7.2208E−06 +
F43 3.0199E−11 + 3.0199E−11 + 3.0199E−11 + 7.3891E−11 3.6897E−11 + 3.0199E−11 + 3.0199E−11 + 4.5043E−11 +
F44 7.3891E−11 + 4.6371E−03 + 4.6371E−03 + 2.2257E−01 5.4941E−11 + 1.6947E−09 + 6.4142E−01 = 6.4142E−01 =
F45 3.0199E−11 + 6.4142E−01 = 6.4142E−01 = 1.1228E−02 6.0658E−11 + 1.9568E−10 + 3.0199E−11 + 1.1058E−04 +
F46 1.8567E−09 + 9.0307E−04 ‐ 9.0307E−04 + 7.5059E−01 1.3111E−08 + 5.0723E−10 + 6.7220E−10 + 1.6798E−03 +
F47 3.3520E−08 + 6.7350E−01 = 6.7350E−01 = 9.6263E−02 4.6159E−10 + 2.0058E−04 + 3.0199E−11 + 9.1171E−01 =
F48 1.0702E−09 + 5.6073E−05 + 5.6073E−05 + 1.5178E−03 2.0283E−07 + 6.0104E−08 + 3.1589E−10 + 7.5991E−07 +
F49 8.9719E−11 + 5.2603E−04 + 5.2603E−04 + 2.7716E−01 4.9630E−11 + 5.4806E−11 + 4.0671E−11 + 2.0661E−02 +
F50 4.6159E−10 + 4.6159E−10 + 4.6159E−10 + 4.7445E−06 1.2023E−08 + 5.9673E−09 + 3.0199E−11 + 6.0459E−07 +
F51 3.3384E−11 + 5.3221E−03 + 5.3221E−03 + 7.0430E−07 1.8500E−08 + 5.9673E−09 + 3.0199E−11 + 5.0842E−03 +
F52 6.6915E−11 + 1.8724E−07 + 1.8724E−07 + 2.4374E−09 2.1532E−10 + 5.4907E−11 + 6.1177E−10 + 1.2864E−09 +
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm.
T A B L E 15 Statistical results of WSRT obtained by GTO
Function type; GTO versus TSA, GWO, SCA, MVO, PSO, WOA, GSA, MFO (each reported as +/=/−)
Functions F1–F13 (D = 30): 13/0/0 13/0/0 13/0/0 13/0/0 12/1/0 11/2/0 13/0/0 13/0/0
Functions F1–F13 (D = 100): 13/0/0 13/0/0 13/0/0 13/0/0 13/0/0 11/2/0 13/0/0 13/0/0
Functions F1–F13 (D = 500): 13/0/0 13/0/0 13/0/0 13/0/0 13/0/0 11/2/0 13/0/0 13/0/0
Functions F1–F13 (D = 1000): 13/0/0 13/0/0 13/0/0 13/0/0 13/0/0 12/1/0 13/0/0 13/0/0
Functions F14–F23: 10/0/0 10/0/0 10/0/0 10/0/0 7/3/0 10/0/0 7/3/0 7/3/0
Functions F24–F52: 29/0/0 20/8/1 23/6/0 21/7/1 27/2/0 29/0/0 26/2/1 26/3/0
Total: 91/0/0 82/8/1 85/6/0 83/7/1 85/6/0 84/7/0 85/5/1 85/6/0
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; MVO, multiverse optimizer; PSO,
particle swarm optimization; SCA, sine–cosine algorithm; TSA, tunicate swarm algorithm; WOA, whale optimization algorithm; WSRT, Wilcoxon rank‐sum test.
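The +/=/− entries in Tables 12–15 can be reproduced from raw per-run results with a standard rank-sum test. The sketch below is illustrative only: the function name and the 30-run sample data are hypothetical, SciPy's ranksums is assumed as the test implementation, and the sign convention (better mean error means "+") is our reading of the 5% significance rule used in this paper.

```python
import numpy as np
from scipy.stats import ranksums

def wilcoxon_mark(gto_runs, rival_runs, alpha=0.05):
    """Return '+', '=', or '-' for GTO versus a rival on one benchmark.

    gto_runs, rival_runs: best-error values from independent runs
    (30 runs per algorithm in this paper's setup).
    """
    _, p_value = ranksums(gto_runs, rival_runs)  # two-sided rank-sum test
    if p_value >= alpha:                          # no significant difference
        return "="
    # Significant difference: sign decided by which mean error is lower.
    return "+" if np.mean(gto_runs) < np.mean(rival_runs) else "-"

# Hypothetical example: 30 final errors per algorithm on one function.
rng = np.random.default_rng(0)
gto = rng.lognormal(mean=-25, sigma=1.0, size=30)   # near-zero errors
pso = rng.lognormal(mean=-3, sigma=1.0, size=30)    # larger errors
print(wilcoxon_mark(gto, pso))                      # expected '+'
```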
T A B L E 16 Results of Friedman test of iterative version on (F1–F13) with 30 dimensions


Evaluation of F1–F13 with 30 dimensions
GTO TSA GWO SCA MVO PSO WOA GSA MFO
AVG 1.5859 5.9487 3.9705 8.3410 6.5513 5.8462 4.2141 6.5308 8.6410
Rank 1 6 3 9 8 5 4 7 10
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO,
moth–flame optimization; MVO, multiverse optimizer; PSO, particle swarm optimization; SCA, sine–cosine algorithm; TSA,
tunicate swarm algorithm; WOA, whale optimization algorithm.
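The average ranks (AVG) reported in Tables 16–21 follow a Friedman-style ranking: on each benchmark the nine algorithms are ranked by their result, and the ranks are averaged over all functions. A minimal sketch of this computation is shown below; the error matrix is hypothetical, and the use of scipy.stats.rankdata is our assumption about one convenient way to assign (tie-averaged) ranks.

```python
import numpy as np
from scipy.stats import rankdata

algorithms = ["GTO", "TSA", "GWO", "SCA", "MVO", "PSO", "WOA", "GSA", "MFO"]

# Hypothetical mean-error matrix: one row per benchmark function,
# one column per algorithm (lower is better).
errors = np.abs(np.random.default_rng(1).normal(size=(13, len(algorithms))))
errors[:, 0] *= 1e-6                     # make the first column clearly best

# Rank the algorithms on every function (1 = best), then average per column.
ranks_per_function = np.apply_along_axis(rankdata, 1, errors)
avg_rank = ranks_per_function.mean(axis=0)

for name, r in sorted(zip(algorithms, avg_rank), key=lambda t: t[1]):
    print(f"{name:4s} average rank = {r:.4f}")
```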

T A B L E 17 Results of Friedman test of iterative version on (F1–F13) with 100 dimensions


Evaluation of F1–F13 with 100 dimensions
GTO TSA GWO SCA MVO PSO WOA GSA MFO
AVG 1.4718 5.5205 3.7333 8.4103 6.9974 5.8846 3.5987 7.3641 8.9513
Rank 1 5 4 9 7 6 3 8 10
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO,
moth–flame optimization; MVO, multiverse optimizer; PSO, particle swarm optimization; SCA, sine–cosine algorithm; TSA,
tunicate swarm algorithm; WOA, whale optimization algorithm.

T A B L E 18 Results of Friedman test of iterative version on (F1–F13) with 500 dimensions


Evaluation of F1–F13 with 500 dimensions
GTO TSA GWO SCA MVO PSO WOA GSA MFO
AVG 1.4333 5.7615 3.8333 8.1949 7.5821 5.9769 3.1859 6.8795 9.0821
Rank 1 5 4 9 8 6 3 7 10
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO,
moth–flame optimization; MVO, multiverse optimizer; PSO, particle swarm optimization; SCA, sine–cosine algorithm; TSA,
tunicate swarm algorithm; WOA, whale optimization algorithm.

T A B L E 19 Results of Friedman test of iterative version on (F1–F13) with 1000 dimensions


Evaluation on F1–F13 with 1000 dimensions
GTO TSA GWO SCA MVO PSO WOA GSA MFO
AVG 1.4423 5.8615 3.9744 8.2269 7.6744 5.9436 3.1103 6.6974 9.1115
Rank 1 5 4 9 8 6 3 7 10
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO,
moth–flame optimization; MVO, multiverse optimizer; PSO, particle swarm optimization; SCA, sine–cosine algorithm; TSA,
tunicate swarm algorithm; WOA, whale optimization algorithm.

5 | ENGINEERING OPTIMIZATION PROBLEMS
The use of P‐metaheuristics (population‐based metaheuristics) to solve engineering problems is an important research area. In this section, the performance of GTO in solving engineering problems is therefore tested on seven common low‐ and high‐dimensional engineering problems from the CEC 2011 Real World Optimization Problems suite.83 The GTO results are compared with several other optimizers, listed after Tables 20 and 21.
T A B L E 20 Results of Friedman test of iterative version on (F14–F23)
Evaluation of F14–F23
GTO TSA GWO SCA MVO PSO WOA GSA MFO
AVG 1.9200 7.0667 5.5300 8.1233 5.5767 3.7233 6.0367 4.5133 4.0700
Rank 1 8 5 9 6 2 7 4 3
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO,
moth–flame optimization; MVO, multiverse optimizer; PSO, particle swarm optimization; SCA, sine–cosine algorithm; TSA,
tunicate swarm algorithm; WOA, whale optimization algorithm.

T A B L E 21 Results of Friedman test of iterative version on (F24–F52)


Evaluation of F24–F52
GTO TSA GWO SCA MVO PSO WOA GSA MFO
AVG 2.0567 6.7689 3.9067 6.3078 3.6056 5.4289 6.3978 6.4700 4.3489
Rank 1 9 3 6 2 5 7 8 4
Abbreviations: GSA, gravitational search algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO,
moth–flame optimization; MVO, multiverse optimizer; PSO, particle swarm optimization; SCA, sine–cosine algorithm; TSA,
tunicate swarm algorithm; WOA, whale optimization algorithm.

These comparison optimizers are GWO,3 MFO,19 FFA,2 TSA,74 PSO,28 and EO.84 The evaluation is based on 30 independent runs with a population size of 30 and a maximum of 500 iterations, and the best solution obtained by each optimization algorithm is used for the comparison.

5.1 | Parameter estimation for Frequency‐Modulated (FM) sound waves

FM sound wave synthesis is one of the most critical components of modern music systems and plays an important role. The task has six dimensions: the FM synthesizer parameters, encoded as the vector X = {a1, ω1, a2, ω2, a3, ω3}, are passed to the equation below to generate a sound wave. This is a highly complex multimodal problem with strong epistasis, and its lowest (optimal) value is f(X⃗sol) = 0. The problem is mathematically modeled as follows:
y(t) = a1 · sin(ω1 · t · θ + a2 · sin(ω2 · t · θ + a3 · sin(ω3 · t · θ))),
y0(t) = (1.0) · sin((5.0) · t · θ − (1.5) · sin((4.8) · t · θ + (2.0) · sin((4.9) · t · θ))).
In the above equations, θ = 2π/100 and the parameters are defined in the range [−6.4, 6.35]. The cost function is the sum of squared errors between the estimated wave and the target wave:
f(X⃗) = ∑_{t=0}^{100} ( y(t) − y0(t) )².
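As an illustration, this objective can be evaluated directly in code. The sketch below is a minimal, self-contained version of the FM cost function under the stated assumptions (θ = 2π/100, the fixed target parameters, and summation over t = 0, …, 100); the function names fm_wave and fm_cost are ours and not part of the CEC 2011 reference implementation.

```python
import numpy as np

THETA = 2.0 * np.pi / 100.0
T = np.arange(0, 101)  # t = 0, 1, ..., 100


def fm_wave(a1, w1, a2, w2, a3, w3, t=T, theta=THETA):
    """Nested-sine FM wave y(t) for one parameter vector."""
    return a1 * np.sin(w1 * t * theta
                       + a2 * np.sin(w2 * t * theta
                                     + a3 * np.sin(w3 * t * theta)))


# Target wave y0(t) with the fixed parameters given in the text.
Y0 = fm_wave(1.0, 5.0, -1.5, 4.8, 2.0, 4.9)


def fm_cost(x):
    """Sum of squared errors between the estimated wave and the target wave.

    x = [a1, w1, a2, w2, a3, w3], each component in [-6.4, 6.35].
    """
    return float(np.sum((fm_wave(*x) - Y0) ** 2))


# The known optimum (the target parameters themselves) gives cost 0.
print(fm_cost([1.0, 5.0, -1.5, 4.8, 2.0, 4.9]))  # -> 0.0
```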
According to Table 22, GTO and EO perform similarly, with both finding high‐quality solutions; compared with the other optimizers, GTO performs very well.
T A B L E 22 Comparison of results for parameter estimation for frequency‐modulated (FM) sound waves
Algorithms MFO PSO GWO TSA FFA EO GTO
x(1) 0.6141 −0.5886 −0.6654 0.3415 −0.5627 −1.0000 −1.0000
x(2) 0.0432 5.0145 −0.1684 4.7881 0.0525 −5.0000 −5.0000
x(3) −4.3251 −3.2779 1.5173 1.4309 −3.4797 −1.5000 1.5000
x(4) 4.7923 −4.9324 −0.1287 0.1158 4.8930 −4.8000 4.8000
x(5) 0.8339 −0.8562 −4.1335 0.0975 1.1491 −2.0000 2.0000
x(6) 0.1278 −0.1476 −4.8997 0.5480 −4.8345 4.9000 4.9000
Minimum cost 11.8969 13.1807 8.4725 25.1052 17.4291 8.4450E−12 2.2811E−27
Abbreviations: EO, equilibrium optimizer; FFA, Farmland Fertility Algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray
Wolf Optimization; MFO, moth–flame optimization; PSO, particle swarm optimization; TSA, tunicate swarm algorithm.

5.2 | Circular antenna array design problem

Circular antenna arrays are used in diverse radar, commercial satellite, mobile, and sonar communication systems.85–87 This problem is highly complex and has 12 dimensions. The array factor for the circular array is written as
AF(φ) = ∑_{n=1}^{N} I_n · exp[ jkr( cos(φ − φ_ang^n) − cos(φ0 − φ_ang^n) ) + β_n ],
where φ_ang^n = 2π(n − 1)/N.
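A minimal numerical sketch of this array factor is given below. It is illustrative only: the full CEC 2011 objective for this design problem also scores directivity and sidelobe behaviour, which is not reproduced here; the phase term β_n is treated as part of the complex exponent (multiplied by j), which is our reading of the usual convention; and the values of kr, φ0, and the example excitation are assumptions.

```python
import numpy as np

def array_factor(phi, currents, phases, kr, phi0):
    """|AF(phi)| for an N-element circular array (currents I_n, phases beta_n)."""
    n = np.arange(1, len(currents) + 1)
    phi_ang = 2.0 * np.pi * (n - 1) / len(currents)   # angular element positions
    exponent = 1j * (kr * (np.cos(phi - phi_ang) - np.cos(phi0 - phi_ang))
                     + phases)
    return abs(np.sum(currents * np.exp(exponent)))

# Illustrative 12-element example: uniform currents, zero phases, beam at phi = 0.
I, beta = np.ones(12), np.zeros(12)
for deg in (0, 30, 90):
    print(deg, round(array_factor(np.deg2rad(deg), I, beta, kr=np.pi, phi0=0.0), 3))
```

At the steering angle the terms add coherently, so the uniform example above returns |AF(0)| = 12, the number of elements.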

Table 23 reports the results of GTO and the other optimization algorithms on the circular antenna array design problem. According to these results, GTO finds high‐quality solutions to this problem and performs outstandingly compared with the other algorithms.

5.3 | Spread spectrum radar polyphase code design problem

In designing radar systems that involve pulse compression, careful attention must be paid to the choice of waveform. Many radar pulse modulation methods make pulse compression possible, and polyphase codes have attracted particular attention because of their unique features, compatibility with digital processing techniques, and easy implementation. In Reference [88], a new polyphase pulse compression code synthesis method is provided. The problem is modeled as a min–max nonlinear nonconvex optimization problem in a continuous search space with numerous local optima; the goal is to minimize the largest among the samples of the so‐called autocorrelation function. It is an NP‐hard problem with 20 decision variables and can be expressed as
min_{x∈X} f(x) = max{ φ1(x), …, φ_{2m}(x) },
X = { (x1, …, xn) ∈ R^n | 0 ≤ xj ≤ 2π, j = 1, …, n },
where m = 2n − 1 and the functions φi(x) are defined after Table 23.
T A B L E 23 Comparison of results for the circular antenna array design problem
Algorithms MFO PSO GWO TSA EO FFA GTO
x(1) 0.9545 0.8036 0.9866 0.6173 1.0000 1.0000 0.7733
x(2) 0.3995 0.5723 0.4619 0.2384 0.8062 0.6024 0.4872
x(3) 0.3198 0.2002 0.3347 0.2417 0.2000 0.2000 0.2708
x(4) 0.2511 0.2189 0.2036 0.2000 0.2068 0.3067 0.2000
x(5) 0.2018 0.2001 0.2643 0.2602 0.2000 0.2000 0.3088
x(6) 0.8685 0.4562 0.8556 0.2275 0.7701 0.7403 0.3453
x(7) −26.5547 165.1796 −27.8905 180.0000 162.0306 164.3265 −28.7507
x(8) 35.4189 180.0000 38.0731 −180.0000 −166.5521 −175.2702 21.3763
x(9) −75.3217 166.0494 −79.5042 −180.0000 179.6123 −180.0000 −98.3224
x(10) −44.5045 −179.7188 −37.4055 −180.0000 −179.9335 180.0000 26.6935
x(11) 88.7929 180.0000 79.0364 180.0000 179.9995 −164.5573 89.9415
x(12) −13.1863 175.4279 −12.8653 180.0000 165.0296 −180.0000 −21.0545
Minimum cost −17.5798 −12.1064 −17.7263 −10.3666 −12.9313 −12.3261 −19.6726
Abbreviations: EO, equilibrium optimizer; FFA, Farmland Fertility Algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray
Wolf Optimization; MFO, moth–flame optimization; PSO, particle swarm optimization; TSA, tunicate swarm algorithm.

φ_{2i−1}(x) = ∑_{j=i}^{n} cos( ∑_{k=|2i−j−1|+1}^{j} x_k ),  i = 1, …, n,
φ_{2i}(x) = 0.5 + ∑_{j=i+1}^{n} cos( ∑_{k=|2i−j|+1}^{j} x_k ),  i = 1, …, n − 1,
φ_{m+i}(x) = −φ_i(x),  i = 1, …, m.
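A direct transcription of this min–max objective is sketched below. It assumes the absolute-value form of the inner summation limits used in the CEC 2011 problem definition (|2i − j − 1| + 1 and |2i − j| + 1), and the function name and the random test vector are ours.

```python
import math
import random

def polyphase_objective(x):
    """Min-max objective f(x) = max{phi_1(x), ..., phi_2m(x)} for a code of length n."""
    n = len(x)
    phi = []
    # phi_{2i-1}(x), i = 1..n
    for i in range(1, n + 1):
        s = 0.0
        for j in range(i, n + 1):
            lo = abs(2 * i - j - 1) + 1
            s += math.cos(sum(x[k - 1] for k in range(lo, j + 1)))
        phi.append(s)
    # phi_{2i}(x), i = 1..n-1
    for i in range(1, n):
        s = 0.5
        for j in range(i + 1, n + 1):
            lo = abs(2 * i - j) + 1
            s += math.cos(sum(x[k - 1] for k in range(lo, j + 1)))
        phi.append(s)
    # phi_{m+i}(x) = -phi_i(x), i = 1..m, with m = 2n - 1
    return max(phi + [-p for p in phi])

# Illustrative evaluation for a random 20-dimensional code (x_j in [0, 2*pi]).
random.seed(0)
print(polyphase_objective([random.uniform(0.0, 2.0 * math.pi) for _ in range(20)]))
```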

According to the experimental findings reported in Table 24, GTO performs well on this problem and outperforms the other optimizers; even as the problem dimensions increase, it still obtains solutions of reasonable quality.

5.4 | Cassini 2: Spacecraft trajectory optimization problem

Space mission design is an excellent example of an engineering problem that can be solved using global optimization algorithms. The Multiple Gravity Assist (MGA) problem is a finite‐dimensional mathematical optimization problem with nonlinear constraints, and it is used to find the best possible trajectory for interplanetary space voyages. The constraints required to find the best trajectory can also be handled with the MGA with one deep space maneuver (MGA‐1DSM) formulation, and large‐scale trajectory optimization problems can be solved with this technique; the details are fully described in References [89–92].
T A B L E 24 Comparison of results for the spread spectrum radar polyphase code design problem
Algorithms MFO PSO GWO TSA FFA EO GTO
x(1) 0.0044 4.3971 5.4869 1.2869 6.2667 4.3585 6.1488
x(2) 4.2016 1.4516 5.3875 5.8586 6.2107 0.7205 5.0841
x(3) 1.7748 4.6356 4.2862 3.2758 2.7031 0.8324 1.6206
x(4) 6.2832 4.2592 2.7940 2.6828 3.6306 6.0732 4.2628
x(5) 1.4021 6.2832 1.5939 2.2665 2.1111 2.0552 5.0733
x(6) 1.7472 4.4308 4.7279 0.3632 6.2527 0.2552 5.2813
x(7) 1.8514 2.2753 5.5036 2.1706 1.2884 2.5698 3.6056
x(8) 3.2907 2.9177 3.3613 3.7991 3.3188 2.2662 5.1049
x(9) 4.0260 3.7852 2.2454 2.8526 4.1096 3.1314 5.1965
x(10) 2.4005 2.9126 5.4152 5.8588 4.1900 2.8781 5.4175
x(11) 0.0000 0.4020 4.6036 0.4441 4.5901 1.5814 2.9723
x(12) 1.5084 2.6156 5.1030 1.4573 4.7653 5.2234 2.2919
x(13) 1.2992 4.5284 6.1617 1.2557 2.3749 4.1046 2.5131
x(14) 0.9298 3.0150 0.5646 1.8607 2.5506 5.3812 2.4686
x(15) 5.6946 6.0450 0.4183 6.1467 0.2602 4.0543 2.6674
x(16) 1.3546 4.0678 3.9228 0.8792 1.1331 3.3622 1.5515
x(17) 2.6873 3.0968 4.6189 0.5678 1.1350 4.7883 4.2706
x(18) 3.5593 5.1608 2.6261 2.4740 3.2753 5.9641 5.7299
x(19) 3.2745 5.2118 2.9798 2.3281 2.0097 5.1956 5.3923
x(20) 5.0690 3.3131 5.9047 3.5174 4.5150 0.0395 5.4838
Minimum cost 1.1455 1.1367 1.0128 1.5923 1.4710 0.9992 0.6971
Abbreviations: EO, equilibrium optimizer; FFA, Farmland Fertility Algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray
Wolf Optimization; MFO, moth–flame optimization; PSO, particle swarm optimization; TSA, tunicate swarm algorithm.

In the Cassini 2 problem, the goal is to find the best possible trajectory for a space voyage to Saturn using deep space maneuvers. This problem is highly complex and has 22 decision variables.
The results of GTO and the other comparable optimizers are shown in Table 25. According to Table 25, GTO is able to find an excellent trajectory for the Cassini 2 problem, and its solution is significantly better than those of the other optimizers.

5.5 | Messenger: spacecraft trajectory optimization problem

Messenger is another space mission design problem, in which the MGA‐1DSM formulation is used to plan a voyage to Mercury. Given the order of the planets along this problem's trajectory, it is complex and has 26 decision variables.
T A B L E 25 Comparison of results for Cassini 2: spacecraft trajectory optimization problem
Algorithms MFO PSO GWO TSA FFA EO GTO
Tt0 −209.4633 −203.9965 −39.2299 −432.3912 −728.9201 −572.5230 −704.6948
Vinf 3.2830 3.0740 4.7766 3.0000 4.9911 3.0057 3.0000
u 0.4244 0.4622 0.8265 0.0032 0.3588 0.0002 0.4875
v 0.7853 0.7687 0.2854 0.1203 0.2695 0.7075 0.4047
T1 131.3982 158.6309 366.9611 247.3835 137.8282 368.8591 166.8063
T2 461.8716 336.5469 111.8384 453.4750 449.4077 113.2429 394.8529
T3 211.4500 254.5485 177.6345 100.8880 132.7894 288.4974 298.7698
T4 508.1599 573.5396 517.0413 709.2399 691.1092 525.3567 584.0526
T5 1891.0147 2199.9056 2199.2514 2069.9806 2088.2445 2190.2062 1989.4309
eta1 0.2226 0.0108 0.3645 0.4596 0.0100 0.3527 0.0262
eta2 0.5635 0.6326 0.0918 0.0796 0.0100 0.0100 0.0103
eta3 0.0831 0.3883 0.4794 0.0157 0.0100 0.0100 0.0435
eta4 0.0709 0.1366 0.0101 0.3100 0.1989 0.0100 0.1007
eta5 0.0100 0.0107 0.1984 0.0258 0.5660 0.6405 0.0100
r_p1 2.4774 2.3331 1.0503 3.7745 1.9316 1.3821 1.4122
r_p2 3.0812 5.9848 1.2747 1.8825 1.4553 1.4523 4.5920
r_p3 1.1500 1.1500 1.1500 6.1972 2.3022 1.1893 1.1500
r_p4 71.4332 175.6207 4.4314 289.3773 68.2538 286.7064 290.9606
b_incl1 1.2980 0.5502 2.4412 −1.9077 −1.7969 1.4596 −1.3009
b_incl2 −0.7445 0.1282 −0.4835 2.1423 1.0122 −1.8030 1.6104
b_incl3 −1.5539 −2.4487 −1.6798 1.4694 1.7192 −1.6927 −1.8006
b_incl4 1.4787 1.7738 −1.4784 −1.2725 −1.5228 0.8913 −1.3151
Minimum cost 19.4199 21.7884 19.4901 24.5091 20.7313 17.2389 14.6652
Abbreviations: EO, equilibrium optimizer; FFA, Farmland Fertility Algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray
Wolf Optimization; MFO, moth–flame optimization; PSO, particle swarm optimization; TSA, tunicate swarm algorithm.

The comparison of GTO with the other optimization algorithms on the Messenger spacecraft trajectory optimization problem is illustrated in Table 26. GTO finds a better solution than the other optimizers, and its performance on these space voyage problems indicates that GTO can solve such problems well.

5.6 | Lennard–Jones (LJ) potential problem
The LJ potential problem minimizes the molecular potential energy of a pure LJ atomic cluster.93,94 It is a multimodal optimization problem93 with 30 decision variables. The atom positions are
p⃗i = (xi, yi, zi),  i = 1, …, N,
and the total potential energy of the cluster is given as follows:


V_N(p) = ∑_{i=1}^{N−1} ∑_{j=i+1}^{N} ( r_ij^(−12) − 2 · r_ij^(−6) ),
where r_ij = ‖p⃗i − p⃗j‖₂, with gradient
∇_j V_N(p) = −12 ∑_{i=1, i≠j}^{N} ( r_ij^(−14) − r_ij^(−8) ) ( p⃗j − p⃗i ),  j = 1, …, N.
The first variable, which belongs to the second atom, is bounded as x1 ∈ [0, 4]; the second and third variables satisfy x2 ∈ [0, 4] and x3 ∈ [0, π]. The coordinates xi of any other atom are taken to be bounded in the range
[ −4 − (1/4)·⌊(i − 4)/3⌋ , 4 + (1/4)·⌊(i − 4)/3⌋ ].
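The potential energy above can be evaluated with a few lines of code. The sketch below is a minimal, self-contained version for a flattened coordinate vector (x1, y1, z1, x2, …); the function name and the test configuration are illustrative, and no gradient or bound handling is included. For the 30-variable instance used here, the vector would describe 10 atoms.

```python
import numpy as np

def lj_potential(coords):
    """Lennard-Jones potential V_N(p) for a cluster of N atoms.

    coords: flat array of length 3*N laid out as (x1, y1, z1, x2, y2, z2, ...).
    """
    p = np.asarray(coords, dtype=float).reshape(-1, 3)
    n = p.shape[0]
    energy = 0.0
    for i in range(n - 1):
        # Pairwise distances r_ij from atom i to all later atoms j > i.
        r = np.linalg.norm(p[i + 1:] - p[i], axis=1)
        energy += np.sum(r ** -12 - 2.0 * r ** -6)
    return energy

# Two atoms at the equilibrium distance r = 1 give the minimum pair energy -1.
print(lj_potential([0.0, 0.0, 0.0, 1.0, 0.0, 0.0]))  # -> -1.0
```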
The results of GTO and the other comparable optimization algorithms on the LJ potential problem are demonstrated in Table 27. They show that GTO performs better than the other optimizers and provides a markedly better solution; it also continues to perform well as the number of decision variables increases.
5.7 | Static economic load dispatch (ELD) problem (instance 4)
The static ELD problem minimizes the fuel cost of the generating units over a given period. Its constraints arise from the generators' operating limits, the ramp rate limits, and the prohibited operating zones, and the instance used here has 40 decision variables. The problem can be modeled with either smooth or nonsmooth cost functions, as follows.
Objective function: The production cost objective can be written as
Minimize: F = ∑_{i=1}^{NG} f_i(P_i).
The cost function for a unit with the valve‐point loading effect can be described as
f_i(P_i) = a_i·P_i² + b_i·P_i + c_i + | e_i · sin( f_i · (P_i^min − P_i) ) |.
This objective is subject to several constraints, including the power balance (demand) constraint, the ramp rate limits, and the prohibited operating zones.
Power balance (demand) constraint: The total system output must balance the total system load (PD) plus the transmission losses (PL); the corresponding equations are given after Table 26.
T A B L E 26 Comparison of results for Messenger: Spacecraft Trajectory Optimization Problem.
Algorithms MFO PSO GWO TSA FFA EO GTO
t0 2110.2178 2028.0783 2054.0056 2041.8393 2140.2196 2119.5171 2121.2328
Vinf 2.5000 2.9441 3.0905 2.7982 3.0454 4.0500 2.8345
u 0.4146 0.2958 0.2655 0.2669 0.4382 0.7054 0.5514
v 0.6156 0.5843 0.3372 0.8695 0.1656 0.1104 0.4439
T1 287.1243 303.5172 254.6830 263.0881 220.9636 244.3515 181.0357
T2 308.5598 263.2428 272.2233 254.2745 118.4991 234.7506 318.0092
T3 270.1088 243.7431 247.5046 263.5861 328.6136 221.0144 354.1881
T4 263.8982 257.8017 255.3651 263.9089 253.2550 265.2100 181.2484
T5 263.9142 265.1121 267.7502 263.9660 269.9542 264.0798 348.8881
T6 265.4221 266.0375 265.5928 264.8469 262.2830 264.2948 174.2908
eta1 0.4636 0.4248 0.4110 0.3621 0.4329 0.2957 0.4517
eta2 0.4730 0.4114 0.2368 0.4103 0.2402 0.2757 0.4682
eta3 0.5260 0.4655 0.3955 0.4005 0.6469 0.0100 0.7889
eta4 0.0100 0.0342 0.3971 0.5022 0.0765 0.3001 0.2114
eta5 0.0100 0.3659 0.3356 0.0367 0.2173 0.0303 0.6735
eta6 0.4166 0.2607 0.3852 0.0547 0.0161 0.1470 0.0846
r_p1 4.1688 5.5503 6.0000 1.1003 2.2844 6.0000 1.9351
r_p2 1.8143 1.4124 2.4637 1.2682 1.1013 3.1474 3.0574
r_p3 1.0500 2.7356 2.7016 6.0000 2.5748 4.4835 3.4400
r_p4 1.2249 2.4546 6.0000 4.6373 6.0000 6.0000 5.7437
r_p5 5.9336 3.1903 3.2996 1.0500 2.3220 1.9974 2.3688
b_incl1 0.2979 −0.6263 −0.6711 −0.4830 0.1810 −3.1414 2.7598
b_incl2 −1.3690 −0.8602 −1.0240 −0.9280 −0.9396 −1.0629 −1.4527
b_incl3 −1.3990 −1.5479 −0.7454 −1.3936 −3.1416 −0.3684 −3.0640
b_incl4 −0.1553 −0.3599 −0.7031 −0.8976 2.7587 −0.6907 1.2169
b_incl5 −3.0797 −3.0978 1.6515 −0.9712 1.8902 −2.0546 3.1407
Minimum cost 16.9754 19.0315 19.4928 20.8464 17.9505 16.1245 15.5848
Abbreviations: EO, equilibrium optimizer; FFA, Farmland Fertility Algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray
Wolf Optimization; MFO, moth–flame optimization; PSO, particle swarm optimization; TSA, tunicate swarm algorithm.
∑_{i=1}^{NG} P_i = PD + PL,
where PL is obtained using B‐coefficients, given by
PL = ∑_{i=1}^{NG} ∑_{j=1}^{NG} P_i B_ij P_j + ∑_{i=1}^{NG} B_0i P_i + B_00.
T A B L E 27 Comparison of results for the Lennard–Jones potential problem
Algorithms MFO PSO GWO TSA FFA EO GTO
x(1) 0.2671 0.5101 0.0885 0.3952 1.6833 0.5063 0.3898
x(2) 1.3870 0.6014 0.0564 0.8701 0.0000 0.3234 0.0533
x(3) 0.0000 0.3334 0.0001 0.2346 3.1416 0.0993 0.0149
x(4) 0.2431 1.1261 −0.4250 0.0311 2.1714 −0.4830 0.2374
x(5) 0.6003 −0.1796 0.7549 0.1771 0.3154 0.2170 −0.0356
x(6) 0.6206 0.3573 0.4617 0.8615 3.9527 0.1251 −0.9689
x(7) −0.0579 −0.1758 0.5549 −0.4712 4.2500 −1.4984 0.2327
x(8) 0.5313 0.3317 0.5691 0.4195 −0.7042 −0.9648 0.9752
x(9) −0.3676 −0.3018 0.6796 0.0113 −4.2500 −0.4689 0.3120
x(10) 0.1998 −0.2444 −0.0760 0.8097 4.5000 −0.7647 −0.0475
x(11) −0.2608 −0.5978 0.8217 −0.0967 0.1022 −0.3281 −0.7009
x(12) 0.1920 1.1275 1.3947 −0.9175 −3.7141 −0.6517 0.4753
x(13) 0.1327 0.1736 0.7013 0.1488 −4.7500 0.0279 −0.4348
x(14) −0.3662 −0.3807 −0.4230 0.6235 −4.7500 −0.6178 0.2358
x(15) −0.7768 0.2576 0.5965 −0.7126 4.7500 −1.1716 0.5067
x(16) 0.0037 −0.3448 0.5578 4.4230 1.1704 −1.6942 0.3967
x(17) −0.8250 0.3149 −0.7315 1.6414 0.3347 −0.0508 −0.8329
x(18) −2.8792 0.7309 −0.3505 0.1900 3.9295 −0.8346 −0.4146
x(19) 0.3441 0.6694 −0.8490 0.4160 1.6469 −1.3933 −0.3016
x(20) −0.3913 −0.9487 −0.1402 −0.0585 0.0724 −0.1601 0.6182
x(21) 1.1478 0.9023 0.2260 −0.0029 4.7661 0.1095 −0.4357
x(22) −0.7464 0.1325 −0.1549 −0.1764 1.5774 −0.0008 1.1382
x(23) −0.0224 0.2311 −0.8594 1.6307 5.5000 0.2862 −0.1329
x(24) 0.0111 −1.2411 0.3347 0.1136 −5.5000 −0.7550 −0.5768
x(25) −0.4349 0.5326 −0.4152 0.1417 1.6534 −0.0934 −0.4794
x(26) −0.8681 0.0108 −0.5623 2.5093 −0.5413 1.0809 −0.3347
x(27) 0.6086 1.1170 −0.5804 −0.2428 3.9793 −0.1651 −0.3088
x(28) −0.5140 0.4693 −0.1642 −4.3317 0.7903 0.0836 0.6724
x(29) 0.0372 0.2896 −0.0697 −3.5969 5.1388 −0.5345 0.7786
x(30) 0.9538 −2.1906 0.9489 −0.8126 −6.0000 −0.1812 −0.5759
Minimum cost −19.9700 −21.9253 −27.3717 −14.2024 −11.1041 −23.4774 −28.2927
Abbreviations: EO, equilibrium optimizer; FFA, Farmland Fertility Algorithm; GTO, Gorilla Troops Optimizer;
GWO, Gray Wolf Optimization; MFO, moth–flame optimization; PSO, particle swarm optimization; TSA, tunicate swarm
algorithm.
T A B L E 28 Comparison of results for static economic load dispatch problem
Algorithms MFO PSO GWO TSA FFA EO GTO
x(1) 65.7732 110.7118 103.0205 108.5997 86.7764 57.1636 111.8975
x(2) 104.7878 42.2252 36.0000 89.7401 109.3903 91.9088 100.4746
x(3) 63.6692 105.2938 120.0000 110.8107 98.5011 95.7470 64.3511
x(4) 172.5876 162.8263 189.6338 157.1027 189.3158 181.0211 145.3649
x(5) 77.1357 92.2553 97.0000 49.1501 76.4745 96.5571 50.4449
x(6) 99.2621 68.0081 136.5854 94.5158 86.0896 109.7787 138.5457
x(7) 183.2318 121.1933 273.9911 256.8686 298.9501 208.6967 260.0106
x(8) 287.4259 139.7436 274.2183 158.8165 173.9513 189.2741 230.5568
x(9) 287.7518 279.9977 285.4365 200.3664 205.2633 297.9077 247.9155
x(10) 296.1459 288.2853 299.6217 254.1104 219.6204 283.1402 285.6880
x(11) 304.0601 303.1599 371.6609 374.9980 314.6719 374.9649 94.0000
x(12) 320.7162 349.3907 317.7769 333.6486 113.9803 226.6591 337.9839
x(13) 482.7255 482.9042 311.9762 493.6140 155.4001 315.8057 497.1965
x(14) 459.5395 499.8803 161.0828 474.1602 488.5325 491.8038 443.7384
x(15) 314.7693 487.0160 500.0000 486.8390 500.0000 397.6386 500.0000
x(16) 355.7806 499.9725 500.0000 464.3867 491.4372 439.2792 460.5043
x(17) 487.0186 419.7674 500.0000 274.4959 421.1742 475.9742 316.5877
x(18) 459.9431 500.0000 413.1645 437.8435 486.2448 346.1074 336.7837
x(19) 541.0980 271.4420 242.0234 452.2642 488.1976 475.8074 539.5702
x(20) 336.0434 421.5196 416.0744 550.0000 525.6506 429.0016 420.1015
x(21) 482.8277 438.2721 550.0000 549.9985 549.4358 528.7221 544.1446
x(22) 525.4078 518.0395 372.5198 536.9445 447.9415 549.9375 493.7023
x(23) 326.4664 454.4803 549.5955 549.9456 549.9983 432.4329 542.9135
x(24) 530.0683 526.5453 548.7954 550.0000 509.0542 529.5707 546.2014
x(25) 508.2652 503.4424 550.0000 491.4257 549.9992 360.7464 482.2555
x(26) 489.2674 511.7457 529.8968 455.9687 536.6741 519.5425 481.6343
x(27) 10.0000 13.6759 58.2096 10.0193 73.1212 39.3881 25.2666
x(28) 53.3558 22.6828 51.1016 63.9740 17.9213 43.9776 10.1979
x(29) 68.5106 28.2811 25.0228 21.6121 44.8808 64.4903 10.0081
x(30) 63.5712 66.8016 91.4735 49.6835 72.5417 49.3813 97.0000
x(31) 164.5171 164.6338 60.0007 178.2766 147.4712 176.1681 172.7217
x(32) 181.5380 178.5046 189.9991 71.5426 123.4563 188.9978 189.1844
x(33) 114.3790 174.2031 60.0000 189.8273 148.6756 160.2995 187.9121
x(34) 156.6048 129.2586 196.8030 195.1742 187.6350 186.4852 196.4992
x(35) 186.4688 193.9308 184.9826 186.4474 90.0000 180.8683 119.2518
x(36) 166.1395 101.9305 200.0000 90.7939 141.1374 117.0903 196.7932
x(37) 91.7811 81.6841 55.3783 35.7710 108.1165 109.8832 102.1149
x(38) 70.8606 94.2705 25.0000 110.0000 77.5612 75.8633 90.2210
x(39) 110.0000 102.8126 106.4758 33.7037 81.5039 52.2144 25.6793
x(40) 500.5054 549.2117 545.4791 306.5603 513.2528 549.7036 404.5824
Minimum cost 132535.6506 131817.4757 133014.4698 134808.4967 132076.5271 132681.9214 130651.7695
Abbreviations: EO, equilibrium optimizer; FFA, Farmland Fertility Algorithm; GTO, Gorilla Troops Optimizer; GWO, Gray Wolf Optimization; MFO, moth–flame optimization; PSO, particle swarm optimization; TSA, tunicate swarm algorithm.
Generator constraints: The output of each generating unit is restricted by a pair of inequality constraints defining its lower and upper bounds:
P_i^min ≤ P_i ≤ P_i^max.
Ramp rate limits: The operation of each unit is further restricted by ramp rate constraints, as follows.
If power generation increases, P_i − P_i^(t−1) ≤ UR_i.
If power generation decreases, P_i^(t−1) − P_i ≤ DR_i.
Combining the generator limits with the ramp rate limits, the feasible output of each unit satisfies
max( P_i^min, P_i^(t−1) − DR_i ) ≤ P_i ≤ min( P_i^max, P_i^(t−1) + UR_i ).


Prohibited operating zones: Operation of a unit within its prohibited zones must be avoided, which is expressed as
P_i ≤ P̆_i^pz  or  P_i ≥ P̂_i^pz.
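To make the objective concrete, the sketch below evaluates the valve-point fuel cost for a candidate dispatch and adds a simple penalty for violating the power balance. It is an illustrative simplification under stated assumptions: the coefficient arrays and demand value are made up, transmission losses are ignored (PL = 0), and the ramp rate limits and prohibited zones are not checked.

```python
import numpy as np

def eld_cost(p, a, b, c, e, f, p_min, demand, penalty=1e6):
    """Fuel cost with valve-point effect plus a power-balance penalty.

    p: candidate outputs of the NG units (MW); a, b, c, e, f: cost
    coefficients; p_min: lower generation limits; demand: total load PD.
    Transmission losses PL are assumed zero in this sketch.
    """
    p = np.asarray(p, dtype=float)
    fuel = np.sum(a * p**2 + b * p + c
                  + np.abs(e * np.sin(f * (p_min - p))))
    imbalance = abs(np.sum(p) - demand)      # violation of the power balance
    return fuel + penalty * imbalance

# Hypothetical 3-unit example (coefficients and demand are illustrative only).
a = np.array([0.0016, 0.0048, 0.0019])
b = np.array([7.92, 7.97, 7.85])
c = np.array([561.0, 78.0, 310.0])
e = np.array([300.0, 150.0, 200.0])
f = np.array([0.0315, 0.063, 0.042])
p_min = np.array([100.0, 50.0, 100.0])
print(eld_cost([200.0, 150.0, 250.0], a, b, c, e, f, p_min, demand=600.0))
```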
The results of the GTO algorithm and the other comparable optimization algorithms on the static ELD problem are shown in Table 28.
According to these results, GTO again provides a better solution than the other optimizers, maintaining its search ability and performing much better even as the number of variables and constraints increases. Table 28 also indicates that, behind GTO, the MFO, FFA, and EO algorithms perform competitively and close to one another. Overall, the experiments on engineering problems of different dimensions and considerable complexity show that GTO has an excellent ability to solve a variety of problems, even as the dimensions and constraints grow, and it can be regarded as an excellent option for such optimization problems.

6 | CONCLUSION AND FUTURE WORKS

This article presented a new metaheuristic algorithm called GTO, inspired by gorilla troops and their social way of life in nature. The GTO algorithm uses distinct procedures to alternate between the exploration and exploitation phases, and because several different mechanisms are employed, it shows excellent performance and can serve as a robust metaheuristic for solving various problems. The results on the standard benchmark functions indicate that this performance stems from well‐balanced exploration and exploitation operations. The proposed algorithm was tested on 52 diverse standard benchmark functions and on seven engineering problems provided in CEC 2011. In addition, the GTO algorithm was compared with nine other powerful metaheuristic algorithms to appraise its performance. The statistical results of this paper suggest that the GTO algorithm finds better solutions with better convergence than its competitors; for a fair comparison, Friedman's test and the Wilcoxon rank‐sum test were used. On the basis of the experimental results, it is concluded that the GTO algorithm is applicable to real‐world case studies with unknown search spaces, and it can also be applied to solve multi‐objective optimization problems. In future work, GTO can be evaluated further and used to solve combinatorial optimization problems and other diverse problem classes, because metaheuristic algorithms are used in a wide range of problems.

ORCID
Benyamin Abdollahzadeh https://orcid.org/0000-0003-3618-6620
Farhad Soleimanian Gharehchopogh http://orcid.org/0000-0003-1588-1659
Seyedali Mirjalili http://orcid.org/0000-0002-1443-9458

REFERENCES
1. Gharehchopogh FS, Gholizadeh H. A comprehensive survey: whale optimization algorithm and its ap-
plications. Swarm Evol Comput. 2019;48:1‐24.
2. Shayanfar H, Gharehchopogh FS. Farmland fertility: a new metaheuristic algorithm for solving continuous
optimization problems. Appl Soft Comput. 2018;71:728‐746.
3. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Adv Eng Software. 2014;69:46‐61.
4. Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Software. 2016;95:51‐67.
5. Holland JH. Genetic algorithms. Sci Am. 1992;267(1):66‐73.
6. Hussain SF, Iqbal S. Genetic ACCGA: co‐similarity based co‐clustering using genetic algorithm. Appl Soft
Comput. 2018;72:30‐42.
7. Geem ZW, Kim JH, Loganathan GV. A new heuristic optimization algorithm: harmony search. Simulation.
2001;76(2):60‐68.
8. Cheng MY, Prayogo D, Wu YW, Lukito MM. A hybrid Harmony search algorithm for discrete sizing
optimization of truss structure. Autom Constr. 2016;69:21‐33.
9. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the International Conference on
Neural Networks (ICNN'95) (Vol 4). Perth, WA: IEEE; 1995:1942‐1948.
10. Karaboga D. An idea based on honey bee swarm for numerical optimization. Technical Report‐tr06. Erciyes
University, Engineering Faculty, Computer and Engineering; 2005.
11. Yang XS. Firefly algorithm. In: Nature‐inspired metaheuristic algorithms (Vol 2). Luniver Press; 2010:1‐148.
12. Yang XS. A new metaheuristic bat‐inspired algorithm. In: Nature Inspired Cooperative Strategies for
Optimization (NICSO 2010). Berlin, Heidelberg: Springer; 2010:65‐74.
13. Rashedi E, Nezamabadi‐Pour H, Saryazdi S. GSA: a gravitational search algorithm. Inf Sci. 2009;179(13):
2232‐2248.
14. Tilahun NHSL, Sathasivam S, Choon OH. Prey–predator algorithm as a new optimization technique using
in radial basis function neural networks. Res J Appl Sci. 2013;8(7):383‐387.
15. Bansal JC, Sharma H, Jadon SS, Clerc M. Spider monkey optimization algorithm for numerical optimi-
zation. Memetic Comput. 2014;6(1):31‐47.
16. Abedinia O, Naslian MD, Bekravi M. A new stochastic search algorithm bundled honeybee mating for
solving optimization problems. Neural Comput Appl. 2014;25(7‐8):1921‐1939.
17. Cheng M‐Y, Prayogo D. Symbiotic organisms search: a new metaheuristic optimization algorithm. Comput
Struct. 2014;139:98‐112.
18. Saha S, Mukherjee V. A novel chaos‐integrated symbiotic organisms search algorithm for global optimi-
zation. Soft Comput. 2018;22(11):3797‐3816.
19. Mirjalili S. moth–flame optimization algorithm: a novel nature‐inspired heuristic paradigm. Knowl‐Based
Syst. 2015;89:228‐249.
20. Qi X, Zhu Y, Zhang H. A new meta‐heuristic butterfly‐inspired algorithm. J Comput Sci. 2017;23:226‐239.
21. Zhong F, Li H, Zhong S. A modified ABC algorithm based on improved‐global‐best‐guided approach and
adaptive‐limit strategy for global optimization. Appl Soft Comput. 2016;46:469‐486.
22. Mirjalili S. SCA: a sine cosine algorithm for solving optimization problems. Knowl‐Based Syst. 2016;96:
120‐133.
66 | ABDOLLAHZADEH ET AL.

23. Sang H‐Y, Pan Q‐K, Duan P‐y. Self‐adaptive fruit fly optimizer for global optimization. Nat Comput. 2019;
18(4):785‐813.
24. Ruttanateerawichien K, Kurutach W, Pichpibul T. An improved golden ball algorithm for the capacitated
vehicle routing problem. In: Bio‐Inspired Computing—Theories and Applications. Berlin, Heidelberg:
Springer; 2014:341‐356.
25. Gandomi AH, Yang X‐S, Alavi AH. Cuckoo search algorithm: a metaheuristic approach to solve structural
optimization problems. Eng Comput. 2013;29(1):17‐35.
26. Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by simulated annealing. Science. 1983;220(4598):
671‐680.
27. Hsiao Y‐T, Chuang C‐L, Jiang J‐A, Chien C‐C. A novel optimization algorithm: space gravitational opti-
mization. In: 2005 IEEE International Conference on Systems, Man and Cybernetics (Vol 3). IEEE; 2005:
2323‐2328.
28. Simon D. Biogeography‐based optimization. IEEE Trans Evol Comput. 2008;12(6):702‐713.
29. Shah‐Hosseini H. Principal components analysis by the galaxy‐based search algorithm: a novel meta-
heuristic for continuous optimisation. Int J Comput Sci Eng. 2011;6(1‐2):132‐140.
30. Eita MA, Fahmy MM. Group counseling optimization: a novel approach. In: Research and Development in
Intelligent Systems. Vol XXVI. London: Springer; 2010:195‐208.
31. De Castro LN, Zuben FJ Von. The clonal selection algorithm with engineering applications. In: Proceedings
of the GECCO. Las Vegas, Nevada, USA: Morgan Kaufmann; 2000:36‐39.
32. Askarzadeh A. Bird mating optimizer: an optimization algorithm inspired by bird mating strategies.
Commun Nonlinear Sci Numer Simulation. 2014;19(4):1213‐1228.
33. Cuevas E, Cienfuegos M, Zaldivar D, Perez‐Cisneros M. A swarm optimization algorithm inspired in the
behavior of the social‐spider. Expert Syst Appl. 2013;40(16):6374‐6384.
34. Atashpaz‐Gargari E, Lucas C. Imperialist competitive algorithm: an algorithm for optimization inspired by
imperialistic competition. In: 2007 IEEE Congress on Evolutionary Computation. Singapore: IEEE; 2007:
4661‐4667.
35. Shah‐Hosseini H. The intelligent water drops algorithm: a nature‐inspired swarm‐based optimization al-
gorithm. Int J Bio‐Inspired Comput. 2009;1(1‐2):71‐79.
36. Kaveh A, Mahdavi V. Colliding bodies optimization: a novel meta‐heuristic method. Comput Struct. 2014;
139:18‐27.
37. Kashan AH. League championship algorithm (LCA): an algorithm for global optimization inspired by sport
championships. Appl Soft Comput. 2014;16:171‐200.
38. Storn R, Price K. Differential evolution—a simple and efficient heuristic for global optimization over
continuous spaces. J Global Optim. 1997;11(4):341‐359.
39. Kaveh A, Talatahari S. A novel heuristic optimization method: charged system search. Acta Mech. 2010;
213(3‐4):267‐289.
40. Kaveh A, Khayatazad M. A new meta‐heuristic method: ray optimization. Comput Struct. 2012;112:283‐294.
41. Kaveh A, Bakhshpoori T. Water evaporation optimization: a novel physically inspired optimization algo-
rithm. Comput Struct. 2016;167:69‐85.
42. Krishnanand K, Ghose D. Glowworm swarm optimization for simultaneous capture of multiple local
optima of multimodal functions. Swarm Intell. 2009;3(2):87‐124.
43. Kaveh A, Farhoudi N. A new optimization method: dolphin echolocation. Adv Eng Software. 2013;59:53‐70.
44. Eskandar H, Sadollah A, Bahreininejad A, Hamdi M. Water cycle algorithm—a novel metaheuristic op-
timization method for solving constrained engineering optimization problems. Comput Struct. 2012;110:
151‐166.
45. Wolpert DH, Macready WG. No free lunch theorems for search. Technical Report SFI‐TR‐95‐02‐010. Santa
Fe Institute; 1995.
46. Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Trans Evol Comput. 1997;1(1):
67‐82.
47. Ho Y‐C, Pepyne DL. Simple explanation of the no‐free‐lunch theorem and its implications. J Optim Theory
Appl. 2002;115(3):549‐570.
48. Neri F, Cotta C, Moscato P. Handbook of Memetic Algorithms. Studies in computational intelligence
(Vol 379). Springer; 2011:1‐367.
ABDOLLAHZADEH ET AL. | 67

49. McDermott J. When and why metaheuristics researchers can ignore “no free lunch” theorems. SN Comput
Sci. 2020;1(1):1‐18.
50. Bonis SA. Contentment in “Songs of the Gorilla Nation: My Journey through Autism”: a human becoming
hermeneutic study. Adv Nurs Sci. 2012;35(3):273‐283.
51. Prince‐Hughes D. Songs of the gorilla nation. My Journey Through Autism. Crown; 2004:1‐240.
52. McNeilage A. Diet and habitat use of two mountain gorilla groups in contrasting habitats in the Virunga.
In: Mountain Gorillas: Three Decades of Research at Karisoke. Cambridge studies in biological and evolu-
tionary anthropology. Germany: Max‐Planck‐Institut für Evolutionäre Anthropologie; 2005:1‐448.
53. Tutin CEG. Ranging and social structure of lowland gorillas in the Lopé Reserve, Gabon. Great ape societies.
1996:58‐70.
54. Yamagiwa J, Mwanza N, Yumoto T, Maruhashi T. Seasonal change in the composition of the diet of eastern
lowland gorillas. Primates. 1994;35(1):1‐14.
55. Rogers ME, Maisels F, Williamson EA, Fernandez M, Tutin CEG.
Gorilla diet in the Lope Reserve, Gabon. Oecologia. 1990;84(3):326‐339.
56. Yamagiwa J, Kahekwa J, Basabose AK. Intra‐specific variation in social organization of gorillas: implica-
tions for their social evolution. Primates. 2003;44(4):359‐369.
57. Watts DP. Comparative Socio‐ecology of Gorillas. Great Ape Societies; 1996:16‐28.
58. Stokes EJ, Parnell RJ, Olejniczak C. Female dispersal and reproductive success in wild western lowland
gorillas (Gorilla gorilla gorilla). Behav Ecol Sociobiol. 2003;54(4):329‐339.
59. Yamagiwa J. Intra‐ and inter‐group interactions of an all‐male group of Virunga mountain gorillas (Gorilla
gorilla beringei). Primates. 1987;28(1):1‐30.
60. Harcourt AH, Stewart K, Fossey D. Male emigration and female transfer in wild mountain gorilla. Nature.
1976;263(5574):226‐227.
61. Gibeault S, MacDonald SE. Spatial memory and foraging competition in captive western lowland gorillas
(Gorilla gorilla gorilla). Primates. 2000;41(2):147‐160.
62. Robbins MM, Bermejo M, Cipolletta C, et al. Social structure and life‐history patterns in western gorillas
(Gorilla gorilla gorilla). Official Journal of the American Society of Primatologists. 2004;64(2):145‐159.
63. Scott J, Lockard JS. Competition coalitions and conflict interventions among captive female gorillas. Int
J Primatol. 2007;28(4):761‐781.
64. Sicotte P. Effect of male competition on male–female relationships in bi‐male groups of mountain gorillas.
Ethology. 1994;97(1‐2):47‐64.
65. Watts DP. Relations between group size and composition and feeding competition in mountain gorilla
groups. Anim Behav. 1985;33(1):72‐85.
66. Watts DP. Mountain gorilla life histories, reproductive competition, and sociosexual behavior and some
implications for captive husbandry. Zoo Biol. 1990;9(3):185‐200.
67. Yamagiwa J. Dispersal patterns, group structure and reproductive parameters of eastern lowland gorillas at
Kahuzi in the absence of infanticide. In: Mountain Gorillas: Three Decades of Research at Karisoke.
Cambridge University Press; 2001:89‐122.
68. Watts DP. Infanticide in mountain gorillas: new cases and a reconsideration of the evidence. Ethology.
1989;81(1):1‐18.
69. Robbins MM. Variation in the social system of mountain gorillas: the male perspective. In: Mountain
Gorillas: Three Decades of Research at Karisoke. Cambridge University Press; 2001.
70. Watts DP. 12 Gorilla social relationships: a comparative overview. In: Gorilla Biology: A Multidisciplinary
Perspective; 2003:302‐372.
71. Watts DP. Social relationships of female mountain gorillas. In: Cambridge Studies in Biological and Evo-
lutionary Anthropology (Vol 47); 2001:215‐240.
72. Stewart KJ. Social relationships of immature gorillas and silverbacks. Mountain Gorillas: Three Decades of
Research at Karisoke. Cambridge University Press; 2001:183‐213.
73. Harcourt A, Hauser M, Stewart K. Functions of wild gorilla ‘close’ calls. I. Repertoire, context, and in-
terspecific comparison. Behaviour. 1993;124(1‐2):89‐122.
74. Kaur S, Awasthi LK, Sangal AL, Dhiman G. Tunicate swarm algorithm: a new bio‐inspired based meta-
heuristic paradigm for global optimization. Eng Appl Artif Intell. 2020;90:1‐29.
68 | ABDOLLAHZADEH ET AL.

75. Mirjalili S, Mirjalili SM, Hatamlou A. Multi‐verse optimizer: a nature‐inspired algorithm for global opti-
mization. Neural Comput Appl. 2016;27(2):495‐513.
76. Derrac J, García S, Molina D, Herrera F. A practical tutorial on the use of nonparametric statistical tests as
a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput. 2011;
1(1):3‐18.
77. Balachandran M, Devanathan S, Muraleekrishnan R, Bhagawan SS. Optimizing properties of
nanoclay–nitrile rubber (NBR) composites using face centred central composite design. Mater Des. 2012;35:
854‐862.
78. Benyamin A, Farhad SG, Saeid B. Discrete farmland fertility optimization algorithm with metropolis
acceptance criterion for traveling salesman problems. Int J Intell Syst. 2021;36(3):1270‐1303.
79. Abdollahzadeh B, Gharehchopogh FS. A multi‐objective optimization algorithm for feature selection
problems. Eng Comput. 2021:1‐19.
80. Choong SS, Wong L‐P, Lim CP. An artificial bee colony algorithm with a modified choice function for the
traveling salesman problem. Swarm Evol Comput. 2019;44:622‐635.
81. Van den Bergh F, Engelbrecht AP. A study of particle swarm optimization particle trajectories. Inf Sci.
2006;176(8):937‐971.
82. Sheskin, DJ, Handbook of Parametric and Nonparametric Statistical Procedures (Vol 1). Boca Raton:
Chapman and Hall/CRC; 2020:1‐1928.
83. Das S, Suganthan PN. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing
Evolutionary Algorithms on Real World Optimization Problems. Kolkata: Jadavpur University, Nanyang
Technological University; 2010:341‐359.
84. Faramarzi A, Heidarinejad M, Stephens B, Mirjalili S, Mirjalili S. Equilibrium optimizer: a novel optimi-
zation algorithm. Knowl‐Based Syst. 2020;191:105190.
85. Dessouky M, Sharshar H, Albagory Y. A novel tapered beamforming window for uniform concentric
circular arrays. J Electromagn Waves Appl. 2006;20(14):2077‐2089.
86. Dessouky MI, Sharshar HA, Albagory YA. Efficient sidelobe reduction technique for small‐sized concentric
circular arrays. Prog Electromagn Res. 2006;65:187‐200.
87. Gurel L, Ergul O. Design and simulation of circular arrays of trapezoidal‐tooth log‐periodic antennas via
genetic optimization. Prog Electromagn Res. 2008;85:243‐260.
88. Dukic ML, Dobrosavljevic ZS. A method of a spread‐spectrum radar polyphase code design. IEEE J Sel
Areas Commun. 1990;8(5):743‐749.
89. Cassioli A, Lorenzo D, Locatelli M, Schoen F, Sciandrone M. Machine learning for global optimization.
Comput Optim Appl. 2012;51(1):279‐303.
90. Izzo D. Global optimization and space pruning for spacecraft trajectory design. Spacecr Trajectory Optim.
2010;1:178‐200.
91. Schlueter M. Nonlinear Mixed Integer Based Optimization Technique for Space Applications (Doctoral dis-
sertation). University of Birmingham; 2012.
92. Vinkó T, Izzo D. Global optimisation heuristics and test problems for preliminary spacecraft trajectory
design. In: ACT TECHNICAL REPORT, ACT‐TNT‐MAD‐GOHTPPSTD; 2008. https://www.esa.int/gsp/
ACT/doc/INF/pub/ACT-TNT-INF-2008-GOHTPPSTD.pdf
93. Hoare M. Structure and dynamics of simple microclusters. Adv Chem Phys. 1979;40:49‐135.
94. Moloi N, Ali M. An iterative global optimization algorithm for potential energy minimization. Comput
Optim Appl. 2005;30(2):119‐132.

How to cite this article: Abdollahzadeh B, Soleimanian Gharehchopogh F, Mirjalili S. Artificial gorilla troops optimizer: A new nature‐inspired metaheuristic algorithm for global optimization problems. Int J Intell Syst. 2021;1‐72. https://doi.org/10.1002/int.22535
APPENDIX A
See Tables A1–A4
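As a quick illustration of how the benchmark definitions in Table A1 translate into code, a minimal sketch of two of them (F1, the sphere function, and F5, the Rosenbrock function) is given below; the NumPy-based helper names are ours and not part of the original test suite.

```python
import numpy as np

def f1_sphere(x):
    """F1 from Table A1: sum of squares, global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def f5_rosenbrock(x):
    """F5 from Table A1: sum of 100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (x[:-1] - 1.0) ** 2))

# Quick checks at the known optima (d = 30).
print(f1_sphere(np.zeros(30)))       # -> 0.0
print(f5_rosenbrock(np.ones(30)))    # -> 0.0
```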

T A B L E A1 Details of unimodal benchmark functions


No. Type Function Dimensions Range Fmin
F1 US f(x) = ∑_{i=1}^{d} x_i² 30, 100, 500, 1000 [−100, 100]^d 0
F2 UM f(x) = ∑_{i=1}^{d} |x_i| + ∏_{i=1}^{d} |x_i| 30, 100, 500, 1000 [−10, 10]^d 0
F3 UM f(x) = ∑_{i=1}^{d} ( ∑_{j=1}^{i} x_j )² 30, 100, 500, 1000 [−100, 100]^d 0
F4 US f(x) = max_i { |x_i|, 1 ≤ i ≤ d } 30, 100, 500, 1000 [−100, 100]^d 0
F5 UM f(x) = ∑_{i=1}^{d−1} [ 100(x_{i+1} − x_i²)² + (x_i − 1)² ] 30, 100, 500, 1000 [−30, 30]^d 0
F6 US f(x) = ∑_{i=1}^{d} ( ⌊x_i + 0.5⌋ )² 30, 100, 500, 1000 [−100, 100]^d 0
F7 US f(x) = ∑_{i=1}^{d} i·x_i⁴ + random[0, 1) 30, 100, 500, 1000 [−1.28, 1.28]^d 0
Abbreviations: UM, multimodal; US, unimodal.

70
|

T A B L E A2 Details of multimodal benchmark functions


No. Type Function Dimensions Range Fmin
F8 MS d ⎛ ⎛ ⎞⎞ 30,100,500,1000 [−500,500]d −418.9829 × n
f (x ) = −∑i =1 ⎜x isin ⎜ xi ⎟ ⎟
⎝ ⎝ ⎠⎠

d ⎡ ⎤ 30,100,500,1000 [−5.12,5.12]d 0
F9 MS f (x ) = 10d + ∑i =1 ⎣x id − 10cos 2πx i ⎦ ( )
1 d 1 d
F10 MS f (x ) = −20exp −0.2 ( ∑
d i =1
x i2 d
) − exp ( ∑ i =1
cos 2πx i ) + 20 + e 30,100,500,1000 [−32,32]d 0

1 d d xi 30,100,500,1000 [−600,600]d 0
F11 MS f (x ) = ∑
4000 i =1
x i2 − ∏i =1 cos ( )+1 i

π d −1 2 2 d
f (x ) = 2 U x i , 10,100,4
F12 MS d 1 i =1 i
{10sin (πy ) + ∑ (y − 1) ⎡⎣1 + 10sin (πy ) ⎤⎦ + (y − 1) } + ∑ i +1 d i =1 ( ) 30,100,500,1000 [−50,50]d 0
xi + 1
yi = 1 + 4

⎧ m
(
⎪ k x i − a x i > a,

)
U x i , a, k , m = ⎨ 0 − a < x i < a,
( )
⎪ m
⎪ k −x i − a x i < −a
( )

d 2⎡ ⎤ 2⎡ ⎤ 30,100,500,1000 [−50,50]d 0
F13 MS ( ) (
f (x ) = 0.1 sin2 3πx1 + ∑i =1 x i − 1 ⎣1 + sin2 3πx i + 1 ⎦ + x d − 1 ⎣1 + sin2 2πx d ⎦
{ ( ) ) ( ) ( ) }
d
+ ∑i =1 U x i , 5,100,4
( )
Abbreviation: MS, multimodal scalable.
ABDOLLAHZADEH
ET AL.
T A B L E A3 Details of fixed‐dimension multimodal benchmark functions
No. Type Function Dimensions Range Fmin
⎡ ⎤−1
F14 FM 1 25 1 2 [−65,65]d 1

ABDOLLAHZADEH

f (x ) = ⎢ 500 + ∑i =1 6
⎢⎣ i + ∑2j =1 (xj − aj, i ) ⎥⎦
ET AL.

⎡ 2 4 [−5,5]d 0.00030
d x1 bi2 + bi x 2
(
F15 FM f (x ) = ∑i =1 ⎢ai −
) ⎤⎥
bi2 + bi x3 + x 4
⎣ ⎦

1
F16 FM f (x ) = 4x 12 − 2.1x 14 + 3 x 16 + x1 x2 − 4x 22 + 4x 24 2 [−5,5]d −1.0316

5.1 2 5 2 1 2 [−5,5]d 0.398


F17 FM f (x ) = x2 −
( x
4π 2 1
+ π x1 − 6 ) + 10 1 −
( 8π 1
) cos x + 10

2
F18 FM f x = [1 + x1 + x2 + 1 (19 − 14x1 + 3x 12 − 14x2 + 6x1 x2 + 3x 22)]×
() ( ) 2 [−2,2]d 3

[30 + (2x1 − 3x2)2 × (18 − 32x1 + 12x 12 + 48x2 − 36x1 x2 + 27x 22)]

4 3 2 3 [1,3]d −3.86
F19 FM f (x ) = −∑i =1 aiexp −∑ j =1 bij x j − pij
( ( ))
4 6 2
F20 FM f (x ) = −∑i =1 aiexp −∑ j =1 bij x j − pij
( ( )) 6 [0,1]d −3.32

5 −1 4 [0,10]d −10.1532
F21 FM f (x ) = −∑i =1 ⎡⎣ (X − ai )(X − ai )T + ci ⎤⎦

7 −1
F22 FM f (x ) = −∑i =1 ⎡⎣ (X − ai )(X − ai )T + ci ⎤⎦ 4 [0,10]d −10.4028

10 −1 4 [0,10]d −10.5363
F23 FM f (x ) = −∑i =1 ⎡⎣ (X − ai )(X − ai )T + ci ⎤⎦

Abbreviation: FM: fixed dimensions multimodal.


|
71
72 | ABDOLLAHZADEH ET AL.

T A B L E A4 CEC2017 benchmark tests


No. Function Name of the function Class Optimum
F24 C01 Shifted and Rotated Bent Cigar Function Unimodal 100
F25 C03 Shifted and Rotated Zakharov Function Unimodal 300
F26 C04 Shifted and Rotated Rosenbrock's Function Multimodal 400
F27 C05 Shifted and Rotated Rastrigin's Function Multimodal 500
F28 C06 Shifted and Rotated Expanded Schaffer's F6 Function Multimodal 600
F29 C07 Shifted and Rotated Lunacek Bi‐Rastrigin Function Multimodal 700
F30 C08 Shifted and Rotated Noncontinuous Rastrigin's Function Multimodal 800
F31 C09 Shifted and Rotated Lévy Function Multimodal 900
F32 C10 Shifted and Rotated Schwefel's Function Multimodal 1000
F33 C11 Hybrid Function 1 (N = 3) Hybrid 1100
F34 C12 Hybrid Function 2 (N = 3) Hybrid 1200
F35 C13 Hybrid Function 3 (N = 3) Hybrid 1300
F36 C14 Hybrid Function 4 (N = 4) Hybrid 1400
F37 C15 Hybrid Function 5 (N = 4) Hybrid 1500
F38 C16 Hybrid Function 6 (N = 4) Hybrid 1600
F39 C17 Hybrid Function 7 (N = 5) Hybrid 1700
F40 C18 Hybrid Function 8 (N = 5) Hybrid 1800
F41 C19 Hybrid Function 9 (N = 5) Hybrid 1900
F42 C20 Hybrid Function 10 (N = 6) Hybrid 2000
F43 C21 Composition Function 1 (N = 3) Composition 2100
F44 C22 Composition Function 2 (N = 3) Composition 2200
F45 C23 Composition Function 3 (N = 4) Composition 2300
F46 C24 Composition Function 4 (N = 4) Composition 2400
F47 C25 Composition Function 5 (N = 5) Composition 2500
F48 C26 Composition Function 6 (N = 5) Composition 2600
F49 C27 Composition Function 7 (N = 6) Composition 2700
F50 C28 Composition Function 8 (N = 6) Composition 2800
F51 C29 Composition Function 9 (N = 3) Composition 2900
F52 C30 Composition Function 10 (N = 3) Composition 3000
