
International Journal of Computational Intelligence Systems (2024) 17:286

https://doi.org/10.1007/s44196-024-00652-z

RESEARCH ARTICLE

Application of Hybrid Algorithm Based on Ant Colony Optimization and Sparrow Search in UAV Path Planning
Yangyang Tian1 · Jiaxiang Zhang2 · Qi Wang3 · Shanfeng Liu1 · Zhimin Guo1 · Huanlong Zhang2

Received: 6 September 2023 / Accepted: 30 August 2024


© The Author(s) 2024

Abstract
The Traveling Salesman Problem (TSP) is a classic problem in combinatorial optimization, aiming to find the shortest path that
traverses all cities and eventually returns to the starting point. The ant colony optimization algorithm has achieved significant
results, but when the number of cities increases, the ant colony algorithm is prone to fall into local optimal solutions, making
it difficult to obtain the global optimal path. To overcome this limitation, this paper proposes an innovative hybrid ant
colony algorithm. Our main motivation is to introduce other optimization strategies to improve the global search ability and
convergence speed of the ant colony algorithm in solving TSP problems. We first incorporate the iterative solution of the
sparrow search algorithm (SSA) into the ant colony algorithm to provide a better initial pheromone distribution. Second, we
improve the pheromone update method to enhance the algorithm’s diversity during the search process and reduce the risk of
falling into local optima. Finally, we define a dynamic pheromone evaporation factor to adjust the pheromone evaporation
rate according to real-time changes in the search process. Through simulation tests on large-scale TSP problems and practical
applications, we find that the hybrid ant colony algorithm outperforms the ant colony algorithm in both accuracy and running
time. In Eg.2, the average accuracy of ISSA-ACO is improved by 12%, and the average running time is reduced by 45.6%.
This study not only provides a new and effective method for solving large-scale TSP problems but also provides valuable
references and insights for the application of ant colony algorithms in solving other complex optimization problems. At the
same time, our research further verifies the effectiveness of improving heuristic algorithms by fusing different optimization
strategies, providing new ideas and directions for future algorithm design and optimization.

Keywords TSP · Fusion · Pheromone · Dynamic

Jiaxiang Zhang, Qi Wang, Shanfeng Liu, Zhimin Guo and Huanlong Zhang have contributed equally to this work.

Corresponding author: Jiaxiang Zhang, z17603850821@163.com
Yangyang Tian: tianyangyang199306@163.com
Qi Wang: wangqi2@ha.sgcc.com.cn
Shanfeng Liu: liushanfeng@ha.sgcc.com.cn
Zhimin Guo: guozhimin@ha.sgcc.com.cn
Huanlong Zhang: zzuli407@163.com

1 State Grid Henan Electricity Research Institute, No. 85 Songshan South Road, Zhengzhou 450000, Henan Province, China
2 College of Electrical and Information Engineering, Zhengzhou University of Light Industry, No. 136 Ke Xue Avenue, Zhengzhou 450000, Henan Province, China
3 State Grid Henan Electric Power Company, No. 56 Jinshui East Road, Zhengzhou 450000, Henan Province, China

1 Introduction

The traveling salesman problem is a classic combinatorial optimization problem [1–5]. In this problem, a salesman needs to visit N cities, each of which may be visited only once, and finally return to the city of origin; the goal is to find the optimal path. Many problems in real life can be abstracted as traveling salesman problems, and many optimization algorithms have been used to solve them, including genetic algorithms [6–9], particle swarm algorithms [10–12], simulated annealing algorithms [13–15], ant colony algorithms [16–19], Tabu Search [20–22], etc.



Among them, the ant colony algorithm is currently the main optimization algorithm for solving TSP problems. The ant colony algorithm is a distributed intelligent bionic algorithm proposed by M. Dorigo and other scholars to simulate the foraging behavior of ants. It has the advantages of strong positive feedback, robustness and distributed computing. However, the search of the ant colony algorithm is too random at the beginning, which makes the search efficiency low and the overall convergence slow, while in the middle and late stages the pheromone concentration becomes overly concentrated, so the search stagnates and easily falls into a local optimal solution. In response to these problems, many scholars have made improvements and optimizations. T. Stützle et al. [23] proposed the MAX-MIN Ant System (MMAS), which limits the maximum and minimum pheromone levels by setting thresholds; when the algorithm falls into a local optimum, MMAS re-initializes the pheromone and improves the performance of the ant colony algorithm. J.B. Escario et al. [24] proposed Ant Colony Extended, which divides tasks between patrol ants and foraging ants, implements regulatory strategies to control the number of each kind of ant during the search process, and improves the adaptability of the algorithm. Ş. Gülcü et al. [25] proposed a parallel cooperative hybrid algorithm (PACO-3Opt) based on ant colony optimization; the algorithm is based on the master-slave mode and works in parallel with multiple groups in a distributed computing environment. Deng et al. [26] proposed an improved ant colony optimization algorithm based on multiple swarm strategies, collaborative evolution mechanisms, pheromone update strategies and pheromone diffusion mechanisms; it decomposes the optimization problem into several sub-problems and divides the ants in the population into elite ants and ordinary ants to improve the speed of convergence and avoid falling into local optima. F. Dahan et al. [27] proposed an improved flying ant colony optimization algorithm (DFACO), which uses a dynamic neighborhood selection mechanism to balance exploration and exploitation and sets the number of flying ants to half that of ordinary ants to reduce the execution time of FACO. Zhu et al. [28] proposed a multi-group ant colony optimization algorithm (PCCACO) based on the Pearson correlation coefficient; combining distance and pheromone factors, they proposed the unit distance-pheromone operator (UDPO) for a single population, introduced the Pearson correlation coefficient as the evaluation criterion, and rewarded similar path parameters with adaptive frequency. Zeng et al. [29] designed an improved ant colony algorithm based on dynamic heuristic information, abstracted the utilization of personnel and equipment in the transportation industry, proposed a more general TSP with supply arcs, established an optimization model aimed at minimizing the total travel time, and improved the solution of the standard TSP. Zhou et al. [30] proposed a parameter-based adaptive ant colony algorithm, PF3SACO, and designed a dynamic parameter adjustment mechanism based on the particle swarm algorithm and a fuzzy system to adaptively adjust the parameters of PF3SACO.

In exploring the applications of swarm intelligence optimization algorithms, we cannot help but ponder the commonalities between these optimization techniques and optimization problems in other fields. For instance, in the biomedical domain, Umar et al. proposed a stochastic computational program to solve the prevention dynamics problem in HIV systems, and their optimization approach shares similarities with our hybrid algorithm in seeking optimal solutions [31]. Similarly, when studying the numerical simulation of a fractional-order Leptospirosis model, Mukdasai et al. also adopted a comparable optimization approach, albeit using a supervised neural network [32]. Moreover, our hybrid algorithm draws inspiration from optimization strategies in other fields when tackling optimization problems in complex systems. For example, in fluid dynamics and heat transfer research, Shahzad et al.'s study on thin film flow and heat transfer provides insights into finding optimal solutions in complex environments [33]. Concurrently, Sadaf et al.'s analysis of solitary wave behavior demonstrates the importance of finding stable solutions in nonlinear systems, which aligns with the stability considerations in our path planning problem [34]. While designing and implementing our hybrid algorithm, we also drew inspiration from seemingly unrelated fields. For instance, although Waqas et al.'s research on the flow and heat transfer of hybrid nanofluids in stenosed arteries focuses on biomedical engineering problems, their in-depth understanding of fluid dynamics offers valuable references for optimizing our swarm intelligence algorithms [35]. Likewise, Ali et al.'s study on the extended nonlinear Schrödinger equation provides fresh perspectives on wave propagation and optimization in complex systems [36, 37]. Finally, Zafar et al.'s research on the shallow water nonlinear M-fractional evolution equation showcases methods for finding exact solutions in complex environments, which has significant guiding implications for optimizing our swarm intelligence algorithms [38].

Based on the research of predecessors, this paper proposes a fusion ant colony algorithm. In essence, the initial pheromone distribution of the ant colony algorithm is set using the iterative solution of the sparrow search algorithm. This strategy leverages the sparrow search algorithm's strength in global search. By initializing the ant colony algorithm's pheromone with the optimal solution found by the SSA, we essentially "warm up" or "guide" the ant colony algorithm, making it more likely for the ants to initially move towards the global optimum. Furthermore, we dynamically update the pheromone concentration based on the ants' fitness values. This implies that ants with better performance will leave more pheromone on the paths they traverse, encouraging subsequent ants to follow those paths.


This strategy not only makes the search process more directional but also helps the algorithm escape local optima. When the pheromone on a path becomes excessive, causing the search to become trapped in a local optimum, the pheromone on other paths with better performance guides the ants to explore new possibilities. Finally, by dynamically adjusting the evaporation rate, we can better control the rate at which pheromone evaporates, ensuring that the algorithm maintains sufficient diversity throughout the search process. For instance, we can set a higher evaporation rate in the early stages of the search to encourage ants to explore more paths. In the later stages, we can lower the evaporation rate to allow for more in-depth exploration around the already discovered good solutions. This dynamic adjustment strategy not only improves the overall performance of the algorithm but also accelerates its convergence. The organizational structure of this paper is as follows. Section 2 introduces the ant colony algorithm, Sect. 3 introduces the sparrow search algorithm, Sect. 4 proposes the fusion ant colony algorithm, Sect. 5 presents the experimental results, and Sect. 6 summarizes our work and describes prospects for future work.

2 Ant Colony Optimization

The ant colony algorithm was first proposed by Dorigo [39]. The idea of the algorithm comes from the foraging behavior of ants: the pheromone carried by the ant colony is left on the paths the ants pass, the accumulated pheromone evaporates over time, and eventually more pheromone accumulates on the shorter path to the food, which later ants use as reference information when choosing a path. Deneubourg et al. [40] studied in depth the pheromone-laying and pheromone-following behavior of ants. In the double bridge experiment (Fig. 1), two bridges connect the nest of the ants to a food source, across which the ants look for food. At first each ant randomly chooses a path, but because pheromone is released during the ants' search process, the pheromone concentration on the shorter bridge becomes higher after a period of time, attracting more ants, and finally the whole colony gathers on the same bridge. There are two key steps in the ant colony algorithm, namely calculating the state transition probability and updating the pheromone.

Assuming that the pheromone on the path of ant k from node i to node j at time t is τ_ij(t), the transition probability from node i to node j is:

\[
P_{ij}^{k}(t) =
\begin{cases}
\dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}(t)]^{\beta}}{\sum_{s \in allowed_k} [\tau_{is}(t)]^{\alpha}\,[\eta_{is}(t)]^{\beta}}, & j \in allowed_k \\[2mm]
0, & \text{otherwise}
\end{cases}
\tag{1}
\]

where α is the pheromone heuristic factor, reflecting the importance of the pheromone concentration; the larger α is, the more the ants tend to choose nodes with higher concentration. β is the expected heuristic factor, reflecting the importance of the heuristic information. allowed_k is the set of nodes that ant k may still choose. η_ij(t) represents the heuristic information from node i to node j; its specific expression is given in formula (2):

\[
\eta_{ij} = \frac{1}{d_{ij}}
\tag{2}
\]

where d_ij represents the Euclidean distance from node i to node j.

After one iteration of all ants, the pheromone update rule is used to update the pheromone concentration on the paths taken by the ants. The pheromone concentration update formula is:

\[
\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \rho\,\Delta\tau_{ij}(t)
\tag{3}
\]

where ρ is the pheromone evaporation factor, and Δτ_ij(t) indicates the increment of pheromone between node i and node j in this iteration. Its specific expression is:

\[
\Delta\tau_{ij}(t) =
\begin{cases}
\dfrac{Q}{L_k}, & (i,j) \in L_k \\[2mm]
0, & \text{otherwise}
\end{cases}
\tag{4}
\]

where Q is the pheromone intensity and L_k is the total length of the tour of the kth ant in the current iteration.

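To make these two rules concrete, the following minimal Python sketch implements the state transition probability of Eq. (1) and the pheromone update of Eqs. (3)-(4). It is an illustrative reading of the standard ant colony algorithm described above, not the authors' published code; the default parameter values follow Table 1, while the array layout and helper names are assumptions made for this example.

```python
import numpy as np

def transition_probabilities(tau, eta, current, allowed, alpha=1.0, beta=5.0):
    """Eq. (1): probability of moving from `current` to each candidate node in `allowed`."""
    weights = (tau[current, allowed] ** alpha) * (eta[current, allowed] ** beta)
    return weights / weights.sum()

def update_pheromone(tau, tours, lengths, rho=0.3, Q=100.0):
    """Eqs. (3)-(4): evaporate, then deposit Q / L_k on every edge of each ant's closed tour."""
    delta = np.zeros_like(tau)
    for tour, L_k in zip(tours, lengths):
        for i, j in zip(tour, tour[1:] + tour[:1]):   # close the tour back to its start
            delta[i, j] += Q / L_k
            delta[j, i] += Q / L_k                    # symmetric TSP
    return (1.0 - rho) * tau + rho * delta

# Usage sketch: with an n x n distance matrix `dist`, eta = 1.0 / dist off the diagonal (Eq. (2)),
# and the next city is drawn with
#   nxt = np.random.choice(allowed, p=transition_probabilities(tau, eta, cur, allowed))
```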

Fig. 1 Double bridge experiment

3 Sparrow Search Algorithm

In SSA, the population is usually composed of two types of sparrows, discoverers and followers, and a reconnaissance and early-warning mechanism is set up. Discoverers generally have high fitness values; they are in charge of locating food sources and of giving the followers information about their location and orientation. To get better food, the followers always follow the discoverers and may compete for the food resources of a discoverer to raise their own predation rate. Meanwhile, a certain proportion of sparrow individuals are randomly selected from the whole population for vigilance and detection. When danger is found, the sparrow population immediately carries out anti-predator behavior.

The position update equation of the discoverers, which typically represent 10–20% of the population, is as follows:

\[
x_{i,d}^{t+1} =
\begin{cases}
x_{i,d}^{t} \cdot \exp\!\left(\dfrac{-i}{\alpha \cdot M}\right), & R_2 < ST \\[2mm]
x_{i,d}^{t} + Q \cdot L, & R_2 \ge ST
\end{cases}
\tag{5}
\]

Consider a search space with D dimensions and N sparrows, where i = 1, 2, ..., N, d = 1, 2, ..., D, and x_{i,d} denotes the position of the ith sparrow in the dth dimension. t denotes the present iteration, M is the largest achievable number of iterations, α ∈ (0, 1] is a random number, Q is a random number drawn from a normal distribution, and L represents a 1×d matrix in which each component is 1. R_2 ∈ [0, 1] represents the alarm value, and ST ∈ [0.5, 1] denotes the safety threshold. If R_2 < ST, there are no nearby predators, allowing the discoverers to engage in extensive exploration. If R_2 ≥ ST, some sparrows have found predators, and every sparrow immediately flies to a secure region. The position update equations in SSA are designed to mimic the foraging and anti-predation behavior of sparrows, promoting efficient movement and exploration within the search space. These equations typically involve exponential and additive terms that work together to update the sparrows' positions.

The remaining sparrows, apart from the discoverers, are followers, with the following position update equation:

\[
x_{i,d}^{t+1} =
\begin{cases}
Q \cdot \exp\!\left(\dfrac{xworst_d^{t} - x_{i,d}^{t}}{i^{2}}\right), & i > \dfrac{n}{2} \\[3mm]
xbest_d^{t+1} + \dfrac{1}{D}\displaystyle\sum_{d=1}^{D}\left(\operatorname{rand}\{-1,1\} \cdot \left|x_{i,d}^{t} - xbest_d^{t+1}\right|\right), & i \le \dfrac{n}{2}
\end{cases}
\tag{6}
\]

where xworst_d^t is the worst position of the population in the dth dimension during the tth iteration, and xbest_d^{t+1} is the best position in the dth dimension during the (t+1)th iteration of the population. When i > n/2, the ith follower was not fed and its fitness is low; to obtain higher fitness, it must fly to distant locations to hunt. When i ≤ n/2, the ith follower locates itself randomly around the best foraging position.

The number of sparrows used for detection and early warning typically ranges from 10 to 20% of the population; the following equation represents their position update:

\[
x_{i,d}^{t+1} =
\begin{cases}
xbest_d^{t} + \beta \cdot \left|x_{i,d}^{t} - xbest_d^{t}\right|, & f_i \ne f_g \\[2mm]
x_{i,d}^{t} + K \cdot \dfrac{\left|x_{i,d}^{t} - xworst_d^{t}\right|}{\left|f_i - f_w\right| + \varepsilon}, & f_i = f_g
\end{cases}
\tag{7}
\]

where β is the step-length control coefficient drawn from the standard normal distribution, f_i is the fitness of the current sparrow, f_g and f_w are the current best and worst fitness values of the population, respectively, and ε is a very small constant that avoids a zero denominator. When f_i ≠ f_g, the sparrow is at the edge of the population and is therefore vulnerable to predators. When f_i = f_g, the sparrow in the middle of the population is aware of the predator's threat and quickly moves closer to the other sparrows. In the absence of predator threats, sparrows can explore the search space more freely; in this case, the exponential term in the position update equation is more likely to guide sparrows toward global or local optimal positions. When sparrows perceive the presence of predators, their primary concern is to quickly escape from the danger zone; in this situation, the exponential term in the position update equation enables sparrows to quickly move away from the danger zone, while the additive term keeps the escape path sufficiently random, thereby reducing the risk of being preyed upon.

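As an illustration of Eqs. (5)-(7), the following Python sketch performs one SSA generation on a population of continuous position vectors. It is a simplified reading of the standard sparrow search algorithm, not the authors' implementation; the population split (20% discoverers, 10% scouts) and the way the random terms are drawn are assumptions used only for this example.

```python
import numpy as np

def ssa_step(X, fitness, M, ST=0.8):
    """One simplified generation of the sparrow search algorithm (Eqs. (5)-(7))."""
    n, D = X.shape
    order = np.argsort(fitness)                     # ascending: smaller fitness is better
    X, fitness = X[order].copy(), fitness[order]
    best, worst = X[0].copy(), X[-1].copy()
    f_g, f_w = fitness[0], fitness[-1]

    n_disc = max(1, int(0.2 * n))                   # assumed: 20% discoverers
    R2 = np.random.rand()                           # alarm value
    for i in range(n_disc):                         # Eq. (5)
        if R2 < ST:
            X[i] = X[i] * np.exp(-(i + 1) / ((np.random.rand() + 1e-12) * M))
        else:
            X[i] = X[i] + np.random.randn() * np.ones(D)

    for i in range(n_disc, n):                      # Eq. (6): followers
        if i + 1 > n / 2:
            X[i] = np.random.randn() * np.exp((worst - X[i]) / (i + 1) ** 2)
        else:
            X[i] = best + np.mean(np.random.choice([-1, 1], D) * np.abs(X[i] - best))

    scouts = np.random.choice(n, max(1, n // 10), replace=False)   # assumed: 10% scouts
    for i in scouts:                                # Eq. (7): vigilance behaviour
        if fitness[i] != f_g:
            X[i] = best + np.random.randn() * np.abs(X[i] - best)
        else:
            X[i] = X[i] + np.random.uniform(-1, 1) * np.abs(X[i] - worst) / (abs(fitness[i] - f_w) + 1e-50)
    return X
```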

4 Proposed Method (ISSA-ACO)

4.1 Improve the Initial Pheromone

The initial pheromone concentration is the same everywhere in the basic ant colony algorithm, which leads to high randomness in the early stage of iteration and increases the time cost of the path search. To solve this problem, this paper integrates the sparrow search algorithm, which has strong optimization ability and fast convergence speed, into the ant colony algorithm. The better solution generated by the sparrow search algorithm is used as the initial pheromone information of the ant colony algorithm, which gives the ants a better direction in the early stage of iteration, improves the quality of the solutions, and lays a good foundation for the rapid convergence of the algorithm.

4.2 Improve the Way Pheromone Updates

When all the ants of the first generation complete the path search and update the path pheromone, the positive feedback mechanism of the ant colony algorithm causes the pheromone to become increasingly concentrated on the current optimal path over time, so that the probability of each path being selected by each ant tends to become stable, which easily leads to a local optimum. To enhance the global search ability of the algorithm, this paper improves the pheromone increment update rules and proposes a reward and punishment mechanism to update the pheromone increment. The ants are ranked according to the length of the path each one currently traverses; the shorter the path, the higher the ranking. Three grade criteria are created: standard 1, ants ranked in the top 25%; standard 2, ants ranked between 25% and 75%; and standard 3, ants ranked in the bottom 25%. After each ant updates its own pheromone, it is compared against the grade standard. When an ant meets standard 1, the pheromone of the optimal path is rewarded on the basis of the original pheromone. When an ant meets standard 2, the pheromone of the sub-optimal path is rewarded on the basis of the original pheromone. When an ant meets standard 3, the pheromone of the worst path is punished on the basis of the original pheromone. The specific expressions are:

\[
\begin{aligned}
\text{standard 1:}\quad & \Delta\tau_{ij}(t) = \lambda_1 \times \frac{Q}{L^{best}} \\
\text{standard 2:}\quad & \Delta\tau_{ij}(t) = \lambda_2 \times \frac{Q}{L^{s\text{-}best}} \\
\text{standard 3:}\quad & \Delta\tau_{ij}(t) = \lambda_3 \times \frac{Q}{L^{worst}}
\end{aligned}
\tag{8}
\]

where λ_1, λ_2 and λ_3 are coefficients, taken in this article as 0.5, 0.6 and −0.5, respectively. L^best represents the length of the optimal path, L^{s-best} represents the length of the sub-optimal path, and L^worst represents the length of the worst path. When an ant meets standard 1, the optimal pheromone information is added to enhance the guiding effect on subsequent ant iterations; the standard-1 ants can be understood as pioneer ants, playing the role of charging ahead and taking the lead. When an ant meets standard 2, the pheromone information of the sub-optimal path is added, which increases the pheromone of the ant and improves the quality of subsequent solutions. These following ants account for 50% of the population and play the role of the mainstay, and the pheromone concentration of the following ants may even exceed that of the pioneer ants. Drawing on the idea of the simulated annealing algorithm, there is a certain probability of accepting a poor solution, which is conducive to jumping out of local optima. When an ant meets standard 3, it is called an abnormal ant and its solution quality is low; we reduce the current pheromone concentration to reduce the impact of the poor path on subsequent iterations.

4.3 Evaporation Weight Factor of Pheromone

The pheromone evaporation coefficient plays a key role in the ant colony algorithm, affecting the convergence speed and the path length of the algorithm. In the traditional ant colony algorithm, the pheromone evaporation coefficient is a constant, which cannot balance the global exploration and local exploitation capabilities of the algorithm. In this paper, an adaptive weight factor is used to adaptively adjust the pheromone evaporation coefficient. In the early stage of the iteration, a higher pheromone concentration can improve the ants' global exploration ability; at the end of the iteration, if the pheromone concentration is too large, the algorithm easily falls into a local optimum. We therefore improve the evaporation of pheromone and strengthen the local exploitation ability of the algorithm. The adaptive weight factor expression is:

\[
\omega = \frac{2}{5}\sin\!\left(\frac{\pi t}{2T}\right) + \frac{7}{10}
\tag{9}
\]

Therefore, the new pheromone concentration update formula is:

\[
\tau_{ij}(t+1) = \omega \cdot \tau_{ij}(t) + \Delta\tau_{ij}(t)
\tag{10}
\]

In this formula, the specific expression of Δτ_ij(t) is given by formula (8), according to the different grade levels of the ants.
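The reward-and-punishment increments of Eq. (8) and the adaptive weight of Eqs. (9)-(10) can be sketched as follows. This is an illustrative Python reading of the rules described above, not the published implementation; the handling of rank ties and the small positivity floor at the end are assumptions added for robustness.

```python
import numpy as np

def adaptive_weight(t, T):
    """Eq. (9): omega = (2/5) * sin(pi * t / (2T)) + 7/10."""
    return 0.4 * np.sin(np.pi * t / (2 * T)) + 0.7

def issa_pheromone_update(tau, tours, lengths, t, T, Q=100.0, lambdas=(0.5, 0.6, -0.5)):
    """Eqs. (8) and (10): rank-based reward/punishment plus the adaptive evaporation weight."""
    m = len(lengths)
    order = np.argsort(lengths)                    # shorter tour -> better rank
    rank = np.empty_like(order)
    rank[order] = np.arange(m)
    L_best, L_sbest, L_worst = min(lengths), sorted(lengths)[1], max(lengths)

    delta = np.zeros_like(tau)
    for k, tour in enumerate(tours):
        r = rank[k] / m
        if r < 0.25:                               # standard 1: pioneer ants, rewarded with the best path
            inc = lambdas[0] * Q / L_best
        elif r < 0.75:                             # standard 2: following ants, rewarded with the sub-optimal path
            inc = lambdas[1] * Q / L_sbest
        else:                                      # standard 3: abnormal ants, punished with the worst path
            inc = lambdas[2] * Q / L_worst
        for i, j in zip(tour, tour[1:] + tour[:1]):
            delta[i, j] += inc
            delta[j, i] += inc
    # Eq. (10); the small floor is a practical safeguard (not from the paper) so that
    # the punished edges cannot drive the pheromone negative.
    return np.maximum(adaptive_weight(t, T) * tau + delta, 1e-12)
```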


Fig. 2 ISSA-ACO Flowchart

4.4 Algorithm ISSA-ACO Implementation Steps

ISSA-ACO integrates the sparrow search algorithm and the ant colony algorithm and improves the pheromone update rules of the ant colony algorithm, which improves the convergence speed and solution accuracy of the algorithm. The specific steps of ISSA-ACO are:

Step 1: Read the node coordinates and calculate the distances between the nodes.

Step 2: Set the specific parameters of the sparrow search algorithm: population size pop, dimension dim, maximum number of iterations max, upper boundary ub, and lower boundary lb. The dimension dim is equal to the number of nodes. When a sparrow individual reaches the boundary, it is fixed to the boundary and allowed to continue participating in the search in subsequent iterations. This method may reduce the diversity of the population, but it helps the algorithm conduct a more refined search near the boundary.

Step 3: Calculate the fitness value of each sparrow and rank the sparrows from smallest to largest. The fitness value in the algorithm is the distance of one traversal of all nodes.

Step 4: Update the positions, with the top-ranked sparrow as the discoverer and the remaining sparrows as followers carrying out their different update strategies, and randomly select 10% of the sparrows as scouts.

Step 5: Carry out boundary detection, recalculate the fitness, re-rank, and continue to iterate.

Step 6: Output the solution of the sparrow search algorithm and use it to initialize the pheromone matrix. Initialize the parameters of the ant colony algorithm: pheromone heuristic factor α, expected heuristic factor β, number of ants m, and maximum number of iterations T.

Step 7: Randomly assign the m ants to the nodes, and let each ant search for the next node according to the state transition rule, formula (1).

Step 8: When the kth ant completes its traversal, update the pheromone concentration according to formula (10).

Step 9: When all ants have iterated once, update the grade standard and increase the iteration counter by 1.

Step 10: Repeat Steps 7-9 until the maximum number of iterations T is reached, and output the optimal path and the optimal path length. The flowchart is shown in Fig. 2, and a code-level sketch of this overall flow is given below.
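The steps above can be wired together in a compact driver. The sketch below reuses the helper functions from the previous listings (transition_probabilities, ssa_step and issa_pheromone_update are assumed to be in scope). It is a schematic outline of Steps 1-10 rather than the authors' code: in particular, the random-key decoding of SSA positions into tours and the simple edge-boost used to seed the pheromone matrix are assumed, simplified choices, since the paper does not spell out these details.

```python
import numpy as np

def tour_length(tour, dist):
    return sum(dist[i, j] for i, j in zip(tour, tour[1:] + tour[:1]))

def issa_aco(dist, m=10, T=100, ssa_pop=10, ssa_iter=100, alpha=1.0, beta=5.0):
    n = len(dist)
    eta = 1.0 / (dist + np.eye(n))                 # Eq. (2); the identity guards the zero diagonal

    # Steps 2-6: run the SSA on continuous "random keys", decode tours by argsort,
    # and seed the pheromone matrix with the best SSA tour (assumed seeding rule).
    X = np.random.rand(ssa_pop, n)
    for t in range(ssa_iter):
        fit = np.array([tour_length(list(np.argsort(x)), dist) for x in X])
        X = ssa_step(X, fit, ssa_iter)
    fit = np.array([tour_length(list(np.argsort(x)), dist) for x in X])
    seed_tour = list(np.argsort(X[int(np.argmin(fit))]))
    tau = np.ones((n, n))
    for i, j in zip(seed_tour, seed_tour[1:] + seed_tour[:1]):
        tau[i, j] = tau[j, i] = 2.0                # edges of the SSA tour start with more pheromone

    best_tour, best_len = seed_tour, tour_length(seed_tour, dist)
    for t in range(T):                             # Steps 7-10: the ant colony search
        tours = []
        for _ in range(m):
            tour = [int(np.random.randint(n))]
            while len(tour) < n:
                allowed = [j for j in range(n) if j not in tour]
                p = transition_probabilities(tau, eta, tour[-1], allowed, alpha, beta)
                tour.append(int(np.random.choice(allowed, p=p)))
            tours.append(tour)
        lengths = [tour_length(tr, dist) for tr in tours]
        tau = issa_pheromone_update(tau, tours, lengths, t, T)
        if min(lengths) < best_len:
            best_len, best_tour = min(lengths), tours[int(np.argmin(lengths))]
    return best_tour, best_len
```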


Table 1 Set the parameters of each algorithm

Algorithm Ants α β ρ Q Max-iteration SSA-POP SSA-Maxiter
ACO 20 1 5 0.3 100 100 – –
GA-ACO 10 1 5 0.3 100 100 – –
ISSA-ACO 10 1 5 ω 100 100 10 100

Table 2 Experimental results of small-scale problems

Instance Algorithm Max-value Min-value Avg-value Max-time Min-time Avg-time
ACO 8.00E+03 7.57E+03 7.82E+03 4.38E+00 3.73E+00 3.96E+00
berlin52 GA-ACO 7.97E+03 7.62E+03 7.81E+03 3.37E+00 3.02E+00 3.20E+00
ISSA-ACO 8.00E+03 7.54E+03 7.80E+03 2.02E+00 1.98E+00 1.99E+00
ACO 7.65E+02 7.15E+02 7.35E+02 7.05E+00 6.02E+00 6.26E+00
st70 GA-ACO 7.68E+02 7.32E+02 7.43E+02 5.40E+00 5.12E+00 5.23E+00
ISSA-ACO 7.63E+02 7.18E+02 7.40E+02 3.34E+00 3.31E+00 3.32E+00
ACO 2.39E+04 2.24E+04 2.34E+04 1.23E+01 1.20E+01 1.21E+01
kroA100 GA-ACO 2.42E+04 2.29E+04 2.37E+04 9.72E+00 9.27E+00 9.43E+00
ISSA-ACO 2.46E+04 2.32E+04 2.39E+04 7.24E+00 6.18E+00 6.42E+00
ACO 7.39E+02 7.16E+02 7.27E+02 1.38E+01 1.20E+01 1.23E+01
eil101 GA-ACO 7.51E+02 7.17E+02 7.34E+02 1.03E+01 9.71E+00 9.91E+00
ISSA-ACO 7.47E+02 7.08E+02 7.27E+02 7.46E+00 6.53E+00 6.83E+00
ACO 1.59E+04 1.48E+04 1.52E+04 1.47E+01 1.25E+01 1.28E+01
lin105 GA-ACO 1.54E+04 1.46E+04 1.51E+04 1.12E+01 1.08E+01 1.09E+01
ISSA-ACO 1.61E+04 1.49E+04 1.53E+04 7.83E+00 6.71E+00 6.94E+00
ACO 7.11E+03 6.65E+03 6.91E+03 2.12E+01 1.86E+01 1.89E+01
ch130 GA-ACO 7.27E+03 6.64E+03 6.94E+03 1.56E+01 1.51E+01 1.52E+01
ISSA-ACO 7.15E+03 6.76E+03 6.96E+03 1.12E+01 9.71E+00 9.96E+00
ACO 7.29E+03 6.93E+03 7.13E+03 2.70E+01 2.51E+01 2.54E+01
ch150 GA-ACO 7.34E+03 6.85E+03 7.10E+03 2.07E+01 1.93E+01 2.01E+01
ISSA-ACO 7.44E+03 6.82E+03 7.02E+03 1.55E+01 1.36E+01 1.38E+01
ACO 3.55E+04 3.37E+04 3.46E+04 6.32E+01 4.40E+01 4.61E+01
kroB200 GA-ACO 3.60E+04 3.44E+04 3.51E+04 3.49E+01 3.22E+01 3.32E+01
ISSA-ACO 3.50E+04 3.26E+04 3.41E+04 2.39E+01 2.15E+01 2.18E+01

5 Experimental Simulation and Analysis

In this section, we conduct an experimental analysis of ISSA-ACO. The simulation experiments in this paper are carried out under a Windows 10 64-bit operating system with an Intel Core i7 CPU, 2.50 GHz, 16 GB RAM, and Python 3.7. The experiments are divided into two groups according to the size of the problem: small-scale and large-scale. To compare the effectiveness of the algorithm, we use 8 different TSP instances (berlin52, st70, kroA100, eil101, lin105, ch130, ch150, kroB200) for the small-scale problems and 12 different TSP instances (lin318, rd400, fl417, pr439, pcb442, d493, u574, rat575, p654, d657, u724, rat783) for the large-scale problems. All instances are from the TSPLIB reference library; these instances are commonly used to test optimization algorithms, and the number at the end of an instance name indicates the size of the problem. In the simulation experiments, the parameters of the algorithms are shown in Table 1. For each TSP instance, every algorithm is run 10 times, and the maximum, minimum, average value and running time of the 10 results are analyzed. The experimental results are shown in Tables 2 and 3.

Population size refers to the number of individuals in an algorithm's population. This parameter has a direct impact on the algorithm's search ability and computational efficiency. A larger population size can increase search diversity, enhance the algorithm's global search capability, and help find the global optimal solution. However, an excessively large population size also increases the algorithm's computational cost and time complexity. When selecting the population size, we should balance the complexity of the specific problem and the required search precision. For simple or low-dimensional problems, a smaller population size may be sufficient to find a satisfactory solution, whereas for complex or high-dimensional problems, a larger population size may be necessary to ensure search thoroughness and diversity. The maximum number of iterations determines the total number of algorithm runs, significantly affecting the algorithm's convergence speed and final solution quality. Insufficient iterations may lead to inadequate search of the solution space, causing the algorithm to fall into local optima, whereas excessive iterations may add unnecessary computational time and even lead to overfitting. We should set the number of iterations based on the problem's nature and the algorithm's convergence speed. If the algorithm converges quickly, we can reduce the number of iterations to improve efficiency; if the convergence speed is slow or the problem complexity is high, we should increase the number of iterations to ensure search thoroughness.


Table 3 Experimental results of large-scale problems

Instance Algorithm Max-value Min-value Avg-value Max-time Min-time Avg-time
ACO 5.34E+04 4.96E+04 5.16E+04 1.14E+02 1.05E+02 1.07E+02
lin318 GA-ACO 5.33E+04 5.03E+04 5.17E+04 8.83E+01 7.98E+01 8.20E+01
ISSA-ACO 5.27E+04 4.95E+04 5.10E+04 5.57E+01 5.31E+01 5.38E+01
ACO 2.09E+04 2.02E+04 2.06E+04 1.71E+02 1.63E+02 1.64E+02
rd400 GA-ACO 2.04E+04 1.87E+04 1.98E+04 1.39E+02 1.27E+02 1.30E+02
ISSA-ACO 2.00E+04 1.87E+04 1.93E+04 8.77E+01 8.57E+01 8.62E+01
ACO 1.74E+04 1.56E+04 1.63E+04 1.81E+02 1.79E+02 1.80E+02
fl417 GA-ACO 1.72E+04 1.40E+04 1.56E+04 1.46E+02 1.36E+02 1.38E+02
ISSA-ACO 1.56E+04 1.52E+04 1.54E+04 9.13E+01 8.94E+01 8.98E+01
ACO 1.52E+05 1.42E+05 1.47E+05 2.00E+02 1.95E+02 1.98E+02
pr439 GA-ACO 1.48E+05 1.38E+05 1.40E+05 1.57E+02 1.47E+02 1.49E+02
ISSA-ACO 1.39E+05 1.32E+05 1.35E+05 1.05E+02 1.02E+02 1.03E+02
ACO 7.58E+04 7.16E+04 7.40E+04 2.15E+02 2.00E+02 2.03E+02
pcb442 GA-ACO 7.45E+04 6.91E+04 7.10E+04 1.61E+02 1.51E+02 1.53E+02
ISSA-ACO 6.85E+04 6.52E+04 6.70E+04 1.05E+02 9.99E+01 1.02E+02
ACO 5.03E+04 4.73E+04 4.91E+04 2.59E+02 2.56E+02 2.58E+02
d493 GA-ACO 4.80E+04 4.60E+04 4.70E+04 2.06E+02 1.94E+02 1.96E+02
ISSA-ACO 4.60E+04 4.45E+04 4.50E+04 1.30E+02 1.27E+02 1.29E+02
ACO 5.78E+04 5.49E+04 5.63E+04 3.35E+02 3.33E+02 3.34E+02
u574 GA-ACO 5.43E+04 5.16E+04 5.34E+04 2.64E+02 2.53E+02 2.55E+02
ISSA-ACO 5.06E+04 4.84E+04 4.94E+04 1.78E+02 1.71E+02 1.73E+02
ACO 1.07E+04 1.02E+04 1.04E+04 3.43E+02 3.33E+02 3.36E+02
rat575 GA-ACO 9.40E+03 8.57E+03 9.12E+03 2.68E+02 2.51E+02 2.55E+02
ISSA-ACO 8.04E+03 7.35E+03 7.58E+03 1.72E+02 1.69E+02 1.70E+02
ACO 5.80E+04 5.08E+04 5.55E+04 4.31E+02 4.28E+02 4.29E+02
p654 GA-ACO 6.02E+04 5.20E+04 5.51E+04 3.46E+02 3.32E+02 3.35E+02
ISSA-ACO 5.15E+04 4.85E+04 4.96E+04 2.33E+02 2.25E+02 2.28E+02
ACO 8.16E+04 7.55E+04 7.86E+04 4.39E+02 4.28E+02 4.33E+02
d657 GA-ACO 7.52E+04 7.08E+04 7.30E+04 3.43E+02 3.32E+02 3.35E+02
ISSA-ACO 7.08E+04 6.62E+04 6.80E+04 2.33E+02 2.30E+02 2.32E+02
ACO 7.51E+04 6.89E+04 7.12E+04 5.46E+02 5.42E+02 5.44E+02
u724 GA-ACO 6.83E+04 6.29E+04 6.55E+04 4.10E+02 4.02E+02 4.04E+02
ISSA-ACO 6.19E+04 5.90E+04 6.07E+04 3.10E+02 2.77E+02 2.86E+02
ACO 1.85E+04 1.75E+04 1.81E+04 6.13E+02 6.03E+02 6.07E+02
rat783 GA-ACO 1.69E+04 1.62E+04 1.66E+04 5.05E+02 4.79E+02 4.85E+02
ISSA-ACO 1.61E+04 1.48E+04 1.54E+04 3.25E+02 3.21E+02 3.23E+02
Bold values indicate the best results

Additionally, we can adopt a dynamic iteration adjustment strategy, flexibly adjusting the number of iterations based on the algorithm's real-time convergence behavior. Boundary setting defines the range of the search space, significantly affecting the algorithm's search direction and efficiency. A reasonable boundary setting ensures that the algorithm searches for the optimal solution within an effective search space, avoiding invalid searches and computations. If the boundary setting is too narrow, it may limit the algorithm's search range, causing it to miss the global optimal solution; if the boundary setting is too wide, it may increase search complexity and computational cost. The boundary setting should be determined based on the actual problem and prior knowledge. If there is no clear prior knowledge, a suitable boundary range can be determined through preliminary experiments or analysis. Additionally, we can adopt a dynamic boundary adjustment strategy, flexibly adjusting the boundary range based on the algorithm's real-time search behavior and the distribution of solutions.

In Tables 2 and 3, bold values indicate the better result in the comparison of the algorithms.


Fig. 3 Iterative curve and optimal path for small-scale problems


Fig. 4 Iterative curve and optimal path for small-scale problems


Fig. 5 Iterative curve and optimal path for small-scale problems


Fig. 6 Iterative curve and optimal path for small-scale problems


Fig. 7 Iterative curve and optimal path for large-scale problems


Fig. 8 Iterative curve and optimal path for large-scale problems


Fig. 9 Iterative curve and optimal path for large-scale problems


Fig. 10 Iterative curve and optimal path for large-scale problems


Fig. 11 Iterative curve and optimal path for large-scale problems


Fig. 12 Iterative curve and optimal path for large-scale problems


Fig. 13 Statistics of TSP experimental results

From the small-scale problems, we can find that in the berlin52 map, except for the maximum value, all indicators of ISSA-ACO are better than those of ACO and GA-ACO. In the st70 map, the maximum indicator of ISSA-ACO is better than that of ACO. In eil101, the minimum indicator of ISSA-ACO is better than that of ACO. In ch150, the minimum and average indicators of ISSA-ACO are better than those of ACO. Although the maximum, minimum and average values of ISSA-ACO in kroA100, lin105 and ch130 are not as good as those of ACO and GA-ACO, the overall gap is small, and the running time is better than that of the other two algorithms, saving nearly half of the time compared with the original ACO. We can also observe that, as the number of cities increases, the optimization performance of ISSA-ACO becomes better and better compared with the ant colony algorithm.

A comprehensive large-scale experimental analysis conducted in this study reveals the remarkable advantages of ISSA-ACO in addressing the Traveling Salesman Problem (TSP), a classic physical space optimization challenge. ISSA-ACO consistently finds optimal solutions in all tested TSP benchmark instances, demonstrating exceptional computational efficiency. Notably, in the tests on the large-scale TSP instances u724 and rat783, ISSA-ACO's average computational accuracy improved by 14% compared to the traditional ACO algorithm, while its average computation time was reduced by 47%. More impressively, on the instance rat575, ISSA-ACO achieved an average accuracy 27% higher than ACO and a 49% reduction in runtime. These data convincingly demonstrate the efficiency and accuracy of ISSA-ACO in handling large-scale spatial optimization problems. It is worth noting that while ISSA-ACO's performance may be slightly inferior to the ant colony algorithm in small-scale map tests, its performance advantage becomes particularly evident when dealing with large-scale maps. This is mainly reflected in the significant reduction in error rate, the acceleration of convergence, and the remarkable improvement in search performance. This trend becomes increasingly pronounced as the problem size expands, highlighting ISSA-ACO's unique advantage in handling large-scale, complex spatial path optimization problems. To more intuitively showcase these experimental results, we have prepared Fig. 8, which details the performance metric comparisons of the algorithms after optimization. This chart clearly reflects the outstanding performance of ISSA-ACO in handling large-scale physical space optimization problems, further emphasizing the algorithm's potential and value in practical applications.

To improve the efficiency of the strategy, this study implemented path planning for two actual pole tower maps. The specific steps are as follows:

Step 1: Calculate the straight-line distance between each pair of pole towers based on the geographical coordinates (longitude, latitude, and altitude) of the pole towers to be inspected by the drone. In flat areas, if the altitude difference between the pole towers is not significant, the influence of altitude on the distance calculation can be ignored.

Step 2: Using the calculated straight-line distances, construct a complete distance matrix (a sketch of this construction is given after this list).

Step 3: Initialize the parameters of the sparrow search algorithm and the ant colony algorithm to lay the foundation for the subsequent optimization process.

Step 4: Combined with the information of the pole towers to be inspected by the drone, use the sparrow search algorithm to generate an initial pheromone distribution for the ant colony algorithm.

Step 5: Based on these initial pheromones, use the ant colony algorithm to obtain the optimal path for drone inspection. To verify the performance of the algorithm, we conducted multiple experimental tests.

All algorithms were run 10 times, and the results were analyzed in depth from multiple dimensions, including the maximum value, minimum value, average value, and running time (Table 4).
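Steps 1-2 reduce to building a symmetric distance matrix from the tower coordinates. The Python sketch below uses the haversine great-circle formula plus an optional altitude term; the Earth-radius constant and the Euclidean combination of ground distance and height difference are illustrative assumptions, since the paper only states that straight-line distances were used and that altitude can be ignored in flat areas.

```python
import numpy as np

def tower_distance_matrix(lon, lat, alt=None, R=6371000.0):
    """Pairwise straight-line distances (metres) between pole towers.

    lon and lat are 1-D NumPy arrays in degrees; alt (optional) is in metres.
    """
    lon, lat = np.radians(lon), np.radians(lat)
    dlon = lon[:, None] - lon[None, :]
    dlat = lat[:, None] - lat[None, :]
    a = np.sin(dlat / 2) ** 2 + np.cos(lat)[:, None] * np.cos(lat)[None, :] * np.sin(dlon / 2) ** 2
    ground = 2 * R * np.arcsin(np.sqrt(a))           # haversine great-circle distance
    if alt is None:
        return ground                                 # flat area: altitude ignored
    dalt = alt[:, None] - alt[None, :]
    return np.sqrt(ground ** 2 + dalt ** 2)           # combine ground distance and height difference

# Usage: dist = tower_distance_matrix(np.array(lons), np.array(lats), np.array(alts))
# The resulting matrix is then fed to the SSA initialisation and the ant colony search (Steps 3-5).
```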


Fig. 14 Iterative curve and optimal path of the actual tower position


Table 4 Analysis of actual tower results

Instance Algorithm Max-value Min-value Avg-value Max-time Min-time Avg-time
ACO 3.00E+02 2.53E+02 2.78E+02 2.48E+02 2.46E+02 2.47E+02
Eg.1 GA-ACO 3.21E+02 2.69E+02 2.89E+02 2.12E+02 1.92E+02 1.98E+02
ISSA-ACO 2.81E+02 2.44E+02 2.60E+02 1.43E+02 1.31E+02 1.38E+02
ACO 3.88E+01 3.23E+01 3.59E+01 7.23E+01 7.18E+01 7.20E+01
Eg.2 GA-ACO 4.83E+01 4.23E+01 4.55E+01 5.64E+01 5.15E+01 5.32E+01
ISSA-ACO 3.42E+01 2.86E+01 3.16E+01 3.94E+01 3.90E+01 3.91E+01
Bold values indicate the best results

Table 5 Rank sum test p value between ISSA-ACO and ACO

TSP berlin52 st70 kroA100 eil101 lin105 ch130 ch150
p-value 8.50E−01 4.96E−01 4.13E−02 8.80E−01 3.26E−01 4.50E−01 1.12E−01

Table 6 Rank sum test p value between ISSA-ACO and ACO

TSP kroB200 lin318 rd400 fl417 pr439 pcb442 d493
p-value 1.31E−01 3.26E−01 1.57E−04 1.57E−04 1.57E−04 1.57E−04 1.57E−04

The experimental results show that, on the actual problems, the accuracy and running time of ISSA-ACO are better than those of the original ACO. In Eg.1, the average accuracy of ISSA-ACO is increased by 6.6% and the average running time is reduced by 43.8%. In Eg.2, the average accuracy of ISSA-ACO is increased by 12% and the average running time is shortened by 45.6%. Among them, Eg.1 includes a total of 480 points such as the Dong 4 line, Dong 7 line and Tan 3 line, and Eg.2 includes a total of 255 points such as the front white line, the front white line Longmen, and the front white line Hexi. Through a large number of experiments, it can be seen that when solving the TSP problem, ISSA-ACO requires a shorter time and fewer iterations to converge than the original ACO, so it has clear advantages, which also shows the effectiveness of the improvement.

For the evaluation of the improved algorithm's performance, we also need to carry out statistical tests. In other words, it is not enough to compare the advantages and disadvantages of the algorithms based only on the optimal distance and running time; statistical tests are needed to prove that ISSA-ACO has a significant improvement over the original algorithm. This paper uses the Wilcoxon rank sum test [41], a non-parametric statistical method that is often used to compare whether there is a significant difference between the medians of two independent samples. At a significance level of 5%, we judge whether each result of ISSA-ACO is statistically significantly different from the best result of the original algorithm. When the p value is less than 5%, the hypothesis is rejected, indicating that the compared algorithms differ significantly; otherwise, accepting the hypothesis indicates that the optimization ability of the compared algorithms is the same overall. Tables 5, 6 and 7 show the rank sum test p values between ISSA-ACO and the original algorithm on 20 TSP instances. It can be concluded from the tables that most of the p values are far less than 5%, which shows that the superiority of ISSA-ACO is significant.
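The significance check described above can be reproduced with SciPy's rank-sum test, as in the brief sketch below. The sample values are placeholders for illustration only, not the data behind Tables 5-7.

```python
from scipy.stats import ranksums

# Tour lengths from 10 independent runs on one TSP instance (placeholder values).
issa_aco_runs = [7540, 7562, 7581, 7555, 7570, 7548, 7590, 7565, 7572, 7558]
aco_runs      = [7570, 7610, 7595, 7620, 7588, 7603, 7577, 7615, 7599, 7608]

stat, p_value = ranksums(issa_aco_runs, aco_runs)
print(f"rank-sum statistic = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the two algorithms differ significantly.")
else:
    print("No significant difference at the 5% level.")
```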

Table 7 Rank sum test p value between ISSA-ACO and ACO

TSP u574 rat575 p654 d657 u724 rat783
p-value 1.57E−04 1.57E−04 3.27E−04 1.57E−04 1.57E−04 1.57E−04


6 Conclusion

To improve the convergence speed, broaden the search scope, and enhance the optimization capability of the traditional ant colony algorithm, this paper presents an innovative hybrid ant colony algorithm. Initially, our method integrates the solution iterated by the sparrow search algorithm into the ant colony algorithm to strategically distribute the initial pheromone. This integration notably accelerates the initial convergence of the ant colony algorithm. Moreover, we have fine-tuned the pheromone updating mechanism during the iterative process, thereby increasing the algorithm's likelihood of escaping local minima. Additionally, a dynamic pheromone evaporation factor has been incorporated to maintain an equilibrium between the global exploration and local exploitation abilities of the algorithm. These advancements are designed to deliver high-quality solutions in a shorter timeframe while avoiding local minima traps. To rigorously assess the algorithm's effectiveness, we performed a comparative analysis using eight distinct TSP instances for the smaller-scale problems and twelve TSP instances for the larger-scale problems. The findings underscore the strong optimization performance of our algorithm, positioning it as a useful reference for addressing large-scale TSP problems in pursuit of optimal routes. Moving forward, we are dedicated to further examining the impact of various parameters on the experimental results, refining the algorithm's optimization ability in smaller-scale TSP settings, and ultimately leveraging this refined methodology to tackle real-world problems.

Acknowledgements The authors thank the State Grid Henan Electric Power Research Institute and Zhengzhou University of Light Industry for their technical support.

Author Contributions TYY and ZJX proposed the idea and wrote the manuscript. All authors took part in designing the solution. TYY implemented the algorithms and compared them. WQ and LSF participated in the design of the solution and conducted verification and analysis. GZM and ZHL verified and analyzed the implementation results to ensure the feasibility and effectiveness of the study. All authors reviewed and proofread the final manuscript.

Funding This paper is supported by the National Natural Science Foundation of China (No. 62006213, 62102373, 62272423), the Henan Province Key R&D Project (241111210400), the Henan Provincial Science and Technology Research Project (No. 222102320321), the Henan Youth Talent Promotion Project (No. 2022HYTP005), and the Starry Sky Maker Space Incubation Project of Zhengzhou University of Light Industry (No. 2021ZCKJ306).

Data Availability The data used to support the findings of this study are available from the corresponding author upon request.

Declarations

Conflict of interest The authors declare that they have no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

References

1. Applegate, D.L., Bixby, R.E., Chvátal, V., Cook, W.J.: The Traveling Salesman Problem: A Computational Study. Princeton University Press, Princeton (2011)
2. Lin, S.: Computer solutions of the traveling salesman problem. Bell Syst. Tech. J. 44(10), 2245–2269 (1965)
3. Reinelt, G.: TSPLIB–a traveling salesman problem library. ORSA J. Comput. 3(4), 376–384 (1991)
4. Gutin, G., Punnen, A.P.: The Traveling Salesman Problem and Its Variations, vol. 12. Kluwer Academic Publishers, London (2006)
5. Little, J.D., Murty, K.G., Sweeney, D.W., Karel, C.: An algorithm for the traveling salesman problem. Oper. Res. 11(6), 972–989 (1963)
6. Mitchell, M.: An Introduction to Genetic Algorithms. MIT Press, Cambridge (1998)
7. Davis, L.: Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York (1991)
8. Holland, J.H.: Genetic algorithms. Sci. Am. 267(1), 66–73 (1992)
9. Sivanandam, S., Deepa, S.: Introduction to Genetic Algorithms. Springer, Berlin (2008)
10. Kennedy, J., Eberhart, R.C.: A discrete binary version of the particle swarm algorithm. In: 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, vol. 5, pp. 4104–4108 (1997)
11. Shi, Y., et al.: Particle swarm optimization: developments, applications and resources. In: Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81–86. IEEE (2001)
12. Poli, R., Kennedy, J., Blackwell, T.: Particle swarm optimization: an overview. Swarm Intell. 1, 33–57 (2007)
13. Bertsimas, D., Tsitsiklis, J.: Simulated annealing. Statist. Sci. 8(1), 10–15 (1993)
14. Van Laarhoven, P.J., Aarts, E.H.: Simulated Annealing: Theory and Applications. Springer, Dordrecht (1987)
15. Corana, A., Marchesi, M., Martini, C., Ridella, S.: Minimizing multimodal functions of continuous variables with the "simulated annealing" algorithm. ACM Trans. Math. Softw. 13(3), 262–280 (1987)
16. Dorigo, M., Birattari, M., Stützle, T.: Ant colony optimization. IEEE Comput. Intell. Mag. 1(4), 28–39 (2006)
17. Dorigo, M., Di Caro, G.: Ant colony optimization: a new meta-heuristic. In: Proceedings of the 1999 Congress on Evolutionary Computation, vol. 2, pp. 1470–1477. IEEE (1999)
18. Blum, C.: Ant colony optimization: introduction and recent trends. Phys. Life Rev. 2(4), 353–373 (2005)
19. Parpinelli, R.S., Lopes, H.S., Freitas, A.A.: Data mining with an ant colony optimization algorithm. IEEE Trans. Evolut. Comput. 6(4), 321–332 (2002)
20. Glover, F., Laguna, M.: Tabu search. In: Du, D.Z., Pardalos, P.M. (eds.) Handbook of Combinatorial Optimization. Springer, Boston (1998)
21. Glover, F.: Tabu search–part I. ORSA J. Comput. 1(3), 190–206 (1989)
22. Glover, F.: Tabu search–part II. ORSA J. Comput. 2(1), 4–32 (1990)
23. Stützle, T., Hoos, H.H.: Max-min ant system. Future Gener. Comput. Syst. 16(8), 889–914 (2000)
24. Escario, J.B., Jimenez, J.F., Giron-Sierra, J.M.: Ant colony extended: experiments on the travelling salesman problem. Expert Syst. Appl. 42(1), 390–410 (2015)
25. Gülcü, Ş., Mahi, M., Baykan, Ö.K., Kodaz, H.: A parallel cooperative hybrid method based on ant colony optimization and 3-opt algorithm for solving traveling salesman problem. Soft Comput. 22, 1669–1685 (2018)


26. Deng, W., Xu, J., Zhao, H.: An improved ant colony optimization algorithm based on hybrid strategies for scheduling problem. IEEE Access 7, 20281–20292 (2019)
27. Dahan, F., El Hindi, K., Mathkour, H., AlSalman, H.: Dynamic flying ant colony optimization (DFACO) for solving the traveling salesman problem. Sensors 19(8), 1837 (2019)
28. Zhu, H., You, X., Liu, S.: Multiple ant colony optimization based on Pearson correlation coefficient. IEEE Access 7, 61628–61638 (2019)
29. Zeng, X., Song, Q., Yao, S., Tian, Z., Liu, Q.: Traveling salesman problems with replenishment arcs and improved ant colony algorithms. IEEE Access 9, 101042–101051 (2021)
30. Zhou, X., Ma, H., Gu, J., Chen, H., Deng, W.: Parameter adaptation-based ant colony optimization with dynamic hybrid mechanism. Eng. Appl. Artif. Intell. 114, 105139 (2022)
31. Umar, M., Amin, F., Al-Mdallal, Q., Ali, M.R.: A stochastic computing procedure to solve the dynamics of prevention in HIV system. Biomed. Signal Process. Control 78, 103888 (2022)
32. Mukdasai, K., Sabir, Z., Raja, M.A.Z., Sadat, R., Ali, M.R., Singkibud, P.: A numerical simulation of the fractional order leptospirosis model using the supervised neural network. Alexandria Eng. J. 61(12), 12431–12441 (2022)
33. Shahzad, A., Liaqat, F., Ellahi, Z., Sohail, M., Ayub, M., Ali, M.R.: Thin film flow and heat transfer of Cu-nanofluids with slip and convective boundary condition over a stretching sheet. Sci. Rep. 12(1), 14254 (2022)
34. Sadaf, M., Arshed, S., Akram, G., Ali, M.R., Bano, I.: Analytical investigation and graphical simulations for the solitary wave behavior of Chaffee-Infante equation. Results Phys. 54, 107097 (2023)
35. Waqas, H., Farooq, U., Hassan, A., Liu, D., Noreen, S., Makki, R., Imran, M., Ali, M.R.: Numerical and computational simulation of blood flow on hybrid nanofluid with heat transfer through a stenotic artery: silver and gold nanoparticles. Results Phys. 44, 106152 (2023)
36. Ali, K.K., Tarla, S., Ali, M.R., Yusuf, A.: Modulation instability analysis and optical solutions of an extended (2+1)-dimensional perturbed nonlinear Schrödinger equation. Results Phys. 45, 106255 (2023)
37. Ali, K.K., Tarla, S., Ali, M.R., Yusuf, A., Yilmazer, R.: Physical wave propagation and dynamics of the Ivancevic option pricing model. Results Phys. 52, 106751 (2023)
38. Zafar, A., Raheel, M., Mahnashi, A.M., Bekir, A., Ali, M.R., Hendy, A.: Exploring the new soliton solutions to the nonlinear M-fractional evolution equations in shallow water by three analytical techniques. Results Phys. 54, 107092 (2023)
39. Colorni, A., Dorigo, M., Maniezzo, V., et al.: Distributed optimization by ant colonies. In: Proceedings of ECAL91 - European Conference on Artificial Life, vol. 142, pp. 134–142. Paris, France (1991)
40. Goss, S., Aron, S., Deneubourg, J.-L., Pasteels, J.M.: Self-organized shortcuts in the Argentine ant. Naturwissenschaften 76(12), 579–581 (1989)
41. Wilcoxon, F.: Individual comparisons by ranking methods. In: Kotz, S., Johnson, N.L. (eds.) Breakthroughs in Statistics. Springer Series in Statistics, pp. 196–202. Springer, New York (1992)

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

