
Journal of Computational Information Systems 9: 1 (2013) 97-104

Available at http://www.Jofcis.com
An Adaptive Particle Swarm Optimization Algorithm
Based on Cat Map

Ming LI, Huiya ZHAO, Fuzhong NIAN
School of Computer and Communication, Lanzhou University of Technology, Lanzhou 730050, China
Abstract
Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm that has been applied in many fields. However, it has drawbacks: it easily falls into local optima during the iterative process and converges slowly in the late iterative process. To resolve these problems, an adaptive particle swarm optimization algorithm based on the Cat map (ACPSO) is proposed. Exploiting the ergodicity and regularity of the Cat map, particles can be scattered uniformly over the search space, which increases the diversity of the swarm. At the same time, the local concentration of particles is used to adaptively adjust the inertia weight and thereby improve the convergence rate: when the concentration of particles is high, a smaller inertia weight is used to strengthen the local search ability and improve accuracy; when the concentration is low, a larger inertia weight is used to strengthen the global search ability and speed up convergence. The simulation results indicate that the proposed algorithm is less likely to fall into local minima and converges faster in the late iterative process than the compared algorithms.
Keywords: Chaotic Mutation Operation; Cat Map; Local Optimum; Global Convergence Rate; Particle
Swarm Optimization Algorithm
1 Introduction
Particle swarm optimization is a population-based stochastic optimization algorithm proposed by Kennedy and Eberhart in 1995 [1]. It adopts a global search strategy and a simple velocity-position update model, avoiding complicated genetic operators such as mutation and crossover. Owing to its fast computation, robustness and other characteristics, the particle swarm optimization algorithm has been successfully applied in many areas such as classification and pattern recognition [2].

Project supported by the National Natural Science Foundation of China (No. 61263019), the Natural Science Foundation of Gansu Province (No. 1014RJZA028, No. 1112RJZA029) and the Fundamental Research Funds for the Gansu Universities (No. 1114ZTC144).

Corresponding author.
Email address: shirley200803@126.com (Huiya ZHAO).
1553-9105 / Copyright © 2013 Binary Information Press
January 1, 2013
There are many improved variants of particle swarm optimization. A linearly decreasing inertia weight particle swarm optimization (LDWPSO) is introduced in [3], which improves the global and local search ability to some extent and has a faster convergence rate, but the linearly decreasing inertia weight may easily lead PSO into local optima. An adaptive inertia weight particle swarm optimization (APSO) is presented in [4], which can keep PSO from being trapped in local optima but has high computational complexity. New PSO algorithms based on different topologies are proposed in [5], which can reduce time consumption. The above improved methods each have advantages and disadvantages. Another improvement, the combination of chaos and PSO, has become more and more popular recently [6]; this approach can take advantage of the merits of both techniques and has become a mainstream method.
In this paper, an adaptive chaotic particle swarm optimization (ACPSO) algorithm is proposed. Particles are scattered uniformly over the search space by using the Cat map, while the local concentration of particles is used to adaptively adjust the inertia weight and improve the convergence rate. The simulation results indicate that the proposed algorithm does not easily fall into local minima and converges faster in the late iterative process than the compared algorithms.
The rest of the paper is organized as follows: a review of the original PSO is given in Section 2; Section 3 explains the proposed method; in Section 4, simulation results and analysis are presented; finally, Section 5 outlines our conclusions and future research.
2 The Original Particle Swarm Optimization
In the original PSO, each particle is composed of three n-dimensional vectors, where n is the dimensionality of the search space: the position vector $x_i$, the velocity vector $v_i$, and the best position the particle has encountered so far. The quality of a particle is evaluated by calculating its fitness value. The best position encountered by the particle itself is denoted $p_b$, and the best position encountered by the whole swarm is denoted $p_g$. The velocity and position of a particle at the next iteration are updated according to the following equations:

$$x_{i+1} = x_i + v_i \qquad (1)$$

$$v_{i+1} = v_i + c_1 r_1 (p_b - x_i) + c_2 r_2 (p_g - x_i) \qquad (2)$$

where $c_1$ and $c_2$ are acceleration constants and $r_1$, $r_2$ are random numbers uniformly distributed in [0, 1].
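The two update equations can be sketched in a few lines of Python. This is a minimal illustration: the function name `pso_step` and the list-based particle representation are our own, and the position update uses the current velocity exactly as Eq. (1) is printed.

```python
import random

def pso_step(x, v, p_b, p_g, c1=2.0, c2=2.0):
    """One PSO update for a single particle per Eqs. (1)-(2).

    x, v, p_b, p_g are length-n lists; r1 and r2 are fresh uniform
    random numbers in [0, 1), drawn once per update.
    """
    x_new = [xi + vi for xi, vi in zip(x, v)]                    # Eq. (1)
    r1, r2 = random.random(), random.random()
    v_new = [vi + c1 * r1 * (pb - xi) + c2 * r2 * (pg - xi)      # Eq. (2)
             for vi, xi, pb, pg in zip(v, x, p_b, p_g)]
    return x_new, v_new
```

With $c_1 = c_2 = 0$ the particle simply drifts with constant velocity, which makes the roles of the two attraction terms easy to see.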
3 Adaptive Chaotic Particle Swarm Optimization Algorithm
3.1 The comparison of three chaotic maps
Chaos, a seemingly irregular movement, appears stochastic in deterministic nonlinear systems under deterministic conditions. With its ergodicity, randomness and regularity, a chaotic sequence can traverse all states in a certain range without repetition [7]. The Logistic map [8] and the Tent map [9] have been used to improve PSO. In this paper, the Cat map [10] is adopted to improve PSO instead. The three maps are analyzed as follows:
Fig. 1: Distributions of the three maps after iterating 30,000 times (Logistic map, Tent map, and the x and y sequences of the Cat map)
(1) Logistic map:

$$x_{i+1} = f(x_i, u) = u x_i (1 - x_i) \qquad (3)$$

(2) Tent map:

$$x_{i+1} = \begin{cases} 2 x_i, & 0 \le x_i \le 0.5 \\ 2 (1 - x_i), & 0.5 < x_i \le 1 \end{cases} \qquad (4)$$

(3) Cat map:

$$\begin{pmatrix} x_{i+1} \\ y_{i+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x_i \\ y_i \end{pmatrix} \bmod 1 = C \begin{pmatrix} x_i \\ y_i \end{pmatrix} \bmod 1 \qquad (5)$$
The simulation experiments were done in Matlab R2010a. The distribution of each map, iterated 30,000 times in the range [0, 1], is presented in Fig. 1. The Logistic map's distribution is high at the two ends and low in the middle, so the algorithm needs extra iterations to reach the optimal solution when the optimal value falls in the middle of the range; this greatly reduces search speed and efficiency. The Tent map tends to a fixed point because of the limited computer word length: the binary sequence of the fractional part tends to all zeros after a certain number of unsigned left-shift operations. In this paper, the proposed algorithm is based on the Cat map, whose x and y sequences have better traversal uniformity and faster iteration speed than the Logistic map and the Tent map. The x and y sequences initialize the positions and velocities of the particles, which increases the diversity of the population, improves the uniform ergodicity and yields a more effective search space. This avoids the significantly low search efficiency and accuracy otherwise seen on high-dimensional complex test functions.
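The three maps are straightforward to iterate directly. The sketch below (plain Python, with helper names of our own) reproduces the behaviour described above, including the Tent map's collapse to a fixed point under finite floating-point word length:

```python
def logistic(x, u=4.0):
    # Eq. (3); u = 4 gives fully developed chaos on [0, 1]
    return u * x * (1.0 - x)

def tent(x):
    # Eq. (4)
    return 2.0 * x if x <= 0.5 else 2.0 * (1.0 - x)

def cat(x, y):
    # Eq. (5) with C = [[1, 1], [1, 2]], taken mod 1 componentwise
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

# The Tent map degenerates in double precision: each step shifts the
# mantissa left, so any finite binary fraction reaches 0 within ~55 steps.
x = 0.1
for _ in range(200):
    x = tent(x)
print(x)  # 0.0 -- the fixed point the text warns about

# The Cat map's x and y sequences keep wandering over [0, 1).
x, y = 0.1, 0.2
for _ in range(1000):
    x, y = cat(x, y)
```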
3.2 Adaptive chaotic particle swarm optimization algorithm
To resolve the problems that PSO easily falls into local optima during the whole iterative process and converges slowly in the late iterative process, an adaptive chaotic particle swarm optimization algorithm is proposed. The basic idea is mainly reflected in the following two aspects:
(1) The strategy of chaotic initialization
The Cat map is used to initialize the population, following the comparison in Section 3.1. Thanks to its ergodicity and regularity, a more even distribution and a larger search space are obtained, instead of the heterogeneous space generated by a random sequence. The y sequence is used to initialize the velocities of the particles, reducing the time consumption of the initialization process.
Using the Cat map to initialize the population: the variable $x_i$ in Eq. (5), representing the position of a particle, is given n slightly different initial values, denoted $x_1 = (x_{11}, x_{12}, \ldots, x_{1n})$. The variable $y_i$, representing the velocity of a particle, is given another n slightly different initial values, denoted $y_1 = (y_{11}, y_{12}, \ldots, y_{1n})$. Each particle $x_i$ represents a potential solution.
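As a concrete sketch, the initialization can be written as below. The helper name `cat_init` is our own, and since the paper does not spell out how the [0, 1) Cat-map iterates are mapped onto the search range, the linear rescaling used here is an assumption:

```python
def cat_init(num_particles, dim, x0=0.1, y0=0.2, lo=-600.0, hi=600.0):
    """Initialize positions (from the Cat map's x sequence) and velocities
    (from its y sequence); the linear scaling of the [0, 1) iterates into
    [lo, hi] is our assumption."""
    positions, velocities = [], []
    x, y = x0, y0
    for _ in range(num_particles):
        pos, vel = [], []
        for _ in range(dim):
            # one Cat-map step per coordinate, Eq. (5)
            x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
            pos.append(lo + (hi - lo) * x)
            vel.append(lo + (hi - lo) * y)
        positions.append(pos)
        velocities.append(vel)
    return positions, velocities
```

Because successive seeds differ only slightly yet the map is chaotic, the resulting swarm covers the search range far more evenly than tiny perturbations of a single point would.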
(2) The Adaptive Inertia Weight
The adaptive adjustment process: the inertia weight is a key parameter for balancing local and global search ability. A larger inertia weight facilitates global exploration, enabling the algorithm to search new areas, while a smaller one facilitates local exploitation and increases search accuracy [11]. Because of the randomness of the chaotic sequence alone, the algorithm's performance is not stable enough [12]. In view of this, the inertia weight is adjusted adaptively by the local concentration of particles in this paper. First the definition of the local concentration of particles is given, then the equation for the adaptive weight factor of each particle.
Definition 1 For a set of particles, the local concentration of particles is defined as follows:

$$\delta(x_i) = \frac{1}{k} \sum_{n=1}^{k} u\big(\varepsilon - d(x_i, p_n)\big), \qquad u(t) = \begin{cases} 1, & t \ge 0 \\ 0, & t < 0 \end{cases} \qquad (6)$$

where N is the number of particles, k is the number of neighbor particles, $d(x_i, p_n)$ is the distance between particle $x_i$ and the best position $p_n$ encountered by its n-th neighbor, and $\varepsilon$ is a preset distance threshold. $\delta(x_i)$ is the normalized count of neighbor particles whose distance to $x_i$ is less than $\varepsilon$; its value lies in the range [0, 1].
In each iteration, the particle with the best fitness is selected first; its distances to the other particles in the population are calculated and sorted in ascending order, and the first k particles are selected as the neighbor particles of that particle $x_i$. Whenever $d(x_i, p_n) \le \varepsilon$, the count in $\delta(x_i)$ increases by 1. The best-fitness particle among those remaining is then selected, and so on until all particles have been traversed. A larger value of $\delta(x_i)$ represents a larger concentration of particles, indicating that the particles are relatively concentrated.
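This neighbourhood count can be sketched as follows. It is a minimal illustration: the name `local_concentration`, the Euclidean distance, and the normalisation by k are our reading of Definition 1.

```python
import math

def local_concentration(i, positions, p_best, k=5, eps=1e-8):
    """delta(x_i) of Eq. (6): the fraction of particle i's k nearest
    neighbours whose personal best positions lie within eps of x_i."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    # sort the other particles by distance to particle i, keep the k nearest
    others = sorted((j for j in range(len(positions)) if j != i),
                    key=lambda j: dist(positions[i], positions[j]))
    neighbours = others[:k]
    close = sum(1 for j in neighbours if dist(positions[i], p_best[j]) <= eps)
    return close / k
```

A fully clustered swarm yields a concentration of 1, a well-spread one yields 0, matching the interpretation in the text.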
Definition 2 The adaptive weight factor is defined as follows:

$$w(x_i) = e^{-\delta(x_i)} \qquad (7)$$
The range of Eq. (7) is [0.3679, 1], which conforms to the usual range of [0.4, 1.4] proposed in [13]. The adaptive weight factor is a nonlinear decreasing function of the particle concentration. A smaller weight is given to the k+1 particles that are likely to be gathered near the optimal value, which improves the local search ability and the search accuracy. Conversely, a larger weight is given to k+1 particles that are scattered and far from the global optimum, which improves the global search ability and the search efficiency of the algorithm.
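Numerically, Eq. (7) behaves as follows (a one-line helper with a name of our own choosing):

```python
import math

def adaptive_weight(delta):
    # Eq. (7): w = exp(-delta); delta in [0, 1] maps to w in [1/e, 1], i.e. about [0.3679, 1]
    return math.exp(-delta)
```

The weight decreases smoothly and monotonically as the concentration grows, which is exactly the behaviour the two preceding paragraphs describe.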
Taking the k+1 particles as the update unit not only overcomes, to some extent, the drawback of updating all particles with the same weight without considering the differences between particles, but also avoids the computational cost of calculating a weight for every particle. The proposed adaptive weight factor is adjusted by the value of
the local concentration of particles. It balances the global and local search capabilities and effectively avoids PSO's problem of being easily trapped in local optima. In particular, it speeds up convergence in the later iterations.
The procedure for implementing ACPSO is given by the following steps:

(1) Initialization of swarm positions and velocities: initialize a population of N particles using the Cat map in the n-dimensional problem space. Set the acceleration constants and the maximum number of iterations.

(2) Evaluation of particle fitness: evaluate the fitness value of each particle.

(3) Comparison to $p_b$: compare the fitness of each particle with its $p_b$. If the current value is better than $p_b$, set $p_b$ equal to the current value.

(4) Comparison to $p_g$: compare the fitness of each particle with the best position encountered by the whole swarm. If the current value is better than $p_g$, set $p_g$ equal to the current value.

(5) The following operations are performed on the particles in the population:
a. Set the counter j to 0 and the FLAG of every particle to 0, j = 1, 2, ..., N;
b. Calculate the local concentration of particles according to Eq. (6) and the adaptive weight according to Eq. (7). The k+1 particles form the update unit for the weight, and the FLAG of each updated particle is set to 1. Find the next particle with the best fitness and FLAG = 0;
c. Update the velocities of the particles;
d. j = j + k + 1;
e. Return to b until a stop criterion is met, usually until the total number N is reached.

(6) Repeat the evolutionary cycle: return to Step (2) until a stop criterion is met, usually a sufficiently good fitness or the maximum number of iterations.
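The steps above can be sketched as one loop. This is an illustrative reading of steps (1)-(6), not the authors' exact implementation: the velocity-initialization scaling, the Euclidean distance, and the simple sphere objective standing in for the benchmarks are all assumptions of this sketch.

```python
import math
import random

def sphere(x):
    # stand-in objective; the paper tests Griewank, Rosenbrock, etc.
    return sum(xi * xi for xi in x)

def acpso(f, dim=2, swarm=20, iters=100, lo=-5.12, hi=5.12,
          c1=2.0, c2=2.0, k=5, eps=1e-8):
    """Sketch of ACPSO: Cat-map initialization, then group-wise updates
    with the adaptive weight w = exp(-concentration) of Eq. (7)."""
    # step (1): Cat-map initialization (x sequence -> positions, y -> velocities)
    x, y = 0.1, 0.2
    pos, vel = [], []
    for _ in range(swarm):
        p, v = [], []
        for _ in range(dim):
            x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
            p.append(lo + (hi - lo) * x)
            v.append((hi - lo) * (y - 0.5) * 0.1)   # small initial speeds (assumption)
        pos.append(p)
        vel.append(v)
    pb = [p[:] for p in pos]                         # personal bests, steps (2)-(3)
    pb_val = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pb_val[i])   # global best, step (4)
    gb, gb_val = pb[g][:], pb_val[g]

    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    for _ in range(iters):                           # step (6)
        updated = [False] * swarm                    # step (5)a: all FLAGs = 0
        while not all(updated):
            # step (5)b: best remaining particle and its k nearest neighbours
            i = min((j for j in range(swarm) if not updated[j]),
                    key=lambda j: pb_val[j])
            order = sorted((j for j in range(swarm) if j != i),
                           key=lambda j: dist(pos[i], pos[j]))
            close = sum(1 for j in order[:k] if dist(pos[i], pb[j]) <= eps)
            w = math.exp(-close / k)                 # Eqs. (6)-(7)
            for j in [i] + order[:k]:                # steps (5)c-d: update the k+1 group
                if updated[j]:
                    continue
                r1, r2 = random.random(), random.random()
                vel[j] = [w * vj + c1 * r1 * (pbj - xj) + c2 * r2 * (gbj - xj)
                          for vj, xj, pbj, gbj in zip(vel[j], pos[j], pb[j], gb)]
                pos[j] = [xj + vj for xj, vj in zip(pos[j], vel[j])]
                updated[j] = True
                val = f(pos[j])
                if val < pb_val[j]:
                    pb[j], pb_val[j] = pos[j][:], val
                    if val < gb_val:
                        gb, gb_val = pos[j][:], val
    return gb, gb_val
```

Each pass through the inner loop marks at least one particle as updated, so step (5) always terminates after at most N group selections per iteration.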
4 Simulation Results and Analysis
4.1 Simulation results and experimental analysis
Four well-known benchmark functions are used to test the efficacy of the proposed method. The four test functions are as follows:
(1) Griewank function: $f(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$, where $x_i \in [-600, 600]$.

(2) Rosenbrock function: $f(x) = \sum_{i=1}^{n-1}\left[100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$, where $x_i \in [-2.048, 2.048]$.

(3) Shubert function: $f(x, y) = \left\{\sum_{i=1}^{5} i \cos[(i+1)x + i]\right\} \cdot \left\{\sum_{i=1}^{5} i \cos[(i+1)y + i]\right\}$, where $-10 \le x, y \le 10$.

(4) Rastrigin function: $f(x) = \sum_{i=1}^{n}\left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$, where $x_i \in [-5.12, 5.12]$.
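For reference, the four benchmarks translate directly into Python (the function names are our own):

```python
import math

def griewank(x):
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def shubert(x, y):
    sx = sum(i * math.cos((i + 1) * x + i) for i in range(1, 6))
    sy = sum(i * math.cos((i + 1) * y + i) for i in range(1, 6))
    return sx * sy

def rastrigin(x):
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)
```

Griewank, Rosenbrock and Rastrigin have their global minimum value 0 (at the origin for Griewank and Rastrigin, at $(1, \ldots, 1)$ for Rosenbrock), which makes them convenient for the log-fitness plots used below.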
The parameters in the experiments are set as follows: the number of particles is 20 and the number of iterations is 100; $c_1 = c_2 = 2$; m is set to 5 and $\varepsilon$ to 1E-8. The initial values for the Cat chaotic sequence are x = 0.1 and y = 0.2, and the initial value for the Logistic chaotic sequence is x = 0.1. For LDWPSO, $w \in [0.4, 0.9]$ is used. The average and best function values obtained over 100 simulation runs by the proposed ACPSO and by the PSO, LDWPSO and LPSO algorithms are taken as the measures. The experimental results are recorded in Table 1.
Table 1: Simulation results obtained from the four methods on the four functions

Function   PSO (Mean / Best)      LDWPSO (Mean / Best)     LPSO (Mean / Best)       ACPSO (Mean / Best)
f1         178.800 / 186.628      1.23E-08 / 0.00E+00      7.69E-04 / 1.56E-13      7.94E-07 / 1.40E-10
f2         181.252 / 186.684      0.000634 / 9.45E-05      0.50359 / 0.096911       0.155697 / 0.006387
f3         182.362 / 186.714      0.000832 / 0.00E+00      0.240173 / 3.55E-14      0.122278 / 3.83E-07
f4         183.704 / 186.728      0.000247 / 0.00E+00      0.100619 / 0.00E+00      0.018017 / 3.48E-13
The results clearly show the superiority of the ACPSO algorithm, which initializes the population using the Cat map, increases the diversity of particles by introducing the chaotic mutation operation, and modifies the parameter w based on the local concentration of particles. For f3, both the average and best values of the proposed algorithm are better than those of the other algorithms. For f1, f2 and f4, the average and best values of the proposed algorithm are superior to those of PSO and LDWPSO. Although the average value of the proposed algorithm is not better than that of LPSO, its best value is superior to LPSO's.
Fig. 2: The evolutionary curve for Griewank (log10 of fitness vs. iteration; PSO, LDWPSO, LPSO, ACPSO)
Fig. 3: The evolutionary curve for Rosenbrock (log10 of fitness vs. iteration; PSO, LDWPSO, LPSO, ACPSO)
The evolutionary optimization curves of the four algorithms on the four test functions are presented in Fig. 2, Fig. 3, Fig. 4 and Fig. 5. The accuracy of the fitness value cannot be portrayed clearly when the fitness is close to zero; therefore, the base-10 logarithm of the fitness is plotted, and the logarithmic value is set to -40 when the computed fitness is 0.
Fig. 4: The evolutionary curve for Shubert (fitness vs. iteration; PSO, LDWPSO, LPSO, ACPSO)
Fig. 5: The evolutionary curve for Rastrigin (log10 of fitness vs. iteration; PSO, LDWPSO, LPSO, ACPSO)
These figures show that the proposed algorithm does not easily fall into local optima and converges faster in the late iterations, owing to the introduction of the chaotic mutation operator and the adaptive weight factor. The convergence performance and search accuracy of PSO are the worst. PSO and LPSO easily fall into local optima and still cannot jump out after many iterations, especially on f1 and f2. Although LDWPSO does not easily fall into local optima, its search accuracy is not high, being roughly on a par with LPSO. The convergence performance of the proposed algorithm is comparable with the other algorithms on f3. On f1, f2 and f4, the proposed algorithm not only has the fastest convergence rate, but can also quickly jump out of local optima after several iterations and then find the global optimum.
4.2 The convergence and computational complexity analysis of ACPSO
(1) Convergence analysis of ACPSO. The original PSO is convergent, as proved in [11]. In this paper, the Cat-map-based initialization, the chaotic mutation operation and the adaptive adjustment of the inertia weight factor are adopted to improve the original PSO; none of these changes the search mechanism of the original PSO, so the proposed method is also convergent. This can also be seen from the evolutionary curves for the four test functions in Section 4.1.
(2) Computational complexity analysis of ACPSO. The maximum iteration count is denoted Iteration_max, the number of particles is denoted n, and the dimension of the search space is denoted d. In the original PSO, the computational complexity is O(2nd + n^2 + 2nd). In ACPSO, the computational complexity is O(2nd + n^2 + 2nd + n lg n); the increase is caused by the sorting operation. The computational complexity of ACPSO is acceptable when it is applied to high-dimensional, complex problems.
5 Conclusions
In this paper, an adaptive chaotic particle swarm optimization algorithm was proposed. A uniform initialization of the population is obtained by using the Cat map, and the inertia weight is adjusted by the particle concentration to improve the convergence rate. The simulation results indicate that the proposed algorithm does not easily fall into local minima and converges faster in the late iterative process than the compared algorithms. The choice of chaotic map is crucial when improving PSO: different chaotic maps differ greatly in distribution uniformity and algorithmic complexity, so how to find a better chaotic map for improving PSO needs further study.
Acknowledgement
This research is supported by the National Natural Science Foundation of China (No. 61263019), the Natural Science Foundation of Gansu Province (No. 1014RJZA028, No. 1112RJZA029) and the Fundamental Research Funds for the Gansu Universities (No. 1114ZTC144).
References
[1] J. Kennedy, R. Eberhart. Particle Swarm Optimization, IEEE International Conference on Neural Networks, 1995, 1942-1948.
[2] Y. W. Jeong, J. B. Park, S. H. Jang et al. A new quantum-inspired binary PSO: application to unit commitment problems for power systems, Power Systems, 25 (2010), 1486-1495.
[3] R. Poli, J. Kennedy, T. Blackwell. Particle Swarm Optimization: An overview, Swarm Intell, 1 (2007), 33-57.
[4] X. M. Tao, L. B. Yang. Cultural particle swarm optimization algorithm with adaptive guidance, Computer Engineering and Applications, 47 (2011), 37-41.
[5] P. Huang, J. Y. Yu, Y. Q. Yuan. Improved Niching Multi-objective Particle Swarm Optimization Algorithm, Computer Engineering, 37 (2011), 1-3.
[6] A. Al. Particle swarm optimization algorithm with dynamic inertia weight for online parameter identification applied to Lorenz chaotic system, International Journal of Innovative Computing, Information and Control, 8 (2012), 1191-1203.
[7] B. Alatas, E. Akin, O. A. Bedri. Chaos embedded particle swarm optimization algorithms, Chaos, Solitons & Fractals, 40 (2009), 1715-1734.
[8] R. F. Liu, X. Y. Wang. Simplified particle swarm optimization algorithm using chaotic inertia weight, Computer Engineering and Applications, 47 (2011), 58-60.
[9] Q. D. Wang, J. F. Li, J. Zhou. Modification of Cao's multicast key management scheme based on generalized cat map, Computer Applications, 31 (2011), 975-977.
[10] L. Ermann, D. L. Shepelyansky. The Arnold cat map, the Ulam method and time reversal, Physica D: Nonlinear Phenomena, 241 (2012), 514-518.
[11] S. Rana, S. Jasola, R. Kumar. A review on particle swarm optimization algorithms and their applications to data clustering, Artificial Intelligence Review, 35 (2010), 211-222.
[12] J. S. H. Dominguez. A comparison on the search of particle swarm optimization and differential evolution on multi-objective optimization, IEEE Congress on Evolutionary Computation (CEC), 2011, 1978-1985.
[13] Y. Shi, R. C. Eberhart. Empirical study of particle swarm optimization, International Conference on Evolutionary Computation, Washington, USA: IEEE, 1999, 1945-1950.
