PSO and Its Variants: Swarm Intelligence Group, Peking University
Applications of PSO
Classical and standard PSO
The swarm is better than any single individual
Classical and standard PSO
$V_{id}(t+1) = w\,V_{id}(t) + c_1\,r_1\,(p_{id}(t) - x_{id}(t)) + c_2\,r_2\,(g_d(t) - x_{id}(t))$
$x_{id}(t+1) = x_{id}(t) + V_{id}(t+1)$
[Figure: vector diagram in the x-y plane showing how $V_{id}(t)$, $p_{id}(t)$, and $g_d(t)$ combine to move the particle from $x_{id}(t)$ to $x_{id}(t+1)$.]
Flow chart depicting the General PSO Algorithm:
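The loop behind this flow chart can be written compactly. Below is a minimal Python sketch, assuming a gbest topology, minimization, and commonly used parameter values; the names (pso, fitness, iters) are illustrative rather than from the slides.

import numpy as np

def pso(fitness, dim, n_particles=50, iters=200,
        w=0.729, c1=1.49445, c2=1.49445, lo=-5.0, hi=5.0, seed=None):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # initialize positions
    v = np.zeros((n_particles, dim))                 # initialize velocities
    pbest = x.copy()                                 # personal best positions
    pbest_f = np.apply_along_axis(fitness, 1, x)     # personal best fitness
    g = pbest[np.argmin(pbest_f)].copy()             # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.apply_along_axis(fitness, 1, x)
        improved = f < pbest_f                       # minimization
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Example: minimize the sphere function in 10 dimensions.
best_x, best_f = pso(lambda z: float(np.sum(z * z)), dim=10, seed=0)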
[Figure: eight PSO simulation runs (simulation 1 to simulation 8), each plotting fitness (min to max) over the x-y search space.]
Schwefel's function
$f(x) = \sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right)$
where $-500 \le x_i \le 500$
Global optimum: $f(x^*) = n \cdot 418.9829$ at $x_i = 420.9687$, $i = 1, \dots, n$
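As a reference for the experiment below, Schwefel's function is straightforward to code; a minimal sketch:

import numpy as np

def schwefel(x):
    # f(x) = sum_i x_i * sin(sqrt(|x_i|)); optimum n * 418.9829 at x_i = 420.9687
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * np.sin(np.sqrt(np.abs(x)))))

print(schwefel([420.9687, 420.9687]))  # approx 837.9658 for n = 2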
[Figure: swarm evolution at initialization and after 5, 10, 15, 20, 25, 100, and 500 iterations.]
Search result
Iteration   Swarm best
0           416.245599
5           515.748796
10          759.404006
15          793.732019
20          834.813763
100         837.911535
5000        837.965771
Global      837.9658
Standard benchmark functions
1) Sphere Function
$f(x) = \sum_{i=1}^{n} x_i^2, \quad x \in [-5, 5]^n$
2) Rosenbrock Function
$f(x) = \sum_{i=1}^{n-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right], \quad x \in [-10, 10]^n$
3) Rastrigin Function
$f(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$
4) Ackley Function
$f(x) = -20\exp\!\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\!\left(\tfrac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e, \quad x \in [-32, 32]^n$
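Minimal Python sketches of the four benchmark functions above, vectorized with NumPy (the function names are illustrative):

import numpy as np

def sphere(x):
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def rosenbrock(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))

def rastrigin(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)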
Composition Function
Analysis of PSO: state of the art
Stagnation and convergence
Clerc 2002:
- "The particle swarm - explosion, stability, and convergence in a multidimensional complex space", 2002
Kennedy 2005:
- "Dynamic-Probabilistic Particle Swarms", 2005
Poli 2007:
- "Exact Analysis of the Sampling Distribution for the Canonical Particle Swarm Optimiser and its Convergence during Stagnation", 2007
- "On the Moments of the Sampling Distribution of Particle Swarm Optimisers", 2007
- "Markov Chain Models of Bare-Bones Particle Swarm Optimizers", 2007
Standard PSO:
- "Defining a Standard for Particle Swarm Optimization", 2007
Analysis of PSO: state of the art
Standard PSO: constriction factor and convergence
Update formula:
$V_{id} \leftarrow \chi\left[V_{id} + c_1\,\mathrm{Rand}()\,(p_{id} - x_{id}) + c_2\,\mathrm{Rand}()\,(g_d - x_{id})\right]$
Equivalent inertia-weight form (with $w = \chi$ and $c_i = \chi\,\varphi_i$):
$V_{id} \leftarrow w\,V_{id} + c_1\,\mathrm{Rand}()\,(p_{id} - x_{id}) + c_2\,\mathrm{Rand}()\,(g_d - x_{id})$
$x_{id} \leftarrow x_{id} + V_{id}$
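A small sketch of the constriction coefficient for $\varphi = \varphi_1 + \varphi_2 > 4$, showing the equivalent inertia-weight parameters; the default $\varphi_i = 2.05$ gives the widely used $\chi \approx 0.7298$:

import math

def constriction(phi1=2.05, phi2=2.05):
    # Constriction coefficient; requires phi = phi1 + phi2 > 4.
    phi = phi1 + phi2
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    # Equivalent inertia-weight parameters: w = chi, c_i = chi * phi_i.
    return chi, chi * phi1, chi * phi2

chi, c1, c2 = constriction()
print(chi, c1, c2)  # approx 0.72984, 1.49618, 1.49618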
Analysis of PSO: state of the art
Standard PSO:
- 50 particles
- Non-uniform initialization
- No fitness evaluation when a particle is out of the boundary (see the sketch below)
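A minimal sketch of the third rule: out-of-boundary particles keep flying but are not evaluated, so they cannot update their personal bests and are eventually pulled back by the attraction terms. The helper name evaluate_in_bounds is illustrative.

import numpy as np

def evaluate_in_bounds(fitness, x, pbest, pbest_f, lo, hi):
    inside = np.all((x >= lo) & (x <= hi), axis=1)   # which particles are feasible
    for i in np.where(inside)[0]:
        f = fitness(x[i])
        if f < pbest_f[i]:                           # minimization
            pbest[i], pbest_f[i] = x[i].copy(), f
    return pbest, pbest_f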
Analysis of PSO: state of the art
Standard PSO:
- A local ring topology (see the sketch below)
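A minimal sketch of the ring topology: each particle takes its neighborhood best from itself and its two ring neighbors (the helper name ring_lbest is illustrative).

import numpy as np

def ring_lbest(pbest, pbest_f):
    n = len(pbest_f)
    lbest = np.empty_like(pbest)
    for i in range(n):
        neighbors = [(i - 1) % n, i, (i + 1) % n]     # wrap around the ring
        j = min(neighbors, key=lambda k: pbest_f[k])  # best neighbor (minimization)
        lbest[i] = pbest[j]
    return lbest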
Analysis of PSO: state of the art
How does PSO work?
- Stagnation versus the objective function
- Classical PSO versus standard PSO
- Search strategy versus performance
Classical PSO
Main idea ("Particle swarm optimization", 1995):
- Exploit the current best positions (pbest and gbest)
- Explore the unknown space
Classical PSO
Implementation: the attraction toward pbest and gbest provides exploitation, while the inertia term wV provides exploration.
Analysis of PSO: our idea
The sampling probability density of each step is computable:
$x(t+1) = x(t) + w\,V(t) + Z$
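A Monte Carlo sketch of this one-step sampling distribution for a single 1-D particle, assuming $Z = c_1 r_1 (p - x) + c_2 r_2 (g - x)$ with $r_1, r_2 \sim U(0,1)$; all numeric values are illustrative.

import numpy as np

# One-step update x(t+1) = x(t) + w*V(t) + Z for a single 1-D particle.
rng = np.random.default_rng(0)
x, v, p, g = 0.0, 0.5, 1.0, 2.0            # illustrative values
w, c1, c2 = 0.729, 1.49445, 1.49445

r1 = rng.random(100_000)
r2 = rng.random(100_000)
samples = x + w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)

# Z is a sum of two independent (scaled) uniforms, so its density is
# trapezoidal; a histogram of the samples recovers that shape.
density, edges = np.histogram(samples, bins=50, density=True)
print(samples.mean())  # approx x + w*v + (c1*(p - x) + c2*(g - x)) / 2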
Analysis of PSO: our idea
[Figure: the sampling probability of the update, with the inertia term wV indicated.]
Analysis of PSO: our idea
[Figure: sampling distributions without the inertia part (wV) versus with the inertia part (wV).]
Analysis of PSO: our idea
Differences among PSO variants: how each balances the probability of exploitation against exploration.
Analysis of PSO: our idea
What are the properties of the iteration?
Analysis of PSO: our idea
Is the search strategy the same, or is PSO adaptive, under:
- the same parameters (during the convergence process)?
- different parameters?
- different dimensionalities?
- different numbers of particles?
- different topologies?
- different objective functions?
- different search phases (gentle or sharp slopes, stagnation, etc.)?
What is the change pattern of the search strategy?
Analysis of PSO: our idea
What would a better PSO look like in terms of search strategy?
- Simpler implementation
- One parameter as a tuning knob instead of the two in standard PSO
- Provably equivalent to standard PSO for particular parameter values
- Effective on most objective functions
- Adaptive
Analysis of PSO: our idea
Model the iteration as a Markov chain with a state transition matrix (see the sketch below).
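A minimal sketch of estimating a state transition matrix from an observed state sequence; discretizing the swarm's behavior into a finite state set is an assumption, not a detail from the slides.

import numpy as np

def transition_matrix(states, n_states):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1.0                          # count observed transitions
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize rows to probabilities; unvisited states keep all-zero rows.
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Example: a toy sequence over 3 states.
seq = [0, 1, 1, 2, 0, 1, 2, 2, 0]
P = transition_matrix(seq, 3)
print(P)  # each visited row sums to 1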
Analysis of PSO: our idea
Model the search as a random process (a Gaussian process with a kernel mapping): seven factors feed a single mapping to the probability of exploitation.
- x1: w (inertia weight)
- x2: c (acceleration coefficient)
- x3: dim (dimensionality)
- x4: Num (number of particles)
- x5: Top (topology)
- x6: FEs (function evaluations)
- x7: Fun (objective function)
$f\!\left(\sum_i x_i\right) = P(\mathrm{Exploitation})$ (see the sketch below)
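A minimal sketch of such a mapping from the seven factors to $P(\mathrm{Exploitation})$; the logistic choice of $f$, the weights theta, and all numeric encodings are assumptions for illustration, not from the slides.

import numpy as np

def p_exploitation(features, theta):
    z = np.dot(theta, features)        # weighted combination of the factors
    return 1.0 / (1.0 + np.exp(-z))    # squash into a probability (assumed f)

# features = [w, c, dim, Num, Top, FEs, Fun] encoded numerically (assumed).
features = np.array([0.729, 1.494, 30.0, 50.0, 1.0, 3000.0, 1.0])
theta = np.array([0.5, 0.2, -0.01, 0.01, 0.1, -0.0001, 0.0])
print(p_exploitation(features, theta))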
Current results
Variance with convergence
func_num=1;
fes_num=5000;
run_num=10;
particles_num=50;
dims_num=30;
Current results
Variance with dimensions
func_num=1;
fes_num=3000;
run_num=10;
particles_num=50;
Current results
Variance with number of particles
func_num=1;
fes_num=3000;
run_num=10;
dims_num=30;
Current results
Variance with topology
Current results
Variance with inertia weight
Current results
1. Shifted Sphere Function
2. Shifted Schwefel's Problem 1.2
[Figure: 3D surface plots of the Shifted Sphere Function and Shifted Schwefel's Problem 1.2 over $[-100, 100]^2$; z-axis scaled by $10^4$.]
PSO on Benchmark Functions
3. Shifted Rotated High Conditioned Elliptic Function
4. Shifted Schwefel's Problem 1.2 with Noise in Fitness
[Figure: 3D surface plots of the Shifted Rotated High Conditioned Elliptic Function and Shifted Schwefel's Problem 1.2 with Noise in Fitness over $[-100, 100]^2$.]
Current results
Variance with objective functions
Unimodal Functions
Multimodal Functions
Save the global best particle of each generation as a parent for the clone operator in the second step.
After n iterations, clone the n saved global best particles.
Mutate all cloned particles by random perturbation.
Select among them with a concentration-based diversity preservation strategy (a sketch of the whole scheme follows).
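A minimal sketch of these four steps; the concentration measure (a crowding count within a fixed radius), the fitness-plus-concentration score, and all parameter values are assumptions for illustration.

import numpy as np

def clone_mutate_select(saved_gbests, fitness, n_clones=5, sigma=0.1,
                        radius=0.5, n_keep=10, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    parents = np.asarray(saved_gbests, dtype=float)   # saved per-generation gbests
    clones = np.repeat(parents, n_clones, axis=0)     # clone each parent
    clones += rng.normal(0.0, sigma, clones.shape)    # random perturbation mutation
    f = np.apply_along_axis(fitness, 1, clones)
    # Concentration of each clone: how many clones fall within `radius` of it.
    d = np.linalg.norm(clones[:, None, :] - clones[None, :, :], axis=-1)
    conc = (d < radius).sum(axis=1).astype(float)
    # Prefer good fitness in low-concentration (less crowded) regions.
    score = f + conc                                  # minimization: lower is better
    keep = np.argsort(score)[:n_keep]
    return clones[keep], f[keep]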
Our variants of PSO
- MPSO
- AR-CPSO
- FPSO
Applications of PSO