Unit 4 Comprehensive Learning and Multimodal

The document discusses various extensions and variants of particle swarm optimization (PSO) algorithms. It describes the Comprehensive Learning PSO (CLPSO) algorithm, which allows each particle to learn from any other particle instead of just the global best. It also covers multi-objective PSO approaches, including vector evaluated PSO (VEPSO) which uses multiple swarms to optimize multiple objectives simultaneously. Finally, it notes the wide range of applications of PSO algorithms across many fields such as electrical engineering, artificial intelligence, bioinformatics, and operations research.

30 K.E. Parsopoulos

where,

    z^{[i,k]} = ( p_g^{[1]}, ..., p_g^{[k-1]}, x_i^{[k]}, p_g^{[k+1]}, ..., p_g^{[d]} ),

where x_i^{[k]} is the current position of the i-th particle of the k-th swarm, which is
under evaluation. Naturally, instead of the overall bests of the swarms, randomly
selected best positions can be used in the context vector. Also, swarms of higher
dimension can be used. However, both these alternatives can radically change
the algorithm’s performance. Obviously, the context vector z constitutes the best
approximation of the problem's solution with CPSO-S_k.
The second variant presented in [146], denoted as CPSO-H_k, combines CPSO-S_k
with the Canonical PSO and applies the two algorithms alternately in subsequent
iterations. In addition, information exchange between the two algorithms was
considered by sharing half of the discovered solutions between them. The experimental
assessment revealed that both CPSO-Sk and CPSO-Hk are promising, opening the
ground for further developments such as the ones in [48, 136, 152, 167].
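Evaluating a subswarm particle through the context vector can be sketched as follows. This is a minimal illustration only, assuming NumPy arrays and equal-sized subswarms; the function name and the example split are my own choices, not notation from [146]:

```python
import numpy as np

def evaluate_in_context(f, swarm_bests, k, x_k):
    """Evaluate position x_k of a particle in the k-th subswarm by plugging it
    into the context vector built from the other subswarms' best positions.
    `swarm_bests` holds the current overall best position of each subswarm."""
    context = [best.copy() for best in swarm_bests]
    context[k] = x_k                 # substitute the particle under evaluation
    z = np.concatenate(context)      # full-dimensional context vector z
    return f(z)

# Example: sphere function split into 3 subswarms of 2 dimensions each.
sphere = lambda z: float(np.sum(z ** 2))
bests = [np.zeros(2), np.ones(2), np.zeros(2)]
value = evaluate_in_context(sphere, bests, k=1, x_k=np.array([0.0, 0.0]))
# substituting zeros for subswarm 1 gives the sphere optimum, value 0.0
```

As noted in the text, replacing the overall bests in `swarm_bests` with randomly selected best positions is a possible alternative that can radically change performance.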

Comprehensive Learning PSO

The Comprehensive Learning PSO (CLPSO) [72] was proposed in 2006 as an
alternative for alleviating gbest PSO's premature convergence problem, which can
be attributed to the use of the overall best position in the update equation of the
velocities. In CLPSO, each particle can use the best position of any other particle to
independently update its velocity, based on a probabilistic scheme.
Specifically, the velocity update of Eq. (4) is replaced with the following [72],

    v_{ij}^{(t+1)} = v_{ij}^{(t)} + C ( p_{q[i,j], j}^{(t)} - x_{ij}^{(t)} ),                    (40)

where j ∈ D, i ∈ I, and q[i,j] ∈ I is the index of the particle that is used for
the update of the j-th component of the i-th particle's velocity vector. Naturally,
this particle can be either the i-th particle itself or another particle from the
swarm. This decision is probabilistically made according to predefined probabilities
δ_1, δ_2, ..., δ_d, i.e.,

    q[i,j] = { i,            if R ≤ δ_j,
             { TOURN(I'),    otherwise,          for all j ∈ D,

where R ~ U(0,1) is a uniformly distributed random variable, I' = I \ {i}, and


TOURN(I') is an index selected from I' through tournament selection [72]. The
latter procedure includes the random selection of two particles from the set I'. The
best between them, i.e., the one with the smallest objective value, is the winner and
participates in the update of v_{ij}.
In case of q[i,j] = i for all j ∈ D, one of the components of v_{ij} is randomly
selected and determined anew by using another particle. Also, the indices q[i,j] are
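The probabilistic exemplar selection described above, including the tournament and the self-learning safeguard, can be sketched as follows. The probabilities δ_j and all function and variable names are illustrative reconstructions, not code from [72]; a swarm of at least three particles is assumed so that the tournament is well defined:

```python
import random

def select_exemplars(fitness, i, delta):
    """For particle i, choose the exemplar index q[i, j] for each dimension j.
    With probability delta[j] the particle learns from itself; otherwise a
    two-particle tournament over I' = I \\ {i} picks the exemplar (the one
    with the smaller objective value wins)."""
    n = len(fitness)
    others = [p for p in range(n) if p != i]   # the set I'
    q = []
    for j in range(len(delta)):
        if random.random() <= delta[j]:
            q.append(i)
        else:
            a, b = random.sample(others, 2)    # tournament selection
            q.append(a if fitness[a] < fitness[b] else b)
    # Safeguard: if the particle would learn only from itself, one randomly
    # chosen component is determined anew by using another particle.
    if all(idx == i for idx in q):
        j = random.randrange(len(q))
        a, b = random.sample(others, 2)
        q[j] = a if fitness[a] < fitness[b] else b
    return q
```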

where f'^{(m)}(x) is the m-th re-evaluation of x using f'(x). Re-evaluation serves
as a means for approximating the expected value of the noisy objective function,
i.e., F(x) ≈ E[f'(x)]. Accuracy increases with the number M of re-evaluations,
although so does the computational cost.
Thus, the trade-off between better estimations of the objective values and the
corresponding computational burden must be tuned. In such cases, specialized
techniques such as the Optimal Computing Budget Allocation (OCBA) [16] have
been used to optimally allocate the re-evaluations budget in order to provide reliable
evaluation and identification of the promising particles [91]. These techniques can
be used along with proper parameter tuning [8] or learning strategies [110] for
improved results. Also, they do not require the modification of the algorithm. Alter-
natively, specialized operators have been proposed with remarkable success [47].
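A minimal sketch of the re-evaluation scheme described above; the helper name and the noisy example objective are illustrative assumptions:

```python
import random
import statistics

def estimate_objective(f_noisy, x, M):
    """Approximate F(x) = E[f'(x)] by averaging M independent re-evaluations
    of the noisy objective f'. Larger M improves the estimate at M times
    the evaluation cost."""
    return statistics.fmean(f_noisy(x) for _ in range(M))

# Example: sphere objective corrupted by additive uniform noise.
random.seed(0)
noisy_sphere = lambda x: sum(t * t for t in x) + random.uniform(-0.5, 0.5)
estimate = estimate_objective(noisy_sphere, [1.0, 2.0], M=200)
# the estimate concentrates around the true value 5.0 as M grows
```

Budget-allocation schemes such as OCBA [16] refine this by spending more re-evaluations on the particles that matter for the comparison, rather than a fixed M everywhere.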

Multiobjective PSO

Multiobjective optimization (MO) problems consist of a number of objective
functions that need to be simultaneously optimized. In contrast to the definition of
single-objective problems in Eq. (1), an MO problem is defined as the minimization
of a vector function [24],

    f(x) = ( f_1(x), f_2(x), ..., f_K(x) )^T,

possibly subject to constraints C_i(x) ≤ 0, i = 1, 2, ..., m. Typically, the objective
functions f_k(x) can be conflicting. Thus, it is highly improbable that a single
solution that globally minimizes all of them can be found.
For this reason, the main interest in such problems is concentrated on the
detection of Pareto optimal solutions. These solutions are nondominated by any
other point in the search space, i.e., they are at least as good as any other point for
all the objectives f_k(x). Formally, if x, y are two points in the search space X, then
f(x) is said to dominate f(y), and we denote f(x) ≺ f(y), if it holds that,

    f_k(x) ≤ f_k(y),    for all k = 1, 2, ..., K,

and,

    f_{k'}(x) < f_{k'}(y),    for at least one k' ∈ {1, 2, ..., K}.

Thus, x* ∈ X is a Pareto optimal point if there is no other point y ∈ X such
that f(y) ≺ f(x*). Obviously, an (even infinite) set {x*_1, x*_2, ...} of Pareto optimal
solutions may exist. The set {f(x*_1), f(x*_2), ...} is called the Pareto front.
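The two dominance conditions translate directly into a predicate, and a naive quadratic filter then extracts the nondominated vectors. The function names are illustrative, not notation from [24]:

```python
def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy: fx is no worse in
    every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def pareto_front(points):
    """Objective vectors in `points` not dominated by any other vector."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_front([(1, 4), (2, 2), (3, 3), (4, 1)])
# (3, 3) is dominated by (2, 2); the other three are mutually nondominated
```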
There are two main approaches for tackling MO problems. The first one
aggregates the objectives into a single one and solves the problem with the typical
methodologies for single-objective optimization. The second approach requires

vector evaluated operators and it is based on the concept of Pareto dominance. In
the context of PSO, early aggregation approaches appeared in 2002 [99], where the
Canonical PSO was used for the minimization of a weighted aggregation of the
objective functions,

    F(x) = Σ_{k=1}^{K} w_k f_k(x),    with    Σ_{k=1}^{K} w_k = 1.

Both a conventional weighted aggregation (CWA) approach with fixed weights as
well as a dynamic weighted aggregation (DWA) approach [53] were investigated
with promising results. Obviously, the detection of many Pareto optimal solutions
through weighted aggregation requires multiple applications of PSO, since each run
provides a single solution of F(x). From the computational point of view, this is a
drawback since the swarms can simultaneously evolve many solutions. Yet, it is still
a popular approach in applications mostly due to its simplicity.
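The aggregation transform can be sketched as follows; the example objectives and the equal weights are illustrative assumptions:

```python
def weighted_aggregation(objectives, weights):
    """Fold K objectives into the single objective F(x) = sum_k w_k f_k(x).
    The weights must be nonnegative and sum to 1; CWA keeps them fixed,
    while DWA changes them gradually during the run."""
    assert abs(sum(weights) - 1.0) < 1e-12
    def F(x):
        return sum(w * f(x) for w, f in zip(weights, objectives))
    return F

# Two conflicting one-dimensional objectives with equal weights.
f1 = lambda x: x ** 2            # minimized at x = 0
f2 = lambda x: (x - 2) ** 2      # minimized at x = 2
F = weighted_aggregation([f1, f2], [0.5, 0.5])
# a single PSO run on F yields one Pareto optimal point; F(1.0) = 1.0 here
```

This makes concrete why multiple runs are needed: each weight vector turns the MO problem into one single-objective problem with one solution.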
A Vector Evaluated PSO (VEPSO) was also proposed in [99] and parallelized
later in [96]. VEPSO uses K swarms, one for each objective f_k. The k-th swarm
S_k is evaluated only with the corresponding objective f_k, k = 1, 2, ..., K. The
swarms are updated according to the gbest model of the Canonical
PSO, although with a slight modification. Specifically, the overall best that is used
for the velocity update of the particles in the k-th swarm comes from another
swarm. Clearly, this is a migration scheme aiming at transferring information among
swarms. The donor swarm can be either a neighbor of the k-th swarm in a ring
topology scheme as the one described in section “Concept of Neighborhood” or it
can be randomly selected [99]. VEPSO was studied on standard MO benchmark
problems with promising results [96].
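The migration scheme in the ring-topology case can be sketched as follows. The direction of the exchange (swarm k receiving the best of swarm k−1) is an assumption for illustration, since [99] also allows randomly selected donors:

```python
def ring_donor_bests(swarm_bests):
    """For each of the K swarms, pick the overall best used in its velocity
    update from the neighboring swarm in a ring topology: swarm k receives
    the best position of swarm k-1 (indices taken modulo K)."""
    K = len(swarm_bests)
    return [swarm_bests[(k - 1) % K] for k in range(K)]

donors = ring_donor_bests(["g0", "g1", "g2"])
# swarm 0 uses g2, swarm 1 uses g0, swarm 2 uses g1
```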
There is a large number of new developments and applications of multiobjective
PSO approaches in the literature [1, 2, 27, 29, 32, 51, 75, 156, 160, 169]. The interested
reader can find comprehensive surveys in [105, 121].

Applications

It would be futile to even try to enumerate all applications of PSO that have
been published so far. From 2005 onward, more than 400 papers on PSO
applications have appeared every year, spanning various scientific and technological fields.
Electrical Engineering accounts for the majority of these works, especially in
the fields of power systems, control, antenna design, electromagnetics, sensors,
networks and communications. Artificial Intelligence also hosts a large number of
PSO-based applications, especially in robotics, machine learning, and data mining.
Bioinformatics and Operations Research follow closely, with numerous works in
modeling, health-care systems, scheduling, routing, supply chain management, and
forecasting.
