Heuristic Methods
2.1 Introduction
Decomposition Methods
The original problem is broken down into subproblems that are simpler to solve, bearing in mind, at least in a general way, that the subproblems belong to the same problem class.
Inductive Methods
The idea behind these methods is to generalize from smaller or simpler versions of the problem to the general case. Properties or techniques identified in these cases, which are easier to analyze, can then be applied to the whole problem.
Reduction Methods
These involve identifying properties that are mainly fulfilled by good solutions and introducing them as constraints on the problem. The objective is to restrict the solution space by simplifying the problem. The obvious risk is that the optimum solutions of the original problem may be excluded.
Constructive Methods
These involve building a solution to the problem step by step from scratch. Usually they are deterministic methods and tend to be based on making the best available choice in each iteration. These methods have been widely used in classic combinatorial optimization.
There are diverse possibilities for measuring the quality of a heuristic, among which
we find the following.
There are situations where no optimum solution is available for a problem, not even for a limited set of instances. An alternative evaluation method consists of comparing the value of the solution provided by the heuristic with a bound for the problem (a lower bound if it is a minimization problem and an upper bound if it is a maximization problem). Obviously, the quality of this assessment depends on the quality of the bound (its closeness to the optimum), so we must have some information about the quality of this bound; otherwise the proposed comparison is of little interest.
Comparison with other heuristics is one of the most commonly used evaluation methods for difficult problems that have been studied for a long time and for which good heuristics are known. As with the bound comparisons, the conclusion of such a comparison concerns only the relative quality of the chosen heuristics.
Given that the LOP has been studied in depth from both the exact and the heuristic point of view, we have the value of an optimum solution for small and medium-sized instances.
One method that was well accepted for a time is the analysis of the behavior of the heuristic algorithm in the worst case; i.e., one considers the instances that are least favorable for the algorithm and derives analytical bounds on the maximum deviation from the optimum solution of the problem. The strength of this method is that it establishes limits on the algorithm's results for any instance. However, for the same reason, the results tend not to be representative of the average behavior of the algorithm. Furthermore, the analysis can become very complicated for more sophisticated heuristics.
An algorithm A for a maximization problem is called $\varepsilon$-approximative if there is a constant $\varepsilon > 0$ such that, for every problem instance, the algorithm is guaranteed to find a feasible solution with value $c_A$ satisfying $c_A \geq (1 - \varepsilon)\, c_{\mathrm{opt}}$, where $c_{\mathrm{opt}}$ denotes the optimum value.
2.2 Construction Heuristics

We will now review some of the construction heuristics, i.e., methods which follow some principle for successively constructing a linear ordering. The principle should somehow reflect that we are searching for an ordering with high value.
One of the earliest heuristic methods was proposed by Chenery and Watanabe [32].
These authors did not formulate an algorithm, but just gave some ideas of how to
obtain plausible rankings of the sectors of an input-output table. Their suggestion
is to rank those sectors first which show a small share of inputs from other sectors
and of outputs to final demand. Sectors having a large share of inputs from other
industries and of final demand output should be ranked last. Chenery and Watanabe
defined coefficients taking these ideas into account to find a preliminary ranking.
Then they try to improve this ranking in some heuristic way which is not specified
in their paper. The authors admit that their method does not necessarily lead to good
approximate solutions of the triangulation problem.
This method, proposed by Aujac and Masson [6], is based on so-called output coefficients. The output coefficient of a sector i with respect to another sector j is defined as
\[ b_{ij} = \frac{c_{ij}}{\sum_{k \neq i} c_{ik}}. \]
Then it is intended to rank sector i before sector j whenever $b_{ij} > b_{ji}$ ("better customer principle"). In general this is impossible to achieve for all pairs simultaneously, so one heuristically tries to find a linear ordering with few contradictions to this principle. Subsequently, local changes are performed to achieve better triangulations. Similarly, an input coefficient method can be formulated based on the input coefficients
\[ a_{ij} = \frac{c_{ij}}{\sum_{k \neq j} c_{kj}}. \]
In [8], Becker describes two further methods. The first is related to the previous ones in that it computes special quotients to rank the sectors. For each sector i the number
\[ q_i = \frac{\sum_{k \neq i} c_{ik}}{\sum_{k \neq i} c_{ki}} \]
is determined. The sector with the largest quotient $q_i$ is then ranked highest. Its
corresponding rows and columns are deleted from the matrix, and the procedure is
applied to the remaining sectors.
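A minimal Python sketch of this quotient procedure follows (the names, the matrix convention C[i][k] for $c_{ik}$, and the zero-denominator guard are our own assumptions, not part of Becker's description):

def becker_quotients(C):
    """Becker's quotient heuristic (sketch): repeatedly rank first the
    remaining sector with the largest quotient q_i, computed over the
    sectors that are still unranked."""
    S = set(range(len(C)))
    order = []
    while S:
        # q_i = (row sum over S) / (column sum over S); the "or 1e-9"
        # guards against a zero column sum (our assumption)
        q = {i: sum(C[i][k] for k in S if k != i) /
                (sum(C[k][i] for k in S if k != i) or 1e-9)
             for i in S}
        best = max(q, key=q.get)
        order.append(best)
        S.remove(best)
    return order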
The second method starts with an arbitrarily chosen linear ordering, w.l.o.g.
1, 2, . . . , n. Then for every m = 1, 2, . . . , n − 1 the objective function values of the
orderings m + 1, m + 2, . . . , n, 1, . . . , m are evaluated. The best one among them is
chosen, and the procedure is repeated as long as improvements are possible.
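The rotation scheme can be sketched as follows; objective() computes the LOP value of an ordering and is reused by later sketches (the function names are ours):

def objective(C, order):
    """LOP value of an ordering: sum of the weights C[a][b] over all
    pairs where a precedes b."""
    return sum(C[a][b] for i, a in enumerate(order) for b in order[i + 1:])

def becker_rotations(C, order):
    """Becker's rotation heuristic (sketch): evaluate all cyclic rotations
    of the current ordering, move to the best one, repeat while improving."""
    order = list(order)
    best_val = objective(C, order)
    while True:
        rotations = [order[m:] + order[:m] for m in range(1, len(order))]
        cand = max(rotations, key=lambda o: objective(C, o))
        if objective(C, cand) <= best_val:
            return order
        order, best_val = cand, objective(C, cand)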
This is a simple heuristic which builds an ordering by inserting the next objects at
positions which are locally optimal.
Best Insertion
(1) Select an arbitrary object j and set S = {1, 2, . . . , n} \ { j}. Let j be the current ordering.
(2) For k = 1, 2, . . . , n − 1:
(2.1) Let $i_1, i_2, \ldots, i_k$ denote the current ordering and choose some l ∈ S.
(2.2) For every t, 1 ≤ t ≤ k + 1, compute $q_t = \sum_{j=1}^{t-1} c_{i_j l} + \sum_{j=t}^{k} c_{l i_j}$, the sum of entries gained when l is inserted at position t.
(2.3) Insert l at a position t with maximum $q_t$ and set S = S \ {l}.
In a second variant, the evaluation is modified to also account for the sum of entries which are "lost" when l is inserted at position t; the two variants appear as BI1 and BI2 in the experiments below.
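A sketch of the basic variant in Python (our own naming; for simplicity the arbitrary first object is taken to be object 0 rather than chosen at random):

def best_insertion(C):
    """Best Insertion construction (sketch of the basic variant)."""
    n = len(C)
    remaining = list(range(n))
    ordering = [remaining.pop(0)]           # arbitrary starting object
    for l in remaining:
        k = len(ordering)
        # q_t = weight gained when l lands at position t
        # (1-indexed in the text, 0-indexed here)
        best_t = max(range(k + 1), key=lambda t:
                     sum(C[ordering[j]][l] for j in range(t)) +
                     sum(C[l][ordering[j]] for j in range(t, k)))
        ordering.insert(best_t, l)
    return ordering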
Table 2.1 reports on our results for 7 constructive heuristics on the OPT-I set
(the set of 229 instances with optimum known). In this experiment we compute
for each instance and each method the relative deviation Dev (in percent) between
the best solution value Value obtained with the method and the optimal value for
that instance. For each method, we also report the number of instances #Opt for
which an optimum solution could be found. In addition, we calculate the so-called
score statistic [114] associated with each method. For each instance, the nrank of method M is defined as the number of methods that produce a better solution than the one found by M. In the event of ties, the tied methods receive the same nrank, equal to the number of methods strictly better than all of them. The value of Score is the sum of the nrank values over all the instances in the experiment; thus, the lower the Score, the better the method (a computational sketch of this statistic is given after the following list). We do not report running times in this table because these methods are very fast, with running times below 1 millisecond. Specifically, Table 2.1 shows results for:
– CW: Chenery and Watanabe algorithm
– AM-O: Aujac and Masson algorithm (output coefficients)
– AM-I: Aujac and Masson algorithm (input coefficients)
– Bcq: Becker algorithm (based on quotients)
– Bcr: Becker algorithm (based on rotations)
– BI1: Best Insertion algorithm (variant 1)
– BI2: Best Insertion algorithm (variant 2)
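The following sketch illustrates the score statistic as defined above (the layout values[m][i], holding the value of method m on instance i, is our own assumption):

def score_statistic(values):
    """Score of each method: sum over instances of its nrank, i.e. the
    number of methods with a strictly better (larger) value on that
    instance; tied methods automatically share the same nrank."""
    n_methods, n_instances = len(values), len(values[0])
    scores = [0] * n_methods
    for i in range(n_instances):
        column = [values[m][i] for m in range(n_methods)]
        for m in range(n_methods):
            scores[m] += sum(1 for v in column if v > column[m])
    return scores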
Results in Table 2.1 clearly indicate that the OPT-I instances pose a challenge for these simple heuristics, with average percentage deviations ranging from 3.49% to 32.97%. In most cases none of the methods is able to match the optimum solution (the exceptions being BI1 and BI2, with 4 and 3 optima respectively on the Special instances). These results show that only Bcq, BI1 and BI2 can be considered reasonable construction heuristics (with an average percentage deviation lower than 5%).
2.3 Local Search

After having constructed an ordering with one of the heuristics above, it is reasonable to look for improvement possibilities. In this section we describe fairly simple (deterministic) local improvement methods that are able to produce acceptable solutions for the LOP. The basic philosophy driving local search is that it is often possible to find a good solution by repeatedly increasing the quality of a given solution through small changes, called moves. The different types of possible moves characterize the various heuristics. Starting from a solution generated by a construction heuristic, a typical local search performs moves as long as the objective function increases.
Local search can only be expected to obtain optimum or near-optimum solutions
for easy problems of medium size, but it is a very important and powerful concept
for the design of meta-heuristics, which are the topic of the next chapter.
2.3.1 Insertion
This heuristic checks whether the objective function can be improved if the position
of an object in the current ordering is changed. All possibilities for altering the po-
sition of an object are checked and the method stops when no further improvement
is possible this way.
In problems where solutions are represented as permutations, insertions are probably the most direct and efficient way to modify a solution. Note that other moves, such as swaps, can be obtained by composing two or more insertions. We define move(O_j, i) as the modification which deletes O_j from its current position j in permutation O and inserts it at position i (i.e., between the objects currently at positions i − 1 and i).
Now, the insertion heuristic tries to find improving moves by examining, possibly, all new positions for all objects O_j in the current permutation O. There are several ways of organizing the search for improving moves. For our experiments we proceeded as follows:
Insertion
(1) Compute an initial permutation O = O1 , O2 , . . . , On .
(2) For j = 1, 2, . . . , n:
(2.1) Evaluate all possible insertions move(O j , i).
(2.2) Let move(Ok , i∗ ) be the best of these moves.
(2.3) If move(Ok , i∗ ) is improving then perform it and update O.
(3) If some improving move was found, then goto (2).
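A Python sketch of this local search follows (the names are ours). The move value can be evaluated incrementally, since only the objects between the old and the new position are affected:

def insertion_value(C, order, j, i):
    """Change of the LOP objective when order[j] is moved to position i."""
    x = order[j]
    if i < j:    # x jumps in front of the objects at positions i..j-1
        return sum(C[x][order[k]] - C[order[k]][x] for k in range(i, j))
    # x jumps behind the objects at positions j+1..i
    return sum(C[order[k]][x] - C[x][order[k]] for k in range(j + 1, i + 1))

def insertion_local_search(C, order):
    """Best-improvement insertion local search (sketch)."""
    order = list(order)
    improved = True
    while improved:
        improved = False
        for j in range(len(order)):
            deltas = [insertion_value(C, order, j, i) for i in range(len(order))]
            best_i = max(range(len(order)), key=deltas.__getitem__)
            if deltas[best_i] > 0:       # perform the move only if improving
                order.insert(best_i, order.pop(j))
                improved = True
    return order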
In [86] two neighborhoods are studied in the context of local search methods for
the LOP. The first one consists of permutations obtained by switching the positions
of contiguous objects O j and O j+1 . The second one involves all permutations result-
ing from executing general insertion moves, as defined above. The conclusion from
the experiments is that the second neighborhood clearly outperforms the first one,
which is much more limited. Furthermore, two strategies for exploring the neighborhood of a solution were studied. The "best" strategy selects the move with the largest move value among all moves in the neighborhood. The "first" strategy, on the other hand, scans the list of objects (in the order given by the current permutation) searching for the first object whose movement gives a strictly positive move value. The computations revealed that both strategies provide solutions of similar quality, but the "first" strategy requires lower running times.
2.3.2 The Method of Chanas and Kobylanski
The method developed by Chanas and Kobylanski [32], referred to as the CK method in the following, is based on the following symmetry property of the LOP: if the permutation O = O_1, O_2, . . . , O_n is an optimum solution of the maximization problem, then O∗ = O_n, O_{n−1}, . . . , O_1 is an optimum solution of the minimization problem.
In other words, when the sum of the elements above the main diagonal is maxi-
mized, the sum of the elements below the diagonal is minimized. The CK method
utilizes this property to escape local optimality. In particular, once a local optimum
solution O is found, the process is re-started from the permutation O∗ . This is called
the REVERSE operation.
In a global iteration, the CK method performs insertions as long as the solution
improves. Given a solution, the algorithm explores the insertion move move(O j , i)
of each element O j in all the positions i in O, and performs the best one. When
no further improvement is possible, it generates a new solution by applying the
REVERSE operation from the last solution obtained, and performs a new global
iteration. The method finishes when the best solution found cannot be improved
upon in the current global iteration.
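Assuming the insertion local search sketched in the previous section, the CK scheme might be expressed as follows (a sketch, not the authors' implementation):

def ck_method(C, order):
    """Sketch of the CK method: insertion local search plus REVERSE
    restarts, stopping when a global iteration brings no improvement."""
    best = insertion_local_search(C, order)
    best_val = objective(C, best)
    while True:
        candidate = insertion_local_search(C, best[::-1])   # REVERSE, then re-optimize
        val = objective(C, candidate)
        if val <= best_val:
            return best
        best, best_val = candidate, val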
It should be noted that the CK method can be considered to be a generalization
of the second heuristic of Becker described above. The latter evaluates the orderings
that can be obtained by rotations of a solution, while the CK method evaluates all
insertions. Since these rotations are basically insertions of the first elements to the
last positions, we can conclude that Becker’s method explores only a fraction of the
solutions explored by CK.
2.3.3 k-opt
The k-opt improvement follows a principle that can be applied to many combinato-
rial optimization problems. Basically, it selects k elements of a solution and locally
optimizes with respect to these elements. For the LOP, a possible k-opt heuristic
would be to consider all subsets of k objects Oi1 , . . . , Oik in the current permuta-
tion and find the best assignment of these objects to the positions i1 , . . . , ik . Since
the number of possible new assignments grows exponentially with k, we have only
implemented 2-opt and 3-opt.
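A naive sketch of the k-opt step follows (for clarity it re-evaluates the complete objective using objective() from an earlier sketch; a practical implementation would compute the gains incrementally):

from itertools import combinations, permutations

def k_opt(C, order, k=2):
    """k-opt (sketch): for every set of k positions, try all k! assignments
    of the chosen objects to these positions and keep the best."""
    order = list(order)
    improved = True
    while improved:
        improved = False
        for pos in combinations(range(len(order)), k):
            objs = [order[p] for p in pos]
            base = objective(C, order)
            for perm in permutations(objs):
                trial = list(order)
                for p, o in zip(pos, perm):
                    trial[p] = o
                if objective(C, trial) > base:
                    order, base = trial, objective(C, trial)
                    improved = True
    return order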
2.3.4 Kernighan-Lin Type Heuristics

The main problem with local improvement heuristics is that they very quickly get
trapped in a local optimum. Kernighan and Lin proposed the idea (originally in [78]
for a partitioning problem) of looking for more complicated moves that are com-
posed of simpler moves. In contrast to pure improvement heuristics, some of the simple moves are allowed to be non-improving. In this way the objective can decrease
locally, but new possibilities arise for escaping from the local optimum. This type
of heuristic proved particularly effective for the traveling salesman problem (where
it is usually named Lin-Kernighan heuristic).
We only describe the principle of the Kernighan-Lin approach. For practical ap-
plications on large problems, it has to be implemented carefully with appropriate
data structures and further enhancements like restricted search or limited length of
combined moves to speed up the search for improving moves. We do not elaborate
on this here.
We devised two heuristics of this type for the LOP. In the first version, the basic
move consists of interchanging two objects in the current permutation.
Kernighan-Lin 1
(1) Compute some linear ordering O.
(2) Let m = 1, Sm = {1, 2, . . . , n}.
(3) Determine objects s, t ∈ S_m, s ≠ t, the interchange of which in the current ordering leads to the largest increase g_m of the objective function (the increase may be negative).
(4) Interchange s and t in the current ordering. Set s_m = s and t_m = t.
(5) If m < ⌊n/2⌋, set S_{m+1} = S_m \ {s, t} and m = m + 1. Goto (3).
(6) Determine 1 ≤ k ≤ m such that $G = \sum_{i=1}^{k} g_i$ is maximum.
(7) If G ≤ 0 then stop; otherwise, starting from the original ordering O, successively interchange s_i and t_i, for i = 1, 2, . . . , k. Let O denote the new ordering and goto (2).
Kernighan-Lin 2
(1) Compute some linear ordering O.
(2) Let m = 1, Sm = {1, 2, . . . , n}.
(3) Among all possibilities for inserting an object of S_m at a new position, determine the one leading to the largest increase g_m of the objective function (the increase may be negative). Let s be this object and p the new position.
(4) Move s to position p in the current ordering. Set s_m = s and p_m = p.
(5) If m < n, set S_{m+1} = S_m \ {s} and m = m + 1. Goto (3).
(6) Determine 1 ≤ k ≤ m such that $G = \sum_{i=1}^{k} g_i$ is maximum.
(7) If G ≤ 0 then stop; otherwise, starting from the original ordering O, successively move s_i to position p_i, for i = 1, 2, . . . , k. Let O denote the new ordering and goto (2).
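A sketch of the first variant may clarify the chaining mechanism (our own naming; objective() is taken from an earlier sketch, and the full objective is recomputed for clarity rather than updated incrementally):

def kernighan_lin_exchange(C, order):
    """Kernighan-Lin 1 (sketch): build a chain of up to n/2 interchanges,
    each locally best but possibly negative, then keep the best prefix."""
    order = list(order)
    n = len(order)
    while True:
        S = set(range(n))                  # positions not yet involved
        trial, cur = list(order), objective(C, order)
        gains, swaps = [], []
        while len(swaps) < n // 2 and len(S) >= 2:
            best_g, best_pair = float("-inf"), None
            for s in S:
                for t in S:
                    if s < t:
                        trial[s], trial[t] = trial[t], trial[s]
                        g = objective(C, trial) - cur
                        trial[s], trial[t] = trial[t], trial[s]
                        if g > best_g:
                            best_g, best_pair = g, (s, t)
            s, t = best_pair
            trial[s], trial[t] = trial[t], trial[s]
            cur += best_g
            gains.append(best_g)
            swaps.append((s, t))
            S -= {s, t}
        if not gains:
            return order
        prefix = [sum(gains[:k + 1]) for k in range(len(gains))]
        G = max(prefix)
        if G <= 0:                          # step (7): stop if no net gain
            return order
        for s, t in swaps[:prefix.index(G) + 1]:
            order[s], order[t] = order[t], order[s]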
2.3.5 Local Enumeration
This heuristic chooses windows $i_k, i_{k+1}, \ldots, i_{k+L-1}$ of a given length L of the current ordering $i_1, i_2, \ldots, i_n$ and determines the optimum subsequence of the respective objects by enumerating all possible orderings. The window is moved along the complete sequence until no more improvements can be found. Of course, L cannot be chosen too large because the enumeration needs time O(L!).
Local Enumeration
(1) Compute some linear ordering O.
(2) For i = 1, . . . , n − L + 1:
(2.1) Find the best possible rearrangement of the objects at positions
i, i + 1, . . . , i + L − 1.
(3) If an improving move has been found in the previous loop, then
goto (2).
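A sketch of this heuristic follows; note that permuting a window leaves the relative order between window objects and all other objects unchanged, so only pairs inside the window have to be scored (L = 4 is an arbitrary choice of ours):

from itertools import permutations

def local_enumeration(C, order, L=4):
    """Window enumeration (sketch): exhaustively reorder each window
    of length L, repeating full passes while improvements are found."""
    def window_value(w):
        # only pairs inside the window change when the window is permuted
        return sum(C[a][b] for i, a in enumerate(w) for b in w[i + 1:])

    order = list(order)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - L + 1):
            window = order[i:i + L]
            best = max(permutations(window), key=window_value)
            if window_value(best) > window_value(window):
                order[i:i + L] = list(best)
                improved = True
    return order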
Table 2.2 reports on our results for 7 improving heuristics on the OPT-I set of
instances. As in the construction heuristics, we report, for each instance and each
method, the relative percent deviation Dev, the number of instances #Opt for which
an optimum solution is found, and the score statistic. Similarly, we do not report
running times in this table because these methods are fairly fast. Specifically, the
results obtained with the following improvement methods (started with a random
initial solution) are given:
– LSi: Local Search based on insertions
– 2opt: Local Search based on 2-opt
– 3opt: Local Search based on 3-opt
– LSe: Local Search based on exchanges
– KL1: Kernighan-Lin based on exchanges
– KL2: Kernighan-Lin based on insertions
– LE: Local enumeration
As expected, the improvement methods are able to obtain better solutions than
the construction heuristics, with average percentage deviations (shown in Table 2.2)
ranging from 0.57% to 2.30% (the average percentage deviations of the construc-
tion heuristics range from 3.49% to 32.97% as reported in Table 2.1). We have not
observed significant differences when applying the improvement methods from different initial solutions. For example, as shown in Table 2.2, the LSi method exhibits a Dev value of 0.16% on the RandomAII instances when it is started from random solutions. When it is run from the CW or the Bcr solutions, it obtains Dev values of 0.17% and 0.18%, respectively.
2.4 Multi-Start Procedures
In multi-start methods, constructive procedures are used to generate solutions to launch a succession of new searches for a global optimum. The general outline of a multi-start procedure is the following.
Multi-Start
(1) Set i=1.
(2) While the stopping condition is not satisfied:
(2.1) Construct a solution xi . (Generation)
(2.2) Apply local search to improve xi and let xi be the solution ob-
tained. (Improvement)
(2.3) If xi improves the best solution, update it. Set i = i + 1. (Test)
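A minimal sketch of this template (construct and improve are placeholders for any of the methods in this chapter; objective() is taken from an earlier sketch):

def multi_start(C, construct, improve, iterations=100):
    """Multi-start template (sketch): construct, improve, keep the best.
    construct(C) returns an ordering; improve(C, x) returns a local optimum."""
    best, best_val = None, float("-inf")
    for _ in range(iterations):
        x = construct(C)                 # (2.1) Generation
        x = improve(C, x)                # (2.2) Improvement
        val = objective(C, x)            # (2.3) Test
        if val > best_val:
            best, best_val = x, val
    return best, best_val

For instance, construct could be the Best Insertion sketch and improve the insertion local search sketched above.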
solution to yield corresponding adaptive local minima. The authors test this method
for the traveling salesman problem and obtain significant speedups over previous
multi-start implementations.
Simple forms of multi-start methods are often used to compare other methods
and measure their relative contribution. In [7], genetic algorithms are compared on six sets of benchmark problems commonly found in the genetic algorithms literature: the traveling salesman problem, job-shop scheduling, knapsack and bin packing problems, neural network weight optimization, and numerical function optimization. The author uses the multi-start method (multiple restart stochastic hill-climbing) as a baseline for computational testing. Since solutions are represented as
strings, the improvement step consists of a local search based on random flipping of
bits. The results indicate that using genetic algorithms for the optimization of static
functions does not yield a benefit, in terms of the final answer obtained, over simpler
optimization heuristics.
One of the best known multi-start methods is the greedy randomized adaptive search procedure (GRASP). The GRASP methodology was introduced by Feo and Resende [45] and was first used to solve set covering problems [44]. We will devote a section of the next chapter to describing this methodology in detail.
A multi-start algorithm for unconstrained global optimization based on quasi-random samples is presented in [67]. Quasi-random samples are sets of deterministic (as opposed to random) points that are evenly distributed over a set. The algorithm applies an inexpensive local search (steepest descent) on a set of quasi-random points to concentrate the sample. The sample is reduced by replacing worse points with new quasi-random points. Any point that is retained for a certain number of iterations is used to start an efficient complete local search. The algorithm terminates when no new local minimum is found after several iterations. An experimental comparison shows that the method compares favorably with other global optimization procedures.
An open question in the design of a good search procedure is whether it is better to implement a simple improving method that allows a great number of global iterations or, alternatively, to apply a complex routine that significantly improves a few generated solutions. A simple procedure depends heavily on the initial solution, but a more elaborate method takes much more running time and therefore can only be applied a few times, thus reducing the sampling of the solution space. Some
meta-heuristics, such as GRASP, launch limited local searches from numerous con-
structions (i.e., starting points). In other methods, such as tabu search, the search
starts from one initial point and, if a restarting procedure is also part of the method,
it is invoked only a limited number of times. In [94] the balance between restart-
ing and search-depth (i.e., the time spent searching from a single starting point)
is studied in the context of the matrix bandwidth problem. Both alternatives were
tested, with the conclusion that it was better to invest the time searching from a few starting points than restarting the search more often. Although we cannot draw a general conclusion from these experiments, experience in the current context and in previous projects indicates that some meta-heuristics, like tabu search, need to reach a critical search depth to be effective. If this search depth is not reached, the effectiveness of the method is severely compromised.
In this section we describe and compare 10 different constructive methods for the LOP. Note that if a constructive method is completely deterministic (with no random elements), replicating it (running it several times) will always produce the same solution. Therefore, we add random selections to a constructive method in order to obtain different solutions across replications. Alternatively, we can modify selections from one construction to the next in a deterministic way by recording and using frequency information. We will look at both approaches, which will enable us to design constructive methods for the LOP that can be embedded in a multi-start procedure.
Above we have described the construction heuristic of Becker [8], in which for each object i the value $q_i$ is computed. The objects are then ranked according to the q-values $q_i = \sum_{k \neq i} c_{ik} / \sum_{k \neq i} c_{ki}$.
We now compute two other values that can also be used to measure the attractiveness of ranking an object first. Specifically, $r_i$ and $c_i$ are, respectively, the sum of the elements in the row corresponding to object i and the sum of the elements in the column of object i, i.e., $r_i = \sum_{k \neq i} c_{ik}$ and $c_i = \sum_{k \neq i} c_{ki}$.
Constructive Method G1
This method first computes the $r_i$-values for all objects. Then, instead of selecting the object with the largest r-value, it creates a list with the most attractive objects according to the r-values and randomly selects one among them. The selected object is placed first and the process is repeated for n iterations. At each iteration the r-values are updated to reflect previous selections (i.e., we sum the $c_{ik}$ only across the unselected elements) and the candidate list for selection is computed with the highest evaluated objects. The method combines random selection with greedy evaluation, and the size of the candidate list determines the relative contribution of these two elements.
Constructive method G1
(1) Set S = {1, 2, . . . , n}. Let α ∈ [0, 1] be the percentage for selection and O be the empty ordering.
(2) For t = 1, 2, . . . , n:
(2.1) Compute $r_i = \sum_{k \in S, k \neq i} c_{ik}$ for all i ∈ S.
(2.2) Let r∗ = max{$r_i$ | i ∈ S}.
(2.3) Compute the candidate list C = {i ∈ S | $r_i$ ≥ α r∗}.
(2.4) Randomly select j∗ ∈ C, place j∗ at position t in O and set S = S \ { j∗ }.
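A sketch of G1 in Python (our own naming; the remark on the role of α in the comment assumes nonnegative weights):

import random

def construct_g1(C, alpha=0.8):
    """Constructive method G1 (sketch): greedy-randomized selection by
    row sums restricted to the unselected objects. For nonnegative
    weights, alpha = 1 is purely greedy and alpha = 0 purely random
    (alpha = 0.8 is an arbitrary default of ours)."""
    S = set(range(len(C)))
    order = []
    while S:
        r = {i: sum(C[i][k] for k in S if k != i) for i in S}
        r_star = max(r.values())
        candidates = [i for i in S if r[i] >= alpha * r_star]
        j = random.choice(candidates)
        order.append(j)                  # next free position from the front
        S.remove(j)
    return order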
Constructive Method G2

Method G2 is based on the $c_i$-values computed above. It works in the same way as G1, but the attractiveness of object i is now measured by $c_i$ instead of $r_i$. Objects with large c-values are now placed in the last positions.
Constructive method G2
(1) Set S = {1, 2, . . . , n}. Let α ∈ [0, 1] be the percentage for selection and O be the empty ordering.
(2) For t = n, n − 1, . . . , 1:
(2.1) Compute $c_i = \sum_{k \in S, k \neq i} c_{ki}$ for all i ∈ S.
(2.2) Let c∗ = max{$c_i$ | i ∈ S}.
(2.3) Compute the candidate list C = {i ∈ S | $c_i$ ≥ α c∗}.
(2.4) Randomly select j∗ ∈ C, place j∗ at position t in O and set S = S \ { j∗ }.
Constructive Methods G4, G5 and G6

These methods are designed analogously to G1–G3, except that the selection of objects is made from a candidate list of the least attractive ones and the solution is constructed starting from the last position of the permutation. We give the specification of G6, which is a modification of G3.
Constructive method G6
(1) Set S = {1, 2, . . . , n}. Let α ≥ 0 be the percentage for selection and O be the empty ordering.
(2) For t = 1, 2, . . . , n:
(2.1) For all i ∈ S, compute
\[ q_i = \frac{\sum_{k \in S, k \neq i} c_{ik}}{\sum_{k \in S, k \neq i} c_{ki}}. \]
(2.2) Compute the candidate list C of the least attractive objects of S according to the q-values (as determined by α).
(2.3) Randomly select j∗ ∈ C, place j∗ at the last free position of O and set S = S \ { j∗ }.
Constructive Method MIX

This is a mixed procedure derived from the previous six. It generates a fraction of its solutions with each of the six methods above and combines these solutions into a single set. That is, if n solutions are required, then each method Gi, i = 1, . . . , 6, contributes n/6 solutions.
Constructive Method DG

This generator constructs solutions in a systematic, deterministic way, without using the objective function values; as discussed below, it aims purely at diversity.
Constructive Method FQ
The idea of this method is to discourage objects from occupying positions that they have frequently occupied in previous solution generations.
The constructive method FQ (proposed in [19]) is based on the notion of constructing solutions employing modified frequencies. The generator
exploits the permutation structure of a linear ordering. A frequency counter is
maintained to record the number of times an element i appears in position j. The
frequency counters are used to penalize the “attractiveness” of an element with re-
spect to a given position. To illustrate this, suppose that the generator has created
30 solutions. If 20 out of the 30 solutions have element 3 in position 5, then the
frequency counter freq(3, 5) = 20. This frequency value is used to bias the potential
assignment of element 3 in position 5 during subsequent constructions, thus induc-
ing diversification with respect to the solutions already generated.
The attractiveness of assigning object i to position j is given by the greedy function fq(i, j), which modifies the value of $q_i$ to reflect previous assignments of object i to position j, as follows:
\[ fq(i, j) = \frac{\sum_{k \neq i} c_{ik}}{\sum_{k \neq i} c_{ki}} - \beta \, \frac{\max_q}{\max_f} \, freq(i, j), \]
where $\max_q$ and $\max_f$ denote the maximum q-value and the maximum frequency counter, respectively, so that the penalty term is scaled to the magnitude of the q-values.
Constructive method FQ
(1) Set S = {1, 2, . . . , n}. Let β ∈ [0, 1] be the percentage for diversification and freq(i, j) be the number of times object i has been assigned to position j in previous constructions.
(2) For t = 1, 2, . . . , n:
(2.1) For all i, j ∈ S compute $fq(i, j) = \frac{\sum_{k \neq i} c_{ik}}{\sum_{k \neq i} c_{ki}} - \beta \, \frac{\max_q}{\max_f} \, freq(i, j)$.
(2.2) Let i∗ and j∗ be such that fq(i∗ , j∗ ) = max{fq(i, j) | i, j ∈ S}.
(2.3) Place i∗ at position j∗ in O and set S = S \ {i∗ }.
(2.4) Set freq(i∗ , j∗ ) = freq(i∗ , j∗ ) + 1.
It is important to point out that fq(i, j) is an adaptive function since its value
depends on attributes of the unassigned elements at each iteration of the construction
procedure.
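A sketch of one FQ construction (our own naming; β = 0.4 is an arbitrary default, and the zero-denominator guard is our assumption):

def construct_fq(C, freq, beta=0.4):
    """Constructive method FQ (sketch). freq[i][j] counts earlier
    assignments of object i to position j; the scaling of the penalty
    follows the fq() definition above."""
    n = len(C)
    S, positions = set(range(n)), set(range(n))
    order = [None] * n
    q = {i: sum(C[i][k] for k in range(n) if k != i) /
            (sum(C[k][i] for k in range(n) if k != i) or 1e-9)
         for i in range(n)}
    max_q = max(q.values())
    while S:
        max_f = max(freq[i][j] for i in S for j in positions) or 1
        i, j = max(((i, j) for i in S for j in positions),
                   key=lambda p: q[p[0]] - beta * (max_q / max_f) * freq[p[0]][p[1]])
        order[j] = i
        freq[i][j] += 1
        S.remove(i)
        positions.remove(j)
    return order

The counters would be initialized to zero once, e.g. freq = [[0] * n for _ in range(n)], and then persist across successive constructions so that later solutions are pushed away from earlier ones.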
In our first experiment we use the instance stabu75 from LOLIB. We have generated a set of 100 solutions with each of the 10 generation methods. Figure 2.1 shows, in a box-and-whisker-plot representation, the values of the 100 solutions generated with each method. Since the LOP is a maximization problem, it is clear that the higher the values, the better the method. We can therefore say that constructive method G3 obtains the best results. Other methods, such as FQ and MIX, also obtain
solutions with very good values, but their box-plot representation indicates that they
also produce lower quality solutions. However, if the construction is part of a global
method (as is the case in multi-start methods), we may prefer a constructive method
able to obtain solutions with different structures rather than a constructive method
that provides very similar solutions. Note that if every solution is subjected to lo-
cal search, then it is preferable to generate solutions scattered in the search space
as starting points for the local search phase rather than good solutions concentrated
in the same area of the solution space. Therefore, we need to establish a trade-off between quality and diversity when selecting our construction method.
Given a set of solutions P represented as permutations, in [95] a diversity mea-
sure d is proposed which consists of computing the distances between each solution
and a “center” of P. The sum (or alternatively the average) of these |P| distances
provides a measure of the diversity of P. The diversity measure d is calculated as
follows:
(1) Calculate the median position of each element i in the solutions in P.
(2) Calculate the dissimilarity (distance) of each solution in the population with respect to the median solution. The dissimilarity is calculated as the sum of the absolute differences between the positions of the elements in the solution under consideration and in the median solution.
(3) Calculate d as the sum of all the individual dissimilarities.
For example, assume that P consists of the orderings (A, B, C, D), (B, D, C, A), and (C, B, A, D). The median position of element A is therefore 3, since it occupies positions 1, 3 and 4 in the given orderings. In the same way, the median positions of B, C and D are 2, 3 and 4, respectively. Note that the median positions might not induce an ordering, as in the case of this example. The diversity value of the first solution is then calculated as d_1 = |1 − 3| + |2 − 2| + |3 − 3| + |4 − 4| = 2.
In the same way, the diversity values of the other two solutions are obtained as
d2 = 4 and d3 = 2. The diversity measure d of P is then given by d = 2 + 4 + 2 = 8.
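The measure is easy to state in code (a sketch; orderings are lists over a common set of elements):

import statistics

def diversity(P):
    """Diversity d of a set of orderings P: sum over solutions of the
    L1 distance between each element's position and its median position."""
    median = {e: statistics.median(p.index(e) for p in P) for e in P[0]}
    return sum(abs(p.index(e) - median[e]) for p in P for e in p)

Called on the three orderings of the example, diversity([list("ABCD"), list("BDCA"), list("CBAD")]) returns 8, matching the computation above.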
We then continue with our experiment to compare the different constructive
methods for the LOP. As described above, we have generated a set of 100 solutions
with each of the 10 generation methods. Figure 2.2 shows the box-and-whisker plot
of the diversity values of the solution set obtained with each method.
Figure 2.2 shows that MIX and FQ obtain the highest diversity values (but also
generate other solutions with low diversity values). As expected, the random con-
structive method RND consistently produces high diversity values (always generat-
ing solutions with an associated d-value over 800 in the diagram).
As mentioned, a good method must produce a set of solutions with high quality
and high diversity. If we compare, for example, generators MIX and G3 we observe
in Fig. 2.1 that G3 produces slightly better solutions in terms of solution quality, but
Fig. 2.2 shows that MIX outperforms G3 in terms of diversity. Therefore, we will
probably consider MIX as a better method than G3. In order to rank the methods we
have computed the average of both measures across each set.
Figure 2.3 shows the average of the diversity values on the x-axis and the average
of the quality on the y-axis. A point is plotted for each method.
As expected, the random generator RND produces a high diversity value (as mea-
sured by the dissimilarity) but a low quality value. DG matches the diversity of RND
using a systematic approach instead of randomness, but as it does not use the value
of solutions, it also presents a low quality score. The mixed method MIX provides a
good balance between dissimilarity and quality, by uniting solutions generated with methods G1 to G6.

Fig. 2.3 Average diversity (x-axis) versus average objective function value (y-axis) of the 10 constructive methods
We consider quality and diversity equally important, so we have added both averages. To do so, we use two relative measures, ΔC for quality and Δd for diversity. They are basically standardizations that translate the average objective function values and the average diversity values, respectively, to the [0, 1] interval. In this way we can simply add both quantities.
Fig. 2.4 Relative measures ΔC, Δd and their sum ΔC + Δd for the 10 constructive methods
Figure 2.4 clearly shows the following ranking of the 10 methods, where the overall best is the FQ generator: G5, G4, G2, G1, DG, RND, G6, G3, MIX and FQ. These results are in line with previous works showing that the inclusion of memory structures in constructive methods pays off.