Assignment Problem
by
Jingxin Guo
B.S., Western Washington University, 2015
in the
Department of Applied Mathematics
Faculty of Science
Abraham Punnen
Senior Supervisor
Professor
George Zhang
Supervisor
Professor
Beedie School of Business
Binay Bhattacharya
Internal Examiner
Professor
Department of Computing Science
Abstract

Dedication
Acknowledgements
First, I would like to express my sincere gratitude to my senior supervisor, Professor Punnen; I cannot imagine finishing this thesis without his support. During the past two years, he became my best friend and a role model. He helped me not only academically; he also taught me to become a better person. I could never ask for a better supervisor.
I also want to thank my supervisor Professor Zhang. He is the reason I am here today. He introduced me to the Operations Research program at SFU. I would also like to thank Ms. Kouzniak for her support and insightful advice. In addition, I would like to thank Professor Bhattacharya for serving as my internal examiner.
I am also deeply grateful to Professor Kropinski and Professor Mishna. I would like to thank them for supporting me during the most difficult time of graduate school. Without their support, I might not have been able to finish the program.
I also want to thank Dr. Wu. He spent a lot of his time teaching me computer science. I am thankful for all his help and patience.
In addition, I would like to thank John Hayfron, Ellen Harris, Michelle Spencer and Pooja Pandey for giving me feedback on my thesis.
I also want to thank Xiaorui Li, Jiaojiao Yang, Shanwen Yan and Rimi Sharma for their support, as well as Weixing Cui, Arthur and Joe Ngai for giving me helpful advice.
Last, I would like to thank my family for supporting me in every possible way. I could
never ask for a better family.
Table of Contents

Approval
Abstract
Dedication
Acknowledgements
Table of Contents
List of Figures
1 Introduction
  1.1 Linear-fractional Programming
  1.2 LFP as a linear program
  1.3 Combinatorial Fractional Programming
  1.4 Integer Fractional Programs
  1.5 Linear Fractional Assignment Problem
2 LFAP formulations
  2.1 Mixed 0-1 linear programming formulations
  2.2 MILP formulation
  2.3 Computational Experiments
    2.3.1 Test Problem Generation
  2.4 Computational Results
3 Newton's Method
  3.1 Introduction
  3.2 Heuristic Newton's method
  3.3 Computational Experiments
  3.4 Experimental Analysis
4 Local Search Algorithms
  4.1 Introduction
  4.2 Local Search
  4.3 Experimental Analysis
5 Conclusion
6 Bibliography
List of Tables

Table 4.12 Results of Right Skewed Problems with data size dependent range
Table 4.13 Results of Negative Correlated Problems
Table 4.14 Results of Negative Correlated Problems
Table 4.15 Results of Positive Correlated Problems-increased
Table 4.16 Results of Positive Correlated Problems-increased
Table 4.17 Results of Positive Correlated Problems-decreased
Table 4.18 Results of Positive Correlated Problems-decreased
List of Figures

Figure 3.1 The graph of f(λ) is shown in thick lines. The straight lines corresponding to x² and x⁶ are the same.
Figure 3.2 Problem Size v.s. Times of Negative Correlated Problems
Figure 4.1 Problem Size v.s. Times of Symmetric Problems with fixed range
Chapter 1
Introduction
1.1 Linear-fractional Programming

The linear fractional programming problem (LFP) can be stated as:

Minimize (c^T x + α) / (d^T x + β)
Subject to Ax ≥ b,  (1.1)
x ≥ 0,  (1.2)
where A is an m × n matrix, b is a column vector of size m, x, c and d are column vectors of size n, and α and β are scalars. If d is the zero vector of size n, then the LFP reduces to the
well known linear programming problem [27]. LFP has been studied by many authors and
we refer to the book [2] for details. LFP can be used in modeling many real life problems
such as those in the financial industry [16] and production planning [23], among others.
Many authors have proposed various algorithms to solve the model [1, 9, 20, 24, 32, 41, 42].
The objective function of an LFP has properties very similar to those of a linear program.
When d^T x + β > 0 for all feasible solutions x, if the LFP has an optimal solution, then there
exists an optimal solution that corresponds to an extreme point of the polytope defined by
(1.1) and (1.2). Further, a local optimum on this polytope is also a global optimum. To
validate this, let us now consider some properties of LFP.
A function f is said to be quasiconvex on domain X if for all x_1, x_2 ∈ X and for all t ∈ [0, 1],

f(tx_1 + (1 − t)x_2) ≤ max{f(x_1), f(x_2)}.  (1.3)
A function f is explicitly quasiconvex on X if for all x_1, x_2 ∈ X with f(x_1) ≠ f(x_2) and for all t ∈ (0, 1) the inequality (1.3) is strict.
For instance, f(x) = x^{-1} on {x ∈ R | x > 0} is quasiconcave as well as explicitly quasiconcave.

Theorem 1.1. [31] The objective function f(x) = (c^T x + α) / (d^T x + β) of an LFP is both quasiconvex and quasiconcave.
Proof. To prove that f(x) = (c^T x + α) / (d^T x + β) is both quasiconvex and quasiconcave, by definition we need to prove that, writing x_0 = tx_1 + (1 − t)x_2, C_i = c^T x_i + α and D_i = d^T x_i + β for i = 0, 1, 2, condition (1.5) becomes:

min{C_1/D_1, C_2/D_2} ≤ C_0/D_0 ≤ max{C_1/D_1, C_2/D_2}.  (1.8)
Suppose that

C_1/D_1 ≥ C_2/D_2.  (1.9)

Then (1.8) reads

C_2/D_2 ≤ C_0/D_0 ≤ C_1/D_1.  (1.10)

Inequality (1.9) is equivalent to

C_1 D_2 − C_2 D_1 ≥ 0  (1.11)

for D_1 D_2 > 0. Multiplying both sides of (1.11) by (1 − γ), where γ ∈ (0, 1), and then adding and subtracting γC_1 D_1, we get the inequality

C_1 D_0 − D_1 C_0 ≥ 0.  (1.13)

Suppose C_1/D_1 ≥ C_0/D_0 and multiply both sides by D_0 D_1 > 0; the inequality still holds since D_0 D_1 > 0, and we get (1.13). Thus, (1.13) proves that, for D_0 D_1 > 0, the left-hand side inequality of (1.10) is valid. The right-hand side of inequality (1.10) can be proven in a similar way.
The above proof is taken from [31] and presented here for completeness.

Theorem 1.3. [32] When d^T x + β > 0 for all feasible x, an optimal solution to the LFP is attained at an extreme point of the polyhedral set defining the feasible region, and a local minimum is a global minimum.
1.2 LFP as a linear program
When d^T x + β > 0, the LFP can be formulated as a linear program (LP), as shown by Charnes and Cooper [9]. We discuss this transformation below.
Consider the LFP:

Minimize (c^T x + α) / (d^T x + β)
Subject to Ax ≥ b,
x ≥ 0.

Introducing a new variable t > 0 with t(d^T x + β) = 1, the LFP becomes:

Minimize t(c^T x + α)
Subject to Ax ≥ b,
t(d^T x + β) = 1,
x ≥ 0,

that is,

Minimize c^T xt + αt
Subject to Ax ≥ b,
td^T x + tβ = 1,
x ≥ 0.

Substituting y = tx, we obtain:

Minimize c^T y + αt
Subject to A(y/t) ≥ b,
d^T y + tβ = 1,
y ≥ 0, t ≥ 0,

which is equivalent to:

Minimize c^T y + αt
Subject to Ay − bt ≥ 0,
d^T y + tβ = 1,
y ≥ 0, t ≥ 0.
When the sign of d^T x + β is not fixed over the feasible region, consider the two problems:

LFP1: Minimize (c^T x + α) / (d^T x + β)
Subject to Ax ≥ b,
d^T x + β ≥ ε,
x ≥ 0

and

LFP2: Minimize (c^T x + α) / (d^T x + β)
Subject to Ax ≥ b,
d^T x + β ≤ −ε,
x ≥ 0,

where ε is a small positive tolerance. An optimal solution to LFP is the best solution amongst those of LFP1 and LFP2. But for LFP2, d^T x + β is negative. LFP2 can be reformulated as:

LFP3: Minimize (−c^T x − α) / (−d^T x − β)
Subject to Ax ≥ b,
d^T x + β ≤ −ε,
x ≥ 0.
Note that LFP3 is equivalent to LFP2 and −d^T x − β > 0. Thus, we can reformulate LFP1 and LFP3 as linear programs as shown earlier. The best solution amongst those of LFP1 and LFP3 gives an optimal solution to LFP, subject to the tolerance limit ε. Thus, without loss of generality, we assume d^T x + β > 0 for all x. Based on the reduction discussed above, LFP can be solved by solving at most two linear programs. Let us now discuss another algorithm that can be used to solve LFP.
Consider the parametric linear problem:

F(λ) = Minimize C(x) − λD(x)
Subject to Ax ≥ b,
x ≥ 0,

where C(x) = c^T x + α and D(x) = d^T x + β.

Theorem 1.4. [2] Let S denote the set of feasible solutions of LFP and D(x) > 0 for all x ∈ S. Then a feasible solution x* is an optimal solution to LFP if and only if

λ* = C(x*)/D(x*) ≤ C(x)/D(x), ∀ x ∈ S,

which holds if and only if

C(x) − λ*D(x) ≥ 0, ∀ x ∈ S,

and thus,

min_{x∈S} {C(x) − λ*D(x)} = 0.
Based on Theorem 1.4, we can develop an algorithm for computing an optimal solution to LFP assuming D(x) > 0 for all x ∈ S. Since D(x) > 0 for all x ∈ S, we have:

∂F(λ)/∂λ = −D(x) < 0.
Algorithm: Newton-LFP

Step 1: Choose a starting solution x^0 ∈ S, compute λ_1 = C(x^0)/D(x^0), and let k = 1;
Step 2: Solve F(λ_k) = min{C(x) − λ_k D(x) : x ∈ S} and let x^k be an optimal solution;
Step 3: If F(λ_k) = 0, stop: x^k is an optimal solution to LFP;
Step 4: Let λ_{k+1} = C(x^k)/D(x^k); let k = k + 1; go to Step 2.
The above algorithm is Dinkelbach's algorithm, proposed by Dinkelbach in 1967 [12]. One can see that it is equivalent to Newton's method for finding the roots of the equation F(λ) = 0, and hence it is also called Newton's method.
The above development is taken from [2] and presented here for completeness.
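To make the iteration concrete, the following is a minimal Python sketch of the method; the oracle solve_parametric and the functions C and D are hypothetical placeholders for a problem-specific subproblem solver.

    # Minimal sketch of Dinkelbach's (Newton's) method, assuming a hypothetical
    # oracle solve_parametric(lam) that returns a minimizer x of C(x) - lam*D(x)
    # over S together with the optimal value F(lam).
    def dinkelbach(x0, C, D, solve_parametric, tol=1e-9, max_iter=1000):
        lam = C(x0) / D(x0)                   # Step 1: ratio at the starting solution
        x = x0
        for _ in range(max_iter):
            x, F_val = solve_parametric(lam)  # Step 2: minimize C(x) - lam*D(x)
            if abs(F_val) <= tol:             # Step 3: F(lam) = 0 => lam is optimal
                return x, lam
            lam = C(x) / D(x)                 # Step 4: update lambda and repeat
        return x, lam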
Other approaches to solve LFP include the simplex method [11, 32], interior point methods [35] and binary search [43].
1.3 Combinatorial Fractional Programming

Let E = {1, 2, ..., m} be a finite set and let F be a family of subsets of E. Given a cost c_e for each e ∈ E, a Linear Combinatorial Optimization Problem (LCOP) is to

Minimize Σ_{e∈S} c_e
Subject to S ∈ F.

Further, let d_e for e ∈ E be another cost defined for the elements of E, and let α and β be real numbers. Then a Fractional Combinatorial Optimization Problem (FCOP) is to

Minimize (α + Σ_{e∈S} c_e) / (β + Σ_{e∈S} d_e)
Subject to S ∈ F.
Let FCOP⁺ denote the FCOP whose denominator β + Σ_{e∈S} d_e is positive for every S ∈ F.

Theorem 1.5. [34] If LCOP is solvable with O(p(m)) comparisons and O(q(m)) additions, then FCOP⁺ is solvable in O(p(m)(q(m) + p(m))) time.

The above result improved the complexity of solving the minimum ratio spanning tree problem to O(|E| log² |V| log log |V|) and the complexity of solving the minimum cycle problem to O(|E||V|² log |V|).
Earlier we mentioned that Newton's method can be used for solving LFP. The same idea can also be used to solve FCOP⁺. Matsui, Saruwatari and Shigeno [33] studied the method and showed it terminates in a polynomial number of iterations. Here, we denote C = max{max_{i=1,...,m} |c_i|, 1} and D = max{max_{i=1,...,m} |d_i|, 1}.

Theorem 1.6. [33] The number of iterations of Newton's method applied to FCOP⁺ is O(log(mCD)) ≤ O(log(mM)) in the worst case, where M = max{C, D}.
This theorem gives a complexity bound for the minimum ratio spanning tree problem of O(T(n, m) log(nCD)) time, where n is the number of nodes, m is the number of edges of the given graph, and T(n, m) represents the complexity of a strongly polynomial time algorithm for the ordinary minimum spanning tree problem. The theorem also shows that the worst case time bound for the fractional assignment problem on a bipartite graph with 2n nodes and m edges is O(√n m (log(2n³CD))(log(mCD))) ≤ O(√n m (log(nCD))²) when Newton's method is applied.
Radzik [39] studied FCOP⁺ and showed that Newton's method, when adapted to solve FCOP⁺, terminates in a polynomial number of iterations. More specifically, he proved that

Theorem 1.7. [39] If in an FCOP problem the coordinates of vectors c and d are integers from [−U, U], then Newton's method finds the optimal solution to FCOP⁺ in O(log(mU)) iterations.

This result is similar to that obtained by Matsui, Saruwatari and Shigeno [33] discussed earlier. Further, Radzik showed that

Theorem 1.8. [39] Newton's method finds an optimal solution to FCOP⁺ in O(m⁴ log² m) iterations.
In terms of incidence vectors, FCOP can be written as:

Minimize (α + Σ_{j=1}^{m} c_j x_j) / (β + Σ_{j=1}^{m} d_j x_j)
Subject to x ∈ F̂.

Let us now discuss linear fractional programs with integer variables, which encompass FCOP when F has a compact representation in terms of its incidence vectors.
1.4 Integer Fractional Programs

In the definition of LFP, if we put the additional restriction that the variables can take only integer values, we get an Integer Linear Fractional Program (ILFP). An ILFP can be represented as follows:

Minimize (c^T x + α) / (d^T x + β)
Subject to Ax ≥ b,
x ≥ 0, x ∈ I,

where I denotes the set of integer vectors.
Applying the Charnes and Cooper transformation discussed earlier, we get:

ILFP1: Minimize c^T y + αt
Subject to Ay − bt ≥ 0,
d^T y + tβ = 1,
y ≥ 0, t ≥ 0, y/t ∈ I.

Note that the constraint y/t ∈ I makes ILFP1 hard to solve. Moreover, such a constraint cannot be used directly within general purpose mixed integer linear programming solvers. For this reason, we have to reformulate it into a different form to be able to use a general purpose solver.
Algorithms for solving ILFP have been proposed by many authors. Robillard [41], Grunspan and Thomas [20] and Anzai [1] developed algorithms for solving integer fractional programs by using an algorithm proposed by Martos and Isbell-Marlow [24, 32] for LFP. Grunspan and Thomas described a general algorithm for solving the integer linear fractional program by reducing it to a sequence of linear integer problems in 1973 [20]. Granot and Granot [18] developed a cutting plane algorithm to solve both integer fractional programming (IFP) and mixed integer fractional programming problems by using Charnes and Cooper's approach for solving the associated continuous fractional programs [9].
As noted earlier, when the integer variables are restricted to take values from {0, 1}, the resulting ILFP is called the Binary Fractional Linear Programming problem (BFLP).
BFLP can be formulated as follows:

Minimize (c^T x + α) / (d^T x + β)
Subject to Ax ≥ b,
x ∈ {0, 1}^n.

Following the transformation of Charnes and Cooper [9] with appropriate modification, we get the following formulation for BFLP:
BFLP1: Minimize c^T y + αt
Subject to Ay − bt ≥ 0,
d^T y + tβ = 1,
y ∈ {0, t}^n, t ≥ 0.  (1.14)

Constraint (1.14) enforces that each component of y takes values from the discrete set {0, t}. Such a constraint cannot be used directly in general purpose solvers such as CPLEX or Gurobi, but it can be handled as follows. We let y = ty′ and obtain the equivalent formulation:
BFLP2: Minimize c^T ty′ + αt
Subject to Aty′ − bt ≥ 0,
d^T ty′ + tβ = 1,
y′ ∈ {0, 1}^n, t ≥ 0.  (1.15)

Note that t is a decision variable. Replacing y′t with z and adding the McCormick envelope inequalities [17], BFLP2 reduces to

BFLP3: Minimize c^T z + αt
Subject to Az − bt ≥ 0,
d^T z + tβ = 1,
z ≥ y′t̲,  (1.16)
z ≥ t + y′t̄ − t̄,  (1.17)
z ≤ t + y′t̲ − t̲,  (1.18)
z ≤ y′t̄,  (1.19)
y′ ∈ {0, 1}^n, t ≥ 0,
t̲ ≤ t ≤ t̄,
where t̲ and t̄ respectively represent lower and upper bounds on t over the set of feasible solutions.
Constraints (1.16)-(1.19) together make sure z = y′t. Note that y′ ∈ {0, 1}. When y′ = 0, constraints (1.17) and (1.18) are equivalent to t − t̄ ≤ z ≤ t − t̲, while (1.16) and (1.19) force z = 0. When y′ = 1, constraints (1.17) and (1.18) are equivalent to z = t.
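As an illustration, the following is a small gurobipy sketch of constraints (1.16)-(1.19) for a single product z = y′t; the bounds t_lo and t_hi and the model name are assumptions made for the example.

    # Sketch of the McCormick linearization of z = y' * t from BFLP3 (gurobipy);
    # t_lo and t_hi are assumed bounds on t.
    import gurobipy as gp
    from gurobipy import GRB

    t_lo, t_hi = 0.0, 10.0                    # assumed bounds on t
    m = gp.Model("mccormick_sketch")
    y = m.addVar(vtype=GRB.BINARY, name="y")  # the binary factor y'
    t = m.addVar(lb=t_lo, ub=t_hi, name="t")  # the bounded continuous factor
    z = m.addVar(lb=-GRB.INFINITY, name="z")  # z stands for the product y' * t

    m.addConstr(z >= t_lo * y)                # (1.16)
    m.addConstr(z >= t + t_hi * y - t_hi)     # (1.17)
    m.addConstr(z <= t + t_lo * y - t_lo)     # (1.18)
    m.addConstr(z <= t_hi * y)                # (1.19)
    # With y = 0 these force z = 0; with y = 1 they force z = t.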
Since all the constraints in BFLP3 can be expressed in a MIP solver, the problem can, in principle, be solved by such a solver. However, since MIP is an NP-hard problem [14], it cannot be solved efficiently as the problem size grows. Using exact algorithms is often impractical for large instances, so heuristic algorithms are commonly used for MIP problems.
In 1968, Hammer and Rudeanu [21] proposed an algorithm to solve the Constrained (0, 1) Fractional Programming (CFP) problem. This method was refined by Robillard in 1971 [41]. Granot and Granot proposed an implicit enumeration method for solving CFP [19] by generating stronger surrogate constraints for CFP. The algorithm was based on Balas' additive algorithm [3] and Geoffrion's backtracking procedure [15]. Furthermore, Beale showed that the method can be used to solve problems with zero-one decision variables by using a code which allows special ordered sets, as defined by Beale and Tomlin [4], without requiring any new algorithms. Beale's method is to group the zero-one variables into multiple-choice sets x_jk such that Σ_k x_jk = 1 for all j. An independent zero-one variable can then be represented as a set of two elements: one is the original zero-one variable and the other is a slack variable which has no entries in other rows. Thus, no additional constraints are required when zero-one variables are involved.
1.5 Linear Fractional Assignment Problem

The assignment problem is one of the fundamental combinatorial optimization problems and it has been studied extensively in the literature [7]. The linear fractional assignment problem is:

LFAP: Minimize (C(x) + α) / (D(x) + β)
Subject to
Σ_{j=1}^{n} x_ij = 1, i = 1, ..., n,  (1.20)
Σ_{i=1}^{n} x_ij = 1, j = 1, ..., n,  (1.21)
x_ij ∈ {0, 1}, i, j = 1, ..., n.
We use the notation

C(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij x_ij  and  D(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij,

and represent the set of all n × n 0-1 matrices satisfying (1.20) and (1.21) by F. Without loss of generality we assume α = β = 0, for otherwise we may define c′_ij = c_ij + α/n and d′_ij = d_ij + β/n. Then

Σ_{i=1}^{n} Σ_{j=1}^{n} c′_ij x_ij = Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij x_ij + (α/n) Σ_{i=1}^{n} Σ_{j=1}^{n} x_ij = C(x) + α,

and similarly

Σ_{i=1}^{n} Σ_{j=1}^{n} d′_ij x_ij = D(x) + β.
When D(x) > 0 for all x ∈ F, the LFAP can be solved in polynomial time [26, 34], and if D(x) is arbitrary, then LFAP is NP-hard [26].
When D(x) + β > 0, the constraints x_ij ∈ {0, 1} can be replaced by x_ij ≥ 0. In this case, LFAP reduces to LFP; thus, this version of LFAP is a special case of LFP. Also, as noted earlier, LFAP is a special case of FCOP. Megiddo [34] showed that the LFAP can be solved in O(min{n² log n + nm², nm² log²(nC_max D_max)}) time when D(x) > 0 for all x ∈ F. Radzik established an algorithm with complexity O((n² log n + nm)n⁴ log² n) for the same problem. Thus, both Megiddo's algorithm and Radzik's algorithm are strongly polynomial. Another method specifically designed for LFAP was proposed by Shigeno, Saruwatari and Matsui [43] in 1995 for D(x) > 0 for all x ∈ F. As in the case of Radzik's algorithm, this method is based on Newton's method and has complexity O(√n m log(nC_max D_max)), where C_max = max{|c_ij| : (i, j) ∈ E} and D_max = max{|d_ij| : (i, j) ∈ E}. In 2008, Kabadi and Punnen [26] proposed a simplex scheme for the LFAP with complexity O(n³ m log²(nC_max D_max)).
To the best of our knowledge, all published works in the literature assume that D(x) > 0. In this thesis, we consider the LFAP without any sign restriction on C(x) or D(x). One could use the same transformations discussed earlier; however, we give better formulations in terms of the number of additional constraints and variables.
Chapter 2
LFAP formulations
2.1 Mixed 0-1 linear programming formulations

Theorem 2.1. Verifying if there exists an x ∈ F such that D(x) = 0 is NP-hard.
Proof. The proof uses a reduction from the partition problem, which is: "Given numbers a_1, a_2, ..., a_n, does there exist a subset S ⊆ {1, 2, ..., n} such that Σ_{j∈S} a_j = Σ_{j∈S̄} a_j, where S̄ = {1, ..., n} − S?"
From an instance of partition, we construct an instance of the assignment problem as follows. Construct a bipartite graph G = (V_1, V_2, E) where V_1 = {1, 2, ..., 2n} and V_2 = {1′, 2′, ..., 2n′}. Define

d_ij = a_i, if j = i′ and i ≤ n,
d_ij = −a_i, if j = (n + i)′ and i ≤ n,
d_ij = M (a large number), if i ≤ n and (i, j) is not of the above type,
d_ij = 0, otherwise.  (2.1)
Suppose there is a partition S, S̄ such that Σ_{i∈S} a_i = Σ_{i∈S̄} a_i. Then we can construct an assignment x with D(x) = 0 for the D matrix defined in equation (2.1). For i ∈ S, match i with i′. For i ∈ S̄, match i with (n + i)′. Extend this to a perfect matching using edges of value zero. Let x be the resulting assignment. It is easy to see that D(x) = 0.
Conversely, assume D(x) = 0. Then for i ≤ n, if i is assigned to i′, put i in S, and if i is assigned to (n + i)′, put i in S̄. It can be verified that Σ_{i∈S} a_i = Σ_{i∈S̄} a_i.
[Figure: the bipartite graph constructed from the partition instance, with edges (i, i′) of weight a_i and (n + i, (n + i)′) of weight −a_i for i = 1, ..., n.]
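A small Python sketch of the construction in (2.1), with the list a and the constant M as assumed inputs:

    # Sketch of the D matrix construction (2.1) from a partition instance
    # a_1..a_n; M is the large constant (an assumed input).
    def build_D(a, M=10**6):
        n = len(a)
        D = [[0] * (2 * n) for _ in range(2 * n)]  # rows i > n stay all zero
        for i in range(n):
            for j in range(2 * n):
                D[i][j] = M          # default for rows i <= n
            D[i][i] = a[i]           # j = i'
            D[i][n + i] = -a[i]      # j = (n + i)'
        return D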
Let F⁺ = {x ∈ F : D(x) > 0} and F⁻ = {x ∈ F : D(x) < 0}. Denote by LFAP⁺ the variation of LFAP where the family of feasible solutions is restricted to F⁺, and by LFAP⁻ the variation where the family of feasible solutions is restricted to F⁻. LFAP is solvable in (strongly) polynomial time [38, 43] if D(x) > 0 for all x ∈ F. However, this simplicity does not carry over to LFAP⁺ and LFAP⁻.

Theorem 2.2. LFAP⁺ and LFAP⁻ are NP-hard.

Proof. The proof for LFAP⁺ and LFAP⁻ can be obtained using the same approach. Note that since we assume D(x) ≠ 0 for all x ∈ F, we have F = F⁺ ∪ F⁻. Thus the best solution amongst the optimal solutions of LFAP⁺ and LFAP⁻ solves LFAP. Moreover, any LFAP⁻ can be reformulated as an LFAP⁺ by multiplying the numerator and denominator by −1. Thus if LFAP⁺ can be solved in polynomial time, then LFAP can be solved in polynomial time. Since LFAP is NP-hard [26], the result follows.
Since we are not interested in solutions x with D(x) = 0, we need to exclude this possibility. This can be achieved by introducing the following constraints:

D(x) ≥ ε − Mz,  (2.2)
D(x) ≤ −ε + M(1 − z),  (2.3)
z ∈ {0, 1},

where M is a very large number and ε is a small positive tolerance limit. If the entries in the matrix D are integers, ε can be chosen as 1.
2.2 MILP formulation

LFAP1: Minimize f_1(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij x_ij y
Subject to
Σ_{j=1}^{n} x_ij = 1, i = 1, ..., n,  (2.4)
Σ_{i=1}^{n} x_ij = 1, j = 1, ..., n,  (2.5)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij y = 1,  (2.6)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij + Mz ≥ ε,  (2.7)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij + Mz ≤ M − ε.  (2.8)
Constraint (2.4) makes sure each i is matched with exactly one j, while constraint (2.5) makes sure each j is matched with exactly one i. Constraints (2.7) and (2.8) are the same as constraints (2.2) and (2.3).
We can linearize the quadratic terms in the above formulation. Note that these quadratic terms are products of a binary variable and a bounded continuous variable. Each such product can be linearized using standard transformations [17] by introducing a new variable y_ij to replace x_ij y and adding the McCormick envelope inequalities. Note that the McCormick envelope inequalities guarantee that x_ij y = y_ij for all i, j. Thus, we get the mixed 0-1 linear programming formulation of LFAP as
LFAP2: Minimize f_2(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij y_ij
Subject to
Σ_{j=1}^{n} x_ij = 1, i = 1, ..., n,  (2.15)
Σ_{i=1}^{n} x_ij = 1, j = 1, ..., n,  (2.16)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij y_ij = 1,  (2.17)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij + Mz ≥ ε,  (2.18)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij + Mz ≤ M − ε.  (2.19)
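For illustration, the following is a hedged gurobipy sketch of LFAP2, including McCormick-style constraints linking y_ij = x_ij y; the values of M and ε and the bounds on y are assumptions (ε = 1 and y ∈ [−1, 1] are valid for integer data, as discussed later in this chapter).

    # Sketch of the LFAP2 model in gurobipy; c, d are n x n matrices given as
    # lists of lists, and M, eps, ylo, yhi are assumed parameters.
    import gurobipy as gp
    from gurobipy import GRB

    def build_lfap2(c, d, M=1e7, eps=1.0, ylo=-1.0, yhi=1.0):
        n = len(c)
        m = gp.Model("LFAP2")
        x = m.addVars(n, n, vtype=GRB.BINARY, name="x")
        y = m.addVar(lb=ylo, ub=yhi, name="y")            # y plays 1 / D(x)
        yv = m.addVars(n, n, lb=ylo, ub=yhi, name="yij")  # yij = xij * y
        z = m.addVar(vtype=GRB.BINARY, name="z")          # sign selector
        m.addConstrs(x.sum(i, "*") == 1 for i in range(n))        # (2.15)
        m.addConstrs(x.sum("*", j) == 1 for j in range(n))        # (2.16)
        D = gp.quicksum(d[i][j] * x[i, j] for i in range(n) for j in range(n))
        m.addConstr(gp.quicksum(d[i][j] * yv[i, j]
                    for i in range(n) for j in range(n)) == 1)    # (2.17)
        m.addConstr(D + M * z >= eps)                             # (2.18)
        m.addConstr(D + M * z <= M - eps)                         # (2.19)
        # McCormick envelopes forcing yij = xij * y
        m.addConstrs(yv[i, j] <= yhi * x[i, j]
                     for i in range(n) for j in range(n))
        m.addConstrs(yv[i, j] >= ylo * x[i, j]
                     for i in range(n) for j in range(n))
        m.addConstrs(yv[i, j] <= y - ylo * (1 - x[i, j])
                     for i in range(n) for j in range(n))
        m.addConstrs(yv[i, j] >= y - yhi * (1 - x[i, j])
                     for i in range(n) for j in range(n))
        m.setObjective(gp.quicksum(c[i][j] * yv[i, j]
                       for i in range(n) for j in range(n)), GRB.MINIMIZE)
        return m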
Write C(x) = u_1 − u_2, where u_1 and u_2 are nonnegative integers, and let ū_1 and ū_2 respectively be upper bounds on u_1 and u_2. Define

t_1 = ⌊log_2(ū_1)⌋ + 1 if ū_1 > 0, and t_1 = 0 otherwise;
t_2 = ⌊log_2(ū_2)⌋ + 1 if ū_2 > 0, and t_2 = 0 otherwise.

We use the convention that if the lower limit of a summation is higher than the upper limit, then the value of the sum is zero. With this understanding, C(x) can be represented as follows:
C(x) = Σ_{k=1}^{t_1} 2^{k−1} u_{1k} − Σ_{ℓ=1}^{t_2} 2^{ℓ−1} u_{2ℓ},

where u_{1k}, k = 1, 2, ..., t_1, and u_{2ℓ}, ℓ = 1, 2, ..., t_2, are binary variables. Good values of ū_1 and ū_2 can be identified as
ū_1 = z_1 if z_1 = max{C(x) : x ∈ F} and z_1 > 0, and ū_1 = 0 otherwise;
ū_2 = |z_2| if z_2 = min{C(x) : x ∈ F} and z_2 < 0, and ū_2 = 0 otherwise.

However, computing these values can be time consuming. We can also set ū_1 and ū_2 to approximations that are easily computable. Valid choices for ū_1 and ū_2, computable in O(n²) time, are given below:

ū_1 = z_1 if z_1 = min{Σ_{i=1}^{n} max_{1≤j≤n} c_ij, Σ_{j=1}^{n} max_{1≤i≤n} c_ij} and z_1 > 0, and ū_1 = 0 otherwise;
ū_2 = |z_2| if z_2 = max{Σ_{i=1}^{n} min_{1≤j≤n} c_ij, Σ_{j=1}^{n} min_{1≤i≤n} c_ij} and z_2 < 0, and ū_2 = 0 otherwise.
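A short Python sketch of these O(n²) bounds, assuming C is given as a list of lists:

    # Sketch of the easily computable bounds u1bar, u2bar described above.
    def ratio_bounds(C):
        z1 = min(sum(max(row) for row in C),        # sum of row maxima
                 sum(max(col) for col in zip(*C)))  # sum of column maxima
        z2 = max(sum(min(row) for row in C),        # sum of row minima
                 sum(min(col) for col in zip(*C)))  # sum of column minima
        u1bar = z1 if z1 > 0 else 0
        u2bar = abs(z2) if z2 < 0 else 0
        return u1bar, u2bar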
Similarly, write D(x) = u_3 − u_4, with u_3 and u_4 nonnegative integers (any integer can be represented as the difference of two nonnegative integers), and let ū_3 and ū_4 respectively be upper bounds on u_3 and u_4. Define

t_3 = ⌊log_2(ū_3)⌋ + 1 if ū_3 > 0, and t_3 = 0 otherwise;
t_4 = ⌊log_2(ū_4)⌋ + 1 if ū_4 > 0, and t_4 = 0 otherwise.

Then

D(x) = Σ_{k=1}^{t_3} 2^{k−1} u_{3k} − Σ_{ℓ=1}^{t_4} 2^{ℓ−1} u_{4ℓ},

where u_{3k}, k = 1, 2, ..., t_3, and u_{4ℓ}, ℓ = 1, 2, ..., t_4, are binary variables. The values of ū_3 and ū_4 can be set as

ū_3 = w_1 if w_1 = max{D(x) : x ∈ F} and w_1 > 0, and ū_3 = 0 otherwise;
ū_4 = |w_2| if w_2 = min{D(x) : x ∈ F} and w_2 < 0, and ū_4 = 0 otherwise.
Good approximations that can be used in place of the above values are:

ū_3 = w_1 if w_1 = min{Σ_{i=1}^{n} max_{1≤j≤n} d_ij, Σ_{j=1}^{n} max_{1≤i≤n} d_ij} and w_1 > 0, and ū_3 = 0 otherwise;
ū_4 = |w_2| if w_2 = max{Σ_{i=1}^{n} min_{1≤j≤n} d_ij, Σ_{j=1}^{n} min_{1≤i≤n} d_ij} and w_2 < 0, and ū_4 = 0 otherwise.
Using the notation introduced above, we give another mixed 0-1 quadratic programming formulation for LFAP as follows:
LFAP3: Minimize f_3(x) = Σ_{k=1}^{t_1} 2^{k−1} u_{1k} y − Σ_{ℓ=1}^{t_2} 2^{ℓ−1} u_{2ℓ} y
Subject to
Σ_{j=1}^{n} x_ij = 1, i = 1, ..., n,  (2.26)
Σ_{i=1}^{n} x_ij = 1, j = 1, ..., n,  (2.27)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij = Σ_{k=1}^{t_3} 2^{k−1} u_{3k} − Σ_{ℓ=1}^{t_4} 2^{ℓ−1} u_{4ℓ},  (2.28)
Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij x_ij = Σ_{k=1}^{t_1} 2^{k−1} u_{1k} − Σ_{ℓ=1}^{t_2} 2^{ℓ−1} u_{2ℓ},  (2.29)
Σ_{k=1}^{t_3} 2^{k−1} u_{3k} y − Σ_{ℓ=1}^{t_4} 2^{ℓ−1} u_{4ℓ} y = 1,  (2.30)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij + Mz ≥ ε,  (2.31)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij + Mz ≤ M − ε.  (2.32)
The non-linear terms in the objective function and constraints involve products of binary variables with the continuous variable y. Thus, we can linearize these terms using McCormick envelope inequalities. Introducing the variables z_k^i = u_{ik} y, k = 1, 2, ..., t_i, i = 1, 2, 3, 4, and adding the McCormick envelope inequalities to force these definitions, we get our next mixed 0-1 linear programming formulation of LFAP:
LFAP4: Minimize f_4(x) = Σ_{k=1}^{t_1} 2^{k−1} z_k^1 − Σ_{ℓ=1}^{t_2} 2^{ℓ−1} z_ℓ^2
Subject to
Σ_{j=1}^{n} x_ij = 1, i = 1, ..., n,  (2.35)
Σ_{i=1}^{n} x_ij = 1, j = 1, ..., n,  (2.36)
Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij x_ij = Σ_{k=1}^{t_1} 2^{k−1} u_{1k} − Σ_{ℓ=1}^{t_2} 2^{ℓ−1} u_{2ℓ},  (2.37)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij = Σ_{k=1}^{t_3} 2^{k−1} u_{3k} − Σ_{ℓ=1}^{t_4} 2^{ℓ−1} u_{4ℓ},  (2.38)
Σ_{p=1}^{t_3} 2^{p−1} z_p^3 − Σ_{ℓ=1}^{t_4} 2^{ℓ−1} z_ℓ^4 = 1,  (2.39)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij + Mz ≥ 1,  (2.40)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij + Mz ≤ M − 1,  (2.41)
y̲ ≤ y ≤ ȳ,  (2.49)

where y̲ = −M and ȳ = M for some large value of M. Note that y = 1/D(x). Thus, when the d_ij are integers, we can choose y̲ = −1, ȳ = 1, z̲ = −1 and z̄ = 1.
Although the representation of integers using binary variables is well known, to the best of our knowledge it has not been used in the context of LFAP.
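A minimal gurobipy sketch of this binary representation, with the bounds u1bar and u2bar (computed as above) assumed as inputs:

    # Sketch of the binary expansion used in LFAP3/LFAP4: an integer quantity
    # in [-u2bar, u1bar] is written as sum_k 2^(k-1) u1_k - sum_l 2^(l-1) u2_l
    # with binary u1, u2; names and signature are assumptions for the example.
    import math
    import gurobipy as gp
    from gurobipy import GRB

    def add_binary_expansion(m, u1bar, u2bar):
        t1 = int(math.log2(u1bar)) + 1 if u1bar > 0 else 0
        t2 = int(math.log2(u2bar)) + 1 if u2bar > 0 else 0
        u1 = m.addVars(t1, vtype=GRB.BINARY, name="u1")
        u2 = m.addVars(t2, vtype=GRB.BINARY, name="u2")
        # Linear expression equal to the encoded integer (empty sums are zero).
        return (gp.quicksum(2**k * u1[k] for k in range(t1))
                - gp.quicksum(2**l * u2[l] for l in range(t2)))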
2.3 Computational Experiments
In this section, we present the results of extensive computational experiments carried out using the LFAP2 and LFAP4 formulations. To draw reliable conclusions, we generated nine different groups of instances and solved them using both the LFAP2 and LFAP4 formulations. The formulations were solved using Gurobi 7.5.1 on a workstation with the following configuration: Intel i7-4790 CPU, 32 GB RAM, and 64-bit Windows 7 Enterprise operating system. Gurobi was called from a Python program that prepared the input models and data.
7. Negatively correlated problems:
C is a randomly generated increasing matrix using the range [−1000, 1000]; each row of C is sorted in ascending order. D is a randomly generated decreasing matrix using the range [−1000, 1000]; each row of D is sorted in descending order.

In the test problems we generated, we do not assume D(x) ≠ 0; in the formulations, however, we eliminated this possibility. There are many other ways to generate test problems; given our limited experimental analysis, we focus in this thesis on a limited class of test instances with some relationship between C(x) and D(x). Under each category, we generated instances of size n = 100, 200, ..., 1000.
2.4 Computational Results

Table 2.1: Results of Symmetric Problems with fixed range
       LFAP2              LFAP4
Test n Obj.Value Time(s) Obj.Value Time(s)
R1 100 -4890 60.83 -4890 19.05
R2 200 -9917 1738.07 -9917 1029.27
R3 300 -14969 3416.07 -14969 196.41
R4 400 -19989 10242.59 -19989 117.47
R5 500 -24994 11374.66 -24994 1163.32
R6 600 -29999 13236.66 -29999 530.19
R7 700 -34997 15360.98 -34997 709.14
R8 800 -40000 1280.45 -40000 780.19
R9 900 -45000 3173.25 -45000 1732.75
R10 1000 -50000 - -50000 -
Table 2.2: Results of Symmetric Problems with data size dependent range
LFAP2 LFAP4
Test n Obj.Value Time(s) Obj.Value Time(s)
R1 100 -9767 95.85 -9767 10508.23
R2 200 -55808 2233.17 -27905.5 -
R3 300 -154446 - -154446 -
R4 400 -153838 - -317600 -
R5 500 -317527 - -185360 -
R6 600 -217736 - -438757 -
R7 700 -1281409 - -430410 -
R8 800 -1797159 - -1803220 -
R9 900 -1206280 - -220117 -
R10 1000 -200413 - -1048421.67 -
Table 2.3: Results of Left Skewed Problems with fixed range
       LFAP2              LFAP4
Test n Obj.Value Time(s) Obj.Value Time(s)
R1 100 -4669 4.32 -4669 18.80
R2 200 -9671 34.97 -9671 14.80
R3 300 -14768 179.16 -14768 67.50
R4 400 -19830 776.02 -19830 194.48
R5 500 -24841 1785.94 -24841 50.83
R6 600 -29891 4304.35 -29891 9666.82
R7 700 -34927 14827.87 -34927 839.95
R8 800 -39963 7033.47 -39963 165.12
R9 900 -69.15 - -44994 204.83
R10 1000 -66.09 - -50000 358.97
Table 2.4: Results of Left Skewed Problems with data size dependent range
LFAP2 LFAP4
Test n Obj.Value Time(s) Obj.Value Time(s)
R1 100 -90147 3874.47 -90147 32.96
R2 200 -536641 326.28 -536641 929.25
R3 300 -745792 - -1499082 437.16
R4 400 -1023206 - -3107025 1744.19
R5 500 -5424025 - -5468865 5218.83
R6 600 -254578 - -8647804 4911.91
R7 700 -669357 - 1797418 -
R8 800 -1484738 - -1272638 -
R9 900 -24001796 - -23999333 1998.54
R10 1000 -11092 - -9773293 -
Table 2.5: Results of Right Skewed Problems with fixed range
LFAP2 LFAP4
Test n Obj.Value Time(s) Obj.Value Time(s)
R1 100 -4632 3.58 -4632 7.53
R2 200 -9739 36.17 -9739 47.78
R3 300 -14752 200.9 -14752 34.62
R4 400 -19807 360.10 -19807 91.38
R5 500 -24861 868.70 -24861 205.24
R6 600 -29884 5201.34 -29884 474.44
R7 700 -34920 4453.90 -34920 953.37
R8 800 -39957 7732.39 -39957 104.47
R9 900 -44982 11423.66 -44982 158.57
R10 1000 -50000 17985.35 -50000 171.51
Table 2.6: Results of Right Skewed Problems with data size dependent range
LFAP2 LFAP4
Test n Obj.Value Time(s) Obj.Value Time(s)
R1 100 -89057 9822.31 -89057 112.88
R2 200 -536773 511.92 -536773 141.73
R3 300 -1495795 - -1497965 184.07
R4 400 -1497696 - -3111269 1539.31
R5 500 -900910 - -5453382 -
R6 600 -254578 - -4329428 2688.44
R7 700 -669357 - -12751720 -
R8 800 11909 - -17795050 -
R9 900 -23988703 - -5990118.5 -
R10 1000 -31262549 - -5144754 -
Table 2.7: Results of Negative Correlated Problems
       LFAP2              LFAP4
Test n Obj.Value Time(s) Obj.Value Time(s)
R1 100 19.97 - 3.1 4631.19
R2 200 -2.07 - -2.15 6975.61
R3 300 -0.35 - -0.55 -
R4 400 -0.85 - -1.63 -
R5 500 -5.02 - -89 -
R6 600 0.06 - -0.84 -
R7 700 1.43 - 1.36 -
R8 800 0.35 - 0.32 -
R9 900 -0.33 - -19.46 -
R10 1000 -3.10 - 125.22 -
Table 2.8: Results of Positive Correlated Problems-increased matrices
LFAP2 LFAP4
Test n Obj.Value Time(s) Obj.Value Time(s)
R1 100 0.04 - 0.00 -
R2 200 0.16 - -0.25 -
R3 300 1.04 - 0.35 -
R4 400 1.6 - 1.55 -
R5 500 1.18 - 1.14 -
R6 600 4.07 - -418 -
R7 700 -0.38 - -0.54 -
R8 800 -0.44 - -0.84 -
R9 900 -0.94 - -1.94 -
R10 1000 0.31 - -0.20 -
Table 2.9: Results of Positive Correlated Problems-decreased matrices
       LFAP2              LFAP4
Test n Obj.Value Time(s) Obj.Value Time(s)
R1 100 -0.94 - -1.02 -
R2 200 -2 - -2.53 -
R3 300 -1 - -0.61 -
R4 400 6.83 - 124.88 -
R5 500 -1134 - 165
R6 600 -3.42 - -8.06 -
R7 700 2.40 - -585 -
R8 800 -1.09 - -3.2 -
R9 900 -1.39 - -710 -
R10 1000 0.4 - -1.07 -
Figure 2.2: Problem Size v.s. Times of Right Skewed Fix Range. [Plot of computation time (sec) against problem size for LFAP2 and LFAP4.]
From the results presented in the tables, we can see that as the matrix size n gets bigger, both LFAP2 and LFAP4 require longer computation time to find the optimal solution.
Comparing Table 2.1 with Table 2.2, Table 2.3 with Table 2.4, and Table 2.5 with Table 2.6, we see that instances with fixed range parameters usually took less computation time than the ones with data size dependent range for the same instance size. In the parameter setting, the data size dependent ranges were wider than the fixed ones for all instance sizes.
None of the positively correlated problems was solved to optimality, even for n = 100. Positive correlation, although considered a structured property, resulted in harder instances. Negatively correlated data also produced hard instances.
Also, LFAP4 generally ran faster than LFAP2 when the time limit had not been reached. In some instances, LFAP4 solved the problem within the time limit while LFAP2 did not. In the instances where neither LFAP2 nor LFAP4 could solve the problem within the time limit, LFAP4 usually produced better objective values than LFAP2.
Thus, LFAP4 appears to be the better model. Note, however, that the model assumes integer values for the elements of C and D. If the elements of C and D are rational numbers, they can be scaled to integers and LFAP4 applied; but such a scaling operation could produce large integers and cause overflow errors. We did not test the LFAP4 model with rational data scaled to integers.
Chapter 3
Newton’s Method
3.1 Introduction

In the introduction chapter, we discussed how Newton's method (also called Dinkelbach's algorithm) can be used to solve LFP and FCOP⁺. Since LFAP is a special case of LFP and FCOP⁺, Newton's method (Dinkelbach's method) can be used to solve LFAP when D(x) > 0 for all x ∈ F.
Recall that Newton’s method solves the parametric problem
Subject to
n
X
xij = 1, i = 1, ..., n (3.1)
j=1
Xn
xij = 1, j = 1, ..., n (3.2)
i=1
The validity of Newton’s method follows from Theorem 1.4 and applies to LFAP when
D(x)>0 for all x.
However, the method is not directly applicable to solve the general version of the LFAP.
When D(x) > 0 for all x ∈ F , x∗ is an optimal solution for LFAP if and only if f (λ) =
cT x∗ − λdT x∗ = min{c(x) − λD(x) : x ∈ F } = 0 [12, 25, 29]. (The analogous result is true
for more general linear and integer fractional programs.) When D(x) is allowed to take
arbitrary values, the condition is only necessary but not sufficient. To see this, consider a
3 × 3 LFAP with
C =
1 1 0
1 0 1
4 1 0

and D =
−1 0 0
−1 0 0
1 0 0

Figure 3.1: The graph of f(λ) is shown in thick lines; the straight lines corresponding to x² and x⁶ are the same. [The plot shows the lines y = C(xⁱ) − λD(xⁱ), namely 1 + λ, 2 + λ, 3 + λ, 4 − λ and 6 − λ, with breakpoint at (1.5, 2.5).]
There are six possible assignments: x¹ = (1, 2, 3), x² = (2, 1, 3), x³ = (1, 3, 2), x⁴ = (3, 2, 1), x⁵ = (2, 3, 1) and x⁶ = (3, 1, 2). Figure 3.1 shows the plot of C(xⁱ) − λD(xⁱ) for i = 1, 2, ..., 6, and the lower envelope of these lines represents f(λ). Note that x³ is the unique optimal solution to the resulting LFAP, but it does not contribute to f(λ).
We can, however, use Newton's method to solve LFAP via two problems of the type LFAP⁺. Consider the following constrained fractional assignment problems:
CLFAP1: Minimize f_1(x) = (Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij x_ij) / (Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij)
Subject to
Σ_{j=1}^{n} x_ij = 1, i = 1, ..., n,  (3.4)
Σ_{i=1}^{n} x_ij = 1, j = 1, ..., n,  (3.5)
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij ≥ ε,  (3.6)

and

CLFAP2: Minimize f_2(x) = (Σ_{i=1}^{n} Σ_{j=1}^{n} −c_ij x_ij) / (Σ_{i=1}^{n} Σ_{j=1}^{n} −d_ij x_ij)
Subject to
Σ_{j=1}^{n} x_ij = 1, i = 1, ..., n,  (3.8)
Σ_{i=1}^{n} x_ij = 1, j = 1, ..., n,  (3.9)
Σ_{i=1}^{n} Σ_{j=1}^{n} −d_ij x_ij ≥ ε.  (3.10)
Note that the denominator in the objective function of both CLFAP1 and CLFAP2 is positive, and hence we can apply Newton's method to solve these problems.

Theorem 3.1. Let x⁰ and x* respectively be optimal solutions of CLFAP1 and CLFAP2. Then the better (in objective value) of x⁰ and x* is an optimal solution to LFAP.

The proof of this theorem follows from Theorem 2.2. CLFAP1 and CLFAP2 are of the type LFAP⁺, where the family of feasible solutions is restricted to F⁺. In Theorem 2.2 we have shown that both LFAP⁺ and LFAP⁻ are NP-hard. Moreover, the best solution amongst the optimal solutions to LFAP⁺ and LFAP⁻ solves LFAP. From this, it can be verified that the best solution amongst the solutions of CLFAP1 and CLFAP2 gives an optimal solution to LFAP.
Let us now discuss Newton’s method specialized to CLFAP1.
Algorithm - Newton’s method - CLFAP1
Subject to
n
X
xij = 1, i = 1, ..., n
j=1
Xn
xij = 1, j = 1, ..., n
i=1
Xn X n
dij xij ≥
i=1 j=1
Theorem 3.2. Newton's method for CLFAP1 solves the problem in O(m⁴ log² m × φ(n, C, D)) time, where φ(n, C, D) is the complexity of solving a linear constrained assignment problem.
For CLFAP2, the corresponding parametric problem is

α = min Σ_{i=1}^{n} Σ_{j=1}^{n} (−c_ij + λ d_ij) x_ij
Subject to
Σ_{j=1}^{n} x_ij = 1, i = 1, ..., n,  (3.12)
Σ_{i=1}^{n} x_ij = 1, j = 1, ..., n,  (3.13)
Σ_{i=1}^{n} Σ_{j=1}^{n} −d_ij x_ij ≥ ε.  (3.14)
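A hedged gurobipy sketch of one such parametric subproblem (here for CLFAP1; the CLFAP2 version negates c and d), with c, d given as lists of lists and ε as assumed inputs:

    # Sketch of the parametric constrained assignment subproblem solved in
    # each Newton iteration for CLFAP1; c, d, lam, eps are assumptions.
    import gurobipy as gp
    from gurobipy import GRB

    def solve_parametric_assignment(c, d, lam, eps=1.0):
        n = len(c)
        m = gp.Model("clfap1_subproblem")
        x = m.addVars(n, n, vtype=GRB.BINARY, name="x")
        m.addConstrs(x.sum(i, "*") == 1 for i in range(n))  # row constraints
        m.addConstrs(x.sum("*", j) == 1 for j in range(n))  # column constraints
        m.addConstr(gp.quicksum(d[i][j] * x[i, j]
                                for i in range(n) for j in range(n)) >= eps)
        m.setObjective(gp.quicksum((c[i][j] - lam * d[i][j]) * x[i, j]
                                   for i in range(n) for j in range(n)),
                       GRB.MINIMIZE)
        m.optimize()
        return m  # the caller reads x and ObjVal to update lambda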
Combining CLFAP1 and CLFAP2, we get the following algorithm to solve LFAP, which is based on the standard Newton's method applied to CLFAP1 and CLFAP2. Let z₁ and z₂ respectively denote the optimal objective values of CLFAP1 and CLFAP2 (with zᵢ = ∞ if the corresponding problem is infeasible). The combined algorithm applies Newton's method to

Minimize Σ_{i=1}^{n} Σ_{j=1}^{n} (c_ij − λ d_ij) x_ij
Subject to
Σ_{j=1}^{n} x_ij = 1, i = 1, ..., n,
Σ_{i=1}^{n} x_ij = 1, j = 1, ..., n,
Σ_{i=1}^{n} Σ_{j=1}^{n} d_ij x_ij ≥ ε,

and to

Minimize Σ_{i=1}^{n} Σ_{j=1}^{n} (−c_ij + λ d_ij) x_ij
Subject to
Σ_{j=1}^{n} x_ij = 1, i = 1, ..., n,
Σ_{i=1}^{n} x_ij = 1, j = 1, ..., n,
Σ_{i=1}^{n} Σ_{j=1}^{n} −d_ij x_ij ≥ ε,

and outputs the better of the two solutions found.
It may be noted that if z1 or z2 is ∞, then LFAP is solvable in polynomial time.
Table 3.2: Results of Symmetric Problems with fixed range
Table 3.3: Results of Symmetric Problems with data size dependent range
Table 3.5: Results of Left Skewed Problems with data size dependent range
Table 3.7: Results of Right Skewed Problems with data size dependent range
Table 3.8: Results of Negative Correlated Problems
Figure 3.2: Problem Size v.s. Times of Negative Correlated Problems. [Plot of computation time (sec) against problem size.]
Chapter 4

Local Search Algorithms

4.1 Introduction

In the previous chapter, we introduced a heuristic method for solving the LFAP using a variation of Newton's method. In this chapter, we develop some local search based heuristics for solving LFAP. These algorithms are important in the sense that they take a feasible solution as input and try to find improved solutions.
4.2 Local Search

Selecting an improving solution from a pre-defined candidate list is sometimes used. A formal description of a generic local search algorithm is given below.
For the LFAP, we consider a 2-exchange neighborhood to develop local search algorithms. 2-exchange neighborhoods are often used for solving linear and quadratic combinatorial optimization problems [13]. To the best of our knowledge, this class of neighborhoods has not been studied for fractional combinatorial optimization problems. Any feasible solution x of the LFAP can be represented by a set X(S) of ordered pairs, where (i, j) ∈ X(S) if and only if x_ij = 1. For simplicity, in this chapter we use the notation x to represent the set X(S). For any two distinct elements (i, j), (k, ℓ) in an assignment x, let x̂ be the assignment given by x − {(i, j), (k, ℓ)} ∪ {(i, ℓ), (k, j)}. The operation of generating x̂ from x is called a 2-exchange. Then the 2-exchange neighborhood of x, denoted by N(x), is the collection of all assignments obtained from x by a 2-exchange operation. It can be verified that the objective function value f(x̂) of x̂ is given by

f(x̂) = (C(x) + ∆C(mv)) / (D(x) + ∆D(mv)),

where mv denotes the 2-exchange move applied, ∆C(mv) = c_iℓ + c_kj − c_ij − c_kℓ and ∆D(mv) = d_iℓ + d_kj − d_ij − d_kℓ.
The solution x is locally optimal with respect to the 2-exchange neighborhood if f(x) − f(x̂) ≤ 0 for all x̂ ∈ N(x). Choose (p, q), (r, s) ∈ x such that the corresponding 2-exchange minimizes (C(x) + ∆C(mv)) / (D(x) + ∆D(mv)) over all moves. Then x̂ = x − {(r, s), (p, q)} ∪ {(r, q), (p, s)} is the best solution in N(x).
The above process of finding a local search move is described in the following algorithm.
Algorithm 2: findMove
bestMv ← N U LL,
bestMvObj ← ∞;
forall mv(i, j) ∈ N (x): do
C + ∆C(mv)
if bestM vObj > then
D + ∆D(mv)
bestMv← mv
C + ∆C(mv)
bestMvObj←
D + ∆D(mv)
end
end
return bestMv
If the output is NULL, then the input of Algorithm 1 is locally optimal. Given C(x) and D(x), we can compute x̂ and f(x̂) (where x̂ is the solution we move to from x) in O(n²) time. If f(x) − f(x̂) > 0, a local search algorithm replaces x by x̂ and the search for an improving 2-exchange solution continues. The algorithm terminates when a locally optimal solution is reached. The function makeMove moves from one solution to another, where mv defines the move. The local search is presented below.
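A compact Python sketch of findMove, under the assumption that an assignment is stored as a permutation p (p[i] = j when x_ij = 1); returning None signals local optimality:

    # Sketch of the 2-exchange findMove step; Cm, Dm are the cost matrices
    # and Cval, Dval the current C(x), D(x) (representation is an assumption).
    def find_move(p, Cm, Dm, Cval, Dval):
        n = len(p)
        best_mv, best_obj = None, Cval / Dval
        for i in range(n):
            for k in range(i + 1, n):
                j, l = p[i], p[k]
                dC = Cm[i][l] + Cm[k][j] - Cm[i][j] - Cm[k][l]
                dD = Dm[i][l] + Dm[k][j] - Dm[i][j] - Dm[k][l]
                # Skip moves that would make the denominator zero.
                if Dval + dD != 0 and (Cval + dC) / (Dval + dD) < best_obj:
                    best_mv = (i, k)
                    best_obj = (Cval + dC) / (Dval + dD)
        return best_mv  # None means p is locally optimal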
As mentioned earlier, a local search algorithm terminates when a local optimum is reached. However, this local optimum may be far from a global optimum. One strategy to enhance a basic local search algorithm is to initiate the algorithm multiple times with different starting solutions. We then compare the local optima obtained and use the best one as our final solution. Initiating a new local search with a different starting solution, continuing the process for a fixed number of iterations, and choosing the overall best solution results in our multi-start local search algorithm (MSLS-Algorithm).
However, since the MSLS-Algorithm runs the local search multiple times, it takes considerable computation time. Moreover, because the restarts are random, this strategy cannot guarantee an improvement over the basic local search algorithm.
For instance, if the starting solution has C(x) and D(x) both positive or both negative, Algorithm 1 tends to move in the direction where |C(x)| is minimized and |D(x)| is maximized and terminates with such a local solution. However, it is possible that the global optimum has |C(x)| maximized and |D(x)| minimized. In this case, a local search will not produce a good solution.
If there is no huge gap between the local optimum and the global optimum, the strategy mentioned above may not be very helpful. To improve it, instead of using a random starting solution, we could use a local search strategy to find a relatively good starting solution and then perform a 2-exchange local search to find a local solution.
Our strategy is to find a starting solution with negative objective function value (if one exists). There are two cases in which the objective value is negative: C(x) positive and D(x) negative, or C(x) negative and D(x) positive.
For finding a solution with C(x) positive and D(x) negative, the randomly generated starting solution falls into one of four cases. First, C(x) is positive and D(x) is negative; in this case, we terminate the algorithm and use the starting solution as the output. Second, C(x) and D(x) are both negative; in this case, we try to find moves where the values of both C(x) and D(x) increase while making sure D(x) remains negative. Third, C(x) and D(x) are both positive; fourth, C(x) is negative and D(x) is positive. For the third and fourth cases, we try to move to a new solution where C(x) increases and D(x) decreases. After examining all potential moves on the candidate list, we output the final solution (Algorithm 3); a sketch of its case analysis is given below.
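A hedged Python sketch of the idea behind Algorithm 3; the helper functions moves, delta and make_move are assumptions made for the example:

    # Sketch of Algorithm 3's case analysis: steer a random assignment toward
    # C(x) > 0 and D(x) < 0 via 2-exchange moves; helpers are assumed.
    def find_start_solution(p, Cval, Dval, moves, delta, make_move):
        for mv in moves(p):
            if Cval > 0 and Dval < 0:      # case 1: target reached
                break
            dC, dD = delta(p, mv)
            if Cval <= 0 and Dval < 0:     # case 2: raise both, keep D(x) < 0
                ok = dC > 0 and dD > 0 and Dval + dD < 0
            else:                          # cases 3, 4: raise C(x), lower D(x)
                ok = dC > 0 and dD < 0
            if ok:
                make_move(p, mv)
                Cval, Dval = Cval + dC, Dval + dD
        return p, Cval, Dval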
We replace findMove() in Algorithm 1 with Algorithm 3 to get the whole algorithm. Here mv represents the current move; ∆C(mv) and ∆D(mv) represent the change in value of C(x) and D(x) when moving to mv; bestMv represents the best move found so far; and ∆C(bestMv) and ∆D(bestMv) represent the change in value of C(x) and D(x) when making the move bestMv.
For the second case, when we seek C(x) negative and D(x) positive, the randomly generated starting solution again falls into one of four cases. First, C(x) is negative and D(x) is positive; in this case, we terminate the algorithm and use the starting solution as the output. Second, C(x) and D(x) are both negative; in this case, we try to find a move where the values of both C(x) and D(x) increase while making sure C(x) remains negative. Third, C(x) and D(x) are both positive; in this case, we try to find a move where the values of both C(x) and D(x) decrease while making sure D(x) remains positive. Fourth, C(x) is positive and D(x) is negative; in this case, we try to move to a solution where both C(x) and D(x) decrease. After examining all potential moves on the candidate list, we output the final solution. The algorithm for finding a starting solution with C(x) negative and D(x) positive (Algorithm 4) mirrors Algorithm 3 with these cases.
We replace findMove() in Algorithm 1 with Algorithm 4 to get the whole algorithm.
Based on the above two algorithms, we can build the modified local search for LFAP. For the modified local search, we first start with a random solution x. We run Algorithm 3 to see if we can get a new starting solution with positive C(x) value and negative D(x) value. If yes, we output the solution and use it as input to the regular local search algorithm. If not, we run Algorithm 4 to see if we can get a new starting solution with negative C(x) value and positive D(x) value. If we can find such a solution, we use it as input to the regular local search algorithm. If no such solution is found, we choose a new random solution x and repeat the above process until a solution is found.
A formal high level description of our modified local search algorithm for LFAP is given below.
Algorithm 5: Modified local search for LFAP
isNegObj ← false
initialize()
localsearch(Find Move with positive C(x) and negative D(x))
if C(x) > 0 and D(x) < 0 then
    localsearch(findMove)
    isNegObj ← true
end
initialize()
localsearch(Find Move with negative C(x) and positive D(x))
if C(x) < 0 and D(x) > 0 then
    localsearch(findMove)
    isNegObj ← true
end
if isNegObj = false then
    initialize()
    localsearch(findMove)
end
output: the best solution found so far
4.3 Experimental Analysis

We conducted four types of experiments:

1. Local search algorithm without repetition, i.e., single-start local search. For each experiment, we ran a local search algorithm once with a random starting solution and collected the corresponding local solution and the time it took.

2. Local search algorithm with ten restarts, i.e., ten-start local search. Under this type of experiment, for each instance we ran the local search algorithm with a random starting solution independently ten times, then collected the corresponding best local solution, the average value of the local solutions, the average time for each run, and the average number of iterations over the ten runs.
3. Modified local search algorithm with ten restarts. For this experiment, we ran the modified local search algorithm for LFAP ten times independently, then collected the corresponding best local solution, the average local solution, the average time for each run, and the average number of iterations over the ten runs. In the modified local search algorithm, we try to find a local solution with positive C(x) and negative D(x), or with negative C(x) and positive D(x). If we can find neither case as the starting solution, we use a random starting solution.

4. Modified multistart local search algorithm with a fifteen minute time limit. As the name suggests, we set a time limit for each experiment and ran the experiment instance as many times as possible to find the best objective value it could reach within the time limit. We then collected the best objective value achieved for each instance and the total time it took to reach that value.
Tables 4.1-4.9 show the computational results of the local search algorithm without repetition and of the local search algorithm with ten repetitions. Tables 4.10-4.18 show the computational results of the modified multistart local search algorithm with ten restarts and of the modified multistart local search algorithm with a fifteen minute time limit. The column 'Obj.Value' reports the best heuristic solution value found before reaching the termination condition. The column 'Time(s)' reports the time taken to solve the instance in seconds. The column 'Iteration(s)' reports the number of iterations taken to reach the solution. The column 'Best Obj.Value' reports the best heuristic value obtained over the multiple runs. The column 'Avg Obj.Value' reports the average heuristic solution value over the multiple runs.
Table 4.1: Results of Symmetric Problems with fixed range
Modified Multistart Local Search with 10 restarts Modified Multistart Local Search with 15 mins time limit
Test n Best Obj.Value Avg. Obj.Value Times Iteration(s) Best Obj.Value Avg. Obj.Value Times Iteration(s)
R1 100 -1584 -1194.5 0.02 39.2 -6059 -5809 413.60 8947.1
R2 200 -4150 -3754.4 0.04 134.9 -7729 -7652.2 458.31 25767
R3 300 -12034 -11273.8 10.29 15480.4 -12252 -12003.8 352.30 175263
Modified Multistart Local Search with 10 restarts Modified Multistart Local Search with 15 mins time limit
Test n Best Obj.Value Avg. Obj.Value Times Iteration(s) Best Obj.Value Avg. Obj.Value Times Iteration(s)
R1 100 -1782 -1236.2 0.01 20.1 -6059 -5810.5 404.67 11096.5
R2 200 -7610 -5737.8 0.01 29.1 -28446 -27533.6 372.08 25161
R3 300 -19563 -13488.4 0.02 35.2 -59990 -55807.4 541.60 28478
Modified Multistart Local Search with 10 restarts Modified Multistart Local Search with 15 mins time limit
Test n Best Obj.Value Avg. Obj.Value Times Iteration(s) Best Obj.Value Avg. Obj.Value Times Iteration(s)
R1 100 -2709 -2476.6 0.01 148.4 -3853 -3806.9 269.47 17351
R2 200 -5724 -5563.4 0.08 366.4 -8189 -8138.9 301.74 21369
R3 300 -9426 -9036.4 0.34 584.6 -12934 -12842.9 423.96 28614
Modified Multistart Local Search with 10 restarts Modified Multistart Local Search with 15 mins time limit
Test n Best Obj.Value Avg. Obj.Value Times Iteration(s) Best Obj.Value Avg. Obj.Value Times Iteration(s)
R1 100 -49249 -40080 0.01 160.2 -63409 -62480.3 415.02 20477
R2 200 -285189 -252680.65 0.08 313.8 -334084 -327217 417.73 18656.1
R3 300 -794354 -735947.3 0.30 469.3 -858967 -541856.8 562.75 27274
Modified Multistart Local Search with 10 restarts Modified Multistart Local Search with 15 mins time limit
Test n Best Obj.Value Avg. Obj.Value Times Iteration(s) Best Obj.Value Avg. Obj.Value Times Iteration(s)
R1 100 -2970 -2648.2 0.01 152.3 -3822 -3754 352.71 19059
R2 200 -5896 -5514.1 0.03 359.9 -8251 -8186 344.15 22306
R3 300 -9805 -9200.3 0.16 534.5 -12940 -12865.9 418.95 27391
Modified Multistart Local Search with 10 restarts Modified Multistart Local Search with 15 mins time limit
Test n Best Obj.Value Avg. Obj.Value Times Iteration(s) Best Obj.Value Avg. Obj.Value Times Iteration(s)
R1 100 -50698 -40129.2 0.01 162.2 -63448 -61862.4 437.03 10000
R2 200 -276980 -263954.9 0.02 311.2 -327380 -32546.6 408.06 23496
R3 300 -752729 -686431.55 0.10 469.3 -865669 -843593.1 563.13 64040
Modified Multistart Local Search with 10 restarts Modified Multistart Local Search with 15 mins time limit
Test n Best Obj.Value Avg. Obj.Value Times Iteration(s) Best Obj.Value Avg. Obj.Value Times Iteration(s)
R1 100 -777 -772 0.01 78.3 -786 -785.4 309.03 10600000
R2 200 -2.05 -2.03 0.17 522.4 -2.07 -2.07 543.62 2725009
R3 300 -0.55 -0.55 0.09 1674.3 -0.55 -0.55 412.70 1018020
Modified Multistart Local Search with 10 restarts Modified Multistart Local Search with 15 mins time limit
Test n Best Obj.Value Avg. Obj.Value Times Iteration(s) Best Obj.Value Avg. Obj.Value Times Iteration(s)
R1 100 0.00 0.01 0.04 390.2 -0.02 -0.02 592.89 12293927
R2 200 -0.09 -0.05 0.42 1653.7 -0.12 -0.12 407.16 2574864
R3 300 0.38 0.38 2.93 4713.9 0.36 0.37 206.18 940328
Modified Multistart Local Search with 10 restarts Modified Multistart Local Search with 15 mins time limit
Test n Best Obj.Value Avg. Obj.Value Times Iteration(s) Best Obj.Value Avg. Obj.Value Times Iteration(s)
R1 100 -1.01 -1.01 0.00 209 -1.02 -1.02 534.37 8874797
R2 200 -2.46 -2.45 0.03 868.5 -2.48 -2.48 525.86 2427330
R3 300 -0.52 -0.51 0.08 2174.6 -0.53 -0.53 359.71 1004211
Figure 4.1: Problem Size v.s. Times of Symmetric Problems with fixed range. [Plot of computation time (sec) against problem size.]
From Tables 4.1-4.9, we can see that local search with ten repetitions finds better heuristic solutions than local search without repetition. The time it takes the local search algorithm to terminate is similar to the time each individual run takes in the local search with ten repetitions. From Tables 4.10-4.18, we can see that the modified multistart local search with a fifteen minute time limit obtains the best heuristic solutions; however, it takes the longest time compared to the other local search methods. The modified multistart local search with ten restarts achieved heuristic solutions worse than those of the modified multistart local search with a fifteen minute time limit, but in a much shorter time.
Overall, the modified multistart local search with a fifteen minute time limit finds the best solutions among all the local search algorithms we tested, at the cost of the longest computation time. The modified multistart local search with ten restarts obtains the second best solutions but takes longer than local search with ten repetitions and local search without repetition, both of which have significantly shorter computation times.
Chapter 5
Conclusion
In this thesis, we studied the unrestricted linear fractional assignment problem (LFAP), which is a generalization of the standard linear fractional assignment problem.
We first presented four different mathematical programming formulations of LFAP. Of these, two are mixed 0-1 linear programs (MILPs). We compared these two formulations experimentally using randomly generated data. One of these formulations assumes integer data, and it performed better in terms of computational time; its limitation, however, is the restriction to integer data.
We also proposed an exact algorithm to solve LFAP based on Newton's method. In this case, we solve two constrained fractional assignment problems and select the best solution. Note that the constrained assignment problem is NP-hard, but in practice these problems can be solved relatively fast. The resulting algorithm performed better than our algorithms based on the MILP formulations. A heuristic version of this algorithm was also developed and compared experimentally with the exact version. The heuristic works almost like the exact algorithm, except that a time limit is imposed on the associated constrained assignment problems.
Finally, we developed a local search algorithm and two variations. One variation is a multistart version of the local search; the local search uses a simple 2-exchange neighborhood. The other variation attempts to obtain good starting solutions that take advantage of the fractional objective function. Results of an extensive experimental study are presented.
We also generated test instances with various properties. These instances can be used by researchers in the future to conduct experimental analysis of new algorithms.
Theoretical analysis of the heuristics and identification of new polynomially solvable special cases of LFAP are interesting topics left for future research.
Chapter 6

Bibliography
[5] G. Birkhoff. Tres observaciones sobre el algebra lineal. Univ. Nac. Tucuman. Revista
A, 5:147-151, 1946.
[9] A. Charnes and W. W. Cooper. Programming with linear fractional functionals. Naval
Research Logistics, 9 (1962) 181–186.
[10] G. D. H. Claassen. Mixed integer (0–1) fractional programming for decision support
in paper production industry. Omega, 43 (2014) 21–29.
[11] G.B. Dantzig. Maximization of a linear function of variables subject to linear inequal-
ities. Activity Analysis of Production and Allocation, 1951.
[14] M. R. Garey and D. S. Johnson. "Strong" NP-completeness results: Motivation, examples, and implications. Journal of the ACM, 25 (1978) 499–508.
[16] P. C. Gilmore and R. E. Gomory. A linear programming approach to the cutting stock problem, Part II. Operations Research, 11 (1963) 863–888.
[17] F. Glover and E. Woolsey. Converting the 0-1 polynomial programming problem to a
0-1 linear program. Operations Research, 22 (1974) 180–182.
[18] D. Granot and F. Granot. On integer and mixed integer fractional programming
problems. Annals of Discrete Mathematics, 1 (1977) 221–231.
[19] D. Granot and F. Granot. On solving fractional (0, 1) programs by implicit enumera-
tion. INFOR: Information Systems and Operational Research, 14 (1976) 241–249.
[21] P.L. Hammer and S. Rudeanu. Boolean methods in operations research and related
areas. Springer Science & Business Media, 2012.
[24] J.R. Isbell and W.H. Marlow. Attrition games. Naval Research Logistics, 3 (1956)
71–94.
[26] S. N. Kabadi and A. P. Punnen. A strongly polynomial simplex method for the linear
fractional assignment problem. Operations Research Letters, 36 (2008) 402–407.
[28] H. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics, 2 (1955) 83–97.
[30] E. L. Lawler. Optimal cycles in doubly weighted directed linear graphs. Theory of Graphs, pages 209–232, 1966.
[31] B. Martos. The direct power of adjacent vertex programming methods. Management
Science, 12 (1965) 241–252.
[35] Y. E. Nesterov and A. S. Nemirovskii. An interior-point method for generalized linear-fractional programming. Mathematical Programming, 69 (1995) 177–204.
[37] A. Punnen, and Y. Aneja. A tabu search algorithm for the resource-constrained as-
signment problem. Journal of the Operational Research Society, 46 (1995) 214-220.
[40] S. Raff. Routing and scheduling of vehicles and crews: The state of the art. Computers
& Operations Research, 10 (1983) 63–211.
[46] W. L. Winston, M. Venkataramanan, and J. B. Goldberg. Introduction to mathematical
programming. Thomson/Brooks/Cole Duxbury; Pacific Grove, CA, 2003.
[47] M. J. Yao, H. S. Soewandi, and S. E. Elmaghraby. Simple heuristics for the two-machine open shop problem with blocking. Journal of the Chinese Institute of Industrial Engineers, 17 (2000) 537–547.