By:
Fahim Ullah
Supervised By:
Dr. Noor Badshah
Session: 2010–2012
Acknowledgment
The overall credit for my creative and research work undoubtedly goes to God Almighty,
who has endowed me with the potential to carry it out and to complete it in due course
of time. The credit for a thorough and successful struggle on my part goes to the last
prophet of Allah, Muhammad (PBUH), and the other prophets, because they have always
been there for us as lighthouses.
I would like to acknowledge here the services of my supervisor, Dr. Noor Badshah,
who encouraged me greatly and put me on the track towards achieving my goal. The area
of my research was quite new to me and I had cold feet in the beginning, but
his gestures of ownership and love always proved a guiding tool for sustaining the spirit
of doing something rather than nothing. I would also like to appreciate the vision of the
doctor sahib, because he gave me a sense of how new concepts are developed and how these
concepts are used in various fields for the betterment of humanity in general. It would
be quite an injustice here not to mention the love and teaching spirit of my coursework
teachers, Dr. Sirajul Islam and Dr. Amjad Ali. They helped me a lot in uplifting my
spirit and endowed me with a winning heart in every kind of situation.
I am all prayers for my late parents, whose goodwill and continuous care enabled me
to overcome the various hazards of life with a winning forehead. They have a great
share in shaping and designing my dreams in life. My elder brother, Khalid Usman, also
deserves appreciation for shouldering family responsibilities and giving me a free hand
in utilizing my time and resources. The role of my second elder brother, Wajibullah
Khan, Assistant Professor of English at GPGC Kohat, is also unforgettable due to his
encouragement and early guidance. I would also like to appreciate the services of my
cousin, Mr. Rasool Muhammad, Assistant Professor of Statistics, who taught me the
basic concepts of Mathematics while I was a college student.
It is very difficult to end this acknowledgment without mentioning all of my research
fellows, whose co-operation and friendly behaviour highly encouraged me while
shouldering the task of doing research in Computational Numerical Analysis.
Contents
Dedication i
Acknowledgment ii
Abstract v
List of Figures vi
Publications ix
1 Introduction 1
1.1 Image Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 Mathematical Preludes 3
2.1 Metric and Metric Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Norm and Normed Space . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.3 Continuous and Digital Images . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4 Computer Vision & Image Processing . . . . . . . . . . . . . . . . . . . . 7
2.5 Iterative Methods for System of Equations . . . . . . . . . . . . . . . . . . 9
2.5.1 Basic Concepts about System of Equations: . . . . . . . . . . . . . 9
2.5.2 Splitting of Matrix: . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.5.3 Jacobi Iteration: . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.5.4 Weighted Jacobi Iteration: . . . . . . . . . . . . . . . . . . . . . . . 11
2.5.5 Gauss-Seidel Iteration . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.5.6 Successive Over Relaxation (SOR) Iteration . . . . . . . . . . . . . 12
2.6 Time Marching Iteration Schemes . . . . . . . . . . . . . . . . . . . . . . . 13
2.6.1 Explicit Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.6.2 Stability of the Explicit Scheme . . . . . . . . . . . . . . . . . . . . 15
2.6.3 Implicit Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.6.4 Stability of the Implicit Scheme . . . . . . . . . . . . . . . . . . . . 17
2.6.5 Crank-Nicolson Scheme . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6.6 Stability of the Crank-Nicolson Scheme . . . . . . . . . . . . . . . 18
2.6.7 Additive Operator Splitting (AOS) Scheme . . . . . . . . . . . . . 19
2.6.8 Additive Multiplicative Operator Splitting (AMOS) Scheme . . . . 20
Bibliography 56
Abstract
List of Figures
2.1 l2 norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 l∞ norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 (a) Grey image (b) Colour (RGB) image (c) Grey scale image is preserved
in computer memory in a form of 2-D array of numbers where each number
carries the intensity value in the range [0, 255]. (d) RGB scale image stored
in the form of array of vectors (r, g, b). Each colour has its own intensity
level at each pixel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 (a) discrete value table for the small rectangle in grey image 2.3(a) (b)
discrete value table of red channel for the small rectangle in (RGB) image
2.3(b) (c) discrete value table of green channel for the small rectangle in
(RGB) image 2.3(b) (d) discrete value table of blue channel for the small
rectangle in (RGB) image 2.3(b). . . . . . . . . . . . . . . . . . . . . . . . 8
4.1 The results of the SI, AOS and MG methods are given in rows 1, 2 and 3
respectively, whereas the results of channels 1, 2, 3 and the segmented result
of the three channels can be found in columns 1, 2, 3 and 4 respectively. . . 42
4.2 The results of the SI, AOS and MG methods are given in columns 1, 2
and 3 respectively, whereas the results of channels 1, 2, 3, 4 and the result of
the recovered object can be found in rows 1, 2, 3, 4 and 5 respectively. . . 43
4.3 The results of the SI, AOS and MG methods are given in rows 1, 2
and 3 respectively, whereas the initial contour, the result after the respective
number of iterations and the segmented results can be found in columns 1, 2 and 3
respectively. . . . 44
5.4 Segmenting an RGB image using the Chan-Vese Vector-Valued model (VVCV):
(a) For the initial contour we have taken the point x0 = 115 & y0 = 130 and
radius r0 = 40, size = 256 × 256. (b) Result of the VVCV model after 700
iterations. (c) Segmented result of the VVCV model . . . 53
5.5 Segmenting an RGB image using the proposed Coefficient of Variation model
(CoVVV): (a) For the initial contour we have taken the point x0 = 130 & y0 =
150 and radius r0 = 40, size = 256 × 256. (b) Result of the CoVVV model after
700 iterations. (c) Segmented result of the CoVVV model . . . 53
5.6 Segmenting an RGB image using the Chan-Vese Vector-Valued model (VVCV):
(a) For the initial contour we have taken the point x0 = 130 & y0 = 115 and
radius r0 = 40, size = 256 × 256. (b) Result of the VVCV model after 58 itera-
tions. (c) Segmented result of the VVCV model . . . 54
5.7 Segmenting an RGB image using the proposed Coefficient of Variation model
(CoVVV): (a) For the initial contour we have taken the point x0 = 100 & y0 =
100 and radius r0 = 45, size = 256 × 256. (b) Result of the CoVVV model after
700 iterations. (c) Segmented result of the CoVVV model . . . 54
List of Tables
4.1 Comparison of the SI, AOS and Multi-Grid methods on a real image
using the CV Vector-Valued model, with respect to the number of iterations
and CPU time in seconds. . . . 41
Publications
• Noor Badshah, Fahim Ullah and Haider Ali, “New Variational Model for Vector
Valued Image Segmentation”, International Conference on Modeling and Simula-
tions (ICOMS-2013)-Vol. II, Ver. 1.0, November 25–27, 2013.
• Fahim Ullah, Noor Badshah, “Fast Iterative Methods for Vector Valued Image Seg-
mentation Models”, Submitted 2014.
Chapter 1
Introduction
state.
In order to find the edges of an object, we take a contour in the targeted image. In
this connection, Osher and Sethian [34] developed the well-known level-set technique,
which represents the contour implicitly as the zero level of the level-set function.
Our main work in this thesis is on the CV vector-valued image segmentation model
[14]. The main objective is to improve the efficiency of the model in terms of its convergence
to the edges and its CPU time. For this purpose we modify the CV model [14] by using the
coefficient of variation rather than the variance in the fitting (fidelity) term of the model.
As a result, it gives better results in terms of CPU time and in segmenting low-contrast
objects and overlapping regions.
This chapter covers some preliminary-stage mathematical tools such as:
Chapter 3:
• Experimental results.
• Conclusion.
• Future strategy.
Chapter 2
Mathematical Preludes
Here our discussion will be about some useful definitions, examples and theorems,
which will serve as basic tools for the remaining chapters.
For any non-empty set of vectors (say) V, a metric may be explained as a function
d : V × V → ℝ subject to the following conditions:
• d(v1, v2) ≥ 0
• d(v1, v2) = 0 iff v1 = v2
• d(v1, v2) = d(v2, v1)
• d(v1, v3) ≤ d(v1, v2) + d(v2, v3) (triangle inequality)
Definition 2.2.2 (l1-norm or Taxicab-norm):
For any vector v = (v1, v2, · · · , vn) in ℝⁿ, the norm ‖v‖₁ = |v₁| + |v₂| + · · · + |vₙ|
is said to be the l1-norm or Taxicab-norm. This norm gives the distance from the origin to the
vector v in the form of a rectangular street grid.
Definition 2.2.3 (l2-norm or Euclidean-norm):
For any vector v = (v1, v2, · · · , vn) in the n-dimensional vector space ℝⁿ, the Euclidean-norm
can be defined as:

‖v‖ = √( Σ_{l=1}^{n} v_l² ) = √( v₁² + v₂² + · · · + vₙ² )   (2.1)

The Euclidean norm of a vector v gives its ordinary distance from the origin. Figure 2.1
shows examples of the l2-norm.
The norm on the n-dimensional complex space Cⁿ can be found in the following way:

‖z‖ = √( |z₁|² + |z₂|² + · · · + |zₙ|² ) = √( z₁z̄₁ + z₂z̄₂ + · · · + zₙz̄ₙ ).
Example 2.2.1
The Euclidean-norm of the vector v = (1, −3, 2) is ‖v‖₂ = √( 1² + (−3)² + 2² ) = √14 ≈ 3.74.
Figure 2.1: (a) 2-dimensional l2 norm (b) 3-dimensional l2 norm.
For any vector v of the n-dimensional vector space ℝⁿ, a norm of the type

‖v‖∞ = max_{1 ≤ l ≤ n} |v_l|

is known as the l∞-norm.
Figure 2.2: (a) 2-dimensional l∞ norm (b) 3-dimensional l∞ norm.
For any real number p > 1, a norm defined as

‖v‖_p = ( Σ_{l=1}^{n} |v_l|^p )^{1/p}   (2.2)

is known as the lp-norm.
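As a quick numerical check (not part of the thesis), the norms above can be computed with NumPy; the vector is the one from Example 2.2.1.

```python
import numpy as np

v = np.array([1.0, -3.0, 2.0])

l1 = np.sum(np.abs(v))           # l1 (Taxicab) norm: sum of absolute components
l2 = np.sqrt(np.sum(v ** 2))     # l2 (Euclidean) norm, eq (2.1): sqrt(14)
linf = np.max(np.abs(v))         # l-infinity norm: largest absolute component
p = 3
lp = np.sum(np.abs(v) ** p) ** (1.0 / p)   # general lp norm, eq (2.2)

# np.linalg.norm agrees with each hand-rolled value
assert np.isclose(l1, np.linalg.norm(v, 1))
assert np.isclose(l2, np.linalg.norm(v, 2))
assert np.isclose(linf, np.linalg.norm(v, np.inf))
assert np.isclose(lp, np.linalg.norm(v, p))
```
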
Theorem 2.2.1
A sequence of vectors {V^{(h)}} converges to a vector V ∈ ℝⁿ w.r.t. the infinity norm iff
lim_{h→∞} V_l^{(h)} = V_l for each l = 1, 2, · · · , n.
Example 2.2.2
Therefore, according to the statement of Theorem 2.2.1, the sequence {V^{(h)}} con-
verges to (5, 3, 0, 0)ᵗ w.r.t. the infinity norm.
It is more difficult to show directly that the sequence in Example 2.2.2 also converges
to (5, 3, 0, 0)ᵗ w.r.t. the 2-norm. For this purpose we take the help of the following
Theorem 2.2.2 and apply it to this special case.
Theorem 2.2.2
For any vector V ∈ ℝⁿ, ‖V‖∞ ≤ ‖V‖₂ ≤ √n ‖V‖∞, or in other words, convergence in
the 2-norm and convergence in the infinity norm are equivalent.
A real or complex normed space V of vectors is said to be a Banach space if every
Cauchy sequence (v_l) in V converges in V. Mathematically:

lim_{l→∞} v_l = v, i.e., lim_{l→∞} ‖v_l − v‖_V = 0.
Figure 2.3: (a) Grey image (b) Colour (RGB) image (c) Grey scale image is preserved
in computer memory in a form of 2-D array of numbers where each number carries the
intensity value in the range [0, 255]. (d) RGB scale image stored in the form of array of
vectors (r, g, b). Each colour has its own intensity level at each pixel.
Mx = b (2.3)
lim_{k→∞} x^{(k)} = x,
Figure 2.4: (a) discrete value table for the small rectangle in grey image 2.3(a) (b) discrete
value table of red channel for the small rectangle in (RGB) image 2.3(b) (c) discrete value
table of green channel for the small rectangle in (RGB) image 2.3(b) (d) discrete value
table of blue channel for the small rectangle in (RGB) image 2.3(b).
2.5.2 Splitting of Matrix:
The matrix M in eq (2.3) can, in different situations, be split in various ways. Meth-
ods like Jacobi, Weighted-Jacobi, Gauss-Seidel and Successive Over-Relaxation (SOR) split
the matrix into different forms. The splitting of a matrix is generally given below:

M = M1 + M2,   (2.4)

the convergence analysis mainly depends on the spectral radius of the iteration matrix

R = M1⁻¹ M2;

for more details the reader can consult [20, 22, 32, 45]. We first discuss the Jacobi
iteration, for which M1 = D, the diagonal part of M, and M2 = R, the off-diagonal
remainder, so that M = D + R.
(D + R)x = b
⇒ Dx + Rx = b
⇒ Dx = −Rx + b
⇒ x = −D−1 Rx + D−1 b
⇒ x = Qjac x + cjac , (2.6)
where Q_jac = −D⁻¹R and c_jac = D⁻¹b. Using the idea of eq (2.6), the ıth equation for
x_ı has the following equivalent form:

x_ı = Σ_{=1, ≠ı}^{n₁} (−a_ı x_) / a_ıı + b_ı / a_ıı,
we generate the kth iterate of the ıth equation, using the previous (k−1)th iterate, for
k ≥ 1:

x_ı^{(k)} = (1 / a_ıı) ( Σ_{=1, ≠ı}^{n₁} −a_ı x_^{(k−1)} + b_ı ),  for all ı = 1, 2, 3, · · · , n₁   (2.8)
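Eq (2.8) translates directly into a few lines of code. The sketch below is illustrative only: the 2 × 2 system is a made-up example, chosen strictly diagonally dominant so that, by Theorem 2.5.1 below, the iteration converges from any starting vector.

```python
import numpy as np

def jacobi(M, b, x0, iters=50):
    """Jacobi iteration, eq (2.8): every component of x^(k) is computed
    from the previous iterate x^(k-1) only."""
    D = np.diag(M)                  # diagonal entries a_ii
    R = M - np.diagflat(D)          # off-diagonal remainder
    x = x0.astype(float)
    for _ in range(iters):
        x = (b - R @ x) / D         # x = -D^{-1} R x + D^{-1} b, eq (2.6)
    return x

M = np.array([[4.0, 1.0], [2.0, 5.0]])  # strictly diagonally dominant
b = np.array([9.0, 13.0])
x = jacobi(M, b, np.zeros(2))
```
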
Theorem 2.5.1
If the square matrix M is strictly diagonally dominant, i.e.

|a_ıı| > Σ_{=1, ≠ı}^{n₁} |a_ı|  for all ı = 1, 2, · · · , n₁,   (2.9)

then M is nonsingular and the Jacobi iteration converges to the solution for any choice of
the initial vector x^{(0)}.
We use this value just as an intermediate value, and the new approximation is obtained
using the following equation:

x_ı^{(k)} = (1 − ω) x_ı^{(k−1)} + ω x̃_ı,   (2.10)

here ω is a selectable constant called the weighting parameter. Eq (2.10) becomes the
Jacobi iteration for ω = 1. In matrix form it reads:

x^{(k)} = Q_ω x^{(k−1)} + c_ω,

where Q_ω = (1 − ω)I + ωQ_jac and c_ω = ωc_jac. If one of the a_ıı entries is zero and the
matrix is nonsingular, a reordering of the equations can be performed so that a_ıı ≠ 0.
To speed up the convergence, the equations should be arranged so that a_ıı is as large as
possible.
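The weighted update of eq (2.10) is an equally small change. The following sketch (illustrative values; ω = 2/3 is a common smoothing choice, not prescribed by the thesis) reuses the same test system.

```python
import numpy as np

def weighted_jacobi(M, b, x0, omega=2.0 / 3.0, iters=100):
    """Weighted Jacobi, eq (2.10): blend the previous iterate with the
    intermediate Jacobi value x_tilde using the weighting parameter omega."""
    D = np.diag(M)
    R = M - np.diagflat(D)
    x = x0.astype(float)
    for _ in range(iters):
        x_tilde = (b - R @ x) / D                # plain Jacobi update
        x = (1.0 - omega) * x + omega * x_tilde  # eq (2.10); omega = 1 is Jacobi
    return x

M = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 13.0])
x = weighted_jacobi(M, b, np.zeros(2))
```
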
In the Gauss-Seidel iteration, the splitting of matrix eq (2.4) takes the following form:

M1 = L,  M2 = U   (2.11)

where L is a lower triangular matrix that contains the (non-zero) diagonal entries
and U is a strictly upper triangular matrix.
Using eq (2.11), eq (2.3) can be written in the following way:

(L + U)x = b
⇒ Lx + Ux = b
⇒ Lx = −Ux + b
⇒ x = −L⁻¹Ux + L⁻¹b
⇒ x = Q_gs x + c_gs,   (2.12)

where Q_gs = −L⁻¹U and c_gs = L⁻¹b. Using the idea of eq (2.11), the ıth equation for x_ı
has the following equivalent form:
x_ı = (1 / a_ıı) ( Σ_{<ı} (−a_ı x_) + Σ_{>ı} (−a_ı x_) + b_ı )  for all ı = 1, 2, 3, · · · , n₁

we generate the kth iterate of the ıth equation using fresh values, i.e., the kth iterates of
the first (ı − 1) equations and the (k−1)th iterates of the last (n₁ − ı) equations, for k ≥ 1:

x_ı^{(k)} = (1 / a_ıı) ( Σ_{<ı} −a_ı x_^{(k)} + Σ_{>ı} −a_ı x_^{(k−1)} + b_ı )  for all ı = 1, 2, 3, · · · , n₁   (2.14)
L⁻¹ can easily be computed due to the triangularity of the matrix M1 = L. The Gauss-
Seidel method above is also known as the forward Gauss-Seidel (FGS) iteration. In contrast
to the Jacobi iteration, the ordering of the unknowns plays a basic role in this method:
it updates the unknown vector x starting from the first coordinate. In the same way, we
can develop the backward Gauss-Seidel (BGS) iteration, which begins updating the
unknown vector x from the (n₁)th coordinate.
The splitting of M in the backward Gauss-Seidel iteration takes the following form:

M1 = U,  M2 = L
In BGS, U becomes an upper triangular matrix that contains the diagonal entries as well,
whereas the matrix L is a strictly lower triangular one.
Equation (2.15) is known as the symmetric Gauss-Seidel (SGS) iteration, which is the joint
venture of the two iterations: SGS is actually an FGS sweep followed by a BGS sweep.
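Both the FGS and BGS sweeps of eq (2.14) fit in one routine; the sketch below (with the same made-up test system as before) switches the sweep direction with a flag.

```python
import numpy as np

def gauss_seidel(M, b, x0, iters=30, backward=False):
    """FGS updates x from the first coordinate using fresh values, eq (2.14);
    BGS sweeps from the (n1)th coordinate down."""
    n = len(b)
    x = x0.astype(float)
    order = range(n - 1, -1, -1) if backward else range(n)
    for _ in range(iters):
        for i in order:
            s = M[i, :] @ x - M[i, i] * x[i]  # sum over j != i, fresh x reused
            x[i] = (b[i] - s) / M[i, i]
    return x

M = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 13.0])
x_f = gauss_seidel(M, b, np.zeros(2))
x_b = gauss_seidel(M, b, np.zeros(2), backward=True)
```
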
ωM = ω(D + L + U) = (D + ωL) + (ωU + (ω − 1)D)   (2.17)

where D, L & U are as defined above. Using eq (2.17), eq (2.16) can be arranged as follows:
ω(D + L + U )x = ωb
⇒ (D + ωL)x + (ωU + (ω − 1)D)x = ωb
⇒ (D + ωL)x = −(ωU + (ω − 1)D)x + ωb
⇒ x = −(D + ωL)−1 (ωU + (ω − 1)D)x
+ (D + ωL)−1 ωb
⇒ x = Qsor x + csor , (2.18)
where Q_sor = −(D + ωL)⁻¹(ωU + (ω − 1)D) and c_sor = (D + ωL)⁻¹ωb. Using the idea
of eq (2.18), we generate the kth iterate of the ıth equation, using fresh values, i.e., the kth
iterates of the first (ı − 1) equations and the (k−1)th iterates of the last (n₁ − ı) equations,
for k ≥ 1:
x_ı^{(k)} = (1 − ω) x_ı^{(k−1)} + (ω / a_ıı) ( Σ_{<ı} −a_ı x_^{(k)} + Σ_{>ı} −a_ı x_^{(k−1)} + b_ı )  ∀ ı = 1, 2, 3, · · · , n₁   (2.20)
where ω is a constant called the weighting factor. If ω ∈ (0, 1), SOR is called under-
relaxation, and it converges in some cases where Gauss-Seidel fails to converge. If ω > 1
(over-relaxation), the role of SOR is to speed up the convergence of a system which is also
convergent under the Gauss-Seidel iteration. The merit of the SOR iteration is its dramatic
improvement in convergence for a good choice of ω, whose selection is by itself a very
difficult task. Setting ω = 1, eq (2.18) reduces to eq (2.12), i.e., SOR becomes Gauss-Seidel
in this case.
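The SOR sweep of eq (2.20) wraps the Gauss-Seidel value in a relaxation step; a sketch under the same made-up test system (ω = 1.1 is an arbitrary illustrative choice, not an optimal value):

```python
import numpy as np

def sor(M, b, x0, omega=1.1, iters=40):
    """SOR sweep, eq (2.20): relax each Gauss-Seidel value by omega."""
    n = len(b)
    x = x0.astype(float)
    for _ in range(iters):
        for i in range(n):
            s = M[i, :] @ x - M[i, i] * x[i]
            x_gs = (b[i] - s) / M[i, i]               # Gauss-Seidel value
            x[i] = (1.0 - omega) * x[i] + omega * x_gs
    return x

M = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 13.0])
x = sor(M, b, np.zeros(2), omega=1.1)
```

With omega = 1.0 the routine reproduces Gauss-Seidel exactly, mirroring the reduction of eq (2.18) to eq (2.12).
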
vₜ + v vₓ = µ vₓₓ,

similarly,

vₓₓ + α vₓ + β v_y + γ v_{xy} = 0;

we need numerical methods for the solution of such problems. Before discussing the time-
marching iterative schemes, we describe some of the difference operators, written below:
Forward difference operator:  (vₓ)_{ı,} = (v_{ı+1,} − v_{ı,}) / h
Backward difference operator: (vₓ)_{ı,} = (v_{ı,} − v_{ı−1,}) / h
Central difference operator:  (vₓ)_{ı,} = (v_{ı+1,} − v_{ı−1,}) / (2h)
The above difference operators are taken along the x-direction, and similarly they can be
taken along the y-direction. We can present second order derivatives in the following way.
Second order difference operator along the x-axis:

(vₓₓ)_{ı,} = (v_{ı+1,} − 2v_{ı,} + v_{ı−1,}) / h²

Second order difference operator along the y-axis:

(v_{yy})_{ı,} = (v_{ı,+1} − 2v_{ı,} + v_{ı,−1}) / h²

Second order mixed derivative:

(v_{xy})_{ı,} = (v_{ı+1,+1} − v_{ı−1,+1} − v_{ı+1,−1} + v_{ı−1,−1}) / (4h²)
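The difference operators above can be sketched with NumPy array shifts. One assumption is made here for compactness: np.roll wraps around at the boundary (periodic boundaries), so only interior values match the formulas exactly; the function names are mine.

```python
import numpy as np

def forward_x(v, h=1.0):
    """(v_x) ~ (v[i+1, j] - v[i, j]) / h (forward difference along axis 0)."""
    return (np.roll(v, -1, axis=0) - v) / h

def backward_x(v, h=1.0):
    """(v_x) ~ (v[i, j] - v[i-1, j]) / h (backward difference)."""
    return (v - np.roll(v, 1, axis=0)) / h

def central_x(v, h=1.0):
    """(v_x) ~ (v[i+1, j] - v[i-1, j]) / (2h) (central difference)."""
    return (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2.0 * h)

def second_x(v, h=1.0):
    """(v_xx) ~ (v[i+1, j] - 2 v[i, j] + v[i-1, j]) / h**2."""
    return (np.roll(v, -1, axis=0) - 2.0 * v + np.roll(v, 1, axis=0)) / h ** 2
```

The y-direction versions are identical with axis=1, and the mixed derivative combines rolls along both axes.
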
Now we present the time-marching iterative schemes that can be used for parabolic
differential equations. We will also bring their stability under discussion. For further
details one can consult [26, 31, 36, 50].
2.6.1 Explicit Scheme
The explicit scheme calculates the approximation of a system at a later time on the basis
of the approximation of the system at the current time. In order to explain the explicit
scheme, we present the 1-dimensional heat equation:

∂v(x, t)/∂t = β ∂²v(x, t)/∂x²,  x ∈ [0, π],  t > 0   (2.21)

where β is a positive constant; the initial and boundary conditions for the above equation
(2.21) are as under:

v(x, 0) = φ(x),
v(0, t) = 0,  v(π, t) = 0  for t > 0,   (2.22)

the discretized form of eq (2.21) (starting from the initial time and space values t₀ = 0,
x₀ = 0 and taking the increments ∆t and ∆x of the variables t and x respectively) will be:

v_^{(k+1)} = v_^{(k)} + (β∆t / (∆x)²) ( v_{+1}^{(k)} − 2v_^{(k)} + v_{−1}^{(k)} ).
The explicit scheme is not time-consuming computationally. However, it is only
conditionally stable, and its stability is discussed in the next section.
A_c e^{ιc∆x} ξ^{(k+1)} = A_c e^{ιc∆x} ξ^{(k)} + (β∆t / (∆x)²) ( A_c e^{ιc(+1)∆x} ξ^{(k)} − 2A_c e^{ιc∆x} ξ^{(k)} + A_c e^{ιc(−1)∆x} ξ^{(k)} )
⇒ ξ = 1 + (β∆t / (∆x)²) ( e^{ιc∆x} − 2 + e^{−ιc∆x} )
⇒ ξ = 1 − (2β∆t / (∆x)²) ( 1 − (e^{ιc∆x} + e^{−ιc∆x}) / 2 )
⇒ ξ = 1 − (2β∆t / (∆x)²) ( 1 − cos(c∆x) )
⇒ ξ = 1 − (4β∆t / (∆x)²) sin²(c∆x / 2).   (2.31)
Now we check whether the function v^{(k)} (as shown in (2.30)) satisfies the IBVP given
in (2.27) and (2.28), using A_c as given in (2.25). Since 2β∆t/(∆x)² > 0 (all three
quantities are positive), we get the following relations:

1 − 4β∆t/(∆x)² ≤ 1 − (4β∆t/(∆x)²) sin²(c∆x/2) = ξ(c) ≤ 1
⇒ |ξ(c)| ≤ 1  iff  1 − 4β∆t/(∆x)² ≥ −1,   (2.32)

i.e., iff β∆t/(∆x)² ≤ 1/2. Therefore, we conclude that the explicit scheme is conditionally
stable, and eq (2.32) is the stability condition for the scheme.
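A minimal sketch of the explicit scheme for the IBVP (2.21)-(2.22), with the stability condition (2.32) enforced up front; the grid size, time step and initial data are illustrative choices, not from the thesis.

```python
import numpy as np

def explicit_heat(phi, beta, dx, dt, steps):
    """Explicit scheme for v_t = beta*v_xx on [0, pi] with v = 0 at both ends
    (eqs (2.21)-(2.22)); refuses to run if the stability condition (2.32),
    i.e. beta*dt/dx**2 <= 1/2, is violated."""
    assert beta * dt / dx ** 2 <= 0.5, "stability condition (2.32) violated"
    v = phi.astype(float).copy()
    r = beta * dt / dx ** 2
    for _ in range(steps):
        v[1:-1] = v[1:-1] + r * (v[2:] - 2.0 * v[1:-1] + v[:-2])
        v[0] = v[-1] = 0.0          # boundary conditions (2.22)
    return v

n = 50
x = np.linspace(0.0, np.pi, n + 1)
dx = x[1] - x[0]
dt = 0.4 * dx ** 2                  # beta = 1: r = 0.4 satisfies (2.32)
v = explicit_heat(np.sin(x), beta=1.0, dx=dx, dt=dt, steps=200)
t = 200 * dt
```

For φ(x) = sin x and β = 1 the exact solution is e^{−t} sin x, which the computed v tracks closely.
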
2.6.3 Implicit Scheme
In the implicit scheme, we calculate the approximation of a system of equations at a later
time on the basis of both approximations, i.e., the current one and the later one. Here we
again use the 1-dimensional heat equation (2.21) in order to explain the implicit scheme.
The discretized form of eq (2.21), with t₀ = 0, x₀ = 0 and taking the increments ∆t and
∆x of the variables t and x respectively, is:

( v_^{(k+1)} − v_^{(k)} ) / ∆t = β ( v_{+1}^{(k+1)} − 2v_^{(k+1)} + v_{−1}^{(k+1)} ) / (∆x)²
⇒ v_^{(k+1)} − (β∆t / (∆x)²) ( v_{+1}^{(k+1)} − 2v_^{(k+1)} + v_{−1}^{(k+1)} ) = v_^{(k)}   (2.33)
A_c e^{ιc∆x} ξ^{(k+1)} − (β∆t / (∆x)²) ( A_c e^{ιc(+1)∆x} ξ^{(k+1)} − 2A_c e^{ιc∆x} ξ^{(k+1)} + A_c e^{ιc(−1)∆x} ξ^{(k+1)} ) = A_c e^{ιc∆x} ξ^{(k)}
⇒ ξ ( 1 − (β∆t / (∆x)²) ( e^{ιc∆x} − 2 + e^{−ιc∆x} ) ) = 1
⇒ ξ ( 1 + (4β∆t / (∆x)²) sin²(c∆x / 2) ) = 1
⇒ ξ = 1 / ( 1 + (4β∆t / (∆x)²) sin²(c∆x / 2) )   (2.34)
In eq (2.34) we see that the RHS is positive; therefore it can also be written as under:

|ξ| = 1 / ( 1 + (4β∆t / (∆x)²) sin²(c∆x / 2) )   (2.35)

since 1 + (4β∆t/(∆x)²) sin²(c∆x/2) ≥ 1, we have 1 / ( 1 + (4β∆t/(∆x)²) sin²(c∆x/2) ) ≤ 1,
and consequently eq (2.35) can be written as:

|ξ| ≤ 1,   (2.36)

eq (2.36) is satisfied for every selection of ∆x and ∆t; therefore we conclude that the
implicit scheme is unconditionally stable.
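The implicit step of eq (2.33) amounts to solving one tridiagonal linear system per time step. The sketch below (a dense solve for brevity; a tridiagonal solver would be the efficient choice) uses a time step twenty times the explicit limit, which the explicit scheme could not survive.

```python
import numpy as np

def implicit_heat(phi, beta, dx, dt, steps):
    """Implicit scheme, eq (2.33): solve (I - r*T) v^{k+1} = v^{k} at every
    step, T being the second-difference matrix on the interior points.
    Unconditionally stable (eq 2.36), so dt can exceed the explicit limit."""
    n = len(phi) - 2
    r = beta * dt / dx ** 2
    T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - r * T
    v = phi.astype(float).copy()
    for _ in range(steps):
        v[1:-1] = np.linalg.solve(A, v[1:-1])
        v[0] = v[-1] = 0.0
    return v

n = 50
x = np.linspace(0.0, np.pi, n + 1)
dx = x[1] - x[0]
dt = 10.0 * dx ** 2            # r = 10: far beyond the explicit limit of 1/2
v = implicit_heat(np.sin(x), beta=1.0, dx=dx, dt=dt, steps=20)
t = 20 * dt
```
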
2.6.5 Crank-Nicolson Scheme
As we noticed in section (2.6.2), the explicit scheme, although simple, is stable only for
short time intervals, i.e., it is conditionally stable. In section (2.6.4) we noticed
that the implicit scheme is unconditionally stable for all choices of the time interval, but it
is time-consuming. Therefore a new scheme, called the Crank-Nicolson scheme, is introduced,
which is a combination of the explicit and implicit schemes. The Crank-Nicolson scheme
for the 1-D heat equation (2.21) can be written as:
( v_^{(k+1)} − v_^{(k)} ) / ∆t = β { α ( v_{+1}^{(k+1)} − 2v_^{(k+1)} + v_{−1}^{(k+1)} ) / (∆x)² + (1 − α) ( v_{+1}^{(k)} − 2v_^{(k)} + v_{−1}^{(k)} ) / (∆x)² }
⇒ v_^{(k+1)} = v_^{(k)} + (∆tαβ / (∆x)²) ( v_{+1}^{(k+1)} − 2v_^{(k+1)} + v_{−1}^{(k+1)} ) + (∆t(1 − α)β / (∆x)²) ( v_{+1}^{(k)} − 2v_^{(k)} + v_{−1}^{(k)} )   (2.37)
where α ∈ [0, 1], for α = 0 the above scheme reduces to the explicit scheme (2.26) and
for α = 1, the scheme reduces to implicit form (2.33) [7, 46].
⇒ A_c e^{ιc∆x} ξ^{(k+1)} − (∆tαβ / (∆x)²) ( A_c e^{ιc(+1)∆x} ξ^{(k+1)} − 2A_c e^{ιc∆x} ξ^{(k+1)} + A_c e^{ιc(−1)∆x} ξ^{(k+1)} )
= A_c e^{ιc∆x} ξ^{(k)} + (∆t(1 − α)β / (∆x)²) ( A_c e^{ιc(+1)∆x} ξ^{(k)} − 2A_c e^{ιc∆x} ξ^{(k)} + A_c e^{ιc(−1)∆x} ξ^{(k)} )
⇒ ξ ( 1 − (∆tαβ / (∆x)²) ( e^{ιc∆x} − 2 + e^{−ιc∆x} ) ) = 1 + (∆t(1 − α)β / (∆x)²) ( e^{ιc∆x} − 2 + e^{−ιc∆x} )
⇒ ξ ( 1 + (4∆tαβ / (∆x)²) sin²(c∆x / 2) ) = 1 − (4∆t(1 − α)β / (∆x)²) sin²(c∆x / 2)
⇒ ξ = ( 1 − (4∆t(1 − α)β / (∆x)²) sin²(c∆x / 2) ) / ( 1 + (4∆tαβ / (∆x)²) sin²(c∆x / 2) )   (2.38)
since

1 − (4∆t(1 − α)β / (∆x)²) sin²(c∆x / 2) ≤ 1 + (4∆tαβ / (∆x)²) sin²(c∆x / 2),

therefore,

( 1 − (4∆t(1 − α)β / (∆x)²) sin²(c∆x / 2) ) / ( 1 + (4∆tαβ / (∆x)²) sin²(c∆x / 2) ) ≤ 1,
consequently it can be deduced from eq (2.38) that:

|ξ| ≤ 1,   (2.39)

eq (2.39) is satisfied for each choice of ∆x and ∆t; therefore we conclude that the Crank-
Nicolson scheme is also unconditionally stable.
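The same machinery gives the weighted scheme of eq (2.37); a sketch with α as a parameter (α = 1/2, the Crank-Nicolson average, by default; grid and time step are illustrative choices).

```python
import numpy as np

def crank_nicolson_heat(phi, beta, dx, dt, steps, alpha=0.5):
    """Weighted scheme of eq (2.37): alpha = 0 gives the explicit scheme,
    alpha = 1 the implicit one, and alpha = 1/2 the Crank-Nicolson average."""
    n = len(phi) - 2
    r = beta * dt / dx ** 2
    T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - alpha * r * T          # implicit part of (2.37)
    B = np.eye(n) + (1.0 - alpha) * r * T  # explicit part of (2.37)
    v = phi.astype(float).copy()
    for _ in range(steps):
        v[1:-1] = np.linalg.solve(A, B @ v[1:-1])
        v[0] = v[-1] = 0.0
    return v

n = 50
x = np.linspace(0.0, np.pi, n + 1)
dx = x[1] - x[0]
dt = 10.0 * dx ** 2
v = crank_nicolson_heat(np.sin(x), beta=1.0, dx=dx, dt=dt, steps=20)
t = 20 * dt
```

With the same large time step as the implicit example, the α = 1/2 average lands noticeably closer to the exact solution e^{−t} sin x.
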
vₜ = ∇·(G∇v) + g   (2.40)

where G = 1/|∇v|, 0 ≤ t ≤ T and Ω ⊆ ℝᵈ, with initial and boundary conditions:

v(0, ·) = v₀  in Ω,
∂v/∂n⃗ = 0  on ∂Ω,   (2.41)
considering the first term on the RHS of eq (2.40) and dropping the reaction term g, we
get an Euler time-stepping equation that is implicit w.r.t. the time discretization and
semi-implicit w.r.t. the spatial differences:

v^{(k+1)} = ( I − (∆t / ∆x²) M )⁻¹ v^{(k)},  k = 1, 2, · · ·   (2.42)

here v^{(k)} is an n₁ᵈ-dimensional vector, and the matrices I and M are of size n₁ᵈ × n₁ᵈ.
Using the tensor product, the matrix M can be split in the following way:

M = M₁ ⊗ M₂ ⊗ · · · ⊗ M_d,

where the tensor (Kronecker) product Mı ⊗ M replaces each entry α_pq of Mı by the
block α_pq M, e.g.

Mı ⊗ M = ( α₁₁M  α₁₂M  α₁₃M ; α₂₁M  α₂₂M  α₂₃M ; · · · ).
In the AOS scheme we perform an additive decomposition of the evolution matrix M.
The process is shown in the following equation:

v^{(k+1)} = (1/d) Σ_{ı=1}^{d} ( I − d (∆t / ∆x²) Mı )⁻¹ v^{(k)},  k = 1, 2, · · ·   (2.43)

since in the AOS scheme we split the d-dimensional system into d one-dimensional systems,
each evolution matrix Mı arising from the system is a tridiagonal matrix, which makes
the method fast compared to the existing explicit, implicit and semi-implicit methods.
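The reason AOS is fast is that each factor (I − d(∆t/∆x²)Mı) in eq (2.43) is tridiagonal, hence invertible in O(n) per row by the Thomas algorithm. The sketch below is a simplified illustration only: it applies one AOS step to plain 2-D linear diffusion, i.e., it assumes G = 1 instead of G = 1/|∇v|, with Neumann-like (reflecting) ends; the function names and grid values are made up.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def aos_step(v, tau):
    """One AOS step, eq (2.43) with d = 2: average of two 1-D implicit
    solves, one per axis, each reduced to tridiagonal systems."""
    d = 2
    out = np.zeros_like(v)
    for axis in range(d):
        u = np.moveaxis(v, axis, 0)
        n = u.shape[0]
        # rows of (I - d*tau*M_axis), M_axis = 1-D Laplacian, Neumann ends
        b = np.full(n, 1.0 + 2.0 * d * tau)
        b[0] = b[-1] = 1.0 + d * tau
        a = np.full(n, -d * tau)
        c = np.full(n, -d * tau)
        r = np.empty_like(u)
        for j in range(u.shape[1]):
            r[:, j] = thomas(a, b, c, u[:, j])
        out += np.moveaxis(r, 0, axis)
    return out / d

rng = np.random.default_rng(0)
v0 = rng.random((16, 16))
v1 = aos_step(v0, tau=0.5)
```

Diffusion with reflecting ends preserves the mean grey value and smooths the data, which gives a quick sanity check of the step.
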
Similarly, the discretization of eq (2.40) along the other direction (say the y-direction) is
shown in the following equation:

( I − (∆t / ∆x²) M₂(v^{(k)}) ) v^{(k+1)} = v^{(k)},  k = 1, 2, · · ·   (2.45)
changing the order of eq (2.46), it can also be written in the following way:

( I − (∆t / ∆x²) M₂(v^{(k)}) ) ( I − (∆t / ∆x²) M₁(v^{(k)}) ) v^{(k+1)} = v^{(k)},
⇒ v^{(k+1)} = ( I − (∆t / ∆x²) M₁(v^{(k)}) )⁻¹ ( I − (∆t / ∆x²) M₂(v^{(k)}) )⁻¹ v^{(k)},  k = 1, 2, · · ·   (2.47)
taking the average of eq (2.46) and eq (2.47), we get the following form:

v^{(k+1)} = (1/2) { ( I − (∆t/∆x²) M₂(v^{(k)}) )⁻¹ ( I − (∆t/∆x²) M₁(v^{(k)}) )⁻¹
+ ( I − (∆t/∆x²) M₁(v^{(k)}) )⁻¹ ( I − (∆t/∆x²) M₂(v^{(k)}) )⁻¹ } v^{(k)},   (2.48)

eq (2.48) is known as the AMOS scheme. The advantage of the AMOS scheme is that it
gives better first-order accuracy with the semi-implicit scheme than the respective AOS
scheme. The AMOS scheme is also more advantageous than the corresponding AOS scheme
in achieving second-order accuracy in each direction when the Crank-Nicolson scheme
is used.
Chapter 3
In this chapter, we include some of the models which are commonly used in image seg-
mentation. All these models take an image as their input and output data. Our focus
in this chapter is on the variational (i.e., scalar and vector-valued) models
that are used in image segmentation.
The regularization term keeps u smooth inside Ω \ C. The last term is the
constraint on the discontinuities (edges) C, and its work is to keep the boundary as short
as possible. The existence and regularity of the minimizers and theoretical results for eq (3.1)
can be found in detail in [18, 28, 29] and [30].
Model (3.1) reduces to another form by assuming u to be a piecewise constant function
inside the different connected closed regions Ω_q, i.e., u = a_q in each and every closed
region Ω_q, where

Ω = ⋃_{q=1}^{n_o} Ω̄_q,  and  ⋂_q Ω_q = ∅,   (3.2)

here n_o denotes the number of different connected closed regions and Ω_q stands for the
interior of Ω̄_q [44]. Using the idea of eq (3.2), the Mumford-Shah model (3.1) takes the
following form:

E^{MS}_{CR}(u, C) = β Σ_q ∫_{Ω_q} |I − a_q|² dxdy + α|C|,   (3.3)

we consider the problem (3.3) mostly in 2 dimensions. Keeping C fixed and minimizing
eq (3.3) w.r.t. a_q, the mean intensity value a_q can be expressed in the following way:
a_q = ( ∫_{Ω_q} I dxdy ) / ( ∫_{Ω_q} dxdy )
Despite the fact that the segmentation model (3.1) extracts all the noticeable parts in the
given image, some parts can be more important than others depending on the application,
as in medical imaging. Thus we feel the need for another type of segmentation
model. In this regard we use the variational models whose work is to detect edges in
the image.
the functional (3.6) is a length term weighting the Euclidean length element ds through
the function g, which gives information about the edges of the object [3], where
g is a function known as the edge detector, defined below:

g(∇(I ∗ G_σ)) = 1 / ( 1 + γ |∇(I ∗ G_σ)|² ),

here G_σ = (1 / 2πσ²) exp( −((x − µ_x)² + (y − µ_y)²) / 2σ² ) is a Gaussian, σ is its stan-
dard deviation, γ is a positive constant, µ_x, µ_y are mean values, and I ∗ G_σ is a smoothed
version of the image I.
In the next section, we present another type of active contour model that does not depend
on the edge function [16].
here a and b are the average intensity values inside and outside the contour
C respectively, whereas the unknown quantity C is an evolving curve. To minimize (3.7),
a regularization term consisting of the length of C and the area inside the
contour C, as used in [16], is added, and consequently the following energy functional is
obtained:
where µ, ν > 0, λ₁, λ₂ > 0 are constant coefficients, a and b are the unknown inner
and outer mean intensities respectively, C is an n-D hypersurface, and length(C) is
the (n−1)-D Hausdorff measure H^{n−1}(C). The Chan-Vese model [16] is said to be a
piecewise constant Mumford-Shah segmentation model [30]. The Chan-Vese model is
restricted to dividing an image into two regions only.
in order to replace the variable curve by the whole region Ω ⊂ ℝ², we define the Heaviside
and Dirac delta functions respectively in the following equations:

H(x) = { 1 if x ≥ 0; 0 if x < 0 },  and  δ(x) = dH(x)/dx,

expressing each term of the energy functional in terms of the level set function Ψ, we get
the following equations:

length(C) = length(Ψ = 0) = ∫_Ω |∇H(Ψ)| dxdy = ∫_Ω δ(Ψ)|∇Ψ| dxdy,
u = a · H(Ψ) + b · H(−Ψ),
keeping Ψ fixed and minimizing eq (3.9) w.r.t. the two unknown quantities a and b, we
have:

a = ( ∫_Ω I · H(Ψ) dxdy ) / ( ∫_Ω H(Ψ) dxdy ),

in the case when ∫_Ω H(Ψ) dxdy > 0, which means that the curve has a non-empty interior
in Ω; otherwise, we would have to reconstruct the level set formulation.
Similarly,

b = ( ∫_Ω I · H(−Ψ) dxdy ) / ( ∫_Ω H(−Ψ) dxdy ),

in the case when ∫_Ω H(−Ψ) dxdy > 0, which means that the curve has a non-empty
exterior in Ω; otherwise, we would have to reconstruct the level set function Ψ.
In order to obtain an Euler-Lagrange equation in Ψ, we take regularized forms of the
Heaviside H and Dirac delta δ functions, represented by H_ε and δ_ε respectively,
because H is not differentiable at the point 0. We take H_ε and δ_ε as in [9, 16, 17], i.e.,

H_ε(y) = (1/2) ( 1 + (2/π) tan⁻¹(y/ε) ),
δ_ε(y) = H_ε′(y) = ε / ( π (ε² + y²) ),
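The regularized pair H_ε, δ_ε is straightforward to implement; a small sketch (the function names are mine, not the thesis's) checking the basic properties: H_ε(0) = 1/2, H_ε tends to 0/1 away from the interface, and δ_ε behaves like a unit-mass bump around 0.

```python
import numpy as np

def heaviside_eps(y, eps=1.0):
    """Regularized Heaviside: H_eps(y) = (1/2)(1 + (2/pi) arctan(y/eps))."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(y / eps))

def delta_eps(y, eps=1.0):
    """Its derivative, the regularized Dirac delta: eps / (pi (eps^2 + y^2))."""
    return eps / (np.pi * (eps ** 2 + y ** 2))

y = np.linspace(-5.0, 5.0, 1001)
# Riemann sum of delta_eps approaches 1 as eps shrinks (a mollified delta)
mass = np.sum(delta_eps(y, eps=0.1)) * (y[1] - y[0])
```
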
setting a, b fixed and minimizing eq (3.10) w.r.t. Ψ in order to find its Euler-Lagrange
equation; for this purpose we take the help of the Gâteaux derivative of E_ε, i.e.,

lim_{τ→0} (1/τ) { E_ε(Ψ + τΦ, a, b) − E_ε(Ψ, a, b) } = 0,   (3.11)
using eq (3.13), eq (3.12) takes the following form:

∫_Ω µ δ_ε′(Ψ)|∇Ψ| Φ dxdy + µ ∫_Ω ( −δ_ε′(Ψ) ∇Ψ · (∇Ψ/|∇Ψ|) Φ − δ_ε(Ψ) ∇·(∇Ψ/|∇Ψ|) Φ ) dxdy
+ µ ∫_{∂Ω} ( δ_ε(Ψ)/|∇Ψ| ) (∂Ψ/∂n) Φ ds + ∫_Ω δ_ε(Ψ) { ν + λ₁(I − a)² − λ₂(I − b)² } Φ dxdy = 0,

⇒ − ∫_Ω µ δ_ε(Ψ) ∇·(∇Ψ/|∇Ψ|) Φ dxdy + µ ∫_{∂Ω} ( δ_ε(Ψ)/|∇Ψ| ) (∂Ψ/∂n) Φ ds
+ ∫_Ω δ_ε(Ψ) { ν + λ₁(I − a)² − λ₂(I − b)² } Φ dxdy = 0,

where Φ is a test function; since the choice of Φ is arbitrary, we obtain the Euler-Lagrange
equation, given as follows:

δ_ε(Ψ) { µ ∇·(∇Ψ/|∇Ψ|) − ν − λ₁(I − a)² + λ₂(I − b)² } = 0  in Ω,   (3.14)
( δ_ε(Ψ)/|∇Ψ| ) ∂Ψ/∂n⃗ = 0  ⇒  ∂Ψ/∂n⃗ = 0  on the boundary of Ω.
the authors considered the steady-state solution of the above system (3.14) in [16],
and as a result the following parabolic equation is deduced:

∂Ψ/∂t = δ_ε(Ψ) { µ ∇·(∇Ψ/|∇Ψ|) − ν − λ₁(I − a)² + λ₂(I − b)² }  in Ω,
Ψ(t, x, y)|_{t=0} = Ψ₀(x, y)  in Ω,   (3.15)
( δ_ε(Ψ)/|∇Ψ| ) ∂Ψ/∂n⃗ = 0  ⇒  ∂Ψ/∂n⃗ = 0  on the boundary of Ω.

This is called the evolution equation of the CV model [16], which is discretized
through the semi-implicit method in the next section.
∂Ψ/∂t = µ · δ_ε(Ψ) ∇·(∇Ψ/|∇Ψ|) + f   (3.16)

where,
the discretization of eq (3.16) takes the form:

( Ψ^{(k+1)}_{ı,} − Ψ^{(k)}_{ı,} ) / ∆t = δ_ε(Ψ^{(k)}_{ı,}) µ { (1/h₁²) ∆ˣ₋ ( ∆ˣ₊Ψ^{(k+1)}_{ı,} / √( (∆ˣ₊Ψ^{(k)}_{ı,}/h₁)² + (∆ʸ₊Ψ^{(k)}_{ı,}/h₂)² + β ) )
+ (1/h₂²) ∆ʸ₋ ( ∆ʸ₊Ψ^{(k+1)}_{ı,} / √( (∆ˣ₊Ψ^{(k)}_{ı,}/h₁)² + (∆ʸ₊Ψ^{(k)}_{ı,}/h₂)² + β ) ) } + f_{ı,}   (3.18)

where the differences ∆ˣ₊, ∆ˣ₋, ∆ʸ₊, ∆ʸ₋ are given by:

∆ˣ₊Ψ^{(k)}_{ı,} = Ψ^{(k)}_{ı+1,} − Ψ^{(k)}_{ı,},   ∆ˣ₋Ψ^{(k)}_{ı,} = Ψ^{(k)}_{ı,} − Ψ^{(k)}_{ı−1,},
∆ʸ₊Ψ^{(k)}_{ı,} = Ψ^{(k)}_{ı,+1} − Ψ^{(k)}_{ı,},   ∆ʸ₋Ψ^{(k)}_{ı,} = Ψ^{(k)}_{ı,} − Ψ^{(k)}_{ı,−1},   (3.19)
which implies that:

Ψ^{(k+1)}_{ı,} = Ψ^{(k)}_{ı,} + µ∆t δ_ε(Ψ^{(k)}_{ı,}) { H^{(k)}_{ı−1,} Ψ^{(k+1)}_{ı−1,} + H^{(k)}_{ı,−1} Ψ^{(k+1)}_{ı,−1}
− ( H^{(k)}_{ı−1,} + H^{(k)}_{ı,−1} + 2H^{(k)}_{ı,} ) Ψ^{(k+1)}_{ı,} + H^{(k)}_{ı,} Ψ^{(k+1)}_{ı+1,} + H^{(k)}_{ı,} Ψ^{(k+1)}_{ı,+1} } + ∆t f_{ı,}.   (3.21)

The functionals H_{ı,}, H_{ı−1,} and H_{ı,−1} have been frozen at the kth iterate, and thus
equation (3.21) becomes a linear system of equations which can be solved by iterative methods.
Although the SI method is unconditionally stable even for large time steps, its main flaw
is its computational cost for large-sized images.
in order to find an Euler-Lagrange equation, we keep the mean intensities a₁, b₁, a₂, b₂
fixed and minimize eq (3.22) with respect to the level set function Ψ using the Gâteaux
derivative:

lim_{τ→0} (1/τ) { E_ε(Ψ + τΦ, a₁, b₁, a₂, b₂) − E_ε(Ψ, a₁, b₁, a₂, b₂) } = 0,   (3.23)

now, using the same process as in section (3.3.1), we get the following equation:

∂Ψ/∂t = δ_ε(Ψ) [ −λ₁(I − a₁)² − λ₂(I* − b₁)² + λ₁(I − a₂)² + λ₂(I* − b₂)² ]
+ [ µ δ_ε(Ψ) ∇·(∇Ψ/|∇Ψ|) + ∇²Ψ − ∇·(∇Ψ/|∇Ψ|) ],
Ψ(x, y, 0) = Ψ₀(x, y)  in Ω.
The LCV model does well even in images having inhomogeneity problems, but the
performance of the model is not satisfactory on images obtained with low frequencies,
unilluminated objects, or overlapping regions of homogeneous intensities. For this purpose
we develop a coefficient-of-variation based variational model (see section 5.2).
3.5 Active Contour Without Edges (Vector-Valued Case)
This model is the extension of the Chan-Vese model [16]. This model [14] also has some
properties in common with the CV model, such as not depending on the gradient of the
image I; consequently it can detect edges both with and without gradient. The fidelity term
of the model is as follows:

E₂(C, a, b) = (1/N) Σ_{ℓ=1}^{N} ∫_{in(C)} λℓ⁺ (Iℓ − aℓ)² dxdy + (1/N) Σ_{ℓ=1}^{N} ∫_{out(C)} λℓ⁻ (Iℓ − bℓ)² dxdy,   (3.24)

where λℓ⁺, λℓ⁻ > 0 and µ, ν > 0 are constant parameters. Just like the Chan-Vese model [16],
this model is also a piecewise constant Mumford-Shah segmentation model [30].
$$\mu\,\delta_\epsilon(\Psi)\,\nabla\cdot\frac{\nabla\Psi}{|\nabla\Psi|}-\nu\,\delta_\epsilon(\Psi)-\delta_\epsilon(\Psi)\,\frac{1}{N}\sum_{\ell=1}^{N}\Big[\lambda^+_\ell(I_\ell-a_\ell)^2-\lambda^-_\ell(I_\ell-b_\ell)^2\Big]=0,\tag{3.26}$$
keeping $\nu=0$, equation (3.26) is solved via the following evolution equation, whose steady state satisfies (3.26):
$$\frac{\partial\Psi}{\partial t}=\delta_\epsilon(\Psi)\Big\{\mu\,\nabla\cdot\frac{\nabla\Psi}{|\nabla\Psi|}-\frac{1}{N}\sum_{\ell=1}^{N}\lambda^+_\ell(I_\ell-a_\ell)^2+\frac{1}{N}\sum_{\ell=1}^{N}\lambda^-_\ell(I_\ell-b_\ell)^2\Big\}\quad\text{in }\Omega,\tag{3.27}$$
$$\frac{\delta_\epsilon(\Psi)}{|\nabla\Psi|}\frac{\partial\Psi}{\partial\vec n}=0\quad\text{on }\partial\Omega,\tag{3.28}$$
where $\vec n$ is a unit normal on the boundary of $\Omega$.
To solve the above evolution problem (3.27)--(3.28) numerically, we use the finite difference scheme used in [16].
The CV vector-valued model works well on images with homogeneous regions, but its performance is not adequate on images with low frequencies, unilluminated objects, or overlapping regions of homogeneous intensities. For this purpose we develop a coefficient of variation based vector-valued model (see section 5.2).
Chapter 4
In this chapter we present the Semi-Implicit (SI) and Additive Operator Splitting (AOS) methods for the discretization of the Euler--Lagrange equation (3.27) of the Chan--Vese vector-valued model [14]. We also propose a Multi-Grid (MG) method for the said model and compare the results of MG with the SI and AOS methods.
$$\frac{\partial\Psi}{\partial t}=\mu\,\delta_\epsilon(\Psi)\,\nabla\cdot\frac{\nabla\Psi}{|\nabla\Psi|}+f,\tag{4.1}$$
where
$$f=\delta_\epsilon(\Psi)\,\frac{1}{N}\sum_{\ell=1}^{N}\Big\{-\lambda^+_\ell\big(I_\ell-a_\ell\big)^2+\lambda^-_\ell\big(I_\ell-b_\ell\big)^2\Big\}.\tag{4.2}$$
To discretize the above equation in $\Psi$, we use a semi-implicit finite difference scheme. We take the observed image $I_\ell$ as $n_1\times n_2$ pixels of size $h_1\times h_2$, where $h_1=1/n_1$ and $h_2=1/n_2$. Each pixel represents the average light intensity over a small rectangular portion. Thus the $(i,j)$-th grid point is located at $(x_i,y_j)=\big((i-\tfrac12)h_1,(j-\tfrac12)h_2\big)$. Using the finite difference scheme, the discretization of the above equation takes the form:
$$\begin{aligned}\frac{\Psi^{(k+1)}_{i,j}-\Psi^{(k)}_{i,j}}{\Delta t}=\delta_\epsilon\big(\Psi^{(k)}_{i,j}\big)\Bigg[&\frac{\mu}{h_1^2}\,\Delta^x_-\Bigg(\frac{\Delta^x_+\Psi^{(k+1)}_{i,j}}{\sqrt{\big(\Delta^x_+\Psi^{(k)}_{i,j}/h_1\big)^2+\big(\Delta^y_+\Psi^{(k)}_{i,j}/h_2\big)^2+\beta}}\Bigg)\\&+\frac{\mu}{h_2^2}\,\Delta^y_-\Bigg(\frac{\Delta^y_+\Psi^{(k+1)}_{i,j}}{\sqrt{\big(\Delta^x_+\Psi^{(k)}_{i,j}/h_1\big)^2+\big(\Delta^y_+\Psi^{(k)}_{i,j}/h_2\big)^2+\beta}}\Bigg)\Bigg]+f_{i,j},\end{aligned}\tag{4.3}$$
where the differences $\Delta^x_+,\Delta^x_-,\Delta^y_+,\Delta^y_-$ are given by
$$\Delta^x_+\Psi_{i,j}=\Psi_{i+1,j}-\Psi_{i,j},\qquad \Delta^x_-\Psi_{i,j}=\Psi_{i,j}-\Psi_{i-1,j},\tag{4.4}$$
$$\Delta^y_+\Psi_{i,j}=\Psi_{i,j+1}-\Psi_{i,j},\qquad \Delta^y_-\Psi_{i,j}=\Psi_{i,j}-\Psi_{i,j-1},\tag{4.5}$$
which implies that:
$$\begin{aligned}\Psi^{(k+1)}_{i,j}=\Psi^{(k)}_{i,j}+\mu\,\Delta t\,\delta_\epsilon\big(\Psi^{(k)}_{i,j}\big)\Big\{&H^{(k)}_{i-1,j}\Psi^{(k+1)}_{i-1,j}+H^{(k)}_{i,j-1}\Psi^{(k+1)}_{i,j-1}-\big(H^{(k)}_{i-1,j}+H^{(k)}_{i,j-1}+2H^{(k)}_{i,j}\big)\Psi^{(k+1)}_{i,j}\\&+H^{(k)}_{i,j}\Psi^{(k+1)}_{i+1,j}+H^{(k)}_{i,j}\Psi^{(k+1)}_{i,j+1}\Big\}+\Delta t\,f_{i,j}.\end{aligned}\tag{4.6}$$
The coefficients $H_{i,j}$, $H_{i-1,j}$ and $H_{i,j-1}$ have been frozen at the $k$-th iterate, so equation (4.6) becomes a linear system of equations which can be solved by iterative methods. Although the semi-implicit method is unconditionally stable even for large time steps, its main flaw is its computational cost for large-sized images.
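The building blocks of this scheme — the one-sided differences of (3.19) and the frozen diffusion coefficients — are easy to vectorize. A minimal NumPy sketch (the function names and the zero rows/columns at the boundary, standing in for the Neumann condition, are our choices):

```python
import numpy as np

def dx_plus(P):
    """Forward x-difference: (dx_plus P)[i, j] = P[i+1, j] - P[i, j], zero on the last row."""
    D = np.zeros_like(P)
    D[:-1, :] = P[1:, :] - P[:-1, :]
    return D

def dy_plus(P):
    """Forward y-difference: (dy_plus P)[i, j] = P[i, j+1] - P[i, j], zero on the last column."""
    D = np.zeros_like(P)
    D[:, :-1] = P[:, 1:] - P[:, :-1]
    return D

def frozen_coeff(P, h1=1.0, h2=1.0, beta=1e-8):
    """Diffusion coefficient 1 / sqrt((dx P / h1)^2 + (dy P / h2)^2 + beta),
    evaluated at the previous iterate and then held fixed ("frozen"), as in (4.6)."""
    return 1.0 / np.sqrt((dx_plus(P) / h1) ** 2 + (dy_plus(P) / h2) ** 2 + beta)
```

With these, the coefficients of (4.6) are assembled pixel-wise and the resulting linear system can be relaxed by, e.g., Gauss--Seidel.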
We therefore use an Additive Operator Splitting (AOS) method, as done in [4, 25, 51, 12, 26, 23], to solve the PDE (3.27).
4.2 Additive Operator Splitting (AOS) method
Consider equation (3.27) in the form
$$\frac{\partial\Psi}{\partial t}=\mu\,\delta_\epsilon(\Psi)\,\nabla\cdot(G\nabla\Psi)+f,\qquad\text{where } G=\frac{1}{|\nabla\Psi|},$$
$$\phantom{\frac{\partial\Psi}{\partial t}}=\mu\,\delta_\epsilon(\Psi)\big\{\partial_x(G\,\partial_x\Psi)+\partial_y(G\,\partial_y\Psi)\big\}+f,\tag{4.7}$$
$$\Rightarrow\quad\Psi^{k+1}_{j}=\Psi^{k}_{j}+\Delta t\big(F_1\Psi^{k+1}_{j-1}-F\,\Psi^{k+1}_{j}+F_2\Psi^{k+1}_{j+1}\big)+\Delta t\,f,\tag{4.8}$$
where
$$F_1=\mu\,\delta_\epsilon(\Psi^k)\,\frac{F^k_{j}+F^k_{j-1}}{2},\qquad F=\mu\,\delta_\epsilon(\Psi^k)\,\frac{F^k_{j+1}+2F^k_{j}+F^k_{j-1}}{2},\qquad F_2=\mu\,\delta_\epsilon(\Psi^k)\,\frac{F^k_{j}+F^k_{j+1}}{2}.\tag{4.9}$$
Thus equation (4.8) takes the form
$$-\Delta t\,F_1\Psi^{k+1}_{j-1}+\big(1+\Delta t\,F\big)\Psi^{k+1}_{j}-\Delta t\,F_2\Psi^{k+1}_{j+1}=\Psi^{k}_{j}+\Delta t\,f.\tag{4.10}$$
Writing this as
$$A_p(\Psi^k)\Psi^{k+1}_p=\Psi^k+f^k\qquad\text{for } p=1,2,$$
the system of equations (4.10) is solved in one direction (say the $x$-direction), where $A_p(\Psi^k)$, $p=1,2$, is a tri-diagonal matrix. In the same way, after solving the equations in the $y$-direction as well, we take the average of the two solutions and obtain the next approximation to the exact solution:
$$\Psi^{k+1}=\frac{1}{2}\sum_{p=1}^{2}\Psi^{k+1}_p.\tag{4.11}$$
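Each one-dimensional solve in (4.10) is a tridiagonal system, which the Thomas algorithm handles in $O(n)$ per line. A hedged NumPy sketch (the function names and the column-by-column sweep are our choices; the row-direction sweep and the averaging of (4.11) are analogous):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def aos_step(Psi, F1, F, F2, f, dt):
    """One directional AOS sweep: per column solve
    -dt*F1 * psi_{j-1} + (1 + dt*F) * psi_j - dt*F2 * psi_{j+1} = Psi_j + dt*f  (eq. 4.10)."""
    out = np.empty_like(Psi)
    for j in range(Psi.shape[1]):
        a = -dt * F1[:, j]        # sub-diagonal (coefficient of the previous unknown)
        b = 1.0 + dt * F[:, j]    # main diagonal
        c = -dt * F2[:, j]        # super-diagonal (coefficient of the next unknown)
        out[:, j] = thomas(a, b, c, Psi[:, j] + dt * f[:, j])
    return out
```

With zero coefficients the sweep reduces to the explicit update $\Psi + \Delta t\,f$, a convenient sanity check.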
4.3 Multi-Grid Algorithm for the Non-linear PDE of the CV Vector-Valued Model
We describe here the multi-grid method for the CV vector-valued model. We consider the non-linear PDE (3.26) without the artificial time step that was introduced in (3.27) for the time-marching schemes. Without the time variable $t$, the approximation at any pixel $(i,j)$ can be denoted by $\Psi_{ij}=\Psi(x_i,y_j)$. The elliptic PDE given in equation (3.26) can be written as:
$$\mu\,\nabla\cdot\frac{\nabla\Psi}{|\nabla\Psi|}-\frac{1}{N}\sum_{\ell=1}^{N}\Big\{\lambda^+_\ell(I_\ell-a_\ell)^2-\lambda^-_\ell(I_\ell-b_\ell)^2\Big\}=0,\tag{4.12}$$
where $I^*_\ell$ is as defined above. Replacing $|\nabla\Psi|$ by $\sqrt{|\nabla\Psi|^2+\beta}$ gives the regularized equation
$$\mu\,\nabla\cdot\frac{\nabla\Psi}{\sqrt{|\nabla\Psi|^2+\beta}}-\frac{1}{N}\sum_{\ell=1}^{N}\Big\{\lambda^+_\ell(I_\ell-a_\ell)^2-\lambda^-_\ell(I_\ell-b_\ell)^2\Big\}=0.\tag{4.13}$$
Equations (4.12) and (4.13) possess the same stationary points [10, 13]. Discretizing eq (4.13) at any grid point $(i,j)$ gives:
$$\begin{aligned}&\frac{\mu}{h_1}\,\Delta^x_+\Bigg(\frac{\Delta^x_-\Psi_{i,j}/h_1}{\sqrt{(\Delta^x_-\Psi_{i,j}/h_1)^2+(\Delta^y_-\Psi_{i,j}/h_2)^2+\beta}}\Bigg)+\frac{\mu}{h_2}\,\Delta^y_+\Bigg(\frac{\Delta^y_-\Psi_{i,j}/h_2}{\sqrt{(\Delta^x_-\Psi_{i,j}/h_1)^2+(\Delta^y_-\Psi_{i,j}/h_2)^2+\beta}}\Bigg)\\&\qquad-\frac{1}{N}\sum_{\ell=1}^{N}\lambda^+_\ell\big(I_\ell(i,j)-a_\ell\big)^2+\frac{1}{N}\sum_{\ell=1}^{N}\lambda^-_\ell\big(I_\ell(i,j)-b_\ell\big)^2=0,\end{aligned}\tag{4.14}$$
Note 1: We have used different values of $\beta$ in the interval $(0,1]$ in our experiments; it has no noticeable effect on the final result.
$$\begin{aligned}\Rightarrow\quad\mu_1\Bigg\{&\Delta^x_+\Bigg(\frac{\Delta^x_-\Psi_{i,j}}{\sqrt{(\Delta^x_-\Psi_{i,j})^2+\gamma_1^2(\Delta^y_-\Psi_{i,j})^2+\beta_1}}\Bigg)+\gamma_1^2\,\Delta^y_+\Bigg(\frac{\Delta^y_-\Psi_{i,j}}{\sqrt{(\Delta^x_-\Psi_{i,j})^2+\gamma_1^2(\Delta^y_-\Psi_{i,j})^2+\beta_1}}\Bigg)\Bigg\}\\&=\frac{1}{N}\sum_{\ell=1}^{N}\lambda^+_\ell\big(I_\ell(i,j)-a_\ell\big)^2-\frac{1}{N}\sum_{\ell=1}^{N}\lambda^-_\ell\big(I_\ell(i,j)-b_\ell\big)^2,\end{aligned}\tag{4.15}$$
where $\mu_1=\dfrac{\mu}{h_1}$, $\beta_1=h_1^2\beta$ and $\gamma_1=\dfrac{h_1}{h_2}$, using the Neumann boundary conditions.
The left-hand side of equation (4.14) is similar to that of the TV regularization denoising model [37]. In order to keep the gradient terms from vanishing, we use a small positive parameter $\beta$ as in [37, 47].
Writing the discrete system (4.15) on the fine grid $\Omega^h$ as
$$N^h(\Psi^h)=f^h,\tag{4.16}$$
and letting $\psi^h$ denote the current approximation with error $e^h$,
$$\Psi^h=\psi^h+e^h,\tag{4.17}$$
$$\Psi^h-\psi^h=e^h\;\Rightarrow\;N^h(\Psi^h)-N^h(\psi^h)=N^h(e^h)\;\Rightarrow\;f^h-N^h(\psi^h)=r^h,\tag{4.18}$$
we use iterative methods to smooth the error on the fine grid. After smoothing the error on the fine grid, we move to the coarse grid by using the restriction operator. We solve the residual equation by an iterative method in order to get an approximation of the error on the coarse grid (compared to the fine grid, solving on the coarse grid is less expensive), and then we come back to the fine grid by using the interpolation operator to correct the approximation $\psi^h$. This is known as a two-grid method. We now discuss the restriction and interpolation operators on the rectangular domains $\Omega^h$ and $\Omega^{2h}$.
Restriction Operator:
$$I^{2h}_h\psi^h=\psi^{2h},$$
where
$$\psi^{2h}_{i,j}=\frac{1}{4}\big(\psi^h_{2i-1,2j-1}+\psi^h_{2i-1,2j}+\psi^h_{2i,2j-1}+\psi^h_{2i,2j}\big),\qquad 1\le i\le\frac{n_1}{2},\;1\le j\le\frac{n_1}{2},$$
is the full weighting restriction operator [17, 43].
Interpolation Operator:
$$I^h_{2h}\psi^{2h}=\psi^h,$$
where
$$\begin{aligned}\psi^h_{2i,2j}&=\tfrac{1}{16}\big(9\psi^{2h}_{i,j}+3\psi^{2h}_{i+1,j}+3\psi^{2h}_{i,j+1}+\psi^{2h}_{i+1,j+1}\big),\\ \psi^h_{2i-1,2j}&=\tfrac{1}{16}\big(9\psi^{2h}_{i,j}+3\psi^{2h}_{i-1,j}+3\psi^{2h}_{i,j+1}+\psi^{2h}_{i-1,j+1}\big),\\ \psi^h_{2i,2j-1}&=\tfrac{1}{16}\big(9\psi^{2h}_{i,j}+3\psi^{2h}_{i+1,j}+3\psi^{2h}_{i,j-1}+\psi^{2h}_{i+1,j-1}\big),\\ \psi^h_{2i-1,2j-1}&=\tfrac{1}{16}\big(9\psi^{2h}_{i,j}+3\psi^{2h}_{i-1,j}+3\psi^{2h}_{i,j-1}+\psi^{2h}_{i-1,j-1}\big),\end{aligned}$$
for $1\le i\le\frac{n_1}{2}$, $1\le j\le\frac{n_1}{2}$. This is the bilinear interpolation operator [17, 43].
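These two transfer operators can be sketched in NumPy as follows (a minimal sketch under our own conventions: 0-based indexing, and coarse neighbours obtained by edge replication where $i\pm1$ would leave the grid):

```python
import numpy as np

def restrict(psi):
    """Full-weighting restriction I_h^{2h}: average each 2x2 block of the fine grid."""
    return 0.25 * (psi[0::2, 0::2] + psi[0::2, 1::2] + psi[1::2, 0::2] + psi[1::2, 1::2])

def interpolate(psi2):
    """Bilinear interpolation I_{2h}^h with the 9-3-3-1 weights given above."""
    m, n = psi2.shape
    p = np.pad(psi2, 1, mode='edge')   # replicate boundaries so i+1 / i-1 always exist
    c = p[1:-1, 1:-1]                  # psi^{2h}_{i,j}
    fine = np.empty((2 * m, 2 * n))
    fine[1::2, 1::2] = (9*c + 3*p[2:, 1:-1] + 3*p[1:-1, 2:] + p[2:, 2:]) / 16      # (2i, 2j)
    fine[0::2, 1::2] = (9*c + 3*p[:-2, 1:-1] + 3*p[1:-1, 2:] + p[:-2, 2:]) / 16    # (2i-1, 2j)
    fine[1::2, 0::2] = (9*c + 3*p[2:, 1:-1] + 3*p[1:-1, :-2] + p[2:, :-2]) / 16    # (2i, 2j-1)
    fine[0::2, 0::2] = (9*c + 3*p[:-2, 1:-1] + 3*p[1:-1, :-2] + p[:-2, :-2]) / 16  # (2i-1, 2j-1)
    return fine
```

Since the weights sum to $16/16$, both operators reproduce constants exactly, which is the basic consistency requirement for grid transfer.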
We now discuss the smoother, which is the most basic part of the multi-grid algorithm. We first discuss the non-linear smoother.
The smoother is applied to eq (4.15), whose right-hand side we abbreviate as $f_{i,j}$:
$$\mu_1\Bigg\{\Delta^x_+\Bigg(\frac{\Delta^x_-\Psi_{i,j}}{\sqrt{(\Delta^x_-\Psi_{i,j})^2+\gamma_1^2(\Delta^y_-\Psi_{i,j})^2+\beta_1}}\Bigg)+\gamma_1^2\,\Delta^y_+\Bigg(\frac{\Delta^y_-\Psi_{i,j}}{\sqrt{(\Delta^x_-\Psi_{i,j})^2+\gamma_1^2(\Delta^y_-\Psi_{i,j})^2+\beta_1}}\Bigg)\Bigg\}=f_{i,j},\tag{4.19}$$
where
$$\begin{aligned}D(\Psi)_{i,j}&=\frac{1}{\sqrt{(\Delta^x_-\Psi_{i,j})^2+\gamma_1^2(\Delta^y_-\Psi_{i,j})^2+\beta_1}},\\ D(\Psi)_{i+1,j}&=\frac{1}{\sqrt{(\Delta^x_-\Psi_{i+1,j})^2+\gamma_1^2(\Delta^y_-\Psi_{i+1,j})^2+\beta_1}},\\ D(\Psi)_{i,j+1}&=\frac{1}{\sqrt{(\Delta^x_-\Psi_{i,j+1})^2+\gamma_1^2(\Delta^y_-\Psi_{i,j+1})^2+\beta_1}},\end{aligned}\tag{4.20}$$
are the denominators, which are frozen at the previous iteration in order to make the problem linear. Using (4.20), eq (4.19) takes the following form:
$$\mu_1\Big\{D(\Psi)_{i+1,j}\Delta^x_-\Psi_{i+1,j}-D(\Psi)_{i,j}\Delta^x_-\Psi_{i,j}+\gamma_1^2\big(D(\Psi)_{i,j+1}\Delta^y_-\Psi_{i,j+1}-D(\Psi)_{i,j}\Delta^y_-\Psi_{i,j}\big)\Big\}=f_{i,j},$$
which implies that
$$\mu_1\Big\{D(\Psi)_{i+1,j}(\Psi_{i+1,j}-\Psi_{i,j})-D(\Psi)_{i,j}(\Psi_{i,j}-\Psi_{i-1,j})+\gamma_1^2\big(D(\Psi)_{i,j+1}(\Psi_{i,j+1}-\Psi_{i,j})-D(\Psi)_{i,j}(\Psi_{i,j}-\Psi_{i,j-1})\big)\Big\}=f_{i,j}.\tag{4.21}$$
We compute the coefficients $D(\Psi)_{i,j}$, $D(\Psi)_{i+1,j}$ and $D(\Psi)_{i,j+1}$, each containing $\Psi_{i,j}$, at the previous iteration in a freezing process. Let $\check\Psi$ be the next approximation to the solution $\Psi$. Substituting the value of $\check\Psi$ at each grid point except the grid point $(i,j)$ in eq (4.21), a linear equation of the following form is obtained:
$$\mu_1\Big\{D(\check\Psi)_{i+1,j}(\check\Psi_{i+1,j}-\Psi_{i,j})-D(\Psi)_{i,j}(\Psi_{i,j}-\check\Psi_{i-1,j})+\gamma_1^2\big(D(\check\Psi)_{i,j+1}(\check\Psi_{i,j+1}-\Psi_{i,j})-D(\Psi)_{i,j}(\Psi_{i,j}-\check\Psi_{i,j-1})\big)\Big\}=f_{i,j}.\tag{4.22}$$
Algorithm 1 (Algorithm for the local smoother:)
for i = 1 : n1
    for j = 1 : n2
        for itr = 1 : maxit
            ψ̌^h ← ψ^h
            ψ_{i,j} = [ D(ψ̌^h)_{i+1,j} ψ̌^h_{i+1,j} + D(ψ̌^h)_{i,j} ψ̌^h_{i-1,j} + γ₁² D(ψ̌^h)_{i,j+1} ψ̌^h_{i,j+1} + γ₁² D(ψ̌^h)_{i,j} ψ̌^h_{i,j-1} − f̆_{i,j} ]
                      / [ D(ψ̌^h)_{i,j} + D(ψ̌^h)_{i-1,j} + γ₁² ( D(ψ̌^h)_{i,j} + D(ψ̌^h)_{i,j-1} ) ]
            if |ψ_{i,j} − ψ̌_{i,j}| < tol then stop
        end
    end
end
Algorithm 2 (Algorithm for the global smoother:)
for i = 1 : n1
    for j = 1 : n2
        D(ψ^h)_{i,j} = [ (Δ^x_− ψ_{i,j})² + γ₁² (Δ^y_− ψ_{i,j})² + β₁ ]^{−1/2}
    end
end
Φ^h = ψ^h
for itr = 1 : maxit
    for i = 1 : n1
        for j = 1 : n2
            Φ̌^h ← Φ^h
            Φ_{i,j} = [ D(ψ^h)_{i+1,j} Φ̌^h_{i+1,j} + D(ψ^h)_{i,j} Φ̌^h_{i-1,j} + γ₁² D(ψ^h)_{i,j+1} Φ̌^h_{i,j+1} + γ₁² D(ψ^h)_{i,j} Φ̌^h_{i,j-1} − f̆_{i,j} ]
                      / [ D(ψ^h)_{i,j} + D(ψ^h)_{i-1,j} + γ₁² ( D(ψ^h)_{i,j} + D(ψ^h)_{i,j-1} ) ]
        end
    end
end
ψ^h ← Φ^h
In the global smoother, the coefficients (4.20) are updated globally at the start of the smoothing step and stored for use during the relaxation.
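The global variant can be sketched in NumPy (a minimal sketch, not the thesis code: the names are ours, boundaries are handled by edge replication, the denominator follows the $\Psi_{i,j}$-coefficient of (4.22), and $\breve f = f/\mu_1$ absorbs $\mu_1$ into the right-hand side):

```python
import numpy as np

def smooth_global(psi, f, mu1, gamma2, beta1, sweeps=2):
    """Global smoother: freeze the coefficients D of (4.20) once, then run
    lexicographic Gauss-Seidel sweeps; gamma2 denotes gamma_1 squared."""
    n1, n2 = psi.shape
    p = np.pad(psi, 1, mode='edge')            # psi with replicated boundaries
    dx = p[1:-1, 1:-1] - p[:-2, 1:-1]          # backward x-differences
    dy = p[1:-1, 1:-1] - p[1:-1, :-2]          # backward y-differences
    D = np.pad(1.0 / np.sqrt(dx**2 + gamma2 * dy**2 + beta1), 1, mode='edge')
    for _ in range(sweeps):
        for i in range(1, n1 + 1):             # indices into the padded arrays
            for j in range(1, n2 + 1):
                num = (D[i+1, j] * p[i+1, j] + D[i, j] * p[i-1, j]
                       + gamma2 * (D[i, j+1] * p[i, j+1] + D[i, j] * p[i, j-1])
                       - f[i-1, j-1] / mu1)
                den = D[i+1, j] + D[i, j] + gamma2 * (D[i, j+1] + D[i, j])
                p[i, j] = num / den            # Gauss-Seidel: use fresh values in place
        p[0, :], p[-1, :] = p[1, :], p[-2, :]  # keep boundaries replicated
        p[:, 0], p[:, -1] = p[:, 1], p[:, -2]
    return p[1:-1, 1:-1]
```

Because the diagonal equals the sum of the off-diagonal coefficients, a constant field with $f=0$ is a fixed point of the sweep, which is a quick correctness check.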
4.3.4 Multi-Grid Algorithm
In order to solve eq (4.15), we use the multi-grid algorithm given below:
Algorithm 3 (Multi-Grid Algorithm:)
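The cycle follows the standard FAS (full approximation scheme) structure. A minimal two-grid sketch, with the smoother and the nonlinear operator $N$ passed in as functions — all names here are our illustrative choices, the prolongation is simplified to piecewise-constant, and the coarse problem is merely "solved" by heavy smoothing:

```python
import numpy as np

def restrict(v):
    # full-weighting restriction: average each 2x2 block
    return 0.25 * (v[0::2, 0::2] + v[0::2, 1::2] + v[1::2, 0::2] + v[1::2, 1::2])

def prolong(v):
    # piecewise-constant prolongation, kept deliberately simple for this sketch
    return np.kron(v, np.ones((2, 2)))

def two_grid(psi, f, N, smooth, nu1=2, nu2=2, coarse_iters=50):
    """One FAS two-grid cycle for the nonlinear system N(psi) = f."""
    psi = smooth(psi, f, nu1)                   # pre-smoothing
    psi2 = restrict(psi)
    f2 = N(psi2) + restrict(f - N(psi))         # FAS coarse right-hand side
    e2 = smooth(psi2, f2, coarse_iters) - psi2  # coarse-grid error estimate
    psi = psi + prolong(e2)                     # coarse-grid correction
    return smooth(psi, f, nu2)                  # post-smoothing
```

Recursing on the coarse solve instead of smoothing heavily turns this two-grid cycle into the usual V-cycle.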
Table 4.1: Comparison of the SI, AOS and Multi-Grid methods on a real image using the CV vector-valued model, with respect to the number of iterations and CPU time in seconds.
4.4 Conclusion
In this chapter, we proposed a Multi-Grid method for solving the Chan--Vese vector-valued model [14]. The method gives the best results regarding efficiency in edge detection and CPU time as compared to the SI and AOS methods. The method is also effective for images of large sizes, where the SI and AOS methods fail to obtain the desired result (see table (4.1) for reference).
Figure 4.1: The results of the SI, AOS and MG methods are given in rows 1, 2 and 3 respectively, whereas the results of channels 1, 2 and 3 and the segmented result of the three channels can be found in columns 1, 2, 3 and 4 respectively.
Figure 4.2: The results of the SI, AOS and MG methods are given in columns 1, 2 and 3 respectively, whereas the results of channels 1, 2, 3 and 4 and the recovered object can be found in rows 1, 2, 3, 4 and 5 respectively.
Figure 4.3: The results of the SI, AOS and MG methods are given in rows 1, 2 and 3 respectively, whereas the initial contour, the result after the respective number of iterations (194 for SI, 195 for AOS, and 2 cycles for MG) and the segmented result can be found in columns 1, 2 and 3 respectively.
Chapter 5
In this chapter we propose a new variational model that has a coefficient of variation based fidelity term together with a local statistical function, unlike the existing models [16, 49]. Section (5.3) shows that it gives better detection results.
5.1 Introduction
Image segmentation plays a key role in applications of image processing and computer vision. The task of image segmentation is to divide an image into foreground and background, so that in the end each pixel of the image belongs to one class, i.e., either foreground or background. In this chapter we discuss energy functionals of variational models. A segmentation energy usually consists of two functionals, i.e., an internal energy functional and an external energy functional. The work of the external energy functional is to bend the active contour towards the edges of the object, while the work of the internal energy functional is to keep the contour smooth. In most variational models [5, 16, 49] the fidelity term is variance based, while in this chapter we propose a model whose fidelity term is based on the coefficient of variation. Experimental results also show that the performance of the model is better than the existing Chan--Vese vector-valued model. The Chan--Vese model (both scalar and vector-valued) works well on images that are free of inhomogeneity problems, because in homogeneous regions the average inside and outside intensities, i.e., $a$, $\bar a$ and $b$, $\bar b$ respectively, approximate $I(x,y)$ well (see sections 3.3 and 3.5). However, these models (both scalar and vector-valued) become weak when there are unilluminated objects, low contrast images, overlapping homogeneous regions or images with low frequencies.
to variance. As we see in eq (5.1), the value of the coefficient of variation is smaller in uniform regions than in regions where the edges of the object are located [27, 40]. Thus a smaller value indicates that the pixel lies in a uniform region, while a larger value indicates that the pixel is about to land on an edge of the object. The attributes of $CoV$ [6, 27, 40] show that it can also be used as a good region descriptor. Before defining $CoV$, we first define the variance. Denoting the image intensity at any point $(i,j)$ by $I_{i,j}$, the variance is defined as follows:
$$Var(I)=\frac{1}{N}\sum_{i,j}\big(I_{i,j}-Mean(I)\big)^2,$$
where $Mean(I)$ denotes the mean intensity of the given image $I$. Variance can be found as the fitting term in many variational models [5, 16, 49]. The coefficient of variation $CoV$ can be defined as
$$CoV^2=\frac{Var(I)}{Mean(I)^2}.\tag{5.1}$$
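As a small numerical illustration (the two arrays are synthetic examples of ours), $CoV^2$ from eq (5.1) vanishes on a perfectly uniform patch and grows where intensities jump:

```python
import numpy as np

def cov_squared(I):
    """Coefficient of variation squared, eq (5.1): Var(I) / Mean(I)^2."""
    return I.var() / I.mean() ** 2

flat = np.full((8, 8), 100.0)             # uniform region
edge = np.hstack([np.full((8, 4), 50.0),  # region containing an intensity jump
                  np.full((8, 4), 150.0)])
```

Here `cov_squared(flat)` is 0 while `cov_squared(edge)` is 0.25, which is exactly why $CoV$ acts as a region descriptor: small inside uniform regions, larger near edges.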
Using the motif of the coefficient of variation $CoV$ [1, 27], the fidelity term of our proposed model for image segmentation takes the following form:
$$E_1(C,a,b)=\int_{in(C)}\frac{1}{N}\sum_{\ell=1}^{N}\frac{(I_\ell-a_\ell)^2}{a_\ell^2}\,dx\,dy+\int_{out(C)}\frac{1}{N}\sum_{\ell=1}^{N}\frac{(I_\ell-b_\ell)^2}{b_\ell^2}\,dx\,dy,\tag{5.2}$$
where $a=(a_1,a_2,\ldots,a_N)$ and $b=(b_1,b_2,\ldots,b_N)$. The regularization term, consisting of the length of the contour $C$ and the area inside the contour $C$ as used in [16], is added to the global and local fidelity terms of eq (5.2) and (5.3) respectively, and as a result the following energy functional is obtained:
5.2.1 Level Set Formulation
The level set formulation of our proposed model is the same as that of the CV model [16]; one can consult section (3.3.1). Expressing each term of eq (5.4) in terms of the level set function $\Psi$, we get the following equations:
$$length(C)=length(\Psi=0)=\int_\Omega|\nabla H(\Psi)|\,dx\,dy=\int_\Omega\delta(\Psi)|\nabla\Psi|\,dx\,dy,$$
$$\frac{1}{N}\sum_{\ell=1}^{N}\lambda^+_\ell\int_{in(C)}\Big(\frac{(I_\ell-a_\ell)^2}{a_\ell^2}+\frac{(I^*_\ell-\bar a_\ell)^2}{\bar a_\ell^2}\Big)dx\,dy=\frac{1}{N}\sum_{\ell=1}^{N}\lambda^+_\ell\int_\Omega\Big(\frac{(I_\ell-a_\ell)^2}{a_\ell^2}+\frac{(I^*_\ell-\bar a_\ell)^2}{\bar a_\ell^2}\Big)H(\Psi)\,dx\,dy,$$
$$\frac{1}{N}\sum_{\ell=1}^{N}\lambda^-_\ell\int_{out(C)}\Big(\frac{(I_\ell-b_\ell)^2}{b_\ell^2}+\frac{(I^*_\ell-\bar b_\ell)^2}{\bar b_\ell^2}\Big)dx\,dy=\frac{1}{N}\sum_{\ell=1}^{N}\lambda^-_\ell\int_\Omega\Big(\frac{(I_\ell-b_\ell)^2}{b_\ell^2}+\frac{(I^*_\ell-\bar b_\ell)^2}{\bar b_\ell^2}\Big)\big(1-H(\Psi)\big)\,dx\,dy,$$
thus, using the level set function $\Psi$, eq (5.4) takes the following form:
$$\begin{aligned}E(\Psi,a,b,\bar a,\bar b)=\;&\mu\int_\Omega\delta(\Psi)|\nabla\Psi|\,dx\,dy+\nu\int_\Omega H(\Psi)\,dx\,dy\\&+\frac{1}{N}\sum_{\ell=1}^{N}\lambda^+_\ell\int_\Omega\Big(\frac{(I_\ell-a_\ell)^2}{a_\ell^2}+\frac{(I^*_\ell-\bar a_\ell)^2}{\bar a_\ell^2}\Big)H(\Psi)\,dx\,dy\\&+\frac{1}{N}\sum_{\ell=1}^{N}\lambda^-_\ell\int_\Omega\Big(\frac{(I_\ell-b_\ell)^2}{b_\ell^2}+\frac{(I^*_\ell-\bar b_\ell)^2}{\bar b_\ell^2}\Big)\big(1-H(\Psi)\big)\,dx\,dy,\end{aligned}\tag{5.5}$$
where $I^*_\ell=A_{co}*I_\ell$ and $A_{co}$ is an averaging convolution operator. Since the Heaviside function $H$ is not differentiable at $0$, we take regularized forms $H_\epsilon$ and $\delta_\epsilon$ of the Heaviside function $H$ and the delta function $\delta$. We use $H_\epsilon$ and $\delta_\epsilon$ as in [9, 16, 17], i.e.,
$$H_\epsilon(x)=\frac{1}{2}\Big(1+\frac{2}{\pi}\tan^{-1}\frac{x}{\epsilon}\Big),$$
$$\delta_\epsilon(x)=H'_\epsilon(x)=\frac{1}{\pi}\,\frac{\epsilon}{\epsilon^2+x^2},$$
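These regularized functions are one line each; a short sketch (the default value of `eps` is an illustrative choice of ours):

```python
import numpy as np

def heaviside_eps(x, eps=1.0):
    """Regularized Heaviside: H_eps(x) = (1/2) * (1 + (2/pi) * arctan(x/eps))."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(x / eps))

def delta_eps(x, eps=1.0):
    """Its derivative: delta_eps(x) = (1/pi) * eps / (eps^2 + x^2)."""
    return (eps / np.pi) / (eps ** 2 + x ** 2)
```

Note that this $\delta_\epsilon$ has unbounded support, so, as argued in [16], the evolution can act on all level lines of $\Psi$ rather than only near the zero level set.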
thus the regularized form of eq (5.5) takes the following shape:
$$\begin{aligned}E_\epsilon(\Psi,a,b,\bar a,\bar b)=\;&\mu\int_\Omega\delta_\epsilon(\Psi)|\nabla\Psi|\,dx\,dy+\nu\int_\Omega H_\epsilon(\Psi)\,dx\,dy\\&+\frac{1}{N}\sum_{\ell=1}^{N}\lambda^+_\ell\int_\Omega\Big(\frac{(I_\ell-a_\ell)^2}{a_\ell^2}+\frac{(I^*_\ell-\bar a_\ell)^2}{\bar a_\ell^2}\Big)H_\epsilon(\Psi)\,dx\,dy\\&+\frac{1}{N}\sum_{\ell=1}^{N}\lambda^-_\ell\int_\Omega\Big(\frac{(I_\ell-b_\ell)^2}{b_\ell^2}+\frac{(I^*_\ell-\bar b_\ell)^2}{\bar b_\ell^2}\Big)\big(1-H_\epsilon(\Psi)\big)\,dx\,dy,\end{aligned}\tag{5.6}$$
with
$$\bar a_\ell=\frac{\int_\Omega(I^*_\ell)^2H_\epsilon(\Psi)\,dx\,dy}{\int_\Omega I^*_\ell H_\epsilon(\Psi)\,dx\,dy},\qquad \bar b_\ell=\frac{\int_\Omega(I^*_\ell)^2\big(1-H_\epsilon(\Psi)\big)\,dx\,dy}{\int_\Omega I^*_\ell\big(1-H_\epsilon(\Psi)\big)\,dx\,dy},$$
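On a discrete grid these closed-form updates are one line each per channel. A sketch (the function name is ours; `H` is the regularized Heaviside of $\Psi$ sampled on the grid, and `I_star` the smoothed channel):

```python
import numpy as np

def update_means(I_star, H):
    """Closed-form minimizers of the local CoV terms for one channel:
       abar = sum((I*)^2 * H) / sum(I* * H),
       bbar = sum((I*)^2 * (1 - H)) / sum(I* * (1 - H))."""
    abar = (I_star ** 2 * H).sum() / (I_star * H).sum()
    bbar = (I_star ** 2 * (1 - H)).sum() / (I_star * (1 - H)).sum()
    return abar, bbar
```

On a two-valued image with `H` the exact inside-indicator, the updates recover the two region intensities, since for a constant region the ratio of second to first moments equals that constant — the CoV analogue of the region averages in the Chan--Vese model.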
in order to minimize eq (5.6) with respect to the level set function $\Psi$. For this minimization, we use the G&acirc;teaux derivative of the functional $E_\epsilon$ while keeping $a_\ell$, $b_\ell$, $\bar a_\ell$ and $\bar b_\ell$ fixed. We get an equation of the following form:
$$\lim_{t\to0}\frac{1}{t}\Big\{E_\epsilon(\Psi+t\Phi,a,b,\bar a,\bar b)-E_\epsilon(\Psi,a,b,\bar a,\bar b)\Big\}=0$$
$$\begin{aligned}\Rightarrow\;&\mu\int_\Omega\Big(\delta'_\epsilon(\Psi)\,|\nabla\Psi|\,\Phi+\delta_\epsilon(\Psi)\,\frac{\nabla\Psi\cdot\nabla\Phi}{|\nabla\Psi|}\Big)dx\,dy\\&+\frac{1}{N}\sum_{\ell=1}^{N}\lambda^+_\ell\int_\Omega\delta_\epsilon(\Psi)\Big(\frac{(I_\ell-a_\ell)^2}{a_\ell^2}+\frac{(I^*_\ell-\bar a_\ell)^2}{\bar a_\ell^2}\Big)\Phi\,dx\,dy\\&-\frac{1}{N}\sum_{\ell=1}^{N}\lambda^-_\ell\int_\Omega\delta_\epsilon(\Psi)\Big(\frac{(I_\ell-b_\ell)^2}{b_\ell^2}+\frac{(I^*_\ell-\bar b_\ell)^2}{\bar b_\ell^2}\Big)\Phi\,dx\,dy=0.\end{aligned}\tag{5.7}$$
Now, using Green's theorem and putting $\vec v=\delta_\epsilon(\Psi)\frac{\nabla\Psi}{|\nabla\Psi|}$ (with $s$ denoting arc length on $\partial\Omega$), we have:
$$\begin{aligned}&-\mu\int_\Omega\delta_\epsilon(\Psi)\,\nabla\cdot\Big(\frac{\nabla\Psi}{|\nabla\Psi|}\Big)\Phi\,dx\,dy+\mu\int_{\partial\Omega}\frac{\delta_\epsilon(\Psi)}{|\nabla\Psi|}\frac{\partial\Psi}{\partial\vec n}\,\Phi\,ds\\&+\frac{1}{N}\sum_{\ell=1}^{N}\lambda^+_\ell\int_\Omega\delta_\epsilon(\Psi)\Big(\frac{(I_\ell-a_\ell)^2}{a_\ell^2}+\frac{(I^*_\ell-\bar a_\ell)^2}{\bar a_\ell^2}\Big)\Phi\,dx\,dy\\&-\frac{1}{N}\sum_{\ell=1}^{N}\lambda^-_\ell\int_\Omega\delta_\epsilon(\Psi)\Big(\frac{(I_\ell-b_\ell)^2}{b_\ell^2}+\frac{(I^*_\ell-\bar b_\ell)^2}{\bar b_\ell^2}\Big)\Phi\,dx\,dy=0,\end{aligned}\tag{5.8}$$
where
$$\frac{\delta_\epsilon(\Psi)}{|\nabla\Psi|}\frac{\partial\Psi}{\partial\vec n}=0\quad\text{on }\partial\Omega.\tag{5.9}$$
Thus the evolution equation takes the following form:
$$\begin{aligned}\frac{\partial\Psi}{\partial t}=\delta_\epsilon(\Psi)\Bigg\{&\mu\,\nabla\cdot\frac{\nabla\Psi}{|\nabla\Psi|}-\frac{1}{N}\sum_{\ell=1}^{N}\lambda^+_\ell\Big(\frac{(I_\ell-a_\ell)^2}{a_\ell^2}+\frac{(I^*_\ell-\bar a_\ell)^2}{\bar a_\ell^2}\Big)\\&+\frac{1}{N}\sum_{\ell=1}^{N}\lambda^-_\ell\Big(\frac{(I_\ell-b_\ell)^2}{b_\ell^2}+\frac{(I^*_\ell-\bar b_\ell)^2}{\bar b_\ell^2}\Big)\Bigg\}.\end{aligned}\tag{5.10}$$
The above Euler--Lagrange equation (5.10) of our proposed model (5.6) is discretized through the AOS method in the following section.
$$\frac{\partial\Psi}{\partial t}=\mu\,\delta_\epsilon(\Psi)\,\nabla\cdot(G\nabla\Psi)+f,\qquad\text{where } G=\frac{1}{|\nabla\Psi|},$$
$$\phantom{\frac{\partial\Psi}{\partial t}}=\mu\,\delta_\epsilon(\Psi)\big\{\partial_x(G\,\partial_x\Psi)+\partial_y(G\,\partial_y\Psi)\big\}+f,$$
where
$$f=\delta_\epsilon(\Psi)\,\frac{1}{N}\sum_{\ell=1}^{N}\Bigg\{-\lambda^+_\ell\Big(\frac{(I_\ell-a_\ell)^2}{a_\ell^2}+\frac{(I^*_\ell-\bar a_\ell)^2}{\bar a_\ell^2}\Big)+\lambda^-_\ell\Big(\frac{(I_\ell-b_\ell)^2}{b_\ell^2}+\frac{(I^*_\ell-\bar b_\ell)^2}{\bar b_\ell^2}\Big)\Bigg\}.$$
In the AOS scheme we split the $n$-dimensional spatial operator into $n$ one-dimensional operators [4, 12, 23, 26, 50]; the $n$-dimensional operator is then recovered by summing the $n$ one-dimensional discretizations. Thus the above equation can be discretized as:
$$\Rightarrow\quad\Psi^{k+1}_{j}=\Psi^{k}_{j}+\Delta t\big(F_1\Psi^{k+1}_{j-1}-F\,\Psi^{k+1}_{j}+F_2\Psi^{k+1}_{j+1}\big)+\Delta t\,f,\tag{5.11}$$
where
$$F_1=\mu\,\delta_\epsilon(\Psi^k)\,\frac{F^k_{j}+F^k_{j-1}}{2},\qquad F=\mu\,\delta_\epsilon(\Psi^k)\,\frac{F^k_{j+1}+2F^k_{j}+F^k_{j-1}}{2},\qquad F_2=\mu\,\delta_\epsilon(\Psi^k)\,\frac{F^k_{j}+F^k_{j+1}}{2}.\tag{5.12}$$
Equation (5.11) takes the following form:
$$-\Delta t\,F_1\Psi^{k+1}_{j-1}+\big(1+\Delta t\,F\big)\Psi^{k+1}_{j}-\Delta t\,F_2\Psi^{k+1}_{j+1}=\Psi^{k}_{j}+\Delta t\,f.\tag{5.13}$$
Writing this as
$$A_p(\Psi^k)\Psi^{k+1}_p=\Psi^k+f^k\qquad\text{for } p=1,2,\tag{5.14}$$
the system of equations (5.14) is solved in one direction (say the $x$-direction), where $A_p(\Psi^k)$, $p=1,2$, is a tri-diagonal matrix. In the same way, after solving the equations in the $y$-direction as well, we take the average of the two solutions and obtain the next approximation to the exact solution:
$$\Psi^{k+1}=\frac{1}{2}\sum_{p=1}^{2}\Psi^{k+1}_p.\tag{5.15}$$
see the segmented result of the CVVV model (fig 5.4(c)), which fails in segmenting low contrast regions and fuzzy edges. On the other hand, the segmented result of our proposed (CoVVV) model (fig 5.5(c)) is considerably better at segmenting the low contrast regions and fuzzy edges.
Our third RGB image, shown in figures (5.6) and (5.7), contains homogeneous overlapping regions. The segmented result of the CVVV model (see fig 5.6(c)) shows that the CVVV model fails to segment those edges where two homogeneous regions overlap, while the segmented result of our model (see fig 5.7(c)) shows that our model is able to segment the overlapping regions as well.
On the basis of these experiments we can say that our coefficient of variation based variational model (CoVVV) is comparatively better than the existing Chan--Vese vector-valued variational model (CVVV) on images that have overlapping regions, low contrast regions or fuzzy edges.
Figure 5.2: Segmenting an RGB image using the Chan--Vese vector-valued model (CVVV): (a) Initial contour with centre $x_0=130$, $y_0=150$ and radius $r_0=40$; image size $256\times256$. (b) Result of the CVVV model after 700 iterations. (c) Segmented result of the CVVV model.
Figure 5.3: Segmenting an RGB image using the proposed coefficient of variation model (CoVVV): (a) Initial contour with centre $x_0=130$, $y_0=150$ and radius $r_0=40$; image size $256\times256$. (b) Result of the CoVVV model after 700 iterations. (c) Segmented result of the CoVVV model.
Figure 5.4: Segmenting an RGB image using the Chan--Vese vector-valued model (CVVV): (a) Initial contour with centre $x_0=115$, $y_0=130$ and radius $r_0=40$; image size $256\times256$. (b) Result of the CVVV model after 700 iterations. (c) Segmented result of the CVVV model.
Figure 5.5: Segmenting an RGB image using the proposed coefficient of variation model (CoVVV): (a) Initial contour with centre $x_0=130$, $y_0=150$ and radius $r_0=40$; image size $256\times256$. (b) Result of the CoVVV model after 700 iterations. (c) Segmented result of the CoVVV model.
Figure 5.6: Segmenting an RGB image using the Chan--Vese vector-valued model (CVVV): (a) Initial contour with centre $x_0=130$, $y_0=115$ and radius $r_0=40$; image size $256\times256$. (b) Result of the CVVV model after 58 iterations. (c) Segmented result of the CVVV model.
Figure 5.7: Segmenting an RGB image using the proposed coefficient of variation model (CoVVV): (a) Initial contour with centre $x_0=100$, $y_0=100$ and radius $r_0=45$; image size $256\times256$. (b) Result of the CoVVV model after 700 iterations. (c) Segmented result of the CoVVV model.
Chapter 6
In this chapter, we present the conclusion of our proposed model (sec 5.2) on the basis of the experimental results (sec 5.3). We also outline further work to be done in image processing.
6.1 Conclusion
The Semi-Implicit (SI) method for the Euler--Lagrange (EL) equation arising from the minimization of the CV vector-valued (VV) model is unconditionally stable, but for images of large sizes it may not work. We therefore proposed a multi-grid (MG) method for the solution of the EL equation.
We have also developed a new active contour model for the segmentation of vector-valued (VV) images. Our proposed VV variational image segmentation model is equipped with efficient global and local fidelity terms, due to which it gives better results as compared to the Chan--Vese VV model.
The use of the coefficient of variation in place of the variance in both the global and local fidelity terms further strengthens the model, and as a result it segments valuable details and detects more regions of interest.
• Along with the AOS scheme, we will also use the AMOS scheme, as given in section 2.6.8, for our model. This scheme should improve the efficiency of the model.
• We will also develop a multigrid algorithm for our proposed model (sec 5.2).
• We will also apply our model to image denoising.
Bibliography
[1] S. E. Ahmad, A pooling methodology for coefficient of variation, The Indian Journal
of Statistics 32 (1995), no. B1, 235–238.
[4] N. Badshah and K. Chen, Multigrid method for the Chan-Vese model in variational segmentation, Communications in Computational Physics 4 (2008), no. 2.
[6] N. Badshah, K. Chen, H. Ali, and G. Murtaza, A coefficient of variation based image
selective segmentation model using active contours, East Asian Journal on Applied
Mathematics 4 (2012).
[7] E. Bae, Efficient global minimization methods for variational problems in imaging
and vision, PhD thesis, Department of Mathematics, University of Bergen (2011).
[8] D. Barash and R. Kimmel, An accurate operator splitting scheme for non-linear
diffusion filtering, Scale-space and Morphology in computer Vision 21 (2001), no. 06,
281–289.
[9] R. P. Beyer and R. J. Leveque, Analysis of a one-dimensional model for the immersed
boundary method, SIAM Journal of Numerical Analysis 29 (1992), no. 2, 332–364.
[10] X. Bresson, S. Esedoglu, P. Vandergheynst, J.P. Thiran, and S. Osher, Global min-
imizers of the active contour/snake model, CAM Report (2005), 04–05.
[11] V. Caselles, R. Kimmel, and G. Sapiro, Geodesic active contours, International Jour-
nal of Computer Vision 22 (1997), no. 1, 61–79.
[12] T. F. Chan, K. Chen, and X-C. Tai, Non-linear multilevel schemes for solving the
total variation image minimization problem, Second edition. Interscience Tracts in
Pure and Applied Mathematics, No. 4. Springer Berlin Heidelberg (2007).
[13] T. F. Chan, S. Esedoglu, and M. Nikolova, Algorithms for finding global minimizers of image segmentation and denoising models, UCLA CAM Report (2004), 04–54.
[14] T. F. Chan, B. Y. Sandberg, and L. A. Vese, Active contours without edges for
vector valued images, Journal of Visual Communication and Image Representation
11 (2000).
[15] T. F. Chan and J. Shen, Image processing and analysis, Society for Industrial and
Applied Mathematics SIAM, 3600 University City Science Center, Philadelphia, PA,
USA (2005), 1904–2688.
[16] T. F. Chan and L. A. Vese, Active contours without edges, IEEE Transaction on
Image Processing 10 (2001), no. 2.
[18] D. E. Giorgi, M. Carriero, and A. Leaci, Existence theorem for a minimum problem
with free discontinuity set, Arch. Rational Mech. Anal. 108 (1989), no. 3, 195–218.
[19] R. Goldenberg, R. Kimmel, E. Rivlin, and M. Rudzsky, Fast geodesic active contours, IEEE Transactions on Image Processing 10 (2001), no. 10, 1467–1475.
[22] E. Isaacson and H. B. Keller, Analysis of numerical methods, John Wiley, New York
(1966).
[24] M. Kass, A. Witkin, and A. Terzopoulos, Snakes: Active contour models, International Journal of Computer Vision 1 (1988), no. 4, 321–331.
[25] T. Lu, P. Neittaanmaki, and X-C. Tai, A parallel splitting up method and its appli-
cation to navier-stokes equations, Applied Mathematics Lett. 4 (1991), no. 2, 25–29.
[26] , A parallel splitting-up method for partial differential equations and its appli-
cations to navier-stokes equations, RAIRO Model. Math. Anal. Numer. 26 (1992),
no. 6, 673–708.
[27] M. Mora, C. Tauber, and H. Batatia, Robust level set for heart cavities detection in ultrasound images, Computers in Cardiology 32 (2005), 235–238.
[28] J. M. Morel and S. Solimini, Variational methods in image segmentation: A construc-
tive approach, Revista Matematica Universidad Complutense de Madrid 1 (1988),
169–182.
[30] D. Mumford and J. Shah, Optimal approximation by piecewise smooth functions and
associated variational problems, Communications on Pure Applied Mathematics 42
(1989).
[31] M. V. Oehsen, Multiscale methods for variational image processing, Logos Verlag
Berlin, Comeniushof, Gubener str., 47, 10243 Berlin, first edition (2002).
[32] J. M. Ortega, Numerical analysis: a second course, Classics in Applied Mathematics, no. 3, Society for Industrial and Applied Mathematics, Philadelphia, PA (1990).
[33] S. Osher, L. I. Rudin, and E. Fatemi, Non-linear total variation based noise removal
algorithms, Physica D 60 (1992), 259–268.
[34] S. Osher and J. A. Sethian, Fronts propagating with curvature-dependent speed, algo-
rithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 79 (1988), no. 1,
12–49.
[35] N. Paragios and R. Deriche, Geodesic active contours and level sets for the detection
and tracking of moving objects, IEEE Transactions on Pattern Analysis and Machine
Intelligence 22 (2000), no. 3, 266–280.
[37] L. I. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal
algorithm, Physica D North Holland 60 (1992), no. 14, 259–268.
[38] M. Rudzsky, E. Rivlin, R. Kimmel, and R. Goldenberg, Fast geodesic active contours,
Scale-Space Theories in Computer Vision 16 (1999), no. 82, 34–45.
[39] J. Savage and K. Chen, An improved and accelerated non-linear multigrid method
for total-variation denoising, International Journal of Computer Mathematics 82
(2005), no. 8, 1001–1015.
[41] M. Sonka, V. Hlavac, and R. Boyle, Image processing, analysis, and machine vision,
Chapman & Hall 2 (1998).
[42] G. W. Stewart, Introduction to matrix computations, Academic Press, New York
(1993).
[43] U. Trottenberg and A. Schuller, Multigrid, Academic Press, Inc., Orlando, FL, USA
(2001).
[44] Y. H. R. Tsai and S. Osher, Total variation and level set based methods in image
science, Acta Numerica, Cambridge University Press, UK (2005), 01–61.
[45] R. S. Varga, Matrix iterative analysis, Prentice Hall, Englewood Cliffs, NJ (1962).
[47] C. R. Vogel, Computational methods for inverse problems, Society for Industrial and
Applied Mathematics, Philadelphia, PA, USA (2002).
[48] L. Wang, C. Li, D. Xia, and C. Y. Kao, Active contour driven by local and global in-
tensity fitting energy with aplication to brain mr image segmentation, Computerized
Medical Imaging and Graphics 33 (2009), 520–531.
[49] X. F. Wang, D. S. Huang, and H. Xu, An efficient local chan-vese model for image
segmentation, Pattern Recognition 43 (2010).
[50] J. Weickert, B. M. ter Haar Romeny, and M. A. Viergever, Efficient and reliable
schemes for nonlinear diffusion filtering, Scale-Space theory in computer vision,
Lecture Notes in Computer Science 12 (1997), no. 52, 260–271.
[51] ———, Efficient and reliable schemes for nonlinear diffusion filtering, IEEE Transactions on Image Processing 7 (1998), no. 3, 398–410.