
Efficient Fast Iterative Methods for Solving Vector Valued

Image Segmentation Models

By:
Fahim Ullah

Supervised By:
Dr. Noor Badshah

Thesis submitted in partial fulfillment of the
requirements of the MS degree in Mathematics at the
University of Engineering & Technology Peshawar.

Session: 2010–2012

Department of Basic Sciences & Islamiat


University of Engineering & Technology Peshawar, Pakistan.
Dedicated
to the Memory of my
(Late) Parents

Acknowledgment

The overall credit of my creative and research work undoubtedly goes to God Almighty
who has endowed me with the potential to carry it on and to complete it in due course
of time. The credit of a thorough and successful struggle on my part goes to the last
prophet of Allah, Muhammad (PBUH), and the other prophets because they have always
been there for us as lighthouses.
I would like to acknowledge here the services of my supervisor, Dr. Noor Badshah
who encouraged me a lot and put me on the track towards achieving my goal. The area
of my research was quite a new thing for me and I had cold feet in the beginning but
his gestures of ownership and love always proved a guiding tool for sustaining the spirit
of doing something rather than doing nothing. I would also like to appreciate the vision of the
doctor sb. because he gave me a sense of how new concepts are developed and how these
concepts are used in various fields for the betterment of humanity in general. It would
be quite an injustice here not to mention the love and teaching spirit of my course work
teachers, Dr. Sirajul Islam and Dr. Amjad Ali. They helped me a lot in uplifting my
spirit and endowed me with a winning heart in every kind of situation.
I am all prayer for my late parents whose goodwill and continuous care made me
able to overcome the various hazards of life with winning forehead. They have a great
share in shaping and designing my dreams in life. My elder brother, Khalid Usman, also
needs to be appreciated for shouldering family responsibilities and giving me a free hand
in utilizing my time and resources. The role of my second elder brother, Wajibullah
Khan, Assistant Professor of English at GPGC Kohat, is also unforgettable due to his
encouragement and early guidance. I would also like to appreciate the services of my
cousin, Mr. Rasool Muhammad, Assistant Professor of Statistics who taught me the
basic concepts of Mathematics while I was a college student.
It is very difficult to end the acknowledgment without mentioning all of my research
fellows whose co-operation and friendly behaviour highly encouraged me while on the
way to shoulder the task of doing research in Computational Numerical Analysis.

Contents

Dedication i

Acknowledgment ii

Abstract v

List of Figures vi

List of Tables viii

Publications ix

1 Introduction: 1
1.1 Image Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Mathematical Preludes 3
2.1 Metric and Metric Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Norm and Normed Space . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.3 Continuous and Digital Images . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4 Computer Vision & Image Processing . . . . . . . . . . . . . . . . . . . . 7
2.5 Iterative Methods for System of Equations . . . . . . . . . . . . . . . . . . 9
2.5.1 Basic Concepts about System of Equations: . . . . . . . . . . . . . 9
2.5.2 Splitting of Matrix: . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.5.3 Jacobi Iteration: . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.5.4 Weighted Jacobi Iteration: . . . . . . . . . . . . . . . . . . . . . . . 11
2.5.5 Gauss-Seidel Iteration . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.5.6 Successive Over Relaxation (SOR) Iteration . . . . . . . . . . . . . 12
2.6 Time Marching Iteration Schemes . . . . . . . . . . . . . . . . . . . . . . . 13
2.6.1 Explicit Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.6.2 Stability of the Explicit Scheme . . . . . . . . . . . . . . . . . . . . 15
2.6.3 Implicit Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.6.4 Stability of the Implicit Scheme . . . . . . . . . . . . . . . . . . . . 17
2.6.5 Crank-Nicolson Scheme . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6.6 Stability of the Crank-Nicolson Scheme . . . . . . . . . . . . . . . 18
2.6.7 Additive Operator Splitting (AOS) Scheme . . . . . . . . . . . . . 19

2.6.8 Additive Multiplicative Operator Splitting (AMOS) Scheme . . . . 20

3 Variational Scalar and Vector-Valued Models in Image Segmentation 23


3.1 Variational Image Segmentation Models . . . . . . . . . . . . . . . . . . . 23
3.2 Geodesic Active Contours Model . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Active Contours Without Edges . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.1 Level Set Formulation of Chan-Vese Model: . . . . . . . . . . . . . 25
3.3.2 Semi Implicit Method . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.4 The Local Chan-Vese Model (LCV) . . . . . . . . . . . . . . . . . . . . . 30
3.5 Active Contour Without Edges (Vector-Valued Case) . . . . . . . . . . . . 31
3.5.1 Level Set Formulation: . . . . . . . . . . . . . . . . . . . . . . . . . 31

4 Multigrid method for Active Contour Vector-Valued Model 33


4.1 Semi-Implicit Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2 Additive Operator Splitting (AOS) method . . . . . . . . . . . . . . . . . 35
4.3 Multi-Grid Algorithm for the Non-linear PDE of CV Vector-Valued model 36
4.3.1 Full Approximation Scheme (FAS) of Multi-Grid Algorithm . . . . 37
4.3.2 Local Smoother: . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.3.3 Global Smoother . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.3.4 Multi-Grid Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

5 Co-efficient of Variation based Variational Model 45


5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.2 The Coefficient of Variation based Vector-Valued Model (CoVVV) . . . . . 45
5.2.1 Level Set Formulation . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.2.2 Additive Operator Splitting Method (AOS) . . . . . . . . . . . . . 49
5.3 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

6 Conclusion and Future Work 55


6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Bibliography 56

Abstract

Image segmentation is an important branch of image processing. The purpose of image


segmentation is to extract meaningful and important objects, which have some properties
in common like intensity, color and texture etc. in the image. Image segmentation, in
general, is a very difficult task because natural images are very complex and diverse to
deal with them.
In this thesis, our main work is based on active contour vector-valued model like Chan-
Vese model. These type of models are mostly used in image segmentation, on behalf of
their solid mathematical formulation. As an aplication of the vector-valued model we
consider an object which has different missing parts in different channels, but when all
the channels are combined, the complete object is detected. The already existed models
like active contour model etc., have been developed to detect objects in gray images, but
it fails to detect the object while using different images of the same object. This case
occurs in medical sciences when we take images by different equipments (i.e., PET, MRI,
and CT), in color images, or in textured images. Each image channel may have signal
characteristics that can be combined with other channels to enhance contour detection.
The first part of this thesis is concerned about developing fast iterative methods.
Since Semi-Implicit (SI) method for Euler’s Lagrange (EL) equation is used which is
unconditionally stable, but for images of large sizes it may not work. Thus we propose,
multi-grid (MG) method for the solution EL equation that arise from the minimization
Chan-Vese vector-valued model.
The second part of the thesis is concerned about developing new active contour model
for segmentation of vector-valued (VV) images. Since Chan-Vese VV uses variance as
a fitting term and variance is a measure of dispersion. So images having diversity can
be segmented easily by Chan-Vese VV but images having diffusivity in it may not be
segmented properly. We know that Coefficient of Variation (CoV ) measures diffusiv-
ity. So motivation from this property we modify the fidelity or fitting term by using
(CoV ) rather than variance of the CV active contour vector-valued model. For the local
information We develop local fidelity term along with the global term.

List of Figures

2.1 l2 norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 l∞ norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 (a) Grey image (b) Colour (RGB) image (c) Grey scale image is preserved
in computer memory in a form of 2-D array of numbers where each number
carries the intensity value in the range [0, 255]. (d) RGB scale image stored
in the form of array of vectors (r, g, b). Each colour has its own intensity
level at each pixel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 (a) discrete value table for the small rectangle in grey image 2.3(a) (b)
discrete value table of red channel for the small rectangle in (RGB) image
2.3(b) (c) discrete value table of green channel for the small rectangle in
(RGB) image 2.3(b) (d) discrete value table of blue channel for the small
rectangle in (RGB) image 2.3(b). . . . . . . . . . . . . . . . . . . . . . . . 8

4.1 The results of the SI, AOS and MG methods are given in rows 1, 2 and 3
respectively, whereas the results of channels 1, 2, 3 and the segmented result
of the three channels can be found in columns 1, 2, 3 and 4 respectively. . 42
4.2 The results of the SI, AOS and MG methods are given in columns 1, 2
and 3 respectively, whereas the results of channels 1, 2, 3, 4 and the result
of the recovered object can be found in rows 1, 2, 3, 4 and 5 respectively. . 43
4.3 The results of the SI, AOS and MG methods are given in rows 1, 2
and 3 respectively, whereas the initial contour, the result after the respective
number of iterations and the segmented results can be found in columns 1, 2 and 3
respectively. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

5.1 Images that are used in our experiments . . . . . . . . . . . . . . . . . . . 51


5.2 Segmenting RGB image, using the Chan-Vese Vector-Valued model (VVCV):
(a) For the initial contour we have taken the centre x0 = 130 & y0 = 150 and
radius r0 = 40, size = 256 × 256. (b) Result of the VVCV model after 700
iterations. (c) Segmented result of the VVCV model . . . . . . . . . . . 52
5.3 Segmenting RGB image, using the proposed Coefficient of Variation model
(CoVVV): (a) For the initial contour we have taken the centre x0 = 130 & y0 =
150 and radius r0 = 40, size = 256 × 256. (b) Result of the CoVVV model after
700 iterations. (c) Segmented result of the CoVVV model . . . . . . . . 52

5.4 Segmenting RGB image, using the Chan-Vese Vector-Valued model (VVCV):
(a) For the initial contour we have taken the centre x0 = 115 & y0 = 130 and
radius r0 = 40, size = 256 × 256. (b) Result of the VVCV model after 700
iterations. (c) Segmented result of the VVCV model . . . . . . . . . . . 53
5.5 Segmenting RGB image, using the proposed Coefficient of Variation model
(CoVVV): (a) For the initial contour we have taken the centre x0 = 130 & y0 =
150 and radius r0 = 40, size = 256 × 256. (b) Result of the CoVVV model after
700 iterations. (c) Segmented result of the CoVVV model . . . . . . . . 53
5.6 Segmenting RGB image, using the Chan-Vese Vector-Valued model (VVCV):
(a) For the initial contour we have taken the centre x0 = 130 & y0 = 115 and
radius r0 = 40, size = 256 × 256. (b) Result of the VVCV model after 58
iterations. (c) Segmented result of the VVCV model . . . . . . . . . . . 54
5.7 Segmenting RGB image, using the proposed Coefficient of Variation model
(CoVVV): (a) For the initial contour we have taken the centre x0 = 100 & y0 =
100 and radius r0 = 45, size = 256 × 256. (b) Result of the CoVVV model after
700 iterations. (c) Segmented result of the CoVVV model . . . . . . . . 54

List of Tables

4.1 Comparison of the SI, AOS and Multi-Grid methods on a real image
using the CV Vector-Valued model, with respect to the number of
iterations and CPU time in seconds. . . . . . . . . . . . . . . . . . . . . 41

Publications

• Noor Badshah, Fahim Ullah and Haider Ali, “New Variational Model for Vector
Valued Image Segmentation”, International Conference on Modeling and Simula-
tions (ICOMS-2013), Vol. II, Ver. 1.0, November 25–27, 2013.

• Fahim Ullah, Noor Badshah, “Fast Iterative Methods for Vector Valued Image
Segmentation Models”, Submitted 2014.

Chapter 1

Introduction:

Computer vision is a branch of artificial intelligence. Its aim is to
give vision to machines and to endow them with advanced vision capabilities,
comparable to human sight, by developing mathematical techniques, algorithms and mathematical
models. Computer vision is divided into different areas like pattern recognition, image
processing etc. The task of image processing is to enhance a raw image for various
applications. For this purpose, various techniques have been developed during the last
three decades, e.g., techniques for enhancing images obtained from military
intelligence flights and spacecraft [33]. Image processing is further subdivided into image
segmentation, image in-painting, image denoising etc. In this thesis we mainly work
on a specific area of image processing known as image segmentation.

1.1 Image Segmentation


Image segmentation is one of the most significant and basic areas in computer graphics
and image processing. In image segmentation, we divide a given image I(x, y) into
homogeneous regions which differ from each other with respect to intensity,
texture etc. We choose those regions which are of our interest and keep the rest
as the background of the image. In other words, the work of image segmentation is
to pick out a region or regions of one’s interest (i.e., the foreground) from the background
of the image. Another, feature-based, way to define image segmentation is that the
regions of interest in the image are projected on the screen,
and the work of image segmentation is then to locate and identify those regions rather
than their circumstantial appearance due to lighting, image acquisition and so on.
Over the last three decades, mathematicians have made a lot of effort in the field of
image segmentation. They have developed a number of mathematical models to achieve
the required goal. Recently, promising models based on variational approaches and PDEs
have been developed to help resolve the image segmentation problem.
This thesis mainly focuses on such variational and active-contour algorithms and models.
This thesis mainly focuses on such variational and active-contour algorithms and models.
The Euler-Lagrange equations obtained from these models are often parabolic differential
equations that can be iterated with respect to time until they arrive at a steady
state.
In order to find the edges of an object, we take a contour in the targeted image. In
this connection, Osher and Sethian [34] developed the well-known level-set technique,
which represents the contour implicitly as the zero level of a level-set function.
Our main work in this thesis is on the CV vector-valued image segmentation model
[14]. The main objective is to improve the efficiency of the model in terms of its convergence
to the edges and its CPU time. For this purpose we modify the CV model [14] by using the
coefficient of variation rather than the variance in the fitting or fidelity term of the model.
As a result, the modified model gives better results in terms of CPU time and in segmenting
low-contrast objects and overlapping regions.

1.2 Thesis Outline


Chapter 2:

This chapter covers some preliminary mathematical tools, such as:

• Definitions and examples of vector, normed and metric spaces.

• Definition and explanation of image processing.

• Some introductory iterative techniques for solving systems of equations.

• Explanation of time marching schemes and a discussion of their stability.

Chapter 3:

This chapter consists of a literature review of image segmentation models:

• Some existing region-based models.

• A brief discussion of variational models, especially the CV model (“active contours without edges”).

• A brief discussion of the local C-V model.

• A discussion of the CV vector-valued model.

Chapter 4:

This chapter consists of our proposed fast solution methods for the CV vector-valued model:

• The semi-implicit and additive operator splitting (AOS) methods.

• A multi-grid algorithm with local and global smoothers.

• A comparison of the methods and a conclusion.

Chapter 5:

This chapter consists of our proposed variational model:

• The Coefficient of Variation based Vector-Valued Model (our proposed model).

• The numerical method applied to the model.

• Experimental results.

Chapter 6:

This chapter consists of the conclusion and future planning:

• Conclusion.

• Future strategy.
Chapter 2

Mathematical Preludes

Here our discussion will be about some useful definitions, examples and theorems,
which will serve as basic tools for the rest of the following chapters.

2.1 Metric and Metric Space


Definition 2.1.1 (Metric) :

For any non-empty set V, a metric is a function
d : V × V → ℝ subject to the following conditions:

• d(v1, v2) ≥ 0,

• d(v1, v2) = 0 iff v1 = v2,

• d(v1, v2) = d(v2, v1),

• d(v1, v3) ≤ d(v1, v2) + d(v2, v3),

where v1, v2 and v3 are elements of V.

2.2 Norm and Normed Space


Definition 2.2.1 (Seminorm) :

Let V be a vector space over a field F (ℝ or ℂ). Then a function ‖·‖ : V → ℝ is said to
be a Seminorm subject to the following conditions:

• ‖v‖ ≥ 0, for all v ∈ V,

• ‖λv‖ = |λ| ‖v‖, for all v ∈ V and λ ∈ F,

• ‖v1 + v2‖ ≤ ‖v1‖ + ‖v2‖, for all v1, v2 ∈ V.

A seminorm is said to be a Norm if it satisfies the following additional property:

• ‖v‖ = 0 implies v = 0.

Definition 2.2.2 (l1-norm or Taxicab-norm) :

A norm of a vector v ∈ ℝⁿ of the type

‖v‖₁ = Σ_{l=1}^{n} |v_l|

is said to be the l1-norm or Taxicab-norm. This norm gives the distance from the origin to the
vector v measured along a rectangular street grid.
Definition 2.2.3 (l2-norm or Euclidean-norm) :

For any vector v = (v1, v2, · · · , vn) in the n-dimensional vector space ℝⁿ, the Euclidean norm
can be defined as:

‖v‖₂ = √( Σ_{l=1}^{n} v_l² ) = √( v1² + v2² + · · · + vn² )    (2.1)

The Euclidean norm of a vector v gives its ordinary distance from the origin; Figure 2.1
illustrates the l2-norm.
The norm on the n-dimensional complex space ℂⁿ can be found in the following way:

‖z‖ = √( |z1|² + |z2|² + · · · + |zn|² ) = √( z1 z̄1 + z2 z̄2 + · · · + zn z̄n ).

Example 2.2.1

The Euclidean norm of the vector v = (1, −3, 2) is ‖v‖₂ = √( (1)² + (−3)² + (2)² ) = √14 ≈ 3.74.

Figure 2.1: (a) 2-dimensional l2-norm (b) 3-dimensional l2-norm.

Definition 2.2.4 (l∞-norm or Maximum-norm) :

For any vector v in the n-dimensional vector space ℝⁿ, the norm

‖v‖∞ = max(|v1|, |v2|, · · · , |vn|)

is said to be the l∞-norm or Maximum-norm.

Figure 2.2: (a) 2-dimensional l∞-norm (b) 3-dimensional l∞-norm.

Definition 2.2.5 (lp-norm) :

For any real number p ≥ 1, a norm defined as

‖v‖_p = ( Σ_{l=1}^{n} |v_l|^p )^{1/p}    (2.2)

is known as the lp-norm.
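The norms defined above are straightforward to compute numerically. The following is a minimal plain-Python sketch (the helper names `norm_p` and `norm_inf` are our own, not from any library), using the vector from Example 2.2.1 as test data:

```python
import math

def norm_p(v, p):
    """l_p-norm (eq. 2.2); p = 1 gives the taxicab norm, p = 2 the Euclidean norm."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def norm_inf(v):
    """l_infinity (maximum) norm."""
    return max(abs(x) for x in v)

v = [1, -3, 2]          # the vector of Example 2.2.1
print(norm_p(v, 1))     # l1 norm: 1 + 3 + 2 = 6.0
print(norm_p(v, 2))     # l2 norm: sqrt(14) ≈ 3.7417
print(norm_inf(v))      # l_infinity norm: 3
```

As p grows, ‖v‖_p decreases towards the maximum norm, which is why ‖v‖∞ is also written as the lp-norm with p = ∞.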

Theorem 2.2.1

The sequence of vectors {V(h)} converges to a vector V ∈ ℝⁿ with respect to the infinity
norm iff

lim_{h→∞} v_j(h) = v_j, for each j = 1, 2, · · · , n.

Example 2.2.2

Consider a vector V(h) ∈ ℝ⁴ such that

V(h) = ( v1(h), v2(h), v3(h), v4(h) )ᵗ = ( 5, 3 + 2/h, 4/h², e^{−2h} sin(h) )ᵗ.

Applying the limit to the components of the vector, we get

lim_{h→∞} 5 = 5,  lim_{h→∞} (3 + 2/h) = 3,  lim_{h→∞} 4/h² = 0  and  lim_{h→∞} e^{−2h} sin(h) = 0.

Therefore, according to the statement of Theorem 2.2.1, the sequence {V(h)} converges
to (5, 3, 0, 0)ᵗ with respect to the infinity norm. At the same time it is not obvious whether
the sequence also converges to (5, 3, 0, 0)ᵗ with respect to the 2-norm. For this purpose we take
the help of the following theorem and apply it to this special case.

Theorem 2.2.2

For any vector V ∈ ℝⁿ,

‖V‖∞ ≤ ‖V‖₂ ≤ √n ‖V‖∞.
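As a quick numerical illustration (a sketch, not part of the thesis experiments), one can verify this sandwich inequality for the limit vector (5, 3, 0, 0)ᵗ of Example 2.2.2:

```python
import math

def norm2(v):
    """l2 (Euclidean) norm, eq. (2.1)."""
    return math.sqrt(sum(x * x for x in v))

def norm_inf(v):
    """l_infinity (maximum) norm."""
    return max(abs(x) for x in v)

V = [5.0, 3.0, 0.0, 0.0]                   # limit vector of Example 2.2.2, n = 4
lower = norm_inf(V)                        # ||V||_inf = 5
upper = math.sqrt(len(V)) * norm_inf(V)    # sqrt(n) * ||V||_inf = 10
assert lower <= norm2(V) <= upper          # the bound of Theorem 2.2.2 holds
print(lower, norm2(V), upper)              # 5.0, sqrt(34) ≈ 5.831, 10.0
```

Since ‖V(h) − V‖₂ ≤ √n ‖V(h) − V‖∞ → 0, convergence in the infinity norm implies convergence in the 2-norm as well.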

Definition 2.2.6 (Cauchy Sequence) :

A sequence {v1, v2, · · · } in a normed vector space V is said to be a Cauchy sequence iff
for all ε > 0 there exists N ∈ ℕ such that

‖vn − vm‖ < ε, for all m, n > N,

or, in other words,

lim_{min(m,n)→∞} ‖vn − vm‖ = 0.

Definition 2.2.7 (Banach Space) :

A real or complex normed space V is said to be a Banach space if every Cauchy sequence
(v_l) in V converges in V. Mathematically:

lim_{l→∞} v_l = v, i.e., lim_{l→∞} ‖v_l − v‖_V = 0.

2.3 Continuous and Digital Images


This thesis mainly addresses the segmentation of vector-valued images. Therefore, first
of all, we would like to discuss how grey and colour images can be interpreted.
Definition 2.3.1
A grey image can be considered as a real-valued function on a bounded domain Ω of ℝ².
Definition 2.3.2
A colour image can be considered as a vector-valued function on a bounded domain Ω of
ℝ² that contains colour information for each pixel; e.g., in an RGB image each pixel has a
digital value in the form of a three-dimensional vector (r, g, b), whose components are called
the red, green and blue channels respectively.
Definition 2.3.3
A digital or discrete image takes the data from a continuous image in the form of an array
of numbers, where each pixel¹ of the image is assigned a value, and as a result all these
values form a matrix. For example, the discrete form of a grey image is a matrix of real
numbers where each element of the matrix gives a value (intensity) at a certain point in the
image that varies between 0 and 255. The value 0 gives a completely black dot on paper
while 255 gives a completely white one. The values between the two extremes
are different combinations of black and white. Similarly, an RGB image is a
two-dimensional matrix of vectors. Here, at any pixel (x, y), the digital elements of the matrix
are of the form (r, g, b), which give information about the various combinations of
the three colour channels.
Figure 2.3 shows two images, grey and RGB, both of size 256 × 256; their
respective intensity tables for the marked rectangles are shown in Figure 2.4.
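The array interpretation described above can be illustrated with small hand-made arrays; the following NumPy sketch uses made-up intensity values:

```python
import numpy as np

# A grey image: a 2-D array of intensities in [0, 255] (made-up values).
grey = np.array([[0, 128],
                 [200, 255]], dtype=np.uint8)
print(grey.shape)        # (2, 2): one intensity per pixel

# An RGB image: a 3-vector (r, g, b) at each pixel.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 0, 0)  # a pure red pixel at position (0, 0)
print(rgb.shape)         # (2, 2, 3): three channels per pixel
print(rgb[0, 0])         # the (r, g, b) vector of that pixel
```

The third array axis is exactly the "vector-valued" structure the segmentation models of this thesis operate on: each channel is a scalar image in its own right.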

2.4 Computer Vision & Image Processing


Like the other branches (i.e., pattern recognition, statistical learning etc.), image
processing is one of the branches of computer vision. The main objective of image
processing is to recognize and to detect objects in images. Image processing is further
subdivided into different parts like image denoising, image inpainting, image registration,
image segmentation, image tracking, image recognition etc. In this thesis we have mainly
focused our attention on image segmentation.

¹ In grey images a pixel is an element containing information about the position (x, y) ∈ ℝ². In colour
images, each pixel has a corresponding 3-D colour information which determines the different colour
intensities at the position (x, y) ∈ ℝ².

Figure 2.3: (a) Grey image (b) Colour (RGB) image (c) Grey scale image is preserved
in computer memory in the form of a 2-D array of numbers where each number carries the
intensity value in the range [0, 255]. (d) RGB scale image stored in the form of an array of
vectors (r, g, b). Each colour has its own intensity level at each pixel.

2.5 Iterative Methods for System of Equations


2.5.1 Basic Concepts about System of Equations:
We begin by reviewing some basic ideas of iterative methods with which we expect the
reader to be familiar. For a detailed study, the reader may also consult [42]. In
order to discuss the fundamental ideas of direct and iterative methods of numerical linear
algebra for solving linear equations, we express a linear system mathematically as:

Mx = b    (2.3)

where M is a square matrix of order n1 × n1, x = M⁻¹b ∈ ℝ^{n1} is to be found, whereas
b ∈ ℝ^{n1} is given. If eq (2.3) cannot be solved by a direct method, then, using
iterative techniques, we first take an initial approximation x⁽⁰⁾ and get a new value x⁽¹⁾,
which is a comparatively good approximation to the exact solution x. After selecting the
initial approximation x⁽⁰⁾, each iterative method alters the system of equations eq (2.3)
to an equivalent system that can be expressed in the manner below:

x⁽ᵏ⁾ = −M1⁻¹ M2 x⁽ᵏ⁻¹⁾ + M1⁻¹ b, for all k = 1, 2, 3, · · ·

Repeating this process, we generate a sequence of vectors that converges to x, i.e.,

lim_{k→∞} x⁽ᵏ⁾ = x.
Figure 2.4: (a) Discrete value table for the small rectangle in grey image 2.3(a) (b) discrete
value table of the red channel for the small rectangle in (RGB) image 2.3(b) (c) discrete value
table of the green channel for the small rectangle in (RGB) image 2.3(b) (d) discrete value
table of the blue channel for the small rectangle in (RGB) image 2.3(b).

2.5.2 Splitting of Matrix:
The matrix M in eq (2.3) can be split in various ways depending on the situation. Methods
like Jacobi, weighted Jacobi, Gauss-Seidel and Successive Over-Relaxation (SOR) split
the matrix into different forms. The splitting of a matrix is generally given below:

M = M1 + M2,    (2.4)

where M1 is a non-singular square matrix of the same order as M. We construct
the system of equations so that the coefficient matrix M1 is easy to invert.
As a result the system eq (2.3) takes the following form:

x = M1⁻¹(b − M2 x),

and the convergence analysis mainly depends on the spectral radius of the iteration matrix:

Q = −M1⁻¹ M2;

for more detail the reader can consult [20, 22, 32, 45]. We first discuss the Jacobi
iteration.
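The convergence of such a splitting iteration is governed by the spectral radius of the iteration matrix −M1⁻¹M2 being less than 1, which can be checked numerically for any concrete splitting. A NumPy sketch with a made-up strictly diagonally dominant matrix and a Jacobi-type choice M1 = D:

```python
import numpy as np

def iteration_matrix(M1, M2):
    """Iteration matrix Q = -M1^{-1} M2 of the splitting M = M1 + M2."""
    return -np.linalg.solve(M1, M2)

def spectral_radius(Q):
    """Largest eigenvalue magnitude; the iteration converges iff it is < 1."""
    return max(abs(np.linalg.eigvals(Q)))

M = np.array([[4.0, 1.0],
              [2.0, 5.0]])       # made-up, strictly diagonally dominant
M1 = np.diag(np.diag(M))         # Jacobi-type choice: M1 = D
M2 = M - M1                      # remainder
Q = iteration_matrix(M1, M2)
print(spectral_radius(Q))        # ≈ 0.316 < 1, so the iteration converges
```

For this 2 × 2 example the eigenvalues of Q are ±√0.1, so the error shrinks by roughly a factor 0.316 per iteration.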

2.5.3 Jacobi Iteration:


In numerical linear algebra, the simplest among all iterative methods is the Jacobi
method. It splits the matrix M in the following way:

M = M1 + M2, where M1 = D (the diagonal part of M) and M2 = R = M − D,    (2.5)

Using eq (2.5), eq (2.3) takes the following form:

(D + R)x = b
⇒ Dx + Rx = b
⇒ Dx = −Rx + b
⇒ x = −D−1 Rx + D−1 b
⇒ x = Qjac x + cjac , (2.6)

so it will take the iteration form as under:

x(k) = Qjac x(k−1) + cjac , (2.7)

where Qjac = −D⁻¹R and cjac = D⁻¹b. Using the idea of eq (2.6), the i-th equation for
x_i has the following equivalent form:

x_i = Σ_{j=1, j≠i}^{n1} ( −a_ij x_j / a_ii ) + b_i / a_ii.

We generate the k-th iterate of the i-th equation using the previous (k − 1)-th iterate,
for k ≥ 1:

x_i⁽ᵏ⁾ = (1/a_ii) ( Σ_{j=1, j≠i}^{n1} (−a_ij x_j⁽ᵏ⁻¹⁾) + b_i ), for all i = 1, 2, 3, · · · , n1.    (2.8)

Theorem 2.5.1

Let M be a square matrix of order n1 × n1 that is strictly diagonally dominant, i.e.,

Σ_{j≠i} |a_ij| < |a_ii|, for all i = 1, 2, 3, · · · , n1.    (2.9)

Then M is nonsingular and the Jacobi iteration converges to the solution of Mx = b for
any choice of the initial approximation x⁽⁰⁾.
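A compact NumPy sketch of the componentwise update (2.8), written in matrix form; the test system is made up, with a strictly diagonally dominant matrix so that convergence is guaranteed:

```python
import numpy as np

def jacobi(M, b, x0, iters=50):
    """Jacobi sweeps: x^(k) = D^{-1}(b - R x^(k-1)), the matrix form of eq. (2.8)."""
    D = np.diag(M)               # diagonal entries a_ii
    R = M - np.diag(D)           # off-diagonal remainder
    x = x0.astype(float)
    for _ in range(iters):
        x = (b - R @ x) / D      # every component uses only the old iterate
    return x

M = np.array([[4.0, 1.0],
              [2.0, 5.0]])       # strictly diagonally dominant
b = np.array([7.0, 17.0])        # chosen so that the exact solution is (1, 3)
x = jacobi(M, b, np.zeros(2))
print(np.round(x, 6))            # ≈ [1. 3.]
```

Because all components are updated from the old iterate, the sweep is trivially parallel, which is one reason Jacobi-type updates reappear later as smoothers.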

2.5.4 Weighted Jacobi Iteration:


In the weighted Jacobi iteration, we first compute an intermediate value x̃_i using the
Jacobi method:

x̃_i = (1/a_ii) ( Σ_{j=1, j≠i}^{n1} (−a_ij x_j⁽ᵏ⁻¹⁾) + b_i ), for all i = 1, 2, 3, · · · , n1.

The new approximation is then obtained from the equation:

x_i⁽ᵏ⁾ = (1 − ω) x_i⁽ᵏ⁻¹⁾ + ω x̃_i    (2.10)

where ω is a freely selectable constant called the weighting parameter. Eq (2.10)
becomes the Jacobi iteration for ω = 1. In matrix form it reads:

x⁽ᵏ⁾ = ((1 − ω)I + ω Qjac) x⁽ᵏ⁻¹⁾ + ω cjac
     = Qω x⁽ᵏ⁻¹⁾ + cω

where Qω = (1 − ω)I + ω Qjac and cω = ω cjac. If one of the a_ii entries is zero and the
matrix is nonsingular, a reordering of the equations can be performed so that a_ii ≠ 0.
To speed up the convergence, the equations should be arranged so that a_ii is as large as
possible.
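The weighted update (2.10) is a one-line change on top of the Jacobi sweep. A sketch; the choice ω = 2/3 is our own illustrative value (a common smoothing choice), not prescribed by the thesis:

```python
import numpy as np

def weighted_jacobi(M, b, x0, omega=2/3, iters=60):
    """Weighted Jacobi, eq. (2.10): blend the old iterate with a Jacobi value."""
    D = np.diag(M)
    R = M - np.diag(D)
    x = x0.astype(float)
    for _ in range(iters):
        x_tilde = (b - R @ x) / D              # intermediate Jacobi value
        x = (1 - omega) * x + omega * x_tilde  # relaxed update
    return x

M = np.array([[4.0, 1.0],
              [2.0, 5.0]])       # made-up, strictly diagonally dominant
b = np.array([7.0, 17.0])        # exact solution (1, 3)
print(np.round(weighted_jacobi(M, b, np.zeros(2)), 6))  # ≈ [1. 3.]
```

With ω = 1 the routine reduces to the plain Jacobi iteration.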

2.5.5 Gauss-Seidel Iteration


In numerical linear algebra, the Liebmann iteration, commonly known as the
Gauss-Seidel iteration, is an iterative method that is used for the solution of a linear
system of equations. Unlike the Jacobi method, when we compute x_i⁽ᵏ⁾ in the Gauss-Seidel
method we use the new values of the k-th iteration, i.e., x_m⁽ᵏ⁾ for m = 1, 2, 3, · · · , i − 1, and
the old values of the (k − 1)-th iteration for m = i + 1, i + 2, i + 3, · · · , n1. As a result, the splitting
of the matrix eq (2.4) takes the following form:

M1 = L,    M2 = U    (2.11)

where L is a lower triangular matrix that contains the (non-zero) diagonal entries
and U is a strictly upper triangular matrix.
Using eq (2.11), eq (2.3) can be written in the following way:

(L + U )x = b
⇒ Lx + U x = b
⇒ Lx = −U x + b
⇒ x = −L−1 U x + L−1 b
⇒ x = Qgs x + cgs , (2.12)

so it will take the iteration form as under:

x(k) = Qgs x(k−1) + cgs , (2.13)

where Qgs = −L⁻¹U and cgs = L⁻¹b. Using the idea of eq (2.11), the i-th equation for x_i
has the following equivalent form:

x_i = (1/a_ii) ( Σ_{j<i} (−a_ij x_j) + Σ_{j>i} (−a_ij x_j) + b_i ), for all i = 1, 2, 3, · · · , n1.

We generate the k-th iterate of the i-th equation using fresh values, i.e., the k-th iterates of
the first (i − 1) unknowns and the (k − 1)-th iterates of the last (n1 − i) unknowns,
for k ≥ 1:

x_i⁽ᵏ⁾ = (1/a_ii) ( Σ_{j<i} (−a_ij x_j⁽ᵏ⁾) + Σ_{j>i} (−a_ij x_j⁽ᵏ⁻¹⁾) + b_i ), for all i = 1, 2, 3, · · · , n1.
    (2.14)
L−1 can easily be computed due to the triangularity of the matrix M1 = L. The Gauss-
seidel method is also known as forward gauss-seidel one. In oppose to jacobi iteration, in
the cited method, the ordering of the unknown play basic role in iteration. It updates the
unknown matrix x from the first coordinate. In the same way, we can flourish backward
gauss-seidel iteration (BGS) which begins to update the unknown matrix x from the
(n1 )th coordinate.
The splitting of M in the backward gauss-seidel iteration takes the following form:

M1 = U, M2 = L

in BGS, U becomes an upper triangular matrix that also contains the diagonal entries,
whereas the matrix L is a strictly lower triangular one.

Msgs = Mbgs Mgs = U^{−1} L L^{−1} U   (2.15)

Equation (2.15) is known as the symmetric Gauss-Seidel (SGS) iteration, which combines
the two iterations: SGS is actually an FGS sweep followed by a BGS sweep.
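As an illustrative sketch of the componentwise update (2.14), a forward Gauss-Seidel sweep can be written in a few lines of Python; the 3×3 diagonally dominant system below is a hypothetical example, not taken from the text:

```python
def gauss_seidel(A, b, x0, sweeps):
    # Forward Gauss-Seidel: update x coordinate by coordinate, using the
    # freshly computed values for indices < i, as in eq. (2.14).
    n = len(A)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant example; the exact solution is (1, 2, -1).
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 8.0, -6.0]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0], 50)
```

Because each updated component is reused immediately, the sweep typically converges faster than Jacobi on the same diagonally dominant matrix.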

2.5.6 Successive Over Relaxation (SOR) Iteration


The successive over-relaxation (SOR) method is used to obtain comparatively faster convergence.
Here we introduce a constant parameter ω and as a result obtain faster convergence.
Multiplying the system of equations (2.3), with unknown vector x, by ω gives:

ωM x = ωb,   (2.16)

while the splitting of the matrix ωM will take the following form:

ωM = ω(D + L + U) = (D + ωL) + (ωU + (ω − 1)D),   (2.17)

where D, L and U are as defined above. Using eq (2.17), eq (2.16) can be arranged as follows:

ω(D + L + U )x = ωb
⇒ (D + ωL)x + (ωU + (ω − 1)D)x = ωb
⇒ (D + ωL)x = −(ωU + (ω − 1)D)x + ωb
⇒ x = −(D + ωL)−1 (ωU + (ω − 1)D)x
+ (D + ωL)−1 ωb
⇒ x = Qsor x + csor , (2.18)

so it will take the iteration form as under:

x(k) = Qsor x(k−1) + csor , (2.19)

where Qsor = −(D + ωL)^{−1}(ωU + (ω − 1)D) and csor = (D + ωL)^{−1} ωb. Using the idea
of eq (2.17), we generate the k-th iterate of the ı-th equation, using fresh values, i.e., the
k-th iterates of the first (ı − 1) unknowns and the (k − 1)-th iterates of the last (n1 − ı)
unknowns, for k ≥ 1:

x_ı^{(k)} = (1 − ω) x_ı^{(k−1)} + (ω/a_ıı) [ Σ_{<ı} (−a_ı x_^{(k)}) + Σ_{>ı} (−a_ı x_^{(k−1)}) + b_ı ],  ∀ ı = 1, 2, 3, · · · , n1,   (2.20)
where ω is a constant called the relaxation (weighting) factor. If ω ∈ (0, 1), then SOR is called
under-relaxation, and it converges in some cases where Gauss-Seidel fails to converge. If ω > 1,
the role of SOR is to speed up the convergence of systems that already converge under
the Gauss-Seidel iteration. The merit of the SOR iteration is its dramatic improvement
in convergence for a good choice of ω, whose selection is itself a very difficult task.
Setting ω = 1, eq (2.18) reduces to eq (2.12), i.e., SOR becomes Gauss-Seidel in this case.
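The update (2.20) can be sketched as follows; the system and the choice ω = 1.1 are hypothetical illustrations, not taken from the text:

```python
def sor(A, b, x0, omega, sweeps):
    # SOR sweep (eq. 2.20): a weighted combination of the old value and the
    # Gauss-Seidel update; omega = 1 recovers Gauss-Seidel exactly.
    n = len(A)
    x = list(x0)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x

A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 8.0, -6.0]
x = sor(A, b, [0.0, 0.0, 0.0], 1.1, 50)   # mild over-relaxation (omega > 1)
```

For this matrix the exact solution is (1, 2, −1), which the sweep approaches for any ω in the convergent range.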

2.6 Time Marching Iteration Schemes


In this section, we review some numerical methods for solving partial differential
equations that cannot be solved directly by any analytical method, e.g.,

v_t + v v_x = μ v_{xx},

and similarly,

v_{xx} + α v_x + β v_y + γ v_{xy} = 0;

we need numerical methods for the solution of such problems. Before discussing the
time-marching iterative schemes, we describe some of the difference operators,
written below:
vı+1, − vı,
Forward difference-operator (vx )ı, =
h
vı, − vı−1,
Backward difference-operator (vx )ı, =
h
vı+1, − vı−1,
Central difference-operator (vx )ı, =
2h
The above difference operators are taken along x-direction and similarly they can be
taken along y-direction. We can present second order derivatives in the following way,
Second order difference operator along x-axis
vı+1, − 2vı, + vı−1,
(vxx )ı, =
h2
Second order difference operator along y-axis,
vı,+1 − 2vı, + vı,−1
(vyy )ı, =
h2
Second order mixed derivative,
vı+1,+1 − vı−1,+1 − vı+1,−1 + vı−1,−1
(vxy )ı, =
4h2
Now we present the time-marching iterative schemes that can be used for parabolic
differential equations, and we also discuss their stability. For further detail
one can consult [26, 31, 36, 50].
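The difference operators above can be checked on a simple grid function; in the sketch below the quadratic test function v(x) = x² is an arbitrary choice, for which the central and second-order formulas are exact up to rounding:

```python
def forward_diff(v, i, h):  return (v[i + 1] - v[i]) / h
def backward_diff(v, i, h): return (v[i] - v[i - 1]) / h
def central_diff(v, i, h):  return (v[i + 1] - v[i - 1]) / (2 * h)
def second_diff(v, i, h):   return (v[i + 1] - 2 * v[i] + v[i - 1]) / h**2

h = 0.1
xs = [j * h for j in range(11)]
v = [x * x for x in xs]      # v(x) = x^2, so v'(0.5) = 1 and v'' = 2 everywhere
i = 5                        # grid point x = 0.5
```

The one-sided operators carry an O(h) error (here ±0.1 at x = 0.5), while the central and second-order operators reproduce the exact derivatives of a quadratic.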

2.6.1 Explicit Scheme
The explicit scheme calculates the approximation of a system at a later time on the basis of
the approximation of the system at the current time. In order to explain the explicit scheme,
we present the 1-dimensional heat equation:

∂v(x, t)/∂t = β ∂²v(x, t)/∂x²,  x ∈ [0, π],  t > 0,   (2.21)

where β is a positive constant. The initial and boundary conditions for the above equation
(2.21) are as under:

v(x, 0) = φ(x),
v(0, t) = 0,  v(π, t) = 0  for t > 0,   (2.22)

we use the Fourier series method in order to find a solution, defining

φ(x) = −φ(−x)  in the interval −π ≤ x ≤ 0;   (2.23)

otherwise, we use the complex Fourier series, i.e.,

v(x, t) = Σ_{m=−∞}^{∞} A_m e^{ιmx − m²βt},   (2.24)

where A_m is given as under:

A_m = (1/2π) ∫_{−π}^{π} φ(x) e^{−ιmx} dx,  ι = √−1,   (2.25)

the discretized form of eq (2.21) (starting from the initial time and space values t0 = 0,
x0 = 0 and taking increments Δt and Δx of the variables t and x respectively) will be:

(v_^{(k+1)} − v_^{(k)}) / Δt = β (v_{+1}^{(k)} − 2v_^{(k)} + v_{−1}^{(k)}) / (Δx)²

⇒ v_^{(k+1)} = v_^{(k)} + (βΔt/(Δx)²) ( v_{+1}^{(k)} − 2v_^{(k)} + v_{−1}^{(k)} )

⇒ v_^{(k+1)} = (βΔt/(Δx)²) v_{+1}^{(k)} + ( 1 − 2βΔt/(Δx)² ) v_^{(k)} + (βΔt/(Δx)²) v_{−1}^{(k)},   (2.26)

where  = 1, 2, · · · , J − 1 and k = 0, 1, 2, · · · . The boundary conditions (2.22) will take the
following form:

v_0^{(k)} = 0,  v_J^{(k)} = 0,  k = 0, 1, 2, · · ·   (2.27)

and the initial conditions will become:

v_^{(0)} = φ(Δx),   = 1, 2, · · · , J,   (2.28)

eq (2.26) is known as the explicit scheme because the new approximation v_^{(k+1)} is acquired
explicitly from the previous approximation v_^{(k)}. This scheme is simple and also computationally
inexpensive. However, it is only conditionally stable; its stability is
discussed in the next section.
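The update (2.26) can be sketched in a few lines; the grid size, time step and initial datum φ(x) = sin(x) below are arbitrary illustrative choices:

```python
import math

def explicit_step(v, r):
    # One explicit update (eq. 2.26) with homogeneous Dirichlet boundaries
    # (eq. 2.27); r = beta*dt/(dx)^2 must satisfy the stability bound r <= 1/2.
    new = [0.0] * len(v)
    for j in range(1, len(v) - 1):
        new[j] = r * v[j + 1] + (1 - 2 * r) * v[j] + r * v[j - 1]
    return new

J = 20
dx = math.pi / J
beta = 1.0
dt = 0.4 * dx * dx / beta        # chosen so that r = 0.4 < 1/2
r = beta * dt / dx**2
v = [math.sin(j * dx) for j in range(J + 1)]   # phi(x) = sin(x)
for _ in range(50):
    v = explicit_step(v, r)
```

For this initial datum the exact solution is e^{−βt} sin(x), so the profile decays smoothly toward zero; choosing r above 1/2 instead produces growing oscillations, which is exactly the conditional stability discussed next.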

2.6.2 Stability of the Explicit Scheme


Let v_^{(k)} be the approximation after k iterations to the exact solution v(x, t) of the IBVP.
The error can be defined as

r_^{(k)} = v_^{(k)} − v(Δx, kΔt),  ∀ k = 0, 1, 2, · · · , and  = 1, 2, · · · , J,   (2.29)

then we check the behavior of the absolute value of eq (2.29), i.e.,

|v_^{(k)} − v(Δx, kΔt)|  as k → ∞,  keeping Δx and Δt fixed,

and also its behavior when Δx, Δt → 0 keeping tk = kΔt fixed. Now, expressing v_^{(k)} in terms
of a Fourier series,

v_^{(k)} = Σ_{c=−∞}^{∞} A_c e^{ιcΔx} (ξ(c))^k,   (2.30)

where ξ and A_c (as given in eq (2.25)) are unknowns. To find ξ, we replace v_^{(k)} by
A_c e^{ιcΔx} ξ^k, c ∈ ℤ, in eq (2.26) and get the following form:

A_c e^{ιcΔx} ξ^{k+1} = A_c e^{ιcΔx} ξ^k + (βΔt/(Δx)²) [ A_c e^{ιc(+1)Δx} ξ^k − 2A_c e^{ιcΔx} ξ^k + A_c e^{ιc(−1)Δx} ξ^k ]

⇒ ξ = 1 + (βΔt/(Δx)²) ( e^{ιcΔx} − 2 + e^{−ιcΔx} )

⇒ ξ = 1 − (2βΔt/(Δx)²) ( 1 − (e^{ιcΔx} + e^{−ιcΔx})/2 )

⇒ ξ = 1 − (2βΔt/(Δx)²) ( 1 − cos(cΔx) )

⇒ ξ = 1 − (4βΔt/(Δx)²) sin²(cΔx/2).   (2.31)
Now we check whether the function v_^{(k)} (as shown in (2.30)) gives us the exact solution
of eq (2.26). For this purpose, we check the IBVP given in (2.27) and (2.28) using A_c as
given in (2.25). We have 2βΔt/(Δx)² > 0 because all three quantities are positive, so we get
the following relations:

1 − 4βΔt/(Δx)² ≤ 1 − (4βΔt/(Δx)²) sin²(cΔx/2) = ξ(c) ≤ 1

⇒ |ξ(c)| ≤ 1  iff  1 − 4βΔt/(Δx)² ≥ −1,   (2.32)

therefore, we conclude that the explicit scheme is conditionally stable and eq (2.32) is the
stability condition for the scheme.

2.6.3 Implicit Scheme
In the implicit scheme, we calculate the approximation of a system of equations at a later
time on the basis of both approximations, i.e., the current and the later one. Here we again
use the 1-dimensional heat equation (2.21) in order to explain the implicit scheme.
The discretized form of eq (2.21), with t0 = 0, x0 = 0 and increments Δt and Δx of the
variables t and x respectively, is:

(v_^{(k+1)} − v_^{(k)}) / Δt = β (v_{+1}^{(k+1)} − 2v_^{(k+1)} + v_{−1}^{(k+1)}) / (Δx)²

⇒ v_^{(k+1)} − (βΔt/(Δx)²) ( v_{+1}^{(k+1)} − 2v_^{(k+1)} + v_{−1}^{(k+1)} ) = v_^{(k)},   (2.33)

where  = 1, 2, · · · , J − 1 and k = 0, 1, 2, · · · . The initial and boundary conditions are as
in (2.27) and (2.28). Eq (2.33) is known as the implicit scheme because the new approximation
v_^{(k+1)} is acquired implicitly, using both the previous and current approximations v_^{(k)} and v_^{(k+1)}.
This scheme is more complex and also more time consuming computationally.
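A sketch of one implicit step, solving the tridiagonal system (2.33) with forward elimination and back substitution (the Thomas algorithm); the grid and the large time step below are arbitrary choices that deliberately violate the explicit bound r ≤ 1/2:

```python
import math

def implicit_step(v, r):
    # One implicit update (eq. 2.33): solve, for the interior points,
    #   -r*v_{j-1} + (1+2r)*v_j - r*v_{j+1} = v_j^{(k)}
    # with homogeneous Dirichlet boundaries, via the Thomas algorithm.
    n = len(v)
    d = [1 + 2 * r] * n
    rhs = list(v)
    for j in range(2, n - 1):        # forward elimination over interior 1..n-2
        m = -r / d[j - 1]
        d[j] -= m * (-r)
        rhs[j] -= m * rhs[j - 1]
    new = [0.0] * n                  # boundaries stay zero
    new[n - 2] = rhs[n - 2] / d[n - 2]
    for j in range(n - 3, 0, -1):    # back substitution
        new[j] = (rhs[j] + r * new[j + 1]) / d[j]
    return new

J = 20
dx = math.pi / J
r = 1.0 * 0.1 / dx**2                # r ≈ 4, far above the explicit limit 1/2
v = [math.sin(j * dx) for j in range(J + 1)]
for _ in range(5):
    v = implicit_step(v, r)
```

Despite the large time step, the solution decays smoothly (here toward e^{−t} sin x), illustrating the unconditional stability shown in the next section.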

2.6.4 Stability of the Implicit Scheme


Replacing v_^{(k)} by A_c e^{ιcΔx} ξ^k, c ∈ ℤ, in eq (2.33), the following form is obtained:

A_c e^{ιcΔx} ξ^{k+1} − (βΔt/(Δx)²) [ A_c e^{ιc(+1)Δx} ξ^{k+1} − 2A_c e^{ιcΔx} ξ^{k+1} + A_c e^{ιc(−1)Δx} ξ^{k+1} ] = A_c e^{ιcΔx} ξ^k

⇒ ξ ( 1 − (βΔt/(Δx)²) ( e^{ιcΔx} − 2 + e^{−ιcΔx} ) ) = 1

⇒ ξ ( 1 + (4βΔt/(Δx)²) sin²(cΔx/2) ) = 1

⇒ ξ = 1 / ( 1 + (4βΔt/(Δx)²) sin²(cΔx/2) ).   (2.34)

In eq (2.34) we see that the RHS is positive; therefore it can also be written as under:

|ξ| = 1 / ( 1 + (4βΔt/(Δx)²) sin²(cΔx/2) ),   (2.35)

since 1 + (4βΔt/(Δx)²) sin²(cΔx/2) ≥ 1, we have 1 / ( 1 + (4βΔt/(Δx)²) sin²(cΔx/2) ) ≤ 1,
and consequently eq (2.35) can be written as:

|ξ| ≤ 1.   (2.36)

Eq (2.36) is satisfied for each selection of Δx and Δt; therefore, we conclude that the implicit
scheme is unconditionally stable.

2.6.5 Crank-Nicolson Scheme
As we noticed in section (2.6.2), the explicit scheme, although simple, is stable only for
short time steps, i.e., conditionally stable, while in section (2.6.4) we noticed that the
implicit scheme is unconditionally stable for all choices of time step but is time
consuming. Therefore a new type of scheme, called the Crank-Nicolson scheme, is introduced,
which is a combination of the explicit and implicit schemes. The Crank-Nicolson scheme
for the 1-D heat equation (2.21) can be shown as:

(v_^{(k+1)} − v_^{(k)}) / Δt = β { α (v_{+1}^{(k+1)} − 2v_^{(k+1)} + v_{−1}^{(k+1)}) / (Δx)² + (1 − α) (v_{+1}^{(k)} − 2v_^{(k)} + v_{−1}^{(k)}) / (Δx)² }

⇒ v_^{(k+1)} = v_^{(k)} + (αβΔt/(Δx)²) ( v_{+1}^{(k+1)} − 2v_^{(k+1)} + v_{−1}^{(k+1)} ) + ((1 − α)βΔt/(Δx)²) ( v_{+1}^{(k)} − 2v_^{(k)} + v_{−1}^{(k)} ),   (2.37)

where α ∈ [0, 1]; for α = 0 the above scheme reduces to the explicit scheme (2.26) and
for α = 1 the scheme reduces to the implicit form (2.33) [7, 46].

2.6.6 Stability of the Crank-Nicolson Scheme


Replacing v_^{(k)} by A_c e^{ιcΔx} ξ^k, c ∈ ℤ, in eq (2.37), an equation of the following form is
obtained:

A_c e^{ιcΔx} ξ^{k+1} = A_c e^{ιcΔx} ξ^k + (αβΔt/(Δx)²) [ A_c e^{ιc(+1)Δx} ξ^{k+1} − 2A_c e^{ιcΔx} ξ^{k+1} + A_c e^{ιc(−1)Δx} ξ^{k+1} ]
+ ((1 − α)βΔt/(Δx)²) [ A_c e^{ιc(+1)Δx} ξ^k − 2A_c e^{ιcΔx} ξ^k + A_c e^{ιc(−1)Δx} ξ^k ]

⇒ ξ ( 1 − (αβΔt/(Δx)²) ( e^{ιcΔx} − 2 + e^{−ιcΔx} ) ) = 1 + ((1 − α)βΔt/(Δx)²) ( e^{ιcΔx} − 2 + e^{−ιcΔx} )

⇒ ξ ( 1 + (4αβΔt/(Δx)²) sin²(cΔx/2) ) = 1 − (4(1 − α)βΔt/(Δx)²) sin²(cΔx/2)

⇒ ξ = ( 1 − (4(1 − α)βΔt/(Δx)²) sin²(cΔx/2) ) / ( 1 + (4αβΔt/(Δx)²) sin²(cΔx/2) ),   (2.38)

since

1 − (4(1 − α)βΔt/(Δx)²) sin²(cΔx/2) ≤ 1 + (4αβΔt/(Δx)²) sin²(cΔx/2),

therefore

( 1 − (4(1 − α)βΔt/(Δx)²) sin²(cΔx/2) ) / ( 1 + (4αβΔt/(Δx)²) sin²(cΔx/2) ) ≤ 1,

consequently it can be deduced from eq (2.38) that:

|ξ| ≤ 1,   (2.39)

for each choice of Δx and Δt eq (2.39) is satisfied; therefore, we conclude that the Crank-Nicolson
scheme is also unconditionally stable.
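The three stability results above can be checked numerically by evaluating the amplification factors (a small verification sketch, not part of the thesis):

```python
import math

def xi_explicit(r, theta):
    # Amplification factor of the explicit scheme, eq. (2.31);
    # r = beta*dt/(dx)^2 and theta = c*dx is the mode's phase angle.
    return 1 - 4 * r * math.sin(theta / 2) ** 2

def xi_implicit(r, theta):
    # Amplification factor of the implicit scheme, eq. (2.34).
    return 1 / (1 + 4 * r * math.sin(theta / 2) ** 2)

def xi_cn(r, theta, alpha=0.5):
    # Amplification factor of the weighted scheme, eq. (2.38).
    s = 4 * r * math.sin(theta / 2) ** 2
    return (1 - (1 - alpha) * s) / (1 + alpha * s)

rs = [0.1, 0.5, 1.0, 5.0]
thetas = [k * math.pi / 8 for k in range(1, 9)]
```

Sweeping these over r and theta confirms that the implicit and Crank-Nicolson (α = 1/2) factors never exceed 1 in magnitude, while the explicit factor does as soon as r exceeds 1/2.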

2.6.7 Additive Operator Splitting (AOS) Scheme


The AOS scheme was initially introduced by Tai [25] and Weickert [50] and is implemented for
the discretization of PDEs of the form:

v_t = ∇·(G ∇v) + g,   (2.40)

where G = 1/|∇v|, 0 ≤ t ≤ T and Ω ⊆ ℝ^d, with initial and boundary conditions:

v(0, ·) = v_0  in Ω,
∂v/∂n⃗ = 0  on ∂Ω,   (2.41)

considering only the first term on the RHS of eq (2.40) and dropping the reaction term g,
we get a scheme that is implicit w.r.t. the time discretization and semi-implicit w.r.t. the
spatial differences:

v^{(k+1)} = ( I − (Δt/Δx²) M )^{−1} v^{(k)},  k = 1, 2, · · ·   (2.42)

here v^{(k)} is an n1^d-dimensional vector, and the matrices I and M are of size n1^d × n1^d.
Using the tensor product, the matrix M can be split in the following way:

M = M1 ⊗ M2 ⊗ · · · ⊗ Md,

where Mı is a tridiagonal matrix obtained from the discretization of eq (2.40) with
respect to the ı-th variable xı. The tensor product of two matrices Mı and M is shown in
the following way:
Mı = ( α11 α12 α13
       α21 α22 α23
       α31 α32 α33 )   and   M = ( β11 β12 β13
                                     β21 β22 β23
                                     β31 β32 β33 ),

then

Mı ⊗ M = ( α11 M  α12 M  α13 M
            α21 M  α22 M  α23 M
            α31 M  α32 M  α33 M ).


Since the matrix M has 2d + 1 non-zero diagonals with bandwidth d·n1, it is quite
difficult to solve the block-tridiagonal system with iterative matrix M directly, compared
to solving one-dimensional systems having tridiagonal iterative matrices.

In the AOS scheme we perform an additive decomposition of the evolution matrix M. The process
is shown in the following equation:

v^{(k+1)} = (1/d) Σ_{ı=1}^{d} ( I − d (Δt/Δx²) Mı )^{−1} v^{(k)},  k = 1, 2, · · ·   (2.43)

since in the AOS scheme we split the d-dimensional system into d one-dimensional systems,
each evolution matrix Mı arising from the system is tridiagonal, which makes the method
fast compared to the existing explicit, implicit and semi-implicit methods.
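A minimal sketch of an AOS update of the form (2.43) for a linear diffusion flow (constant G ≡ 1, d = 2) on an n×n grid with reflecting (Neumann) boundaries; the helper names and all parameter values are assumptions for illustration, not the thesis code:

```python
def thomas(sub, diag, sup, rhs):
    # Solve a tridiagonal system by forward elimination and back substitution.
    n = len(rhs)
    d, r = list(diag), list(rhs)
    for i in range(1, n):
        m = sub[i] / d[i - 1]
        d[i] -= m * sup[i - 1]
        r[i] -= m * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - sup[i] * x[i + 1]) / d[i]
    return x

def aos_step(v, tau):
    # One AOS update with d = 2: average of the two one-dimensional solves
    # v^{k+1} = (1/2) * sum_i (I - 2*tau*M_i)^{-1} v^k, Neumann boundaries.
    n = len(v)
    def solve_line(line):
        sub = [0.0] + [-2 * tau] * (n - 1)
        sup = [-2 * tau] * (n - 1) + [0.0]
        diag = [1 + 2 * tau] + [1 + 4 * tau] * (n - 2) + [1 + 2 * tau]
        return thomas(sub, diag, sup, line)
    vx = [solve_line(row) for row in v]                       # x-direction rows
    cols = [solve_line([v[i][j] for i in range(n)]) for j in range(n)]
    vy = [[cols[j][i] for j in range(n)] for i in range(n)]   # y-direction columns
    return [[0.5 * (vx[i][j] + vy[i][j]) for j in range(n)] for i in range(n)]

# Unit mass concentrated at one pixel diffuses but is conserved exactly.
v = [[0.0] * 8 for _ in range(8)]
v[3][3] = 1.0
w = aos_step(v, 0.5)
```

Each direction requires only an O(n) tridiagonal solve per line, which is the source of the speed advantage mentioned above; with Neumann boundaries the update also conserves total mass and preserves non-negativity.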

2.6.8 Additive Multiplicative Operator Splitting (AMOS) Scheme


Here we discuss another form of splitting scheme, called AMOS. As in the AOS
scheme (2.6.7), we split the n-dimensional spatial operator into n one-dimensional
spatial operators. After solving the system in each direction, we first multiply the
inverses of the iterative matrices obtained from the discretization of the spatial operators
in each direction, in all possible orders; we then take the average of all the products and
use the resulting iterative matrix to find the next iterate. For more detail one can
consult [8].
Considering a two-dimensional problem, the iterative system obtained from the discretization
of eq (2.40) along one direction (say the x-direction) is shown in the following equation:

( I − (Δt/Δx²) M1(v^{(k)}) ) v^{(k+1)} = v^{(k)},  k = 1, 2, · · ·   (2.44)

Similarly, the discretization of eq (2.40) along the other direction (say the y-direction) is shown
in the following equation:

( I − (Δt/Δx²) M2(v^{(k)}) ) v^{(k+1)} = v^{(k)},  k = 1, 2, · · ·   (2.45)

Combining eq (2.44) and (2.45) we get the following form:

( I − (Δt/Δx²) M1(v^{(k)}) ) ( I − (Δt/Δx²) M2(v^{(k)}) ) v^{(k+1)} = v^{(k)}

⇒ v^{(k+1)} = ( I − (Δt/Δx²) M2(v^{(k)}) )^{−1} ( I − (Δt/Δx²) M1(v^{(k)}) )^{−1} v^{(k)},  k = 1, 2, · · ·   (2.46)

changing the order in eq (2.46), it can also be written in the following way:

( I − (Δt/Δx²) M2(v^{(k)}) ) ( I − (Δt/Δx²) M1(v^{(k)}) ) v^{(k+1)} = v^{(k)}

⇒ v^{(k+1)} = ( I − (Δt/Δx²) M1(v^{(k)}) )^{−1} ( I − (Δt/Δx²) M2(v^{(k)}) )^{−1} v^{(k)},  k = 1, 2, · · ·   (2.47)

taking the average of eq (2.46) and eq (2.47) we get the following form:

v^{(k+1)} = (1/2) { ( I − (Δt/Δx²) M2(v^{(k)}) )^{−1} ( I − (Δt/Δx²) M1(v^{(k)}) )^{−1}
+ ( I − (Δt/Δx²) M1(v^{(k)}) )^{−1} ( I − (Δt/Δx²) M2(v^{(k)}) )^{−1} } v^{(k)},   (2.48)

eq (2.48) is known as the AMOS scheme. The advantage of the AMOS scheme is that, using the
semi-implicit scheme, it gives better first-order accuracy than the corresponding AOS scheme;
it is also more advantageous than the corresponding AOS scheme in achieving second-order
accuracy in each direction when the Crank-Nicolson scheme is used.

Chapter 3

Variational Scalar and Vector-Valued Models in Image Segmentation

In this chapter, we include some of the models which are commonly used in image
segmentation. All these models take an image as input and produce an image as output.
Our focus in this chapter will be the variational (i.e., scalar and vector-valued) models
that are used in image segmentation.

3.1 Variational Image Segmentation Models


Image segmentation plays a highly important role in image processing. Its main
function is to divide an image into different regions or parts according to their
homogeneous properties like color, intensity, etc.; in other words, through image
segmentation one can select specific features of the image [2, 15, 29]. Many models are
used for the segmentation of images, of which thresholding techniques, watershed
segmentation techniques and region growing algorithms are the most popular approaches.
All these approaches are non-equation-based methods. Most of the above methods are based
on a discrete setting and as a result depend on parametrization; for more detail one can
consult [41].
A number of image segmentation models based on the variational approach have been
introduced in the recent past. The most well known among these are the Mumford-Shah
model [30] and the active contour model [16]. As far as the Mumford-Shah model is
concerned, its purpose is to partition the given image I into different regions.
The Mumford-Shah model in n dimensions can be defined as:

E^{MS}(u, C) = β ∫_Ω |u − I|² dx1 dx2 · · · dxn + ∫_{Ω\C} |∇u|² dx1 dx2 · · · dxn + α H^{(n−1)}(C),   (3.1)

where α and β are positive parameters and H^{(n−1)} is the (n − 1)-dimensional Hausdorff
measure. On the RHS of equation (3.1), the first term is called the fidelity (fitting)
term with respect to the given image I; it forces u to approximate I. The second term is
known as the regularization term; it keeps u smooth inside Ω \ C. The last term is the
constraint on the discontinuity set (edges) C; it keeps the boundary as short
as possible. The existence and regularity of the minimizers and theoretical results on eq (3.1)
can also be found in detail in [18, 28, 29] and [30].
Model (3.1) is reduced to another form by assuming u to be a piecewise constant function
on different connected closed regions Ωq, i.e., u = aq in each closed region Ωq, where

Ω = ⋃_q Ωq,  and  Ωq ∩ Ωq̃ = ∅ for q ≠ q̃,   (3.2)

where q indexes the different connected closed regions and Ωq stands for the interior
of Ωq [44]. Using the idea of eq (3.2), the Mumford-Shah model (3.1) takes the following
form:

E^{MS}_{CR}(u, C) = β Σ_q ∫_{Ωq} |I − aq|² dxdy + α|C|,   (3.3)

we consider the problem (3.3) mostly in two dimensions. Keeping C fixed and minimizing
eq (3.3) w.r.t. aq, the mean intensity value aq can be expressed in the following way:

aq = ∫_{Ωq} I dxdy / ∫_{Ωq} dxdy,

thus the minimization problem (3.3) takes the following form:

E^{MS}_{CR}(u, C) = β Σ_q ∫_{Ωq} (I − mean(I))² dxdy + α|C|,   (3.4)

Although the segmentation model (3.1) extracts all the noticeable parts of the given image,
some parts can be more important than others depending on the application, as in medical
imaging. Thus we feel the need for another type of segmentation model. In this regard we use
variational models whose purpose is to detect edges in the image.

3.2 Geodesic Active Contours Model


V. Caselles et al. [11] proposed a new energy model based on Kass et al. [24] that is invariant
with respect to curve parameterization [11, 19, 35, 38]. The energy functional can be shown
in the following way:

E_{GAC}(C(℘)) = ∫_0^1 g(|∇I(C(℘))|) |C′(℘)| d℘,   (3.5)

as L(C) = ∫_0^1 |C′(℘)| d℘ = ∫_0^{L(C)} ds, L(C) is the Euclidean length of the curve
C, whereas ds stands for its length element. So, eq (3.5) takes the form:

E_{GAC}(C(℘)) = ∫_0^{L(C)} g(|∇I(C(℘))|) ds,   (3.6)

the functional (3.6) is a length term weighting the Euclidean length element ds by the
function g, which carries information about the edges of the object [3]. Here g is a
function known as an edge detector, defined below:

g(∇(I ∗ Gσ)) = 1 / ( 1 + γ |∇(I ∗ Gσ)|² ),

here Gσ = (1/(2πσ²)) exp( −[(x − μx)² + (y − μy)²] / 2σ² ) is a Gaussian, σ is its standard
deviation, γ is a positive constant, μx, μy are mean values and I ∗ Gσ is a smoothed version
of the image I.
In the next section, we present another type of active contour model that does not depend
on the edge function [16].

3.3 Active Contours Without Edges


For the segmentation of images, a new type of energy-based model was proposed by Chan
and Vese [16] that does not utilize the gradient of the image I(x, y) as a stopping source.
In this model, the stopping source depends on the Mumford-Shah model [30]. The fitting or
fidelity term of the model is as given below:

E1(C) = ∫_{in(C)} (I − a)² dxdy + ∫_{out(C)} (I − b)² dxdy,   (3.7)

here a and b are the average intensity values inside and outside of the contour C
respectively, and the unknown quantity C is an evolving curve. To minimize (3.7), a
regularization term consisting of the length term of C and the area term inside the
contour C, as used in [16], is added, and consequently the following energy equation is
obtained:

E(C, a, b) = μ (length(C))^p + ν · area(inside(C)) + λ1 ∫_{in(C)} (I − a)² dxdy + λ2 ∫_{out(C)} (I − b)² dxdy,   (3.8)

where μ, ν ≥ 0, λ1, λ2 > 0 are constant coefficients, a and b are the unknown inner
and outer mean intensities respectively, C is an n-dimensional hypersurface, and length(C)
is the (n − 1)-dimensional Hausdorff measure H^{n−1}(C). The Chan-Vese model [16] is a
piecewise constant Mumford-Shah segmentation model [30]. The Chan-Vese model is restricted
to dividing the image into two regions only.

3.3.1 Level Set Formulation of Chan-Vese Model:


Here we propose a level set function Ψ : ℝ² → ℝ in order to denote the different regions,
such that:

in(C) = {(x, y) ∈ ℝ² | Ψ(x, y) > 0};
out(C) = {(x, y) ∈ ℝ² | Ψ(x, y) < 0};
on(C) = {(x, y) ∈ ℝ² | Ψ(x, y) = 0};

in order to replace the variable curve by the whole region Ω ⊆ ℝ², we define the Heaviside
and Dirac delta functions respectively in the following equations:

H(x) = { 1 if x > 0;  0 if x < 0 },  and  δ(x) = dH(x)/dx,

expressing each term of the energy functional in terms of the level set function Ψ, we get
the following equations:

length(C) = length(Ψ = 0) = ∫_Ω |∇H(Ψ)| dxdy = ∫_Ω δ(Ψ)|∇Ψ| dxdy,

area(inside(C)) = area(Ψ > 0) = ∫_Ω H(Ψ) dxdy,

∫_{in(C)} (I − a)² dxdy = ∫_Ω (I − a)² H(Ψ) dxdy,

∫_{out(C)} (I − b)² dxdy = ∫_Ω (I − b)² H(−Ψ) dxdy,

thus by using the level set formulation Ψ, eq (3.8) becomes:

E(Ψ, a, b) = μ ∫_Ω δ(Ψ)|∇Ψ| dxdy + ν ∫_Ω H(Ψ) dxdy + λ1 ∫_Ω (I − a)² H(Ψ) dxdy + λ2 ∫_Ω (I − b)² H(−Ψ) dxdy,   (3.9)

whereas the segmented image u can be expressed in the following way:

u = a · H(Ψ) + b · H(−Ψ),

keeping Ψ fixed and minimizing eq (3.9) w.r.t. the two unknown quantities a, b, we have:

a = ∫_Ω I · H(Ψ) dxdy / ∫_Ω H(Ψ) dxdy,

in the case when ∫_Ω H(Ψ) dxdy > 0, which means that the curve has a non-empty interior
in Ω; otherwise, we have to reconstruct the level set function. Similarly,

b = ∫_Ω I · H(−Ψ) dxdy / ∫_Ω H(−Ψ) dxdy,

in the case when ∫_Ω H(−Ψ) dxdy > 0, which means that the curve has a non-empty
exterior in Ω; otherwise, we have to reconstruct the level set function Ψ.
In order to obtain an Euler-Lagrange equation in Ψ, we take regularized forms of the
Heaviside function H and the Dirac delta function δ, represented by H_ε and δ_ε respectively,
because H is not differentiable at the point 0. We take H_ε and δ_ε as in [9, 16, 17], i.e.,

H_ε(y) = (1/2) ( 1 + (2/π) tan^{−1}(y/ε) ),

δ_ε(y) = H_ε′(y) = (1/π) · ε / (ε² + y²),
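A minimal sketch of these regularized functions and the resulting mean intensities a and b of eq (3.9); the 4×4 toy image, level set values and eps are hypothetical choices:

```python
import math

def heaviside(y, eps=1.0):
    # Regularized Heaviside H_eps (Chan-Vese choice).
    return 0.5 * (1 + (2 / math.pi) * math.atan(y / eps))

def delta(y, eps=1.0):
    # Its derivative, the regularized Dirac delta.
    return (1 / math.pi) * (eps / (eps**2 + y**2))

def region_means(I, Psi, eps=1.0):
    # a = mean of I weighted by H(Psi) (inside); b weighted by 1 - H(Psi) (outside).
    num_a = den_a = num_b = den_b = 0.0
    for row_I, row_P in zip(I, Psi):
        for i_val, p_val in zip(row_I, row_P):
            h = heaviside(p_val, eps)
            num_a += i_val * h;        den_a += h
            num_b += i_val * (1 - h);  den_b += 1 - h
    return num_a / den_a, num_b / den_b

# Toy image: a bright object (value 10) where Psi > 0, dark background (value 2).
I   = [[2, 2, 2, 2], [2, 10, 10, 2], [2, 10, 10, 2], [2, 2, 2, 2]]
Psi = [[-3, -3, -3, -3], [-3, 3, 3, -3], [-3, 3, 3, -3], [-3, -3, -3, -3]]
a, b = region_means(I, Psi, eps=0.1)
```

With a small eps the weights are nearly binary, so a is close to the object intensity 10 and b close to the background intensity 2, exactly the mean values the minimization formulas prescribe.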

thus the regularized form of eq (3.9) takes the following shape:

E_ε(Ψ, a, b) = μ ∫_Ω δ_ε(Ψ)|∇Ψ| dxdy + ν ∫_Ω H_ε(Ψ) dxdy + λ1 ∫_Ω (I − a)² H_ε(Ψ) dxdy + λ2 ∫_Ω (I − b)² H_ε(−Ψ) dxdy,   (3.10)

keeping a, b fixed and minimizing eq (3.10) w.r.t. Ψ in order to find its Euler-Lagrange
equation, we take the Gâteaux derivative of E_ε, i.e.,

lim_{τ→0} (1/τ) { E_ε(Ψ + τΦ, a, b) − E_ε(Ψ, a, b) } = 0,   (3.11)

by applying the above limit (3.11), we get the following equation:

μ ∫_Ω ( δ_ε′(Ψ)|∇Ψ| Φ + δ_ε(Ψ) (∇Ψ · ∇Φ)/|∇Ψ| ) dxdy + ∫_Ω δ_ε(Ψ) ( ν + λ1(I − a)² − λ2(I − b)² ) Φ dxdy = 0

⇒ ∫_Ω μ δ_ε′(Ψ)|∇Ψ| Φ dxdy + ∫_Ω μ δ_ε(Ψ) (∇Ψ · ∇Φ)/|∇Ψ| dxdy + ∫_Ω δ_ε(Ψ) ( ν + λ1(I − a)² − λ2(I − b)² ) Φ dxdy = 0,   (3.12)

since for any vector w⃗v and any scalar ws we get the following equation using Green's
theorem:

∫_Ω ws ∇ · w⃗v dxdy = −∫_Ω ∇ws · w⃗v dxdy + ∫_{∂Ω} ws w⃗v · n⃗ ds,

putting Φ = ws and δ_ε(Ψ) ∇Ψ/|∇Ψ| = w⃗v, we get:

∫_Ω Φ ∇·( δ_ε(Ψ) ∇Ψ/|∇Ψ| ) dxdy = −∫_Ω ∇Φ · δ_ε(Ψ) ∇Ψ/|∇Ψ| dxdy + ∫_{∂Ω} Φ δ_ε(Ψ) (∇Ψ/|∇Ψ|) · n⃗ ds

⇒ ∫_Ω δ_ε(Ψ) (∇Ψ · ∇Φ)/|∇Ψ| dxdy = −∫_Ω Φ ∇·( δ_ε(Ψ) ∇Ψ/|∇Ψ| ) dxdy + ∫_{∂Ω} Φ δ_ε(Ψ) (∇Ψ/|∇Ψ|) · n⃗ ds

⇒ ∫_Ω δ_ε(Ψ) (∇Ψ · ∇Φ)/|∇Ψ| dxdy = −∫_Ω δ_ε′(Ψ)|∇Ψ| Φ dxdy − ∫_Ω δ_ε(Ψ) ∇·( ∇Ψ/|∇Ψ| ) Φ dxdy + ∫_{∂Ω} (δ_ε(Ψ)/|∇Ψ|) (∂Ψ/∂n) Φ ds,   (3.13)

using eq (3.13), eq (3.12) takes the following form:

∫_Ω μ δ_ε′(Ψ)|∇Ψ| Φ dxdy + μ ∫_Ω ( −δ_ε′(Ψ)|∇Ψ| Φ − δ_ε(Ψ) ∇·( ∇Ψ/|∇Ψ| ) Φ ) dxdy
+ μ ∫_{∂Ω} (δ_ε(Ψ)/|∇Ψ|) (∂Ψ/∂n) Φ ds + ∫_Ω δ_ε(Ψ) { ν + λ1(I − a)² − λ2(I − b)² } Φ dxdy = 0,

⇒ −∫_Ω μ δ_ε(Ψ) ∇·( ∇Ψ/|∇Ψ| ) Φ dxdy + μ ∫_{∂Ω} (δ_ε(Ψ)/|∇Ψ|) (∂Ψ/∂n) Φ ds
+ ∫_Ω δ_ε(Ψ) { ν + λ1(I − a)² − λ2(I − b)² } Φ dxdy = 0,

where Φ is a test function. Since the choice of Φ is arbitrary, we obtain the
Euler-Lagrange equation, given as follows:

δ_ε(Ψ) { μ ∇·( ∇Ψ/|∇Ψ| ) − ν − λ1(I − a)² + λ2(I − b)² } = 0  in Ω,
(δ_ε(Ψ)/|∇Ψ|) ∂Ψ/∂n⃗ = 0 ⇒ ∂Ψ/∂n⃗ = 0  on the boundary of Ω.   (3.14)

The authors considered the steady state solution of the above system (3.14) in [16],
and as a result the following parabolic equation is deduced:

∂Ψ/∂t = δ_ε(Ψ) { μ ∇·( ∇Ψ/|∇Ψ| ) − ν − λ1(I − a)² + λ2(I − b)² }  in Ω,
Ψ(0, x, y) = Ψ0(x, y)  in Ω,
(δ_ε(Ψ)/|∇Ψ|) ∂Ψ/∂n⃗ = 0 ⇒ ∂Ψ/∂n⃗ = 0  on the boundary of Ω.   (3.15)

This is called the evolution equation of the CV model [16], which is discretized
through a semi-implicit method in the next section.

3.3.2 Semi Implicit Method


The above Euler equation (3.15), taking ν = 0, can be written as:

∂Ψ/∂t = μ δ_ε(Ψ) ∇·( ∇Ψ/|∇Ψ| ) + f,   (3.16)

where

f = δ_ε ( −λ1(I − a)² + λ2(I − b)² ).   (3.17)

In order to discretize eq (3.16) in Ψ, we use a semi-implicit scheme. We consider that the
size of the observed image I is n1 × n2 and the size of a pixel is h1 × h2, where h1 = 1/n1 and
h2 = 1/n2. The (ı-th, -th) grid point (xı, y) is ((ı − 1/2)h1, ( − 1/2)h2). Thus
the discretization of eq (3.16) takes the form:

(Ψ_{ı,}^{(k+1)} − Ψ_{ı,}^{(k)}) / Δt = δ_ε(Ψ_{ı,}^{(k)}) [ (μ/h1²) Δ_−^x ( Δ_+^x Ψ_{ı,}^{(k+1)} / √( (Δ_+^x Ψ_{ı,}^{(k)}/h1)² + (Δ_+^y Ψ_{ı,}^{(k)}/h2)² + β ) )
+ (μ/h2²) Δ_−^y ( Δ_+^y Ψ_{ı,}^{(k+1)} / √( (Δ_+^x Ψ_{ı,}^{(k)}/h1)² + (Δ_+^y Ψ_{ı,}^{(k)}/h2)² + β ) ) ] + f_{ı,},   (3.18)

where the differences Δ_+^x, Δ_−^x, Δ_+^y, Δ_−^y are given by:

Δ_+^x Ψ_{ı,}^{(k)} = Ψ_{ı+1,}^{(k)} − Ψ_{ı,}^{(k)},  Δ_−^x Ψ_{ı,}^{(k)} = Ψ_{ı,}^{(k)} − Ψ_{ı−1,}^{(k)},
Δ_+^y Ψ_{ı,}^{(k)} = Ψ_{ı,+1}^{(k)} − Ψ_{ı,}^{(k)},  Δ_−^y Ψ_{ı,}^{(k)} = Ψ_{ı,}^{(k)} − Ψ_{ı,−1}^{(k)},   (3.19)

setting the step sizes h1 = h2 = 1, equation (3.18) takes the form:

(Ψ_{ı,}^{(k+1)} − Ψ_{ı,}^{(k)}) / Δt = μ δ_ε(Ψ_{ı,}^{(k)}) [ Δ_−^x ( (Ψ_{ı+1,}^{(k+1)} − Ψ_{ı,}^{(k+1)}) / √( (Δ_+^x Ψ_{ı,}^{(k)})² + (Δ_+^y Ψ_{ı,}^{(k)})² + β ) )
+ Δ_−^y ( (Ψ_{ı,+1}^{(k+1)} − Ψ_{ı,}^{(k+1)}) / √( (Δ_+^x Ψ_{ı,}^{(k)})² + (Δ_+^y Ψ_{ı,}^{(k)})² + β ) ) ] + f_{ı,},   (3.20)

using the following functionals:

H_{ı,}^{(k)} = 1 / √( (Δ_+^x Ψ_{ı,}^{(k)})² + (Δ_+^y Ψ_{ı,}^{(k)})² + β ),
H_{ı−1,}^{(k)} = 1 / √( (Δ_+^x Ψ_{ı−1,}^{(k)})² + (Δ_+^y Ψ_{ı−1,}^{(k)})² + β ),
H_{ı,−1}^{(k)} = 1 / √( (Δ_+^x Ψ_{ı,−1}^{(k)})² + (Δ_+^y Ψ_{ı,−1}^{(k)})² + β ),

equation (3.20) takes the form:

(Ψ_{ı,}^{(k+1)} − Ψ_{ı,}^{(k)}) / Δt = μ δ_ε(Ψ_{ı,}^{(k)}) { (Ψ_{ı+1,}^{(k+1)} − Ψ_{ı,}^{(k+1)}) H_{ı,}^{(k)} − (Ψ_{ı,}^{(k+1)} − Ψ_{ı−1,}^{(k+1)}) H_{ı−1,}^{(k)}
+ (Ψ_{ı,+1}^{(k+1)} − Ψ_{ı,}^{(k+1)}) H_{ı,}^{(k)} − (Ψ_{ı,}^{(k+1)} − Ψ_{ı,−1}^{(k+1)}) H_{ı,−1}^{(k)} } + f_{ı,},

which implies:

Ψ_{ı,}^{(k+1)} = Ψ_{ı,}^{(k)} + μ Δt δ_ε(Ψ_{ı,}^{(k)}) { H_{ı−1,}^{(k)} Ψ_{ı−1,}^{(k+1)} + H_{ı,−1}^{(k)} Ψ_{ı,−1}^{(k+1)}
− ( H_{ı−1,}^{(k)} + H_{ı,−1}^{(k)} + 2H_{ı,}^{(k)} ) Ψ_{ı,}^{(k+1)} + H_{ı,}^{(k)} Ψ_{ı+1,}^{(k+1)} + H_{ı,}^{(k)} Ψ_{ı,+1}^{(k+1)} } + Δt f_{ı,}.   (3.21)

The functionals H_{ı,}, H_{ı−1,} and H_{ı,−1} have been frozen at the k-th iteration, and thus
equation (3.21) becomes a linear system of equations which can be solved by iterative methods.
Although the SI method is unconditionally stable even for large time steps, its main flaw
is its computational cost for large-sized images.
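The frozen-coefficient update (3.21) can be sketched as a fixed-point (Jacobi-type) sweep over the grid; all parameter values and the clamped-index boundary treatment below are illustrative assumptions, not the thesis implementation:

```python
import math

def delta_eps(y, eps=1.0):
    # Regularized Dirac delta used in the semi-implicit update.
    return (1 / math.pi) * (eps / (eps**2 + y**2))

def semi_implicit_sweep(Psi, f, mu=1.0, dt=0.1, beta=1e-8, eps=1.0):
    # One sweep of the frozen-coefficient system (3.21): each pixel is solved
    # for Psi^{k+1}_{i,j} with neighbor values taken from the previous iterate.
    m, n = len(Psi), len(Psi[0])
    def p(i, j):  # clamp indices at the border (reflecting boundary)
        return Psi[min(max(i, 0), m - 1)][min(max(j, 0), n - 1)]
    def H(i, j):  # 1/sqrt((forward x-diff)^2 + (forward y-diff)^2 + beta)
        dx = p(i + 1, j) - p(i, j)
        dy = p(i, j + 1) - p(i, j)
        return 1.0 / math.sqrt(dx * dx + dy * dy + beta)
    new = [row[:] for row in Psi]
    for i in range(m):
        for j in range(n):
            c = mu * dt * delta_eps(Psi[i][j], eps)
            h, hl, hd = H(i, j), H(i - 1, j), H(i, j - 1)
            num = (Psi[i][j]
                   + c * (hl * p(i - 1, j) + hd * p(i, j - 1)
                          + h * p(i + 1, j) + h * p(i, j + 1))
                   + dt * f[i][j])
            new[i][j] = num / (1 + c * (hl + hd + 2 * h))
    return new

Psi = [[0.0] * 5 for _ in range(5)]
Psi[2][2] = 1.0                       # a spike that curvature flow should shrink
f = [[0.0] * 5 for _ in range(5)]
new = semi_implicit_sweep(Psi, f)
```

With f = 0 the sweep acts as a weighted local averaging, pulling the spike toward its neighbors, which is the smoothing (curvature) part of the evolution.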

3.4 The Local Chan-Vese Model (LCV)


X.-F. Wang et al. [49] proposed the LCV model by modifying the CV model for images with
inhomogeneous intensity, adding local statistical terms to the CV model [16]. As a result,
the following energy functional of the LCV model is obtained:

E_ε(Ψ, a, b, ā, b̄) = μ ∫_Ω δ_ε(Ψ)|∇Ψ| dxdy + ∫_Ω (1/2)(|∇Ψ| − 1)² dxdy
+ λ1 ∫_Ω (I − a)² H_ε(Ψ) dxdy + λ2 ∫_Ω (I − b)² (1 − H_ε(Ψ)) dxdy
+ λ1* ∫_Ω (I* − ā)² H_ε(Ψ) dxdy + λ2* ∫_Ω (I* − b̄)² (1 − H_ε(Ψ)) dxdy,   (3.22)

where μ, λ1, λ2, λ1*, λ2* are positive weighting parameters, and I* = Aco ∗ I − I, where Aco
is an averaging convolution operator used to obtain an enhanced version of the image I.
Minimizing E_ε(Ψ, a, b, ā, b̄) in eq (3.22) w.r.t. the mean intensities a, b, ā, b̄ respectively,
keeping the level set function Ψ(x, y) fixed, leads to the following solutions:

a = ∫_Ω I H_ε(Ψ) dxdy / ∫_Ω H_ε(Ψ) dxdy,   b = ∫_Ω I (1 − H_ε(Ψ)) dxdy / ∫_Ω (1 − H_ε(Ψ)) dxdy,

ā = ∫_Ω I* H_ε(Ψ) dxdy / ∫_Ω H_ε(Ψ) dxdy,   b̄ = ∫_Ω I* (1 − H_ε(Ψ)) dxdy / ∫_Ω (1 − H_ε(Ψ)) dxdy,

in order to find the Euler-Lagrange equation, we keep the mean intensities a, b, ā, b̄ fixed and
minimize eq (3.22) with respect to the level set function Ψ using the Gâteaux derivative:

lim_{τ→0} (1/τ) { E_ε(Ψ + τΦ, a, b, ā, b̄) − E_ε(Ψ, a, b, ā, b̄) } = 0,   (3.23)

now, using the same process as in section (3.3.1), we get the following equation:

∂Ψ/∂t = δ_ε(Ψ) [ −λ1(I − a)² − λ1*(I* − ā)² + λ2(I − b)² + λ2*(I* − b̄)² ]
+ μ δ_ε(Ψ) ∇·( ∇Ψ/|∇Ψ| ) + [ ∇²Ψ − ∇·( ∇Ψ/|∇Ψ| ) ],
Ψ(x, y, 0) = Ψ0(x, y),  in Ω.

The LCV model also does well on images having inhomogeneity problems, but its performance
is not satisfactory on images acquired at low frequencies or containing unilluminated objects
or overlapping regions of homogeneous intensities. For this purpose we develop a
coefficient-of-variation based variational model (see section 5.2).

3.5 Active Contour Without Edges (Vector-Valued Case)
This model is an extension of the Chan-Vese model [16]. This model [14] also has some
properties in common with the CV model, such as not depending on the gradient of the
image I; consequently it can detect edges both with and without gradient. The fidelity term
of the model is as follows:

E2(C, a, b) = (1/N) Σ_{ℓ=1}^{N} λ_ℓ^+ ∫_{in(C)} (I_ℓ − a_ℓ)² dxdy + (1/N) Σ_{ℓ=1}^{N} λ_ℓ^− ∫_{out(C)} (I_ℓ − b_ℓ)² dxdy,   (3.24)

where C is a contour, also called the evolving curve, and a = (a1, a2, · · · , aN) and
b = (b1, b2, · · · , bN) are the average intensities inside and outside of the contour
respectively [14]. By adding the regularization term consisting of the length and area
terms, we get the following energy equation:

E(C, a, b) = μ (length(C))^p + ν · area(inside(C)) + (1/N) Σ_{ℓ=1}^{N} λ_ℓ^+ ∫_{in(C)} (I_ℓ − a_ℓ)² dxdy
+ (1/N) Σ_{ℓ=1}^{N} λ_ℓ^− ∫_{out(C)} (I_ℓ − b_ℓ)² dxdy,   (3.25)


where λ_ℓ^+, λ_ℓ^− > 0 and μ, ν ≥ 0 are constant parameters. Just like the Chan-Vese model [16],
this model is also a piecewise constant Mumford-Shah segmentation model [30].

3.5.1 Level Set Formulation:


The level set formulation of this model is like that of the Chan-Vese model [16];
therefore, we refer the reader to section (3.3.1). In the vector-valued case, the inside
and outside average intensities a = (a1, a2, · · · , aN) and b = (b1, b2, · · · , bN) can be found
channel-wise by minimizing the energy functional with respect to the average intensities,
keeping Ψ constant, i.e.,

a_ℓ = ∫_Ω I_ℓ H(Ψ) dxdy / ∫_Ω H(Ψ) dxdy,

where the condition ∫_Ω H(Ψ) dxdy > 0 must be satisfied; otherwise, we reconstruct a
proper level set function once again. Similarly, when ∫_Ω H(−Ψ) dxdy > 0, then b_ℓ can be
found explicitly by the following equation:

b_ℓ = ∫_Ω I_ℓ H(−Ψ) dxdy / ∫_Ω H(−Ψ) dxdy,
as a result the following Euler-Lagrange equation in Ψ is obtained:

μ δ_ε(Ψ) ∇·( ∇Ψ/|∇Ψ| ) − δ_ε(Ψ) ν − δ_ε(Ψ) (1/N) Σ_{ℓ=1}^{N} [ λ_ℓ^+ (I_ℓ − a_ℓ)² − λ_ℓ^− (I_ℓ − b_ℓ)² ] = 0,   (3.26)

keeping ν = 0, the steady state form of (3.26) becomes:

∂Ψ/∂t = δ_ε(Ψ) { μ ∇·( ∇Ψ/|∇Ψ| ) − (1/N) Σ_{ℓ=1}^{N} λ_ℓ^+ (I_ℓ − a_ℓ)² + (1/N) Σ_{ℓ=1}^{N} λ_ℓ^− (I_ℓ − b_ℓ)² }  in Ω,   (3.27)

with initial and boundary conditions:

Ψ(0, x, y) = Ψ0(x, y)  in Ω,
(δ_ε(Ψ)/|∇Ψ|) ∂Ψ/∂n⃗ = 0  on ∂Ω,   (3.28)

where n⃗ is the unit normal on the boundary of Ω.
To solve the above evolution problem (3.27)-(3.28) numerically, we use a finite difference
scheme as used in [16]. The CV vector-valued model works well on images that have homogeneous
regions, but its performance is not adequate on images acquired at low frequencies or containing
unilluminated objects or overlapping regions of homogeneous intensities. For this purpose we
develop a coefficient-of-variation based vector-valued model (see section 5.2).
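The channel-wise means a_ℓ, b_ℓ can be sketched with a sharp (unregularized) Heaviside for simplicity; the two 3×3 channels and the level set below are hypothetical:

```python
def channel_means(channels, Psi):
    # Channel-wise inside/outside means a_l, b_l with the sharp Heaviside:
    # H(Psi) = 1 where Psi > 0 (inside the contour), 0 elsewhere.
    a, b = [], []
    for I in channels:
        ins = [I[i][j] for i in range(len(I))
               for j in range(len(I[0])) if Psi[i][j] > 0]
        out = [I[i][j] for i in range(len(I))
               for j in range(len(I[0])) if Psi[i][j] <= 0]
        a.append(sum(ins) / len(ins))
        b.append(sum(out) / len(out))
    return a, b

# Two channels of a 3x3 image; the object (Psi > 0) is the center pixel.
Psi = [[-1, -1, -1], [-1, 1, -1], [-1, -1, -1]]
I1  = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
I2  = [[1, 1, 1], [1, 5, 1], [1, 1, 1]]
a, b = channel_means([I1, I2], Psi)
```

Each channel contributes its own pair (a_ℓ, b_ℓ), and the data term of (3.27) averages the resulting fitting forces over the N channels.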

Chapter 4

Multigrid Method for Active Contour Vector-Valued Model

In this chapter we present the semi-implicit (SI) and additive operator splitting (AOS)
methods for the discretization of the Euler-Lagrange equation (3.27) of the Chan-Vese
vector-valued model [14]. We also propose a multigrid (MG) method for the said model and
compare the results of MG with the SI and AOS methods.

4.1 Semi-Implicit Method


The non-linear partial diffeential equation (3.27) can be written as follows:

∂Ψ  ∇Ψ 
= µδ (Ψ)∇ · +f (4.1)
∂t |∇Ψ|

Where,
N
1 X n + 2  2 o
f = δ λ` I` − a` + λ−
` I` − b ` (4.2)
N
`=1

Thus, in order to discretize the above equation in $\Psi$, we use a semi-implicit finite difference scheme. Here we take the observed image $I_\ell$ as an $n_1 \times n_2$ array of pixels of size $h_1 \times h_2$, where $h_1 = 1/n_1$ and $h_2 = 1/n_2$. Each pixel represents the average light intensity over a small rectangular portion. Thus the $(\imath, \jmath)$-th grid point is located at $(x_\imath, y_\jmath) = \big((\imath - \tfrac{1}{2})h_1, (\jmath - \tfrac{1}{2})h_2\big)$. Using a finite difference scheme, the discretization of the above equation takes the form:
\[
\frac{\Psi^{(k+1)}_{\imath,\jmath} - \Psi^{(k)}_{\imath,\jmath}}{\Delta t} = \delta_\epsilon\big(\Psi^{(k)}_{\imath,\jmath}\big)\left[\frac{\mu}{h_1^2}\,\Delta^x_-\!\left(\frac{\Delta^x_+\Psi^{(k+1)}_{\imath,\jmath}}{\sqrt{\big(\Delta^x_+\Psi^{(k)}_{\imath,\jmath}/h_1\big)^2 + \big(\Delta^y_+\Psi^{(k)}_{\imath,\jmath}/h_2\big)^2 + \beta}}\right) + \frac{\mu}{h_2^2}\,\Delta^y_-\!\left(\frac{\Delta^y_+\Psi^{(k+1)}_{\imath,\jmath}}{\sqrt{\big(\Delta^x_+\Psi^{(k)}_{\imath,\jmath}/h_1\big)^2 + \big(\Delta^y_+\Psi^{(k)}_{\imath,\jmath}/h_2\big)^2 + \beta}}\right)\right] + f_{\imath,\jmath}, \qquad (4.3)
\]
where the differences $\Delta^x_+$, $\Delta^x_-$, $\Delta^y_+$, $\Delta^y_-$ are given by
\[
\Delta^x_+\Psi^k_{\imath,\jmath} = \Psi^k_{\imath+1,\jmath} - \Psi^k_{\imath,\jmath}, \qquad \Delta^x_-\Psi^k_{\imath,\jmath} = \Psi^k_{\imath,\jmath} - \Psi^k_{\imath-1,\jmath},
\]
\[
\Delta^y_+\Psi^k_{\imath,\jmath} = \Psi^k_{\imath,\jmath+1} - \Psi^k_{\imath,\jmath}, \qquad \Delta^y_-\Psi^k_{\imath,\jmath} = \Psi^k_{\imath,\jmath} - \Psi^k_{\imath,\jmath-1}. \qquad (4.4)
\]
Putting the step sizes $h_1 = h_2 = 1$, equation (4.3) takes the form:
\[
\frac{\Psi^{(k+1)}_{\imath,\jmath} - \Psi^{(k)}_{\imath,\jmath}}{\Delta t} = \mu\,\delta_\epsilon\big(\Psi^{(k)}_{\imath,\jmath}\big)\left[\Delta^x_-\!\left(\frac{\Psi^{(k+1)}_{\imath+1,\jmath} - \Psi^{(k+1)}_{\imath,\jmath}}{\sqrt{\big(\Delta^x_+\Psi^{(k)}_{\imath,\jmath}\big)^2 + \big(\Delta^y_+\Psi^{(k)}_{\imath,\jmath}\big)^2 + \beta}}\right) + \Delta^y_-\!\left(\frac{\Psi^{(k+1)}_{\imath,\jmath+1} - \Psi^{(k+1)}_{\imath,\jmath}}{\sqrt{\big(\Delta^x_+\Psi^{(k)}_{\imath,\jmath}\big)^2 + \big(\Delta^y_+\Psi^{(k)}_{\imath,\jmath}\big)^2 + \beta}}\right)\right] + f_{\imath,\jmath}. \qquad (4.5)
\]
Using the following functionals:
\[
H^{(k)}_{\imath,\jmath} = \frac{1}{\sqrt{\big(\Delta^x_+\Psi^{(k)}_{\imath,\jmath}\big)^2 + \big(\Delta^y_+\Psi^{(k)}_{\imath,\jmath}\big)^2 + \beta}}, \qquad
H^{(k)}_{\imath-1,\jmath} = \frac{1}{\sqrt{\big(\Delta^x_+\Psi^{(k)}_{\imath-1,\jmath}\big)^2 + \big(\Delta^y_+\Psi^{(k)}_{\imath-1,\jmath}\big)^2 + \beta}},
\]
\[
H^{(k)}_{\imath,\jmath-1} = \frac{1}{\sqrt{\big(\Delta^x_+\Psi^{(k)}_{\imath,\jmath-1}\big)^2 + \big(\Delta^y_+\Psi^{(k)}_{\imath,\jmath-1}\big)^2 + \beta}},
\]

equation (4.5) takes the form:
\[
\frac{\Psi^{(k+1)}_{\imath,\jmath} - \Psi^{(k)}_{\imath,\jmath}}{\Delta t} = \mu\,\delta_\epsilon\big(\Psi^{(k)}_{\imath,\jmath}\big)\Big\{\big(\Psi^{(k+1)}_{\imath+1,\jmath} - \Psi^{(k+1)}_{\imath,\jmath}\big)H^{(k)}_{\imath,\jmath} - \big(\Psi^{(k+1)}_{\imath,\jmath} - \Psi^{(k+1)}_{\imath-1,\jmath}\big)H^{(k)}_{\imath-1,\jmath} + \big(\Psi^{(k+1)}_{\imath,\jmath+1} - \Psi^{(k+1)}_{\imath,\jmath}\big)H^{(k)}_{\imath,\jmath} - \big(\Psi^{(k+1)}_{\imath,\jmath} - \Psi^{(k+1)}_{\imath,\jmath-1}\big)H^{(k)}_{\imath,\jmath-1}\Big\} + f^{(k)}_{\imath,\jmath},
\]
which implies:
\[
\Psi^{(k+1)}_{\imath,\jmath} = \Psi^{(k)}_{\imath,\jmath} + \mu\,\Delta t\,\delta_\epsilon\big(\Psi^{(k)}_{\imath,\jmath}\big)\Big\{H^{(k)}_{\imath-1,\jmath}\Psi^{(k+1)}_{\imath-1,\jmath} + H^{(k)}_{\imath,\jmath-1}\Psi^{(k+1)}_{\imath,\jmath-1} - \big(H^{(k)}_{\imath-1,\jmath} + H^{(k)}_{\imath,\jmath-1} + 2H^{(k)}_{\imath,\jmath}\big)\Psi^{(k+1)}_{\imath,\jmath} + H^{(k)}_{\imath,\jmath}\Psi^{(k+1)}_{\imath+1,\jmath} + H^{(k)}_{\imath,\jmath}\Psi^{(k+1)}_{\imath,\jmath+1}\Big\} + \Delta t\,f^{(k)}_{\imath,\jmath}. \qquad (4.6)
\]

The functionals $H_{\imath,\jmath}$, $H_{\imath-1,\jmath}$ and $H_{\imath,\jmath-1}$ have been frozen at the $k$-th iteration, and thus equation (4.6) becomes a linear system of equations which can be solved by iterative methods. Although the semi-implicit method is unconditionally stable even for large time steps, its main flaw is its computational cost for large images.
Thus we develop an Additive Operator Splitting (AOS) method, as done in [4, 25, 51, 12, 26, 23], to solve the PDE (3.27).

4.2 Additive Operator Splitting (AOS) Method

Consider equation (3.27) in the form:
\[
\frac{\partial\Psi}{\partial t} = \mu\,\delta_\epsilon(\Psi)\,\nabla\cdot\big(G\,\nabla\Psi\big) + f = \mu\,\delta_\epsilon(\Psi)\Big\{\partial_x\big(G\,\partial_x\Psi\big) + \partial_y\big(G\,\partial_y\Psi\big)\Big\} + f, \quad \text{where } G = \frac{1}{|\nabla\Psi|}, \qquad (4.7)
\]
and $f$ is as given in equation (4.2).


The AOS scheme [4, 25, 51, 12, 26, 23] splits the $m$-dimensional spatial operator into $m$ one-dimensional operators, i.e., the $m$-dimensional operator can be considered as a sum of $m$ one-dimensional discretizations.
Thus the above equation (4.7) can be discretized as:
\[
\frac{\Psi^{k+1}_{\jmath} - \Psi^{k}_{\jmath}}{\Delta t} = \mu\,\delta_\epsilon(\Psi^{k}_{\jmath})\left[\frac{F^{k}_{\jmath} + F^{k}_{\jmath-1}}{2}\,\Psi^{k+1}_{\jmath-1} - \frac{F^{k}_{\jmath+1} + 2F^{k}_{\jmath} + F^{k}_{\jmath-1}}{2}\,\Psi^{k+1}_{\jmath} + \frac{F^{k}_{\jmath} + F^{k}_{\jmath+1}}{2}\,\Psi^{k+1}_{\jmath+1}\right] + f_{\jmath},
\]
\[
\Rightarrow\ \Psi^{k+1}_{\jmath} = \Psi^{k}_{\jmath} + \Delta t\left(F_1\,\Psi^{k+1}_{\jmath-1} - F\,\Psi^{k+1}_{\jmath} + F_2\,\Psi^{k+1}_{\jmath+1}\right) + \Delta t\,f_{\jmath}, \qquad (4.8)
\]

Where,
Fk +F−1
k k +2F k +F k
F+1
F1 = µδ (Ψk ) 2 , F = µδ (Ψk ) 2
 −1
,
Fk +F+1
k (4.9)
F2 = µδ (Ψk ) 2 ,
Thus the above equation (4.8) takes the form:
\[
-\Delta t\,F_1\,\Psi^{k+1}_{\jmath-1} + \big(1 + \Delta t\,F\big)\Psi^{k+1}_{\jmath} - \Delta t\,F_2\,\Psi^{k+1}_{\jmath+1} = \Psi^{k}_{\jmath} + \Delta t\,f_{\jmath}. \qquad (4.10)
\]

The above equation (4.10) can be written in matrix form as:
\[
A_p(\Psi^k)\,\Psi^{k+1}_p = \Psi^k + f^k \quad \text{for } p = 1, 2,
\]
where the system of equations (4.10) is solved in one direction (say the $x$-direction) and $A_p(\Psi^k)$, $p = 1, 2$, is a tri-diagonal matrix. After solving the Euler-Lagrange equations in the $y$-direction as well, we take the average of the two systems and get the next approximation to the exact solution:
\[
\Psi^{k+1} = \frac{1}{2}\sum_{p=1}^{2}\Psi^{k+1}_p. \qquad (4.11)
\]

The AOS method is computationally less expensive than the corresponding SI method, because in the AOS method we avoid the block tri-diagonal matrix that arises in the SI method [16, 14].
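Each one-dimensional solve in (4.10) is a tridiagonal system, which the Thomas algorithm handles in $O(n)$. The sketch below is a generic, non-authoritative implementation (the function name and array layout are our own); an AOS step would call it on every row, then on every column, and average the two results as in (4.11).

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system with sub-, main- and super-diagonals
    lower, diag, upper (length n; lower[0] and upper[-1] are unused)."""
    n = len(diag)
    c = np.zeros(n)   # modified super-diagonal
    d = np.zeros(n)   # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):                       # forward elimination
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```

The linear cost per line is what makes AOS cheap relative to the block system of the SI method.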

4.3 Multi-Grid Algorithm for the Non-linear PDE of the CV Vector-Valued Model

We describe here a multi-grid method for the CV vector-valued model, applied to the non-linear PDE (3.26) without the artificial time step that was introduced in (3.27) for the time marching schemes. Without the time variable $t$, the approximation at any pixel $(i, j)$ can be denoted by $\Psi_{ij} = \Psi(x_i, y_j)$, and the elliptic PDE given in equation (3.26) can be written as:
\[
\mu\,\mathrm{div}\left(\frac{\nabla\Psi}{|\nabla\Psi|}\right) - \frac{1}{N}\sum_{\ell=1}^{N}\left\{\lambda_\ell^{+}(I_\ell - a_\ell)^2 - \lambda_\ell^{-}(I_\ell - b_\ell)^2\right\} = 0. \qquad (4.12)
\]

Equation (4.12) is the Euler-Lagrange equation of the following functional:
\[
\mu\int_\Omega|\nabla\Psi|\,dx\,dy + \frac{1}{N}\sum_{\ell=1}^{N}\int_\Omega\lambda_\ell^{+}(I_\ell - a_\ell)^2\,\Psi\,dx\,dy - \frac{1}{N}\sum_{\ell=1}^{N}\int_\Omega\lambda_\ell^{-}(I_\ell - b_\ell)^2\,\Psi\,dx\,dy. \qquad (4.13)
\]
Equations (4.12) and (4.13) possess the same stationary points [10, 13]. Discretizing eq (4.12) at a grid point $(\imath, \jmath)$ gives:
\[
\frac{\mu}{h_1}\,\Delta^x_+\!\left(\frac{\Delta^x_-\Psi_{\imath,\jmath}/h_1}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath}/h_1)^2 + (\Delta^y_-\Psi_{\imath,\jmath}/h_2)^2 + \beta}}\right) + \frac{\mu}{h_2}\,\Delta^y_+\!\left(\frac{\Delta^y_-\Psi_{\imath,\jmath}/h_2}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath}/h_1)^2 + (\Delta^y_-\Psi_{\imath,\jmath}/h_2)^2 + \beta}}\right) - \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\big(I_\ell(\imath,\jmath) - a_\ell\big)^2 + \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\big(I_\ell(\imath,\jmath) - b_\ell\big)^2 = 0. \qquad (4.14)
\]

We use a positive parameter $\beta$ in the denominator to avoid division by zero.

Note 1 We have used different values of $\beta$ in the interval $(0, 1]$ in our experiments; the choice has no noticeable effect on the final result.

Equation (4.14) can be written as:
\[
\mu_1\left\{\Delta^x_+\!\left(\frac{\Delta^x_-\Psi_{\imath,\jmath}}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath,\jmath})^2 + \beta_1}}\right) + \gamma_1^2\,\Delta^y_+\!\left(\frac{\Delta^y_-\Psi_{\imath,\jmath}}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath,\jmath})^2 + \beta_1}}\right)\right\} - \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\big(I_\ell(\imath,\jmath) - a_\ell\big)^2 + \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\big(I_\ell(\imath,\jmath) - b_\ell\big)^2 = 0,
\]
\[
\Rightarrow\ \mu_1\left\{\Delta^x_+\!\left(\frac{\Delta^x_-\Psi_{\imath,\jmath}}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath,\jmath})^2 + \beta_1}}\right) + \gamma_1^2\,\Delta^y_+\!\left(\frac{\Delta^y_-\Psi_{\imath,\jmath}}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath,\jmath})^2 + \beta_1}}\right)\right\} = \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\big(I_\ell(\imath,\jmath) - a_\ell\big)^2 - \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\big(I_\ell(\imath,\jmath) - b_\ell\big)^2, \qquad (4.15)
\]

where $\mu_1 = \dfrac{\mu}{h_1}$, $\beta_1 = h_1^2\beta$ and $\gamma_1 = \dfrac{h_1}{h_2}$, using the Neumann boundary conditions:
\[
\Psi_{\imath,0} = \Psi_{\imath,1}, \quad \Psi_{\imath,n_2+1} = \Psi_{\imath,n_2} \quad \text{for } \imath = 1, 2, \dots, n_1,
\]
\[
\Psi_{0,\jmath} = \Psi_{1,\jmath}, \quad \Psi_{n_1+1,\jmath} = \Psi_{n_1,\jmath} \quad \text{for } \jmath = 1, 2, \dots, n_2, \qquad (4.16)
\]
where $\Psi_{\imath,\jmath} \in [0, 1]$.

The left hand side of equation (4.14) is like that of the TV regularization denoising model [37]. In order to avoid the gradient being zero, we use a small positive parameter $\beta$ as in [37, 47].

4.3.1 Full Approximation Scheme (FAS) of the Multi-Grid Algorithm

Here we discuss the three main parts of the non-linear multi-grid method known as FAS [12, 17, 21, 43]. We write the non-linear system of equations (4.14) and (4.16) in the form:
\[
N^h(\Psi^h + e^h) - N^h(\Psi^h) = r^h,
\]
keeping $h_1 = h_2 = h$. Here $\Psi^h$ and $f^h$ are considered as grid functions on the rectangular domain $\Omega^h$ of size $n_1 \times n_2$ with spacing $(h_1, h_2) = (h, h)$. By coarsening $\Omega^h$, we obtain a rectangular domain $\Omega^{2h}$ of size $\frac{n_1}{2} \times \frac{n_2}{2}$. Let $\psi^h$ be an approximation to the solution $\Psi^h$; then the error equation takes the form:
\[
\Psi^h = \psi^h + e^h, \qquad (4.17)
\]
writing eq (4.17) in the form:
\[
\Psi^h - \psi^h = e^h \ \Rightarrow\ N^h(\Psi^h) - N^h(\psi^h) = N^h(e^h) \ \Rightarrow\ f^h - N^h(\psi^h) = r^h, \qquad (4.18)
\]

We use iterative methods to smooth the error on the fine grid. After smoothing the error on the fine grid, we move to the coarse grid using the restriction operator. We solve the residual equation with an iterative method to obtain an approximation of the error on the coarse grid (compared to the fine grid, the solution on the coarse grid is less expensive); we then return to the fine grid using the interpolation operator to correct the approximation $\psi^h$. This is said to be a two-grid method.
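To see what the FAS formulation buys over this plain correction scheme, one can check numerically that for a linear operator the two coincide. The sketch below is a toy one-dimensional setting of our own (the operators `Nh`, `R` and `N2h` are made up for the demonstration and are not the 2D image operators of this chapter).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# a linear, SPD fine-grid operator (1D Poisson-like stencil)
Nh = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
# simple full-weighting restriction and a Galerkin-style coarse operator
R = np.zeros((n // 2, n))
for i in range(n // 2):
    R[i, 2 * i:2 * i + 2] = 0.5
N2h = 2 * R @ Nh @ R.T

psi = rng.random(n)       # current fine-grid approximation
fh = rng.random(n)        # fine-grid right-hand side
# FAS coarse equation: N2h(u2h) = R(fh - Nh psi) + N2h(R psi)
u2h = np.linalg.solve(N2h, R @ (fh - Nh @ psi) + N2h @ (R @ psi))
# classical correction scheme: N2h(e2h) = R r_h
e2h = np.linalg.solve(N2h, R @ (fh - Nh @ psi))
```

For a linear operator, `u2h - R @ psi` equals `e2h` exactly; the FAS form only matters when $N^h$ is non-linear, as it is here.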
We now discuss the restriction and interpolation operators on the rectangular domains $\Omega^h$ and $\Omega^{2h}$.
Restriction Operator:
\[
I_h^{2h}\psi^h = \psi^{2h},
\]
where
\[
\psi^{2h}_{\imath,\jmath} = \frac{1}{4}\left(\psi^h_{2\imath-1,2\jmath-1} + \psi^h_{2\imath-1,2\jmath} + \psi^h_{2\imath,2\jmath-1} + \psi^h_{2\imath,2\jmath}\right), \quad 1 \le \imath \le \frac{n_1}{2}, \ 1 \le \jmath \le \frac{n_2}{2},
\]
is the full weighting restriction operator [17, 43].
Interpolation Operator:
\[
I_{2h}^{h}\psi^{2h} = \psi^h,
\]
where
\[
\psi^h_{2\imath,2\jmath} = \tfrac{1}{16}\left(9\psi^{2h}_{\imath,\jmath} + 3\psi^{2h}_{\imath+1,\jmath} + 3\psi^{2h}_{\imath,\jmath+1} + \psi^{2h}_{\imath+1,\jmath+1}\right),
\]
\[
\psi^h_{2\imath-1,2\jmath} = \tfrac{1}{16}\left(9\psi^{2h}_{\imath,\jmath} + 3\psi^{2h}_{\imath-1,\jmath} + 3\psi^{2h}_{\imath,\jmath+1} + \psi^{2h}_{\imath-1,\jmath+1}\right),
\]
\[
\psi^h_{2\imath,2\jmath-1} = \tfrac{1}{16}\left(9\psi^{2h}_{\imath,\jmath} + 3\psi^{2h}_{\imath+1,\jmath} + 3\psi^{2h}_{\imath,\jmath-1} + \psi^{2h}_{\imath+1,\jmath-1}\right),
\]
\[
\psi^h_{2\imath-1,2\jmath-1} = \tfrac{1}{16}\left(9\psi^{2h}_{\imath,\jmath} + 3\psi^{2h}_{\imath-1,\jmath} + 3\psi^{2h}_{\imath,\jmath-1} + \psi^{2h}_{\imath-1,\jmath-1}\right),
\]
for $1 \le \imath \le \frac{n_1}{2}$, $1 \le \jmath \le \frac{n_2}{2}$. This is the bilinear interpolation operator [17, 43].
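Both transfer operators are straightforward to code. The sketch below is our own rendering with 0-based indexing; out-of-range coarse neighbours are clamped to the boundary, a choice the formulas above leave open.

```python
import numpy as np

def restrict(fine):
    """Full-weighting restriction: average each 2x2 block of the fine grid."""
    return 0.25 * (fine[0::2, 0::2] + fine[0::2, 1::2]
                   + fine[1::2, 0::2] + fine[1::2, 1::2])

def interpolate(coarse):
    """Bilinear interpolation with the 9/16, 3/16, 3/16, 1/16 weights."""
    n1, n2 = coarse.shape
    f = np.zeros((2 * n1, 2 * n2))
    clamp = lambda k, n: min(max(k, 0), n - 1)
    for i in range(n1):
        for j in range(n2):
            # (di, dj) selects the diagonal neighbour; (fi, fj) the fine target
            for di, dj, fi, fj in ((1, 1, 2 * i + 1, 2 * j + 1),
                                   (-1, 1, 2 * i, 2 * j + 1),
                                   (1, -1, 2 * i + 1, 2 * j),
                                   (-1, -1, 2 * i, 2 * j)):
                f[fi, fj] = (9 * coarse[i, j]
                             + 3 * coarse[clamp(i + di, n1), j]
                             + 3 * coarse[i, clamp(j + dj, n2)]
                             + coarse[clamp(i + di, n1), clamp(j + dj, n2)]) / 16
    return f
```

Since the four weights sum to one, both operators reproduce constants exactly, a basic sanity check for any transfer operator.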
We now discuss the smoother, which is the most basic part of the multi-grid algorithm. We first discuss the non-linear (local) smoother.

4.3.2 Local Smoother

In this smoother we locally linearize the system of non-linear equations (4.14). We compute $D(\Psi)$ (as given in (4.20)) locally from the previous iterate at each grid point $(\imath, \jmath)$. Keeping it fixed, the system of equations becomes linear, and we apply the Gauss-Seidel method to smooth the error. Equation (4.14) can be written as:
\[
\mu_1\left\{\frac{\Delta^x_-\Psi_{\imath+1,\jmath}}{\sqrt{(\Delta^x_-\Psi_{\imath+1,\jmath})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath+1,\jmath})^2 + \beta_1}} - \frac{\Delta^x_-\Psi_{\imath,\jmath}}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath,\jmath})^2 + \beta_1}} + \gamma_1^2\left(\frac{\Delta^y_-\Psi_{\imath,\jmath+1}}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath+1})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath,\jmath+1})^2 + \beta_1}} - \frac{\Delta^y_-\Psi_{\imath,\jmath}}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath,\jmath})^2 + \beta_1}}\right)\right\} = \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\big(I_\ell(\imath,\jmath) - a_\ell\big)^2 - \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\big(I_\ell(\imath,\jmath) - b_\ell\big)^2, \qquad (4.19)
\]

where
\[
D(\Psi)_{\imath,\jmath} = \frac{1}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath,\jmath})^2 + \beta_1}}, \qquad
D(\Psi)_{\imath+1,\jmath} = \frac{1}{\sqrt{(\Delta^x_-\Psi_{\imath+1,\jmath})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath+1,\jmath})^2 + \beta_1}}, \qquad (4.20)
\]
\[
D(\Psi)_{\imath,\jmath+1} = \frac{1}{\sqrt{(\Delta^x_-\Psi_{\imath,\jmath+1})^2 + \gamma_1^2(\Delta^y_-\Psi_{\imath,\jmath+1})^2 + \beta_1}},
\]
are the denominators, which are frozen at the previous iteration in order to make the problem linear. Using (4.20), eq (4.19) takes the following form:

D(Ψ)ı+1, 4x− Ψı+1, − D(Ψ)ı, 4x− Ψı, + γ12 D(Ψ)ı,+1 4y− Ψı,+1 − D(Ψ)ı, 4y− Ψı,
  
µ1
N N
1 X + 2 1 X −
= − λ` (I` (ı, ) − a` ) + λ` (I` (ı, ) − b` )2 ,
N N
`=1 `=1

implies that

µ1 (D(Ψ)ı+1, (Ψı+1, − Ψı, ) − D(Ψ)ı, (Ψı, − Ψı−1, )) +
γ12 (D(Ψ)ı,+1 (Ψı,+1 − Ψı, ) − D(Ψ)ı, (Ψı, − Ψı,−1 )) = fı, , (4.21)

We compute the coefficients $D(\Psi)_{\imath,\jmath}$, $D(\Psi)_{\imath+1,\jmath}$ and $D(\Psi)_{\imath,\jmath+1}$, each containing $\Psi_{\imath,\jmath}$, at the previous iteration in a freezing process. Let $\check{\Psi}$ be the next approximation to the solution $\Psi$. Substituting the value of $\check{\Psi}$ at each grid point except $(\imath, \jmath)$ in eq (4.21), a linear equation of the following form is obtained:
\[
\mu_1\Big\{D(\check{\Psi})_{\imath+1,\jmath}(\check{\Psi}_{\imath+1,\jmath} - \Psi_{\imath,\jmath}) - D(\check{\Psi})_{\imath,\jmath}(\Psi_{\imath,\jmath} - \check{\Psi}_{\imath-1,\jmath}) + \gamma_1^2\big(D(\check{\Psi})_{\imath,\jmath+1}(\check{\Psi}_{\imath,\jmath+1} - \Psi_{\imath,\jmath}) - D(\check{\Psi})_{\imath,\jmath}(\Psi_{\imath,\jmath} - \check{\Psi}_{\imath,\jmath-1})\big)\Big\} = f_{\imath,\jmath}. \qquad (4.22)
\]
In this algorithm we solve eq (4.22) for $\Psi_{\imath,\jmath}$ in order to update the approximation at each pixel.

Algorithm 1 (Algorithm for the local smoother)

$\psi^h \leftarrow$ Smoother1$(\psi^h, \breve{f}^h, maxit, tol)$
for $\imath = 1 : n_1$
for $\jmath = 1 : n_2$
for $itr = 1 : maxit$
$\check{\psi}^h \leftarrow \psi^h$
\[
\psi_{\imath,\jmath} = \frac{\mu_1\big(D(\check{\psi}^h)_{\imath+1,\jmath}\,\check{\psi}^h_{\imath+1,\jmath} + D(\check{\psi}^h)_{\imath,\jmath}\,\check{\psi}^h_{\imath-1,\jmath}\big) + \mu_1\gamma_1^2\big(D(\check{\psi}^h)_{\imath,\jmath+1}\,\check{\psi}^h_{\imath,\jmath+1} + D(\check{\psi}^h)_{\imath,\jmath}\,\check{\psi}^h_{\imath,\jmath-1}\big) - \breve{f}_{\imath,\jmath}}{\mu_1\big(D(\check{\psi}^h)_{\imath+1,\jmath} + D(\check{\psi}^h)_{\imath,\jmath}\big) + \mu_1\gamma_1^2\big(D(\check{\psi}^h)_{\imath,\jmath+1} + D(\check{\psi}^h)_{\imath,\jmath}\big)}
\]
if $|\psi_{\imath,\jmath} - \check{\psi}_{\imath,\jmath}| < tol$ then stop
end
end
end

4.3.3 Global Smoother

Here we discuss a global smoother, as discussed in [39] for various image models. In this method the non-linear system of equations is linearized globally by computing $D(\Psi)$ once at each smoothing step on every grid point $(\imath, \jmath)$. The Gauss-Seidel method is then applied to the linearized system of equations. The algorithm of the global smoother is as follows:

Algorithm 2 (Algorithm for the global smoother)

$\psi^h \leftarrow$ Smoother2$(\psi^h, \breve{f}^h, maxit, tol)$
for $\imath = 1 : n_1$
for $\jmath = 1 : n_2$
$D(\psi^h)_{\imath,\jmath} = \big((\Delta^x_-\psi_{\imath,\jmath})^2 + \gamma_1^2(\Delta^y_-\psi_{\imath,\jmath})^2 + \beta_1\big)^{-\frac{1}{2}}$
end
end
$\Phi^h = \psi^h$
for $itr = 1 : maxit$
for $\imath = 1 : n_1$
for $\jmath = 1 : n_2$
$\check{\Phi}^h \leftarrow \Phi^h$
\[
\Phi_{\imath,\jmath} = \frac{\mu_1\big(D(\psi^h)_{\imath+1,\jmath}\,\check{\Phi}^h_{\imath+1,\jmath} + D(\psi^h)_{\imath,\jmath}\,\check{\Phi}^h_{\imath-1,\jmath}\big) + \mu_1\gamma_1^2\big(D(\psi^h)_{\imath,\jmath+1}\,\check{\Phi}^h_{\imath,\jmath+1} + D(\psi^h)_{\imath,\jmath}\,\check{\Phi}^h_{\imath,\jmath-1}\big) - \breve{f}_{\imath,\jmath}}{\mu_1\big(D(\psi^h)_{\imath+1,\jmath} + D(\psi^h)_{\imath,\jmath}\big) + \mu_1\gamma_1^2\big(D(\psi^h)_{\imath,\jmath+1} + D(\psi^h)_{\imath,\jmath}\big)}
\]
end
end
end
$\psi^h \leftarrow \Phi^h$

In the global smoother, we update the coefficients (4.20) globally at the start of the smoothing step and store them for use in the relaxation.
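A hedged NumPy rendering of the global smoother follows. The axis convention (axis 0 as the $\imath$ direction), the Neumann borders via index clamping, and all parameter defaults are assumptions of ours, not values fixed by the thesis.

```python
import numpy as np

def global_smoother(psi, f, mu1=1.0, gamma1=1.0, beta1=1e-2, sweeps=1):
    """Lexicographic Gauss-Seidel sweeps for (4.21)-(4.22) with the
    coefficients D of (4.20) frozen once at the start (global smoother)."""
    n1, n2 = psi.shape
    g2 = gamma1 ** 2
    # frozen coefficients: backward differences, clamped at the border
    dxm = psi - psi[np.r_[0, 0:n1 - 1], :]
    dym = psi - psi[:, np.r_[0, 0:n2 - 1]]
    D = 1.0 / np.sqrt(dxm ** 2 + g2 * dym ** 2 + beta1)
    phi = psi.astype(float).copy()
    for _ in range(sweeps):
        for i in range(n1):
            for j in range(n2):
                ip, im = min(i + 1, n1 - 1), max(i - 1, 0)
                jp, jm = min(j + 1, n2 - 1), max(j - 1, 0)
                num = (mu1 * (D[ip, j] * phi[ip, j] + D[i, j] * phi[im, j])
                       + mu1 * g2 * (D[i, jp] * phi[i, jp] + D[i, j] * phi[i, jm])
                       - f[i, j])
                den = (mu1 * (D[ip, j] + D[i, j])
                       + mu1 * g2 * (D[i, jp] + D[i, j]))
                phi[i, j] = num / den
    return phi
```

With $f = 0$ each update is a weighted average of neighbouring values, so the sweep damps oscillatory error, which is exactly the role of a smoother inside the multi-grid cycle.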

4.3.4 Multi-Grid Algorithm

In order to solve eq (4.15), we use the multi-grid algorithm given below.

Algorithm 3 (Multi-Grid Algorithm)

$\nu_1$ denotes the number of pre-smoothing steps on each level,
$\nu_2$ denotes the number of post-smoothing steps on each level,
$\varpi$ denotes the number of MG cycles on each level ($\varpi = 1, 2$ for V- and W-cycles respectively),
$rr$ = relative residual.

FAS MG Cycle:
$\psi^h \leftarrow$ FASCycle$(\psi^h, f^h, itr, \nu_1, \nu_2, \varpi, tol)$
$\psi_0 = \psi^h$
1. If we are on the coarsest grid, solve eq (4.22) by a time marching scheme; otherwise apply the smoother:
$\psi^h \leftarrow$ Smoother$(\psi^h, f^h, itr, \nu_1)$ (pre-smoothing)
2. Restriction:
$\psi^{2h} = I_h^{2h}\psi^h$,
$f^{2h} = I_h^{2h}\big(f^h - N^h(\psi^h)\big) + N^{2h}(\psi^{2h})$
3. Solve the coarse grid problem $N^{2h}(\Psi^{2h}) = f^{2h}$ by $\varpi$ recursive applications of FASCycle.
4. Correct the fine grid approximation by interpolation:
$\psi^h \leftarrow \psi^h + I_{2h}^{h}\big(\Psi^{2h} - \psi^{2h}\big)$
5. $\psi^h \leftarrow$ Smoother$(\psi^h, f^h, itr, \nu_2)$ (post-smoothing)
6. Stop if $rr = \dfrac{\|\psi^h - \psi_0\|_2}{\|\psi_0\|_2} < tol$.
Image Size      SI Method             AOS Method            MG Method
                Itr     CPU           Itr     CPU           Cycles  CPU
256 x 256       150     408.42        125     39.02         2       13.18
512 x 512       168     4016.75       135     160.18        2       24.04
1024 x 1024     194     26767.43      195     938.00        2       69.23
2048 x 2048     --      --            500     9948.70       2       261.11
4096 x 4096     --      --            --      --            2       1086.83

Table 4.1: Comparison of the SI, AOS and Multi-Grid methods on a real image using the CV vector-valued model, with respect to the number of iterations and CPU time in seconds.

4.4 Conclusion

In this chapter, we propose a Multi-Grid method for solving the Chan-Vese vector-valued model [14]. The method gives the best results in terms of efficiency in edge detection and CPU time compared to the SI and AOS methods. The method is also effective for images of large sizes, where the SI and AOS methods fail to obtain the desired result (see Table 4.1).

Figure 4.1: The results of the SI, AOS and MG methods are shown in rows 1, 2 and 3 respectively, whereas the results of channels 1, 2, 3 and the segmented result of the three channels are shown in columns 1, 2, 3 and 4 respectively.


Figure 4.2: The results of the SI, AOS and MG methods are shown in columns 1, 2 and 3 respectively, whereas the results of channels 1, 2, 3, 4 and the recovered object are shown in rows 1, 2, 3, 4 and 5 respectively.

Figure 4.3: The results of the SI, AOS and MG methods (after 194 iterations, 195 iterations and 2 cycles respectively) are shown in rows 1, 2 and 3 respectively, whereas the initial contour, the intermediate result, and the segmented result are shown in columns 1, 2 and 3 respectively.

Chapter 5

Coefficient of Variation based Variational Model

In this chapter we propose a new variational model that has a coefficient of variation based fidelity term together with a local statistical function, unlike the existing models [16, 49]. Section 5.3 shows its superior results in terms of detection.

5.1 Introduction

Image segmentation plays a key role in the applications of image processing and computer vision. The aim of image segmentation is to divide an image into foreground and background; in the end each pixel of the image belongs to one class, i.e., either foreground or background. In this chapter we discuss energy functionals of variational models. An image segmentation functional usually consists of two parts, i.e., an internal energy functional and an external energy functional. The job of the external energy functional is to attract the active contour to the edges of the object, while the job of the internal energy functional is to keep the contour smooth. In most variational models [5, 16, 49] the fidelity term is variance based, while in this chapter we propose a model whose fidelity term is based on the coefficient of variation. Experimental results also show that the performance of the model is better than that of the existing Chan-Vese vector-valued model. The Chan-Vese model (both scalar and vector-valued) works well on images that are free of inhomogeneity problems, because for homogeneous regions the average inside and outside intensities, i.e., $a$, $\bar{a}$ and $b$, $\bar{b}$ respectively, approximate $I(x, y)$ well (see sections 3.3 and 3.5). However, these models become weak when there are unilluminated objects, low contrast images, overlapping homogeneous regions, or images with low frequencies.

5.2 The Coefficient of Variation based Vector-Valued Model (CoVVV)

In almost all segmentation models [5, 16, 48, 49] the variance is used in the fidelity or fitting term, but in some cases the coefficient of variation ($CoV$) is a better choice than the variance. As we see in eq (5.1), the value of the coefficient of variation is smaller in uniform regions than in regions where edges of the object are located [27, 40]. Thus a smaller value indicates that the pixel lies in a uniform region, while a larger value indicates that the pixel is about to land on an edge of the object. The attributes of $CoV$ [6, 27, 40] show that it can also serve as a good region descriptor. Before defining $CoV$, we first define the variance. Denoting the image intensity at a point $(\imath, \jmath)$ by $I_{\imath,\jmath}$, the variance is defined as follows:

\[
\mathrm{Var}(I) = \frac{1}{N}\sum_{\imath,\jmath}\big(I_{\imath,\jmath} - \mathrm{Mean}(I)\big)^2,
\]
where $\mathrm{Mean}(I)$ denotes the mean intensity of the given image $I$. The variance appears as the fitting term in many variational models [5, 16, 49]. The coefficient of variation $CoV$ is defined as
\[
CoV^2 = \frac{\mathrm{Var}(I)}{\big(\mathrm{Mean}(I)\big)^2}. \qquad (5.1)
\]
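The behaviour claimed above is easy to verify numerically. In the sketch below (toy patches of our own making), the squared $CoV$ of (5.1) vanishes on a homogeneous patch and is clearly positive on a patch containing an edge.

```python
import numpy as np

def cov_squared(region):
    """Squared coefficient of variation (5.1) of the pixel values of a region."""
    m = region.mean()
    return region.var() / m ** 2

uniform = np.full((8, 8), 100.0)       # homogeneous patch: CoV^2 = 0
edge = np.full((8, 8), 100.0)
edge[:, 4:] = 200.0                    # patch containing an edge: CoV^2 > 0
```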

Using the concept of the coefficient of variation $CoV$ [1, 27], the fidelity term of our proposed segmentation model takes the following form:
\[
E_1(C, a, b) = \int_{in(C)}\frac{1}{N}\sum_{\ell=1}^{N}\frac{(I_\ell - a_\ell)^2}{a_\ell^2}\,dx\,dy + \int_{out(C)}\frac{1}{N}\sum_{\ell=1}^{N}\frac{(I_\ell - b_\ell)^2}{b_\ell^2}\,dx\,dy, \qquad (5.2)
\]
where the unknown quantity $C$ is an evolving curve, and $a = (a_1, a_2, \dots, a_N)$ and $b = (b_1, b_2, \dots, b_N)$ are the mean intensities of the image inside and outside the contour $C$ respectively.
For local information, we use the local fidelity term proposed in [49]:
\[
E_2(C, \bar{a}, \bar{b}) = \int_{in(C)}\frac{1}{N}\sum_{\ell=1}^{N}\frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\,dx\,dy + \int_{out(C)}\frac{1}{N}\sum_{\ell=1}^{N}\frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\,dx\,dy, \qquad (5.3)
\]
where $\bar{a} = (\bar{a}_1, \dots, \bar{a}_N)$ and $\bar{b} = (\bar{b}_1, \dots, \bar{b}_N)$. The regularization term, consisting of the length of the contour $C$ and the area inside $C$ as used in [16], is added to the global and local fidelity terms (5.2) and (5.3); as a result, the following energy functional is obtained:

\[
E(C, a, b, \bar{a}, \bar{b}) = \mu\,\mathrm{length}(C) + \nu\,\mathrm{area}\big(inside(C)\big) + \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\int_{in(C)}\left(\frac{(I_\ell - a_\ell)^2}{a_\ell^2} + \frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\right)dx\,dy + \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\int_{out(C)}\left(\frac{(I_\ell - b_\ell)^2}{b_\ell^2} + \frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\right)dx\,dy. \qquad (5.4)
\]

5.2.1 Level Set Formulation

The level set formulation of our proposed model is the same as that of the CV model [16]; one can consult section 3.3.1. Expressing each term of eq (5.4) in terms of the level set function $\Psi$, we get the following equations:
\[
\mathrm{length}(C) = \mathrm{length}(\Psi = 0) = \int_\Omega|\nabla H(\Psi)|\,dx\,dy = \int_\Omega\delta(\Psi)|\nabla\Psi|\,dx\,dy,
\]
\[
\mathrm{area}\big(inside(C)\big) = \mathrm{area}(\Psi > 0) = \int_\Omega H(\Psi)\,dx\,dy,
\]
\[
\frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\int_{in(C)}\left(\frac{(I_\ell - a_\ell)^2}{a_\ell^2} + \frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\right)dx\,dy = \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\int_\Omega\left(\frac{(I_\ell - a_\ell)^2}{a_\ell^2} + \frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\right)H(\Psi)\,dx\,dy,
\]
\[
\frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\int_{out(C)}\left(\frac{(I_\ell - b_\ell)^2}{b_\ell^2} + \frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\right)dx\,dy = \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\int_\Omega\left(\frac{(I_\ell - b_\ell)^2}{b_\ell^2} + \frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\right)\big(1 - H(\Psi)\big)\,dx\,dy.
\]

Thus, using the level set function $\Psi$, eq (5.4) takes the following form:
\[
E(\Psi, a, b, \bar{a}, \bar{b}) = \mu\int_\Omega\delta(\Psi)|\nabla\Psi|\,dx\,dy + \nu\int_\Omega H(\Psi)\,dx\,dy + \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\int_\Omega\left(\frac{(I_\ell - a_\ell)^2}{a_\ell^2} + \frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\right)H(\Psi)\,dx\,dy + \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\int_\Omega\left(\frac{(I_\ell - b_\ell)^2}{b_\ell^2} + \frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\right)\big(1 - H(\Psi)\big)\,dx\,dy, \qquad (5.5)
\]
where $I_\ell^* = A_{co} * I_\ell$ and $A_{co}$ is an averaging convolution operator. We take the regularized forms $H_\epsilon$ and $\delta_\epsilon$ of the Heaviside function $H$ and the delta function $\delta$, as the Heaviside function $H$ is not differentiable at 0. We use $H_\epsilon$ and $\delta_\epsilon$ as in [9, 16, 17], i.e.,
\[
H_\epsilon(x) = \frac{1}{2}\left(1 + \frac{2}{\pi}\tan^{-1}\Big(\frac{x}{\epsilon}\Big)\right), \qquad \delta_\epsilon(x) = H_\epsilon'(x) = \frac{1}{\pi}\,\frac{\epsilon}{\epsilon^2 + x^2},
\]
thus the regularized form of eq (5.5) takes the following shape:
\[
E_\epsilon(\Psi, a, b, \bar{a}, \bar{b}) = \mu\int_\Omega\delta_\epsilon(\Psi)|\nabla\Psi|\,dx\,dy + \nu\int_\Omega H_\epsilon(\Psi)\,dx\,dy + \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\int_\Omega\left(\frac{(I_\ell - a_\ell)^2}{a_\ell^2} + \frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\right)H_\epsilon(\Psi)\,dx\,dy + \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\int_\Omega\left(\frac{(I_\ell - b_\ell)^2}{b_\ell^2} + \frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\right)\big(1 - H_\epsilon(\Psi)\big)\,dx\,dy. \qquad (5.6)
\]
Here we consider $\nu = 0$. The values of $a_\ell$, $b_\ell$, $\bar{a}_\ell$ and $\bar{b}_\ell$ for $\ell = 1, 2, \dots, N$ can be obtained by minimizing eq (5.6) w.r.t. the constants $a_\ell$, $b_\ell$, $\bar{a}_\ell$ and $\bar{b}_\ell$ respectively, keeping $\Psi$ fixed. We get the following equations:
\[
a_\ell = \frac{\int_\Omega I_\ell^2\, H_\epsilon(\Psi)\,dx\,dy}{\int_\Omega I_\ell\, H_\epsilon(\Psi)\,dx\,dy}, \qquad
b_\ell = \frac{\int_\Omega I_\ell^2\,\big(1 - H_\epsilon(\Psi)\big)\,dx\,dy}{\int_\Omega I_\ell\,\big(1 - H_\epsilon(\Psi)\big)\,dx\,dy},
\]
where $\int_\Omega\big(1 - H_\epsilon(\Psi)\big)\,dx\,dy > 0$, i.e., the exterior of the contour is not empty, and
\[
\bar{a}_\ell = \frac{\int_\Omega (I_\ell^*)^2\, H_\epsilon(\Psi)\,dx\,dy}{\int_\Omega I_\ell^*\, H_\epsilon(\Psi)\,dx\,dy}, \qquad
\bar{b}_\ell = \frac{\int_\Omega (I_\ell^*)^2\,\big(1 - H_\epsilon(\Psi)\big)\,dx\,dy}{\int_\Omega I_\ell^*\,\big(1 - H_\epsilon(\Psi)\big)\,dx\,dy}.
\]
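Note that, unlike the plain region means of the CV model, these minimizers are intensity-weighted averages of the form $\sum I^2 H / \sum I H$. The discrete sketch below (with $H$ a binary mask for simplicity, our own simplification of $H_\epsilon$) computes them and checks that $a_\ell$ is indeed a minimizer of the inside CoV fidelity.

```python
import numpy as np

def cov_constants(I, H):
    """Discrete versions of the minimizers above:
    a = sum(I^2 H) / sum(I H),  b = sum(I^2 (1-H)) / sum(I (1-H))."""
    a = np.sum(I ** 2 * H) / np.sum(I * H)
    b = np.sum(I ** 2 * (1 - H)) / np.sum(I * (1 - H))
    return a, b
```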

In order to minimize eq (5.6) with respect to the level set function $\Psi$, we use the Gâteaux derivative of the functional $E_\epsilon$ while keeping $a_\ell$, $b_\ell$, $\bar{a}_\ell$ and $\bar{b}_\ell$ fixed. We get an equation of the following form:
\[
\lim_{t\to 0}\frac{1}{t}\big(E_\epsilon(\Psi + t\Phi, a, b, \bar{a}, \bar{b}) - E_\epsilon(\Psi, a, b, \bar{a}, \bar{b})\big) = 0
\]
\[
\Rightarrow\ \mu\int_\Omega\left(\delta_\epsilon'(\Psi)\,|\nabla\Psi|\,\Phi + \delta_\epsilon(\Psi)\,\frac{\nabla\Psi\cdot\nabla\Phi}{|\nabla\Psi|}\right)dx\,dy + \int_\Omega\delta_\epsilon(\Psi)\,\frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\left(\frac{(I_\ell - a_\ell)^2}{a_\ell^2} + \frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\right)\Phi\,dx\,dy - \int_\Omega\delta_\epsilon(\Psi)\,\frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\left(\frac{(I_\ell - b_\ell)^2}{b_\ell^2} + \frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\right)\Phi\,dx\,dy = 0. \qquad (5.7)
\]

For any vector field $\vec{v}$ and any scalar $s$, Green's theorem (integration by parts) states:
\[
\int_\Omega s\,\nabla\cdot\vec{v}\,dx\,dy = -\int_\Omega\nabla s\cdot\vec{v}\,dx\,dy + \int_{\partial\Omega} s\,\vec{v}\cdot\vec{n}\,ds.
\]
Now, using Green's theorem with $\Phi = s$ and $\delta_\epsilon(\Psi)\dfrac{\nabla\Psi}{|\nabla\Psi|} = \vec{v}$, we have:
\[
-\mu\int_\Omega\delta_\epsilon(\Psi)\,\nabla\cdot\left(\frac{\nabla\Psi}{|\nabla\Psi|}\right)\Phi\,dx\,dy + \mu\int_{\partial\Omega}\frac{\delta_\epsilon(\Psi)}{|\nabla\Psi|}\,\frac{\partial\Psi}{\partial\vec{n}}\,\Phi\,ds + \int_\Omega\delta_\epsilon(\Psi)\,\frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\left(\frac{(I_\ell - a_\ell)^2}{a_\ell^2} + \frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\right)\Phi\,dx\,dy - \int_\Omega\delta_\epsilon(\Psi)\,\frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\left(\frac{(I_\ell - b_\ell)^2}{b_\ell^2} + \frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\right)\Phi\,dx\,dy = 0. \qquad (5.8)
\]

An Euler-Lagrange equation in $\Psi$ can be obtained from equation (5.8):
\[
\mu\,\delta_\epsilon(\Psi)\,\nabla\cdot\left(\frac{\nabla\Psi}{|\nabla\Psi|}\right) - \delta_\epsilon(\Psi)\,\frac{1}{N}\sum_{\ell=1}^{N}\left\{\lambda_\ell^{+}\left(\frac{(I_\ell - a_\ell)^2}{a_\ell^2} + \frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\right) - \lambda_\ell^{-}\left(\frac{(I_\ell - b_\ell)^2}{b_\ell^2} + \frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\right)\right\} = 0, \qquad (5.9)
\]
where
\[
\frac{\delta_\epsilon(\Psi)}{|\nabla\Psi|}\,\frac{\partial\Psi}{\partial\vec{n}} = 0 \quad \text{on } \partial\Omega.
\]
Thus the steady state evolution equation takes the following form:
\[
\frac{\partial\Psi}{\partial t} = \delta_\epsilon(\Psi)\left\{\mu\,\nabla\cdot\left(\frac{\nabla\Psi}{|\nabla\Psi|}\right) - \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{+}\left(\frac{(I_\ell - a_\ell)^2}{a_\ell^2} + \frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\right) + \frac{1}{N}\sum_{\ell=1}^{N}\lambda_\ell^{-}\left(\frac{(I_\ell - b_\ell)^2}{b_\ell^2} + \frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\right)\right\}. \qquad (5.10)
\]
The Euler-Lagrange equation (5.10) of our proposed model (5.6) is discretized by the AOS method in the following section.

5.2.2 Additive Operator Splitting (AOS) Method

In order to discretize (5.10) we use the Additive Operator Splitting (AOS) method. Writing equation (5.10) in the following form:
\[
\frac{\partial\Psi}{\partial t} = \mu\,\delta_\epsilon(\Psi)\,\nabla\cdot\big(G\,\nabla\Psi\big) + f = \mu\,\delta_\epsilon(\Psi)\Big\{\partial_x\big(G\,\partial_x\Psi\big) + \partial_y\big(G\,\partial_y\Psi\big)\Big\} + f, \quad \text{where } G = \frac{1}{|\nabla\Psi|},
\]
and
\[
f = \delta_\epsilon(\Psi)\,\frac{1}{N}\sum_{\ell=1}^{N}\left\{-\lambda_\ell^{+}\left(\frac{(I_\ell - a_\ell)^2}{a_\ell^2} + \frac{(I_\ell^* - \bar{a}_\ell)^2}{\bar{a}_\ell^2}\right) + \lambda_\ell^{-}\left(\frac{(I_\ell - b_\ell)^2}{b_\ell^2} + \frac{(I_\ell^* - \bar{b}_\ell)^2}{\bar{b}_\ell^2}\right)\right\}.
\]

In the AOS scheme [4, 12, 23, 26, 50] we split the $n$-dimensional spatial operator into $n$ one-dimensional operators; in the end, the $n$-dimensional operator is recovered as the sum of the $n$ one-dimensional discretizations. Thus the above equation can be discretized as:
\[
\frac{\Psi^{k+1}_{\jmath} - \Psi^{k}_{\jmath}}{\Delta t} = \mu\,\delta_\epsilon(\Psi^{k}_{\jmath})\left[\frac{F^{k}_{\jmath} + F^{k}_{\jmath-1}}{2}\,\Psi^{k+1}_{\jmath-1} - \frac{F^{k}_{\jmath+1} + 2F^{k}_{\jmath} + F^{k}_{\jmath-1}}{2}\,\Psi^{k+1}_{\jmath} + \frac{F^{k}_{\jmath} + F^{k}_{\jmath+1}}{2}\,\Psi^{k+1}_{\jmath+1}\right] + f_{\jmath},
\]
\[
\Rightarrow\ \Psi^{k+1}_{\jmath} = \Psi^{k}_{\jmath} + \Delta t\left(F_1\,\Psi^{k+1}_{\jmath-1} - F\,\Psi^{k+1}_{\jmath} + F_2\,\Psi^{k+1}_{\jmath+1}\right) + \Delta t\,f_{\jmath}, \qquad (5.11)
\]

where
\[
F_1 = \mu\,\delta_\epsilon(\Psi^{k}_{\jmath})\,\frac{F^{k}_{\jmath} + F^{k}_{\jmath-1}}{2}, \qquad F = \mu\,\delta_\epsilon(\Psi^{k}_{\jmath})\,\frac{F^{k}_{\jmath+1} + 2F^{k}_{\jmath} + F^{k}_{\jmath-1}}{2}, \qquad F_2 = \mu\,\delta_\epsilon(\Psi^{k}_{\jmath})\,\frac{F^{k}_{\jmath} + F^{k}_{\jmath+1}}{2}, \qquad (5.12)
\]
and equation (5.11) takes the following form:
\[
-\Delta t\,F_1\,\Psi^{k+1}_{\jmath-1} + \big(1 + \Delta t\,F\big)\Psi^{k+1}_{\jmath} - \Delta t\,F_2\,\Psi^{k+1}_{\jmath+1} = \Psi^{k}_{\jmath} + \Delta t\,f_{\jmath}. \qquad (5.13)
\]

The matrix form of equation (5.13) can be written as:
\[
A_p(\Psi^k)\,\Psi^{k+1}_p = \Psi^k + f^k \quad \text{for } p = 1, 2, \qquad (5.14)
\]
where the system of equations (5.14) is solved in one direction (say the $x$-direction) and $A_p(\Psi^k)$, $p = 1, 2$, is a tri-diagonal matrix. After solving the Euler-Lagrange equations in the $y$-direction as well, we take the average of the two systems and get the next approximation to the exact solution:
\[
\Psi^{k+1} = \frac{1}{2}\sum_{p=1}^{2}\Psi^{k+1}_p. \qquad (5.15)
\]

The AOS method is computationally less expensive than the corresponding SI method, because in the AOS method we avoid the block tri-diagonal matrix that arises in the SI method [16, 14].

5.3 Experimental Results

We present here the results of our proposed (CoVVV) model of section 5.2, compared with the CV vector-valued (CVVV) model of section 3.5.
In figures 5.2 and 5.3 we consider an image, with initial contour, that contains homogeneous regions of different intensity levels. We observe in fig 5.2 that the CVVV model fails in segmenting regions of low contrast, while our proposed (CoVVV) model also detects the regions of low intensity level.
We present a second RGB image with initial contour in figures 5.4 and 5.5. The segmented result of the CVVV model (see fig 5.4(c)) fails in segmenting low contrast regions and fuzzy edges. On the other hand, the segmented result of our proposed (CoVVV) model (see fig 5.5(c)) is much better at segmenting the low contrast regions and fuzzy edges.
Our third RGB image, shown in figures 5.6 and 5.7, contains homogeneous overlapping regions. The segmented result of the CVVV model (see fig 5.6(c)) shows that the CVVV model fails to segment those edges where two homogeneous regions overlap, while the segmented result of our model (see fig 5.7(c)) shows that our model is able to segment the overlapping regions as well.
On the basis of these experiments we can say that the result of our coefficient of variation based variational model (CoVVV) is comparatively better than the existing Chan-Vese vector-valued variational model (CVVV) on images that have overlapping regions, low contrast regions or fuzzy edges.

Figure 5.1: Images that are used in our experiments


Figure 5.2: Segmenting an RGB image using the Chan-Vese vector-valued (CVVV) model: (a) initial contour with center $(x_0, y_0) = (130, 150)$ and radius $r_0 = 40$; image size 256 x 256. (b) Result of the CVVV model after 700 iterations. (c) Segmented result of the CVVV model.


Figure 5.3: Segmenting an RGB image using the proposed coefficient of variation (CoVVV) model: (a) initial contour with center $(x_0, y_0) = (130, 150)$ and radius $r_0 = 40$; image size 256 x 256. (b) Result of the CoVVV model after 700 iterations. (c) Segmented result of the CoVVV model.


Figure 5.4: Segmenting an RGB image using the Chan-Vese vector-valued (CVVV) model: (a) initial contour with center $(x_0, y_0) = (115, 130)$ and radius $r_0 = 40$; image size 256 x 256. (b) Result of the CVVV model after 700 iterations. (c) Segmented result of the CVVV model.


Figure 5.5: Segmenting an RGB image using the proposed coefficient of variation (CoVVV) model: (a) initial contour with center $(x_0, y_0) = (130, 150)$ and radius $r_0 = 40$; image size 256 x 256. (b) Result of the CoVVV model after 700 iterations. (c) Segmented result of the CoVVV model.


Figure 5.6: Segmenting an RGB image using the Chan-Vese vector-valued (CVVV) model: (a) initial contour with center $(x_0, y_0) = (130, 115)$ and radius $r_0 = 40$; image size 256 x 256. (b) Result of the CVVV model after 58 iterations. (c) Segmented result of the CVVV model.


Figure 5.7: Segmenting an RGB image using the proposed coefficient of variation (CoVVV) model: (a) initial contour with center $(x_0, y_0) = (100, 100)$ and radius $r_0 = 45$; image size 256 x 256. (b) Result of the CoVVV model after 700 iterations. (c) Segmented result of the CoVVV model.

Chapter 6

Conclusion and Future Work

In this chapter we present the conclusion of our proposed model (sec 5.2) on the basis of the experimental results (sec 5.3). We also outline further work to be done in image processing.

6.1 Conclusion

The Semi-Implicit (SI) method for the Euler-Lagrange (EL) equation arising from the minimization of the CV vector-valued (VV) model is unconditionally stable, but for images of large sizes it may not work. Thus we propose the multi-grid (MG) method for the solution of the EL equation.
We have also developed a new active contour VV model for segmentation of vector-valued (VV) images. Our proposed VV variational image segmentation model is equipped with efficient global and local fidelity terms, due to which it gives better results compared to the Chan-Vese VV model.
The use of the coefficient of variation in place of the variance in both global and local fidelity terms further strengthens the results of the model: it segments valuable details and detects more regions of interest.

6.2 Future Work

We want to continue our work in the areas of image segmentation and image denoising. The following is our future work plan.

• Along with the AOS scheme, we will also use the AMOS scheme (as given in section 2.6.8) for our model. This scheme will improve the efficiency of the model.

• We will develop a multigrid algorithm for our proposed model (5.2).

• We will work to modify the regularizing term of the model.

• We will use our model in the area of image denoising as well.

Bibliography

[1] S. E. Ahmad, A pooling methodology for coefficient of variation, The Indian Journal
of Statistics 32 (1995), no. B1, 235–238.

[2] T. Asano, D. Z. Chen, N. Katoh, and T. Tokuyama, Polynomial-time solutions to image segmentation, Proc. of the 7th Ann. SIAM-ACM Conference on Discrete Algorithms (1996).

[3] G. Aubert and P. Kornprobst, Mathematical problems in image processing: Partial Differential Equations and the Calculus of Variations, Springer (2002).

[4] N. Badshah and K. Chen, Multigrid method for the chan-vese model in variational
segmentation, Communication and Computational Physics 4 (2008), no. 2.

[5] N. Badshah and K. Chen, Image selective segmentation under geometrical constraints using an active contour approach, Commun. Comput. Phys. 7 (2009), no. 3, 759–778.

[6] N. Badshah, K. Chen, H. Ali, and G. Murtaza, A coefficient of variation based image
selective segmentation model using active contours, East Asian Journal on Applied
Mathematics 4 (2012).

[7] E. Bae, Efficient global minimization methods for variational problems in imaging
and vision, PhD thesis, Department of Mathematics, University of Bergen (2011).

[8] D. Barash and R. Kimmel, An accurate operator splitting scheme for non-linear
diffusion filtering, Scale-space and Morphology in computer Vision 21 (2001), no. 06,
281–289.

[9] R. P. Beyer and R. J. Leveque, Analysis of a one-dimensional model for the immersed
boundary method, SIAM Journal of Numerical Analysis 29 (1992), no. 2, 332–364.

[10] X. Bresson, S. Esedoglu, P. Vandergheynst, J.P. Thiran, and S. Osher, Global min-
imizers of the active contour/snake model, CAM Report (2005), 04–05.

[11] V. Caselles, R. Kimmel, and G. Sapiro, Geodesic active contours, International Jour-
nal of Computer Vision 22 (1997), no. 1, 61–79.

[12] T. F. Chan, K. Chen, and X.-C. Tai, Nonlinear multilevel schemes for solving the
total variation image minimization problem, Springer Berlin Heidelberg (2007).

[13] T. F. Chan, S. Esedoglu, and M. Nikolova, Algorithms for finding global minimizers
of image segmentation and denoising models, UCLA CAM Report (2004), 04–54.

[14] T. F. Chan, B. Y. Sandberg, and L. A. Vese, Active contours without edges for
vector-valued images, Journal of Visual Communication and Image Representation
11 (2000), no. 2, 130–141.

[15] T. F. Chan and J. Shen, Image processing and analysis: Variational, PDE, wavelet,
and stochastic methods, Society for Industrial and Applied Mathematics (SIAM),
3600 University City Science Center, Philadelphia, PA 19104-2688, USA (2005).

[16] T. F. Chan and L. A. Vese, Active contours without edges, IEEE Transactions on
Image Processing 10 (2001), no. 2, 266–277.

[17] K. Chen, Matrix preconditioning techniques and applications, Cambridge University
Press, The Edinburgh Building, Cambridge CB2 2RU, UK, first edition (2005).

[18] E. De Giorgi, M. Carriero, and A. Leaci, Existence theorem for a minimum problem
with free discontinuity set, Arch. Rational Mech. Anal. 108 (1989), no. 3, 195–218.

[19] R. Goldenberg, R. Kimmel, E. Rivlin, and M. Rudzsky, Fast geodesic active contours,
IEEE Transactions on Image Processing 10 (2001), no. 10, 1467–1475.

[20] G. H. Golub and C. F. Van Loan, Matrix computations, Johns Hopkins University
Press, Baltimore (1983).

[21] V. E. Henson, Multigrid methods for nonlinear problems: an overview, Computational
Imaging 5016 (2003), 36–48.

[22] E. Isaacson and H. B. Keller, Analysis of numerical methods, John Wiley, New York
(1966).

[23] M. Jeon, M. Alexander, W. Pedrycz, and N. Pizzi, Unsupervised hierarchical image
segmentation with level set and additive operator splitting, Pattern Recognition
Letters 26 (2005), 1461–1469.

[24] M. Kass, A. Witkin, and A. Terzopoulos, Snakes: Active contour models, Interna-
tional Journal of Computer Vision 1 (1988), no. 4, 321–331.

[25] T. Lu, P. Neittaanmaki, and X.-C. Tai, A parallel splitting up method and its
application to Navier-Stokes equations, Applied Mathematics Letters 4 (1991),
no. 2, 25–29.

[26] T. Lu, P. Neittaanmaki, and X.-C. Tai, A parallel splitting-up method for partial
differential equations and its applications to Navier-Stokes equations, RAIRO Model.
Math. Anal. Numer. 26 (1992), no. 6, 673–708.

[27] M. Mora, C. Tauber, and H. Batatia, Robust level set for heart cavities detection in
ultrasound images, Computers in Cardiology 32 (2005), 235–238.

[28] J. M. Morel and S. Solimini, Variational methods in image segmentation: A
constructive approach, Revista Matematica Universidad Complutense de Madrid 1
(1988), 169–182.

[29] J. M. Morel and S. Solimini, Variational methods in image segmentation, Birkhäuser
Boston Inc., Cambridge, MA, USA (1995).

[30] D. Mumford and J. Shah, Optimal approximation by piecewise smooth functions and
associated variational problems, Communications on Pure and Applied Mathematics
42 (1989), 577–685.

[31] M. V. Oehsen, Multiscale methods for variational image processing, Logos Verlag
Berlin, Comeniushof, Gubener str., 47, 10243 Berlin, first edition (2002).

[32] J. M. Ortega, Numerical analysis: A second course, Classics in Applied Mathematics,
No. 3, Society for Industrial and Applied Mathematics, Philadelphia, PA (1990).

[33] L. I. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal
algorithms, Physica D 60 (1992), 259–268.

[34] S. Osher and J. A. Sethian, Fronts propagating with curvature-dependent speed:
Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988),
no. 1, 12–49.

[35] N. Paragios and R. Deriche, Geodesic active contours and level sets for the detection
and tracking of moving objects, IEEE Transactions on Pattern Analysis and Machine
Intelligence 22 (2000), no. 3, 266–280.

[36] R. D. Richtmyer and K. W. Morton, Difference methods for initial-value problems,
second edition, Interscience Tracts in Pure and Applied Mathematics, No. 4,
Interscience Publishers John Wiley & Sons, Inc., New York-London-Sydney (1967).

[37] L. I. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal
algorithms, Physica D 60 (1992), no. 1–4, 259–268.

[38] M. Rudzsky, E. Rivlin, R. Kimmel, and R. Goldenberg, Fast geodesic active contours,
Scale-Space Theories in Computer Vision, Lecture Notes in Computer Science 1682
(1999), 34–45.

[39] J. Savage and K. Chen, An improved and accelerated non-linear multigrid method
for total-variation denoising, International Journal of Computer Mathematics 82
(2005), no. 8, 1001–1015.

[40] M. A. Schulze and Q. X. Wu, Nonlinear edge-preserving smoothing of synthetic
aperture radar images, Proceedings of the New Zealand Image and Vision
Computing '95 Workshop (1995), 65–70.

[41] M. Sonka, V. Hlavac, and R. Boyle, Image processing, analysis, and machine vision,
Chapman & Hall, second edition (1998).

[42] G. W. Stewart, Introduction to matrix computations, Academic Press, New York
(1973).

[43] U. Trottenberg, C. W. Oosterlee, and A. Schuller, Multigrid, Academic Press, Inc.,
Orlando, FL, USA (2001).

[44] Y. H. R. Tsai and S. Osher, Total variation and level set based methods in image
science, Acta Numerica, Cambridge University Press, UK (2005), 01–61.

[45] R. S. Varga, Matrix iterative analysis, Prentice Hall, Englewood Cliffs, NJ (1962).

[46] J. Weickert, B. M. ter Haar Romeny, and M. A. Viergever, Efficient and reliable
schemes for non-linear diffusion filtering, IEEE Transactions on Image Processing
7 (1998), no. 3, 398–410.

[47] C. R. Vogel, Computational methods for inverse problems, Society for Industrial and
Applied Mathematics, Philadelphia, PA, USA (2002).

[48] L. Wang, C. Li, D. Xia, and C. Y. Kao, Active contours driven by local and global
intensity fitting energy with application to brain MR image segmentation,
Computerized Medical Imaging and Graphics 33 (2009), 520–531.

[49] X. F. Wang, D. S. Huang, and H. Xu, An efficient local Chan-Vese model for image
segmentation, Pattern Recognition 43 (2010), no. 3, 603–618.

[50] J. Weickert, B. M. ter Haar Romeny, and M. A. Viergever, Efficient and reliable
schemes for nonlinear diffusion filtering, Scale-Space Theory in Computer Vision,
Lecture Notes in Computer Science 1252 (1997), 260–271.

[51] J. Weickert, B. M. ter Haar Romeny, and M. A. Viergever, Efficient and reliable
schemes for nonlinear diffusion filtering, IEEE Transactions on Image Processing
7 (1998), no. 3, 398–410.

