APPLICATION OF FIXED POINT THEOREM FOR DIGITAL IMAGES
Int. J. Adv. Res. 10(01), 1110-1126 (ISSN: 2320-5407)
Article DOI: 10.21474/IJAR01/14152
DOI URL: http://dx.doi.org/10.21474/IJAR01/14152
RESEARCH ARTICLE
Rita Pal
Thakur Shyamnarayan Degree College, Mumbai University, Maharashtra.

Manuscript History: Received: 30 November 2021; Final Accepted: 31 December 2021; Published: January 2022.

Abstract: In this research paper, we prove some fixed point theorems for digital images. The paper's main goal is to present another generalisation of the well-known Banach contraction principle for digital images. The fundamental concepts of digital images are discussed, as is an application of the fixed point theorem to digital image compression and fractal image compression. Digital contractive type mappings in digital metric space are introduced, and the uniqueness of fixed points in digital metric space is proved.

Key words: Banach Contraction Principle, Digital Metric Space, Digital Image, Increasing Sequence, Complete Digital Metric Space, Finite Sequence, Continuity, Contractive Type Mapping.

Copyright, IJAR, 2022. All rights reserved.
Introduction:-
The basic concept of metric fixed point theory is the Banach contraction principle, which states that "If S is a mapping from a complete metric space (X, d) into itself satisfying d(Sx, Sy) ≤ αd(x, y) for all x, y ∈ X, where 0 ≤ α < 1, then S has a unique fixed point." [12]. The Banach contraction principle provides not only the existence and uniqueness of fixed points but also methods for obtaining approximate fixed points. It has been generalised several times by using various types of minimal commutativity conditions together with continuity of one of the mappings. The usual ingredients for a common fixed point result are: (a) a commutativity type condition, (b) completeness of the range space of one or more mappings, (c) a relationship between the ranges of the mappings, (d) continuity of one or more mappings, and (e) a contractive type condition. The study of spaces with the fixed point property is central to topological fixed point theory. Furthermore, topology is the study of geometric problems that do not rely solely on the exact shape of the objects, but rather on their interaction with a space. In topology, we generally consider an infinite number of points in an arbitrarily small neighbourhood of a point. Rosenfeld introduced the concept of digital topology to consider a finite number of points in a neighbourhood [13]. In fact, digital topology is the study of the geometric and topological properties of digital images through the use of geometric and algebraic topology; equivalently, it is the study of the topological properties of image arrays. Its findings lay the groundwork for image processing operations such as image thinning, border following, contour filling, and object counting. Digital images have been used in a variety of applications, including image processing and computer graphics, and digital topology serves as the mathematical foundation for image processing operations. It is a developing field in both 2D and 3D digital images.
Ege and Karaca defined a digital metric space and applied the well-known Banach contraction principle to digital images. In this paper, we introduce contractions and contractive mappings in digital metric spaces and prove the existence and uniqueness of fixed points in digital metric space. Hutchinson [17] initiated the theory of iterated function systems (IFS). Barnsley first recognized the potential of fractals for image compression and applied the theory of IFS; he published his book Fractals Everywhere [18] as well as a paper on fractal image compression [19]. This research activity attracted many researchers in applied mathematics and computer science towards fractals.
Preliminaries
Let X be a subset of Zⁿ for a positive integer n, where Zⁿ is the set of lattice points in n-dimensional Euclidean space, and let β represent an adjacency relation for the members of X. A digital image is a pair (X, β).
Definition 2.1: Let β, n be positive integers with 1 ≤ β ≤ n, and let p = (p1, p2, …, pn), q = (q1, q2, …, qn) be two distinct points of Zⁿ. Then p and q are β-adjacent if there are at most β indices i such that |pi − qi| = 1 and, for all other indices j such that |pj − qj| ≠ 1, pj = qj. The following statements can be obtained from Definition 2.1. For a given p ∈ Zⁿ, the number of points q ∈ Zⁿ which are β-adjacent to p is denoted by k = k(β, n). It may be noted that k(β, n) is independent of p [6].
2.1.1 If p ∈ Z (i.e. n = 1), then β can take only one value, β = 1. In this case k(1, 1) = 2, since p − 1 and p + 1 are the only points 1-adjacent to p in Z. Thus k = k(1, 1) = 2, and q is 1-adjacent to p if and only if |p − q| = 1.
2.1.2 If p ∈ Z² (i.e. n = 2), then β can take the values β = 1, 2. When β = 1, the points 1-adjacent to p = (p1, p2) are (p1±1, p2), (p1, p2±1). Thus the number of points 1-adjacent to p = (p1, p2) is 4, so that k = k(1, 2) = 4 (fig. (a)).
When β = 2, the points 2-adjacent to p = (p1, p2) are (p1±1, p2), (p1, p2±1), (p1±1, p2±1). Thus the number of points 2-adjacent to p = (p1, p2) is 8, so that k = k(2, 2) = 8 (fig. (b)).
2.1.3 If p ∈ Z³ (i.e. n = 3), then β can take the values β = 1, 2, 3. When β = 1, the points 1-adjacent to p = (p1, p2, p3) are (p1±1, p2, p3), (p1, p2±1, p3), (p1, p2, p3±1).
Thus the number of points 1-adjacent to p is 6, so that k = k(1, 3) = 6 (fig. (a)).
When β = 2, the points 2-adjacent to p = (p1, p2, p3) are (p1±1, p2, p3), (p1, p2±1, p3), (p1, p2, p3±1), (p1±1, p2±1, p3), (p1±1, p2, p3±1), (p1, p2±1, p3±1). Thus the number of points 2-adjacent to p is 18, so that k = k(2, 3) = 18 (fig. (b)).
When β = 3, the points 3-adjacent to p = (p1, p2, p3) are (p1±1, p2, p3), (p1, p2±1, p3), (p1, p2, p3±1), (p1±1, p2±1, p3), (p1±1, p2, p3±1), (p1, p2±1, p3±1), (p1±1, p2±1, p3±1). Thus the number of points 3-adjacent to p is 26, so that k = k(3, 3) = 26 (fig. (c)).
In general, to study an n-D digital image, if 1 ≤ β ≤ n then k = k(β, n) is given by the following formula [12]:

k(β, n) = Σ_{i=n−β}^{n−1} 2^{n−i} C(n, i),  where C(n, i) = n! / ((n − i)! i!).
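As a quick sanity check of this formula (not part of the original derivation), the following short Python sketch computes k(β, n) and reproduces the adjacency counts worked out in 2.1.1-2.1.3:

    from math import comb

    def k(beta: int, n: int) -> int:
        """k(beta, n) = sum over i = n-beta .. n-1 of 2^(n-i) * C(n, i)."""
        return sum(2 ** (n - i) * comb(n, i) for i in range(n - beta, n))

    # Reproduces the counts derived above.
    assert k(1, 1) == 2
    assert (k(1, 2), k(2, 2)) == (4, 8)
    assert (k(1, 3), k(2, 3), k(3, 3)) == (6, 18, 26)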
Definition 2.2: Let X ⊂ Zⁿ and let d be the Euclidean metric on Zⁿ, so that (X, d) is a metric space. If (X, β) is a digital image with β-adjacency, then (X, d, β) is called a digital metric space. [14]
Definition 2.3: A sequence {xn} of points of the digital metric space (X, d, β) is a Cauchy sequence if there is M ∈ N such that d(xn, xm) < 1 for all n, m > M.
Theorem 2.1: For a digital metric space (X, d, β), if a sequence {xm} ⊂ X ⊂ Zⁿ is a Cauchy sequence, there is M ∈ N such that for all n, m > M we have xn = xm.
Definition 2.4: A sequence {xn} of points of a digital metric space (X, d, β) converges to a limit L ∈ X if for all ε > 0 there is M ∈ N such that d(xn, L) < ε for all n > M.
Proposition 2.1: A sequence {xn} of points of a digital metric space (X, d, β) converges to a limit L ∈ X if there is M ∈ N such that xn = L for all n > M (i.e. xn = xn+1 = xn+2 = ⋯ = L).
Definition 2.5: A digital metric space (X, d, β) is complete if every Cauchy sequence {xn} converges to a point L of (X, d, β).
Definition 2.6: Let (X, d, β) be a digital metric space and T : (X, d, β) ⟶ (X, d, β) be a self-map. If there exists λ ∈ [0, 1) such that d(Tx, Ty) ≤ λd(x, y) for all x, y ∈ X, then T is called a contraction map.
Proposition 2.2: Every digital contraction map T : (X, d, β) ⟶ (X, d, β) is β-continuous (digitally continuous).
Lemma 2.1: Let X ⊂ Zⁿ and let (X, d, β) be a digital metric space. Then there does not exist an infinite sequence {xm} of distinct elements of X such that d(xm+2, xm+1) < d(xm+1, xm) for m = 1, 2, 3, …; i.e. any such sequence {xm} is finite. [15]
Main Result
First we introduce a notion.
Notion 3.1: Let Θ = {θ : [0, ∞) ⟶ [0, ∞) such that θ is increasing, θ(t) < √t for t > 0, and θ(t) = 0 iff t = 0}.
Definition 3.1: Suppose (X, d, β) is a digital metric space, T : X ⟶ X and θ ∈ Θ. Suppose d(Tx, Ty) ≤ θ(d(x, y)) for all x, y ∈ X. Then T is called a digital θ-contraction.
Theorem 3.1: Suppose (X, d, β) is a digital metric space, T : X ⟶ X and θ ∈ Θ with d(Tx, Ty) ≤ θ(d(x, y)) for all x, y ∈ X, i.e. T is a digital θ-contraction. Then T has a unique fixed point.
Proof: Let x0 ∈ X and set xn+1 = Txn for n = 0, 1, 2, 3, … We may suppose that xn ≠ xn+1 for all n; otherwise some xn is already a fixed point.
Now, d(xn+1, xn) = d(Txn, Txn−1)
≤ θ(d(xn, xn−1))
= θ(d(xn−1, xn))
< √d(xn−1, xn)
≤ d(xn−1, xn),
where the last step uses d(xn−1, xn) ≥ 1, which holds because xn−1 and xn are distinct lattice points.
Therefore d(xn+1, xn) < d(xn−1, xn) (since xn ≠ xn−1).
Therefore {d(xn+1, xn)} is a strictly decreasing sequence, so by Lemma 2.1 we must have xn = xn+1 for large n.
Therefore xn is a fixed point of T for large n.
Uniqueness of the fixed point of T: suppose u and v are fixed points of T. Then
d(u, v) = d(Tu, Tv)
≤ θ(d(u, v))
< √d(u, v)
≤ d(u, v),
which is a contradiction if u ≠ v (again using that distinct lattice points are at distance ≥ 1).
Therefore u = v.
Hence T has a unique fixed point.
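To make the iteration concrete, here is a small hedged Python illustration (our own toy example, not from the paper): take X = {0, 2, 8} ⊂ Z with the usual metric, T(0) = 0, T(2) = 0, T(8) = 2, and θ(t) = 0.9√t, which is increasing, vanishes only at 0, and satisfies θ(t) < √t. Picard iteration xn+1 = Txn reaches the unique fixed point after finitely many steps, as Theorem 3.1 predicts.

    from math import sqrt
    from itertools import combinations

    X = [0, 2, 8]                    # a toy digital image in Z (illustrative choice)
    T = {0: 0, 2: 0, 8: 2}           # a self-map of X
    theta = lambda t: 0.9 * sqrt(t)  # theta in Θ: increasing, theta(t) < sqrt(t)

    # Verify the digital theta-contraction condition d(Tx, Ty) <= theta(d(x, y)).
    for x, y in combinations(X, 2):
        assert abs(T[x] - T[y]) <= theta(abs(x - y))

    # Picard iteration terminates at the unique fixed point 0.
    x = 8
    while T[x] != x:
        x = T[x]                     # 8 -> 2 -> 0
    print("fixed point:", x)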
Theorem 3.2: Let (X, d, β) be a digital metric space and T : X ⟶ X be such that d(Tx, Ty) < μ(x, y) for all x, y ∈ X with x ≠ y, where
μ(x, y) = max{ (1/2)[ d(y, Ty)·(1 + d(x, Tx))/(1 + d(x, y)) + d(Tx, Ty) + d(x, y) ], d(x, Tx)·(1 + d(y, Ty))/(1 + d(Tx, Ty)) }.
Then T has a unique fixed point.
Proof: Let x0 ∈ X and set xn+1 = Txn. We may suppose xn ≠ xn+1 for all n; otherwise some xn is already a fixed point. Taking x = xn−1 and y = xn in the contractive condition,
d(xn, xn+1) = d(Txn−1, Txn) < μ(xn−1, xn)
≤ max{ (1/2)[ d(xn, xn+1) + d(xn−1, xn+1) ], d(xn−1, xn) }
≤ max{ (1/2)[ d(xn−1, xn) + d(xn−1, xn) ], d(xn−1, xn) }
= d(xn−1, xn).
Therefore {d(xn+1, xn)} is strictly decreasing, so by Lemma 2.1 we have xn = xn+1 for large n, and such an xn is a fixed point of T.
For uniqueness, suppose u and v are fixed points of T with u ≠ v. Since Tu = u and Tv = v,
d(u, v) = d(Tu, Tv) < μ(u, v) = max{ (1/2)[ d(u, v) + d(u, v) ], 0 } = max{ d(u, v), 0 } = d(u, v),
i.e. d(u, v) < d(u, v), which is a contradiction. Therefore u = v, and there exists a unique fixed point. This completes the proof.
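As a hedged sanity check (again our own illustration, not from the paper), the same toy map T on X = {0, 2, 8} used above also satisfies the contractive condition of Theorem 3.2. The Python snippet below evaluates μ(x, y) exactly as defined in the theorem and verifies d(Tx, Ty) < μ(x, y) for every ordered pair of distinct points; note that μ is not symmetric in x and y, so both orders must be checked.

    from itertools import permutations

    X = [0, 2, 8]
    T = {0: 0, 2: 0, 8: 2}
    d = lambda a, b: abs(a - b)

    def mu(x, y):
        term1 = 0.5 * (d(y, T[y]) * (1 + d(x, T[x])) / (1 + d(x, y))
                       + d(T[x], T[y]) + d(x, y))
        term2 = d(x, T[x]) * (1 + d(y, T[y])) / (1 + d(T[x], T[y]))
        return max(term1, term2)

    # mu(x, y) is not symmetric, so check every ordered pair x != y.
    for x, y in permutations(X, 2):
        assert d(T[x], T[y]) < mu(x, y)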
Fractals are used to approximate many real-world objects, such as coastlines, mountains, trees, and clouds.
Mandelbrot's book, The Fractal Geometry of Nature [16], sparked widespread interest.
Fractal image compression techniques are varied, but they represent only a small portion of all compression
methods available. If the basic concept of an image compression technique is to utilise the self-similarities that
naturally occur in many photos, it is considered fractal. A part of an image can frequently be found that, if altered in
some way, would fit into another part of the same image.
Fractals are often described as self-similar objects; that is where the "fractal" part comes from. Benoit B. Mandelbrot derived the term "fractal" from the Latin word fractus, which means "broken" or "uneven". Instead of attempting to define a fractal precisely, one may take an alternative approach to the problem. We might build a list of features that characterise fractals, as Falconer does in Fractal Geometry: Mathematical Foundations and Applications [20]. A fractal set F need not have all of the following features, but it has at least some of them:
1. F is self-similar on some scales;
2. F is detailed on all scales;
3. the fractal dimension of F (specified in some way) need not be an integer in general; and
4. F is usually described in terms of a simple algorithm.
A typical example of a fractal is the Sierpinski triangle. It is made by starting with an equilateral triangle T0 and locating the midpoints of each of its sides. A new triangle is formed by drawing lines between these midpoints, and this central triangle is then removed. Inside the original triangle we now have three smaller equilateral triangles (see Figure 3); this set of triangles is denoted T1. The next iteration is formed by repeating the process for each of the new triangles, and so on. The Sierpinski triangle is obtained by repeating this an unlimited number of times; the construction is analogous to that of the Cantor set.
The area and circumference of the Sierpinski triangle are quite interesting as well. We start by investigating the area of Tn.
If T0 has side length 1, it has height √(1² − (1/2)²) = √3/2.
n = 0: The area of the first triangle T0 is a0 = (1/2) · 1 · (√3/2) = √3/4.
n = 1: Since we removed a fourth of the triangle T0 to create T1, the area of T1 is a1 = a0 − (1/4)a0 = (3/4)a0 = 3√3/16.
n = 2: T2 was created by removing a fourth of each triangle in T1, therefore the area of T2 is a2 = (3/4)² a0 = 9√3/64.
In general, the area of Tn is an = (3/4)ⁿ a0 = 3ⁿ√3/4ⁿ⁺¹ for n ≥ 0.
Hence the area of the Sierpinski triangle is limₙ→∞ an = 0.
The circumference of Tn, where we also count the circumference of each hole, can be found as follows:
1. n = 0: Each side of T0 has length 1, which yields the circumference l0 = 3.
2. n = 1: T1 consists of three equilateral triangles, each with half the side length of T0. This gives the circumference l1 = 3 · 3 · (1/2) = 9/2.
3. n = 2: Since the pattern repeats, T2 consists of three times as many triangles as T1, each with half the side length of a triangle in T1. Hence the circumference of T2 is l2 = (3/2) · l1 = 3³/2².
In general, Tn consists of 3ⁿ triangles with side length 1/2ⁿ. The circumference of Tn is then ln = 3 · 3ⁿ/2ⁿ = 3ⁿ⁺¹/2ⁿ, which tends to +∞ as n → +∞. Therefore the Sierpinski triangle has area 0 but infinite circumference.
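The following short Python sketch (ours, for illustration) tabulates an and ln and makes the limiting behaviour visible numerically:

    from math import sqrt

    def area(n: int) -> float:
        """a_n = (3/4)^n * sqrt(3)/4."""
        return (3 / 4) ** n * sqrt(3) / 4

    def circumference(n: int) -> float:
        """l_n = 3^(n+1) / 2^n, holes included."""
        return 3 ** (n + 1) / 2 ** n

    for n in range(6):
        print(f"n={n}: area={area(n):.4f}, circumference={circumference(n):.2f}")
    # The areas shrink toward 0 while the circumferences grow without bound.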
This seemingly strange phenomenon is explained by the third property of fractals mentioned above. A common illustrative notion of fractal dimension is the so-called box dimension of a set. It is a scaling relationship between the number of boxes required to cover a set and the side length of the boxes. Figure 4 shows that the Sierpinski triangle is covered by 4 boxes of side length 1/2 in (a) and by 12 boxes of side length 1/4 in (b). In general, if N boxes of side length ε are needed to cover the Sierpinski triangle, then 3N boxes of side length ε/2 are needed to cover it. Since the number of boxes increases by a factor of 2ᵈ each time the side length is halved, where d = log 3 / log 2, we say that this number d is the box dimension of the Sierpinski triangle.
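Numerically, the box counts Nₖ = 4 · 3^(k−1) at side length (1/2)ᵏ give log-log slopes that approach d; a small sketch of ours:

    from math import log

    for k in range(1, 9):
        boxes = 4 * 3 ** (k - 1)      # boxes of side (1/2)^k covering the set
        slope = log(boxes) / (k * log(2))
        print(f"k={k}: N={boxes}, slope={slope:.4f}")
    # The slopes decrease toward d = log 3 / log 2.
    print("box dimension d =", log(3) / log(2))  # ~ 1.585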
In many cases the box dimension is equal to many other notions of dimension, so d is referred to as the set's fractal dimension. With this in mind, it appears more plausible that the Sierpinski triangle has no area yet is more than just a curve, as its dimension is a number between 1 and 2. There are other definitions of fractal dimension besides the box dimension, such as the Hausdorff dimension, which is more mathematically convenient. The fractal dimension is an intriguing topic, but it is not the subject of this paper; more information can be found in [20] for those who are interested.
To begin, we allow X to represent any set. By defining X abstractly, we can discuss a wide range of different sets. We will work with sets of sets and, to a lesser extent, sets of images later on. However, for readers who are unfamiliar with the concepts described below, it may be helpful to think of X as R or R².
Definition 4.1. A metric space (X, d) is a set X together with a real-valued function d : X × X → R such that for any x, y, z ∈ X, the following holds:
1. d(x,y)=0 ⇔x=y;
2. 0<d(x,y)<∞, if x≠y;
3. d(x,y)=d(y,x);
4. d(x,y)≤ d(x,z)+ d(z,y).
It is important to note that a space does not come with a single inherent metric. For example, we may measure the distance between two points x = (x1, x2) and y = (y1, y2) in the space R² by
d1(x, y) = √((x1 − y1)² + (x2 − y2)²).
This real-valued function clearly meets requirements (1)-(4) of Definition 4.1, indicating that it is a metric. On the other hand, we have the so-called "taxicab metric",
d2(x, y) = |x1 − y1| + |x2 − y2|,
which satisfies the properties as well. As a result, both d1 and d2 are metrics on the space R². The most general form of d1, the Euclidean metric on Rⁿ, is
de(x, y) = √((x1 − y1)² + (x2 − y2)² + (x3 − y3)² + ⋯ + (xn − yn)²).
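The two metrics are straightforward to implement; here is a brief Python illustration of ours for d1 and d2, with a spot-check of two of the metric axioms:

    from math import sqrt

    def d1(x, y):
        """Euclidean metric."""
        return sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

    def d2(x, y):
        """Taxicab metric."""
        return sum(abs(a - b) for a, b in zip(x, y))

    p, q, r = (0.0, 0.0), (3.0, 4.0), (1.0, 1.0)
    print(d1(p, q), d2(p, q))          # 5.0 and 7.0
    # Spot-check symmetry and the triangle inequality for both metrics.
    for m in (d1, d2):
        assert m(p, q) == m(q, p)
        assert m(p, q) <= m(p, r) + m(r, q)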
Definition 4.3. A sequence {xn} in a metric space (X, d) is called a Cauchy sequence if, for any ε > 0, there is an N ∈ N such that d(xn, xm) < ε for all n, m > N.
In other words, as one proceeds further down a Cauchy sequence, the points become closer and closer together. They do not, however, need to converge to a point of the space X. Consider the metric space (Q, d), where Q denotes the rational numbers and d the Euclidean metric. The number e is not in Q, yet the sequence of rationals an = (1 + 1/n)ⁿ converges to e; this Cauchy sequence therefore does not converge in Q. As a result, the following definition is appropriate:
Definition 4.4. A metric space (X, d) is complete if every Cauchy sequence in X converges in X.
The concept of compact subsets is another crucial topic in the development of fractal theory. To grasp it, one must first recall what a subsequence is. Consider a sequence {xn}; a subsequence {xn_k} can be created from {xn} by deleting some of the elements while keeping the order of the remaining elements the same. For example, the sequence 1/2, 1/4, 1/6, … is the even-denominator subsequence of the sequence {1/n} = 1, 1/2, 1/3, …
Definition 4.5. Let E ⊂ X be a subset of the metric space (X, d). If every infinite sequence in E possesses a subsequence that converges to an element of E, then E is said to be compact.
The concept of compact subsets might seem a bit odd to the reader who has not encountered it before. Therefore we will state some other definitions regarding subsets of metric spaces, together with a theorem, that will help with the intuition of compact subsets.
Definition 4.6.A subset E of a metric space (X, d) is open if, for each point p ∈ E there is some r > 0 such that {q ∈
X : d(p, q) < r} is contained in E.
Definition 4.7. A subset E of a metric space (X, d) is closed if the complement of E, denoted Eᶜ, is open.
Remark. A closed set contains all its limit points.
Definition 4.8. A subset E of a metric space (X, d) is bounded if there exists a real number M > 0 and a point q ∈ X such that d(p, q) < M for all p ∈ E.
Theorem 4.1. Any compact subset K ⊂ X in a metric space (X, d) is closed and bounded.
The converse of Theorem 4.1 is not true in general, but we do obtain equivalence when X = Rᵏ with the Euclidean metric; this is the Heine-Borel Theorem. The fractals outlined above, as well as those built later, can be regarded as closed and bounded subsets of R² (or of R in the case of the Cantor set).
Definition 4.9. An Iterated Function System (IFS) is a finite set of contraction mappings on a complete metric space (X, d),
(X, d),
{wi : X → X | i = 1, 2, . . . , N}.
Each contraction mapping wi has a corresponding contractivity factor ci . An alternative notation for the same IFS
is,
{X; w1, w2, . . . , wN }.
Two suitably chosen contraction mappings w1 and w2 could, for instance, be used to construct the Cantor set. We will now state a theorem that generalises this concept for any given IFS.
Recall that if E is a subset of X and f : X → X is a function, then we define f(E) = {f(x) : x ∈ E}.
Theorem 4.2. Let {X; wi, i = 1, 2, . . . , N} be an IFS with contractivity factor c = max{c1, c2, . . . , cN}. Here H(X) denotes the collection of nonempty compact subsets of X, which is itself a complete metric space under the Hausdorff metric h. Define the transformation W : H(X) → H(X) by
W(B) = ∪_{i=1}^{N} wi(B) for all B ∈ H(X). Then:
(i) W is a contraction mapping with contractivity factor c with respect to the Hausdorff metric;
(ii) its unique fixed point A ∈ H(X), satisfying A = W(A) = ∪_{i=1}^{N} wi(A), is given by
A = limₙ→∞ W∘ⁿ(B) for any B ∈ H(X). The fixed point A ∈ H(X) is called the attractor of the IFS.
Example 4.1 (Sierpinski triangle). Earlier we built the Sierpinski triangle by removing the middle part of a triangle and repeating the process. However, the Sierpinski triangle can also be represented using an iterated function system. Start with a solid triangle T0. Then T1 is constructed with the use of three affine transformations: each transformation scales the triangle by a half and places the scaled-down triangle (by means of a translation) in one of the corners of T0.
The corresponding IFS is given by {R²; w1, w2, w3}, where the contractive transformations w1, w2 and w3 are given by
w1(x1, x2) = (x1/2, x2/2),
w2(x1, x2) = (x1/2 + 1/2, x2/2),
w3(x1, x2) = (x1/2 + 1/4, x2/2 + √3/4).
The attractor T of this IFS is the Sierpinski triangle and is given by
T = limn→∞W◦n(T0).
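A minimal Python sketch (ours) of this IFS applies the Hutchinson operator W(B) = w1(B) ∪ w2(B) ∪ w3(B) to an arbitrary starting set; by Theorem 4.2 the iterates approach the Sierpinski triangle regardless of the starting set.

    from math import sqrt

    w = [lambda p: (p[0] / 2,        p[1] / 2),                # w1
         lambda p: (p[0] / 2 + 0.5,  p[1] / 2),                # w2
         lambda p: (p[0] / 2 + 0.25, p[1] / 2 + sqrt(3) / 4)]  # w3

    def W(points):
        """One application of the Hutchinson operator."""
        return {wi(p) for p in points for wi in w}

    B = {(0.3, 0.2)}                 # any nonempty starting set works
    for _ in range(8):
        B = W(B)
    print(len(B), "points approximating the Sierpinski triangle")  # up to 3^8 = 6561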
Theorem 4.2 states that the attractor of an IFS is unique: given an IFS, it does not matter what the initial set is; in the end, the iterates of W will tend to its attractor. For the inverse question, how one should go about finding an IFS for a given attractor, the Collage Theorem proves useful.
Theorem 4.3 (The Collage Theorem, Barnsley). Let (X, d) be a complete metric space. Let L ∈ H(X) and ε ≥ 0 be given. Choose an IFS {X; w1, w2, . . . , wn} with contractivity factor 0 ≤ c < 1 such that
h(L, ∪_{i=1}^{n} wi(L)) ≤ ε,
where h is the Hausdorff metric. Then h(L, A) ≤ ε/(1 − c), where A is the attractor of the IFS.
This result follows from the contraction mapping theorem: if one considers f to be a contraction mapping with contractivity factor c and unique fixed point x_f, and lets x ∈ X be such that d(x, f(x)) < ε for a given ε > 0, then
d(x, x_f) ≤ d(x, f(x))/(1 − c) ≤ ε/(1 − c).
In Example 4.1 above, we identified the IFS by analysing which transformations are needed for constructing the Sierpinski triangle.
Despite the fact that high-speed Internet access is spreading around the world and connection speeds are increasing,
it is still limited. The time it takes to send a data file is determined not only by the connection speed, but also by the
file's memory size. As a result, sending a high-resolution image or a collection of images may still take some time.
Compressing the images reduces the amount of data that must be transferred, as well as the time it will take.But how
can images be compressed? The human eye's sensitivity to a variety of information losses is an important
characteristic. In other words, an image can be altered in ways that the human eye is incapable of detecting. If there
is a lot of redundant data that doesn't affect the "big picture," the data can be greatly compressed. Lossy compression methods are those that lose some information during the compression process, whereas lossless compression methods do not lose any original data.
The basic idea behind fractal image compression is to store (also known as encode) images as a set of
transformations. To be useful, there must be a method to decompress the image, i.e., a method to reconstruct the
image from the stored information. The decompression (or decoding) process entails repeatedly applying the
transform to an arbitrary starting image, resulting in an image that is either the original or, in most cases, very
similar to it. Instead of pixel values, each image in Figure 7 can be saved as a collection of affine transformations. If
the numbers in the transformations are of the commonly used data type "float," then each number has a memory size
of 32 bits. Storing the tree as a collection of transformations, for example, only requires 4 transformations × 6 numbers × 32 bits per number = 768 bits. Storing it as a collection of pixels, on the other hand, necessitates 512 × 512 × 1 = 262,144 bits for a resolution of 512 × 512 (since the image is only black and white, we only need 1 bit per pixel to store the colour). With this in mind, one might wonder if it is possible to find a small number of affine transformations that
represent any given image. The answer is simply no, because a natural image is not exactly self-similar, but it is also
not completely devoid of self-similarity. As previously stated, when looking at an image, one may notice a portion
of it that, when scaled and rotated, fits into another portion of the same image. Self-similarities of this type can be
found in most images of faces, cars, mountains, and so on. To make use of these similarities, we must partition the
image in some way and compare the bits and pieces.
If we define F to be the space of real-valued square-integrable functions f : S → G, then F together with the metric
d∗ forms a complete metric space. [6]
Recall that W : F → F is a contraction mapping if for some constant c, 0 ≤ c < 1,
d∗(W(f), W(g)) ≤ c · d∗(f, g) for all f, g ∈ F,
where c is called the contractivity factor of W. Then, by the contraction mapping theorem, there exists a unique fixed point fW ∈ F satisfying W(fW) = fW.
We can state the Collage Theorem for grayscale images in the following way. Let f be a grayscale image and assume that W : F → F is a contraction mapping such that
d∗(f, W(f)) ≤ ε.
Then
d∗(f, fW) ≤ ε/(1 − c),
where c is the contractivity factor of W, fW is its fixed point, and W∘ⁿ(f0) → fW ≈ f for any initial image f0.
The image is first partitioned into m non-overlapping range blocks Ri (1 ≤ i ≤ m). These range blocks can be thought of as functions Ri : Rᵢ → G from the "spatial part" Rᵢ ⊂ S of the range block Ri to G. Then another partition of the image is made, this time into n non-overlapping domain blocks Dj (1 ≤ j ≤ n). In the same way, Dj : Dⱼ → G are functions from the "spatial part" Dⱼ of Dj to G. Each domain block has in general twice the side length of the range blocks. An illustrative example can be seen in Figure 6.
Given these two partitions, the fractal block coding algorithm searches, for each range block Ri, for the best match amongst the domain blocks. Since it is unlikely that all range blocks have a good match amongst the domain blocks, we are allowed to modify said domain blocks. This can be done by shifting the grayscale value of the entire domain block by a constant β and scaling each grayscale value by a constant α. In the end we have, for each range block Ri, a matching domain block Dj(i) together with the αi and βi values for the match. The list of triples ("index of domain block", α, β) forms the encoding of the image.
Figure 6:- Example of 64 range blocks of size B × B and 16 Domain Blocks of size 2B × 2B.
Let αi and βi denote the best grayscale scaling and shift respectively, and let vi : Dⱼ₍ᵢ₎ → Rᵢ denote the unique affine map of the form vi(x, y) = (1/2)(x, y) + (a, b) mapping Dⱼ₍ᵢ₎ onto Rᵢ, for 1 ≤ i ≤ m. Then we have the following theorem:
Theorem 5.1. Let Wi : F → F, for 1 ≤ i ≤ m, be defined as
Wi(f)(x, y) = αi f(vi⁻¹(x, y)) + βi if (x, y) ∈ Rᵢ, and Wi(f)(x, y) = 0 if (x, y) ∈ S \ Rᵢ.
Then, if αi²/4 < 1, Wi is a contraction.
Since the Rᵢ's (1 ≤ i ≤ m) form a partition of S, we can define W : F → F by
W(f)(x, y) = Σ_{i=1}^{m} Wi(f)(x, y) = αi f(vi⁻¹(x, y)) + βi if (x, y) ∈ Rᵢ.
If we choose the αi such that Wi is a contraction for all i ∈ {1, 2, . . . , m}, then by the contraction mapping theorem, iteratively applying Wi to any starting image f recovers the fixed point fWi. If we now define fW as the sum of all the fWi, we have the following theorem.
Theorem 5.2. Suppose c := maxᵢ |αi/2| < 1. Let fWi denote the unique fixed point of the contraction mapping Wi (i = 1, . . . , m), and let fW = Σ_{i=1}^{m} fWi. If W(f) = Σ_{i=1}^{m} Wi(f) for any f ∈ F, then there exists a constant γ depending on f such that
d∗(W∘ʲ(f), fW) ≤ γ · cʲ.
Proof:
d∗(W∘ʲ(f), fW) ≤ Σ_{i=1}^{m} d∗(Wi∘ʲ(f), fWi)
≤ Σ_{i=1}^{m} (|αi|/2)ʲ d∗(f, fWi)
≤ (maxᵢ |αi|/2)ʲ Σ_{i=1}^{m} d∗(f, fWi)
= γ · cʲ, where γ = Σ_{i=1}^{m} d∗(f, fWi).
Before a domain block can be compared with a range block, it must first be scaled down to the range-block size. This is accomplished by averaging the domain block's pixel values, thereby reducing its size to that of a range block.
The fractal block coding algorithm for grayscale digital images can now be stated as follows. The image is partitioned in two ways, just like in the original algorithm: the first partition is made up of non-overlapping domain blocks, while the second is made up of non-overlapping range blocks (see Figure 6). Then, for each range block Ri, we find the domain block, together with a transformation, that is closest to Ri. The transformations tested for each domain block include:
• flipping;
• rotating;
• changing contrast and brightness.
Flipping is simply a reflection of the scaled-down domain block, and the rotations include rotating the block by 0°, 90°, 180° or 270°. We have eight variants of each domain block to compare each range block with, because we can flip or not flip the domain block and then rotate it in four different ways. Then, for each range block Ri, we find the version of the downscaled domain block D̂j(i), together with a contrast scaling constant α and a brightness controlling constant β, which gives the lowest root mean square (rms) error, i.e. we find the function and domain block
wi(Dj(i)) = α × rotate(flip(D̂j(i))) + β
that minimizes the rms error with f̃ = Ri and g̃ = wi(Dj(i)). The algorithm can be stated as follows:
Algorithm 2: Enhanced Fractal Block Coding
1: for Ri (1 ≤ i ≤ m) do
2:   for Dj (1 ≤ j ≤ n) do
3:     Downscale Dj to match the size of Ri and call it D̂j.
4:     Generate D̂j,k (k = 1, . . . , 8), the different rotations and flippings of D̂j.
5:     Find the best α and β for each pair (Ri, D̂j,k) using rms.
6:     Compute the error using rms, and if the error is smaller than for any other D̂j,k,
       remember the pair (Ri, Dj) along with the rotation, flipping, α and β.
7:   end for
8: end for
The decoding for both algorithms is nearly identical, the only difference being the saved parameters. We begin by generating an arbitrary image of the same size as the original image, and then apply the transformations corresponding to the saved parameters iteratively a fixed number of times. With the restriction |α| < 2 we ensure that each transformation is a contraction, and then by Theorem 5.2 the decoded image will be close to the original.
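A condensed Python sketch of the enhanced algorithm's inner search is given below. It is our reconstruction under the assumptions stated above (NumPy arrays, square blocks, domain blocks with twice the range-block side length); the function names are illustrative, not from the paper's implementation. It downscales a domain block by 2 × 2 averaging, fits α and β by least squares, and keeps the orientation with the smallest rms error.

    import numpy as np

    def downscale(block):
        """Average 2x2 pixel groups, halving each side of a domain block."""
        h, w = block.shape
        return block.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def best_alpha_beta(D, R):
        """Least-squares alpha, beta minimizing ||alpha*D + beta - R||."""
        var = D.var()
        alpha = ((D - D.mean()) * (R - R.mean())).mean() / var if var > 0 else 0.0
        alpha = float(np.clip(alpha, -1.0, 1.0))  # stay well inside |alpha| < 2
        return alpha, float(R.mean() - alpha * D.mean())

    def orientations(block):
        """The 8 variants of a block: 4 rotations, flipped or not."""
        for flipped, b in enumerate((block, np.fliplr(block))):
            for rot in range(4):
                yield np.rot90(b, rot), rot, bool(flipped)

    def encode_range_block(R, domain_blocks):
        """Search all domain blocks and orientations for the best match to R."""
        best = None
        for j, D in enumerate(domain_blocks):
            for Dhat, rot, flipped in orientations(downscale(D)):
                alpha, beta = best_alpha_beta(Dhat, R)
                err = np.sqrt(((alpha * Dhat + beta - R) ** 2).mean())  # rms
                if best is None or err < best[0]:
                    best = (err, j, rot, flipped, alpha, beta)
        return best  # (rms, domain index, rotation, flip, alpha, beta)

Decoding then starts from any image and repeatedly applies, for each range block, the stored map α · rotate(flip(downscale(Dj))) + β; by Theorem 5.2 the iterates approach an approximation of the original image.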
When working with lossy image compression (such as fractal image compression), it is useful to be able to measure the quality of the decompressed image. One common method is to compute the peak signal-to-noise ratio (PSNR). To do so, we must first calculate the mean square error (MSE). For the original m × n image f and the lossy compressed image f∗, the MSE is defined as:
MSE = (1/(m · n)) Σ_{i=1}^{m} Σ_{j=1}^{n} |f(i, j) − f∗(i, j)|².
Since we are working with 8-bit grayscale images, the largest pixel value is 255; given this together with the MSE, the PSNR is defined as:
PSNR = 10 · log₁₀(255² / MSE).
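In Python (our sketch), both quantities reduce to a few lines:

    import numpy as np

    def psnr(original, decoded):
        """PSNR in dB between two 8-bit grayscale images of equal size."""
        mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255 ** 2 / mse)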
It is important to note that the PSNR only measures the overall difference between two images' pixel values. In other words, it says nothing about how the human eye will perceive the image quality. A high PSNR is considered better, as we aim to have a small mean square error between the original and the approximated image.
Implementation and Results:-
The two fractal compression algorithms presented above were implemented in Python. To test and compare the algorithms, experiments on two different types of images were conducted. The image considered here is a QR code with the message "Fractal image compression"; the original images can be seen in Figure 7. By Theorem 5.1 we need to restrict |α| < 2 to ensure a contraction. However, in practice it is useful to restrict |α| further to reduce the number of iterations needed before reaching the fixed point. This might affect the quality of the image a bit, but it guarantees that the sequence of images converges faster, so only a few decoding steps are necessary.
The memory size of an image compressed by either of the presented algorithms does not depend on the original image per se. What determines the memory size is the number of range and domain blocks we choose, together with the choices of α and β. For the enhanced method we also need to take the rotation and flipping into account. The test images in Figure 9 both have resolution 512 × 512, so the memory size after compression will be the same when using the same range block size. Two different sizes of the range blocks were tested. In the first case they were 16 × 16, with the corresponding domain block size of 32 × 32; since the resolution of the test images is 512 × 512, we have a total of 256 domain blocks of size 32 × 32, so the index of the domain block requires 8 bits. In the second case we have range blocks of size 8 × 8 and domain blocks of size 16 × 16, which results in a total of 1024 domain blocks, so here 10 bits are required for the index. The α's were chosen to be any combination of c1·(1/2) + c2·(1/4) + c3·(1/8) + c4·(1/16), where ci ∈ {0, 1} for i = 1, 2, 3, 4; thus 4 bits were used to represent them. The β's, on the other hand, were chosen to take any integer value between −255 and 255, which is a total of 2⁹ − 1 different values. Therefore, 9 bits were needed in the representation of the β's.
The image created by the standard fractal block coding algorithm is stored as 1024 · (8 + 4 + 9) = 21504 bits = 2688 bytes for the larger block sizes and 4096 · (10 + 4 + 9) = 94208 bits = 11776 bytes for the smaller block sizes. As mentioned, the enhanced method also needs to store information about the rotation and flipping of the domain block: 1 bit is used for the flipping and 2 bits for the rotation. This results in a total memory size, for the images with the larger block sizes, of 1024 · (8 + 1 + 2 + 4 + 9) = 24576 bits = 3072 bytes, and for the smaller block sizes 4096 · (10 + 1 + 2 + 4 + 9) = 106496 bits = 13312 bytes. By dividing the memory size of the original images (which is 512 · 512 · 8 = 2097152 bits = 262144 bytes) by the memory size of the compressed images, we get the compression ratio. For example, the compression ratio of the standard fractal block coding algorithm with range blocks of size 16 × 16 is 262144/2688 ≈ 97.5.
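The bit-count bookkeeping above is easy to reproduce; the following Python sketch of ours prints the four memory sizes and compression ratios reported below in Fig. 8:

    def compressed_bits(range_blocks, index_bits, enhanced):
        """Per range block: domain index + 4 bits (alpha) + 9 bits (beta),
        plus 1 bit (flip) + 2 bits (rotation) for the enhanced method."""
        return range_blocks * (index_bits + 4 + 9 + (3 if enhanced else 0))

    original_bits = 512 * 512 * 8     # 8-bit grayscale, 512 x 512
    for blocks, idx, enh in [(1024, 8, False), (4096, 10, False),
                             (1024, 8, True), (4096, 10, True)]:
        bits = compressed_bits(blocks, idx, enh)
        print(bits // 8, "bytes; ratio", round(original_bits / bits, 1))
    # -> 2688 (97.5), 11776 (22.3), 3072 (85.3), 13312 (19.7)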
The original image is segmented into parts such that each part is nearly the same as a reduced copy of the original image. The union of all the segments is then close enough to the original image. Thus, images with global self-similarity are encoded with extreme efficiency. Unfortunately, a general image is not always globally self-similar; in such images, self-similarity exists only locally, amongst different small parts of the image. See the following image.
It has been observed that images in nature contain a considerable amount of affine redundancy. Affine redundancy means that large segments of the image look like small segments of the same image. Large segments are known as domain blocks, whereas small segments are known as range blocks. We can find an affine transformation (a combination of rotation, reflection, scaling and shifting transformations) that transforms a domain block into a suitable range block. The parameters of the transformation constitute a fractal code. Thus a range block is approximated by applying an affine transformation to a suitably chosen domain block. Since the mapping reduces the size of the domain block, it is a contractive mapping. Fractal image compression works as follows:
1. The image is partitioned into non-overlapping range blocks. Generally, the partition of an image may have any arbitrary shape (squares, rectangles, triangles, quadrilaterals, or any polygon).
2. The same image is partitioned into overlapping domain blocks. Domain blocks are larger than the range blocks in order to maintain the contractive condition.
3. Finally, the image is encoded by using a suitable affine transformation which maps a domain block to a best-fitted range block.
4. To achieve the decompression, the stored affine transformations are applied iteratively to recover the image. Usually 8 to 9 decoding iterations are applied to the encoded image. The iteration starts with any arbitrary image; successive application of the affine maps gives a sequence of images that ultimately converges to a fixed image (by the Banach fixed point theorem).
Conclusion:-
Since a QR code consists mainly of very dark or very bright grayscale values, each "error" is quite noticeable, especially for the larger block sizes. Nevertheless, all four fractal images of the QR code in Figure 8 are successfully readable by a QR code scanner. Our purpose was to give a digital version of the Banach fixed point theorem by introducing θ-contractive type mappings. These results are applications of fixed point theory in digital metric space, and they will be useful for digital topology and fixed point theory. In the future, we will also use fixed point theory to solve further problems in digital images.
Fig 8:- Results of both fractal compression methods for the QR code with two different range block sizes. The PSNR, memory size and compression ratio are presented for each compressed image:
(a) Standard algorithm, range block size 16 × 16: PSNR 17.6, memory size 2688 bytes, compression ratio 97.5.
(b) Standard algorithm, range block size 8 × 8: PSNR 26.9, memory size 11776 bytes, compression ratio 22.3.
(c) Enhanced algorithm, range block size 16 × 16: PSNR 19.2, memory size 3072 bytes, compression ratio 85.3.
(d) Enhanced algorithm, range block size 8 × 8: PSNR 28.0, memory size 13312 bytes, compression ratio 19.7.
Finding the best match for each range block is the most computationally intensive part of the algorithms. Because the enhanced method includes 8 variants of each domain block, its encoding run-time is approximately 8 times longer with the current implementation. This is significant because encoding is already a lengthy procedure, especially for smaller block sizes. In theory, the enhanced method should produce better (or at least comparable) results than the standard fractal block algorithm. The PSNR results back this up, but the memory trade-off may not be worth it in most cases.
Fractal image compression, as previously stated, is a lossy compression method. The Joint Photographic Experts
Group, or JPEG, is a more well-known lossy compression method. Even though the fractal block algorithm has a
high compression ratio at times, the long encoding time is a significant disadvantage. Other fractal image
compression methods attempt to address this shortcoming in order to make fractal compression a more competitive
option. However, existing fractal methods are still regarded as time-consuming compression methods when
compared to, say, JPEG.
Thus, contractive mappings and the fixed point theorem are at the core of fractal image compression. An important aspect of fractal image decoding is resolution independence: we may compress a 128 × 128 image and decompress it to any size, say 64 × 64 or 256 × 256. Fractal image compression can produce better reconstructed images than the JPEG (Joint Photographic Experts Group) technique.
Acknowledgements:-
The author is thankful to the anonymous referees for their critical remarks. Their valuable suggestions were key to improving the article, especially the last section.
References:-
1. Banach S, "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales", Fund. Math., 3, pp. 133-181, 1922.
2. Brouwer LEJ, "Über eineindeutige, stetige Transformationen von Flächen in sich", Math. Ann., 69, pp. 176-180, 1910.
3. Brouwer LEJ, "Über Abbildung von Mannigfaltigkeiten", Math. Ann., 71, pp. 97-115, 1912.
4. Kong TY, "A Digital Fundamental Group", Computers and Graphics, 13, pp. 159-166, 1989.
5. Boxer L, "Digitally Continuous Functions", Pattern Recognition Letters, 15, pp. 833-839, 1994.
6. Boxer L, "A Classical Construction for the Digital Fundamental Group", J. Math. Imaging Vis., 10, pp. 51-62, 1999.
7. Boxer L, "Properties of Digital Homotopy", J. Math. Imaging Vis., 22, pp. 19-26, 2005.
8. Rosenfeld A, "Continuous Functions on Digital Pictures", Pattern Recognition Letters, 4, pp. 177-184, 1986.
9. Ege O, Karaca I, "Applications of the Lefschetz Number to Digital Images", Bull. Belg. Math. Soc. Simon Stevin, 21, pp. 823-839, 2014.
10. Ege O, Karaca I, "Banach Fixed Point Theorem for Digital Images", J. Nonlinear Sci. Appl., 8, pp. 237-245, 2015.
11. Ege O, Karaca I, "Lefschetz Fixed Point Theorem for Digital Images", Fixed Point Theory Appl., 13 pages, 2013.
12. Han SE, "Non-product Property of the Digital Fundamental Group", Inform. Sci., 171, pp. 73-91, 2005.
13. Rosenfeld A, "Digital Topology", Amer. Math. Monthly, 86, pp. 76-87, 1979.
14. Han SE, "Banach Fixed Point Theorem from the Viewpoint of Digital Topology", J. Nonlinear Sci. Appl., 9, pp. 895-905, 2016.
15. Sridevi K, Kameshwari MVR, Kiran DMK, "Fixed Point Theorem for Digital Contractive Type Mappings in Digital Metric Space", International Journal of Mathematics Trends and Technology, 48(3), pp. 159-167, 2017.
16. Mandelbrot B, The Fractal Geometry of Nature, 2nd edition, W.H. Freeman and Co., San Francisco, 1982.
17. Hutchinson J, "Fractals and Self Similarity", Indiana University Mathematics Journal, 30, pp. 713-747, 1981.
18. Barnsley MF, Fractals Everywhere, 2nd edition, Academic Press, San Diego, 1993.
19. Barnsley MF, Sloan AD, "A Better Way to Compress Images", Byte, pp. 215-223, 1988.
20. Falconer K, Fractal Geometry: Mathematical Foundations and Applications, Wiley, 1990.