Papers Published-1
Abstract—In this paper, curve fitting by cubic B-spline curves is used to compress gray-level images. The fitting tool is the progressive and iterative approximation method for least square fitting (LSPIA), which is used to approximate scanned image data. Different from existing methods based on piecewise curves, the image data are fitted by a single curve, which can well preserve the relative information between neighboring pixels. In particular, to reduce the compression ratio, we further exploit some techniques to save storage space. Numerical experiments show that the proposed method outperforms the existing curve-fitting methods.

Index Terms—image compression, LSPIA, cubic B-spline, Hilbert scan, curve fitting.

I. INTRODUCTION

Curve fitting plays an important role in computer aided geometry design (CAGD), image processing, shape modeling and data mining. Several works apply curve fitting to image compression [4], [5], [6], [7], [8], [9], [11]. It is known that the data of a scanned image are very large, so fitting them by higher order polynomials or non-polynomials may bring a large computational cost and cause numerical instability. Hence we should find suitable fitting curves and efficient solvers for these data. For these reasons, piecewise curves have been suggested for fitting scanned data, for example, piecewise Bernstein polynomials of degree 2 [8], [9], piecewise quasi cubic rational Bézier curves [10], trigonometric Bézier curves [11] and so on. Despite the fact that fitting by piecewise curves gives good fitting performance and is easy to operate, there are also some shortcomings.
The aim of curve fitting is to find a cubic B-spline curve

r(t) = \sum_{i=0}^{n} P_i B_{i,3}(t),   (1)

that approximates the points {Q_j}_{j=0}^{m} best. Here the {P_i}_{i=0}^{n} in (1) are the control points to be determined, and the B_{i,3}(t) are the cubic B-spline basis functions defined on the knot vector {0 = u_0 = u_1 = u_2 = u_3 < u_4 < u_5 < ... < u_n < u_{n+1} = u_{n+2} = u_{n+3} = u_{n+4} = 1}; in detail,

B_{i,0}(t) = 1 if u_i <= t <= u_{i+1}, and 0 otherwise;

B_{i,p}(t) = \frac{t - u_i}{u_{i+p} - u_i} B_{i,p-1}(t) + \frac{u_{i+p+1} - t}{u_{i+p+1} - u_{i+1}} B_{i+1,p-1}(t),   p = 1, 2, 3.

For simplicity, we denote the cubic B-spline basis function B_{i,3}(t) by B_i(t) in this paper. Oftentimes, the number of control points is less than the number of points to be fitted, i.e., n < m.

The main idea of the least square fitting (LSF) method is to find an optimal control polygon {P_i}_{i=0}^{n} that minimizes the distances between r(t) and {Q_j}_{j=0}^{m}, i.e.,

\min_{P_i} f(P_0, P_1, ..., P_n) = \min_{P_i} \sum_{j=0}^{m} \|Q_j - r(t_j)\|^2 = \min_{P_i} \sum_{j=0}^{m} \|Q_j - \sum_{i=0}^{n} P_i B_i(t_j)\|^2.   (2)

The norm in (2) is the Euclidean norm. The optimal curve r(t) obtained by solving (2) is said to be the LSF curve of {Q_j}_{j=0}^{m}. To minimize f(P_0, P_1, ..., P_n), set the gradient of f to zero, i.e.,

\frac{\partial f}{\partial P_i} = -2 \sum_{j=0}^{m} B_i(t_j) \left( Q_j - \sum_{l=0}^{n} P_l B_l(t_j) \right) = 0,   i = 0, 1, ..., n.

Hence, we have

Q_j - \sum_{i=0}^{n} P_i B_i(t_j) = 0,   j = 0, 1, ..., m.   (3)

Let P = [P_0, P_1, ..., P_n]^T and Q = [Q_0, Q_1, ..., Q_m]^T. Then the equations (3) can be written in the matrix form

BP = Q,   (4)

where the matrix B = (B_i(t_j))_{j=0,1,...,m; i=0,1,...,n} is the so-called collocation matrix resulting from the cubic B-spline basis. Therefore the control polygon {P_i}_{i=0}^{n} can be obtained by solving the linear system (4). Since n < m, the system (4) is over-determined and can be solved through the related system of normal equations, i.e.,

B^T B P = B^T Q.   (5)

As mentioned earlier, the system (5) can be solved by direct solvers or iterative methods. In the following subsection, we will introduce an iterative method for curve fitting with a clear geometric meaning.

In curve fitting, we need to measure the fitting error. Let {Q_j}_{j=0}^{m} be the points to be fitted and t_j be their corresponding parameters. Then we use

\varepsilon = \sqrt{ \frac{1}{m+1} \sum_{j=0}^{m} \|Q_j - r(t_j)\|^2 }   (6)

to represent the fitting error of the fitting curve r(t).

B. LSPIA by using cubic B-spline

Given an ordered point set {Q_j}_{j=0}^{m} to be fitted, let t_j be the parameters of Q_j such that 0 = t_0 < t_1 < ... < t_m < 1. Firstly, we select {P_i^{(0)}}_{i=0}^{n} from {Q_j}_{j=0}^{m} as the initial control points and construct the initial approximate fitting curve

r^{(0)}(t) = \sum_{i=0}^{n} P_i^{(0)} B_i(t).

Let \delta_j^{(0)} = Q_j - r^{(0)}(t_j), j = 0, 1, ..., m. Then the first adjusting vector for the i-th (i = 0, 1, ..., n) control point is given by

\Delta_i^{(0)} = \mu \sum_{j=0}^{m} B_i(t_j) \delta_j^{(0)},

where \mu \in (0, 2/\lambda_0) is a constant and \lambda_0 represents the largest eigenvalue of B^T B.

Next, we can generate a new approximate fitting curve

r^{(1)}(t) = \sum_{i=0}^{n} P_i^{(1)} B_i(t),

where P_i^{(1)} = P_i^{(0)} + \Delta_i^{(0)}, i = 0, 1, ..., n.

Suppose that we have obtained the (k-1)-th (k = 1, 2, ...) curve r^{(k-1)}(t); then the k-th approximate fitting curve can be generated by

r^{(k)}(t) = \sum_{i=0}^{n} P_i^{(k)} B_i(t),   (7)

where

P_i^{(k)} = P_i^{(k-1)} + \Delta_i^{(k-1)},
\Delta_i^{(k-1)} = \mu \sum_{j=0}^{m} B_i(t_j) \delta_j^{(k-1)},   (8)
\delta_j^{(k-1)} = Q_j - r^{(k-1)}(t_j).

Therefore, we get a sequence of curves r^{(k)}(t), k = 0, 1, .... The initial curve is said to have the LSPIA property if r^{(k)}(t) is convergent. The limit curve of r^{(k)}(t) is the LSF curve of {Q_j}_{j=0}^{m}. Deng et al. proved that B-spline curves have the LSPIA property [14].

Let P^{(k)} = [P_0^{(k)}, P_1^{(k)}, ..., P_n^{(k)}]^T. Then the equations (8) can be written in the matrix form

P^{(k+1)} = \mu B^T Q + (I - \mu B^T B) P^{(k)}.   (9)

The LSPIA property means that the sequence of control polygons {P_i^{(k)}}_{i=0}^{n} converges to the control polygon of the LSF curve.

For LSPIA, the fitting error of the k-th approximate fitting curve r^{(k)}(t) is given by

\varepsilon^{(k)} = \sqrt{ \frac{1}{m+1} \sum_{j=0}^{m} \|Q_j - r^{(k)}(t_j)\|^2 }.   (10)
Original image → Hilbert scan → Fitting the scanned data by LSPIA → Sample at the fitting curve → Compressed image

Fig. 1. The main process of image compression.

A. Construction of Hilbert curve

There are many methods to scan images [9], e.g., raster scan, Z-scan and Hilbert scan. Recently, researchers are more likely to use the Hilbert scan in the area of image processing, because the Hilbert scan can preserve the relative information between neighboring pixels.
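As an illustration of how such a scan order can be produced, the sketch below uses the classic bit-manipulation mapping from an index along the Hilbert curve to grid coordinates on a 2^k × 2^k image; this particular construction is an assumption for illustration, since the paper's own construction is not reproduced in this excerpt.

```python
def hilbert_d2xy(order, d):
    """Map index d (0 <= d < 4**order) along the Hilbert curve to
    (x, y) coordinates on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y


def hilbert_scan(image):
    """Flatten a 2**k x 2**k gray-level image (a list of rows) into the
    1-D Hilbert sequence {Q_j}."""
    n = len(image)
    order = n.bit_length() - 1
    return [image[y][x] for x, y in
            (hilbert_d2xy(order, d) for d in range(n * n))]
```

Unlike a raster scan, consecutive entries of this sequence always come from 4-adjacent pixels, which is precisely why the Hilbert scan preserves local gray-level coherence.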
where f(i) = \lceil (m+1) i / n \rceil.

C. Compression quality metrics

Very often, the peak signal to noise ratio (PSNR) and the CR are used to measure the quality of the compressed image. We review the definitions of CR and PSNR [1]. For the original data Q_j (j = 0, 1, ..., m) and the reconstructed data \tilde{Q}_j = r^{(k)}(t_j), the PSNR is defined as

PSNR = 10 \times \log_{10} \frac{255^2}{MSE},   (14)

where MSE = \frac{1}{m+1} \sum_{j=0}^{m} \|Q_j - r^{(k)}(t_j)\|^2. The PSNR is measured in decibels (dB). Generally speaking, the bigger the PSNR, the higher the quality of the image, and vice versa.

The CR is defined as

CR = \frac{B_c}{B_{uc}},   (15)

where B_c represents the number of bits in the compressed data and B_{uc} represents the number of bits in the uncompressed data. The CR is termed bit per pixel (bpp) in image compression.

Next we discuss the CR of the proposed method. Consider an image of size M × M pixels with gray levels {0, 1, ..., 255}. The corresponding Hilbert curve is approximated by a cubic B-spline curve r^{(k)}(t), which can be stored by saving the control points P_i^{(k)} (i = 0, 1, ..., n) as well as the knot vector u_i (i = 0, 1, ..., n). Besides, having obtained the fitting curve r^{(k)}(t), we would also need to save the parameters t_j in order to regenerate the approximate image data by sampling at t_j, i.e., r^{(k)}(t_j) (j = 0, 1, ..., m). Clearly, this would bloat the storage unnecessarily and make it difficult to compress the image efficiently. It should be pointed out that when the uniform parametrization is used, we never need to save the t_j. In addition, we can then obtain the knot vector u_i according to (12). This means that we can save massive storage space.

According to (7), we remark that the coordinates of the control points P_i^{(k)} need not be integers. It is well known that decimals require larger storage space than integers. To save storage space, we only take the integer parts of the coordinates as the control points of the cubic B-spline curve. This is desirable in image compression for the following two reasons. On one hand, since B_i(t) <= 1 (t \in [0, 1]), the decimal parts of the coordinates have little influence on the results; more exactly, the impact of the ignored decimal parts on the recovered data is no more than 1. On the other hand, the image data are in the gray levels {0, 1, ..., 255}, so we have to round off the decimal parts of the recovered data anyway. The rationality of rounding the control points to integers is also verified by numerical experiments.

D. Image compression algorithm

To ensure computational efficiency, the iteration (9) can also be terminated if the following stopping criterion is satisfied:

\varepsilon^{(k)} \geq \theta \varepsilon^{(k-1)},   \theta \in (0, 1).   (16)

Finally, we summarize the image compression algorithm as the following Algorithm 3.2.

Algorithm 3.2: (Image compression algorithm)
Input: a gray-level image.
Output: a compressed image.
1) Scan the image to obtain the Hilbert sequence {Q_j}_{j=0}^{m}.
2) Parameterize Q_j with t_j and select the initial control points {P_i^{(0)}}_{i=0}^{n} according to (13).
3) Compute the knot vector for the cubic B-spline according to (12).
4) Compute the collocation matrix B and the optimal value of \mu.
5) For k = 1, 2, ..., k_max:
   (a) Update P^{(k)} = \mu B^T Q + (I - \mu B^T B) P^{(k-1)}.
   (b) Compute the fitting error \varepsilon^{(k)} according to (10).
   (c) If \varepsilon^{(k)} \geq \theta \varepsilon^{(k-1)}, break.
   End for.
6) Compute the approximate fitting curve r^{(k)}(t) according to (7).
7) Sample at t_j and obtain the recovered data \tilde{Q}_j = r^{(k)}(t_j), j = 0, 1, ..., m.
8) Calculate the PSNR and the CR according to (14) and (15), respectively.
9) Display the compressed image.

IV. IMAGE COMPRESSION EXAMPLES

In this section we employ Algorithm 3.2 on the following two well-known images, which are often used to illustrate the effectiveness of image compression methods. All the numerical experiments were done in Matlab R2012b on a PC with an Intel(R) Core(TM) i5-5200U CPU @2.20 GHz and 6 GB RAM.

Firstly, we select n = 6200 control points when we compress the Lena image and n = 4745 control points when we compress the Girl image. The fitting errors of the approximate fitting curves r^{(k)}(t) and the PSNR of the compressed images are shown in Fig. 4(a) and Fig. 4(b), respectively. We observe that the fitting error decreases fast in the first several iterations, and then decreases slowly. Similarly, the PSNR increases fast in the first several iterations, and then slows down. Therefore, there is no need to iterate many times, which would be time-consuming. Hence we add the terminal condition (16) and set \theta = 0.98 in our tests.

By using two different parametrization methods, we list in Table I the number of iterations required, the CR and the PSNR of the compressed images obtained by Algorithm 3.2. We denote by UP the uniform parametrization, by ACP the accumulated chord parametrization, by n the number of control points of the cubic B-spline curve, and by k the number of iterations required. In our tests, we test different values of n.

From Table I, we observe that the quality of the compressed images improves as the number of control points increases; consequently, the CR increases. Besides, although the accumulated chord parametrization can provide good compression results, it is not advisable in practice, as stated in Section III-C, because it takes up a lot of storage space to save the knot vectors and parameters.
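The two metrics reported here, the PSNR of (14) and the CR of (15), can be computed as in the following sketch; it assumes equal-length sequences of gray levels and leaves the counting of the compressed and uncompressed bits to the caller.

```python
import math


def psnr(original, recovered):
    """Peak signal to noise ratio per (14): 10*log10(255^2 / MSE),
    in decibels, for two equal-length gray-level sequences."""
    mse = sum((q - r) ** 2 for q, r in zip(original, recovered)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)


def compression_ratio(bits_compressed, bits_uncompressed):
    """CR per (15): bits of the compressed data over bits of the original."""
    return bits_compressed / bits_uncompressed
```

For the proposed method, the compressed bit count would cover the integer-rounded control points and, under a non-uniform parametrization, the knot vector and the parameters t_j as well, which is exactly the storage overhead discussed in Section III-C.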
Fig. 4. The fitting error and PSNR of the compressed image versus the iteration: (a) fitting error vs iteration; (b) PSNR vs iteration (Lena and Girl).

TABLE I
NUMERICAL RESULTS OF COMPRESSED 128 × 128 IMAGES

                          UP            ACP
Image    n     CR       k   PSNR      k   PSNR
Lena     4000  0.4883   8   26.3166   8   27.4178
         5000  0.6104   7   27.2148   7   28.3812
         6000  0.7324   9   28.1842   9   29.8114
         7000  0.8545  10   29.1810  10   31.4707
         8000  0.9766  10   30.0596  10   32.8733
         9000  1.0986  12   31.1216  12   34.0487
Girl     4000  0.4883   7   29.2893   7   29.9472
         5000  0.6104   7   30.0564   7   30.7359
         6000  0.7324   9   31.0910   9   32.4577
         7000  0.8545  10   32.0776  10   34.1655
         8000  0.9766   9   33.0083   9   35.7128
         9000  1.0986  11   34.0185  11   36.6352

In Fig. 5 and 6, we show the cubic B-spline curves with different n obtained when fitting the scanned Hilbert curves in Fig. 3. In Fig. 7, we show the compressed images obtained by Algorithm 3.2. All the numerical results demonstrate that the proposed method achieves a good performance in image compression.

Fig. 5. Fitting the scanned Hilbert curves of Lena in Fig. 3(a): (a) n = 5000; (b) n = 8000.

Secondly, we employ Algorithm 3.2 to compress 256 × 256 gray images. In Table II, we list the numerical results obtained by Algorithm 3.2. The numerical results obtained by Biswas's methods [9] are also listed for comparison. We can observe that with the same CR, the PSNR of Algorithm 3.2 is bigger than those of Biswas's methods. This means that our method performs much better than Biswas's methods.

The compressed Lena and Girl images are shown in Fig. 8 and 9, respectively. We can see that the compressed images obtained by Biswas's methods suffer from visible blocking artifacts, while the compressed images obtained by Algorithm 3.2 avoid blocking artifacts satisfactorily. Furthermore, it can be found from the detailed views in Fig. 8 and 9 that there exists a sawtooth effect at the edges, no matter whether Algorithm 3.2 or Biswas's methods are used. But the images compressed by Algorithm 3.2 have a weaker sawtooth effect than those by Biswas's methods. These results indicate that the proposed method not only reduces blocking artifacts and the sawtooth effect but also has a stronger compression effect than Biswas's methods.

V. DISCUSSION AND IMPROVEMENT

Oftentimes, compressed images are polluted by many kinds of noise during the process of image compression, especially in lossy compression. Consequently, it will bring
unbearable blocking artifacts, a sawtooth effect at the edges and other defects. At this time, we can improve the quality of the compressed images by employing some pre-processing or post-processing techniques, such as transform coding [3], image filtering [15] and so on.

Here we use some image filtering algorithms to reduce the sawtooth effect at the edges. The median filtering [15] and the Gauss filtering [16] algorithms are employed to filter the compressed image in Fig. 7(c). The filtered images are illustrated in Fig. 10. From these two examples we conclude that the image filtering algorithms can reduce the sawtooth effect of the compressed images but cannot eliminate it entirely.

Fig. 6. Fitting the scanned Hilbert curves of Girl in Fig. 3(b): (a) n = 5000; (b) n = 8000.

TABLE II
COMPARISON OF ALGORITHM 3.2 WITH BISWAS'S METHODS [9]

Fig. 8. Comparison of compressed Lena images.

Fig. 10. Post-processing the compressed image in Fig. 8(c) with image filters.

VI. CONCLUSIONS

In this paper, we have developed an image compression algorithm based on LSPIA, owing to its efficient and reliable performance in data fitting. Compared with other curve fitting methods, the proposed method can well preserve the relative information between neighboring pixels. Numerical experiments also show that the proposed method outperforms similar image compression methods in terms of the CR, the PSNR and the blocking artifacts of the compressed images.
R EFERENCES