
Image denoising with block-matching and 3D filtering

Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian

Institute of Signal Processing, Tampere University of Technology, Finland
PO BOX 553, 33101 Tampere, Finland
firstname.lastname@tut.fi
ABSTRACT
We present a novel approach to still image denoising based on effective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and, due to the similarity between them, the data in the array exhibit a high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and effectively attenuate the noise by shrinkage of the transform coefficients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in a sliding manner, the final estimate is computed as a weighted average of all overlapping block-estimates. A fast and efficient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.
Keywords: image denoising, block-matching, 3D transforms
1. INTRODUCTION
Much of the recent research on image denoising has been focused on methods that reduce noise in transform domain. Starting with the milestone work of Donoho [1, 2], many of the later techniques [3-7] performed denoising in wavelet transform domain. Of these methods, the most successful proved to be the ones [4, 5, 7] based on rather sophisticated modeling of the noise impact on the transform coefficients of overcomplete multiscale decompositions. Not limited to multiscale techniques, overcomplete representations have traditionally played a significant role in improving the restoration abilities of even the most basic transform-based methods. This is manifested by sliding-window transform denoising [8, 9], where the basic idea is to successively denoise overlapping blocks by coefficient shrinkage in local 2D transform domain (e.g. DCT, DFT, etc.). Although the transform-based approaches deliver very good overall performance in terms of objective criteria, they fail to preserve details which are not suitably represented by the used transform and often introduce artifacts that are characteristic of this transform.

A different denoising strategy based on non-local estimation appeared recently [10, 11], where a pixel of the true image is estimated from regions which are found similar to the region centered at the estimated pixel. These methods, unlike the transform-based ones, introduce very few artifacts in the estimates but often oversmooth image details. Based on an elaborate adaptive weighting scheme, the exemplar-based denoising [10] appears to be the best of them and achieves results competitive with the ones produced by the best transform-based techniques.

The concept of employing similar data patches from different locations is popular in the video processing field under the term block-matching, where it is used to improve coding efficiency by exploiting similarity among blocks which follow the motion of objects in consecutive frames. Traditionally, block-matching has found successful application in conjunction with transform-based techniques. Such applications include video compression (the MPEG standards) and also video denoising [12], where noise is attenuated in 3D DCT domain.

We propose an original image denoising method based on effective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We undertake the block-matching concept for a single noisy image; as we process image blocks in a sliding manner, we search for blocks that exhibit similarity to the currently processed one. The matched blocks are stacked together to form a 3D array. In this manner,
we induce high correlation along the dimension of the array in which the blocks are stacked. We exploit this correlation by applying a 3D decorrelating unitary transform which produces a sparse representation of the true signal in 3D transform domain. Efficient noise attenuation is done by applying a shrinkage operator (e.g. hard-thresholding or Wiener filtering) on the transform coefficients. This results in improved denoising performance and effective detail preservation in the local estimates of the matched blocks, which are reconstructed by an inverse 3D transform of the filtered coefficients. After processing all blocks, the final estimate is the weighted average of all overlapping local block-estimates. Because of the overcompleteness due to the overlap, we avoid blocking artifacts and further improve the estimation ability.

Although the proposed approach is general with respect to the type of noise, for simplicity of exposition, we restrict our attention to the problem of attenuating additive white Gaussian noise (AWGN).

The basic approach and its extension to Wiener filtering are presented in Sections 2 and 3, respectively. An efficient algorithm which implements the proposed approach is developed in Section 4. Finally, Section 5 is devoted to demonstration and discussion of experimental results.
2. DENOISING BY SHRINKAGE IN 3D TRANSFORM DOMAIN WITH
BLOCK-MATCHING
Let us introduce the observation model and notation used throughout the paper. We consider noisy observations $z : X \to \mathbb{R}$ of the form $z(x) = y(x) + \eta(x)$, where $x \in X$ is a 2D spatial coordinate that belongs to the image domain $X \subset \mathbb{Z}^2$, $y$ is the true image, and $\eta(x) \sim \mathcal{N}(0, \sigma^2)$ is white Gaussian noise of variance $\sigma^2$. By $Z_x$ we denote a block of fixed size $N_1 \times N_1$ extracted from $z$, which has $z(x)$ as its upper-left element; alternatively, we say that $Z_x$ is located at $x$. With $\hat{y}$ we designate the final estimate of the true image.

Let us state the used assumptions. We assume that some of the blocks (of fixed size $N_1 \times N_1$) of the true image exhibit mutual correlation. We also assume that the selected unitary transform is able to represent these blocks sparsely. However, the diversity of such blocks in natural images often makes the latter assumption unsatisfied in 2D transform domain and fulfilled only in 3D transform domain, due to the correlation introduced by block-matching. The standard deviation $\sigma$ of the AWGN can be accurately estimated (e.g. [1]), therefore we assume its a-priori knowledge.
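For concreteness, the observation model can be set up as in the following minimal Python/NumPy sketch; the clean image y, the value of σ, and the helper name block_at are illustrative placeholders rather than part of the method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean image y (grayscale, range [0, 255]) and noise level sigma.
y = rng.uniform(0.0, 255.0, size=(256, 256))
sigma = 25.0

# Observation model: z(x) = y(x) + eta(x), with eta ~ N(0, sigma^2) i.i.d. (AWGN).
z = y + sigma * rng.normal(size=y.shape)

N1 = 8  # block size (the paper selects N1 in a sigma-dependent range)

def block_at(img, x, n1=N1):
    """Return the n1 x n1 block Z_x whose upper-left element is img[x]."""
    r, c = x
    return img[r:r + n1, c:c + n1]
```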
2.1. Local Estimates
We successively process all overlapping blocks of fixed size in a sliding manner, where "process" stands for the consecutive application of block-matching and denoising in local 3D transform domain. For the sub-subsections to follow, we fix the currently processed block as $Z_{x_R}$, where $x_R \in X$, and denominate it as the "reference block".
2.1.1. Block-matching
Block-matching is employed to find blocks that exhibit high correlation to $Z_{x_R}$. Because its accuracy is significantly impaired by the presence of noise, we utilize a block-similarity measure which performs a coarse initial denoising in local 2D transform domain. Hence, we define a block-distance measure (inversely proportional to similarity) as

$$
d(Z_{x_1}, Z_{x_2}) = N_1^{-1} \left\| \Upsilon\!\left( \mathcal{T}_{2D}(Z_{x_1}),\ \lambda_{thr2D}\,\sigma\sqrt{2\log\left(N_1^2\right)} \right) - \Upsilon\!\left( \mathcal{T}_{2D}(Z_{x_2}),\ \lambda_{thr2D}\,\sigma\sqrt{2\log\left(N_1^2\right)} \right) \right\|_2, \qquad (1)
$$

where $x_1, x_2 \in X$, $\mathcal{T}_{2D}$ is a 2D linear unitary transform operator (e.g. DCT, DFT, etc.), $\Upsilon$ is a hard-threshold operator, $\lambda_{thr2D}$ is a fixed threshold parameter, and $\|\cdot\|_2$ denotes the $L^2$-norm. Naturally, $\Upsilon$ is defined as

$$
\Upsilon(\gamma, \lambda_{thr}) = \begin{cases} \gamma, & \text{if } |\gamma| > \lambda_{thr}, \\ 0, & \text{otherwise.} \end{cases}
$$
The result of the block-matching is a set $S_{x_R} \subseteq X$ of the coordinates of the blocks that are similar to $Z_{x_R}$ according to our d-distance (1); thus, $S_{x_R}$ is defined as

$$
S_{x_R} = \left\{ x \in X \mid d(Z_{x_R}, Z_x) < \tau_{match} \right\}, \qquad (2)
$$
where $\tau_{match}$ is the maximum d-distance for which two blocks are considered similar. Obviously $d(Z_{x_R}, Z_{x_R}) = 0$, which implies that $|S_{x_R}| \geq 1$, where $|S_{x_R}|$ denotes the cardinality of $S_{x_R}$.

The matching procedure in the presence of noise is demonstrated in Figure 1, where we show a few reference blocks and the ones matched as similar to them.

Figure 1. Fragments of Lena, House, Boats and Barbara corrupted by AWGN of $\sigma = 15$. For each fragment, block-matching is illustrated by showing a reference block marked with R and a few of its matched ones.
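Below is a rough Python sketch of the d-distance (1) and the grouping rule (2). It assumes an orthonormal 2D DCT as a stand-in for $\mathcal{T}_{2D}$ (the experiments in Section 5 use the DFT), and the helper names (hard_threshold, block_distance, match_blocks) as well as the exhaustive full-image search are illustrative only; the actual algorithm of Section 4 restricts the search to an $N_S \times N_S$ neighborhood.

```python
import numpy as np
from scipy.fft import dctn  # orthonormal 2D DCT as a stand-in for T_2D

def hard_threshold(coeffs, thr):
    """Hard-threshold operator: keep only coefficients with magnitude above thr."""
    return np.where(np.abs(coeffs) > thr, coeffs, 0.0)

def block_distance(z1, z2, sigma, lambda_thr2d):
    """d-distance of Eq. (1): L2 distance of coarsely denoised 2D spectra, scaled by 1/N1."""
    n1 = z1.shape[0]
    thr = lambda_thr2d * sigma * np.sqrt(2.0 * np.log(n1 * n1))
    t1 = hard_threshold(dctn(z1, norm="ortho"), thr)
    t2 = hard_threshold(dctn(z2, norm="ortho"), thr)
    return np.linalg.norm(t1 - t2) / n1

def match_blocks(z, x_ref, n1, sigma, lambda_thr2d, tau_match):
    """Grouping rule of Eq. (2): coordinates whose blocks lie within tau_match of the reference.

    Exhaustive search over the whole image; slow, but it mirrors the definition directly.
    """
    ref = z[x_ref[0]:x_ref[0] + n1, x_ref[1]:x_ref[1] + n1]
    matched = []
    for r in range(z.shape[0] - n1 + 1):
        for c in range(z.shape[1] - n1 + 1):
            cand = z[r:r + n1, c:c + n1]
            if block_distance(ref, cand, sigma, lambda_thr2d) < tau_match:
                matched.append((r, c))  # the reference itself is always included (distance 0)
    return matched
```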
2.1.2. Denoising in 3D transform domain
We stack the matched noisy blocks $Z_{x \in S_{x_R}}$ (ordering them by increasing d-distance to $Z_{x_R}$) to form a 3D array of size $N_1 \times N_1 \times |S_{x_R}|$, which is denoted by $\mathbf{Z}_{S_{x_R}}$. We apply a unitary 3D transform $\mathcal{T}_{3D}$ on $\mathbf{Z}_{S_{x_R}}$ in order to attain a sparse representation of the true signal. The noise is attenuated by hard-thresholding the transform coefficients. Subsequently, the inverse transform operator $\mathcal{T}_{3D}^{-1}$ yields a 3D array of reconstructed estimates

$$
\widehat{\mathbf{Y}}_{S_{x_R}} = \mathcal{T}_{3D}^{-1}\!\left( \Upsilon\!\left( \mathcal{T}_{3D}\!\left( \mathbf{Z}_{S_{x_R}} \right),\ \lambda_{thr3D}\,\sigma\sqrt{2\log\left(N_1^2\right)} \right) \right), \qquad (3)
$$

where $\lambda_{thr3D}$ is a fixed threshold parameter. The array $\widehat{\mathbf{Y}}_{S_{x_R}}$ comprises $|S_{x_R}|$ stacked local block estimates $\widehat{Y}^{x_R}_{x \in S_{x_R}}$ of the true image blocks located at $x \in S_{x_R}$. We define a weight for these local estimates as

$$
\omega_{x_R} = \begin{cases} \dfrac{1}{N_{har}}, & \text{if } N_{har} \geq 1, \\ 1, & \text{otherwise,} \end{cases} \qquad (4)
$$

where $N_{har}$ is the number of non-zero transform coefficients after hard-thresholding. Observe that $\sigma^2 N_{har}$ is equal* to the total variance of $\widehat{\mathbf{Y}}_{S_{x_R}}$. Thus, sparser decompositions of $\mathbf{Z}_{S_{x_R}}$ result in less noisy estimates, which are awarded greater weights by (4).

*Equality holds only if the matched blocks that build $\mathbf{Z}_{S_{x_R}}$ are non-overlapping; otherwise, a certain amount of correlation is introduced in the noise.
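A minimal sketch of the collaborative hard-thresholding (3) and the weight (4) follows, assuming an orthonormal 3D DCT in place of $\mathcal{T}_{3D}$; the function name denoise_group_hard and the array layout (|S|, N1, N1) are illustrative choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoise_group_hard(group, sigma, lambda_thr3d):
    """Hard-threshold a stacked group in 3D transform domain (Eq. 3) and return the
    block estimates together with the weight of Eq. (4).

    group: array of shape (|S|, N1, N1), matched noisy blocks ordered by d-distance.
    """
    n1 = group.shape[-1]
    thr = lambda_thr3d * sigma * np.sqrt(2.0 * np.log(n1 * n1))

    spectrum = dctn(group, norm="ortho")               # T_3D applied to Z_S (separable, all axes)
    kept = np.where(np.abs(spectrum) > thr, spectrum, 0.0)
    estimates = idctn(kept, norm="ortho")              # inverse T_3D -> stacked block estimates

    n_har = int(np.count_nonzero(kept))                # retained (non-zero) coefficients
    weight = 1.0 / n_har if n_har >= 1 else 1.0        # Eq. (4)
    return estimates, weight
```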
2.2. Estimate Aggregation
After processing all reference blocks, we have a set of local block estimates $\widehat{Y}^{x_R}_{x \in S_{x_R}}$, $\forall x_R \in X$ (and their corresponding weights $\omega_{x_R}$, $\forall x_R \in X$), which constitute an overcomplete representation of the estimated image due to the overlap between the blocks. It is worth mentioning that a few local block estimates might be located at the same coordinate (e.g. $\widehat{Y}^{x_a}_{x_b}$ and $\widehat{Y}^{x_b}_{x_b}$ are both located at $x_b$ but obtained while processing the reference blocks at $x_a$ and $x_b$, respectively). Let $\widehat{Y}^{x_R}_{x_m}(x)$ be an estimate of $y(x)$, where $x, x_R \in X$ and $x_m \in S_{x_R}$. We zero-extend $\widehat{Y}^{x_R}_{x_m}(x)$ outside its square support in order to simplify the formulation. The final estimate $\hat{y}$ is computed as a weighted average of all local ones, as given by

$$
\hat{y}(x) = \frac{ \displaystyle\sum_{x_R \in X} \sum_{x_m \in S_{x_R}} \omega_{x_R} \widehat{Y}^{x_R}_{x_m}(x) }{ \displaystyle\sum_{x_R \in X} \sum_{x_m \in S_{x_R}} \omega_{x_R} \chi_{x_m}(x) }, \qquad \forall x \in X, \qquad (5)
$$
where $\chi_{x_m} : X \to \{0, 1\}$ is the characteristic function of the square support of a block located at $x_m \in X$.
One can expect substantially overcomplete representation of the signal in regions where a block is matched
to many others. On the other hand, if a match is not found for a given reference block, the method reduces to
denoising in 2D transform domain. Thus, the overcomplete nature of the method is highly dependent on the
block-matching and therefore also on the particular noisy image.
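The weighted averaging of (5) can be organized with two accumulation buffers, one for the weighted block estimates and one for the weights, as in the following sketch (names and data layout are illustrative; the Kaiser-window weighting introduced in Section 4 is omitted here):

```python
import numpy as np

def aggregate(shape, groups):
    """Weighted averaging of overlapping block estimates (Eq. 5).

    shape:  (M, N) size of the image.
    groups: iterable of (coords, estimates, weight) triples, where coords is the list of
            matched block positions S_xR, estimates has shape (|S|, N1, N1), and weight
            is the scalar from Eq. (4) or Eq. (8).
    """
    num = np.zeros(shape)  # numerator of Eq. (5)
    den = np.zeros(shape)  # denominator of Eq. (5)
    for coords, estimates, weight in groups:
        n1 = estimates.shape[-1]
        for (r, c), est in zip(coords, estimates):
            num[r:r + n1, c:c + n1] += weight * est
            den[r:r + n1, c:c + n1] += weight
    # Pixels covered by at least one block have den > 0; the guard avoids division by zero.
    return num / np.maximum(den, 1e-12)
```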
3. WIENER FILTER EXTENSION
Provided that an estimate of the true image is available (e.g. it can be obtained from the method given in the previous section), we can construct an empirical Wiener filter as a natural extension of the above thresholding technique. Because it follows the same approach, we only give the few fundamental modifications that are required for its development, thus omitting repetition of the concept. Let us denote the initial image estimate by $e : X \to \mathbb{R}$. In accordance with our established notation, $E_x$ designates a square block of fixed size $N_1 \times N_1$, extracted from $e$ and located at $x \in X$.
3.1. Modification to Block-Matching

In order to improve the accuracy of block-matching, it is performed within the initial estimate $e$ rather than the noisy image. Accordingly, we replace the thresholding-based d-distance measure from (1) with the normalized $L^2$-norm of the difference of two blocks with subtracted means. Hence, the definition (2) of $S_{x_R}$ becomes

$$
S_{x_R} = \left\{ x \in X \;\middle|\; N_1^{-1} \left\| \left( E_{x_R} - \overline{E}_{x_R} \right) - \left( E_x - \overline{E}_x \right) \right\|_2 < \tau_{match} \right\}, \qquad (6)
$$

where $\overline{E}_{x_R}$ and $\overline{E}_x$ are the mean values of the blocks $E_{x_R}$ and $E_x$, respectively. The mean subtraction allows for improved matching of blocks with similar structures but different mean values.
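A minimal sketch of the mean-subtracted block distance used in (6), assuming two $N_1 \times N_1$ blocks of the initial estimate are passed in (the function name is illustrative):

```python
import numpy as np

def wiener_distance(e1, e2):
    """Normalized L2 distance of two estimate blocks with their means removed (Eq. 6)."""
    n1 = e1.shape[0]
    return np.linalg.norm((e1 - e1.mean()) - (e2 - e2.mean())) / n1
```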
3.2. Modification to Denoising in 3D Transform Domain

The linear Wiener filter replaces the nonlinear hard-thresholding operator. The attenuating coefficients for the Wiener filter are computed in 3D transform domain as

$$
\mathbf{W}_{S_{x_R}} = \frac{ \left| \mathcal{T}_{3D}\!\left( \mathbf{E}_{S_{x_R}} \right) \right|^2 }{ \left| \mathcal{T}_{3D}\!\left( \mathbf{E}_{S_{x_R}} \right) \right|^2 + \sigma^2 },
$$

where $\mathbf{E}_{S_{x_R}}$ is a 3D array built by stacking the matched blocks $E_{x \in S_{x_R}}$ (in the same manner as $\mathbf{Z}_{S_{x_R}}$ is built by stacking $Z_{x \in S_{x_R}}$). We filter the 3D array of noisy observations $\mathbf{Z}_{S_{x_R}}$ in $\mathcal{T}_{3D}$-transform domain by an elementwise multiplication with $\mathbf{W}_{S_{x_R}}$. The subsequent inverse transform gives

$$
\widehat{\mathbf{Y}}_{S_{x_R}} = \mathcal{T}_{3D}^{-1}\!\left( \mathbf{W}_{S_{x_R}} \, \mathcal{T}_{3D}\!\left( \mathbf{Z}_{S_{x_R}} \right) \right), \qquad (7)
$$

where $\widehat{\mathbf{Y}}_{S_{x_R}}$ comprises stacked local block estimates $\widehat{Y}^{x_R}_{x \in S_{x_R}}$ of the true image blocks located at the matched locations $x \in S_{x_R}$. As in (4), the weight assigned to the estimates is inversely proportional to the total variance of $\widehat{\mathbf{Y}}_{S_{x_R}}$ and defined as

$$
\omega_{x_R} = \left( \sum_{i=1}^{N_1} \sum_{j=1}^{N_1} \sum_{t=1}^{|S_{x_R}|} \left| \mathbf{W}_{S_{x_R}}(i, j, t) \right|^2 \right)^{-1}. \qquad (8)
$$
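The empirical Wiener shrinkage (7) and the weight (8) can be sketched as follows, once more with an orthonormal 3D DCT standing in for $\mathcal{T}_{3D}$; the function name and array layout are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoise_group_wiener(noisy_group, estimate_group, sigma):
    """Empirical Wiener filtering of a stacked group in 3D transform domain.

    noisy_group:    (|S|, N1, N1) blocks Z_{S_xR} from the noisy image z.
    estimate_group: (|S|, N1, N1) blocks E_{S_xR} from the initial estimate e.
    """
    e_spec = dctn(estimate_group, norm="ortho")
    w = (np.abs(e_spec) ** 2) / (np.abs(e_spec) ** 2 + sigma ** 2)   # Wiener coefficients

    z_spec = dctn(noisy_group, norm="ortho")
    estimates = idctn(w * z_spec, norm="ortho")                      # Eq. (7)

    # Eq. (8); the small guard only protects against a degenerate all-zero spectrum.
    weight = 1.0 / max(float(np.sum(w ** 2)), 1e-12)
    return estimates, weight
```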
Figure 2. Flowchart for denoising by hard-thresholding in 3D transform domain with block-matching.
4. ALGORITHM
We present an algorithm which employs the hard-thresholding approach (from Section 2) to deliver an initial estimate for the Wiener filtering part (from Section 3), which produces the final estimate. A straightforward implementation of this general approach is computationally demanding. Thus, in order to realize a practical and efficient algorithm, we impose constraints and exploit certain expedients. In this section we introduce these aspects and develop an efficient implementation of the proposed approach.

The choice of the transforms $\mathcal{T}_{2D}$ and $\mathcal{T}_{3D}$ is governed by their energy compaction (sparsity) ability for noise-free image blocks (2D) and stacked blocks (3D), respectively. It is often assumed that neighboring pixels in small blocks extracted from natural images exhibit high correlation; thus, such blocks can be sparsely represented by well-established decorrelating transforms, such as the DCT, the DFT, wavelets, etc. From the point of view of computational efficiency, however, very important characteristics are separability and the availability of fast algorithms. Hence, the most natural choice for $\mathcal{T}_{2D}$ and $\mathcal{T}_{3D}$ is a fast separable transform which allows for sparse representation of the true-image signal in each dimension of the input array.
4.1. Efficient Image Denoising Algorithm with Block-Matching and 3D Filtering
Let us introduce constraints on the complexity of the algorithm. First, we fix the maximum number of matched blocks by setting an integer $N_2$ to be the upper bound for the cardinality of the sets $S_{x_R}$, $x_R \in X$. Second, we do block-matching within a local neighborhood of fixed size $N_S \times N_S$ centered about each reference block, instead of doing it in the whole image. Finally, we use $N_{step}$ as the step by which we slide to the next reference block. Accordingly, we introduce $X_R \subseteq X$ as the set of the reference blocks' coordinates, where $|X_R| \approx |X| / N_{step}^2$ (e.g., $N_{step} = 1$ implies $X_R = X$).

In order to reduce the impact of artifacts on the borders of blocks (border effects), we use a Kaiser window $W_{win2D}$ (with a single parameter $\beta$) as part of the weights of the local estimates. These artifacts are inherent to many transforms (e.g. the DFT) in the presence of sharp intensity differences across the borders of a block.
Let the input noisy image be of size $M \times N$, thus $|X| = MN$. We use two buffers of the same size (ebuff for estimates and wbuff for weights) to represent the summations in the numerator and denominator, respectively, in (5). For simplicity, we extend our notation so that ebuff(x) denotes a single pixel at coordinate $x \in X$ and ebuff$_x$ designates a block located at $x$ in ebuff (the same notation is used for wbuff).
A flowchart of the hard-thresholding part of the algorithm is given in Figure 2 (we do not give one for the Wiener filtering part, since it requires only the few changes given in Section 3). Following are the steps of the image denoising algorithm with block-matching and 3D filtering; a structural sketch of how the steps fit together is given after the list.
1. Initialization. Initialize ebuff(x) = 0 and wbuff(x) = 0, for all $x \in X$.

2. Local hard-thresholding estimates. For each $x_R \in X_R$, do the following sub-steps.

(a) Block-matching. Compute $S_{x_R}$ as given in Equation (2), but restrict the search to a local neighborhood of fixed size $N_S \times N_S$ centered about $x_R$. If $|S_{x_R}| > N_2$, then let only the coordinates of the $N_2$ blocks with smallest d-distance to $Z_{x_R}$ remain in $S_{x_R}$ and exclude the others.

(b) Denoising by hard-thresholding in local 3D transform domain. Compute the local estimate blocks $\widehat{Y}^{x_R}_{x \in S_{x_R}}$ and their corresponding weight $\omega_{x_R}$ as given in (3) and (4), respectively.

(c) Aggregation. Scale each reconstructed local block estimate $\widehat{Y}^{x_R}_{x}$, where $x \in S_{x_R}$, by a block of weights $W(x_R) = \omega_{x_R} W_{win2D}$ and accumulate it into the estimate buffer: ebuff$_x$ = ebuff$_x$ + $W(x_R)\,\widehat{Y}^{x_R}_{x}$, for all $x \in S_{x_R}$. Accordingly, the weight block is accumulated at the same locations in the weights buffer: wbuff$_x$ = wbuff$_x$ + $W(x_R)$, for all $x \in S_{x_R}$.

3. Intermediate estimate. Produce the intermediate estimate $e(x) = \mathrm{ebuff}(x) / \mathrm{wbuff}(x)$ for all $x \in X$, which is used as the initial estimate for the Wiener counterpart.

4. Local Wiener filtering estimates. Use $e$ as the initial estimate. The buffers are re-initialized: ebuff(x) = 0 and wbuff(x) = 0, for all $x \in X$. For each $x_R \in X_R$, do the following sub-steps.

(a) Block-matching. Compute $S_{x_R}$ as given in (6), but restrict the search to a local neighborhood of fixed size $N_S \times N_S$ centered about $x_R$. If $|S_{x_R}| > N_2$, then let only the coordinates of the $N_2$ blocks with smallest distance (as defined in Subsection 3.1) to $E_{x_R}$ remain in $S_{x_R}$ and exclude the others.

(b) Denoising by Wiener filtering in local 3D transform domain. The local block estimates $\widehat{Y}^{x_R}_{x \in S_{x_R}}$ and their weight $\omega_{x_R}$ are computed as given in (7) and (8), respectively.

(c) Aggregation. It is identical to step 2(c).

5. Final estimate. The final estimate is given by $\hat{y}(x) = \mathrm{ebuff}(x) / \mathrm{wbuff}(x)$, for all $x \in X$.
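Assuming helpers along the lines of the sketches in Sections 2 and 3 (block-matching, collaborative filtering of a group, and the two buffers), the step order above can be organized as in the following structural outline. This is not the authors' optimized C++/MATLAB implementation: border handling, the Kaiser window, and the $N_S$/$N_2$ restrictions are only indicated in comments, and run_stage, match_fn, and denoise_fn are illustrative names.

```python
import numpy as np
from typing import Callable, List, Tuple

Coord = Tuple[int, int]

def run_stage(
    z: np.ndarray,
    n1: int,
    n_step: int,
    match_fn: Callable[[Coord], List[Coord]],
    denoise_fn: Callable[[List[Coord], np.ndarray], Tuple[np.ndarray, float]],
) -> np.ndarray:
    """One stage of the algorithm: slide over reference blocks, group, filter
    collaboratively, and aggregate into the two buffers (steps 2/4 plus 3/5)."""
    m, n = z.shape
    ebuff = np.zeros((m, n))   # estimate buffer
    wbuff = np.zeros((m, n))   # weight buffer
    for r in range(0, m - n1 + 1, n_step):              # reference blocks on an N_step grid
        for c in range(0, n - n1 + 1, n_step):
            coords = match_fn((r, c))                   # (a) N_S-limited search, at most N_2 matches
            group = np.stack([z[i:i + n1, j:j + n1] for i, j in coords])
            # (b) Eq. (3)-(4) or Eq. (7)-(8); for the Wiener stage the closure also
            # extracts the corresponding blocks of the initial estimate e at coords.
            estimates, weight = denoise_fn(coords, group)
            for (i, j), est in zip(coords, estimates):  # (c) aggregation; a Kaiser window W_win2D
                ebuff[i:i + n1, j:j + n1] += weight * est   # would additionally scale est and weight
                wbuff[i:i + n1, j:j + n1] += weight
    return ebuff / np.maximum(wbuff, 1e-12)             # step 3 / step 5: pointwise division
```

The hard-thresholding stage is run first (matching on the noisy z) to produce the intermediate estimate e; the same driver is then rerun with the Wiener-stage matching on e (Eq. 6) and filtering (Eqs. 7 and 8) to obtain the final estimate.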
4.2. Complexity
The time complexity order of the algorithm as a function of its parameters is given by

$$
O\!\left( MN\, O_{\mathcal{T}_{2D}}(N_1, N_1) \right) + O\!\left( MN \left( N_1^2 + N_2 \right) \frac{N_S^2}{N_{step}^2} \right) + O\!\left( MN\, \frac{O_{\mathcal{T}_{3D}}(N_1, N_1, N_2)}{N_{step}^2} \right),
$$

where the first two addends are due to block-matching and the third is due to $\mathcal{T}_{3D}$ used for denoising, and where $O_{\mathcal{T}_{2D}}(N_1, N_1)$ and $O_{\mathcal{T}_{3D}}(N_1, N_1, N_2)$ denote the complexity orders of the transforms $\mathcal{T}_{2D}$ and $\mathcal{T}_{3D}$, respectively. Both $O_{\mathcal{T}_{2D}}$ and $O_{\mathcal{T}_{3D}}$ depend on properties of the adopted transforms, such as separability and the availability of fast algorithms. For example, the DFT has an efficient implementation by means of the fast Fourier transform (FFT). The 2D FFT, in particular, has complexity $O(N_1 N_2 \log(N_1 N_2))$ as opposed to $O(N_1^2 N_2^2)$ for a custom non-separable transform. Moreover, an effective trade-off between complexity and denoising performance can be achieved by varying $N_{step}$.
Table 1. Results in output PSNR (dB) of the denoising algorithm with block-matching and filtering in 3D DFT domain.

Image            Lena     Barbara  House    Peppers  Boats    Couple   Hill
                 512x512  512x512  256x256  256x256  512x512  512x512  512x512
σ / input PSNR
  5 / 34.15      38.63    38.18    39.54    37.84    37.20    37.40    37.11
 10 / 28.13      35.83    34.87    36.37    34.38    33.79    33.88    33.57
 15 / 24.61      34.21    33.08    34.75    32.31    31.96    31.93    31.79
 20 / 22.11      33.03    31.77    33.54    30.87    30.65    30.58    30.60
 25 / 20.17      32.08    30.75    32.67    29.80    29.68    29.57    29.74
 30 / 18.59      31.29    29.90    31.95    28.97    28.90    28.75    29.04
 35 / 17.25      30.61    29.13    31.21    28.14    28.20    28.03    28.46
 50 / 14.16      29.08    27.51    29.65    26.46    26.71    26.46    27.21
100 /  8.13      26.04    24.14    25.92    23.11    24.00    23.60    24.77
5. RESULTS AND DISCUSSION
We present experiments conducted with the algorithm introduced in Section 4, where the transforms $\mathcal{T}_{2D}$ and $\mathcal{T}_{3D}$ are the 2D DFT and the 3D DFT, respectively. All results are produced with the same fixed parameters (but different for the hard-thresholding and Wiener filtering parts). For the hard-thresholding, $N_1$ is automatically selected in the range $7 \leq N_1 \leq 13$ based on $\sigma$, $\tau_{match} = 0.233$, $N_2 = 28$, $N_{step} = 4$, $N_S = 73$, $\beta = 4$, $\lambda_{thr2D} = 0.82$, and $\lambda_{thr3D} = 0.75$. For the Wiener filtering, $N_1$ is automatically selected in the range $7 \leq N_1 \leq 11$ based on $\sigma$, $\tau_{match} = \frac{\sigma}{4000} + 0.0105$, $N_2 = 72$, $N_{step} = 3$, $N_S = 35$, and $\beta = 3$. In Table 1, we summarize the results of the proposed technique in terms of output peak signal-to-noise ratio (PSNR) in decibels (dB), which is defined as

$$
\mathrm{PSNR} = 10 \log_{10} \left( \frac{255^2}{\,|X|^{-1} \sum_{x \in X} \left( y(x) - \hat{y}(x) \right)^2} \right).
$$
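For reference, this criterion corresponds to the following straightforward computation for 8-bit images (function name illustrative):

```python
import numpy as np

def psnr(y, y_hat):
    """Output PSNR in dB for an 8-bit image, as defined above."""
    mse = np.mean((y.astype(np.float64) - y_hat.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```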
At http://www.cs.tut.fi/~foi/3D-DFT, we provide a collection of the original and denoised test images that were used in our experiments, together with the algorithm implementation (as C++ and MATLAB functions) which produced all reported results. With the mentioned parameters, the execution time of the whole algorithm is less than 9 seconds for an input image of size 256 x 256 on a 3 GHz Pentium machine.
In Figure 3, we compare the output PSNR of our method with the reported results of three state-of-the-art techniques [6, 7, 10], which are the best known to the authors. However, for standard deviations 30 and 35 we could neither find nor reproduce the results of the FSP+TUP [7] and the exemplar-based [10] techniques, so they are omitted.
In Figure 4, we show the noisy ($\sigma = 35$) House image and the corresponding denoised one. For this test image, similarity among neighboring blocks is easy to perceive in the uniform regions and in the regular-shaped structures. Hence, those details are well preserved in our estimate. It is worth referring to Figure 1, where block-matching is illustrated for a fragment of House.

Pairs of noisy ($\sigma = 35$) and denoised Lena and Hill images are shown in Figures 5 and 6, respectively. The enlarged fragments in each figure help to demonstrate the good quality of the denoised images in terms of faithful detail preservation (the stripes on the hat in Lena and the pattern on the roof in Hill).

We show fragments of noisy ($\sigma = 50$) and denoised Lena, Barbara, Couple, and Boats images in Figure 7. For this relatively high level of noise, there are very few disturbing artifacts and the proposed technique attains good preservation of sharp details (the table legs in Barbara and the poles in Boats), smooth regions (the cheeks of Lena and the suit of the man in Couple), and oscillatory patterns (the table cover in Barbara). A fragment of Couple corrupted by noise of various standard deviations is presented in Figure 8.

In order to demonstrate the capability of the proposed method to preserve textures, we show fragments of heavily noisy ($\sigma = 100$) and denoised Barbara in Figure 9. Although the true signal is almost completely buried under the noise, the stripes on the clothes are faithfully restored in the estimate.
Figure 3. Output PSNR as a function of the noise standard deviation $\sigma$ for Barbara (a), Lena (b), Peppers (c), and House (d). The notation is: proposed method (squares), FSP+TUP [7] (circles), BLS-GSM [6] (stars), and exemplar-based [10] (triangles).
Figure 4. On the left are a noisy ($\sigma = 35$) House and two enlarged fragments from it; on the right are the denoised image (PSNR 31.21 dB) and the corresponding fragments.
We conclude by remarking that the proposed method outperforms, in terms of objective criteria, all techniques known to us. Moreover, our estimates retain good visual quality even for relatively high levels of noise. Our current research extends the presented approach by the adoption of variable-sized blocks and shape-adaptive transforms [13], thus further improving the adaptivity to the structures of the underlying image. Also, application of the technique to more general restoration problems is being considered.
REFERENCES
1. D. L. Donoho and I. M. Johnstone, "Adapting to unknown smoothness via wavelet shrinkage," J. Amer. Stat. Assoc., vol. 90, pp. 1200-1224, 1995.
2. D. L. Donoho, "De-noising by soft-thresholding," IEEE Trans. Inform. Theory, vol. 41, pp. 613-627, 1995.
3. S. G. Chang, B. Yu, and M. Vetterli, "Adaptive wavelet thresholding for image denoising and compression," IEEE Trans. Image Processing, vol. 9, pp. 1532-1546, 2000.
4. A. Pizurica, W. Philips, I. Lemahieu, and M. Acheroy, "A joint inter- and intrascale statistical model for Bayesian wavelet based image denoising," IEEE Trans. Image Processing, vol. 11, pp. 545-557, 2002.
5. L. Sendur and I. W. Selesnick, "Bivariate shrinkage with local variance estimation," IEEE Signal Processing Letters, vol. 9, pp. 438-441, 2002.
6. J. Portilla, V. Strela, M. Wainwright, and E. P. Simoncelli, "Image denoising using scale mixtures of Gaussians in the wavelet domain," IEEE Trans. Image Processing, vol. 12, pp. 1338-1351, 2003.
7. J. A. Guerrero-Colon and J. Portilla, "Two-level adaptive denoising using Gaussian scale mixtures in overcomplete oriented pyramids," in Proc. of IEEE Int'l Conf. on Image Processing, Genoa, Italy, September 2005.
Figure 5. On the left are noisy ($\sigma = 35$) Lena and two enlarged fragments from it; on the right are the denoised image (PSNR 30.61 dB) and the corresponding fragments.

Figure 6. On the left are noisy ($\sigma = 35$) Hill and two fragments from it; on the right are the denoised image (PSNR 28.46 dB) and the corresponding fragments from it.
Figure 7. Fragments of noisy ($\sigma = 50$) and denoised test images: (a) Lena (PSNR 29.08 dB), (b) Barbara (PSNR 27.51 dB), (c) Couple (PSNR 26.46 dB), and (d) Boats (PSNR 26.71 dB).
8. L. Yaroslavsky, K. Egiazarian, and J. Astola, "Transform domain image restoration methods: review, comparison and interpretation," in Nonlinear Image Processing and Pattern Analysis XII, Proc. SPIE 4304, pp. 155-169, 2001.
9. R. Öktem, L. Yaroslavsky, and K. Egiazarian, "Signal and image denoising in transform domain and wavelet shrinkage: a comparative study," in Proc. of EUSIPCO'98, Rhodes, Greece, September 1998.
10. C. Kervrann and J. Boulanger, "Local adaptivity to variable smoothness for exemplar-based image denoising and representation," Research Report INRIA, RR-5624, July 2005.
11. A. Buades, B. Coll, and J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Model. Simul., vol. 4, pp. 490-530, 2005.
12. D. Rusanovskyy and K. Egiazarian, "Video denoising algorithm in sliding 3D DCT domain," in Proc. of ACIVS'05, Antwerp, Belgium, September 2005.
13. A. Foi, K. Dabov, V. Katkovnik, and K. Egiazarian, "Shape-adaptive DCT for denoising and image reconstruction," in Electronic Imaging '06, Proc. SPIE 6064, no. 6064A-18, San Jose, California, USA, 2006.
Figure 8. Pairs of fragments of noisy and denoised Couple for standard deviations: (a) $\sigma = 25$ (PSNR 29.57 dB), (b) $\sigma = 50$ (PSNR 26.46 dB), (c) $\sigma = 75$ (PSNR 24.74 dB), and (d) $\sigma = 100$ (PSNR 23.60 dB).

Figure 9. Fragments of noisy ($\sigma = 100$) and denoised (PSNR 24.14 dB) Barbara.
