CONTENTS

LIST OF FIGURES
ABSTRACT
1 INTRODUCTION
    1.1 Image processing
    1.2 Image formation
    1.3 Image enhancement
    1.4 Objective of the thesis
    1.5 Organisation of the thesis
2 LINEAR AND NON-LINEAR FILTERS
    2.1 Linear filter
    2.2 Non-linear filter
    2.3 Spatial domain filters
    2.4 Order filters
3 IMAGE RESTORATION
    3.1 Image restoration
    3.2 Degradation and blur types
    3.3 Noise types
4 IMAGE ENHANCEMENT USING BILATERAL FILTER
    4.1 Origin of bilateral filter
    4.2 Concept of bilateral filter
    4.3 Variants of the bilateral filter
    4.4 Applications of bilateral filter
5 INTRODUCTION TO MATLAB
    5.1 What is MATLAB?
    5.2 The MATLAB system
    5.3 MATLAB working environment
    5.4 Using the MATLAB editor to create M-files
    5.5 Getting help
6 MATLAB CODE
7 RESULT AND HISTOGRAMS
8 CONCLUSION
9 FUTURE SCOPE
10 BIBLIOGRAPHY
LIST OF FIGURES

Figure 1.1  Pixels
Figure 1.2  Colour or grey level
Figure 1.3  Chain of image formation
Figure 1.4  The mesopic range
Figure 1.5  Functions performed in the spatial domain
Figure 2.1  Example of maximum filter
Figure 3.1  Image restoration process
Figure 3.2  a) Original image; b) Added shot noise at 10%
Figure 3.3  a) 3x3 median filter; b) 5x5 median filter
Figure 3.4  a) 3x3 median filter applied 3 times; b) 5x5 average filter
Figure 3.5  Image with Gaussian noise and distribution function
Figure 3.6  Image with uniform noise and distribution function
Figure 3.7  Image with salt and pepper noise and distribution function
ABSTRACT
In this project, we present the bilateral filter (BF) for sharpness enhancement and noise removal. A bilateral filter is an edge-preserving, noise-reducing smoothing filter. The intensity value at each pixel in an image is replaced by a weighted average of intensity values from nearby pixels, where the weights are based on a Gaussian distribution. Crucially, the weights depend not only on the Euclidean distance between pixels but also on the radiometric differences (differences in the range, e.g. color intensity). This preserves sharp edges: the filter systematically loops through each pixel and assigns weights to the adjacent pixels accordingly. A modified bilateral filter performs both noise removal and sharpening by adding a high-pass filter to the conventional bilateral filter.

Advantages:
- Fast
- Simple
- Intuitive parameter selection
- Edge preserving
1. Introduction
1.1 Image processing
Humans are primarily visual creatures, and an image is better than any other form of information for human beings to perceive. Digital image processing is a rapidly evolving field with many growing applications in science and engineering. Image processing refers to the manipulation of images; it is in many cases concerned with taking an array of pixels as input and producing another array of pixels as output which in some way represents an improvement over the original array.

Figure 1.1: Pixels (an image as an array with M rows and N columns)
1.2 Image formation

Image acquisition chain
The first stage of any vision system is the image acquisition stage. Digital images are a record of energy (light) emitted from a particular scene in the world and stored in computer memory. The whole chain of image formation can be represented in the following way:
World -> Optics -> Sensor -> Signal -> Digitizer -> Digital Rep. (Figure 1.3: Chain of image formation)

Where:
- World refers to reality;
- Optics allows light from the world to focus onto the sensor;
- Sensor converts light to electrical energy;
- Signal is a representation of the incident light as continuous electrical energy;
- Digitizer converts the continuous signal to a discrete signal;
- Digital Rep. is the final representation of reality in computer memory.
After the image has been obtained, various methods of processing can be applied to the image to perform the many different vision tasks required today. However, if the image has not been acquired satisfactorily, then the intended tasks may not be achievable, even with the aid of some form of image enhancement.

Light
Light is electromagnetic radiation that stimulates our visual system. When discussing wavelengths of visible light, we typically give the measurements in nanometers; a nanometer is 10^-9 meters and is abbreviated nm. The wavelengths of the red, green and blue peaks are about 570-645 nm, 526-535 nm, and 444-445 nm, respectively. The visible wavelength range (called the mesopic range) is from 400 to about 700-750 nm.
Figure 1.4: The mesopic range

The image function values correspond to the brightness (intensity) at image points. Throughout this thesis we call the intensity of a monochrome image f at coordinates (x,y) the gray level l of the image at that point.

Illumination and reflectance:
1. Since light is a form of energy, the intensity (brightness) value is non-zero and finite: 0 < f(x,y) < infinity.
2. The image we perceive is characterized by two components:
- Illumination: the amount of source light incident on the scene, i(x,y).
- Reflectance: the amount of light reflected by the objects in the scene, r(x,y).

f(x,y) = i(x,y) r(x,y), where 0 < i(x,y) < infinity and 0 < r(x,y) < 1, with 0 corresponding to total absorption and 1 to total reflectance.

The intensity (brightness) of a monochrome image f(x,y) at coordinate (x,y) is called the gray level l, where

L_min = r_min * i_min <= l <= r_max * i_max = L_max

The interval [L_min, L_max] is called the gray scale and is normally shifted to [0, L], where 0 corresponds to black and L corresponds to white.
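As a small illustration of this multiplicative model, the following MATLAB sketch builds a synthetic image from made-up illumination and reflectance patterns (the ramp and checkerboard below are arbitrary choices for demonstration, not data from this thesis):

% Illustrative sketch of the image formation model f(x,y) = i(x,y) * r(x,y).
% The illumination ramp and checkerboard reflectance are invented purely for visualization.
[x, y] = meshgrid(linspace(0, 1, 256));              % pixel coordinates
i_xy = 0.2 + 0.8 * x;                                % illumination: brighter towards the right
r_xy = 0.5 + 0.5 * double(checkerboard(32) > 0.5);   % reflectance values in (0, 1]
f_xy = i_xy .* r_xy;                                 % observed image
figure, imshow(f_xy, []), title('f(x,y) = i(x,y) r(x,y)');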
Human visual system
A brief account of the human visual mechanism follows.
Based on a diagram of the eye:
- Cornea: the major refracting (light-bending) component of the eye; it gives the eye about 70% of its power.
- Crystalline lens: able to change its curvature (accommodation); it gives the additional 30% of the refracting power of the eye.
- Aqueous humor and vitreous humor: fluids which provide nutrients to the non-vascular structures within the eye and maintain the shape of the eye.

The human eye has two types of photoreceptor: rods and cones. Because rods and cones have different spectral sensitivities and different absolute sensitivities to light, the visual response is not the same over the retina.

rods: 130,000,000
cones: 7,000,000
Cones are concentrated in the fovea, and receptive fields are small in this region. The peak sensitivity for cones is at 555 nm (a combination of the R, G, and B cones), while the peak sensitivity for rods is at 505 nm. The human visual system is not equally sensitive to all wavelengths. We do not have a clear understanding of how humans perceive, process and store visual information; we do not even know how a human internally measures image visual quality and discrimination.

Image digitization
An image captured by a sensor is expressed as a continuous function f(x,y) of two co-ordinates in the plane. Image digitization means that the function f(x,y) is sampled into a matrix with M rows and N columns. Image quantisation assigns to each continuous sample an integer value: the continuous range of the image function f(x,y) is split into K intervals. The finer the sampling (i.e., the larger M and N) and quantisation (the larger K), the better the approximation of the continuous image function f(x,y).
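A minimal MATLAB sketch of sampling and quantisation (the test image and the values of M, N and K are example choices, not values used in this thesis):

% Digitization sketch: sample an image to M x N pixels and quantise into K gray levels.
f = im2double(imread('cameraman.tif'));    % example image shipped with the Image Processing Toolbox
M = 64; N = 64;                            % coarser spatial sampling (example values)
K = 8;                                     % number of quantisation levels (example value)
fs = imresize(f, [M N], 'nearest');        % spatial sampling into an M x N matrix
fq = round(fs * (K - 1)) / (K - 1);        % uniform quantisation into K levels
figure;
subplot(1,2,1), imshow(fs), title('Sampled (64 x 64)');
subplot(1,2,2), imshow(fq), title('Sampled and quantised (K = 8)');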
1.3 Image enhancement
The principal objective of enhancement is to process a given image so that the result is more suitable than the original image for a specific application. Two broad classes of methods are possible:
- Spatial domain techniques
- Frequency domain techniques

Spatial domain techniques
These procedures operate directly on the image's pixels. Functions performed in the spatial domain can be expressed in the following form:

g(x, y) = T[f(x, y)]
Figure 1.5: Functions performed in the spatial domain

A hierarchy of processing exists:
- Pixel: 1 pixel
- Neighbourhood: a 3x3 image block
- Global: the whole image

When T is applied only to a 1x1 region of pixels, that is, when the value of g(x,y) depends only on the value of f(x,y) and not on any of f(x,y)'s neighbouring pixels, T is called a gray-level or intensity transformation.

Frequency domain techniques
The convolution theorem is the foundation of frequency domain techniques. Consider the following spatial domain operation:

g(x,y) = h(x,y) * f(x,y)

The convolution theorem states that the following frequency domain relationship holds:

G(u,v) = H(u,v) F(u,v)

where G, H, and F are the Fourier transforms of g, h, and f, respectively. H is known as the transfer function of the process. Many image enhancement problems can be expressed in the form of the above equation. The goal is to select a transfer function that changes the image in such a way that some feature of the image is
enhanced. Examples include edge detection, noise removal, and emphasis of information in the image.

We implement image restoration techniques whose aim is to recover a high-quality original image from a degraded version of that image, given a specific model for the degradation process. The two most common forms of degradation an image suffers are loss of sharpness (blur) and noise. The degradation model we use consists of a linear, shift-invariant blur followed by additive noise. The problem we are interested in is twofold. First, we seek to develop a sharpening method that is fundamentally different from the unsharp mask (USM) filter, which sharpens an image by enhancing its high-frequency components. In the spatial domain, the boosted high-frequency components lead to overshoot and undershoot around edges, which causes objectionable ringing or halo artifacts. Our goal is to develop a sharpening algorithm that increases the slope of edges without producing overshoot and undershoot, which renders clean, crisp, and artifact-free edges, thereby improving the overall appearance of the image. The second aspect of the problem we wish to address is noise removal. We want to present a unified solution to both sharpness enhancement and noise removal. In most applications, the degraded image contains both noise and blur; a sharpening algorithm that works well only for noise-free images will not be applicable in these situations.

In terms of noise removal, conventional linear filters work well for removing additive Gaussian noise, but they also significantly blur the edge structures of an image. Therefore, a great deal of research has been done on edge-preserving noise reduction. One of the major endeavors in this area has been to utilize rank order information. Due to the lack of a sense of spatial ordering, rank order filters generally do not retain the frequency-selective properties of linear filters and do not suppress Gaussian noise optimally. Hybrid schemes combining both rank order filtering and linear filtering have been proposed in order to take advantage of both approaches. These nonlinear rank order approaches in general improve the edge sharpness, but they are more complex to implement than a spatial linear filter. In more recent years, a new concept in edge-preserving de-noising was developed. Although the algorithms were developed independently, and named the SUSAN filter and the bilateral filter, respectively, the essential idea is the same: enforcing both geometric closeness in the spatial domain and gray value similarity in the range in the de-noising operation.
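For reference, a minimal sketch of the conventional unsharp mask (USM) filter discussed above (the test image, kernel size, and sharpening gain are example choices, not values from this thesis):

% Conventional unsharp masking: add a scaled high-pass (detail) image back to the
% original. Overshoot and undershoot around edges appear as halo artifacts.
f = im2double(imread('cameraman.tif'));        % example image from the Image Processing Toolbox
h = fspecial('gaussian', [7 7], 2);            % low-pass (blur) kernel
lowpass = imfilter(f, h, 'replicate');         % smoothed image
detail  = f - lowpass;                         % high-frequency components
lambda  = 1.5;                                 % sharpening gain (example value)
usm     = f + lambda * detail;                 % unsharp-masked image
usm     = min(max(usm, 0), 1);                 % clip to the valid range
figure, imshowpair(f, usm, 'montage'), title('Original vs. USM-sharpened');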
The idea of bilateral filtering has since found its way into many applications, not only in the area of image de-noising, but also in computer graphics, video processing, image interpolation, illumination estimation, relighting and texture manipulation, dynamic range compression, and several others. Several researchers have provided a theoretical analysis of the bilateral filter and connected it with classical approaches to noise removal. It has been shown that the bilateral filter emerges from the well-known Bayesian approach when a novel penalty function is used; based on this observation, methods were proposed to speed up bilateral filtering and to implement a bilateral filter for piece-wise linear signals. The bilateral filter has also been shown to be a discrete version of a short-time effective kernel, and its behavior has been shown to resemble that of anisotropic diffusion; a common framework for bilateral filtering, nonlinear diffusion, adaptive smoothing, and the mean shift procedure has been outlined.

The bilateral filter is essentially a smoothing filter; it does not restore the sharpness of a degraded image. A modified bilateral filter performs both noise removal and sharpening by adding a high-pass filter to the conventional bilateral filter. This filter essentially performs USM sharpening for pixels that are above a preselected high-pass threshold; therefore, it produces halo artifacts as a USM filter does. In contrast to the extensive effort to improve de-noising algorithms, much less has been done for sharpening algorithms. The USM remains the prevalent sharpening tool despite its drawbacks. First, the USM sharpens an image by adding overshoot and undershoot to the edges, which produces halo artifacts. Second, when applied to a noisy image, the USM will amplify the noise in smooth regions, which significantly impairs the image quality. To address the first problem, slope restoration algorithms have been proposed. Das and Rangayyan developed an edge sharpness enhancement algorithm to improve the slope of edges. In their algorithm, the edge normal direction is first detected and then a 1-D operator is applied to the edge pixels so that the transitions of edges are made steeper. They pointed out that corner artifacts and inferior enhancement for circular regions were limitations of their algorithm. The testing of their algorithm was limited to bi-level, synthesized images. Tegenbosch proposed a luminance transient improvement (LTI) algorithm which first sharpens the image with a linear sharpening method such as the USM, then detects the 1-D edge profiles, and finally clips between the start and end levels of the edges to get rid of the overshoot and undershoot.
The 1-D LTI algorithm is implemented in a 2-D image by three alternative methods. The first applies LTI in the direction of the edge normal, which involves estimating the edge orientation; the remaining methods combine the results by a weighted sum depending on the edge orientation. According to the authors, the first method produces the best image quality but is computationally expensive, while the others are efficient to implement but have the drawback of inhomogeneous complexity.
Results comparing images enhanced by the three proposed methods are given; however, no comparison of the input and enhanced output images is provided, nor are edge profiles shown to demonstrate the effectiveness of the proposed algorithms in restoring edge slopes.

To address the second problem of the USM filter, locally adaptive sharpening and smoothing algorithms have been proposed. Kotera developed an adaptive sharpening algorithm based on the USM and the classification of pixels: the histogram of the edge strength is used to classify pixels into smooth regions, soft edges, and hard edges, which are subsequently processed with different sharpening strengths. Kim and Allebach developed an adaptive sharpening and de-noising algorithm, the optimal unsharp mask (OUM). Instead of using a fixed gain for the high-pass filter as in the case of the conventional USM filter, the OUM employs a locally adaptive gain, which has been trained for different regions of the image with pairs of high-quality original and corresponding degraded images. By allowing the gain to be negative, the OUM is capable of both sharpening and de-noising. The degraded images were generated by applying a blur point spread function (PSF) to the original image, and adding noise to the blurred image. The blur PSF was modeled after a hybrid analog and digital imaging system, which involves scanning a silver-halide photograph printed from a negative exposed with a low-cost analog camera. A tone-dependent noise model was used to simulate noise produced in the imaging pipeline. The parameters of our proposed restoration algorithms are optimized in a training-based framework similar to that of the OUM.
Another adaptive sharpening and smoothing filter has been proposed which consists of a low-pass filter and a high-pass filter; the high-pass filter is scaled by a factor adaptive to the sharpness of local edges, and the coefficients of the filter masks are recursively updated.

1.4 Objective of the thesis

Understanding sources of noise in digital images
Digital images are prone to a variety of types of noise. Noise is the result of errors in the image acquisition process that produce pixel values which do not reflect the true intensities of the real scene. There are several ways that noise can be introduced into an image, depending on how the image is created. For example:
- If the image is scanned from a photograph made on film, the film grain is a source of noise. Noise can also be the result of damage to the film, or be introduced by the scanner itself.
- If the image is acquired directly in a digital format, the mechanism for gathering the data (such as a CCD detector) can introduce noise.
- Electronic transmission of image data can introduce noise.

Filtering an image
Image filtering is useful for many applications, including smoothing, sharpening, removing noise, and edge detection. A filter is defined by a kernel, which is a small array applied to each pixel and its neighbors within an image. In most applications, the center of the kernel is aligned with the current pixel, and the kernel is a square with an odd number (3, 5, 7, etc.) of elements in each dimension. The process used to apply filters to an image is known as convolution, and may be applied in either the spatial or frequency domain. Within the spatial domain, the first part of the convolution process multiplies the elements of the kernel by the matching pixel values when the kernel is centered over a pixel. The elements of the resulting array (which is the same size as the kernel) are averaged, and the original pixel value is replaced with this result. The CONVOL function performs this convolution process for an entire image.

Using linear filters
Images are often corrupted by random variations in intensity values, called noise. Some common types of noise are salt and pepper noise, impulse noise, and
Gaussian noise. Salt and pepper noise contains random occurrences of both black and white intensity values. However, impulse noise contains only random occurrences of white intensity values. Unlike these, Gaussian noise contains variations in intensity that are drawn from a Gaussian (normal) distribution and is a very good model for many kinds of sensor noise, such as the noise due to camera electronics. Linear smoothing filters are good filters for removing Gaussian noise and, in most cases, the other types of noise as well. A linear filter is implemented using the weighted sum of the pixels in successive windows. Typically, the same pattern of weights is used in each window, which means that the linear filter is spatially invariant and can be implemented using a convolution mask. If different filter weights are used for different parts of the image, but the filter is still implemented as a weighted sum, then the linear filter is spatially varying.

Example: Mean filter
One of the simplest linear filters is implemented by a local averaging operation, where the value of each pixel is replaced by the average of all the values in the local neighborhood:

h[i,j] = (1/M) * sum of f[k,l] over all (k,l) in N

where M is the total number of pixels in the neighborhood N.

Purpose in moving from linear filters to nonlinear filters
The size of the neighborhood N controls the amount of filtering. A larger neighborhood, corresponding to a larger convolution mask, will result in a greater degree of filtering. As a trade-off for greater amounts of noise reduction, larger filters also result in a loss of image detail. When designing linear smoothing filters, the filter weights should be chosen so that the filter has a single peak, called the main lobe, and symmetry in the vertical and horizontal directions. Linear smoothing filters remove high-frequency components, and the sharp detail in the image is lost. For example, step changes will be blurred into gradual changes, and the ability to accurately localize a change will be sacrificed. A spatially varying filter can adjust the weights so that more smoothing is done in a relatively uniform area of the image, and little smoothing is done across sharp discontinuities.
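A minimal MATLAB sketch of the 3x3 mean filter described by the equation above (the test image and noise variance are example choices):

% 3x3 mean filter: every output pixel is the average of its 3x3 neighbourhood.
f = im2double(imread('cameraman.tif'));    % example image from the Image Processing Toolbox
g = imnoise(f, 'gaussian', 0, 0.01);       % add Gaussian noise (example variance)
mask = ones(3, 3) / 9;                     % uniform weights summing to one
smoothed = imfilter(g, mask, 'replicate'); % spatially invariant linear filtering
figure, imshowpair(g, smoothed, 'montage'), title('Noisy vs. 3x3 mean filtered');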
Using nonlinear filters
Nonlinear filters can be spatially invariant, meaning that the same calculation is performed regardless of the position in the image, or spatially varying.

Example: Median filter
The main problem with local averaging operations is that they tend to blur sharp discontinuities in intensity values in an image. An alternative approach is to replace each pixel value with the median of the gray values in the local neighborhood. Filters using this technique are called median filters. Median filters are very effective in removing salt and pepper and impulse noise while retaining image details, because they do not depend on values which are significantly different from typical values in the neighborhood. Median filters work in successive image windows in a fashion similar to linear filters; however, the process is no longer a weighted sum.

Bilateral filter
In order to overcome the disadvantages of the median filter, we use the bilateral filter. The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It is widely used in computer vision and computer graphics, and fast versions have been developed for a growing range of applications.
1.5 Organisation of the thesis
This thesis is organised into the following chapters; a brief overview of each chapter is given below.

Chapter 1: Introduction
The introduction deals with image processing, which means taking an array of pixels as input and producing another array of pixels as output which in some way represents an improvement over the original array. Image formation is explained in terms of the image acquisition chain, light (brightness and illuminance), the human visual system, and image digitization. In image enhancement, the principal objective is to process a given image so that the result is more suitable than the original image for a specific application. Image enhancement can be done by two classes of techniques: spatial domain techniques and frequency domain techniques.
The objective of the thesis covers the causes of noise, ways to remove noise using filters such as linear and nonlinear filters, and why it is preferable to use nonlinear filters rather than linear filters, with a basic introduction to the bilateral filter and examples of each of the above.

Chapter 2: Linear filters and non-linear filters
This chapter gives a detailed explanation of linear and non-linear filters: their origin, their working, examples (such as the mean filter for linear filters and the median filter for non-linear filters), and the orders of the two kinds of filters. It contains a comparison of linear and non-linear filters for image enhancement and noise removal. Finally, it explains why it is preferable to use nonlinear filters rather than linear filters.

Chapter 3: Image restoration
Image restoration methods are used to improve the appearance of an image by application of a restoration process that uses a mathematical model for image degradation. Degradation and noise removal are covered, with a brief explanation of the causes of degradation (i.e., blurring) and of different types of noise such as Gaussian noise and salt and pepper noise, their effects on images, and the results obtained after removing these noises.

Chapter 4: Image enhancement using bilateral filter
Image enhancement is used in computer graphics; it is the process of improving the quality of a digitally stored image by manipulating the image with software. It is quite easy, for example, to make an image lighter or darker, or to increase or decrease contrast. Advanced image enhancement software also supports many filters for altering images in various ways. Programs specialized for image enhancement are sometimes called image editors. This chapter gives a detailed explanation of bilateral filters: their origin, equation representation, the concept involved, different variants, and the applications of the bilateral filter.

Chapter 5: Introduction to MATLAB
MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. This chapter deals with the definition of MATLAB, how to use the MATLAB editor to create M-files, and the MATLAB working environment, i.e., how to use the software through the MATLAB desktop.
The MATLAB system is also a subtopic of this chapter. Finally, it explains how to get help when using the MATLAB software for technical computing.

Chapter 6: MATLAB code
This chapter consists of code written for enhancing an image and removing the noise from it using MATLAB. In simple terms, first an image is selected from a list of images, and then that image is degraded by motion blurring and noise. Finally, a bilateral filter is used to enhance the image, and the resulting image is compared with the degraded image and the original using histograms and PSNR values.

Chapter 7: Result and histograms
Pictorial representations of the resulting images and histograms are shown in this chapter. A table of PSNR values is also shown. By comparing the histograms and PSNR values of the original image, the degraded image and the resulting image, it can be concluded that the bilateral filter output is superior to the degraded image.

Chapter 8: Conclusion
How the bilateral filter is used for image enhancement is concluded in this chapter.

Chapter 9: Future scope
Improvements to the bilateral filter and its future scope, such as the adaptive bilateral filter and the double bilateral filter, are explained in this chapter.

Chapter 10: Bibliography
This chapter gives the references on which this thesis is based.
2. Linear and Non-Linear Filters
2.1 Linear filtering
Filtering is a technique for modifying or enhancing an image. For example, you can filter an image to emphasize certain features or remove other features. Image processing operations implemented with filtering include smoothing, sharpening, and edge enhancement. Filtering is a neighborhood operation, in which the value of any given pixel in the output image is determined by applying some algorithm to the values of the pixels in the neighborhood of the corresponding input pixel. A pixel's neighborhood is some set of pixels, defined by their locations relative to that pixel.
Using linear filtering
You can use linear filtering to remove certain types of noise. Certain filters, such as averaging or Gaussian filters, are appropriate for this purpose. For example, an averaging filter is useful for removing grain noise from a photograph: because each pixel gets set to the average of the pixels in its neighborhood, local variations caused by grain are reduced.

2.2 Nonlinear filters
Many nonlinear filters can be seen as extended linear filters. In recent years there has been a strong tendency towards non-linear filtering. One of the most important advances in this field is the use of order statistics in filtering. Nonlinear filters based on ranking within a local moving window are among the most efficient methods for signal/image restoration and enhancement, especially in the presence of impulsive-type noise. Filtering techniques based on order statistics are generally faster than conventional linear filtering techniques, because they require comparisons rather than arithmetic operations.
2.3 Spatial domain filters

Noise removal using spatial filters
Spatial filters can be effectively used to remove various types of noise in digital images. These spatial filters typically operate on small neighborhoods, 3x3 to 11x11, and some can be implemented as convolution masks.

2.4 Order filters
Order filters operate on small subimages (windows) and replace the center pixel value (similar to the convolution process). Order statistics is a technique that ranks all the pixels in sequential order based on gray-level value. The most useful of the order filters is the median filter. The median filter selects the middle pixel value from the ordered set. This type of filter works best with salt-and-pepper noise. The median filter tends to partially blur the image (an example of a median filter applied to an image with salt & pepper noise is in Fig. 1). These disadvantages are eliminated by the adaptive median filter (an example of an adaptive median filter applied to an image with salt & pepper noise is in Fig. 2).
The maximum and minimum filters are two order filters that can be used for elimination of salt-and-pepper (impulse) noise. The maximum filter selects the largest value within an ordered window of pixel values (an example of a maximum filter applied to an image with salt and pepper noise is shown in Figure 2.1), whereas the minimum filter selects the smallest value.
Figure 2.1: Example of maximum filter applied to image with salt and pepper noise
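A minimal sketch of these order filters using MATLAB's ordfilt2 function (the test image, 3x3 window, and noise density are example choices):

% Order filters over a 3x3 window: rank the 9 neighbourhood pixels and pick the
% minimum (rank 1), median (rank 5) or maximum (rank 9).
f = im2double(imread('cameraman.tif'));   % example image from the Image Processing Toolbox
g = imnoise(f, 'salt & pepper', 0.05);    % impulse noise, 5% density (example)
win = ones(3, 3);                         % 3x3 ordering window
minf = ordfilt2(g, 1, win);               % minimum filter (removes salt)
medf = ordfilt2(g, 5, win);               % median filter (removes both)
maxf = ordfilt2(g, 9, win);               % maximum filter (removes pepper)
figure;
subplot(2,2,1), imshow(g),    title('Salt & pepper noise');
subplot(2,2,2), imshow(minf), title('Minimum filter');
subplot(2,2,3), imshow(medf), title('Median filter');
subplot(2,2,4), imshow(maxf), title('Maximum filter');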
3. Image Restoration
3.1 Image restoration
Image restoration methods are used to improve the appearance of an image by application of a restoration process that uses a mathematical model for image degradation. Examples of the types of degradation include:
- geometric distortion caused by imperfect lenses,
- superimposed interference patterns caused by mechanical systems,
- noise from electronic sources.
In practice the degradation process model is often not known and must be experimentally determined or estimated. Any available information regarding the images and the systems used to acquire and process them is helpful. This information, combined with the developer's experience, can be applied to solve the specific application.
A general block diagram for the image restoration process is provided in Figure 3.1.
Figure 3.1: Image restoration process
Noise is everywhere, and thus we have to learn to live with it. Noise gets introduced into the data via any electrical system used for storage, transmission, and/or processing. In addition, nature always plays a "noisy" trick or two with the data under observation. When encountering an image corrupted with noise, you will want to improve its appearance for a specific application. The techniques applied are application-oriented, and the different procedures are related to the types of noise introduced into the image. Some examples of noise are: Gaussian or white, Rayleigh, shot or impulse, periodic, sinusoidal or coherent, uncorrelated, and granular.

When performing median filtering, each pixel is determined by the median value of all pixels in a selected neighborhood (mask, template, or window). The median value m of a population (the set of pixels in a neighborhood) is that value for which half of the population has smaller values than m, and the other half has larger values than m. This class of filter belongs to the class of edge-preserving smoothing filters, which are non-linear filters. These filters smooth the data while keeping the small and sharp details. Median filtering is a simple and very effective noise removal filtering process. Its performance is particularly good for removing shot noise, which consists of strong, spike-like isolated values. Shown below are the original image and the same image after it has been corrupted by shot noise at 10%, meaning that 10% of its pixels were replaced by full white pixels. Also shown are the median filtering results using 3x3 and 5x5 windows; three iterations of the 3x3 median filter applied to the noisy image; and finally, for comparison, the result of applying a 5x5 mean filter to the noisy image.
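A minimal sketch that reproduces this comparison (the test image is an example choice; shot noise is simulated by setting a random 10% of the pixels to white):

% Shot noise at 10%, then median filtering with 3x3 and 5x5 windows,
% three passes of the 3x3 median, and a 5x5 mean filter for comparison.
f = im2double(imread('cameraman.tif'));              % example image from the Image Processing Toolbox
g = f;
g(rand(size(f)) < 0.1) = 1;                          % replace about 10% of pixels with full white
m3 = medfilt2(g, [3 3]);                             % 3x3 median filter
m5 = medfilt2(g, [5 5]);                             % 5x5 median filter
m3x3 = medfilt2(medfilt2(m3, [3 3]), [3 3]);         % 3x3 median applied 3 times in total
a5 = imfilter(g, fspecial('average', [5 5]), 'replicate');  % 5x5 mean filter
figure;
subplot(2,3,1), imshow(f),    title('Original');
subplot(2,3,2), imshow(g),    title('10% shot noise');
subplot(2,3,3), imshow(m3),   title('3x3 median');
subplot(2,3,4), imshow(m5),   title('5x5 median');
subplot(2,3,5), imshow(m3x3), title('3x3 median, 3 passes');
subplot(2,3,6), imshow(a5),   title('5x5 mean');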
Comparison of the non-linear Median filter and the linear Mean filter
Figure 3.4: a) 3x3 median filter applied 3 times; b) 5x5 average filter

Gaussian noise
The shape of the distribution of this noise type as a function of gray level can be modeled as a histogram and can be seen in Fig. 3.5.

Figure 3.5: Image with Gaussian noise and distribution function

Uniform noise
The shape of the distribution of this noise type as a function of gray level can be modeled as a histogram and can be seen in Fig. 3.6.
Figure 3.6: Image with uniform noise and distribution function

Salt & pepper noise
The shape of the distribution of this noise type as a function of gray level can be modeled as a histogram and can be seen in Fig. 3.7.
Figure 3.7: Image with salt and pepper noise and distribution function
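A minimal sketch that generates the three noise types discussed above and plots their gray-level histograms (the flat test image and the noise parameters are example choices):

% Gaussian, uniform and salt & pepper noise added to a constant mid-gray image,
% with the resulting gray-level histograms (cf. Figures 3.5-3.7).
f = 0.5 * ones(256, 256);                    % flat mid-gray test image
gauss = imnoise(f, 'gaussian', 0, 0.01);     % Gaussian noise (variance 0.01)
unif  = f + 0.2 * (rand(size(f)) - 0.5);     % uniform noise in [-0.1, 0.1]
sp    = imnoise(f, 'salt & pepper', 0.05);   % 5% salt & pepper noise
figure;
subplot(3,2,1), imshow(gauss), title('Gaussian noise');
subplot(3,2,2), imhist(gauss), title('Histogram');
subplot(3,2,3), imshow(unif),  title('Uniform noise');
subplot(3,2,4), imhist(unif),  title('Histogram');
subplot(3,2,5), imshow(sp),    title('Salt & pepper noise');
subplot(3,2,6), imhist(sp),    title('Histogram');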
4. Image Enhancement Using Bilateral Filter

4.1 Origin of the bilateral filter
Conventional low-pass (averaging) filtering assumes that nearby pixels have similar values; this assumption fails at edges, which are consequently blurred by low-pass filtering. Many efforts have been devoted to reducing this undesired effect. How can we prevent averaging across edges, while still averaging within smooth regions? Anisotropic diffusion is a popular answer: local image variation is measured at every point, and pixel values are averaged from neighborhoods whose size and shape depend on local variation. Diffusion methods average over extended regions by solving partial differential equations, and are therefore inherently iterative. Iteration may raise issues of stability and, depending on the computational architecture, efficiency.

The idea underlying bilateral filtering is to do in the range of an image what traditional filters do in its domain. Two pixels can be close to one another, that is, occupy nearby spatial locations, or they can be similar to one another, that is, have nearby values, possibly in a perceptually meaningful fashion. Closeness refers to vicinity in the domain, similarity to vicinity in the range. Traditional filtering is domain filtering, and enforces closeness by weighting pixel values with coefficients that fall off with distance. Similarly, we define range filtering, which averages image values with weights that decay with dissimilarity. Range filters are nonlinear because their weights depend on image intensity or color. Computationally, they are no more complex than standard non-separable filters. Most importantly, they preserve edges.

One of the most fundamental problems encountered when dealing with signal acquisition and processing is the presence of signal noise. Signal noise may be caused by various intrinsic and extrinsic conditions that are difficult to avoid. As such, the first step in processing a signal is often to suppress noise and extract the desired signal from the noisy signal. Of particular interest over recent years is the de-noising of image signals, due largely to the incredible rise in popularity of digital images and movies. A large number of different image signal de-noising methods have been proposed; they can be grouped into two main classes: (i) spatial domain filtering and (ii) transform domain filtering. Spatial domain filtering methods have long been the mainstay of signal de-noising and manipulate the noisy signal in a direct fashion. Traditional linear spatial filters such as Gaussian filters attempt to suppress noise by smoothing the signal. While this works well in situations where signal variation is low, such spatial filters result in undesirable blurring of the signal in situations where signal variation is high. To alleviate this problem, a number of newer spatial filtering methods have been proposed to suppress noise while preserving signal characteristics in regions of high signal variation.
These techniques include anisotropic filtering techniques, total variation techniques, and bilateral filtering techniques. Bilateral filtering is a non-iterative and non-linear filtering technique which utilizes both spatial and amplitudinal distances to better preserve signal detail. In contrast to spatial filtering methods, frequency domain filtering methods transform the noisy signal into the frequency domain and manipulate the frequency coefficients to suppress signal noise before transforming the signal back into the spatial domain; these techniques include Wiener filtering and wavelet-based techniques. While effort has been made in the design of image signal de-noising techniques to better preserve signal detail, little consideration has been given to the characteristics of the human perception system in such techniques. This is particularly important in the context of image signals, where the goal of signal de-noising is often to improve the perceptual quality of the image signal. Furthermore, many of the aforementioned de-noising techniques, including bilateral filtering, use fixed parameters that may not be well suited for noise suppression and detail preservation in all regions of an image signal. Therefore, a method for adapting the de-noising process based on the human perception system is desired.

4.2 Concept of the bilateral filter
Bilateral filtering is a non-linear filtering technique that extends the concept of Gaussian smoothing by weighting the filter coefficients with their corresponding relative pixel intensities. Pixels that are very different in intensity from the central pixel are weighted less, even though they may be in close proximity to the central pixel. This is effectively a convolution with a non-linear Gaussian filter, with weights based on pixel intensities. It is applied as two Gaussian filters over a localized pixel neighborhood: one in the spatial domain, named the domain filter, and one in the intensity domain, named the range filter. The edge-preserving de-noising bilateral filter adopts a low-pass Gaussian filter for both the domain filter and the range filter. The domain low-pass Gaussian filter gives higher weight to pixels that are spatially close to the center pixel. The range low-pass Gaussian filter gives higher weight to pixels that are similar to the center pixel in gray value. Combining the range filter and the domain filter, a bilateral filter at an edge pixel becomes an elongated Gaussian filter that is oriented along the edge. This ensures that averaging is done mostly along the edge and is greatly reduced in the gradient direction. This is the reason why the bilateral filter can smooth the noise while preserving edge structures.
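For reference, the bilateral filter is usually written in the following standard form (this formulation is the one commonly found in the literature; p and q denote pixel positions, I the image, S the spatial neighborhood of p, and sigma_d and sigma_r the domain and range standard deviations):

BF[I](p) = (1 / W(p)) * sum over q in S of  G_sigma_d(||p - q||) * G_sigma_r(|I(p) - I(q)|) * I(q)

W(p) = sum over q in S of  G_sigma_d(||p - q||) * G_sigma_r(|I(p) - I(q)|)

Here G_sigma denotes a Gaussian with standard deviation sigma. The first (domain) Gaussian reduces the weight of spatially distant pixels, the second (range) Gaussian reduces the weight of pixels whose intensity differs strongly from I(p), and W(p) normalizes the weights. This matches the weighting implemented by the bflt function in Chapter 6.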
4.3 Variants of the bilateral filter

Higher-order filters
The bilateral filter implicitly assumes that the desired output should be piecewise constant: such an image is unchanged by the filter when the step discontinuities between constant parts are high enough.
Several articles [Elad, 2002] extended the bilateral filter to a piecewise-linear assumption. They share the same idea and characterize the local slope of the image intensity to better represent the local shape of the signal. Thus, they define a modified filter that better preserves the image characteristics; in particular, they avoid the formation of shocks. We have not explored this direction, but it is an interesting avenue for future work.

Cross bilateral filter
In computational photography applications, it is often useful to decouple the data I to be smoothed from the data E defining the edges to be preserved. For instance, in a flash/no-flash scenario, a picture Pnf is taken in a dark environment without flash and another picture Pf is taken with flash. Directly smoothing Pnf is hard because of the high noise level typical of low-light images. To address this problem, Eisemann and Durand [2004] and Petschnigg [2004] introduced the cross bilateral filter (a.k.a. joint bilateral filter) as a variant of the classical bilateral filter. This filter smoothes the no-flash picture Pnf = I while relying on the flash version Pf = E to locate the edges to preserve. Aurich and Weule [1995] introduced ideas related to the cross bilateral filter, but for a single input image when the filter is iterated: after a number of iterations of bilateral filtering, they filter the original image using range weights derived from the last iteration.

4.4 Applications of the bilateral filter

Natural video conferencing
In modern video conference systems, the existing product line uses very high quality SD cameras that deliver natural, life-like images of meeting participants. With HD cameras and displays, even as they deliver additional sharpness, many unwanted details such as wrinkles may also be amplified, resulting in images that may not be as pleasing as the SD images in today's product line. The constant-time bilateral filtering method proposed provides a way to retain the salient features in HD images while removing unwanted details and noise.
Linearly blending the original image with the filtered result produces an image which is natural and realistic. The amount of blending can also be controlled in real time to deliver the most desirable output.

Interactive filtering
Fully automatic/global bilateral filtering removes unwanted details, e.g., wrinkles. Unfortunately, some interesting details, e.g., hair, are lost. A human-guided local bilateral filtering method has therefore been developed: the user is provided with a virtual brush, and filtering is applied locally to the regions where the brush passes.
The edge-preserving property guarantees a natural-looking result after local filtering, and the real-time speed enables human-computer interaction.

Other applications include:
- Denoising
- Tone mapping
- Relighting and texture editing
- Biological electron microscopy
Bilateral filtering and cross/joint bilateral filtering can also be used for image/video abstraction.
5. Introduction to MATLAB
5.1 What is MATLAB?
MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include:
- Math and computation
- Algorithm development
- Data acquisition
- Modeling, simulation, and prototyping
- Data analysis, exploration, and visualization
- Scientific and engineering graphics
- Application development, including graphical user interface building

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C or Fortran.
The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.

MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

5.2 The MATLAB system
The MATLAB system consists of five main parts:

Development environment
This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.

The MATLAB mathematical function library
This is a vast collection of computational algorithms ranging from elementary functions like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

The MATLAB language
This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throw-away programs, and "programming in the large" to create complete, large and complex application programs.
programs, and "programming in the large" to create complete large and complex application programs. Graphics MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for twodimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications. The MATLAB Application Program Interface (API) This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.
5.3 MATLAB working environment

MATLAB desktop
The MATLAB desktop is the main MATLAB application window. The desktop contains five sub-windows: the command window, the workspace browser, the current directory window, the command history window, and one or more figure windows, which are shown only when the user displays a graphic.

The command window is where the user types MATLAB commands and expressions at the prompt (>>) and where the output of those commands is displayed. MATLAB defines the workspace as the set of variables that the user creates in a work session. The workspace browser shows these variables and some information about them. Double-clicking on a variable in the workspace browser launches the Array Editor, which can be used to obtain information about, and in some instances edit, certain properties of the variable.

The current directory tab above the workspace tab shows the contents of the current directory, whose path is shown in the current directory window. For example, in the Windows operating system the path might be as follows: C:\MATLAB\Work, indicating that the directory Work is a subdirectory of the main directory MATLAB, which is installed in drive C. Clicking on the arrow in the current directory window shows a list of recently used paths, and clicking on the button to the right of the window allows the user to change the current directory.
MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu of the desktop, and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.

The Command History window contains a record of the commands a user has entered in the command window, including both current and previous MATLAB sessions. Previously entered MATLAB commands can be selected and re-executed from the command history window by right-clicking on a command or sequence of commands. This action launches a menu from which to select various options in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.

5.4 Using the MATLAB editor to create M-files
The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a sub-window in the desktop. M-files are denoted by the extension .m, as in pixelup.m. The MATLAB editor window has numerous pull-down menus for tasks such as saving, viewing, and debugging files. Because it performs some simple checks and also uses color to differentiate between various elements of code, this text editor is recommended as the tool of choice for writing and editing M-functions. To open the editor, type edit at the prompt; typing edit filename opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory, or in a directory on the search path.
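As a minimal illustration, an M-file might look as follows (the function name imgscale and its contents are hypothetical, used only as an example):

% imgscale.m - example M-file (hypothetical name and contents)
% Scales the intensities of a grayscale image by a given factor and
% clips the result to the 8-bit range [0, 255].
function g = imgscale(f, factor)
g = uint8(min(factor * double(f), 255));   % scale and clip
end

Saving this file as imgscale.m in the current directory (or in a directory on the search path) makes it callable from the Command Window, for example as g = imgscale(f, 2);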
5.5 Getting help
The principal way to get help online is to use the MATLAB Help Browser, opened as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by typing helpbrowser at the prompt in the command window. The Help Browser is a web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The Help Browser consists of two panes: the help navigator pane, used to find information, and the display pane, used to view the information. Self-explanatory tabs on the navigator pane are used to perform a search.
6. MATLAB code
Main function:

% Select an image and prepare it for processing.
[filename, pathname] = uigetfile({'*.jpg'; '*.bmp'; '*.png'; '*.gif'}, 'Select an image');
imor = imread(fullfile(pathname, filename));
imor = rgb2gray(imor);
imor = imresize(imor, [256 256]);
figure, imshow(imor), title('Original Image');
img1 = imor;

% Create a degraded image: motion blur followed by additive noise.
len = 15;
theta = 8;
PSF = fspecial('motion', len, theta);
blurred = imfilter(img1, PSF, 'replicate');
w = randn(size(img1));            % note: uint8() below clips negative noise values to zero
degradedim = blurred + uint8(w);
figure, imshow(degradedim), title('Degraded Image');
figure, imhist(degradedim), title('Histogram of Degraded Image');

% PSNR of the degraded image with respect to the original.
mse = sum(sum((double(imor) - double(degradedim)).^2)) / numel(imor);
disp('PSNR of degraded image is:')
PSNR = 20 * log10(255 / sqrt(mse))

% Apply the bilateral filter to the degraded image.
img1 = double(degradedim) / 255;  % scale to [0, 1]
img1(img1 < 0) = 0;               % clip to the valid range
img1(img1 > 1) = 1;

% Set bilateral filter parameters.
w = 5;                            % bilateral filter half-width
sigma1 = [1 5];                   % domain and range standard deviations
                                  % (for an image scaled to [0, 1], a range sigma well below 1,
                                  % e.g. 0.1, gives stronger edge preservation)

bilteralflt = bflt(img1, w, sigma1(1), sigma1(2));
figure, imhist(bilteralflt), title('Histogram of Bilateral Filtered Image');
figure, imshow(bilteralflt), title('Bilateral filter image');

% PSNR of the bilateral filter output with respect to the original.
mse = sum(sum((double(imor) - 255 * bilteralflt).^2)) / numel(imor);
disp('PSNR of bilateral filter output is:')
PSNR = 20 * log10(255 / sqrt(mse))
Sub function:

% Implements bilateral filtering for grayscale images.
function restimg1 = bflt(degim, N, sigma_d, sigma_r)

% Pre-compute the Gaussian domain (distance) weights.
[X, Y] = meshgrid(-N:N, -N:N);
G = exp(-(X.^2 + Y.^2) / (2 * sigma_d^2));

% Apply the bilateral filter pixel by pixel.
sz1 = size(degim);
restimg1 = zeros(sz1);
for i = 1:sz1(1)
    for j = 1:sz1(2)
        % Extract the local region, clipped at the image borders.
        iMin = max(i - N, 1);  iMax = min(i + N, sz1(1));
        jMin = max(j - N, 1);  jMax = min(j + N, sz1(2));
        regoint = degim(iMin:iMax, jMin:jMax);

        % Compute the Gaussian range (intensity) weights.
        H = exp(-(regoint - degim(i, j)).^2 / (2 * sigma_r^2));

        % Combine domain and range weights and compute the filter response.
        F = H .* G((iMin:iMax) - i + N + 1, (jMin:jMax) - j + N + 1);
        restimg1(i, j) = sum(F(:) .* regoint(:)) / sum(F(:));
    end
end
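A practical note (an assumption about how the code is organized): the main script and the bflt routine are listed separately above; for the script to run, bflt should be saved as its own M-file, bflt.m, on the MATLAB search path, or the two can be combined in a single file with bflt as a local function.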
7. Result and Histograms
c) Bilateral filter output image and its histogram
After applying the bilateral filter to the degraded image, we obtain an output that is better than the blurred image, as indicated by its histogram.

Tabular form showing the PSNR values of the images:

Image type                  PSNR value
Degraded image              38.4501
Bilateral filter output     60.1150
From the table, we observe that the peak signal-to-noise ratio (PSNR = 20 log10(255/sqrt(MSE)) for 8-bit images) of the bilateral filter output is higher than that of the degraded image.
8. CONCLUSION
In this paper we have introduced the concept of bilateral filtering for edge-preserving smoothing. The generality of bilateral filtering is analogous to that of traditional filtering, which we called domain filtering in this paper. The explicit enforcement of a photometric distance in the range component of a bilateral filter makes it possible to process color images in a perceptually appropriate fashion. The parameters used for bilateral filtering in our illustrative examples were to some extent arbitrary. This is, however, a consequence of the generality of this technique. In fact, just as the parameters of domain filters depend on image properties and on the intended result, so do those of bilateral filters. Given a specific application, techniques for the automatic design of filter profiles and parameter values may be possible. Also, analogously to what happens for domain filtering, similarity metrics different from Gaussian can be defined for bilateral filtering as well. In addition, range filters can be combined with different types of domain filters, including oriented filters. Perhaps even a new scale space can be defined in which the range filter parameter sigma_r corresponds to scale. In such a space, detail is lost for increasing sigma_r, but edges are preserved at all range scales that are below the maximum image intensity value. Although bilateral filters are harder to analyze than domain filters, because of their nonlinear nature, we hope that other researchers will find them as intriguing as they are to us, and will contribute to their understanding.
9. FUTURE SCOPE
The future scope of this project is to deal with images that are appropriate for digital photography; we do not consider images that are severely degraded. To generate the degraded images, we applied a blur PSF to the original image and added tone-dependent noise to the blurred image. As stated in the introduction, the images of interest are those with a reasonable level of quality, from one's digital camera or scanner, that one would actually want to print. There is a large market for enhancing consumers' digital photographs, such as the self-serve photo kiosks in supermarkets and on-line printing services. These services all apply image-enhancing algorithms to the digital photos before printing them. This is the main application targeted when designing the adaptive bilateral filter (ABF). Therefore, we adopted an image degradation model that simulates the blur and noise introduced by a real hybrid imaging pipeline. The blur PSF and the tone-dependent noise are modeled after an imaging pipeline which involves capturing the original image on silver halide film with a low-cost point-and-shoot camera, using a common photo processor to develop the film and print a 4 x 6 in. silver halide print, and finally scanning the print to obtain a digital image.
10. Bibliography
References:
1. A. Rosenfeld and A. C. Kak, Digital Picture Processing. New York: Academic, 1982, vol. 1.
2. A. C. Bovik, Ed., Regularization in image restoration and reconstruction (W. C. Karl), in Handbook of Image & Video Processing. San Diego, CA: Academic, 2000, ch. 3.6, pp. 141-160.
3. C. B. Atkins, C. A. Bouman, J. P. Allebach, J. S. Gondek, M. T. Schramm, and F. W. Sliz, Computerized Method for Improving Data Resolution, U.S. Patent 058248, 2000.
4. C. B. Atkins, C. A. Bouman, and J. P. Allebach, Optimal image scaling using pixel classification, in Proc. ICIP, 2001, vol. 3, pp. 864-867.
5. H. Hu and G. de Haan, Classification-based hybrid filters for image processing, in Proc. SPIE Int. Soc. Opt. Eng., 2006, vol. 6077, p. 607711.
6. B. Zhang, J. Gondek, M. Schramm, and J. P. Allebach, Improved resolution synthesis for image interpolation, in Proc. IS&T's NIP22, 2006, pp. 343-345.