DIP by K.K.Barik
Digital Image Processing, Module-1 (PECS5406)

Syllabus

Module 1
- Digital image fundamentals
- Image sampling and quantization; relationships between pixels
- Intensity transformations and spatial filtering: some basic intensity transformation functions
- Histogram processing
- Spatial filters for smoothing and sharpening

Module 2
- 2-D DFT and its properties; basic filtering in the frequency domain
- Image smoothing and sharpening
- Image restoration and reconstruction: image restoration/degradation model, noise models
- Restoration in the presence of noise only; estimating the degradation function

Module 3
- Color image processing: color models, color transformations
- Wavelets and multi-resolution processing
- Image compression: fundamentals, some basic compression methods
- Morphological image processing: erosion and dilation, opening and closing

Textbooks
- R. C. Gonzalez, R. E. Woods, Digital Image Processing, 3rd Edition, Pearson Education
- S. Sridhar, Digital Image Processing, Oxford University Press, 2011
- Anil K. Jain, Fundamentals of Digital Image Processing
- R. C. Gonzalez, R. E. Woods, S. L. Eddins, Digital Image Processing Using MATLAB, 2nd Edition, Tata McGraw Hill

Lecture Note-1
Introduction to Digital Image Processing

"One picture is worth more than ten thousand words." (Anonymous)

Instructional Objectives
At the end of this lesson, the students should be able to:
- Define digital image processing
- Explain why image processing is needed
- Outline the history of DIP
- Identify fields that use DIP

1.1: Digital Image Processing
- DIP: the processing of digital images by means of a digital computer.
- An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
- When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
- Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels; "pixel" is the term most widely used to denote the elements of a digital image.

1.2: Why Image Processing?
- Improvement of pictorial information for human perception
  - To make images more beautiful or understandable
- Automatic perception of images
  - Also called machine vision, computer vision, machine perception, or computer recognition
- For storage and transmission
  - Smaller, faster, more effective
- Image processing for new image generation (new trends)
  - Computer graphics has adopted image processing and computer vision technologies

1.3: History of Digital Image Processing
- Early 1920s: one of the first applications of digital imaging was in the newspaper industry
  - The Bartlane cable picture transmission service
  - Images were transferred by submarine cable between London and New York
  - Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer
- Mid to late 1920s: improvements to the Bartlane system resulted in higher-quality images
  - New reproduction processes based on photographic techniques
  - Increased number of tones in reproduced images
- 1960s: improvements in computing technology and the onset of the space race led to a surge of work in digital image processing
  - 1964: computers were used to improve the quality of images of the moon taken by the Ranger 7 probe
  - Such techniques were used in other space missions, including the Apollo landings
- 1970s: digital image processing begins to be used in medical applications
  - 1979: Sir Godfrey N. Hounsfield and Prof. Allan M. Cormack share the Nobel Prize in Medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans
- 1980s to today: the use of digital image processing techniques has exploded, and they are now used for all kinds of tasks in all kinds of areas
  - Image enhancement/restoration
  - Artistic effects
  - Medical visualisation
  - Industrial inspection
  - Law enforcement
  - Human-computer interfaces

1.4: Examples of Fields that Use Digital Image Processing
- The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field.
- Some fields:
  - Biometrics
  - Medical imaging
  - Factory automation
  - Remote sensing
  - Document image processing
  - Defense/military applications
  - Photography
  - Entertainment
  - Satellite imagery
  - Terrain classification
  - Meteorology
  - Law enforcement: number-plate recognition for speed cameras and automated toll systems, fingerprint recognition, enhancement of CCTV images

Lecture Note-2
Digital Image Fundamentals: 1

Instructional Objectives
At the end of this lesson, the students should be able to:
- Define digital image and pixel
- Explain the fundamental steps involved in DIP
- Explain the components of a DIP system

2.1: Introduction
- Scene: what is being observed by the human eye is called a scene or object.
- Image: an optical observation of an object is called an image of that object.
- Digital image: an image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
- When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
- Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels.
  - Pixel is the term most widely used to denote the elements of a digital image.
- Digital Image Processing:
  - The field of processing digital images by means of a digital computer.
  - The process of receiving and analyzing visual information in the digital domain by means of a digital computer.
  - Concerns the transformation of an image to a digital format and its processing by computer or by dedicated hardware; both input and output are digital images.

2.2: Fundamental Steps in Digital Image Processing
(Pipeline: image sensing/acquisition, digitization, enhancement/restoration, segmentation, then representation and description/feature extraction of objects; compression supports transmission and storage.)

- Image Sensing or Acquisition:
  - An image is captured by a sensor (such as a monochrome or colour TV camera) and digitized.
  - If the output of the camera or sensor is not already in digital form, an ADC digitizes it.
  - Note that acquisition could be as simple as being given an image that is already in digital form.
- Image Digitization:
  - Digitization deals with converting an analog image to a digital image.
  - Digitization involves two processes:
    1. Image sampling
    2. Image quantization
- Image Enhancement:
  - To bring out detail that is obscured, or simply to highlight certain features of interest in an image.
- Image Restoration:
  - Improving the appearance of an image.
  - Tends to be based on mathematical or probabilistic models of image degradation.
- Segmentation:
  - Partitions an image into its constituent parts or objects.
  - Autonomous segmentation is one of the most difficult tasks in DIP.
  - A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually.
  - The output of the segmentation stage is raw pixel data, constituting either the boundary of a region or all the points in the region itself.
- Representation and Description:
  - Converts the raw data from segmentation to a form suitable for computer processing.
  - Representation: decide whether the data should be represented as a boundary or as a complete region.
    - Boundary representation focuses on external shape characteristics, such as corners and inflections.
    - Region representation focuses on internal properties, such as texture or skeletal shape.
  - Description (feature selection) deals with extracting attributes.

2.3: Components of a General-Purpose Image Processing System
- Image acquisition: carried out by sensors. Sensing involves two elements:
  - Physical device: sensitive to the energy radiated by the object we wish to image; operates by photochemical or photoelectronic processes. Examples: microdensitometer and flying-spot scanner.
  - Digitizer: converts the output of the physical sensing device into digital form.
- Specialized image processing hardware: consists of a digitizer and an ALU used for performing arithmetic or logical operations on the image; also called the front-end subsystem, valued for its speed.
- Computer: used for performing offline image processing tasks; can range from a general-purpose PC to a supercomputer.
- Software: consists of specialized modules that perform specific tasks on the image, with options for users to write code. A well-designed package allows, at a minimum, the user to write code that integrates those modules with general-purpose commands from at least one computer language.
- Mass storage: essential in image processing applications.
  - Short-term storage: required during processing. Frame buffers that can store one or two images at a time and allow image zooming, scrolling, and panning are used.
  - Online storage: for fast recall; magnetic disks or optical media are used.
  - Archival storage: for infrequent access; magnetic tapes and optical disks are used.
- Image displays: consist of monitors driven by the outputs of image and graphics display cards; sometimes stereo displays.
- Hardcopy: devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks.
- Networking: a vital function, because it is necessary to transmit images. During transmission, bandwidth is the key factor to be considered; the situation is improving with optical fibre and broadband communication.

End

Lecture Note-3
Digital Image Fundamentals 2: Preliminaries of Visual Perception

Instructional Objectives
At the end of this lesson, the students should be able to:
- Explain human visual perception
- Describe the electromagnetic spectrum and light energy
- Define monochromatic light and chromatic light
- Define brightness and gray level

3.1: Human Visual Perception
- DIP is built on a foundation of mathematical and probabilistic formulations.
- Nonetheless, human intuition and analysis play a central role in the choice of one technique versus another.
- The choice is ultimately based on subjective, visual judgments.

Structure of the Human Eye
- Human visual perception is of three types:
  - Photopic vision, or bright-light vision
  - Scotopic vision, or dim-light vision
  - Mesopic vision, or intermediate vision
- For visual perception, or formation of an image, the retina of the eye plays an important role.
- When the eye looks at an object, a reversed (inverted) image of the object is formed on the retinal surface of the eye through photoreceptors.
- Millions of photoreceptor cells are present on the retinal surface; they collect the light reflected by the object and convert the light energy into electrical pulses by photochemical reactions.
- These electrical pulses are transmitted to the brain through the visual cortex via the optic nerves, after which the brain decodes the information from the electrical pulses.
- The photoreceptors are of two types: rods and cones.
- Rods:
  - 75 to 150 million rods are present on the retinal surface.
  - Much less sensitive to coloured light.
  - Responsible for scotopic, or dim-light, vision.
- Cones:
  - Only 6 to 7 million cones, concentrated at the centre of the retinal surface, called the fovea.
  - Very sensitive to coloured light.
  - Responsible for photopic, or bright-light, vision.

3.2: Electromagnetic Spectrum and Light Energy
- Light is just the particular part of the electromagnetic spectrum that can be sensed by the human eye.
- The electromagnetic spectrum is split up according to the wavelengths (equivalently, the frequencies or photon energies) of different forms of energy: gamma rays, hard and soft X-rays, ultraviolet, the visible band (violet, blue, green, yellow, orange, red, roughly 0.4 to 0.7 micrometres), infrared, and onward to radio waves.
- The visible portion of the electromagnetic (EM) spectrum occurs between wavelengths of approximately 400 and 700 nanometres.

3.3: Light
- Light is a particular type of EM radiation that can be seen by the human eye.
- The colours that we perceive are determined by the nature of the light reflected from an object.
- For example, if white light is shone onto a green object, most wavelengths are absorbed, while green light is reflected from the object.
- Types: monochromatic light and chromatic light.
- Monochromatic light:
  - Monochromatic light is void of colour.
  - The intensity of monochromatic light is represented on a gray scale, which varies from black to white.
  - An image produced by this light is called a gray-scale image.
- Chromatic light:
  - Chromatic light spans the EM spectrum from 0.43 µm (violet) to 0.79 µm (red).
  - The amount of chromatic light is called the luminance of the coloured light.
  - An image produced by this light is called a colour image.
- In addition to frequency and wavelength, three basic quantities are used to describe the quality of coloured light:
- Radiance:
  - The amount of light energy that flows from an object or light source is called radiance.
  - Unit: lumens.
- Irradiance, or illumination:
  - The amount of light incident on an object is called irradiance.
- Luminance/intensity:
  - The amount of light energy perceived by an observer or sensor is called the luminance of that object.
  - The luminance of an object does not depend upon the surroundings of that object.
- Brightness:
  - Brightness is a psychovisual concept, which may be described as the sensation of a sensing device to the intensity of light.
  - It is the subjective, achromatic notion of the intensity of an object.
  - It depends on the surrounding intensity of an object.
- Brightness vs. intensity:
  - Brightness is not a simple function of intensity.
  - The visual system tends to undershoot or overshoot around the boundary of regions of different intensities.
  - In the classic Mach-band illustration, the intensity of the stripes is constant, but we actually perceive a brightness pattern that is strongly scalloped near the boundaries.
- The intensity of an object seen by an observer is the product of the irradiance and reflectance of that object. Let
  - i(x, y): irradiance (illumination) component of the object,
  - r(x, y): reflectance component of the object,
  - f(x, y): intensity of the object as seen by an observer;
  then f(x, y) = i(x, y) r(x, y), where 0 < i(x, y) < infinity and 0 < r(x, y) < 1.

Lecture Note-4

Instructional Objectives
At the end of this lesson, the students should be able to:
- Explain image formation models
- Define brightness adaptation
- Define brightness discrimination
- Define the Weber ratio

4.1: Image Formation
- There are two parts to the image formation process or model:
  - Geometric model
  - Photometric model
- Geometric model:
  - The geometry of image formation determines where in the image plane the projection of a point in the scene will be located.
  - It deals with dimension conversion, i.e. 3-D to 2-D.
- A simple model of image formation:
  - The scene is illuminated by a single source.
  - The scene reflects radiation towards the camera.
  - The camera senses it via chemicals on film.
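The illumination-reflectance product f(x, y) = i(x, y) r(x, y) from Section 3.3 can be sketched in a few lines. This is a minimal illustration with made-up numbers, not part of the notes; the function name `image_intensity` and the 2x2 fields are assumptions (the toy code admits the closed interval [0, 1] for reflectance so that perfect absorbers and reflectors can appear in the example).

```python
# Illumination-reflectance model from Section 3.3: f(x, y) = i(x, y) * r(x, y).
# Nested lists stand in for small images; values are illustrative only.

def image_intensity(illumination, reflectance):
    """Combine an illumination field i(x, y) with a reflectance field r(x, y).

    The notes require 0 < i < infinity and 0 < r < 1; this toy version
    allows r = 0 (perfect absorber) and r = 1 (perfect reflector)."""
    rows, cols = len(illumination), len(illumination[0])
    f = [[0.0] * cols for _ in range(rows)]
    for x in range(rows):
        for y in range(cols):
            i_xy, r_xy = illumination[x][y], reflectance[x][y]
            assert i_xy > 0 and 0 <= r_xy <= 1
            f[x][y] = i_xy * r_xy
    return f

# Uniform illumination of 100 units over a 2x2 scene with varying reflectance:
i_field = [[100.0, 100.0], [100.0, 100.0]]
r_field = [[0.0, 0.5], [0.25, 1.0]]
print(image_intensity(i_field, r_field))  # [[0.0, 50.0], [25.0, 100.0]]
```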
  (Figure: simple image formation model, showing the light source, the scene, the optical axis, and the camera's sensor element; not reproduced here.)
- Pinhole camera:
  - This is the simplest device to form an image of a 3-D scene on a 2-D surface.
  - Straight rays of light pass through a "pinhole" and form an inverted image of the object on the image plane.
- Photometric model:
  - The photometric model deals with energy conversion.
  - Let f(x, y) be the input intensity to the energy transformation system (ETS), g(x, y) the output intensity of the ETS, and h(x, y) the impulse response of the ETS, i.e. h(x, y) = T[delta(x, y)].
  - The simplest photometric model is based on some assumptions:
    - The energy transformation system (ETS) is a position-invariant system.
    - Non-negativity, i.e. f(x, y) >= 0 and g(x, y) >= 0.
    - Superposition (linearity).
    - Intensity distribution is a neighbouring process.
  - As the ETS is position invariant, h(x - a, y - b) = T[delta(x - a, y - b)].
  - Hence
    g(x, y) = T[f(x, y)] = T[ integral integral f(a, b) delta(x - a, y - b) da db ].
  - Since the system is linear,
    g(x, y) = integral integral f(a, b) T[delta(x - a, y - b)] da db,
    and since it is linear as well as position invariant,
    g(x, y) = integral integral f(a, b) h(x - a, y - b) da db,
    so g(x, y) = f(x, y) * h(x, y).
  - If the ETS is associated with some additive noise, then
    g(x, y) = f(x, y) * h(x, y) + n(x, y).

4.2: Brightness Adaptation
- In the human visual system, the rod photoreceptors provide scotopic vision, while the cones provide photopic vision.
- The eye can adapt to an enormous range (on the order of 10^10) of light intensity, from the scotopic threshold to the glare limit.
- Experimentally it is observed that subjective brightness (i.e. perceived intensity) is a logarithmic function of the light intensity incident on the eye. In photopic vision alone, the range is about 10^6.
- It is also observed that the human visual system cannot operate over such a huge range simultaneously; instead, it changes its overall sensitivity to adapt to the huge range of brightness from time to time. This mechanism is called brightness adaptation.
- For example, if the eye is adapted to brightness level Ba, a short intersecting curve (in the plot of subjective brightness against the log of intensity, spanning the scotopic threshold to the glare limit) represents the range of subjective brightness perceived by the eye at that adaptation level. This range is rather restricted: below a level Bb, all stimuli are perceived as indistinguishable black.
- The upper part of the curve (dashed) is not actually restricted, but when extended too far it loses its meaning, as it raises the adaptation level higher than Ba.

4.3: Brightness Discrimination and the Weber Ratio
- Human perception is very sensitive to the intensity of an object relative to the background of that object, rather than to the absolute value of the intensity.
- The ability of the eye to discriminate between changes in brightness at any specific adaptation level is of considerable interest.
- For clear human perception of an object, there should be some difference between the background intensity and the foreground intensity of the object. This concept is called brightness discrimination.
- Let I be uniform illumination on a flat area large enough to occupy the entire field of view, and let dIc be the change in object brightness required to just distinguish the object from the background.
- The quantity dIc/I, where dIc is the increment of illumination discriminable 50% of the time against background illumination I, is called the Weber ratio; for the object to be distinguishable, roughly dIc/I > 0.02.
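The Weber-ratio criterion above can be turned into a one-line check. A sketch, using the approximate 0.02 threshold stated in the notes; the helper name `is_discriminable` and the sample intensities are assumptions for illustration.

```python
# Weber-ratio check from Section 4.3: an intensity step dI on background I is
# taken as just noticeable when dI / I exceeds roughly 0.02.

def is_discriminable(background, increment, weber_threshold=0.02):
    """Return True if the increment can be distinguished from the background."""
    if background <= 0:
        raise ValueError("background intensity must be positive")
    return increment / background > weber_threshold

print(is_discriminable(100.0, 5.0))  # True:  5/100 = 0.05  > 0.02
print(is_discriminable(100.0, 1.0))  # False: 1/100 = 0.01 <= 0.02
```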
- Weber ratio: dI/I.

End

Lecture Note-5
Digitization

Instructional Objectives
At the end of this lesson, the students should be able to:
- Define the 1-D and 2-D comb functions
- Explain image digitization
- Define 2-D sampling
- Represent digital images
- Define resolution

5.1: 1-D Comb Function
- The 1-D comb function consists of a train of impulses.
- It is represented by the symbol comb(x) or comb(x, Dx):
  comb(x, Dx) = sum over r of delta(x - r Dx),
  where r is an integer running from minus infinity to plus infinity.
- Let FT[comb(x, Dx)] = COMB(Wx); then
  COMB(Wx) = (1/Dx) sum over k of delta(Wx - k/Dx).

5.2: 2-D Comb Function
- The 2-D comb function is a plane of impulses, represented by comb(x, y) or comb(x, y; Dx, Dy):
  comb(x, y) = sum over r sum over c of delta(x - r Dx, y - c Dy).
- Let FT[comb(x, y)] = COMB(Wx, Wy), where
  COMB(Wx, Wy) = (1/(Dx Dy)) sum over k sum over l of delta(Wx - k/Dx, Wy - l/Dy).

5.3: Digitization
- Computer processing of images requires that images be available in digital form.
- Image digitization is the process of converting an analog image to a digital image.
- Digitization includes sampling and quantization.
- Digitization of the coordinate values is sampling:
  - Sampling deals with digitizing the analog image into a discrete image.
  - Image representation by a finite 2-D matrix is called sampling.
- Digitization of the amplitude values is quantization:
  - Quantization deals with digitizing the continuous amplitude of the analog signal.
  - Representing each matrix element by one of a finite set of discrete values is called quantization.

5.3.1: 1-D Sampling
- Let g(x) be an analog signal band-limited to Wx, i.e. FT[g(x)] = G(w) with G(w) = 0 for |w| > Wx.
- Time domain: the sampler multiplies g(x) by comb(x, Dx) to produce the sampled version gs(x):
  gs(x) = g(x) comb(x)
        = g(x) sum over r of delta(x - r Dx)
        = sum over r of g(r Dx) delta(x - r Dx),
  where g(r Dx) = g(r) is the r-th sample of g(x), and
  g(r) = integral of g(x) delta(x - r Dx) dx.
- Frequency domain: let FT[g(x)] = G(w) and FT[gs(x)] = Gs(w). Then
  Gs(w) = G(w) * COMB(w) = (1/Dx) sum over k of G(w - k/Dx),
  i.e. sampling replicates the spectrum at intervals of 1/Dx.
- 1-D sampling theorem: if g(x) is band-limited to Wx, then g(x) can be converted into its sampled version and reconstructed exactly from its samples if and only if
  1/Dx >= 2 Wx, i.e. Dx <= 1/(2 Wx).

5.3.2: 2-D Sampling
- Let g(x, y) be an analog image band-limited to Wx and Wy along the x and y axes, and let gs(x, y) be its sampled version.
- Time domain: the 2-D sampler multiplies g(x, y) by comb(x, y; Dx, Dy):
  gs(x, y) = g(x, y) comb(x, y)
           = sum over r sum over c of g(r Dx, c Dy) delta(x - r Dx, y - c Dy),
  where g(r Dx, c Dy) = g(r, c) is the sampled version of g(x, y), and
  g(r, c) = integral integral g(x, y) delta(x - r Dx, y - c Dy) dx dy.
- Frequency domain:
  Gs(Wx, Wy) = G(Wx, Wy) * COMB(Wx, Wy),
  so the Fourier spectrum of the sampled 2-D signal consists of replicas of G spaced 1/Dx apart along Wx and 1/Dy apart along Wy.
- 2-D sampling theorem: if an image signal g(x, y) is band-limited to Wx and Wy, then it can be converted into its sampled version and exactly recovered from its samples if and only if
  1/Dx >= 2 Wx and 1/Dy >= 2 Wy.

5.4: Image Quantization
- The sampled image is passed through a quantizer, which maps each sample's continuous amplitude to one of a finite set of levels.

5.5: Representing Digital Images
- The result of sampling and quantization is a matrix of real numbers:

  f(x, y) = | f(0,0)     f(0,1)     ...  f(0,N-1)    |
            | f(1,0)     f(1,1)     ...  f(1,N-1)    |
            | ...                                    |
            | f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1)  |

  a digital image of size M x N.
- It is advantageous to use a more traditional matrix notation to denote a digital image and its elements.
  A = | a(0,0)     a(0,1)     ...  a(0,N-1)    |
      | a(1,0)     a(1,1)     ...  a(1,N-1)    |
      | ...                                    |
      | a(M-1,0)   a(M-1,1)   ...  a(M-1,N-1)  |

- Image coordinate system (coordinate convention used to represent digital images):
  - The values of the coordinates at the origin are (x, y) = (0, 0).
  - The next coordinate values along the first row are (x, y) = (0, 1).
  - The notation (0, 1) is used to signify the 2nd sample along the 1st row.
- The number of bits required to store a digitized image is b = M x N x k, where M and N are the numbers of rows and columns, respectively.
- The number of gray levels is an integer power of 2: L = 2^k, where k = 1, 2, ..., 24.
- It is common practice to refer to the image as a "k-bit image".

5.6: Analytical Representation of a Digital Image
- Let g(r, c) be a digital image of size M x N. For mathematical analysis, g(r, c) is normally represented by a column matrix of size MN x 1; this conversion is called lexicographic ordering.
- From the photometric model of image formation we know that
  g(x, y) = f(x, y) * h(x, y) + n(x, y).
- After converting g(x, y) into a digital image g(r, c) of size M x N, this is represented by
  g(r, c) = f(r, c) * h(r, c) + n(r, c).
- After lexicographic ordering of g(r, c), it can be represented in column-matrix form as
  g = Hf + n,
  where n is the noise matrix of order MN x 1, g is a column matrix of order MN x 1, and H is a square matrix, called the degradation matrix, of order MN x MN.

5.7: Resolution
- Resolution (how much detail you can see in the image) depends on sampling and the number of gray levels.
- The bigger the sampling rate and the gray scale, the better the digitized image approximates the original.
- The finer the quantization scale becomes, the bigger the size of the digitized image.
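The size trade-off just mentioned follows directly from the storage formula b = M x N x k of Section 5.5 and the gray-level count L = 2^k. A small sketch (the function names are assumptions for illustration):

```python
# Storage requirement from Section 5.5: an M x N image with k bits per pixel
# needs b = M * N * k bits and supports L = 2**k gray levels.

def image_storage_bits(M, N, k):
    return M * N * k

def gray_levels(k):
    return 2 ** k

# An 8-bit 1024 x 1024 image:
b = image_storage_bits(1024, 1024, 8)
print(b, "bits =", b // 8, "bytes")   # 8388608 bits = 1048576 bytes
print(gray_levels(8), "gray levels")  # 256 gray levels
```

Doubling k doubles the storage but squares the number of representable gray levels, which is why 8 bits is the common compromise noted below.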
- Spatial resolution:
  - Spatial resolution is the smallest discernible detail in an image.
  - Sampling is the principal factor determining the spatial resolution of an image.
  - Resolution is the smallest number of discernible line pairs per unit distance.
  - Example: 100 line pairs per millimetre.
- Gray-level resolution:
  - Gray-level resolution refers to the smallest discernible change in gray level (which is highly subjective).
  - Considerable discretion can be exercised regarding the number of samples used to generate a digital image, but this is not true for the number of gray levels.
  - The most common number is 8 bits, with 16 bits being used in some applications where enhancement of specific gray-level ranges is necessary.
  - Bit depth: the number of bits required to represent the intensity or gray level of a pixel.
- Units or standards of spatial resolution:
  - DPI: number of dots per inch
  - PPI: number of pixels per inch
  - LPI: number of lines per inch

End

Lecture Note-6
Image Topology: Basic Relations between Pixels

Instructional Objectives
At the end of this lesson, the students should be able to:
- Define the neighbors of a pixel
- Explain adjacency
- Define digital paths and pixel connectivity
- Explain the distance metrics of DIP

6.1: Neighbors of a Pixel
Types:
- 4-neighbors
- diagonal neighbors
- 8-neighbors

- 4-neighbors:
  - Let P be a pixel of an image. The 4-neighbors of P are represented by N4(P).
  - A pixel P at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are (x+1, y), (x-1, y), (x, y+1), (x, y-1).
  - N4(P) = { (x+1, y), (x-1, y), (x, y+1), (x, y-1) }
  - Some of the neighbors of P lie outside the digital image if (x, y) is on the border of the image.
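The border case just noted is easy to handle by clipping neighbor coordinates to the image bounds. A minimal sketch; the function name `n4_in_image` and the 3 x 3 example are assumptions for illustration.

```python
# N4(p) from Section 6.1, keeping only neighbors that fall inside an
# M x N image (border pixels have fewer in-image neighbors).

def n4_in_image(x, y, M, N):
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(u, v) for (u, v) in candidates if 0 <= u < M and 0 <= v < N]

print(n4_in_image(1, 1, 3, 3))  # interior pixel: all 4 neighbors survive
print(n4_in_image(0, 0, 3, 3))  # corner pixel: only 2 neighbors survive
```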
- Diagonal neighbors:
  - The diagonal neighbors of P are represented by ND(P).
  - The four diagonal neighbors of P have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1).
  - ND(P) = { (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) }
- 8-neighbors:
  - The 8-neighbors of P are represented by N8(P).
  - The 8-neighbors of a pixel P are the combination of N4(P) and ND(P): N8(P) = N4(P) union ND(P).
  - N8(P) = { (x+1, y), (x-1, y), (x, y+1), (x, y-1), (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) }

6.2: Adjacency
- The concept of adjacency between pixels simplifies digital image concepts such as regions and boundaries.
- To establish whether two pixels are connected, it must be determined whether they are neighbors and whether their gray levels satisfy a specified criterion of similarity.
- V will be used to denote the set of gray-level values used to define adjacency.
  - Example: for a binary image V = {0, 1}, and so on.
- Types of pixel adjacency:
  - 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
  - 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
  - m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
    - q is in N4(p), or
    - q is in ND(p) and the set N4(p) intersect N4(q) has no pixels whose values are from V.
- Mixed adjacency is a modification of 8-adjacency.
- It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used.
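The neighbor sets and the 4-adjacency test above can be sketched directly. This is an illustrative sketch, not from the notes; the choice V = {1} (adjacency among foreground pixels of a binary image) and all function names are assumptions.

```python
# N4, ND, N8 from Section 6.1 and a 4-adjacency test from Section 6.2.

def n4(x, y):
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    return n4(x, y) | nd(x, y)  # N8(P) = N4(P) union ND(P)

def four_adjacent(img, p, q, V=frozenset({1})):
    """p and q are 4-adjacent if q is in N4(p) and both values are in V."""
    return q in n4(*p) and img[p[0]][p[1]] in V and img[q[0]][q[1]] in V

img = [[0, 1, 0],
       [1, 1, 0],
       [0, 0, 1]]
print(len(n8(1, 1)))                       # 8 distinct neighbors
print(four_adjacent(img, (0, 1), (1, 1)))  # True: vertical neighbors, both 1
print(four_adjacent(img, (1, 1), (2, 2)))  # False: (2,2) is only diagonal
```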
6.3: Digital Paths and Pixel Connectivity
- Digital path: a (digital) path, or curve, from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates
  (x0, y0), (x1, y1), ..., (xn, yn),
  where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 <= i <= n. Here n is the length of the path.
- If (x0, y0) = (xn, yn), then the path is a closed path.
- We can define 4-, 8-, or m-paths depending on the type of adjacency specified.
- Connectivity:
  - Two pixels are said to be connected if they have the same value and there is a path between them.
  - Let S be a set of pixels.
    - For any pixel p in S, the set of pixels that are connected to it is called a connected component of S.
    - If S has only one connected component, S is called a connected set.
  - Let R be a subset of pixels in an image. We call R a region of the image if R is a connected set.
  - The boundary (also called border or contour) of a region R is the set of pixels in the region that have one or more neighbors that are not in R.
  - If R happens to be an entire image (which we recall is a rectangular set of pixels), then its boundary is defined as the set of pixels in the first and last rows and columns of the image.

6.4: Boundary of a Region R
- A pixel in the boundary (contour) has at least one 4-adjacent neighbor whose value is 0.

6.5: Edges and Boundaries
- The boundary of a finite region forms a closed path and is thus a "global" concept.
- Edges are formed from pixels with derivative values that exceed a preset threshold. The idea of an edge is a "local" concept that is based on a measure of gray-level discontinuity at a point.
- Edges are intensity discontinuities, and boundaries are closed paths.

6.6: Distance Measures
- For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively:
- D is a distance function, or metric, if
  - D(p, q) >= 0, with D(p, q) = 0 if and only if p = q,
  - D(p, q) = D(q, p), and
  - D(p, z) <= D(p, q) + D(q, z).
- Types:
  - Euclidean distance
  - City-block distance, or D4 distance
  - Chessboard distance, or D8 distance
- Euclidean distance:
  - The Euclidean distance between p and q is defined as
    De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2).
  - For this distance measure, the pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centered at (x, y).
- City-block distance, or D4 distance:
  - The D4 distance (also called city-block distance) between p and q is defined as
    D4(p, q) = |x - s| + |y - t|.
  - In this case, the pixels having a D4 distance from (x, y) less than or equal to some value r form a diamond centered at (x, y).
  - For example, the pixels with D4 distance <= 2 from (x, y) (the center point) form the following contours of constant distance:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

  - The pixels with D4 = 1 are the 4-neighbors of (x, y).
- Chessboard distance, or D8 distance:
  - The D8 distance between p and q is defined as
    D8(p, q) = max(|x - s|, |y - t|).
  - In this case, the pixels with D8 distance from (x, y) less than or equal to some value r form a square centered at (x, y):

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

  - The pixels with D8 = 1 are the 8-neighbors of (x, y).
- The D4 and D8 distances between p and q are independent of any paths that might exist between the points, because these distances involve only the coordinates of the points.

End

Lecture Note-7
Image Enhancement in the Spatial Domain by Intensity Transformations: Part 1

Instructional Objectives
At the end of this lesson, the students should be able to:
- Explain image enhancement
- Define point processing, also called intensity transformation or gray-level transformation
- Define the image negative
- Explain the log transformation
- Define the gamma transformation

7.1: Introduction
- Image enhancement is the process of making images more useful.
- Image enhancement is used to enhance certain features of the image.
> The reasons for doing this include:
* Highlighting interesting detail in images
* Removing noise from images
* Making images more visually appealing
> The result is more suitable than the original image for certain specific applications.
> Examples: (enhancement example images)

7.1.1: Different Enhancement Techniques
> There are two broad categories of image enhancement techniques:
> Spatial domain techniques
* Work in the image plane itself
* Direct manipulation of the pixels in an image
> Frequency domain techniques
* Modify the Fourier transform coefficients of an image
* Take the inverse Fourier transform of the modified coefficients to obtain the enhanced image
> We will focus on the spatial domain methods.

7.2: Spatial domain Enhancement techniques
> Spatial domain enhancement techniques operate directly on the pixels composing an image.
> Pixel gray-level information is used for spatial domain enhancement techniques.
> Let the output gray level of pixel f(x,y) be denoted by g(x,y), where
g(x,y) = T[f(x,y)]
* T: mapping function (gray-level transformation); it is an operator on f defined over a neighborhood of the point (x,y)
* g(x,y): output gray level
> Usually a neighborhood about image position (x,y) is considered by using a square or rectangular pixel area centered at (x,y).

> Types of spatial domain enhancement techniques
> There are three broad categories of spatial domain enhancement techniques:
* Point Processing
* Neighborhood Processing
* Histogram Processing
> Point Processing: the mapping function T depends only on the value of f at (x,y).
> Neighborhood Processing: the mapping function T depends on the value of f at (x,y) and on the neighboring pixels of f(x,y).
> Histogram Processing: the mapping function T depends on all the pixels of an image.
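As a sketch of point processing (the function names are my own, not from the notes), a transform T(r) is applied independently to every pixel:

```python
def point_process(image, T):
    """Apply a gray-level transformation T(r) to every pixel (point processing)."""
    return [[T(r) for r in row] for row in image]

# Example usage: a simple threshold used as the mapping function T
img = [[10, 200], [90, 140]]
binary = point_process(img, lambda r: 255 if r >= 128 else 0)
```

Neighborhood and histogram processing differ only in what T is allowed to look at, not in this per-pixel application pattern.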
7.3: Intensity Transformations
> Point processing is also called:
* Intensity transformation, or
* Gray-level transformation, or
* Zero-memory operation
> When the neighborhood is 1 x 1, g depends only on the value of f at (x,y), and T becomes a gray-level transformation (or mapping) function:
s = T(r)
where r and s denote, respectively, the intensities of f and g at any point (x, y).
> Types:
* Image negatives
* Log transformations
* Power-law transformations
* Piecewise-linear transformation functions:
  - Thresholding
  - Contrast stretching
  - Gray-level slicing
  - Bit-plane slicing

7.3.1: Image Negatives
> The image negative is a point process where s = T(r).
> The image negative is obtained by:
s = L - 1 - r
where [0, L-1] denotes the intensity range of the image.
> Example (with L = 8): input pixels 1, 4, 5, 7 map to 6, 3, 2, 0.
> The function reverses the order from black to white, so that the intensity of the output image decreases as the intensity of the input increases.
> Used mainly in medical images and to produce slides of the screen.

7.3.2: Log Transformations
> The log transformation of an image is obtained by
s = c log(1 + r)
where c is a constant.
> The log transformation maps a narrow range of low gray-level values into a wider range of output levels; the opposite is true of higher values of input levels.
> It is used to accomplish spreading and compressing of gray levels.
> Important: it compresses the dynamic range of images with large variations in pixel levels. Example: Fourier spectra.

7.3.3: Power-Law (Gamma) Transformation
> The power-law transformation of an image is obtained by
s = c r^gamma
where c and gamma are positive constants.
> It maps a narrow range of low gray-level values into a wider range of output levels when gamma is fractional; the opposite is true of higher values of input levels when gamma is greater than 1.
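A minimal sketch of the three transformations above (plain Python; intensities are assumed in [0, L-1], and the normalization constants are one common choice, not the only one):

```python
import math

L = 256  # number of intensity levels

def negative(r):
    """s = L - 1 - r"""
    return L - 1 - r

def log_transform(r, c=None):
    """s = c * log(1 + r); c chosen so that r = L-1 maps back to L-1."""
    if c is None:
        c = (L - 1) / math.log(L)
    return c * math.log(1 + r)

def gamma_transform(r, gamma, c=1.0):
    """s = c * r^gamma, computed on intensities normalized to [0, 1]."""
    return c * ((r / (L - 1)) ** gamma) * (L - 1)
```

With gamma < 1 the midtones are brightened (e.g. gamma_transform(64, 0.5) is about 128), which is the "narrow dark range spread wide" behavior described above.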
> The process used to correct the power-law response phenomena is called gamma correction.
> Gamma correction is important when displaying images on a computer screen.

(Figure: gamma correction of a monitor: image as viewed on the monitor, and after gamma correction.)
(FIGURE 3.9: (a) Aerial image. (b)-(d) Results of applying the power-law transformation with c = 1 and gamma = 3.0, 4.0, and 5.0, respectively. Gamma-corrected images.)

End

Lecture Note-8
Image Enhancement in Spatial Domain: by Intensity Transformations: Part-2

Instructional Objectives
At the end of this lesson, the students should be able to:
> Explain piecewise-linear transformations
> Define thresholding
> Define bit-plane slicing
> Explain gray-level slicing

8.1: Piecewise-Linear Transformation Functions: Introduction
> Piecewise-linear transformations are a complementary approach to the previous methods of point processing.
> These functions can be arbitrarily complex.
> Some important transformations are purely piecewise-linear transforms.
> Disadvantage: their specification requires more user input.
> Types:
* Thresholding
* Contrast stretching
* Gray-level slicing
* Bit-plane slicing

8.2: Thresholding
> Thresholding is a piecewise-linear transformation:
s = L - 1  for r > m
s = 0      for r <= m
where m is the threshold.
> Thresholding is a special case of clipping.
> Useful for binarization of scanned binary images: documents, signatures, fingerprints.

(Figure: thresholding transfer function, mapping the dark-to-light input range to a binary output.)

8.3: Gray-level slicing
> It is also called intensity-level slicing.
> It is used to highlight a specific range of gray levels in an image (e.g. to enhance certain features).
> One way is to display a high value for all gray levels in the range of interest (ROI) and a low value for all other gray levels (producing a binary image).
> The second approach is to brighten the desired range of gray levels but preserve the background and gray-level tonalities in the image.

(FIGURE 3.12: (a) Aortic angiogram. (b) Result of using a slicing transformation with the range of intensities of interest selected in the upper end of the gray scale. (c) Result of using a transformation with the selected area set to black, so that grays in the area of the blood vessels and kidneys were preserved. Original image courtesy of Dr. Thomas R. Gest, University of Michigan Medical School. Enhanced images by intensity-level slicing.)

8.4: Contrast Stretching
> A low-contrast image may be due to poor illumination, lack of dynamic range in the sensor, or a wrong setting of the lens aperture.
> Contrast stretching is used to increase the dynamic range of the gray levels in the image being processed.
> Contrast stretching is a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device.
> The locations of (r1, s1) and (r2, s2) control the shape of the transformation function:
* If r1 = s1 and r2 = s2, the transformation is a linear function and produces no changes.
* If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes a threshold function that creates a binary image.
> More on function shapes:
* Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the gray levels of the output image, thus affecting its contrast.
* Generally, r1 <= r2 and s1 <= s2 is assumed.
> Linear contrast stretching mapping:
In linear contrast stretching the output intensity s is given by
s = [(s2 - s1) / (r2 - r1)] (r - r1) + s1

8.5: Bit-Plane Slicing
> Bit-plane slicing is used to highlight the contribution made to the total image appearance by changing specific bits of the gray levels.
* i.e.:
Assuming that each pixel is represented by 8 bits, the image is composed of eight 1-bit planes:
* Plane 0 contains the least significant bit and plane 7 contains the most significant bit.
* Only the higher-order bits (the top four) contain visually significant data; the other bit planes contribute the more subtle details.
> Application:
* Aids in determining the adequacy of the number of bits used to quantize each pixel.
* Also used in image compression.

(Figure: an 8-bit pixel decomposed into bit plane 7 (most significant) down to bit plane 0 (least significant).)

Example: Show the 3-bit bit-plane slicing of the following image:

5 | 6 | 7
2 | 3 | 4
1 | 1 | 0

Ans: The binary equivalents of the pixels are:

101 | 110 | 111
010 | 011 | 100
001 | 001 | 000

> If the LSB (plane 0) is set to zero:

100 | 110 | 110        4 | 6 | 6
010 | 010 | 100   =    2 | 2 | 4
000 | 000 | 000        0 | 0 | 0

> If the MSB (plane 2) is set to zero:

001 | 010 | 011        1 | 2 | 3
010 | 011 | 000   =    2 | 3 | 0
001 | 001 | 000        1 | 1 | 0

(Figure: bit planes of an 8-bit image, from bit plane 7 down to bit plane 0.)

End

Lecture Note-9
Image Enhancement in Spatial Domain: by Histogram Processing

Instructional Objectives
At the end of this lesson, the students should be able to:
> Explain the histogram
> Define the normalized histogram
> Define the histograms of dark and bright images

9.1: Introduction
> The histogram provides a global description of the appearance of the image.
> The histogram of an image shows us the distribution of grey levels in the image.
> Massively useful in image processing, especially in segmentation.
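Computing a histogram is a single pass over the pixels; a sketch in plain Python (the function names are my own):

```python
def histogram(image, L):
    """Count h[k] = number of pixels with gray level k, for k = 0..L-1."""
    h = [0] * L
    for row in image:
        for r in row:
            h[r] += 1
    return h

def normalized_histogram(image, L):
    """p[k] = h[k] / MN, an estimate of the probability of level k."""
    h = histogram(image, L)
    mn = sum(h)
    return [count / mn for count in h]
```

The normalized values always sum to 1, which is the property the next section relies on.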
9.1.1: Histogram
> The histogram represents the relative frequency of occurrence of the various gray levels in the image.
* For each gray level, count the number of pixels having that level.
* Nearby levels can be grouped into a larger bin, counting the number of pixels that fall in it.
> The histogram of a digital image with gray levels from 0 to L-1 is a discrete function
h(rk) = nk
where:
* rk is the k-th gray level
* nk is the number of pixels in the image with that gray level
* N = MN is the total number of pixels in the image
* k = 0, 1, 2, ..., L-1
> It is common practice to normalize a histogram by dividing each of its values by the total number of pixels in the image, denoted by the product MN. Thus, a normalized histogram is given by p(rk) = nk / MN.
> The sum of all components of a normalized histogram is equal to 1.

> Example: a 4x4 image and its histogram:

1 | 6  | 6  | 6
6 | 3  | 11 | 8
8 | 8  | 9  | 10
9 | 10 | 10 | 7

k      0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
h(k)   0 1 0 1 0 0 4 1 3 3 3  1  0  0  0  0

> Examples: further 4x4 images and their histograms. The examples show that changing the intensities of an image changes its histogram, and vice versa.

9.1.2: Normalised Histogram of an Image
> The normalised histogram function is the histogram function divided by the total number of pixels of the image:
p(rk) = h(rk) / N = nk / MN
> It gives a measure of how likely it is for a pixel to have a certain intensity; that is, it gives the probability of occurrence of that intensity.
> The sum of the normalised histogram function over the range of all intensities is 1.
> Example of a normalised histogram: consider a 5x5 image with integer intensities in the range between one and eight.
h(r1) = 8    p(r1) = 8/25 = 0.32
h(r2) = 4    p(r2) = 4/25 = 0.16
h(r3) = 3    p(r3) = 3/25 = 0.12
h(r4) = 2    p(r4) = 2/25 = 0.08
h(r5) = 2    p(r5) = 2/25 = 0.08
h(r6) = 0    p(r6) = 0/25 = 0.00
h(r7) = 1    p(r7) = 1/25 = 0.04
h(r8) = 5    p(r8) = 5/25 = 0.20

> Some points on histograms:
> Interpretation:
* Treat pixel values as instantiations of a random variable.
* The histogram is an estimate of the probability distribution of that random variable.
> An "unbalanced" histogram does not fully utilize the dynamic range:
* Low-contrast image: histogram concentrated in a narrow gray-level range.
* Under-exposed image: histogram concentrated on the dark side.
* Over-exposed image: histogram concentrated on the bright side.
> A balanced histogram gives a more pleasant look and reveals rich content.
> The shape of a histogram provides useful information for contrast enhancement.

(Figure: some typical histograms of dark, bright, low-contrast, and high-contrast images.)

End

Lecture Note-10
Image Enhancement in Spatial Domain: by Histogram Equalization (HE)

Instructional Objectives
At the end of this lesson, the students should be able to:
> Explain HE
> Define the application field of HE
> Define the conditions of HE
> Apply HE to images

10.1: Introduction to HE
> Histogram equalization can be used to improve the visual appearance of an image.
> Histogram equalization automatically determines a transformation function that produces an output image with a near-uniform histogram.
> Goal: map the grey level of each pixel to a new value such that the output image has an approximately uniform distribution of gray levels.
> So we need to design a gray-value transformation s = T(r), based on the histogram of the input image, which will enhance the image.

(Figure: original image and its histogram; equalized image and its histogram.)

10.2:
Histogram Equalization
> HE maps the grey level of each pixel to a new value such that the output image has an approximately uniform distribution of gray levels.
> As before, we assume that:
* The intensity levels in an image may be viewed as random variables in the interval [0, L-1].
* p_r(r) and p_s(s) denote the probability density functions (PDFs) of the random variables r and s.

(FIGURE 3.18: (a) An arbitrary PDF. (b) Result of applying the equalization transformation to all intensity levels r. The resulting intensities s have a uniform PDF, independently of the form of the PDF of the r's.)

> HE uses the transformation s = T(r), where:
* T(r) is a strictly monotonically increasing function in the interval 0 <= r <= L-1;
* 0 <= T(r) <= L-1 for 0 <= r <= L-1.
> In the continuous case the equalizing transformation is
s = T(r) = (L-1) * integral from 0 to r of p_r(w) dw
> In the discrete case this becomes
s_k = T(r_k) = (L-1) * sum for j = 0..k of p_r(r_j) = ((L-1)/MN) * sum for j = 0..k of n_j

> Example-1: A 3-bit image (L = 8) of size 64 x 64 pixels (MN = 4096) has the intensity distribution below. Perform histogram equalization.

r_k      n_k    p_r(r_k)   s_k = 7 * sum p_r(r_j)         rounded s_k
r0 = 0   790    0.19       7 x 0.19 = 1.33                1
r1 = 1   1023   0.25       7 x (0.19+0.25) = 3.08         3
r2 = 2   850    0.21       7 x (0.19+0.25+0.21) = 4.55    5
r3 = 3   656    0.16       5.67                           6
r4 = 4   329    0.08       6.23                           6
r5 = 5   245    0.06       6.65                           7
r6 = 6   122    0.03       6.86                           7
r7 = 7   81     0.02       7.00                           7

(FIGURE 3.19: Illustration of histogram equalization of a 3-bit (8 intensity levels) image. (a) Original histogram. (b) Transformation function. (c) Equalized histogram.)

Example-2: Perform HE on a given image matrix with gray levels r in [0, 7], producing the before-HE and after-HE image matrices.

10.4: Comments on "HE"
> Histogram equalization (HE) results are similar to contrast stretching but offer the advantage of full automation, since HE automatically determines a transformation function to produce a new image with a near-uniform histogram.
> Histogram equalization may not always produce desirable results, particularly if the given histogram is very narrow:
* It can produce false edges and regions.
* It can also increase image "graininess" and "patchiness."

End

Lecture Note-11
Image Enhancement in Spatial Domain: by Histogram Specification

Instructional Objectives
At the end of this lesson, the students should be able to:
> Define histogram specification
> Explain how to do histogram matching
> Define local histogram matching

11.1: Introduction
> Histogram equalization yields an image whose pixels are (in theory) uniformly distributed among all gray levels.
> Sometimes, this may not be desirable. Instead, we may want a transformation that yields an output image with a pre-specified histogram. This technique is called histogram specification.

11.2: Histogram Specification
> Histogram specification is otherwise called histogram matching.
> Histogram matching (histogram specification) is used to generate a processed image that has a target (specified) histogram.
> For histogram specification:
* Given information:
  - The input image, from which we can compute its histogram.
  - The desired histogram.
* Goal:
  - Derive a point operation H(r) that maps the input image into an output image that has the user-specified histogram.
* Approach of derivation:
  input image -> (equalize) -> uniform image -> (inverse of the target's equalization) -> output image

11.3: Histogram Matching: Procedure
Let p_r(r) and p_z(z) denote the continuous probability density functions of the variables r and z, where p_z(z) is the specified probability density function, and let s be a random variable.
* For the continuous case:
> Obtain p_r(r) from the input image and then obtain the values of s:
s = (L-1) * integral from 0 to r of p_r(w) dw
> Use the specified PDF and obtain the transformation function G(z):
G(z) = (L-1) * integral from 0 to z of p_z(t) dt = s
> Mapping from s to z:
z = G^(-1)(s)
* For the discrete case:
> Obtain p_r(r_j) from the input image and then obtain the values of s_k, rounding the values to the integer range [0, L-1]:
s_k = T(r_k) = (L-1) * sum for j = 0..k of p_r(r_j)
> Use the specified PDF and obtain the transformation function G(z_q), rounding the values to the integer range [0, L-1]:
G(z_q) = (L-1) * sum for i = 0..q of p_z(z_i)
> Mapping from s_k to z_q:
z_q = G^(-1)(s_k)

11.4: Example-1: Histogram Matching
Suppose that a 3-bit image (L = 8) of size 64 x 64 pixels (MN = 4096) has the intensity distribution shown in the following table (on the left). Get the histogram transformation function and make the output image with the specified histogram, listed in the table on the right.

Actual histogram:                Specified / actual result:
r_k      n_k    p_r(r_k)         z_q      specified p_z(z_q)   actual p_z(z_q)
r0 = 0   790    0.19             z0 = 0   0.00                 0.00
r1 = 1   1023   0.25             z1 = 1   0.00                 0.00
r2 = 2   850    0.21             z2 = 2   0.00                 0.00
r3 = 3   656    0.16             z3 = 3   0.15                 0.19
r4 = 4   329    0.08             z4 = 4   0.20                 0.25
r5 = 5   245    0.06             z5 = 5   0.30                 0.21
r6 = 6   122    0.03             z6 = 6   0.20                 0.24
r7 = 7   81     0.02             z7 = 7   0.15                 0.11

Ans:
> Step-1: Obtain the scaled histogram-equalized values for the given image:

r_k      n_k    p_r(r_k)   s_k = 7 * sum p_r(r_j)         rounded s_k
r0 = 0   790    0.19       7 x 0.19 = 1.33                1
r1 = 1   1023   0.25       7 x (0.19+0.25) = 3.08         3
r2 = 2   850    0.21       7 x (0.19+0.25+0.21) = 4.55    5
r3 = 3   656    0.16       5.67                           6
r4 = 4   329    0.08       6.23                           6
r5 = 5   245    0.06       6.65                           7
r6 = 6   122    0.03       6.86                           7
r7 = 7   81     0.02       7.00                           7

> Step-2: Compute all the values of the transformation function G:

z_q      p_z(z_q)   G(z_q) = 7 * sum p_z(z_i)       rounded
z0 = 0   0.00       7 x 0.00 = 0                    0
z1 = 1   0.00       7 x 0.00 = 0                    0
z2 = 2   0.00       7 x 0.00 = 0                    0
z3 = 3   0.15       7 x 0.15 = 1.05                 1
z4 = 4   0.20       7 x (0.15+0.20) = 2.45          2
z5 = 5   0.30       7 x (0.15+0.20+0.30) = 4.55     5
z6 = 6   0.20       5.95                            6
z7 = 7   0.15       7.00                            7

> Step-3: Mapping from s_k to z_q: for each s_k, take the smallest z_q whose G(z_q) is closest to s_k:
s = 1 -> z = 3, s = 3 -> z = 4, s = 5 -> z = 5, s = 6 -> z = 6, s = 7 -> z = 7

> So the output histogram of the matched image is:

z_q        0    1    2    3     4     5     6     7
p_z(z_q)   0    0    0    0.19  0.25  0.21  0.24  0.11

> Histograms of example-1:

(FIGURE 3.22: (a) Histogram of a 3-bit image. (b) Specified histogram. (c) Transformation function obtained from the specified histogram. (d) Result of performing histogram specification.)

11.5: Example-2: Histogram Matching
Perform histogram specification and obtain the output image table for an 8x8 image whose histogram is shown in the table:

r_k   0    1    2    3    4    5    6    7
n_k   8    10   10   2    12   16   4    2

where the target histogram table is:

z_q   0    1    2    3    4    5    6    7
n_q   0    0    0    0    20   20   16   8
Ans:
Step-1: Obtain the equalized mapping table of the input image:

r_k   0    1    2    3    4    5    6    7
s_k   1    2    3    3    5    6    7    7

Step-2: Obtain the equalized mapping table of the target histogram:

z_q      0    1    2    3    4    5    6    7
G(z_q)   0    0    0    0    2    4    6    7

Step-3: Mapping from s_k to z_q:

r_k   0    1    2    3    4    5    6    7
s_k   1    2    3    3    5    6    7    7
z_q   4    4    5    5    6    6    7    7

So the output image histogram matrix is:

z_q   0    1    2    3    4    5    6    7
n_q   0    0    0    0    18   12   28   6
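Both worked examples follow the same three steps; a sketch in plain Python (the inverse mapping here picks the smallest z whose G value is closest to s, matching Example 11.4; tie-breaking conventions vary, and Example 11.5 resolves ties upward):

```python
def cdf_map(p, L):
    """Rounded cumulative mapping (L-1) * running sum of probabilities."""
    out, cum = [], 0.0
    for pk in p:
        cum += pk
        out.append(int((L - 1) * cum + 0.5))  # round half up, as in the tables
    return out

def match_histogram(p_in, p_spec, L):
    """Return the r -> z lookup table for histogram specification."""
    s = cdf_map(p_in, L)      # Step 1: equalize the input histogram
    G = cdf_map(p_spec, L)    # Step 2: equalize the specified histogram
    # Step 3: invert G -- smallest z with G[z] closest to each s_k
    return [min(range(L), key=lambda z: (abs(G[z] - sk), z)) for sk in s]

# Example 11.4 from the notes:
p_in = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
p_spec = [0.0, 0.0, 0.0, 0.15, 0.20, 0.30, 0.20, 0.15]
lut = match_histogram(p_in, p_spec, 8)  # -> [3, 4, 5, 6, 6, 7, 7, 7]
```

Setting p_spec to a uniform distribution reduces this to plain histogram equalization, since G then becomes the identity mapping.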
* There are others term to call subimage such as mask, kernel, template, or window. * The value in a filter sub image is referred as coefficients, rather than pixels. > Basies of Spatial Filtering: "The concept of filtering has its roots in the use of the Fourier transform for signal processing in the so-called frequency domain. "Spatial filtering term is the filtering operations that are performed directly on the pixels of an image. > Definition: "Spatial filtering are defined by: * A neighborhood and “A predefined operation that is performed on the pixels inside the neighborhood ‘Area or Mask Processing Methods gy) = TiCy)] operates ona neighborhood of pixels spy Kalyan Digital Image Processing, Module-1 PECS5406 > Mechanics of spatial filtering: * The process consists simply of moving the filter mask from point to point in an image. | + At cach point (xy) the response of the filter at that point is calculated using @ i predefined relationship. i "Typically, the neighborhood is rectangular and its size is much smaller than that of wore © Type "Linear spatial filtering "Nonlinear spatial filtering * A filtering method is linear when the output is a weighted sum of the input pixels. * Methods that do not satisfy the above property are called non-linear. te Dts. 
te Zt 3.2,.-.,9) £5 = max(ze, & bpeigy Kahyar | if Digital Image Processing Module-1 PECSS406 12.2: Spatial Linear Filtering Systems > Spatial Linear Filtering Systems : ——- | Esrsystem | : Input Image ‘Output image ‘Linearity: “things can be added” + Shift-invariance: “things do not change over space” > Filtering with LSI System: "Spatial domain > Convolution = Frequency domain > Multiplication (convolution theorem) > Linear Spatial Filtering Methods ¢ ‘Two main linear spatial filtering methods: = Correlation and Convolution ‘The correlation of a filter w(x,y) of size mxn with an image f(x,y) denoted as wOny)erfey) Hay) SE,9= DY wen sors y+ a + The convolution of a filter w(x,y) of size mxn with an image f(x,y) denoted as wOsy) A fy) nay) flan=T Vas se-sy-0) * The result is the sum of products of the mask coefficients with the corresponding pixels directly under the mask . 8 Y)= WARD SOL yD LOS Gy) + AD Ly t+ ODS y-1)+ W100) FC, y+ ODL Os 9 #1) + WUD Seth y—D+ ALO) F (e+ Ly) + wll) le Hh y AD Aposy Kalyan Digital Image Processing. Module-1 PECS5406 > > ‘The coefficient w(0,0) coincides with image value fy), indicating that the mask is centered at (x,y) when the computation of sum of products takes place. For a mask of size mxn, we assume that m= 2a+1 and n= 2b+1, where a and b are nonnegative integer. Then m and n are odd. In general, linear filtering of an image f of size MxN with a filter mask of size mxn is given by the expression: gy) = LY wy sets +e ‘The process of linear filtering similar to a frequency domain concept called “convolution” RE Wz + Waly tot Wala = DME ai Example: mW) We : ‘Where the w’s are mask coefficients, the 2’s are the value of the image gray levels corresponding to those coefficients. Nonlinear spatial filtering Nonlinear spatial filters also operate on neighborhoods, and the mechanics of sliding a mask past an image are the same as was just outlined. 
* The filtering operation is based conditionally on the values of the pixels in the neighborhood under consideration.
* Example: noise reduction can be achieved effectively with a nonlinear filter whose basic function is to compute the median gray-level value in the neighborhood.
* Computation of the median is a non-linear operation, as is that of the variance.
> Motivation: limitations of linear filters:
* Frequency shaping enhances some frequency components and suppresses the others.
* For an individual frequency component, it cannot differentiate its "desirable" and "undesirable" parts.
> Nonlinear filters:
* Cannot be expressed as a convolution.
* Cannot be expressed as frequency shaping.
* "Nonlinear" means everything other than linear:
  - Need to be more specific
  - Often heuristic
  - We will study some "nice" ones

End

Lecture Note-13
Image Enhancement in Spatial Domain: "Image Smoothing"

Instructional Objectives
At the end of this lesson, the students should be able to:
> Define smoothing
> Define the average filter
> Explain order-statistic filters and their applications
> Define the median filter, max filter, and mid-range filter

13.1: Introduction
> Image smoothing is used for two primary purposes:
* To give an image a softer or special effect (blurring).
* To eliminate noise.
> In the spatial domain, this can be accomplished using various types of filters:
* Linear smoothing filters
* Non-linear filters (order-statistic filters)
> A larger mask size gives a greater smoothing effect; too much smoothing will eventually lead to blurring.
> Blurring is used in preprocessing steps, such as removal of small details from an image prior to object extraction, and bridging of small gaps in lines or curves.
> In the frequency domain, image smoothing is accomplished using a low-pass filter.
> For image smoothing, the elements of the mask must be positive.
> The sum of the mask elements is 1.

13.2: Smoothing Linear Filters
> A smoothing linear spatial filter simply computes the average of the pixels contained in the neighborhood of the filter mask.
> Such filters are sometimes called "averaging filters".
> The idea is to replace the value of every pixel in an image by the average of the gray levels in the neighborhood defined by the filter mask.
> The mask size determines the degree of smoothing and the loss of detail.
> The general implementation for filtering an MxN image with a weighted averaging filter of size mxn is given by the expression:
g(x,y) = [sum for s = -a..a, t = -b..b of w(s,t) f(x+s,y+t)] / [sum for s = -a..a, t = -b..b of w(s,t)]
where m = 2a+1 and n = 2b+1.

Two smoothing averaging filter masks:

3x3 mask:  (1/9) x   1 1 1        5x5 mask:  (1/25) x  1 1 1 1 1
                     1 1 1                             1 1 1 1 1
                     1 1 1                             1 1 1 1 1
                                                       1 1 1 1 1
                                                       1 1 1 1 1

Weighted averaging filter masks:

3x3 weighted average mask:  (1/16) x  1 2 1
                                      2 4 2
                                      1 2 1

5x5 weighted average mask:  (1/81) x  1 2 3 2 1
                                      2 4 6 4 2
                                      3 6 9 6 3
                                      2 4 6 4 2
                                      1 2 3 2 1

13.3: Image Smoothing Filter Example
Q: A 4x4 image and a 3x3 smoothing (averaging) mask are given. Apply the mask to the image and compute the output image matrix.

Ans:
Step-1: Preprocessing by zero-padding: surround the 4x4 image with a one-pixel border of zeros, forming a 6x6 array.
Step-2: Move the mask across the zero-padded image, computing the average at each position.
Step-3: Result:

2.6 | 4.3 | 6.2 | 4.3               3 | 4 | 6  | 4
4.0 | 6.5 | 8.0 | 7.2     round     4 | 7 | 8  | 7
6.6 | 7.7 | 9.5 | 7.3     ----->    7 | 8 | 10 | 7
6.0 | 7.8 | 7.7 | 5.7               6 | 8 | 8  | 6

(Figure: some results: original 7x7 image and blurred versions.)

13.4: Order-Statistic (Nonlinear) Filters
> Order-statistic filters are nonlinear filters.
> Order-statistic filters are based on ordering (ranking) the pixels contained in the filter mask.
> They replace the value of the center pixel with the value determined by the ranking result.
> Types: median filter, max filter, min filter, midrange filter, max-min filter, min-max filter, etc.
> An order-statistic filter can be written as
g = sum for i = 0..mn-1 of w_i z_(i)
where:
* g: filter output for a reference pixel
* w_i: mask coefficient
* z_(i): sorted pixel values of the input image under the mask, i = 0, 1, 2, ..., mn-1
* mxn: dimension of the mask

> Median filters:
* Median filters are nonlinear.
* Median filtering reduces noise without blurring edges and other sharp details.
* Median filters are excellent at tackling certain types of random noise.
* Median filtering is particularly effective when the noise pattern consists of strong, spike-like components (salt-and-pepper noise).
* For the median filter:
w_i = 1 for i = (mn - 1)/2 and 0 otherwise, so g = z_((mn-1)/2).

> Median filter process:
* Define the neighborhood region around the pixel.
* Sort the values of the pixels in the region.
* In an mxn mask, the median is element (mn - 1) div 2 of the sorted list.
* Replace each pixel by the median of the neighborhood around the pixel.

Example: 1
(Figure: area or mask processing: g(x,y) = T[f(x,y)], where T operates on a neighborhood of pixels, producing the enhanced image.)

Example: 2
Q: A 4x4 grayscale image containing impulses is given.
a) Apply the median filter with zero padding.
b) Apply the median filter with replicate padding.
Ans:
a) Filtering the image with a 3x3 median filter after zero-padding removes the impulses; the outputs at the image border are pulled toward 0 by the padded zeros (the corner outputs become 0).
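Part (a), the 3x3 median with zero padding, can be sketched in plain Python (the helper name is my own):

```python
def median_filter_3x3(image):
    """3x3 median filter with zero padding: sort the 9 covered values, take element 4."""
    rows, cols = len(image), len(image[0])
    padded = [[0] * (cols + 2) for _ in range(rows + 2)]
    for i in range(rows):
        for j in range(cols):
            padded[i + 1][j + 1] = image[i][j]
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            window = sorted(padded[i + di][j + dj]
                            for di in range(3) for dj in range(3))
            out[i][j] = window[4]  # (mn - 1) div 2 = 4 for a 3x3 mask
    return out

# A strong impulse (100) surrounded by smooth values is replaced by a neighbor value:
cleaned = median_filter_3x3([[5, 6, 7], [5, 100, 7], [5, 6, 7]])  # center becomes 6
```

Swapping `window[4]` for `window[0]` or `window[-1]` gives the min and max filters defined later in this lesson.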
b) Filtering the image with a 3x3 median filter after replicate padding also removes the impulses, while the border outputs stay close to the original border values (the impulse is cleaned without darkening the border).

(Figure: results of median filtering: noisy image; median filtered with a 3x3 window; median filtered with a 5x5 window; 5x5 box filtered for comparison.)

> Midrange filter:
w_i = 1/2 for i = 0 and i = mn-1, 0 otherwise, so g = (z_(0) + z_(mn-1)) / 2.
> Min filter:
w_i = 1 for i = 0, 0 otherwise, so g = z_(0).
> Max filter:
w_i = 1 for i = mn-1, 0 otherwise, so g = z_(mn-1).

End

Lecture Note-14
Image Enhancement in Spatial Domain: "Image Sharpening"

Instructional Objectives
At the end of this lesson, the students should be able to:
> Define 1st derivatives
> Define 2nd derivatives
> Explain the Laplacian operator
> Explain the gradient operator
> Define the Sobel operators
> Define the Roberts cross-gradient operators

14.1: Introduction
> Image sharpening deals with enhancing detail information in an image.
> The detail information is typically contained in the high spatial frequency components of the image.
> Therefore, most of the techniques contain some form of high-pass filtering.
> High-pass filtering can be done in both the spatial and the frequency domain:
* Spatial domain: using a convolution mask (e.g. an enhancement filter)
* Frequency domain: using a multiplication mask
> However, high-pass filtering alone can cause the image to lose its contrast.

> Foundation:
> Image blurring is accomplished in the spatial domain by pixel averaging in a neighborhood. Since averaging is analogous to integration, sharpening can be accomplished by spatial differentiation.
> We are interested in the behavior of these derivatives in areas of constant gray level (flat segments), at the onset and end of discontinuities (step and ramp discontinuities), and along gray-level ramps.
> These types of discontinuities can be noise points, lines, and edges.

14.2: Definition for a first derivative
> The first derivative must be zero in flat segments;
> The first derivative must be nonzero at the onset of a gray-level step or ramp; and
> The first derivative must be nonzero along ramps.
> A basic definition of the first-order derivative of a one-dimensional function f(x) is
df/dx = f(x+1) - f(x)

14.3: Definition for a second derivative
> The second derivative must be zero in flat areas;
> The second derivative must be nonzero at the onset and end of a gray-level step or ramp;
> The second derivative must be zero along ramps of constant slope.
> We define the second-order derivative as the difference
d2f/dx2 = f(x+1) + f(x-1) - 2f(x)

> Example: a scan line across a constant segment followed by a ramp:

Scan line:        6  6  6  6  5  4  3  2  1  1  1  1  1
1st derivative:      0  0  0 -1 -1 -1 -1 -1  0  0  0  0
2nd derivative:      0  0 -1  0  0  0  0  1  0  0  0

> The 1st-order derivative is nonzero along the entire ramp, while the 2nd-order derivative is nonzero only at the onset and end of the ramp.
> The 1st derivative produces thick edges and the 2nd produces thin edges.
> The response at and around an isolated point is much stronger for the 2nd-order than for the 1st-order derivative.

14.4: The Laplacian (2nd-order derivative)
> It was shown by Rosenfeld and Kak [1982] that the simplest isotropic derivative operator is the Laplacian, which for f(x,y) is defined as
Laplacian(f) = d2f/dx2 + d2f/dy2
> Discrete forms of the derivatives:
d2f/dx2 = f(x+1, y) + f(x-1, y) - 2f(x, y)
d2f/dy2 = f(x, y+1) + f(x, y-1) - 2f(x, y)
> The digital implementation of the two-dimensional Laplacian is obtained by summing the two components:
Laplacian(f)(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)

Sharpening Spatial Filters: Laplace Operator

FIGURE 3.37: (a) Filter mask used to implement the discrete Laplacian above. (b) Mask used to implement an extension of this equation that includes the diagonal terms. (c) and (d) Two other implementations of the Laplacian found frequently in practice.

(a)  0  1  0    (b)  1  1  1    (c)  0 -1  0    (d) -1 -1 -1
     1 -4  1         1 -8  1        -1  4 -1        -1  8 -1
     0  1  0         1  1  1         0 -1  0        -1 -1 -1

* Laplace Operator Implementation
g(x, y) = f(x, y) - Laplacian(f)(x, y)  if the center coefficient of the mask is -ve
g(x, y) = f(x, y) + Laplacian(f)(x, y)  if the center coefficient of the mask is +ve
where
f(x, y) is the original image,
Laplacian(f)(x, y) is the Laplacian-filtered image, and
g(x, y) is the sharpened image.

14.5: Unsharp Masking and Highboost Filtering
Unsharp masking
> Sharpening an image consists of subtracting an unsharp (smoothed) version of the image from the original image
> e.g. used in the printing and publishing industry
Steps
* Blur the original image
* Subtract the blurred image from the original (the difference is the unsharp mask)
* Add the mask to the original
> Equivalently, obtain a sharp image by subtracting a lowpass-filtered (i.e., smoothed) image from the original image, i.e.
Highpass = Original - Lowpass

* Highboost filter
Image sharpening emphasizes edges, but details (i.e., low-frequency components) might be lost.
* Highboost filter: amplify the input image, then subtract a lowpass image:
Highboost = A(Original) - Lowpass
          = (A - 1)(Original) + (Original - Lowpass)
          = (A - 1)(Original) + Highpass

(Figure: (a) original signal; (b) blurred signal with the original shown dashed for reference; (c) unsharp mask; (d) sharpened signal obtained by adding (c) to (a).)
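The unsharp-masking and highboost steps described above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original notes: the 1-D test signal, the 3-point box blur, and the parameter k are all assumptions chosen for clarity.

```python
def box_blur_1d(signal):
    """3-point moving average (a simple lowpass filter) with replicate padding."""
    n = len(signal)
    return [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def highboost(signal, k=1.0):
    """g = f + k * (f - blur(f)); k = 1 is unsharp masking, k > 1 is highboost."""
    blurred = box_blur_1d(signal)
    return [f + k * (f - b) for f, b in zip(signal, blurred)]

step = [0, 0, 0, 10, 10, 10]
sharpened = highboost(step)
print(sharpened)  # undershoot just before the step, overshoot just after it
```

The under- and overshoot around the step are exactly the effect shown in the sharpened-signal figure: the edge transition becomes steeper, which the eye perceives as increased sharpness.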
14.6: Image Sharpening based on First-Order Derivatives
> For a function f(x, y), the gradient of f at coordinates (x, y) is defined as
grad(f) = [g_x, g_y]^T = [df/dx, df/dy]^T
> Its magnitude is
M(x, y) = mag(grad(f)) = sqrt(g_x^2 + g_y^2), commonly approximated as M(x, y) = |g_x| + |g_y|
> For a 3x3 image region with intensity values
z1 z2 z3
z4 z5 z6
z7 z8 z9
the simplest approximation uses g_x = z8 - z5 and g_y = z6 - z5, so
M(x, y) = |z8 - z5| + |z6 - z5|
> Roberts cross-gradient operators:
M(x, y) = |z9 - z5| + |z8 - z6|
> Sobel operators:
M(x, y) = |(z7 + 2z8 + z9) - (z1 + 2z2 + z3)| + |(z3 + 2z6 + z9) - (z1 + 2z4 + z7)|

FIGURE 3.41: (a) A 3x3 region of an image (the z's are intensity values). (b)-(c) Roberts cross-gradient operators. (d)-(e) Sobel operators. All the mask coefficients sum to zero, as expected of a derivative operator.

End

Lecture Note-15
Some Important Questions with Answer Discussion for Module-1

1. What is digital image processing?
2. With a neat block diagram, explain the fundamental steps in digital image processing.
3. Explain the photometric image formation model.
4. With the help of a neat block diagram, explain the components of a general-purpose image processing system. Give an example for each of the components and explain how this system is different from other systems.
5. Explain how image enhancement is achieved by gray-level transformation. Describe operations on an image which lighten a dark image.
6. Explain image sharpening based on first-order derivatives.
7. Explain the difference between histogram equalization and histogram specification techniques in image enhancement.
8. Discuss briefly the spatial-domain smoothing filters which are used to reduce noise.
9. Discuss in detail piecewise-linear transformation functions.
10. With neat diagrams and appropriate mathematical expressions, explain: i) Neighbours ii) Adjacency iii) Connectivity
11. For pixels p, q with coordinates (x, y) and (s, t) respectively, find the distance metric D for the following cases:
i) Euclidean distance De(p, q)
ii) City-block distance D4(p, q)
iii) Chessboard distance D8(p, q)
12. Explain: i) Contrast stretching ii) Gray-level slicing iii) Bit-plane slicing
13. Explain histogram matching.
14. What is a histogram? Briefly explain the histogram equalization technique for image enhancement.
15. Define the different types of adjacency and explain how m-adjacency is different from 8-adjacency with an example.
16. What do you mean by a digital image and digital image processing?
17. i) Distinguish between point processing and mask processing.
ii) Distinguish between 8-adjacency and m-adjacency.
iii) What do you mean by unsharp masking?
iv) If M = N = 25 and the number of bits used to represent each pixel is 8, find the memory requirement to store the image.
v) Define the Weber ratio.
vi) Distinguish between brightness adaptation and brightness discrimination.
vii) State the average filter and its use for image smoothing.
viii) Show that the average value of the Laplacian operator is zero.
ix) What will be the effect of repeatedly applying a 3x3 smoothing filter to a digital image?
x) Describe the technique of histogram specification. An image has the gray-level probability density function p_r(r) shown in the following diagram. It is desired to transform the gray levels of this image so that they will have the specified probability density function p_z(z) shown. Assume continuous quantities and find the transformation (in terms of r and z) that will accomplish this.
(Figure: plots of p_r(r) and p_z(z).)
18. The objective of an edge detection algorithm is to locate the regions where the intensity is changing rapidly.
(i) Searching for regions of rapidly changing intensity corresponds to searching for regions where the local first derivative of the image intensity is large. Illustrate this statement using an appropriate figure.
(ii) Another possible way to search for regions of rapidly changing intensity corresponds to searching for regions where the local second derivative of the image intensity has a zero crossing. Illustrate this statement using an appropriate figure.
19. Give an example of a 3x3 Laplace spatial mask. Show that this mask approximates the local second derivative operator.
20. State the main disadvantage of the Laplacian mask when it is used to detect edges. Justify your answer.

End
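As an illustration for questions 18 and 19, the following minimal Python sketch (not part of the original notes; the sample signal is an assumption) shows that the first difference is large across an edge while the second difference changes sign there, producing the zero crossing the question refers to.

```python
def first_diff(f):
    """Forward difference f(x+1) - f(x): large where intensity changes rapidly."""
    return [f[i + 1] - f[i] for i in range(len(f) - 1)]

def second_diff(f):
    """Second difference f(x+1) + f(x-1) - 2*f(x): changes sign across an edge."""
    return [f[i + 1] + f[i - 1] - 2 * f[i] for i in range(1, len(f) - 1)]

ramp_edge = [0, 0, 0, 5, 10, 10, 10]   # a step smoothed into a short ramp
print(first_diff(ramp_edge))    # nonzero (5) only across the edge
print(second_diff(ramp_edge))   # +5 then -5: a zero crossing at the edge center
```

A 3x3 Laplacian mask (question 19) is the 2-D analogue of second_diff applied along both axes; its coefficients sum to zero, so its response vanishes in flat regions.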
