CGIP QB

The document discusses the fundamentals of digital image processing, including its definition, purpose, advantages, and disadvantages. It outlines various application areas such as criminology, medical imaging, remote sensing, and transportation, and explains concepts like bit depth, sampling, and color modes. Additionally, it categorizes images into types such as binary, grayscale, and color images, detailing their characteristics and uses.

B.Tech, Sixth Semester TP Solved Series (Computer Graphics and Image Processing)

Module 4: Fundamentals of Digital Image Processing

IMAGE PROCESSING

Ques 1) What is image processing? What is the purpose of image processing?

Ans: Image Processing
Image processing is a method to convert an image into digital form and perform some operations on it in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or a photograph, and the output may be an image or characteristics associated with that image. An image processing system usually treats images as two-dimensional signals and applies established signal processing methods to them.

Image processing is among the rapidly growing technologies today, with applications in many aspects of business, and it forms a core research area within the engineering and computer science disciplines. Digital image processing is the technology of applying a number of computer algorithms to process digital images. The outcomes of this process can be either images or a set of representative characteristics or properties of the original images.

Purpose of Image Processing
The purpose of image processing can be divided into five groups:
1) Visualization: Observe objects that are not visible.
2) Image Sharpening and Restoration: Create a better image.
3) Image Retrieval: Seek the image of interest.
4) Measurement of Pattern: Measure various objects in an image.
5) Image Recognition: Distinguish the objects in an image.

Ques 2) What are the advantages and disadvantages of digital image processing?

Ans: Advantages of Digital Image Processing
1) It improves the visual quality of an image and the distribution of intensity.
2) It can easily process a degraded image of unrecoverable objects.
3) It can process an image in such a way that the result is more suitable than the original image.
4) An image can be easily modified using a number of techniques.
5) Image compression techniques reduce the amount of data required to represent a digital image.
6) Arithmetic and logical operations can be performed on an image, such as addition, subtraction, OR, etc.
7) Image segmentation is used to detect discontinuity, or the presence or absence of specific anomalies such as missing components or broken connection paths.

Disadvantages of Digital Image Processing
1) Digital image processing requires a great deal of storage and processing power. Progress in the field of digital image processing depends on the development of digital computers and supporting technology, including data storage, display and transmission.
2) The effect of environmental conditions may degrade the image quality.
3) It involves various types of redundancy, such as data redundancy, inter-pixel redundancy, etc.
4) Segmentation of a non-trivial image is one of the most difficult tasks in digital image processing.

Ques 3) What are the different application areas of image processing?

Ans: Application Areas of Digital Image Processing
Following are some application areas of digital image processing:
1) Criminology/Forensics: Few types of evidence are more incriminating than a photograph or videotape that places a suspect at a crime scene. Ideally, the image will be clear, with all persons, settings, and objects reliably identifiable. Unfortunately, that is not always the case, and the photograph or video image may be grainy, blurry, of poor contrast, or even damaged in some way. In such cases, investigators may rely on computerized technology that enables digital processing and enhancement of an image. The U.S. government (in particular, the military, the FBI, and the National Aeronautics and Space Administration (NASA)) and, more recently, private technology firms have developed advanced computer software that can dramatically
improve the clarity of, and the amount of detail visible in, still and video images.
2) Medical Imaging: This is a technology that can be used to generate images of a human body (or part of it). These images are then processed or analyzed by experts, who provide a clinical prescription based on their observations. Ultrasound, X-ray, Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) are quite often seen in daily life, though their sensory systems are individually applied.
3) Remote Sensing: This is the technology of employing remote sensors to gather information about the Earth. The information is usually carried by electromagnetic radiation, force fields, or acoustic energy that can be detected by cameras, radiometers, lasers, radar systems, sonar, seismographs, thermal meters, etc. Remote sensing covers the techniques used to obtain such measurements.
4) Military: This area has been studied intensively in recent years. Existing applications include object detection, tracking, and three-dimensional reconstruction of territory, etc. For example, a human face or any subject producing heat can be detected in real time using infrared imaging sensors; this technique has been commonly used on battlefields. Another example is that a three-dimensional recovery of a target is used to find its correspondence to a template stored in a database before the target is destroyed by a missile.
5) Transportation: This is a new area that has developed in recent years. One of the key technological advances is the design of automatically driven vehicles, where imaging systems play a vital role in path planning, obstacle avoidance and servo control. Digital image processing has also found applications in traffic control and transportation planning, etc.

Ques 4) Describe the image as 2D data?
Or
Discuss how a digital image can be represented?
Or
What is a digital image?
Ans: Image as 2D Data
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels, each of which has a particular value. For each pixel, there is an associated number, called the Digital Number (DN) or sample, which dictates the color and brightness of that particular pixel.

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.

The term gray level is often used to refer to the intensity of monochrome images. Color images are formed by a combination of individual 2-D images. For example, in the RGB color system, a color image consists of three (red, green and blue) individual component images. For this reason, many of the techniques developed for monochrome images can be extended to color images by processing the three component images individually.

An image may be continuous with respect to the x- and y-coordinates, and also in amplitude. Converting such an image to digital form requires that the coordinates, as well as the amplitude, be digitized. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.

Digital Image Representation
An important aspect of digital image processing is image representation. Any monochrome image can be represented by means of a two-dimensional light intensity function f(x, y), where x and y denote spatial coordinates and the value of f at any point (x, y) is the gray level or brightness of the image at that point.
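The sampling and quantization steps described above can be sketched in Python with NumPy. The continuous scene function f and all parameter values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical continuous scene f(x, y) with intensities in [0, 1],
# used only to demonstrate sampling and quantization.
def f(x, y):
    return (np.sin(x) * np.cos(y) + 1.0) / 2.0

M, N = 4, 4          # spatial resolution: M x N samples
k = 3                # bit depth -> L = 2**k gray levels
L = 2 ** k

# Sampling: evaluate f on a discrete grid of coordinates.
xs = np.linspace(0, np.pi, M)
ys = np.linspace(0, np.pi, N)
sampled = f(xs[:, None], ys[None, :])

# Quantization: map continuous amplitudes to L discrete gray levels.
digital = np.clip((sampled * L).astype(int), 0, L - 1)

print(digital.shape)   # (4, 4) -- a digital image is a finite matrix
```

The result is a finite M x N matrix of integers in {0, ..., L - 1}, which is exactly the digital image described above.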
The axis convention used to represent the image is shown in figure 4.1. The origin is taken at the top-left corner, and the horizontal line and the vertical line through the origin are taken as the y and x axes, respectively.

Figure 4.1: Coordinate Conventions and Image Representation

The monochrome image f(x, y) is discretized both in spatial coordinates and in gray-level values to obtain the digital image. A digital image can be represented as a matrix whose rows and columns are used to locate a point in the image, and the corresponding element value gives the gray level at that point. Each element in this matrix/digital array is called an image element or pixel. A typical digital image of size M x N is represented as given in equation (1):

           [ f(0,0)      f(0,1)      ...  f(0,N-1)   ]
f(x, y) =  [ f(1,0)      f(1,1)      ...  f(1,N-1)   ]     ...(1)
           [   ...         ...       ...    ...      ]
           [ f(M-1,0)    f(M-1,1)    ...  f(M-1,N-1) ]

The images we normally perceive in daily visual activities consist of light reflected from objects. Hence the function f(x, y) may consist of two components:
1) The amount of light incident on the scene being viewed.
2) The amount of light reflected by the objects in the scene.

The incident and reflected light can be denoted as i(x, y) and r(x, y), respectively. Then the image function f(x, y) is the product of i(x, y) and r(x, y), as given in equation (2):

f(x, y) = i(x, y) r(x, y)     ...(2)

where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.

Spatial and Gray-Level Resolution
The number of gray levels is typically an integer power of 2, L = 2^k, where k is the number of bits used to represent each gray level. The discrete levels are assumed to be equally spaced, with values 0, 1, ..., L - 1. An M x N image with L gray levels is said to have a spatial resolution of M x N pixels and a gray-level resolution of L levels.

Consider an image of 1024 x 1024 pixels whose gray levels are represented by 8 bits. The other images discussed below are the result of sub-sampling this 1024 x 1024 image. The sub-sampling was accomplished by deleting the appropriate number of rows and columns from the original image. For example, the 512 x 512 image was obtained by deleting every other row and column from the 1024 x 1024 image.
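The illumination-reflectance model of equation (2) can be sketched numerically. The illumination and reflectance arrays below are made-up values, chosen only to show that the perceived image is their element-wise product:

```python
import numpy as np

# f(x, y) = i(x, y) * r(x, y): hypothetical illumination and reflectance.
i = np.full((3, 3), 100.0)              # incident light, 0 < i < infinity
r = np.array([[0.25, 0.50, 0.75],
              [0.00, 0.50, 1.00],
              [0.25, 0.75, 1.00]])      # reflectance of the scene objects

f = i * r                               # the perceived image
print(f[0])    # [25. 50. 75.]
```

A dark object under strong light and a bright object under weak light can thus produce the same gray level, which is why f alone does not separate the two components.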
The 256 x 256 image was generated by deleting every other row and column from the 512 x 512 image, and so on. The number of gray levels was kept at 256.

Ques 13) Explain the effect of reducing the spatial and gray-level resolution.

Ans: Effect of Reducing the Spatial Resolution
Decreasing the spatial resolution of a digital image, within the same area, may result in what is known as the checkerboard pattern. Image details are also lost when the spatial resolution is reduced.

To demonstrate the checkerboard effect, we sub-sample the 1024 x 1024 image shown in the figure below to obtain an image of size 512 x 512 pixels. The 512 x 512 image is then sub-sampled to a 256 x 256 image, and so on, until a 32 x 32 image is obtained. The sub-sampling process simply means deleting the appropriate number of rows and columns from the original image. The number of allowed gray levels was kept at 256 in all the images.

To see the effects resulting from the reduction in the number of samples, we bring all the sub-sampled images back up to size 1024 x 1024 by row and column pixel replication. The resulting images are shown in figure 4.18.

Figure 4.18: (a) 1024 x 1024, 8-Bit Image. (b) Through (f) 512 x 512, 256 x 256, 128 x 128, 64 x 64, and 32 x 32 Images Resampled into 1024 x 1024 Pixels by Row and Column Duplication

Comparing figure 4.18(a) with the 512 x 512 image in figure 4.18(b), we find that the level of detail lost is simply too fine to be seen on the printed page at the scale at which these images are shown. Next, the 256 x 256 image in figure 4.18(c) shows a very slight fine checkerboard pattern in the borders between flower petals and the black background. A slightly more pronounced graininess throughout the image also begins to appear.
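The sub-sampling and pixel-replication steps described above are simple array operations. A minimal sketch on a synthetic 8 x 8 "image" (the real experiment uses a 1024 x 1024 photograph, which we do not have here):

```python
import numpy as np

# Synthetic 8x8 image standing in for the 1024x1024 original.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)

# Sub-sampling: delete every other row and column (8x8 -> 4x4).
sub = img[::2, ::2]

# Resample back to 8x8 by row and column pixel replication.
up = np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)

print(sub.shape)   # (4, 4)
print(up.shape)    # (8, 8)
# Each 2x2 block of `up` holds one replicated sample -> the blocky
# structure that produces the checkerboard pattern at coarse resolutions.
print(up[0, 0] == up[1, 1])   # True
```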
These effects are much more visible in the 128 x 128 image in figure 4.18(d), and they become pronounced in the 64 x 64 and 32 x 32 images in figures 4.18(e) and 4.18(f), respectively.

Effect of Reducing the Gray-Level Resolution
Decreasing the gray-level resolution of a digital image may result in what is known as false contouring. This effect is caused by the use of an insufficient number of gray levels in smooth areas of a digital image.

To illustrate the false contouring effect, we reduce the number of gray levels of the 256-level image shown in figure 4.19(a) from 256 down to 2. This is achieved by reducing the number of bits from k = 7 to k = 1 while keeping the spatial resolution constant at 452 x 374 pixels. The resulting images are shown in figures 4.19(b) through (h).

We can clearly see that the 256-, 128-, and 64-level images are visually identical. However, the 32-level image shown in figure 4.19(d) has an almost imperceptible set of very fine ridge-like structures in areas of smooth gray levels (particularly in the skull). False contouring generally is quite visible in images displayed using 16 or fewer uniformly spaced gray levels, as the images in figures 4.19(e) through (h) show.

Figure 4.19: (a) 452 x 374, 256-Level Image. (b)-(h) Image Displayed in 128, 64, 32, 16, 8, 4, and 2 Gray Levels, While Keeping the Spatial Resolution Constant

Ques 14) Describe the relationship between pixels.
Or
Explain the terms related to pixels.
Or
What do you understand by the following terms with respect to pixels: Neighbours, Adjacency, Connectivity. (Dec. 2018[04])

Ans: Relationship between Pixels / Terms Related to Pixels
We know that an image is given by f(x, y). The pixels in the image can be denoted by lower-case letters, such as p and q. The upper-case letter S can be used to represent a set of pixels in the image f(x, y). In order to analyse or identify different regions or entities in an image, it is necessary to know the relationship between neighbouring pixels.
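Gray-level reduction of the kind described above can be sketched by requantizing an 8-bit image to 2^k levels. The ramp image below is synthetic, standing in for the 452 x 374 photograph used in the figure:

```python
import numpy as np

# Synthetic smooth gradient: every 8-bit gray level appears in each row.
img = np.tile(np.arange(256, dtype=np.uint8), (4, 1))

def reduce_gray_levels(image, k):
    """Keep only 2**k uniformly spaced gray levels of an 8-bit image
    by dropping the low-order bits."""
    shift = 8 - k
    return (image >> shift) << shift

two_level = reduce_gray_levels(img, 1)
print(len(np.unique(two_level)))   # 2 -> only levels 0 and 128 remain
```

On a real photograph, the abrupt jumps between the few remaining levels in smooth regions are exactly the false contours discussed above.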
Another important concept, used to determine the boundaries of objects or components, is connectivity.

Neighborhood and Connectivity
Any pixel p at coordinates (x, y) has two horizontal neighbours (one on the left and the other on the right) and two vertical neighbours (one above and one below). These four pixels are called the 4-neighbors of pixel p, denoted by N4(p), and their coordinates are given by:
(x, y - 1), (x, y + 1), (x - 1, y) and (x + 1, y)

Similarly, for any pixel p there are four diagonal neighbors, denoted by ND(p), whose coordinates are given as follows:
(x - 1, y - 1), (x + 1, y + 1), (x - 1, y + 1) and (x + 1, y - 1)

When these diagonal neighbors are combined with the 4-neighbors mentioned earlier, they are called the 8-neighbors of p and denoted by N8(p). Some of the points in N4(p), ND(p) and N8(p) may fall outside the image if the point p is located on the border of the image.

In order to find the boundaries of an object, component or region in an image, the concept of connectivity between pixels is used. Two pixels are said to be connected if they are neighbors in some sense (say, 4-neighbors or 8-neighbors) and if their gray levels satisfy a criterion of similarity. For example, in a binary image with gray level values 0 and 1, two pixels are said to be connected if they are 4-neighbors and have the same gray level value.

Let V be a set of gray-level values of interest in an image; for example, gray levels in the range 16 to 32 might be denoted as the set V = {16, 17, 19, 26, 28, 31, 32}. There are three different types of connectivity, defined as follows:
1) 4-Connectivity: Two pixels p and q with gray levels from the set V are said to be 4-connected if q is in the set N4(p).
2) 8-Connectivity: Two pixels p and q with gray levels from the set V are said to be 8-connected if q is in the set N8(p).
3) m-Connectivity: Two pixels p and q with gray-level values from the set V are m-connected if q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) contains no pixels whose values are from V.

m-connectivity is used to eliminate the multiple-path connections that may arise when 8-connectivity is used. For example, consider a binary image of size 3 x 3 with the pixel arrangement shown in figure 4.20(a), and let V = {1}. The path between the 8-neighbors of the center pixel is shown by the dashed line in figure 4.20(b). The bottom-right diagonal pixel can be reached in two ways; this ambiguity is removed by m-connectivity, as shown in figure 4.20(c).

Figure 4.20: (a) Arrangement of Pixels, (b) 8-Neighbors of Center Pixel, (c) m-Neighbors

A pixel p is said to be adjacent to pixel q if p and q are connected. We can define 4-, 8- or m-adjacency based on the type of connectivity. Two image subsets S1 and S2 are adjacent if some pixels in S1 are adjacent to some pixels in S2.

Adjacency
Two pixels are adjacent if they are neighbours and their intensity levels satisfy some specified criterion of similarity, i.e., both values belong to a set V.
For example:
Binary image: V = {0, 1}, or V = {1}, etc.
Gray-scale image: V is a subset of {0, 1, 2, ..., 255}

1) In binary images, two pixels are adjacent if they are neighbours and both have intensity values from V (e.g., both equal to 1).
2) In gray-scale images, V may contain many gray-level values in the range 0 to 255.

There are three different types of adjacency:
1) 4-adjacency: Two pixels p and q with values from set V are 4-adjacent if q is in the set N4(p).
2) 8-adjacency: Two pixels p and q with values from set V are 8-adjacent if q is in the set N8(p).
3) m-adjacency: Two pixels p and q with values from set V are m-adjacent if:
   i) q is in N4(p), or
   ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

For example, let V = {1} and consider the 3 x 3 arrangement below, with the pixels labelled a to i:

0(a)  1(b)  1(c)
0(d)  1(e)  0(f)
0(g)  0(h)  1(i)

i) b and e: e is in N4(b), so b and e are m-adjacent.
ii) b and c: c is in N4(b), so b and c are m-adjacent.
iii) e and i: i is in ND(e), and N4(e) ∩ N4(i) = {f, h} contains no pixel whose value is in V, so e and i are m-adjacent.
iv) e and c: c is in ND(e), but N4(e) ∩ N4(c) contains b, whose value 1 is in V, so e and c are not m-adjacent.
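The adjacency tests can be sketched directly from the N4/ND definitions. The helper functions and the 3 x 3 binary image below are illustrative only, following the worked example with V = {1}:

```python
def n4(p):
    x, y = p
    return {(x, y - 1), (x, y + 1), (x - 1, y), (x + 1, y)}

def nd(p):
    x, y = p
    return {(x - 1, y - 1), (x + 1, y + 1), (x - 1, y + 1), (x + 1, y - 1)}

def m_adjacent(img, p, q, V):
    """p and q are m-adjacent if q is in N4(p), or q is in ND(p) and
    N4(p) and N4(q) share no pixel whose value is in V."""
    def in_v(pt):
        x, y = pt
        return 0 <= x < len(img) and 0 <= y < len(img[0]) and img[x][y] in V
    if not (in_v(p) and in_v(q)):
        return False
    if q in n4(p):
        return True
    if q in nd(p):
        return not any(in_v(r) for r in n4(p) & n4(q))
    return False

img = [[0, 1, 1],     # a b c
       [0, 1, 0],     # d e f
       [0, 0, 1]]     # g h i
V = {1}
print(m_adjacent(img, (1, 1), (2, 2), V))   # True  (e and i)
print(m_adjacent(img, (1, 1), (0, 2), V))   # False (e and c)
```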
If S has only one connected component then S is called connected set. Regions and Boundaries A subset R of pixels in an image is called a region of the image if R is a connected set. ‘The boundary of the region R is the set of pixels in the region that have one or more neighbors that are not in R. Ques 15) Consider the image segment shown below: (2019[06]) 3 @ 2 1 1 onne vee ©) Compute the lengths of shortest 4, shortest 8 and shortest m paths between pixels p and q where V=(0,1}. If a particular path does not exist between these two Points, explain why ‘Ans: Image Segment is shown below: (0,0) Hip) When V = (0, 1}, 4path does not exist between p and q because it is impossible to get from p to q by traveling along points that are both 4-adjacent and also have values from V. Figure 4.22(a) shows this condition; it is not possible to get to q. 1x Graphics and Image Processing) kry “the shortest S-pathis shown in figure 3(0); its length xg “The length ofthe shortest m-path (shown dashed) is 5, 10 2 Figure 4.22(b) Figure 4.22(a) Ques 16) Explain how to measure distance between tw pixels. s: Distance Measure . ., the dames between any two pixels in a given image can be given by tree different types of measures and they re 1) Euclidian distance 2) Dedistance and 3) Dg distance ‘The Euclidian distance between p and q is defined as, D.(p.4)= (G2 + i=) Where (x1, 1) and (x, ¥2) are the coordinates of the pixels pand q, respectively. ‘The Dy distance also called as city-blocking distance ‘between p and q is defined as, D(p,p)=Il (x — x2) + (y1-y)II - The Ds distance also called chessboard distance between pand qis defined as, ‘Ds (p, 4) = max (| (X-%0),1 (1 -¥2)1) Ques 17)Let V = {0, 1}, compute Dg Ds, and Dy distances between p and q. 
Figure 4.22: (a) No 4-Path Exists, (b) Shortest 8-Path and m-Path

Ques 16) Explain how to measure the distance between two pixels.

Ans: Distance Measures
The distance between any two pixels in a given image can be given by three different measures:
1) Euclidean distance,
2) D4 distance, and
3) D8 distance.

The Euclidean distance between p and q is defined as,
De(p, q) = [(x1 - x2)^2 + (y1 - y2)^2]^(1/2)
where (x1, y1) and (x2, y2) are the coordinates of the pixels p and q, respectively.

The D4 distance, also called the city-block distance, between p and q is defined as,
D4(p, q) = |x1 - x2| + |y1 - y2|

The D8 distance, also called the chessboard distance, between p and q is defined as,
D8(p, q) = max(|x1 - x2|, |y1 - y2|)

Ques 17) Let V = {0, 1}; compute the De, D4, and D8 distances between p and q.

Figure 4.23

Ans: Computation of Euclidean Distance
The coordinates of q are (0, 0) and the coordinates of p are (3, 3).
De = [(x1 - x2)^2 + (y1 - y2)^2]^(1/2) = [(0 - 3)^2 + (0 - 3)^2]^(1/2) = √18 = 3√2 ≈ 4.24

Computation of D4 Distance
D4 = |0 - 3| + |0 - 3| = 3 + 3 = 6

Computation of D8 Distance
D8 = max(|0 - 3|, |0 - 3|) = 3
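The three distance measures, and the Ques 17 computation, can be sketched as:

```python
import math

def d_e(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])     # Euclidean

def d_4(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])      # city-block

def d_8(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))  # chessboard

p, q = (3, 3), (0, 0)
print(round(d_e(p, q), 2))   # 4.24
print(d_4(p, q))             # 6
print(d_8(p, q))             # 3
```

Note that D4 counts only horizontal/vertical steps while D8 also allows diagonal steps, which is why D8 <= De <= D4 always holds.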
The two-dimensional discrete convolution between two signals x(n), ng) and h{n,, ng) is given by, ylnyng]= SD) x(ky.K)h(ny —k2,n2—k2) kere 2D convolution can be represented as a sequence of two ID convolutions only if the signals are separable. Convolution can be performed either in the spatial domain or in the frequency domain. Applications of Convolution 1) Convolution is used to merge signals. 2) It is used to apply operations like smoothing and filtering images where the primary task is selecting the appropriate filter template or mask. 3) Convolution can also be used to find gradients of the image (Ques 19) Perform the linear convolution between these two matrices x(m, n) and h(m, n) given below: 123 xim,n)=|4 5 6 ‘h(m, n) = (3.45) 789 Ans: The indices of the given input matrices are shown below: (0.0) 01) (0.2) i 2 3 4.0 ch a2) (00) x(mn)=| 4 5 6 nena = (‘3 20 Gy 22) 7°38 9 Determining the Dimension of the Resultant Matrix @2y 45 Te dimension of the resultant matrix will depend upon the dimension of input matrices, x(m, n). The dimension of x(m, n) ‘tiven by 3x3 (three row and three columns). The dimension of h(m, n) is given by 1X3(one row and three columns). 
Ans: In x(m, n), the element x(0,0) = 1 is at the top left and x(2,2) = 9 is at the bottom right; in h(m, n), the elements h(0,0), h(0,1) and h(0,2) are 3, 4 and 5, respectively.

Determining the Dimension of the Resultant Matrix
The dimension of the resultant matrix depends upon the dimensions of the input matrices. The dimension of x(m, n) is 3 x 3 (three rows and three columns). The dimension of h(m, n) is 1 x 3 (one row and three columns). Therefore, the resultant matrix dimension is calculated as:

Dimension of resultant matrix = (No. of rows of x(m,n) + No. of rows of h(m,n) - 1) x (No. of columns of x(m,n) + No. of columns of h(m,n) - 1)
                              = (3 + 1 - 1) x (3 + 3 - 1) = 3 x 5

The resultant matrix y(m, n) of size 3 x 5 is given as,

          [y(0,0)  y(0,1)  y(0,2)  y(0,3)  y(0,4)]
y(m,n) =  [y(1,0)  y(1,1)  y(1,2)  y(1,3)  y(1,4)]
          [y(2,0)  y(2,1)  y(2,2)  y(2,3)  y(2,4)]
©) a-0— | —4*_Q)> | ° m=0m=1m=2 (2,1) =8x3+7x4 =52 B.Tech, Sixth Semester TP Solved Series a Graphics and Image Processing) ky, bm, 1-0) m=Om=2im=2 oo na ° nat ——O— 1-0 - 4 ne-l 5 ne ‘bQ-m, I-m) 0 9 ae o @ ast —o—@— neo Sane m=0 m=! m=2 andamentals of Digital Image Processing (Module 4) 15) Finding the value of y(2, 2) w2 3 6 @ a1 2 5 @® ner | 4 Oe m=0m=Im=2 (2.2) =9X3+8x447x5 = 94 14) Finding the value of y (2, 3) aon os ° a2 5 6 @ n= 2 5 @® wo | m=0m=1m=2 (2,3) =9X448x5 = 76 15) Finding the value of y (2, 4) Zen) o ° 3 6 © 205 8 nzo— | —4—745 W024) =9x5 = 45 The resultant values obtained from steps 1 to 15 are given below: W00)=3 y(O1)=10 y(0,2)=22 y(0,3)=22 (0,4) =15 2 y(ll)=31 y(,2)=58 y(3)=49 y(14)=30 ¥20)=21 y(2,1)=52 y(2,2)=94 y(2,3)=76 (3,4) =45 wh B99 hom, 20) ey ee g 0 @ vt ap azo m=O mat m=2 b@-m, 3-0) "t 3 ones g 09 @ 2 9 9 @ a= $9 —0—o— x0 m=0 m=1 m=2 bem en) | : 4 ns3 9 9 @ a2 ; @ 8 0 net spe to asl m=0 m=1 m=2 The graphical and the matrix forms of representation of the resultant matrix indices are shown below: x(m, 0) a=4 15 3045 naz 8 a= 1382 neo} m=0m=1m=2 Graphical Representation “(3 10 22 22 15 y(m,n)=|12 31 58 49 30 21 52 94 76 45 Matrix Representation
