
UNIT-1 IMPORTANT QUESTIONS

1)Define Image Processing and its applications?

A) Image processing is a method of performing operations on an image in order to obtain an
enhanced image or to extract useful information from it. It is a type of signal processing in
which the input is an image and the output may be an image or characteristics/features
associated with that image. Image processing is among the most rapidly growing technologies
today, and it forms a core research area within the engineering and computer science disciplines.
Image processing basically includes the following three steps:

• Importing the image via image acquisition tools;
• Analysing and manipulating the image;
• Output, in which the result can be an altered image or a report based on the image analysis.

There are two types of methods used for image processing, namely analogue and digital image
processing. Analogue image processing is used for hard copies such as printouts and
photographs; image analysts apply various fundamentals of interpretation while using these visual
techniques. Digital image processing techniques help in manipulating digital images by using
computers. The three general phases that all types of data undergo while using the digital
technique are pre-processing, enhancement and display, and information extraction.
Applications:
• Image sharpening and restoration
• Medical field
• Remote sensing
• Transmission and encoding
• Machine/Robot vision
• Color processing
• Pattern recognition
• Video processing
• Microscopic Imaging
2)Fundamental Steps in Digital Image Processing
A) Step 1: Image Acquisition The image is captured by a sensor (e.g. a camera) and digitized, if the
output of the camera or sensor is not already in digital form, using an analogue-to-digital converter.
Step 2: Image Enhancement The process of manipulating an image so that the result is more
suitable than the original for a specific application. The idea behind enhancement techniques is to
bring out details that are hidden, or simply to highlight certain features of interest in an image.
Step 3: Image Restoration Improving the appearance of an image using techniques that tend to be
mathematical or probabilistic models. Enhancement, on the other hand, is based on human
subjective preferences regarding what constitutes a "good" enhancement result.
Step 4: Colour Image Processing Use the colour information in the image to extract features of
interest.
Step 5: Wavelets The foundation for representing images at various degrees of resolution; used
for image data compression.
Step 6: Compression Techniques for reducing the storage required to save an image or the
bandwidth required to transmit it.
Step 7: Morphological Processing Tools for extracting image components that are useful in the
representation and description of shape. In this step, there would be a transition from processes
that output images, to processes that output image attributes.
Step 8: Image Segmentation Segmentation procedures partition an image into its constituent parts
or objects
Step 9: Representation and Description - Representation: make a decision whether the data should
be represented as a boundary or as a complete region; this almost always follows the output of a
segmentation stage. - Boundary representation: focuses on external shape characteristics, such as
corners and inflections. - Region representation: focuses on internal properties, such as texture or
skeletal shape. Choosing a representation is only part of the solution for transforming raw data
into a form suitable for subsequent computer processing (mainly recognition). - Description: also
called feature selection, deals with extracting attributes that result in some information of interest.
Step 10: Knowledge Base Knowledge about the problem domain is coded into the image processing
system in the form of a knowledge database.

3)Component of Image Processing System


A) 1. Image Sensors Two elements are required to acquire digital images. The first is the physical
device that is sensitive to the energy radiated by the object we wish to image (the sensor). The
second, called a digitizer, is a device for converting the output of the physical sensing device into
digital form.
2. Specialized Image Processing Hardware Usually consists of the digitizer, mentioned before, plus
hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which
performs arithmetic and logical operations in parallel on entire images. This type of hardware is
sometimes called a front-end subsystem, and its most distinguishing characteristic is speed. In
other words, this unit performs functions that require fast data throughput that the typical main
computer cannot handle.
3. Computer The computer in an image processing system is a general-purpose computer and can
range from a PC to a supercomputer. In dedicated applications, sometimes specially designed
computers are used to achieve a required level of performance.
4. Image Processing Software Software for image processing consists of specialized modules that
perform specific tasks. A well-designed package also includes the capability for the user to write
code that, as a minimum, utilizes the specialized modules.
5. Mass Storage Capability Mass storage capability is a must in image processing applications.
An image of size 1024 x 1024 pixels, with 8 bits per pixel, requires one megabyte of storage space
if the image is not compressed. Digital storage for image processing applications falls into three
principal categories: 1. short-term storage for use during processing; 2. on-line storage for
relatively fast recall; 3. archival storage, characterized by infrequent access.
6. Image Displays The displays in use today are mainly color (preferably flat screen) TV monitors.
Monitors are driven by the outputs of the image and graphics display cards that are an integral part
of a computer system
7. Hardcopy Devices Used for recording images; they include laser printers, film cameras,
heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks.
8. Networking Is almost a default function in any computer system in use today. Because of the
large amount of data inherent in image processing applications, the key consideration in image
transmission is bandwidth.
4) Simple Image Formation Model
A)NOT FOUND
5)Image Sampling and Quantization
A) In order to become suitable for digital processing, an image function f(x, y) must be digitized both
spatially and in amplitude. Typically, a frame grabber or digitizer is used to sample and quantize the
analogue video signal. Hence, in order to create a digital image, we need to convert continuous data
into digital form. This is done in two steps:

• Sampling
• Quantization

The sampling rate determines the spatial resolution of the digitized image, while the quantization
level determines the number of grey levels in the digitized image. The magnitude of the sampled
image is expressed as a digital value in image processing. The transition between continuous values
of the image function and its digital equivalent is called quantization.
The number of quantization levels should be high enough for human perception of fine shading
details in the image. The occurrence of false contours is the main problem in an image which has
been quantized with insufficient brightness levels.
Sampling :

• The process of digitizing the coordinate values is called sampling.
• A continuous image f(x, y) is normally approximated by equally spaced samples arranged in
the form of an N x M array, where each element of the array is a discrete quantity.
• The sampling rate of the digitizer determines the spatial resolution of the digitized image.
• The finer the sampling (i.e. the larger M and N), the better the approximation of the continuous
image function f(x, y).

Quantization :

• The process of digitizing the amplitude values is called quantization.
• The magnitude of the sampled image is expressed as digital values in image processing.
• The number of quantization levels should be high enough for human perception of the fine
details in the image.
• Most digital image processing devices quantize into k equal intervals.
• If b bits are used, the number of quantization levels is k = 2^b.
• 8 bits per pixel are commonly used.
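A minimal NumPy sketch of the two steps (the function name and the synthetic gradient image are illustrative; an 8-bit grayscale image stored as a 2-D array is assumed):

```python
import numpy as np

def sample_and_quantize(f, step, b):
    """Spatially sample a grayscale image by keeping every `step`-th
    pixel, then quantize amplitudes into k = 2**b equal intervals."""
    sampled = f[::step, ::step]            # sampling: digitize coordinates
    k = 2 ** b                             # number of quantization levels
    width = 256 // k                       # width of each interval
    return (sampled // width) * width      # quantization: digitize amplitude

# Example: a synthetic 256x256 horizontal gradient
f = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
g = sample_and_quantize(f, step=4, b=3)    # 64x64 image with 8 gray levels
```

With b = 3 the output shows visible false contours, illustrating the insufficient-brightness-levels problem described above.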

6)Relationship between pixels

A) Pixel

A pixel is the smallest element of an image, and each pixel corresponds to a single value. In an 8-bit
gray-scale image, the value of a pixel lies between 0 and 255. The value of a pixel at any point
corresponds to the intensity of the light photons striking that point; each pixel stores a value
proportional to the light intensity at that particular location. A pixel is also known as a PEL
(picture element).

Calculation of total number of pixels

We have defined an image as a two-dimensional signal or matrix. In that case, the number of PELs
is equal to the number of rows multiplied by the number of columns. This can be represented
mathematically as: total number of pixels = number of rows x number of columns. Equivalently,
the number of (x, y) coordinate pairs makes up the total number of pixels.
Basic Relationships Between Pixels
1. Neighborhood
2. Adjacency
3. Connectivity
4. Paths
5. Regions and boundaries
1. Neighbors of a Pixel
• Any pixel p(x, y) has two vertical and two horizontal neighbors, given by
[(x+1, y), (x-1, y), (x, y+1), (x, y-1)]
• This set of pixels is known as the 4-neighbors of p, denoted by N4(p). All of them are at a unit
distance from p.
• The four diagonal neighbors of p(x, y) are given by [(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)]
• This set is denoted by ND(p). Each of them is at a Euclidean distance of 1.414 (√2) from p.
• The points in ND(p) and N4(p) together are known as the 8-neighbors of the point p, denoted by N8(p).
• Some of the points in N4, ND and N8 may fall outside the image when p lies on the border of the
image; the sketch below drops such points.
Neighbors of a pixel
a. 4-neighbors of a pixel p are its vertical and horizontal neighbors, denoted by N4(p)
b. 8-neighbors of a pixel p are its vertical, horizontal and 4 diagonal neighbors, denoted by N8(p)
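A small Python sketch of these neighbor definitions (function names are illustrative; border handling is optional):

```python
def n4(x, y):
    """4-neighbors of pixel p(x, y): vertical and horizontal."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """Diagonal neighbors of p(x, y)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y, rows=None, cols=None):
    """8-neighbors: union of N4 and ND. If image dimensions are given,
    neighbors falling outside the image border are dropped."""
    pts = n4(x, y) + nd(x, y)
    if rows is not None and cols is not None:
        pts = [(i, j) for i, j in pts if 0 <= i < rows and 0 <= j < cols]
    return pts
```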
2. Adjacency
• Two pixels are adjacent if they are neighbors and their gray levels satisfy a specified criterion of
similarity.
• For instance, in a binary image two pixels are connected if they are 4-neighbors and have the same
value (0/1).
PATHS
▪ A path from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct
pixels with coordinates
(x0, y0), (x1, y1), (x2, y2), ..., (xn, yn), where (x0, y0) = (x, y) and (xn, yn) = (s, t), and (xi, yi) is adjacent
to (xi-1, yi-1) for 1 ≤ i ≤ n, where n is the length of the path.
REGION AND BOUNDARIES
A subset R of pixels in an image is called a region of the image if R is a connected set. The boundary
of the region R is the set of pixels in the region that have one or more neighbors that are not in R.
UNIT-2 IMPORTANT QUESTIONS

1)Image Enhancements in Spatial Domain

• Image enhancement is the process of adjusting digital images so that the results are more
suitable for display or further image analysis. For example, you can remove noise, sharpen,
or brighten an image, making it easier to identify key features.
• Enhancement is the process of manipulating an image so that the result is more suitable than
the original image for a specific application.
• There are two approaches to image enhancement:
1. Spatial domain: refers to the image plane itself; direct manipulation of the pixels of the image.
2. Frequency domain: based on modifying the Fourier transform of the image.
• Good images:
1. For human vision: the visual evaluation of image quality is a highly subjective process. It is
hard to standardize the definition of a high-quality image, as it depends on each person's vision
and requirements.
2. For machine perception: the evaluation task is easier; a good image is one which gives the
best machine result.

2)Basic Gray Level Transformation

A) Gray level transformation

Many image processing techniques are based on gray level transformations, since these operate
directly on pixels. A gray level image involves 256 levels of gray; in its histogram, the horizontal
axis spans from 0 to 255, and the vertical axis depends on the number of pixels in the image.
The simplest formula for an image enhancement technique is:

1. s = T(r)

where T is the transformation, r is the pixel value before processing, and s is the pixel value after
processing.

There are three basic types of gray level transformation:

• Linear
• Logarithmic
• Power – law

Linear Transformation

• The linear transformations include the identity transformation and the negative transformation.
• In the identity transformation, each value of the input image is mapped directly to the same
value in the output image.
• The negative transformation is the opposite of the identity transformation. Here, each value of
the input image is subtracted from L-1 and the result is mapped onto the output image.

Logarithmic transformations

Logarithmic transformation is divided into two types:

1. Log transformation
2. Inverse log transformation

• The formula for Logarithmic transformation

s = c log(1 + r)
• Here, r and s are the pixel values of the input and output image respectively, and c is a constant.
In the formula, 1 is added to each pixel value because if the pixel intensity is zero, log(0) is
undefined; adding one gives a well-defined minimum value of log(1) = 0.

Power - Law transformations

Power-law transformation comes in two types: nth-power transformation and nth-root
transformation.

Formula:

s = c r^γ

• Here, γ is gamma, which is why this transformation is also known as the gamma transformation.
• All display devices have their own gamma correction; that is why images are displayed at
different intensities.
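A minimal NumPy sketch of the three basic transformations (function names are illustrative; 8-bit input with L = 256 is assumed):

```python
import numpy as np

def negative(r, L=256):
    """Negative transformation: s = (L - 1) - r."""
    return (L - 1) - r

def log_transform(r, c=255 / np.log(256)):
    """Log transformation: s = c * log(1 + r). The default c scales
    8-bit input so the output again spans [0, 255]."""
    return (c * np.log1p(r.astype(np.float64))).astype(np.uint8)

def gamma_transform(r, gamma, c=1.0, L=256):
    """Power-law (gamma) transformation: s = c * r**gamma, computed
    on intensities normalized to [0, 1]."""
    s = c * (r / (L - 1)) ** gamma
    return (s * (L - 1)).clip(0, L - 1).astype(np.uint8)
```

For example, `gamma_transform(img, 0.5)` brightens mid-tones, while `gamma_transform(img, 2.0)` darkens them.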

3)Histogram Processing

A) In digital image processing, the histogram is a graphical representation of a digital image: a plot
of the number of pixels at each tonal value. Nowadays, image histograms are present in digital
cameras, and photographers use them to see the distribution of tones captured.

In the graph, the horizontal axis represents the tonal variations, whereas the vertical axis represents
the number of pixels with that particular tone. Black and dark areas are represented on the left side
of the horizontal axis, medium grey in the middle, and bright areas on the right; the height of the
plot at each point represents the size of the corresponding area.

Histogram Processing Techniques

Histogram Sliding

In histogram sliding, the complete histogram is shifted to the right or to the left. When a histogram
is shifted, clear changes are seen in the brightness of the image. The brightness of the image is
defined by the intensity of light which is emitted by a particular light source.

Histogram Stretching
In histogram stretching, the contrast of an image is increased. The contrast of an image is defined
by the difference between the maximum and minimum values of pixel intensity. To increase the
contrast of an image, its histogram is stretched so that it fully covers the dynamic range of the
histogram.

Histogram Equalization

Histogram equalization is used for equalizing all the pixel values of an image. The transformation is
done in such a way that a uniform, flattened histogram is produced. Histogram equalization
increases the dynamic range of pixel values and aims at an equal count of pixels at each level, which
produces a flat histogram and a high-contrast image.
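The mapping follows directly from this description: build the histogram, form the cumulative distribution function (CDF), and map each level r to s = (L-1) * CDF(r). A minimal NumPy version for 8-bit images (function name illustrative):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image:
    map each gray level r to s = (L - 1) * CDF(r)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size           # cumulative distribution
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]                          # apply the mapping per pixel
```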

4)Image Enhancement using Arithmetic and Logic Operators

A) Arithmetic Operations:

• Adding or subtracting a constant value to/from each image pixel value can be used to
increase/decrease its brightness.
• Blending: adding images together produces a composite of both input images. This can be
used to produce blending effects using weighted addition.
• Multiplication and division can be used as a simple means of contrast adjustment, an
extension of addition/subtraction (e.g. reduce contrast to 25% = division by 4; increase
contrast by 50% = multiplication by 1.5). A sketch of all three follows.
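A minimal NumPy sketch of these arithmetic operations (function names are illustrative; 8-bit images are assumed, with clipping to keep results in range):

```python
import numpy as np

def adjust_brightness(img, offset):
    """Add a constant: a positive offset brightens, a negative one darkens."""
    return np.clip(img.astype(np.int16) + offset, 0, 255).astype(np.uint8)

def blend(img_a, img_b, alpha):
    """Weighted addition of two same-sized images; alpha in [0, 1]."""
    out = alpha * img_a.astype(np.float64) + (1 - alpha) * img_b.astype(np.float64)
    return out.astype(np.uint8)

def scale_contrast(img, factor):
    """Multiply pixel values: factor 0.25 reduces contrast to 25%,
    factor 1.5 increases it by 50%."""
    return np.clip(img.astype(np.float64) * factor, 0, 255).astype(np.uint8)
```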

Logical Operations on Images

Standard logical operations can be performed between images, such as NOT, OR, XOR, and AND. In
general, a logical operation is performed between each corresponding bit of the image pixel
representations (i.e. it is a bit-wise operator).
• NOT (inversion): inverts the image representation. In the simplest case of a binary image,
the (black) background pixels become white and vice versa.
• OR/XOR: useful for processing binary-valued images (0 or 1) to detect objects which have
moved between frames. Binary objects are typically produced by applying thresholding to
a grey-scale image.
• Logical AND: commonly used for detecting differences in images, highlighting target
regions with a binary mask, or producing bit-planes through an image.
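A small NumPy illustration of these bit-wise operations on binary masks (the two example frames are made up):

```python
import numpy as np

# Two binary masks (0 = background, 1 = object), e.g. produced by thresholding.
frame1 = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]], dtype=np.uint8)
frame2 = np.array([[0, 0, 1], [0, 1, 1], [0, 0, 0]], dtype=np.uint8)

inverted = 1 - frame1        # NOT: background and object swapped
union    = frame1 | frame2   # OR: pixels set in either frame
moved    = frame1 ^ frame2   # XOR: pixels that changed between frames
common   = frame1 & frame2   # AND: pixels set in both (masking)
```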

5)Basic Spatial Filtering

A) The spatial filtering technique is applied directly to the pixels of an image. The mask is usually
odd in size so that it has a specific centre pixel. The mask is moved over the image such that its
centre traverses all image pixels.

There are two types:


1. Linear Spatial Filter
2. Non-linear Spatial Filter

Smoothing Spatial Filter: A smoothing filter is used for blurring and noise reduction in the
image. Blurring is a pre-processing step for the removal of small details, and noise reduction is
accomplished by blurring.
Sharpening Spatial Filter: Also known as a derivative filter. The purpose of the sharpening
spatial filter is just the opposite of the smoothing spatial filter: its main focus is on removing blur
and highlighting edges. It is based on the first- and second-order derivatives.

6)Smoothening and Sharpening Spatial Filters

A) Smoothing and sharpening functions use the pixels in an N x N neighborhood about each pixel to
modify an image. For both smoothing and sharpening filters, the larger the N x N neighborhood, the
stronger the smoothing or sharpening effect. Smoothing and sharpening functions can be either
non-adaptive or adaptive to the local statistics found in each N x N neighborhood of an image;
non-adaptive functions use the same calculation in each N x N neighborhood to modify a pixel.

1. Low pass filters (Smoothing)


Low-pass filtering (aka smoothing) is employed to remove high-spatial-frequency noise from a
digital image. Low-pass filters usually employ a moving-window operator which affects one pixel
of the image at a time, changing its value by some function of a local region (window) of pixels.
The operator moves over the image to affect all the pixels in the image.

2. High pass filters (Edge Detection, Sharpening)


A high-pass filter can be used to make an image appear sharper. These filters emphasize fine
details in the image - the opposite of the low-pass filter. High-pass filtering works in the same way
as low-pass filtering; it just uses a different convolution kernel.
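A minimal sketch of both filter types using 3x3 convolution kernels (SciPy's ndimage.convolve is assumed available; the averaging and Laplacian kernels are standard examples, and the random image is a stand-in):

```python
import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)  # stand-in image

# 3x3 averaging (low-pass) kernel: smooths / blurs.
lp_kernel = np.full((3, 3), 1 / 9)
smoothed = ndimage.convolve(img, lp_kernel, mode="reflect")

# 3x3 Laplacian (high-pass) kernel: responds to edges and fine detail.
hp_kernel = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)
edges = ndimage.convolve(img, hp_kernel, mode="reflect")

# Simple sharpening: subtract the Laplacian response from the original.
sharpened = np.clip(img - edges, 0, 255)
```

Note that the two filters differ only in the kernel passed to the same convolution routine, exactly as the text states.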

7)Combining Spatial Enhancement methods


A) See the bone-scan example in combining-spatial-enhancement-methods-38_compress (1).pdf:

https://nanopdf.com/queue/combining-spatial-enhancement-methods-38_pdf?queue_id=-1&x=1606666212&z=NDkuMjA1LjIyOC4yNTE=

UNIT-4 IMPORTANT QUESTIONS

1)Color Fundamentals
A) DIP_Lecture15.pdf:

https://www.uotechnology.edu.iq/ce/lecture%202013n/4th%20Image%20Processing%20_Lectures/DIP_Lecture15.pdf

Check the color models from the above link too.

2)Color Models

A) Color models (also called color spaces or color systems)

• A color model is a specification of a coordinate system and a subspace within it where each
color is represented as a single point.

• Examples

1. RGB
2. CMY (cyan, magenta, yellow)/CMYK (cyan, magenta, yellow, black)
3. NTSC
4. YCbCr
5. HSV
6. HSI
3)Pseudo Color Image Processing

• Pseudo-color processing is a technique that maps each of the grey levels of a black and
white image into an assigned color. This colored image, when displayed, can make the
identification of certain features easier for the observer. The mappings are computationally
simple and fast. This makes pseudo-color an attractive technique for use on digital image
processing systems that are designed to be used in the interactive mode.
• Pseudocoloring is a technique to artificially assign colors to a grey scale. There are various
approaches for assigning color to grey-level images. One technique, known as intensity slicing,
assigns one shade of color to all grey levels that fall below a specified value and a different
shade to those grey levels that exceed it; a minimal sketch of this is given below. The majority
of techniques perform a grey-level-to-color transformation: the idea is to perform three
transformations on a particular grey level and feed them to the three color inputs (RGB) of a
color monitor. The result is a composite image whose color content depends on the
grey-level-to-color transformations.
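A minimal sketch of intensity slicing (the thresholds and colors are arbitrary illustrative choices):

```python
import numpy as np

def intensity_slice(gray, thresholds, colors):
    """Map gray-level ranges to colors. `thresholds` are the slice
    boundaries; `colors` holds len(thresholds) + 1 RGB triples."""
    rgb = np.zeros(gray.shape + (3,), dtype=np.uint8)
    idx = np.digitize(gray, thresholds)    # which slice each pixel falls in
    for i, color in enumerate(colors):
        rgb[idx == i] = color
    return rgb

# Two slice boundaries: dark pixels -> blue, mid -> green, bright -> red.
gray = np.tile(np.arange(256, dtype=np.uint8), (32, 1))
rgb = intensity_slice(gray, thresholds=[85, 170],
                      colors=[(0, 0, 255), (0, 255, 0), (255, 0, 0)])
```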

4)Full Color Image Processing


A)

5)Color Transforms

A)

6)Smoothing and Sharpening in Color Image Processing


A)

• Several factors impact colour images, and they do not only affect the visual perception of the
image; they also hinder the identification and distinction of image features that are relevant
for different applications such as segmentation or pattern recognition. Noise is one of the
most common of these factors: it can significantly affect the visual quality of images, as well
as the performance of most image processing tasks. It is the result of errors in the image
acquisition process.
• In several cases, images are taken under unsuitable conditions: low light, excessive
brightness, or poor weather conditions. Deficient-quality equipment can hamper image
acquisition because of transmission errors, problems with network cables, signal disturbances,
trouble with sensors, etc. As a result, pixel intensity values do not reflect the true colors of the
real scene being shot. For these reasons, many methods have been developed in order to
recover lost image information and to enhance image details. Color image smoothing is part
of the pre-processing techniques intended for removing possible image perturbations without
losing image information.
• Analogously, sharpening is a pre-processing technique that plays an important role in feature
extraction in image processing. But even in this case, smoothing is needed in order to obtain a
robust solution. This has motivated the study and development of methods that are able to
cope with both operations.
• The initial approach is usually to consider it as a two-step process: first smoothing and later
sharpening, or the other way around. However, this approach usually leads to problems. On
the one hand, if we first apply a smoothing technique, we may lose information that cannot be
recovered in the succeeding sharpening step. On the other hand, if we first apply a sharpening
method to a noisy image, we amplify the noise present in it. The ideal way to address this
problem is a method that is able to sharpen image details and edges while removing noise.
Nevertheless, this is not a simple task, given the opposite nature of these two operations.
• Many methods for both sharpening and smoothing have been proposed in the literature, but
if we restrict ourselves to methods that consider both of them simultaneously, the state of the
art is not so extensive. Several two-step approaches intensify the features of an image and
reduce the existing noise, and other techniques address both goals simultaneously.

7)Color Segmentation

A) NOT FOUND

UNIT-5 IMPORTANT QUESTIONS

1)Fundamentals of Image Compression

A) Image compression is a type of data compression applied to digital images to reduce their cost
of storage or transmission. Algorithms may take advantage of visual perception and the statistical
properties of image data to provide superior results compared with the generic data compression
methods used for other digital data.

2)Image Compression Models

A)

Mapper: transforms data to account for interpixel redundancies

Quantizer: quantizes the data to account for psychovisual redundancies

Symbol encoder: encodes the data to account for coding redundancies

The decoder applies the inverse steps

Huffman Coding (addresses coding redundancy)
• Source symbols are encoded one at a time.
• There is a one-to-one correspondence between source symbols and code words.
• It is an optimal code: it minimizes the code-word length per source symbol.
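A compact sketch of Huffman code construction using a min-heap (the function name and the toy symbol string are illustrative):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table from a sequence of source symbols.
    Returns {symbol: bitstring}; frequent symbols get shorter codes."""
    freq = Counter(symbols)
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                     # degenerate single-symbol source
        return {s: "0" for s in freq}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)    # two least-frequent subtrees
        f2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]

table = huffman_code("aaaabbbccd")
encoded = "".join(table[s] for s in "aaaabbbccd")
```

Note the one-to-one symbol-to-code-word mapping the bullets describe: each source symbol gets exactly one prefix-free bitstring.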

3)Lossy Predictive Coding

• Differential pulse-code modulation (DPCM) is a signal encoder that uses the baseline of
pulse-code modulation (PCM) but adds some functionalities based on the prediction of the
samples of the signal. The input can be an analog signal or a digital signal.
• If the input is a continuous-time analog signal, it needs to be sampled first so that a
discrete-time signal is the input to the DPCM encoder.
1. Option 1: take the values of two consecutive samples; if they are analog samples, quantize
them; calculate the difference between the first one and the next; the output is the
difference.
2. Option 2: instead of taking a difference relative to a previous input sample, take the
difference relative to the output of a local model of the decoder process; in this option, the
difference can be quantized, which allows a good way to incorporate a controlled loss in the
encoding.
• Applying one of these two processes, short-term redundancy (positive correlation of nearby
values) of the signal is eliminated; compression ratios on the order of 2 to 4 can be achieved
if differences are subsequently entropy coded because the entropy of the difference signal is
much smaller than that of the original discrete signal treated as independent samples.
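A minimal 1-D sketch of option 2 above (prediction from a local model of the decoder, with a uniform quantizer; the step size and the toy signal are made up):

```python
def dpcm_encode(samples, step):
    """Option-2 DPCM: predict each sample as the previous *decoded*
    value and uniformly quantize the prediction difference."""
    codes, prediction = [], 0
    for x in samples:
        q = int(round((x - prediction) / step))  # quantized difference
        codes.append(q)
        prediction += q * step                   # local model of the decoder
    return codes

def dpcm_decode(codes, step):
    """Rebuild the signal by accumulating the dequantized differences."""
    out, prediction = [], 0
    for q in codes:
        prediction += q * step
        out.append(prediction)
    return out

signal = [100, 102, 105, 104, 110]
codes = dpcm_encode(signal, step=2)     # small integers, cheap to entropy-code
restored = dpcm_decode(codes, step=2)   # matches the input to within step/2
```

Because the encoder predicts from the decoder's own reconstruction, the quantization error never accumulates, which is what makes the controlled loss in option 2 workable.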


4)Image Compression Models

A)NOT FOUND

5)Error Free Compression

A) Error-free compression

• Useful in applications where no loss of information is tolerable. This may be due to accuracy
requirements, legal requirements, or the less-than-perfect quality of the original image.

• Compression can be achieved by removing coding and/or interpixel redundancy.

• Typical compression ratios achievable by lossless techniques range from 2 to 10.

UNIT-6 IMPORTANT QUESTIONS

1)Morphological Image Processing

• Morphological image processing is a collection of non-linear operations related to the shape
or morphology of features in an image.
• According to Wikipedia, morphological operations rely only on the relative ordering of pixel
values, not on their numerical values, and are therefore especially suited to the processing
of binary images.
• Morphological operations can also be applied to greyscale images whose light transfer
functions are unknown, so that their absolute pixel values are of no or minor interest.

• Morphological techniques probe an image with a small shape or template called a
structuring element.
• The structuring element is positioned at all possible locations in the image and compared
with the corresponding neighbourhood of pixels.
• Some operations test whether the element "fits" within the neighbourhood, while others test
whether it "hits" or intersects the neighbourhood.

2)Preliminaries

• The language of mathematical morphology is set theory. As such, morphology offers a
unified and powerful approach to numerous image processing problems.
• Sets in mathematical morphology represent objects in an image.
• For example, the set of all white pixels in a binary image is a complete morphological
description of the image. In binary images, the sets in question are members of the 2-D
integer space Z², where each element of a set is a tuple (2-D vector) whose coordinates are
the (x, y) coordinates of a white (or black, depending on the convention) pixel in the image.

3)Dilation and Erosion

A) The most basic morphological operations are dilation and erosion. Dilation adds pixels to the
boundaries of objects in an image, while erosion removes pixels on object boundaries. The number
of pixels added or removed from the objects in an image depends on the size and shape of
the structuring element used to process the image

Erosion

With A and B as sets in Z², the erosion of A by B, denoted A ⊖ B, is defined as

A ⊖ B = { z | (B)z ⊆ A }    (3)

Equation (3) indicates that the erosion of A by B is the set of all points z such that B, translated
by z, is contained in A. The set B is a structuring element. The statement that B has to be contained
in A is equivalent to B not sharing any common elements with the background, so erosion can be
expressed in the equivalent form

A ⊖ B = { z | (B)z ∩ A^c = ∅ }    (4)

Dilation

With A and B as sets in Z², the dilation of A by B, denoted A ⊕ B, is defined as

A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }    (5)

This equation is based on reflecting B about its origin, giving B̂, and shifting the reflection by z.
The dilation of A by B is then the set of all displacements z such that B̂ and A overlap by at least
one element. Based on this interpretation, equation (5) can be written equivalently as

A ⊕ B = { z | [ (B̂)z ∩ A ] ⊆ A }    (6)

We assume that B is a structuring element and A is the set (object in an image) to be dilated.
Unlike erosion, which is a shrinking or thinning operation, dilation "grows" or "thickens" objects
in a binary image. The specific manner and extent of this thickening is controlled by the shape of
the structuring element.
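A small sketch of both operations using SciPy (scipy.ndimage.binary_erosion / binary_dilation; the square object and 3x3 structuring element are illustrative):

```python
import numpy as np
from scipy import ndimage

A = np.zeros((7, 7), dtype=bool)
A[2:5, 2:5] = True                       # a 3x3 square object

B = np.ones((3, 3), dtype=bool)          # structuring element

eroded  = ndimage.binary_erosion(A, structure=B)   # shrinks to the center pixel
dilated = ndimage.binary_dilation(A, structure=B)  # grows to a 5x5 square
```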

4)Open and Closing in Morphological Image Processing

A)

5)Hit and Miss Transform

• The hit-and-miss transform is a general binary morphological operation that can be used to
look for particular patterns of foreground and background pixels in an image.
• It is actually the basic operation of binary morphology since almost all the other binary
morphological operators can be derived from it.
• As with other binary morphological operators it takes as input a binary image and a
structuring element, and produces another binary image as output.

• The hit-and-miss transform makes it possible to derive information on how objects in a
binary image are related to their surroundings. The operation requires a matched pair of
structuring elements, {s1, s2}, that probe the inside and outside, respectively, of objects in
the image:

• f ⊛ {s1, s2} = (f ⊖ s1) ∩ (f^c ⊖ s2)

where ⊖ denotes erosion and f^c is the complement of f.

• A pixel belonging to an object is preserved by the hit and miss transform if and only
if s1 translated to that pixel fits inside the object AND s2 translated to that pixel fits outside
the object. It is assumed that s1 and s2 do not intersect, otherwise it would be impossible for
both fits to occur simultaneously.
• The hit and miss transform can be used for detecting specific shapes (spatial arrangements
of object and background pixel values) if the two structuring elements present the desired
shape, as well as for thinning or thickening of object linear elements.
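SciPy exposes this operation as scipy.ndimage.binary_hit_or_miss. A small sketch detecting isolated (4-isolated) foreground pixels, with illustrative structuring elements s1 and s2:

```python
import numpy as np
from scipy import ndimage

img = np.array([[0, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=bool)

# s1 probes the inside (foreground); s2 probes the outside (background).
s1 = np.array([[1]])                        # a single foreground pixel...
s2 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])                  # ...whose 4-neighbors are background

# True only where s1 fits the object AND s2 fits the background:
isolated = ndimage.binary_hit_or_miss(img, structure1=s1, structure2=s2)
# Here only the pixel at (1, 1) is detected.
```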

6)Basic Morphologic Algorithms

A) Check the PDF.

UNIT-7 IMPORTANT QUESTIONS

1)Image Segmentation

• In digital image processing and computer vision, image segmentation is the process of
partitioning a digital image into multiple segments (sets of pixels, also known as image
objects).
• The goal of segmentation is to simplify and/or change the representation of an image into
something that is more meaningful and easier to analyze. Image segmentation is typically
used to locate objects and boundaries (lines, curves, etc.) in images.
• More precisely, image segmentation is the process of assigning a label to every pixel in an
image such that pixels with the same label share certain characteristics.
• The result of image segmentation is a set of segments that collectively cover the entire
image, or a set of contours extracted from the image (see edge detection).
• Each of the pixels in a region are similar with respect to some characteristic or computed
property, such as color, intensity, or texture.
• Adjacent regions are significantly different with respect to the same characteristic(s). When
applied to a stack of images, typical in medical imaging, the resulting contours after image
segmentation can be used to create 3D reconstructions with the help of interpolation
algorithms like marching cubes.
• Semantic segmentation is an approach that detects, for every pixel, the class of the object to
which it belongs.
• For example, all people in a figure are segmented as one object and the background as
another.
• Instance segmentation is an approach that identifies, for every pixel, the instance of the
object to which it belongs. It detects each distinct object of interest in the image.
• For example, each person in a figure is segmented as an individual object.
• There are three classes of segmentation techniques.
1. Classical approaches
2. AI based techniques
3. Techniques not falling into the above two categories

2)Detection of Discontinuities

A)

3)Edge Linking and Boundary Detection

A)

4)Thresholding
• In digital image processing, thresholding is the simplest method of segmenting images.
• From a grayscale image, thresholding can be used to create binary images
• The simplest thresholding methods replace each pixel in an image with a black pixel if the
image intensity I(i, j) is less than some fixed constant T (that is, if I(i, j) < T), or with a white
pixel if the image intensity is greater than that constant. In the classic example image, this
results in the dark tree becoming completely black and the white snow becoming completely
white.
• The simplest method of image segmentation is called the thresholding method. This method
is based on a clip-level (or a threshold value) to turn a gray-scale image into a binary image.
• The key of this method is to select the threshold value (or values when multiple-levels are
selected). Several popular methods are used in industry including the maximum entropy
method, balanced histogram thresholding, Otsu's method (maximum variance), and k-means
clustering.
• Recently, methods have been developed for thresholding computed tomography (CT) images.
The key idea is that, unlike Otsu's method, the thresholds are derived from the radiographs
instead of the (reconstructed) image.
• Newer methods suggest the use of multi-dimensional fuzzy rule-based non-linear thresholds.
In these works, the decision over each pixel's membership of a segment is based on
multi-dimensional rules derived from fuzzy logic and evolutionary algorithms, taking into
account the image lighting environment and application.
• Thresholding methods are categorised as follows:
1. Histogram shape-based methods
2. Entropy-based methods
3. Clustering-based methods
4. Object-attribute methods
5. Spatial methods
6. Layered methods

Method Limitations
• Automatic thresholding works best when a good background-to-foreground contrast ratio
exists, meaning the picture must be taken in good lighting conditions with minimal glare.
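A minimal NumPy sketch of global thresholding with Otsu's method, which the text mentions (the implementation follows the standard between-class-variance formulation; the random test image is a stand-in):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose T maximizing the between-class variance
    of the gray-level histogram (8-bit grayscale assumed)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to each T
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

img = np.random.randint(0, 256, (32, 32)).astype(np.uint8)
T = otsu_threshold(img)
binary = img > T      # white where intensity exceeds T, black elsewhere
```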

5)Region Based Segmentation

• Region-based techniques rely on common patterns in intensity values within a cluster of
neighboring pixels.
• The cluster is referred to as the region, and the goal of the segmentation algorithm is to
group regions according to their anatomical or functional roles.
• Region growing is a simple region-based image segmentation method. It is also classified as
a pixel-based image segmentation method, since it involves the selection of initial seed
points.
• This approach to segmentation examines the neighboring pixels of the initial seed points and
determines whether those neighbors should be added to the region. The process is iterated
on, in the same manner as general data clustering algorithms; a minimal sketch follows the
list below.
• Region-based segmentation methods attempt to partition or group regions according to
common image properties. These image properties consist of :
1. Intensity values from original images, or computed values based on an image operator
2. Textures or patterns that are unique to each type of region
3. Spectral profiles that provide multidimensional image data
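A minimal sketch of seeded region growing based on this description (4-connectivity and a simple intensity tolerance relative to the seed; names are illustrative):

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`: add 4-neighbors whose intensity is
    within `tol` of the seed pixel, breadth-first."""
    rows, cols = img.shape
    region = np.zeros((rows, cols), dtype=bool)
    seed_val = int(img[seed])
    queue = deque([seed])
    region[seed] = True
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < rows and 0 <= ny < cols and not region[nx, ny]
                    and abs(int(img[nx, ny]) - seed_val) <= tol):
                region[nx, ny] = True       # pixel joins the region
                queue.append((nx, ny))
    return region
```

The similarity test here uses raw intensity, but the same loop structure applies to the other properties listed above (texture or spectral profiles) by swapping the predicate.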
