DIGITAL IMAGE PROCESSING 15EC72

Module 1: DIGITAL IMAGE FUNDAMENTALS

Structure
1.1.0 Introduction
1.1.1 Objectives
1.1.2 Digital Image Processing
1.1.3 Fundamental Steps in Digital Image Processing
1.1.4 Components of an Image Processing System
1.1.5 Elements of Visual Perception
1.1.5.1 Image Formation in the Eye
1.1.5.2 Brightness Adaptation and Discrimination
1.2.1 Image Sensing and Acquisition
1.2.1.1 Image Acquisition Using a Single Sensor
1.2.1.2 Image Acquisition Using Sensor Strips
1.2.1.3 Image Acquisition Using Sensor Arrays
1.2.2 Image Sampling and Quantization
1.2.3 Some Basic Relationships between Pixels
1.2.3.1 Neighbors of a Pixel
1.2.3.2 Adjacency, Connectivity, Regions, and Boundaries
1.2.4 Linear and Nonlinear Operations
1.2.5 Applications of Digital Image Processing
1.2.6 Outcomes
1.2.7 Questions/Answers
1.2.8 Further Readings


1.1.0 Introduction
The science of image processing combines the way humans naturally perceive images with the science of mathematics. Image processing is thus defined as the manipulation and analysis of the information contained in an image. Digital image processing allows us to enhance important image features while attenuating details irrelevant to a specific application.
Imaging is based predominantly on energy radiated by electromagnetic waves, although sound reflected from objects can also be used to form ultrasonic images. Other major sources of digital images are electron beams for electron microscopy and synthetic images used in graphics and visualization.

1.1.1 Objectives
Acquire basic knowledge of digital image processing.
Identify the main components involved in an image processing system.
Introduce the different types of sensors used to acquire an image, and their characteristics.
1.1.2 Digital Image Processing
The science of image processing combines the way humans naturally perceive images with the science of mathematics. Image processing is thus defined as the manipulation and analysis of the information contained in an image. Digital image processing allows us to enhance important image features while attenuating details irrelevant to a specific application.
Imaging is based predominantly on energy radiated by electromagnetic waves, although sound reflected from objects can also be used to form ultrasonic images. Other major sources of digital images are electron beams for electron microscopy and synthetic images used in graphics and visualization.
1.1.3 Fundamental Steps in Digital Image Processing
It is helpful to divide the material covered in the following chapters into two broad categories: methods whose input and output are images, and methods whose inputs may be images but whose outputs are attributes extracted from those images. The intention is to convey an idea of all the methodologies that can be applied to images for different purposes and possibly with different objectives.
Image acquisition is the first process. Acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.


Fig. 1.1 does not imply that every process is applied to every image.

Fig. 1.1: Fundamental Steps in Digital Image Processing


Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.
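As a concrete illustration, the sketch below shows simple linear contrast stretching, one common enhancement technique. It is a minimal Python example (assuming NumPy and an 8-bit grayscale image), not a method prescribed by these notes:

import numpy as np

def stretch_contrast(image):
    """Linearly rescale gray levels to span the full 8-bit range [0, 255]."""
    lo, hi = float(image.min()), float(image.max())
    if hi == lo:              # flat image: nothing to stretch
        return image.copy()
    scaled = (image.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return scaled.astype(np.uint8)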
Image restoration is an area that also deals with improving the appearance of an image. Unlike
enhancement, which is subjective, image restoration is objective, in the sense that restoration
techniques tend to be based on mathematical or probabilistic models of image degradation.
Enhancement, on the other hand, is based on human subjective preferences regarding what
constitutes a "good" enhancement result.
Color image processing is an area that has been gaining in importance because of the
significant increase in the use of digital images over the Internet. Color is used also in later
chapters as the basis for extracting features of interest in an image.
Wavelets are the foundation for representing images in various degrees of resolution. In
particular, this material is used in this book for image data compression and for pyramidal
representation, in which images are subdivided successively into smaller regions.
Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.
Segmentation procedures partition an image into its constituent parts or objects. In general,
autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged
segmentation procedure brings the process a long way toward successful solution of imaging
problems that require objects to be identified individually. On the other hand, weak or erratic
segmentation algorithms almost always guarantee eventual failure. In general, the more accurate
the segmentation, the more likely recognition is to succeed.
Representation and description almost always follow the output of a segmentation stage, which
usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels
separating one image region from another) or all the points in the region itself. In either case,
converting the data to a form suitable for computer processing is necessary. The first decision
that must be made is whether the data should be represented as a boundary or as a complete
region. Boundary representation is appropriate when the focus is on external shape
characteristics, such as corners and inflections. Regional representation is appropriate when the
focus is on internal properties, such as texture or skeletal shape. In some applications, these
representations complement each other.
Choosing a representation is only part of the solution for transforming raw data into a form
suitable for subsequent computer processing. A method must also be specified for describing the
data so that features of interest are highlighted. Description, also called feature selection, deals
with extracting attributes that result in some quantitative information of interest or are basic for
differentiating one class of objects from another.
Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. We conclude our coverage of digital image processing with the development of methods for recognition of individual objects. So far we have said nothing about the need for prior knowledge, or about the interaction between the knowledge base and the processing modules. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications.
In addition to guiding the operation of each processing module, the knowledge base also
controls the interaction between modules. This distinction is made in Fig. 1.1 by the use of
double-headed arrows between the processing modules and the knowledge base, as opposed to
single-headed arrows linking the processing modules. Although we do not discuss image display
explicitly at this point, it is important to keep in mind that viewing the results of image
processing can take place at the output of any stage.
1.1.4 Components of an Image Processing System
The components of an image processing system are shown in Fig. 1.2.

Fig. 1.2: Components of an Image Processing System


Specialized image processing hardware usually consists of the digitizer (the device that converts the sensor output into digital form), plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic and logical operations in parallel on entire images. One example of how an ALU is used is in averaging images as quickly as they are digitized, for the purpose of noise reduction. This type of hardware sometimes is called a front-end subsystem, and its most distinguishing characteristic is speed. In other words, this unit performs functions that require fast data throughputs (e.g., digitizing and averaging video images at 30 frames/s) that the typical main computer cannot handle.
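A minimal software sketch of the averaging operation such a front end performs (illustrative only; the real subsystem does this in hardware, in real time, as frames arrive):

import numpy as np

def average_frames(frames):
    """Average K noisy frames of the same scene.

    For zero-mean noise, averaging K frames reduces the noise variance
    by a factor of K, which is why digitize-and-average front ends are
    used for noise reduction.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame in frames:
        acc += frame
    return (acc / len(frames)).astype(frames[0].dtype)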
The computer in an image processing system is a general-purpose computer and can range from
a PC to a supercomputer. In dedicated applications, sometimes specially designed computers are
used to achieve a required level of performance, but our interest here is on general-purpose
image processing systems. In these systems, almost any well-equipped PC-type machine is
suitable for offline image processing tasks.
Software for image processing consists of specialized modules that perform specific tasks. A
well-designed package also includes the capability for the user to write code that, as a minimum,
utilizes the specialized modules. More sophisticated software packages allow the integration of those modules and general-purpose software commands from at least one computer language.
Mass storage capability is a must in image processing applications. An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. When dealing with thousands, or even millions, of images, providing adequate storage in an image processing system can be a challenge. Digital storage for image processing applications falls into three principal categories: (1) short-term storage for use during processing, (2) on-line storage for relatively fast recall, and (3) archival storage, characterized by infrequent access. Storage is measured in bytes (eight bits), Kbytes (one thousand bytes), Mbytes (one million bytes), Gbytes (one billion bytes), and Tbytes (one trillion bytes).
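The storage figure quoted above follows directly from pixel count times bytes per pixel; a one-line Python check (illustrative):

def image_storage_bytes(width, height, bits_per_pixel=8):
    """Uncompressed storage required for one image, in bytes."""
    return width * height * bits_per_pixel // 8

# 1024 x 1024 pixels at 8 bits/pixel -> 1,048,576 bytes (one megabyte)
print(image_storage_bytes(1024, 1024))   # 1048576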
One method of providing short-term storage is computer memory. Another is by specialized boards, called frame buffers, that store one or more images and can be accessed rapidly, usually at video rates (e.g., at 30 complete images per second). The latter method allows virtually instantaneous image zoom, as well as scroll (vertical shifts) and pan (horizontal shifts). Frame buffers usually are housed in the specialized image processing hardware unit. On-line storage generally takes the form of magnetic disks or optical-media storage. The key factor characterizing on-line storage is frequent access to the stored data. Finally, archival storage is characterized by massive storage requirements but infrequent need for access. Magnetic tapes and optical disks housed in "jukeboxes" are the usual media for archival applications.
Image displays in use today are mainly color (preferably flat screen) TV monitors. Monitors are
driven by the outputs of image and graphics display cards that are an integral part of the
computer system. Seldom are there requirements for image display applications that cannot be
met by display cards available commercially as part of the computer system. In some cases, it is
necessary to have stereo displays, and these are implemented in the form of headgear containing
two small displays embedded in goggles worn by the user.
Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive
devices, inkjet units, and digital units, such as optical and CD-ROM disks. Film provides the
highest possible resolution, but paper is the obvious medium of choice for written material. For
presentations, images are displayed on film transparencies or in a digital medium if image
projection equipment is used. The latter approach is gaining acceptance as the standard for image
presentations.
Networking is almost a default function in any computer system in use today. Because of the
large amount of data inherent in image processing applications, the key consideration in image
transmission is bandwidth. In dedicated networks, this typically is not a problem, but
communications with remote sites via the Internet are not always as efficient. Fortunately, this
situation is improving quickly as a result of optical fiber and other broadband technologies.
1.1.5 Elements of Visual Perception

Although the digital image processing field is built on a foundation of mathematical and probabilistic formulations, human intuition and analysis play a central role in the choice of one technique versus another, and this choice often is made based on subjective, visual judgments.

1.1.5.1 Image Formation in the Eye

The principal difference between the lens of the eye and an ordinary optical lens is that the former is flexible. As illustrated in Fig. 1.3, the radius of curvature of the anterior surface of the lens is greater than the radius of its posterior surface. The shape of the lens is controlled by tension in the fibers of the ciliary body. To focus on distant objects, the controlling muscles cause the lens to be relatively flattened. Similarly, these muscles allow the lens to become thicker in order to focus on objects near the eye. The distance between the center of the lens and the retina (called the focal length) varies from approximately 17 mm to about 14 mm as the refractive power of the lens increases from its minimum to its maximum.

Fig. 1.3: Graphical representation of the eye.


When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power. When the eye focuses on a nearby object, the lens is most strongly refractive. This information makes it easy to calculate the size of the retinal image of any object. In Fig. 1.3, for example, the observer is looking at a tree 15 m high at a distance of 100 m. If h is the height in mm of that object in the retinal image, the geometry of the figure yields 15/100 = h/17, or h = 2.55 mm. The retinal image is focused primarily on the region of the fovea. Perception then takes place by the relative excitation of light receptors, which transform radiant energy into electrical impulses that are ultimately decoded by the brain.
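The same similar-triangles geometry can be wrapped in a small helper (an illustrative Python sketch; the 17 mm focal length is the fully relaxed value used in the example above):

def retinal_image_height_mm(object_height_m, distance_m, focal_length_mm=17.0):
    """Similar triangles: object_height/distance = image_height/focal_length."""
    return object_height_m * focal_length_mm / distance_m

# The 15 m tree viewed from 100 m: 15/100 = h/17, so h = 2.55 mm
print(retinal_image_height_mm(15, 100))   # 2.55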

1.1.5.2 Brightness Adaptation and Discrimination

Because digital images are displayed as a discrete set of intensities, the eye's ability to discriminate between different intensity levels is an important consideration in presenting image processing results. The range of light intensity levels to which the human visual system can adapt is enormous, on the order of 10^10, from the scotopic threshold to the glare limit. Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye. Fig. 1.4, a plot of light intensity versus subjective brightness, illustrates this characteristic. The long solid curve represents the range of intensities to which the visual system can adapt. In photopic vision alone, the range is about 10^6. The transition from scotopic to photopic vision is gradual over the approximate range from 0.001 to 0.1 millilambert (-3 to -1 mL in the log scale), as the double branches of the adaptation curve in this range show. The essential point in interpreting the impressive dynamic range depicted in Fig. 1.4 is that the visual system cannot operate over such a range simultaneously. Rather, it accomplishes this large variation by changes in its overall sensitivity, a phenomenon known as brightness adaptation.

Fig. 1.4: Range of subjective brightness sensation

1.2.1 Image Sensing and Acquisition


The types of images in which we are interested are generated by the combination of an "illumination" source and the reflection or absorption of energy from that source by the elements of the "scene" being imaged. We enclose illumination and scene in quotes to emphasize the fact that they are considerably more general than the familiar situation in which a visible light source illuminates a common everyday 3-D (three-dimensional) scene. For example, the illumination may originate from a source of electromagnetic energy such as radar, infrared, or X-ray energy. But, as noted earlier, it could originate from less traditional sources, such as ultrasound or even a computer-generated illumination pattern. Similarly, the scene elements could be familiar objects, but they can just as easily be molecules, buried rock formations, or a human brain. We could even image a source, such as acquiring images of the sun. Different types of sensors are shown in Fig. 1.5.

Fig. 1.5: Single, line and array sensor


Depending on the nature of the source, illumination energy is reflected from, or transmitted
through, objects. An example in the first category is light reflected from a planar surface. An
example in the second category is when X-rays pass through a patient’s body for the purpose of
generating a diagnostic X-ray film. In some applications, the reflected or transmitted energy is
focused onto a photo converter (e.g., a phosphor screen), which converts the energy into visible
light. Electron microscopy and some applications of gamma imaging use this approach.
The idea is simple: Incoming energy is transformed into a voltage by the combination of input
electrical power and sensor material that is responsive to the particular type of energy being
detected.
The output voltage waveform is the response of the sensor(s), and a digital quantity is
obtained from each sensor by digitizing its response. In this section, we look at the principal
modalities for image sensing and generation.
1.2.1.1 Image Acquisition Using a Single Sensor
The most familiar sensor of this type is the photodiode, which is constructed of silicon materials and
whose output voltage waveform is proportional to light. The use of a filter in front of a sensor
improves selectivity. For example, a green (pass) filter in front of a light sensor favors light in the
green band of the color spectrum. As a consequence, the sensor output will be stronger for green light
than for other components in the visible spectrum. In order to generate a 2-D image using a single
sensor, there has to be relative displacement in both the x- and y-directions between the sensor and
the area to be imaged. Fig. 1.6 shows an arrangement used in high-precision scanning, where a film
negative is mounted onto a drum whose mechanical rotation provides displacement in one
dimension. The single sensor is mounted on a lead screw that provides motion in the perpendicular
direction. Since mechanical motion can be controlled with high precision, this method is an
inexpensive (but slow) way to obtain high-resolution images.

Fig. 1.6: Combining a single sensor with a motion to generate a 2 D image.


Other similar mechanical arrangements use a flat bed, with the sensor moving in two linear directions.
These types of mechanical digitizers sometimes are referred to as microdensitometers.

1.2.1.2 Image Acquisition Using Sensor Strips


A geometry that is used much more frequently than single sensors consists of an in-line
arrangement of sensors in the form of a sensor strip, as shown in the Fig 1.7. The strip provides
imaging elements in one direction. Motion perpendicular to the strip provides imaging in the
other direction. This is the type of arrangement used in most flatbed scanners. Sensing devices
with 4000 or more in-line sensors are possible. In-line sensors are used routinely in airborne
imaging applications, in which the imaging system is mounted on an aircraft that flies at a
constant altitude and speed over the geographical area to be imaged. One-dimensional imaging
sensor strips that respond to various bands of the electromagnetic spectrum are mounted
perpendicular to the direction of flight. The imaging strip gives one line of an image at a time,
and the motion of the strip completes the other dimension of a two-dimensional image. Lenses or
other focusing schemes are used to project the area to be scanned onto the sensors. Sensor strips
mounted in a ring configuration are used in medical and industrial imaging to obtain cross-
sectional (―slice‖) images of 3-D objects.

Fig. 1.7: Combining a line sensor and circular strip with a motion to generate a 2 D image.
1.2.1.3 Image Acquisition Using Sensor Arrays
The individual sensors can also be arranged in the form of a 2-D array. Numerous electromagnetic and some ultrasonic sensing devices frequently are arranged in an array format. This is also the predominant arrangement found in digital cameras. A typical sensor for these cameras is a CCD array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000 x 4000 elements or more. CCD sensors are used widely in digital cameras and other light-sensing instruments. The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor, a property that is used in astronomical and other applications requiring low-noise images. Noise reduction is achieved by letting the sensor integrate the input light signal over minutes or even hours. Since the sensor array is two-dimensional, its key advantage is that a complete image can be obtained by focusing the energy pattern onto the surface of the array. Motion obviously is not necessary, as it is with the single-sensor and sensor-strip arrangements.
Fig. 1.8 shows the energy from an illumination source being reflected from a scene
element, but, as mentioned at the beginning of this section, the energy also could be transmitted
through the scene elements. The first function performed by the imaging system is to collect the
incoming energy and focus it onto an image plane. If the illumination is light, the front end of the
imaging system is a lens, which projects the viewed scene onto the lens focal plane. The sensor
array, which is coincident with the focal plane, produces outputs proportional to the integral of
the light received at each sensor. Digital and analog circuitry sweep these outputs and convert
them to a video signal, which is then digitized by another section of the imaging system.

Fig. 1.8: Digital image acquisition process.


1.2.2 Image Sampling and Quantization


To create a digital image, we need to convert the continuous sensed data into digital form. This involves two processes: sampling and quantization. Consider a continuous image, f(x, y), that we want to convert to digital form. An image may be continuous with respect to the x- and y-coordinates, and also in amplitude. To convert it to digital form, we have to sample the function in both coordinates and in amplitude. Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called quantization.

Fig. 1.9: Process involved generation of digital image from continuous image.
The one-dimensional function shown in Fig. 1.9(b) is a plot of amplitude (gray-level) values of the continuous image along the line segment AB. The random variations are due to image noise. To sample this function, we take equally spaced samples along line AB. The location of each sample is given by a vertical tick mark in the bottom part of the figure. The samples are shown as small white squares superimposed on the function. The set of these discrete locations gives the sampled function. However, the values of the samples still span (vertically) a continuous range of gray-level values. In order to form a digital function, the gray-level values also must be converted (quantized) into discrete quantities. On the right side, a gray-level scale is divided into eight discrete levels, ranging from black to white. The vertical tick marks indicate the specific value assigned to each of the eight gray levels. The continuous gray levels are quantized simply by assigning one of the eight discrete gray levels to each sample. The assignment is made depending on the vertical proximity of a sample to a vertical tick mark. The digital samples resulting from both sampling and quantization constitute the digital image.
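The two operations can be sketched in a few lines of Python (a minimal illustration, assuming the continuous image is supplied as a function returning values in [0, 1]; this is not an algorithm fixed by the notes):

import numpy as np

def sample_and_quantize(f, width, height, levels=8):
    """Sample a continuous image f(x, y) on a regular grid, then quantize
    each amplitude to the nearest of `levels` discrete gray levels."""
    xs = np.linspace(0.0, 1.0, width)    # sampling: equally spaced coordinates
    ys = np.linspace(0.0, 1.0, height)
    samples = np.array([[f(x, y) for x in xs] for y in ys])
    quantized = np.round(samples * (levels - 1))   # quantization: nearest level
    return quantized.astype(np.uint8)

# Example: a smooth horizontal ramp reduced to 8 discrete gray levels
digital = sample_and_quantize(lambda x, y: x, width=16, height=4)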
1.2.3 Some Basic Relationships between Pixels
In this section, we consider several important relationships between pixels in a digital image. As mentioned before, an image is denoted by f(x, y). When referring to a particular pixel in this section, we use lowercase letters, such as p and q.
1.2.3.1 Neighbors of a Pixel
A pixel p at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are
given by (x+1, y), (x-1, y), (x, y+1), (x, y-1)
This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is a unit distance
from (x, y), and some of the neighbors of p lie outside the digital image if (x, y) is on the border
of the image.
The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) and are denoted by ND(p). These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p). As before, some of the points in ND(p) and N8(p) fall outside the image if (x, y) is on the border of the image.
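These neighborhoods translate directly into code (a minimal Python sketch; border pixels simply yield coordinates that fall outside the image, as noted above):

def n4(p):
    """4-neighbors of pixel p = (x, y)."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbors of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbors of p: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)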

1.2.3.2 Adjacency, Connectivity, Regions, and Boundaries


Connectivity between pixels is a fundamental concept that simplifies the definition of numerous
digital image concepts, such as regions and boundaries. To establish if two pixels are connected,
it must be determined if they are neighbors and if their gray levels satisfy a specified criterion of
similarity (say, if their gray levels are equal). For instance, in a binary image with values 0 and 1, two pixels may be 4-neighbors, but they are said to be connected only if they have the same value.
Let V be the set of gray-level values used to define adjacency. In a binary image, V={1}
if we are referring to adjacency of pixels with value 1. In a grayscale image, the idea is the same,
but set V typically contains more elements. For example, in the adjacency of pixels with a range
of possible gray-level values 0 to 255, set V could be any subset of these 256 values. We
consider three types of adjacency:
(a) 4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set
N4(p).


(b) 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the set
N8(p).
(c) m-adjacency (mixed adjacency). Two pixels p and q with values from V are m-adjacent if
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
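The three definitions can be checked mechanically. The sketch below reuses n4, nd, and n8 from the previous sketch and assumes the image is given as a dict mapping (x, y) coordinates to gray values (an illustrative convention, not one fixed by the notes):

def value_in_V(img, p, V):
    return img.get(p) in V

def is_4_adjacent(img, p, q, V):
    return value_in_V(img, p, V) and value_in_V(img, q, V) and q in n4(p)

def is_8_adjacent(img, p, q, V):
    return value_in_V(img, p, V) and value_in_V(img, q, V) and q in n8(p)

def is_m_adjacent(img, p, q, V):
    """m-adjacency: 4-adjacent, or diagonally adjacent with no common
    4-neighbor whose value is in V (removes ambiguous 8-adjacent paths)."""
    if not (value_in_V(img, p, V) and value_in_V(img, q, V)):
        return False
    if q in n4(p):
        return True
    return q in nd(p) and not any(value_in_V(img, r, V) for r in n4(p) & n4(q))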

1.2.4 Linear and Nonlinear Operations


Let H be an operator whose input and output are images. H is said to be a linear operator if, for
any two images f and g and any two scalars a and b,
H(af + bg) = aH(f) + bH(g).
In other words, the result of applying a linear operator to the sum of two images (that have been
multiplied by the constants shown) is identical to applying the operator to the images
individually, multiplying the results by the appropriate constants, and then adding those results.
For example, an operator whose function is to compute the sum of K images is a linear operator.
An operator that computes the absolute value of the difference of two images is not.
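Both examples can be verified numerically against the definition; a minimal NumPy check (illustrative only, with random arrays standing in for images):

import numpy as np

rng = np.random.default_rng(0)
f1, g1 = rng.random((4, 4)), rng.random((4, 4))
f2, g2 = rng.random((4, 4)), rng.random((4, 4))
a, b = 2.0, -3.0

S = lambda f, g: f + g            # sum of two images: a linear operator
D = lambda f, g: np.abs(f - g)    # absolute difference: nonlinear

# H(a*x + b*y) == a*H(x) + b*H(y) holds for S ...
print(np.allclose(S(a*f1 + b*f2, a*g1 + b*g2), a*S(f1, g1) + b*S(f2, g2)))  # True
# ... but fails for D (almost surely, for random inputs)
print(np.allclose(D(a*f1 + b*f2, a*g1 + b*g2), a*D(f1, g1) + b*D(f2, g2)))  # False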
Linear operations are exceptionally important in image processing because they are based on a
significant body of well-understood theoretical and practical results. Although nonlinear
operations sometimes offer better performance, they are not always predictable, and for the most
part are not well understood theoretically.
1.2.5 Applications of Digital Image Processing

Visual information is the most important type of information perceived, processed and
interpreted by the human brain. One third of the cortical area of the human brain is dedicated to
visual information processing.
Digital image processing, as a computer-based technology, carries out automatic
processing, manipulation and interpretation of such visual information, and it plays an
increasingly important role in many aspects of our daily life, as well as in a wide variety of
disciplines and fields in science and technology, with applications such as television,
photography, robotics, remote sensing, medical diagnosis and industrial inspection.


• Computerized photography (e.g., Photoshop)
• Space image processing (e.g., Hubble Space Telescope images, interplanetary probe images)
• Medical/biological image processing (e.g., interpretation of X-ray images, blood/cellular microscope images)
• Automatic character recognition (zip code, license plate recognition)
• Fingerprint/face/iris recognition
• Remote sensing: aerial and satellite image interpretation
• Industrial applications (e.g., product inspection/sorting)
1.2.6 Outcomes
Provides a general introduction to the basics of digital image processing.
Understand the concept of the visual perception mechanism.
Understand the concepts of sampling and quantization.
Understand the basic relationships between pixels and the principal mathematical tools introduced.
1.2.7 Recommended Questions

1. What is digital image processing? Explain the fundamental steps in digital image processing.
2. Briefly explain the components of an image processing system.
3. How is an image formed in the eye? Explain with examples why perceived brightness is not a simple function of intensity.
4. Explain the importance of brightness adaptation and discrimination in image processing.
5. Define spatial and gray-level resolution. Briefly discuss the effects resulting from a reduction in the number of pixels and gray levels.
6. What are the elements of visual perception?
7. Explain the concepts of sampling and quantization of an image.
8. Explain i) false contouring ii) checkerboard pattern.
9. How is an image acquired using a single sensor? Discuss.
10. Explain zooming and shrinking of digital images.
11. Define 4-adjacency, 8-adjacency and m-adjacency.
12. Explain the relationships between pixels and the image operations on a pixel basis.
13. With a suitable diagram, explain how an image is acquired using a circular sensor strip.
14. Explain linear and nonlinear operations.
15. Define an image and briefly explain the applications of image processing.

1.2.8 Further Readings

1. Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing", 3rd edition, Pearson Education.
2. Anil K. Jain, "Fundamentals of Digital Image Processing", Pearson Education, 2004.
3. S. Jayaraman, S. Esakkirajan, T. Veerakumar, "Digital Image Processing", Tata McGraw Hill, 2014.

