
IMAGE PROCESSING

ANITHA.SAYIMPU NAVYA.TALLURI
Sayimpu99@gmail.com talluri.navya9@gmail.com
2nd B.TECH, CSE 2nd B.TECH, CSE
V.R.SIDHARTHA COLLEGE OF ENGG., V.R.SIDHARTHA COLLEGE OF ENGG.,
VIJAYAWADA. VIJAYAWADA.

ABSTRACT

In the present world, computer graphics plays an important role. The areas where we use computer graphics include entertainment, presentations, education and training, visualization, design, image processing, and graphical user interfaces. Among these, image processing has its own importance. Image processing deals with improving the clarity of an image and with manipulating the image, which is a very important application of computer graphics. In image processing we perform operations on an image. This paper concentrates on what an image is, how processing takes place, and the digital image. It also deals with the characteristics of image operations (types of operations and types of neighborhood), video parameters, statistics of images, and contour representations such as the chain code, crack code, and run code. The paper also deals with the noise that contaminates images acquired from modern sensors, and with one of the main applications of image processing, cameras.


INTRODUCTION:-
Modern digital technology has made it
possible to manipulate multi-dimensional signals
with systems that range from simple digital
circuits to advanced parallel computers. The goal
of this manipulation can be divided into three
categories:

Image Processing: image in -> image out
Image Analysis: image in -> measurements out
Image Understanding: image in -> high-level description out
An image defined in the real world is
considered to be a function of two real
variables, for example, a(x,y) with a as the
amplitude (e.g. brightness) of the image at the
real coordinate position (x,y). An image may
be considered to contain sub-images
sometimes referred to as regions of
interest(ROI) or simply regions. This concept
reflects the fact that images frequently contain
collections of objects each of which can be the
basis for a region. In a sophisticated image
processing system it should be possible to
apply specific image processing operations to
selected regions. Thus one part of an image
(region) might be processed to suppress motion
blur while another part might be
processed to improve color rendition.

The amplitudes of a given image will
almost always be either real numbers or integer
numbers. The latter is usually a result of a
quantization process that converts a continuous
range to a discrete number of levels. In certain
image-forming processes, however, the signal
may involve photon counting which implies that
the amplitude would be inherently quantized. In
other image forming procedures, such as
magnetic resonance imaging, the direct physical
measurement yields a complex number in the
form of a real magnitude and a real phase.







DIGITAL IMAGE:-

A digital image a[m,n] described in a
2D discrete space is derived from an analog
image a(x,y) in a 2D continuous space through
a sampling process that is frequently referred
to as digitization.

The 2D continuous image a(x,y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m,n], with m = {0, 1, 2, ..., M-1} and n = {0, 1, 2, ..., N-1}, is a[m,n]. In fact, in most cases a(x,y), which we might consider to be the physical signal that impinges on the face of a 2D sensor, is actually a function of many variables including depth (z), color (λ), and time (t).


Digitization of a continuous image.
In the figure above, the pixel at coordinates [m=10, n=3] has the highest brightness value.
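As an illustrative sketch only (Python with NumPy; the test function and grid size are hypothetical, not from any source), the sampling and quantization steps above can be written as:

import numpy as np

def continuous_image(x, y):
    # Hypothetical analog brightness a(x, y) on the unit square.
    return 0.5 + 0.5 * np.cos(2 * np.pi * x) * np.sin(2 * np.pi * y)

# Sample a(x, y) on an N-row by M-column grid to obtain a[m, n],
# then quantize the real amplitudes to 256 integer brightness levels.
N, M = 16, 16
n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
a = continuous_image(m / M, n / N)
a_quantized = np.round(255 * a).astype(np.uint8)
print(a_quantized.shape, a_quantized.dtype)   # (16, 16) uint8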

CHARACTERISTICS OF
IMAGE OPERATIONS:-
There are a variety of ways to classify and
characterize image operations. The reason for
doing so is to understand what type of results we
might expect to achieve with a given type of
operation or what might be the computational
burden associated with a given operation.


Types of operations:-
The types of operations that can be
applied to digital images to transform an input
image a[m,n] into an output image b[m,n] (or
another representation) can be classified into
three categories.
Point: the output value at a specific coordinate depends only on the input value at that same coordinate.
Local: the output value at a specific coordinate depends on the input values in the neighborhood of that same coordinate.
Global: the output value at a specific coordinate depends on all the values in the input image.
Note: Complexity is specified in operations
per pixel.
Illustration of various types of
image operations
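A minimal Python/NumPy sketch of the three classes, assuming an 8-bit grayscale array img (all function names and parameter values here are illustrative):

import numpy as np

def point_op(img, offset=32):
    # Point: output at [m, n] depends only on input at [m, n]
    # (here, a brightness shift with clipping to the 8-bit range).
    return np.clip(img.astype(np.int32) + offset, 0, 255).astype(np.uint8)

def local_op(img):
    # Local: output at [m, n] depends on a 3x3 neighborhood
    # (a mean filter; the one-pixel border is left unchanged for brevity).
    rows, cols = img.shape
    acc = np.zeros((rows - 2, cols - 2))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += img[1 + di:rows - 1 + di, 1 + dj:cols - 1 + dj]
    out = img.astype(np.float64).copy()
    out[1:-1, 1:-1] = acc / 9.0
    return out.astype(np.uint8)

def global_op(img):
    # Global: output at [m, n] depends on every input pixel
    # (contrast stretching uses the global minimum and maximum).
    lo, hi = int(img.min()), int(img.max())
    return ((img.astype(np.float64) - lo) * 255.0 / max(hi - lo, 1)).astype(np.uint8)

Note how the cost of the local operation grows with the neighborhood size, while the point operation is constant per pixel, matching the complexity note above.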
Types of neighborhoods:-
Neighborhood operations play a key
role in modern digital image processing. It is
therefore important to understand how images
can be sampled and how that relates to the
various neighborhoods that can be used to
process an image.
Rectangular sampling: In most cases, images are sampled by laying a rectangular grid over an image, as illustrated in the figure above. This results in the type of sampling shown in the figure below.
Hexagonal sampling: An alternative sampling scheme, shown in the figure below, is termed hexagonal sampling. Both sampling schemes have been studied extensively and both represent a possible periodic tiling of the continuous image space.

We will restrict our attention, however, to rectangular sampling, as it remains, due to hardware and software considerations, the method of choice. Some of the most common neighborhoods are the 4-connected and 8-connected neighborhoods in the case of rectangular sampling, and the 6-connected neighborhood in the case of hexagonal sampling, as illustrated in the figures below.

Figure (a): rectangular sampling, 4-connected. Figure (b): rectangular sampling, 8-connected. Figure (c): hexagonal sampling, 6-connected.
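These neighborhoods can be expressed as coordinate offsets; the following sketch (Python; the (row, column) offset convention is an assumption for illustration) enumerates the in-bounds 4- or 8-connected neighbors of a pixel:

# 4- and 8-connected neighborhood offsets as (row, column) displacements.
N4 = [(-1, 0), (0, -1), (0, 1), (1, 0)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def neighbors(m, n, shape, offsets):
    # Yield the in-bounds neighbor coordinates of pixel [m, n].
    rows, cols = shape
    for dm, dn in offsets:
        if 0 <= m + dm < rows and 0 <= n + dn < cols:
            yield (m + dm, n + dn)

print(list(neighbors(0, 0, (4, 4), N8)))   # a corner pixel has 3 neighbors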
VIDEO PARAMETERS:-
We do not propose to describe the processing of dynamically changing images in this introduction. It is appropriate, however, given that many static images are derived from video cameras and frame grabbers, to mention the standards associated with the three video schemes currently in worldwide use: NTSC, PAL, and SECAM.
In an interlaced image the odd-numbered lines (1, 3, 5, ...) are scanned in half of the allotted time (e.g. 20 ms in PAL) and the even-numbered lines (2, 4, 6, ...) are scanned in the remaining half. The image
display must be coordinated with this
scanning format. The reason for interlacing
the scan lines of a video image is to reduce
the perception of flicker in a displayed
image. If one is planning to use images that
have been scanned from an interlaced video
source, it is important to know if the two
half-images have been appropriately
shuffled by the digitization hardware or if
that should be implemented in software.
Further, the analysis of moving objects
requires special care with interlaced video
to avoid zigzag edges.
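If the two half-images arrive as separate fields and must be combined in software, a minimal "weave" sketch (Python with NumPy; the field shapes and 1-based line convention are assumptions for illustration) looks like this:

import numpy as np

def weave(odd_field, even_field):
    # odd_field holds lines 1, 3, 5, ... and even_field lines 2, 4, 6, ...
    # (1-based, as in the video standards); both have shape (N/2, M).
    rows, cols = odd_field.shape
    frame = np.empty((2 * rows, cols), dtype=odd_field.dtype)
    frame[0::2, :] = odd_field    # 1-based odd lines are 0-based even rows
    frame[1::2, :] = even_field
    return frame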
The number of rows (N) from a
video source generally corresponds one-to-one with lines in the video image. The
number of columns, however, depends on
the nature of the electronics that is used to
digitize the image. Different frame grabbers
for the same video camera might produce M
= 384, 512, or 768 columns (pixels) per
line.

IMPORTANCE OF PHASE
AND MAGNITUDE:-
Both the magnitude and the phase
functions are necessary for the complete
reconstruction of an image from its Fourier
transform. Figure (a) shows what happens when Figure (1) is restored solely on the basis of the magnitude information, and Figure (b) shows what happens when Figure (1) is restored solely on the basis of the phase information.
Figure (1)

Figure(a) Figure(b)
Neither the magnitude information nor the phase information alone is sufficient to restore the image. The magnitude-only image (Figure a) is unrecognizable and has severe dynamic range problems. The phase-only image (Figure b) is barely recognizable, that is, severely degraded in quality.
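This experiment is easy to reproduce; a minimal sketch using NumPy's FFT (the grayscale array img is assumed):

import numpy as np

def mag_phase_reconstructions(img):
    # Forward transform, then split into magnitude and phase.
    F = np.fft.fft2(img.astype(np.float64))
    magnitude, phase = np.abs(F), np.angle(F)
    # Magnitude-only: discard the phase (set it to zero everywhere).
    mag_only = np.real(np.fft.ifft2(magnitude))
    # Phase-only: discard the magnitude (set it to one everywhere).
    phase_only = np.real(np.fft.ifft2(np.exp(1j * phase)))
    return mag_only, phase_only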

STATISTICS:-
In image processing it is quite
common to use simple statistical
descriptions of images and subimages. The
notion of a statistic is intimately connected
to the concept of a probability distribution,
generally the distribution of signal
amplitudes. For a given region which could
conceivably be an entire image we can
define the probability distribution function
of the brightness in that region and the
probability density function of the
brightness in that region. We will assume in
the discussion that follows that we are
dealing with a digitized image a[m,n].

Probability distribution function of the
brightness
The probability distribution function P(a) is the probability that a brightness chosen from the region is less than or equal to a given brightness value a. As a increases from -∞ to +∞, P(a) increases from 0 to 1. P(a) is monotonic, nondecreasing in a, and thus dP/da ≥ 0.
Probability density function of the
brightness
The probability that a brightness in a region falls between a and a + Δa, given the probability distribution function P(a), can be expressed as p(a) Δa, where p(a) is the probability density function:

p(a) = dP(a)/da
The brightness probability distribution
function for the image shown in Figure (1)
is shown in Figure (a). The (unnormalized) brightness histogram of Figure (1), which is proportional to the estimated brightness probability density function, is shown in Figure (b). The height in this histogram
corresponds to the number of pixels with a
given brightness.

Figure (a)
Figure (b)
Both the distribution function and the histogram as measured from a region are a statistical description of that region. It must be emphasized that both P[a] and p[a] should be viewed as estimates of the true distributions when they are computed from a specific region. That is, we view an image and a specific region as one realization of the various random processes involved in the formation of that image and that region.
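For a digitized 8-bit region, both estimates come directly from the brightness histogram; a minimal NumPy sketch (the array region is assumed):

import numpy as np

def brightness_statistics(region):
    # Histogram: counts of pixels at each brightness level 0..255.
    hist, _ = np.histogram(region, bins=256, range=(0, 256))
    p = hist / region.size       # estimated density function p(a)
    P = np.cumsum(p)             # estimated distribution function P(a)
    return P, p

Here P runs monotonically from about 0 up to 1 and p sums to 1, as required of the estimates described above.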

CONTOUR
REPRESENTATIONS:-
When dealing with a region or
object, several compact representations are
available that can facilitate manipulation of
and measurements on the object. In each
case we assume that we begin with an
image representation of the object. Several
techniques exist to represent the region or
object by describing its contour.
Chain code:
We follow the contour in a
clockwise manner and keep track of the
directions as we go from one contour pixel
to the next. For the standard implementation
of the chain code we consider a contour
pixel to be an object pixel that has a
background (non-object) pixel as one or
more of its 4-connected neighbors. The
codes associated with the eight possible
directions are the chain codes and, with x as
the current contour pixel position, the codes
are generally defined as:

              3  2  1
Chain codes = 4  x  0
              5  6  7


Region (shaded) as it is transformed from (a) continuous to (b) discrete form, and then considered as (c) a contour or (d) run lengths, illustrated in alternating colors.

Chain code properties
- Even codes {0,2,4,6} correspond to
horizontal and vertical directions; odd codes
{1,3,5,7} correspond to the diagonal
directions.
- Each code can be considered as the angular direction, in multiples of 45°, that we must move to go from one contour pixel to the next.
- The absolute coordinates [m,n] of the first
contour pixel together with the chain code
of the contour represent a complete
description of the discrete region contour.
- When there is a change between two
consecutive chain codes, then the contour
has changed direction. This point is defined
as a corner.
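A minimal sketch of the coding step, assuming the contour pixels have already been extracted and ordered clockwise (the DIRECTION table transcribes the 8-direction figure above into (row, column) steps, with code 0 pointing right, i.e. increasing n, and successive codes advancing in 45° steps):

DIRECTION = {
    (0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
    (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7,
}

def chain_code(contour):
    # contour: ordered list of [m, n] contour pixel coordinates.
    codes = []
    for (m0, n0), (m1, n1) in zip(contour, contour[1:]):
        codes.append(DIRECTION[(m1 - m0, n1 - n0)])
    return codes

# Example: steps right, right, down, down.
print(chain_code([(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]))  # [0, 0, 6, 6]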

Crack code:-
An alternative to the chain code for contour encoding is to use neither the contour pixels associated with the object nor the contour pixels associated with the background, but rather the line, the "crack", in between. The crack code can be viewed as a chain code with four possible directions instead of eight:

                 1
Crack codes = 2  x  0
                 3

(a) Object including part to be studied.
(b) Contour pixels as used in the chain code are
diagonally shaded. The crack is shown
with the thick black line.
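The crack code admits the same treatment with four directions; a minimal sketch (the direction table transcribes the figure above into (row, column) steps; the vertex list is assumed ordered):

# 0 right, 1 up, 2 left, 3 down, as (dm, dn) steps between crack vertices.
CRACK_DIRECTION = {(0, 1): 0, (-1, 0): 1, (0, -1): 2, (1, 0): 3}

def crack_code(vertices):
    return [CRACK_DIRECTION[(m1 - m0, n1 - n0)]
            for (m0, n0), (m1, n1) in zip(vertices, vertices[1:])]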




Run codes:-
A third representation is based on coding the consecutive pixels along a row (a run) that belong to an object, by giving the starting position and the ending position of the run. There are a number of alternatives for the precise definition of the positions; which alternative should be used depends upon the application.
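A minimal sketch of run coding a binary object image, using inclusive (row, start, end) triples as one of the possible position conventions mentioned above:

import numpy as np

def run_code(binary_img):
    runs = []
    for m, row in enumerate(np.asarray(binary_img, dtype=bool)):
        n = 0
        while n < row.size:
            if row[n]:
                start = n
                while n + 1 < row.size and row[n + 1]:
                    n += 1
                runs.append((m, start, n))   # inclusive start and end
            n += 1
    return runs

print(run_code([[0, 1, 1, 0, 1]]))   # [(0, 1, 2), (0, 4, 4)]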
NOISE
Images acquired through modern
sensors may be contaminated by a variety of
noise sources. By noise we refer to
stochastic variations as opposed to
deterministic distortions such as shading or
lack of focus. We will assume for this
section that we are dealing with images
formed from light using modern electro-
optics. In particular we will assume the use
of modern, charge-coupled device (CCD)
cameras where photons produce electrons
that are commonly referred to as
photoelectrons. Nevertheless, most of the
observations we shall make about noise and
its various sources hold equally well for
other imaging modalities.
While modern technology has made
it possible to reduce the noise levels
associated with various electro-optical
devices to almost negligible levels, one
noise source can never be eliminated and
thus forms the limiting case when all other
noise sources are eliminated.


PHOTON NOISE
When the physical signal that we
observe is based upon light, then the
quantum nature of light plays a significant
role. A single photon at λ = 500 nm carries an energy of E = hν = hc/λ = 3.97 × 10⁻¹⁹ joules. Modern CCD cameras are sensitive enough to be able to count individual photons.
The noise problem arises from the
fundamentally statistical nature of photon
production. We cannot assume that, in a
given pixel for two consecutive but
independent observation intervals of length
T, the same number of photons will be
counted. Photon production is governed by
the laws of quantum physics which restrict
us to talking about an average number of
photons within a given observation window.

THERMAL NOISE
An additional, stochastic source of
electrons in a CCD well is thermal energy.
Electrons can be freed from the CCD
material itself through thermal vibration and
then, trapped in the CCD well, be
indistinguishable from true
photoelectrons. By cooling the CCD chip it
is possible to reduce significantly the
number of thermal electrons that give rise
to thermal noise or dark current.

AMPLIFIER NOISE
The standard model for this type of
noise is additive, Gaussian, and independent
of the signal. In modern well-designed
electronics, amplifier noise is generally
negligible. The most common exception to this is in color cameras, where more amplification is used in the blue channel than in the green or red channels, leading to more noise in the blue channel.
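The three noise sources can be combined into a simple simulation; a minimal sketch under simplified assumptions (Python with NumPy; all parameter values illustrative): photon noise is Poisson in the expected photoelectron count, dark current adds a second Poisson term, and amplifier noise adds a zero-mean Gaussian term.

import numpy as np

def noisy_exposure(mean_photoelectrons, dark_electrons=5.0,
                   read_sigma=2.0, rng=None):
    # mean_photoelectrons: expected photoelectron count per pixel.
    rng = rng or np.random.default_rng()
    shot = rng.poisson(mean_photoelectrons)          # photon (shot) noise
    dark = rng.poisson(dark_electrons, shot.shape)   # thermal dark current
    read = rng.normal(0.0, read_sigma, shot.shape)   # amplifier (read) noise
    return shot + dark + read

signal = np.full((4, 4), 100.0)   # ideal image: 100 electrons per pixel
print(noisy_exposure(signal).round(1))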

PERCEPTION:-
Many image-processing applications are
intended to produce images that are to be
viewed by human observers (as opposed to, say, automated industrial inspection). It is therefore important to understand the characteristics and limitations of the human visual system, the "receiver" of the 2D signals. At the outset it is important to realize that:
1) the human visual system is not well understood,
2) no objective measure exists for judging the quality of an image that corresponds to human assessment of image quality, and
3) the "typical" human observer does not exist.
Nevertheless, research in perceptual psychology has provided some important insights into the visual system.

APPLICATIONS:-

CAMERAS:-
The cameras and recording media available
for modern digital image processing
applications are changing at a significant
pace.

Video cameras
Values of the shutter speed as low as 500 ns are available with commercially available CCD video cameras, although the more conventional speeds for video are 33.37 ms (NTSC) and 40.0 ms (PAL, SECAM). Values as high as 30 s may also be achieved with certain video cameras, although this means sacrificing a continuous stream of video images that contain signal in favor of a single integrated image amongst a stream of otherwise empty images. Subsequent digitizing hardware must be capable of handling this situation.

Scientific cameras
Again values as low as 500 ns are possible
and, with cooling techniques based on
Peltier-cooling or liquid nitrogen cooling,
integration times in excess of one hour are
readily achieved.

CONCLUSIONS
I conclude that image processing is one of the pioneering applications of computer graphics. Because of its importance, image processing is treated as a separate subject. Many new techniques have been developed, and are still being developed, to overcome the disturbances created by noise when images are acquired through modern sensors. In present technology, movies rely heavily on animation and graphics, and image processing plays a major role in animation. In the future, therefore, the importance of image processing will increase to a very large extent.

Image processing requires high-level involvement: an understanding of the system aspects of graphics software, a realistic feeling for graphics, and an appreciation of system capabilities and ease of use.

REFERENCES:-
Digital Image Processing, K. R. Castleman.
The Image Processing Handbook, J. C. Russ.
Digital Image Processing, R. C. Gonzalez and R. E. Woods.
