Frequency Domain Analysis
So far, every domain in which we have analyzed a signal has been defined with respect to time. In the frequency domain, however, we analyze a signal with respect to frequency rather than time.
Difference between spatial domain and frequency domain
In the spatial domain, we deal with the image as it is: the values of the pixels change with respect to the scene. In the frequency domain, by contrast, we deal with the rate at which the pixel values change in the spatial domain.
For simplicity, let's put it this way.
Spatial domain
In the simple spatial domain, we deal directly with the image matrix. In the frequency domain, we deal with an image as follows.
Frequency Domain
We first transform the image into its frequency distribution. A black-box system then performs whatever processing it has to perform; its output in this case is not an image, but a transform. After applying the inverse transformation, the result is converted back into an image, which is then viewed in the spatial domain.
It can be pictured as: input image → Fourier transform → black-box processing → inverse transform → output image.
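This round trip through the frequency domain can be sketched with a naive one-dimensional DFT in pure Python (a real implementation would use an FFT, and a 2-D version for images; the 1-D case stands in for simplicity):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: spatial -> frequency domain."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT (with 1/N normalisation): frequency -> spatial domain."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

signal = [1.0, 2.0, 3.0, 4.0]        # stands in for one row of an image
spectrum = dft(signal)               # forward transform
# ... the "black box" would modify `spectrum` here ...
restored = idft(spectrum)            # inverse transform
print([round(v.real, 6) for v in restored])   # [1.0, 2.0, 3.0, 4.0]
```

With no processing in between, the inverse transform recovers the original signal exactly (up to floating-point error).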
Here we have used the word transformation. What does it actually mean?
Transformation
A signal can be converted from the time domain into the frequency domain using mathematical operators called transforms. There are many kinds of transforms that do this; some of them are given below.
● Fourier Series
● Fourier transformation
● Laplace transform
● Z transform
Frequency components
Any image in the spatial domain can be represented in a frequency domain. But what do these frequencies actually mean?
Fourier
Fourier was a French mathematician who, in 1822, introduced the Fourier series. The Fourier series and the Fourier transform are used to convert a signal into the frequency domain.
Fourier Series
The Fourier series simply states that a periodic signal can be represented as a sum of sines and cosines, each multiplied by a certain weight. In other words, a periodic signal can be broken down into simpler sinusoidal components.
How it is calculated
As we have seen, in order to process an image in the frequency domain we first need to convert it into the frequency domain, and we have to take the inverse of the output to convert it back into the spatial domain. That is why both the Fourier series and the Fourier transform have two formulas: one for the conversion and one for converting back to the spatial domain.
Fourier series
The Fourier series can be denoted by this formula.
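A standard trigonometric form of the series, for a periodic signal f(t) with period T (fundamental frequency ω₀ = 2π/T), is:

```latex
f(t) = a_0 + \sum_{n=1}^{\infty}\bigl[a_n \cos(n\omega_0 t) + b_n \sin(n\omega_0 t)\bigr]
```

where the weights are given by

```latex
a_0 = \frac{1}{T}\int_{0}^{T} f(t)\,dt,\quad
a_n = \frac{2}{T}\int_{0}^{T} f(t)\cos(n\omega_0 t)\,dt,\quad
b_n = \frac{2}{T}\int_{0}^{T} f(t)\sin(n\omega_0 t)\,dt
```

The first formula builds the signal from the weights (synthesis); the second set recovers the weights from the signal (analysis).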
Fourier transform
The Fourier transform states that a non-periodic signal whose area under the curve is finite can also be represented as an integral of sines and cosines, each multiplied by a certain weight.
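In one common convention, the transform pair (forward and inverse) is:

```latex
F(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt,
\qquad
f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,e^{j\omega t}\,d\omega
```

(Other texts place the 1/2π factor differently; only the product of the two normalisations is fixed.)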
The Fourier transform has many applications, including image compression (e.g. JPEG compression), filtering, and image analysis.
Difference between Fourier series and transform
Although both the Fourier series and the Fourier transform were given by Fourier, the difference between them is that the Fourier series is applied to periodic signals while the Fourier transform is applied to non-periodic signals.
Which one is applied on images
Now the question is which of the two is applied to images: the Fourier series or the Fourier transform. The answer lies in what images are. Images are non-periodic, and since they are non-periodic, the Fourier transform is used to convert them into the frequency domain.
Discrete fourier transform
Since we are dealing with images, and in fact digital images, we will be working with the discrete Fourier transform (DFT). Each sinusoidal component of the DFT is described by three attributes:
● Spatial Frequency
● Magnitude
● Phase
The spatial frequency directly relates with the brightness of the image. The magnitude of the
sinusoid directly relates with the contrast. Contrast is the difference between maximum and
minimum pixel intensity. Phase contains the color information.
For a square image of size N×N, the two-dimensional DFT is given by:
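In the notation used below (f(a,b) the spatial image, F(k,l) a point in Fourier space), a common form is:

```latex
F(k,l) = \sum_{a=0}^{N-1}\sum_{b=0}^{N-1} f(a,b)\,
e^{-j2\pi\left(\frac{ka}{N}+\frac{lb}{N}\right)}
```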
where f(a,b) is the image in the spatial domain and the exponential term is the basis
function corresponding to each point F(k,l) in the Fourier space. The equation can be
interpreted as: the value of each point F(k,l) is obtained by multiplying the spatial
image with the corresponding base function and summing the result.
The basis functions are sine and cosine waves with increasing
frequencies, i.e. F(0,0) represents the DC-component of the image which corresponds
to the average brightness and F(N-1,N-1) represents the highest frequency.
In a similar way, the Fourier image can be re-transformed to the spatial domain. The
inverse Fourier transform is given by:
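A common form of the inverse, with the 1/N² normalisation placed on the inverse transform, is:

```latex
f(a,b) = \frac{1}{N^2}\sum_{k=0}^{N-1}\sum_{l=0}^{N-1} F(k,l)\,
e^{\,j2\pi\left(\frac{ka}{N}+\frac{lb}{N}\right)}
```

(The placement of the 1/N² factor is a convention; some texts split it as 1/N on each direction.)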
In most implementations the Fourier image is shifted in such a way that the DC-value
(i.e. the image mean) F(0,0) is displayed in the center of the image. The further away
from the center an image point is, the higher is its corresponding frequency.
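Centering the spectrum amounts to a circular shift by half the image size in each dimension (what NumPy calls fftshift). A minimal pure-Python sketch on a hypothetical 4×4 magnitude matrix:

```python
def shift_dc_to_center(F):
    """Swap quadrants so F[0][0] (the DC term) moves to the center."""
    N = len(F)
    h = N // 2
    return [[F[(r + h) % N][(c + h) % N] for c in range(N)] for r in range(N)]

# made-up magnitudes: DC term (10) in the top-left corner, as the DFT produces it
F = [[10, 1, 2, 1],
     [ 1, 0, 0, 0],
     [ 2, 0, 5, 0],
     [ 1, 0, 0, 0]]
shifted = shift_dc_to_center(F)
print(shifted[2][2])   # 10 -- the DC value now sits at the center
```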
We can see that the DC-value is by far the largest component of the image. However,
the dynamic range of the Fourier coefficients (i.e. the intensity values in the Fourier
image) is too large to be displayed on the screen, therefore all other values appear as
black. If we apply a logarithmic transformation to the image we obtain
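The display transform is typically s = c·log(1 + |F|), with c chosen so the largest coefficient maps to the top of the display range. A small sketch with made-up magnitude values:

```python
import math

def log_scale(magnitudes, out_max=255.0):
    """Compress dynamic range with s = c * log(1 + |F|), mapping the peak to out_max."""
    peak = max(magnitudes)
    c = out_max / math.log(1 + peak)
    return [c * math.log(1 + m) for m in magnitudes]

mags = [1e6, 1000.0, 10.0, 1.0]   # the huge DC term dwarfs everything else
scaled = log_scale(mags)
# after scaling, the smaller coefficients become visible alongside the DC term
print([round(s, 1) for s in scaled])
```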
The result shows that the image contains components of all frequencies, but that their
magnitude gets smaller for higher frequencies. Hence, low frequencies contain more
image information than the higher ones. The transformed image also tells us that there
are two dominating directions in the Fourier image, one passing vertically and one
horizontally through the center. These originate from the regular patterns in the
background of the original image.
The value of each point determines the phase of the corresponding frequency. As in
the magnitude image, we can identify the vertical and horizontal lines corresponding
to the patterns in the original image. The phase image does not yield much new
information about the structure of the spatial domain image; therefore, in the
following examples, we will restrict ourselves to displaying only the magnitude of the
Fourier Transform.
Before we leave the phase image entirely, however, note that if we apply the inverse
Fourier Transform to the above magnitude image while ignoring the phase (and
then histogram equalize the output) we obtain
Although this image contains the same frequencies (and the same amount of each frequency) as the original input image, it is corrupted beyond recognition. This shows that the phase information is crucial for reconstructing the correct image in the spatial domain.
Convolution Theorem
Now, what is the relationship between the image (spatial) domain and the frequency domain? This relationship can be established by a theorem called the convolution theorem.
The convolution theorem can be represented as:

f(x, y) * h(x, y) ⟷ F(u, v) · H(u, v)

where * denotes convolution in the spatial domain and · denotes point-wise multiplication in the frequency domain.
That is, convolution in the spatial domain is equal to filtering (multiplication) in the frequency domain, and vice versa.
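The theorem is easy to verify numerically in one dimension: the DFT of a circular convolution equals the element-wise product of the DFTs (a pure-Python sketch):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circular_convolve(f, g):
    """Circular (periodic) convolution of two equal-length sequences."""
    N = len(f)
    return [sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)]

f = [1.0, 2.0, 0.0, 1.0]
g = [0.5, 0.25, 0.0, 0.25]
lhs = dft(circular_convolve(f, g))                # DFT of the convolution
rhs = [a * b for a, b in zip(dft(f), dft(g))]     # product of the DFTs
print(all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs)))   # True
```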
The filtering in frequency domain can be represented as following:
● First, we do some pre-processing of the image in the spatial domain, e.g. increasing its contrast or brightness.
● Then we take the discrete Fourier transform of the image.
● Then we center the discrete Fourier transform, i.e. we shift it so that the low frequencies move from the corners to the center.
● Then we apply the filtering, i.e. we multiply the Fourier transform by a filter function.
● Then we shift the DFT back from the center to the corners.
● Then we take the inverse discrete Fourier transform, to bring the result back from the frequency domain to the spatial domain.
● Finally, an optional post-processing step, just like the pre-processing, improves the appearance of the image.
Filters
The concept of a filter in the frequency domain is the same as the concept of a mask in convolution.
After converting an image to the frequency domain, filters are applied in a filtering process to perform different kinds of processing on the image, such as blurring or sharpening.
The common types of filters for these purposes are blurring masks and derivative masks.
Derivative masks
A derivative mask responds to abrupt changes in pixel intensity, so it emphasizes edges.
The relationship of the blurring mask and the derivative mask with a high pass filter and a low pass filter can be defined simply: a blurring mask is a low pass filter, and a derivative mask is a high pass filter.
When 1 is placed inside the passband and 0 outside, we get a blurred image; as we increase the size of the region of 1s, the blurring increases and the edge content is reduced. When 0 is placed inside instead, we get the edges, which gives us a sketched image. An ideal low pass filter in the frequency domain is given below.
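With cutoff frequency D0, the ideal low pass filter is usually written as:

```latex
H(u,v) =
\begin{cases}
1, & D(u,v) \le D_0 \\
0, & D(u,v) > D_0
\end{cases}
```

where D(u,v) is the distance of the point (u,v) from the center of the (shifted) frequency plane.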
Resultant Image
In the same way, an ideal high pass filter can be applied to an image, but obviously the results will differ: the low pass filter reduces the edge content, while the high pass filter increases it.
This is the representation of the ideal low pass filter. At the exact cutoff point D0, the filter jumps abruptly between 1 and 0, and it is this discontinuity that produces the ringing effect at that point.
So, in order to reduce the ringing effect that appears with the ideal low pass and ideal high pass filters, the following Gaussian low pass and Gaussian high pass filters were introduced.
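With D(u,v) the distance from the center of the frequency plane and D0 the cutoff, the Gaussian pair is:

```latex
H_{\mathrm{lp}}(u,v) = e^{-D^2(u,v)/2D_0^2},
\qquad
H_{\mathrm{hp}}(u,v) = 1 - e^{-D^2(u,v)/2D_0^2}
```

The smooth roll-off has no sharp discontinuity, so no ringing is introduced.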
Homomorphic filter
Homomorphic filtering is a frequency-domain technique that aims at a
simultaneous increase in contrast and dynamic range compression. It is mainly
utilized for non-uniformly illuminated images in medical, sonar images etc. It is
used for edge enhancement that makes the image details clear to the observer. In
Homomorphic filtering, the illumination and reflectance components are processed
using a discrete Fourier Transform.
Homomorphic filtering can be used to improve the appearance of a grayscale image by simultaneous intensity-range compression (illumination) and contrast enhancement (reflectance). An image m(x, y) can be expressed as the product of illumination and reflectance components:

m(x, y) = i(x, y) · r(x, y)

Where,
m = image,
i = illumination,
r = reflectance
We have to transform this equation into the frequency domain in order to apply the high pass filter. The equation above cannot be used directly to operate separately on the frequency components of illumination and reflectance, because the Fourier transform of a product of two functions is not separable. Therefore, we take the logarithm, which turns the product into a sum.
Then, we apply the Fourier transformation.
Next, we apply a high-pass filter to the image. To make the illumination of the image more even, the high-frequency components are increased and the low-frequency components are decreased.
Where
H = any high-pass filter
N = filtered image in frequency domain
Afterward, we return from the frequency domain to the spatial domain by using the inverse Fourier transform. Finally, we use the exponential function to eliminate the log we applied at the beginning and obtain the enhanced image.
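Using m for the image, i for illumination, r for reflectance, H for the high-pass filter, and N for the filtered spectrum (as defined in the text), the whole chain reads:

```latex
\begin{aligned}
m(x,y) &= i(x,y)\,r(x,y) \\
z(x,y) &= \ln m(x,y) = \ln i(x,y) + \ln r(x,y) \\
Z(u,v) &= \mathcal{F}\{z(x,y)\} \\
N(u,v) &= H(u,v)\,Z(u,v) \\
s(x,y) &= \mathcal{F}^{-1}\{N(u,v)\} \\
m'(x,y) &= e^{\,s(x,y)}
\end{aligned}
```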
The key to the approach is the separation of the illumination and reflectance components that is thus achieved.
The homomorphic filter function H (u, v) can then operate on these components separately.
The illumination component of an image generally is characterized by slow spatial variations, while
the reflectance component tends to vary abruptly, particularly at the junctions of dissimilar objects.
These characteristics lead to associating the low frequencies of the Fourier transform of the
logarithm of an image with illumination and the high frequencies with reflectance. Although these
associations are rough approximations, they can be used to advantage in image enhancement.
The following figures show the results of applying the homomorphic filter, the high-pass filter, and both the homomorphic and high-pass filters together. All figures were produced using Matlab.
Figure 1: Original image: trees.tif
According to figures one to four, we can see how homomorphic filtering is used to correct non-uniform illumination in the image, making it clearer than the original. On the other hand, if we apply the high pass filter to the homomorphic-filtered image, the edges of the image become sharper and the other areas become dimmer. This result is similar to applying only a high-pass filter to the original image.