
CHAPTER 7

IMAGE COMPRESSION
Introduction and Overview
✓ The need to store and transmit images will continue to increase faster than the available capability to process all the data.

✓ Applications that require image compression are many and varied, such as:
1. Internet,
2. Businesses,
3. Multimedia,
4. Satellite imaging,
5. Medical imaging
✓ Compression algorithm development starts with applications to two-dimensional (2-D) still images

✓ After the 2-D methods are developed, they are often extended to video (motion imaging)

✓ However, we will focus on image compression of single frames of image data

✓ Image compression involves reducing the size of image data files, while retaining necessary information

✓ Retaining necessary information depends upon the application

✓ Image segmentation methods, which are primarily a data reduction process, can be used for compression

✓ The reduced file created by the compression process is called the compressed file and is used to reconstruct the image, resulting in the decompressed image

✓ The original image, before any compression is performed, is called the uncompressed image file

✓ The ratio of the size of the original, uncompressed image file to the size of the compressed file is referred to as the compression ratio


▪ The compression ratio is defined as:

  Compression Ratio = Uncompressed file size / Compressed file size = SIZE_U / SIZE_C

  It is often written as SIZE_U : SIZE_C
▪ Example 1: The original image is 256*256 pixels, single band (grayscale), 8 bits per pixel. The file is 65,536 bytes (64K). After compression, the image file is 6,554 bytes. The compression ratio is SIZE_U/SIZE_C = 65,536/6,554 = 9.999 ≈ 10. This can also be written as 10:1.

This is called a 10 to 1 compression, a 10 times compression, or can be stated as "compressing the image to 1/10 its original size." Another way to state the compression is to use the terminology of bits per pixel. For an N*N image:

  Bits per pixel = Number of bits / Number of pixels = (8 × Number of bytes) / (N*N)
▪ Example 2: Using the preceding example, with a compression ratio of 65,536/6,554, we want to express this as bits per pixel. This is done by first finding the number of pixels in the image: 256*256 = 65,536 pixels. We then find the number of bits in the compressed image file: (6,554 bytes)(8 bits/byte) = 52,432 bits. Now, we can find the bits per pixel by taking the ratio: 52,432/65,536 = 0.8 bits/pixel
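These two calculations are simple enough to verify directly. The sketch below (Python, using the file sizes from Examples 1 and 2) computes both the compression ratio and the bits per pixel:

# Minimal sketch of the compression-ratio and bits-per-pixel calculations,
# using the values from Examples 1 and 2 above.

def compression_ratio(uncompressed_bytes, compressed_bytes):
    # SIZE_U / SIZE_C
    return uncompressed_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes, width, height):
    # (8 * number of bytes) / (number of pixels)
    return (8 * compressed_bytes) / (width * height)

size_u = 256 * 256        # 65,536 bytes: 256x256 pixels, 8 bits (1 byte) per pixel
size_c = 6_554            # compressed file size in bytes

print(compression_ratio(size_u, size_c))   # ~10.0 -> "10:1 compression"
print(bits_per_pixel(size_c, 256, 256))    # ~0.8 bits per pixel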

▪ The reduction in file size is necessary to meet the bandwidth requirements for many transmission systems, and the storage requirements in computer databases

▪ Also, the amount of data required for digital images is enormous.
Example 3

✓ This number is based on the actual transmission rate being the maximum, which is typically not the case due to Internet traffic, overhead bits and transmission errors

✓ Additionally, considering that a web page might contain more than one of these images, the time it takes is simply too long

✓ For high quality images the required resolution can be much higher
Example 4:

The above example applies the maximum data rate to Example 3.
Example 5
✓ Now, consider the transmission of video images, where we need multiple frames per second
✓ If we consider just one second of video data that has been digitized at 640x480 pixels per frame, and requiring 15 frames per second for interlaced video, then:

✓ Example 6: To transmit one second of interlaced video that has been digitized at 640*480 pixels:
✓ Waiting 35 seconds for one second's worth of video is not exactly real time!
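As a rough sketch of the arithmetic behind Examples 5 and 6 (the slide's own link speed is not shown here, so the 8 bits per pixel and the 1 Mbps rate below are assumptions for illustration only):

# Rough sketch of the one-second-of-video estimate behind Examples 5 and 6.
# Assumptions not stated in the slide: 8 bits per pixel and an illustrative
# 1 Mbps connection; the slide's exact figures may differ slightly.

frames_per_second = 15          # interlaced video, as stated above
width, height = 640, 480
bits_per_pixel = 8              # assumption: single 8-bit band

bits_for_one_second = width * height * bits_per_pixel * frames_per_second
link_rate_bps = 1_000_000       # assumed 1 Mbps connection

print(bits_for_one_second)                  # 36,864,000 bits for one second of video
print(bits_for_one_second / link_rate_bps)  # ~37 s, the same order as the 35 s quoted above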
✓ Even attempting to transmit uncompressed video over the highest speed Internet connection is impractical
✓ Applications requiring high speed connections, such as high definition television, real-time teleconferencing, and transmission of multiband high resolution satellite images, lead us to the conclusion that image compression is not only desirable but necessary.

✓ Key to a successful compression scheme is retaining necessary information.

There are two primary types of image compression methods:

1. Lossless compression methods:

▪ Lossless data compression is used to compress files without losing any of the original file's quality or data. Simply, we can say that in lossless data compression, file size is reduced, but the quality of the data remains the same.

▪ The main advantage of lossless data compression is that we can restore the original data in its original form after decompression.

▪ Lossless data compression is mainly used for sensitive documents, confidential information, and file formats such as PNG, RAW, GIF, and BMP.

▪ Some of the most important lossless data compression techniques are:

1. Run Length Encoding (RLE)

2. Lempel-Ziv-Welch (LZW)

3. Huffman Coding

4. Arithmetic Coding
2. Lossy compression methods:
• Lossy data compression is used to compress larger files into smaller files.
• In this compression technique, some specific amount of data and quality is removed (lost) from the original file.
• The compressed file takes up less memory space than the original file due to the loss of original data and quality.
• This technique is generally useful when the quality of the data is not our first priority. It is used in the .jpeg format.
Some of the most important lossy data compression techniques are:
1. Transform Coding
2. Discrete Cosine Transform (DCT)
3. Discrete Wavelet Transform (DWT)
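To make the transform-coding idea concrete, the sketch below computes the 2-D DCT-II of a small block directly from its definition. This is only an illustrative implementation, not the optimized DCT used in real codecs such as JPEG:

import numpy as np

def dct2(block):
    # 2-D DCT-II (orthonormal), computed directly from its definition.
    # Practical codecs use fast transforms; this is only an illustrative sketch.
    n = block.shape[0]
    coeffs = np.zeros((n, n))
    for u in range(n):
        for v in range(n):
            alpha_u = np.sqrt(1 / n) if u == 0 else np.sqrt(2 / n)
            alpha_v = np.sqrt(1 / n) if v == 0 else np.sqrt(2 / n)
            total = 0.0
            for x in range(n):
                for y in range(n):
                    total += (block[x, y]
                              * np.cos((2 * x + 1) * u * np.pi / (2 * n))
                              * np.cos((2 * y + 1) * v * np.pi / (2 * n)))
            coeffs[u, v] = alpha_u * alpha_v * total
    return coeffs

# A smooth 8x8 block: most of its energy lands in the low-frequency coefficients,
# which is what lossy transform coding exploits when small coefficients are discarded.
block = np.tile(np.arange(8, dtype=float), (8, 1))
print(np.round(dct2(block), 2))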
▪ Compression algorithms are developed by taking advantage of the
redundancy that is inherent in image data
▪ Four primary types of redundancy that can be found in images are:
➢ Coding
➢ Interpixel
➢ Interband
➢ Psychovisual redundancy

Different from image quality, which often refers to the subjective preference for one image over another, image fidelity refers to the ability of a process to render an image accurately, without any visible distortion or information loss.
1. Coding redundancy
▪ Occurs when the data used to represent the image is not utilized in an
optimal manner.
▪ If the gray levels of an image are coded in a way that uses more code
symbols than absolutely necessary to represent each gray level then the
resulting image is said to contain coding redundancy.
2. Interpixel redundancy
▪ Occurs because adjacent pixels tend to be highly correlated; in most images the brightness levels do not change rapidly, but change gradually (see the sketch after this list).
3. Interband redundancy
▪ Occurs in color images due to the correlation between bands within an
image – if we extract the red, green and blue bands they look similar.
4. Psychovisual redundancy
▪ Some information is more important to the human visual system than
other types of information. Certain information simply has less relative
importance than other information in normal visual processing. It can be
eliminated without significantly impairing the quality of image
perception.
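As a quick illustration of interpixel redundancy (a minimal sketch using a synthetic smooth image, not data from the chapter), the correlation between horizontally adjacent pixels can be measured directly:

import numpy as np

# Minimal sketch: measure interpixel redundancy as the correlation between
# horizontally adjacent pixels. A synthetic smooth image stands in for real data.
img = np.add.outer(np.arange(64), np.arange(64)).astype(float)   # smooth gradient
img += np.random.default_rng(0).normal(0, 1, img.shape)          # small amount of noise

left = img[:, :-1].ravel()    # each pixel
right = img[:, 1:].ravel()    # its right-hand neighbour
print(np.corrcoef(left, right)[0, 1])   # close to 1.0 -> neighbours are highly correlated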
▪ The key in image compression algorithm development is to determine the minimal data required to retain the necessary information.

▪ The compression is achieved by taking advantage of the redundancy that exists in images.

▪ To help determine which information can be removed and which information is important, image fidelity criteria are used; these may be objective (for example, RMS error) or subjective (based on human perception).

▪ These measures provide metrics for determining image quality.

▪ It should be noted that the information required is application specific, and that, with lossless schemes, there is no need for a fidelity criterion.
Compression System Model
The compression system model consists of two parts:
1. The compressor
2. The decompressor
The compressor consists of a preprocessing stage and encoding stage,
whereas the decompressor consists of a decoding stage followed by a
postprocessing stage

Fig: Compression System Model


▪ Before encoding, preprocessing is performed to prepare the image for the encoding process, and consists of any number of operations that are application specific.

▪ After the compressed file has been decoded, postprocessing can be performed to eliminate some of the potentially undesirable artifacts brought about by the compression process.
Fig: The Compressor
Fig: The Decompressor
The compressor can be broken into the following stages:

▪ Data reduction: Image data can be reduced by gray level and/or spatial quantization, or can undergo any desired image improvement (for example, enhancement, noise removal) process.

▪ Mapping: Involves mapping the original image data into another mathematical space where it is easier to compress the data.

▪ Quantization: Involves taking potentially continuous data from the mapping stage and putting it in discrete form.

▪ Coding: Involves mapping the discrete data from the quantizer onto a code in an optimal manner.

▪ A compression algorithm may consist of all the stages, or it may consist of only one or two of the stages.
The decompressor can be broken down into the following stages:

▪ Decoding: Takes the compressed file and reverses the original coding by mapping the codes to the original, quantized values.

▪ Inverse mapping: Involves reversing the original mapping process.

▪ Postprocessing: Involves enhancing the look of the final image.

▪ This may be done to reverse any preprocessing, for example, enlarging an image that was shrunk in the data reduction process.

▪ In other cases the postprocessing may be used to simply enhance the image to ameliorate any artifacts from the compression process itself.
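A minimal toy sketch of the mapping, quantization, decoding and inverse-mapping stages described above; the specific choices here (difference mapping, a step-size-2 quantizer, and no explicit coding stage) are illustrative assumptions, not the chapter's prescription:

import numpy as np

# Toy compressor/decompressor following the stage model above.

def compress(row, step=2):
    mapped = np.diff(row, prepend=0)                  # mapping: adjacent-pixel differences
    quantized = np.round(mapped / step).astype(int)   # quantization: coarser, discrete values
    return quantized                                  # coding stage omitted; see Huffman coding below

def decompress(quantized, step=2):
    mapped = quantized * step                         # decoding / inverse quantization
    return np.cumsum(mapped)                          # inverse mapping: undo the differencing

row = np.array([100, 102, 103, 105, 110, 111, 111, 112])
print(decompress(compress(row)))   # an approximation of the original row: quantization makes this lossy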
Average number of bits per symbol

Each run is stored as a pair (value, run length).
Each value and run length requires 4 bits. Therefore, each run requires 4 bits + 4 bits = 8 bits.

Total bits required = Number of runs × Bits per run
Total bits = 43 × 8 = 344 bits
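A minimal run-length encoding sketch along these lines; the 4 + 4 bit layout matches the description above, but the sample row is an illustrative placeholder, not the chapter's 43-run example:

def rle_encode(pixels):
    # Run-length encoding: store each run as a (value, run_length) pair.
    runs = []
    count = 1
    for prev, cur in zip(pixels, pixels[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((pixels[-1], count))
    return runs

# Illustrative data (not the chapter's 43-run example): a short row of 4-bit values.
row = [0, 0, 0, 5, 5, 7, 7, 7, 7, 2]
runs = rle_encode(row)
print(runs)                        # [(0, 3), (5, 2), (7, 4), (2, 1)]

bits_per_run = 4 + 4               # 4 bits for the value, 4 bits for the run length
print(len(runs) * bits_per_run)    # total bits required = number of runs x bits per run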
Variations
Huffman Coding:
• Error-free compression.

• Variable length coding.

The simplest construction algorithm uses a priority queue where the node with
lowest probability is given highest priority:

1. Create a leaf node for each symbol and add it to the priority queue.

2. While there is more than one node in the queue:

i. Remove the two nodes of highest priority (lowest probability) from the
queue.
ii. Create a new internal node with these two nodes as children and with
probability equal to the sum of the two nodes' probabilities.
iii. Add the new node to the queue.

3. The remaining node is the root node and the tree is complete.
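A minimal sketch of this construction using a priority queue (Python's heapq); the symbol probabilities are placeholders, not the chapter's example data:

import heapq
from itertools import count

def huffman_codes(probabilities):
    # Build the Huffman code with a priority queue: repeatedly merge the two
    # entries of lowest probability, prefixing their codewords with 0 and 1.
    tiebreak = count()   # keeps heap entries comparable when probabilities tie
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)   # lowest probability
        p2, _, codes2 = heapq.heappop(heap)   # second lowest
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

# Placeholder probabilities (not the chapter's example).
probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
codes = huffman_codes(probs)
print(codes)
print(sum(probs[s] * len(codes[s]) for s in probs))   # average code length, bits per symbol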
Create Huffman Codes and average code length

Ensure no code is a prefix of another code, confirming it's a prefix-free code.

A prefix-free code is an instantaneously decodable code, whose decoding can


be performed as immediately as a codeword is found on successive receipt of
the code symbols.
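A quick way to confirm the prefix-free property of a code table (a sketch, reusing the placeholder codes produced by the Huffman sketch above):

def is_prefix_free(codes):
    # After sorting, any prefix relation shows up between adjacent codewords.
    words = sorted(codes.values())
    return all(not b.startswith(a) for a, b in zip(words, words[1:]))

print(is_prefix_free({"a": "0", "b": "10", "c": "111", "d": "110"}))   # True
print(is_prefix_free({"a": "0", "b": "01"}))                           # False: "0" is a prefix of "01"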
Entropy Calculation for Each Symbol:

In practical terms, this means that on average, about 2.2198 bits are needed to encode each symbol in the most efficient way possible, according to their given probabilities. This entropy value gives a benchmark for the effectiveness of any compression algorithm applied to this dataset.

Is this true?
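The entropy benchmark itself is straightforward to compute from the symbol probabilities; in the sketch below the distribution is a placeholder, not the one that yields the 2.2198-bit figure quoted above:

import math

def entropy(probabilities):
    # H = -sum(p * log2(p)) over all symbols with non-zero probability:
    # the lower bound on average bits per symbol for any lossless code.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Placeholder distribution (not the one behind the 2.2198-bit figure above).
probs = [0.4, 0.3, 0.2, 0.1]
print(entropy(probs))   # ~1.846 bits per symbol for this particular distribution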
