QB 1
• Definition: Digital Image Processing (DIP) refers to the use of computer algorithms
to process, analyze, enhance, and manipulate images in a digital format.
• It involves operations such as image enhancement, filtering, segmentation,
object detection, and compression to improve image quality or extract meaningful
information.
Significance:
1. Medical Imaging: Used in X-rays, MRI, and CT scans for disease diagnosis and
analysis.
2. Remote Sensing: Used in satellite imaging for environmental monitoring and
mapping.
3. Industrial Inspection: Helps in detecting defects in manufactured products using
automated systems.
4. Security and Surveillance: Used in facial recognition, biometric authentication,
and forensic analysis.
5. Entertainment & Multimedia: Applied in photo editing, video enhancement, and
CGI effects in films.
3. List and explain any five fundamental steps in digital image processing.
1. Image Acquisition: Capturing an image using sensors such as cameras, scanners, or medical imaging devices.
2. Image Enhancement: Improving image quality by increasing contrast, reducing noise, and sharpening details.
3. Image Restoration: Correcting distortions and removing unwanted noise or blurring from an image.
4. Segmentation: Dividing an image into meaningful parts to identify objects, regions, or features.
5. Object Recognition: Identifying objects or patterns in an image, such as face detection in security systems.
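Two of the steps above (enhancement and segmentation) can be sketched in a few lines of NumPy. This is a minimal illustration on a synthetic low-contrast image, not a full processing pipeline; the function names and the test image are invented for the example:

```python
import numpy as np

def enhance_contrast(img):
    """Enhancement step: min-max contrast stretch to the full 0-255 range."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return img.copy()
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def threshold_segment(img, t=128):
    """Segmentation step: split into background (0) / foreground (1)
    by a fixed intensity threshold."""
    return (img >= t).astype(np.uint8)

# Synthetic "acquired" image: a low-contrast gradient (values 100-150)
img = np.tile(np.linspace(100, 150, 8).astype(np.uint8), (8, 1))
stretched = enhance_contrast(img)    # enhancement
mask = threshold_segment(stretched)  # segmentation
```

After stretching, the narrow 100-150 range spans the full 0-255 scale, which is why thresholding then separates the two halves of the gradient cleanly.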
6. Describe the distribution of rods and cones in the retina and their functions.
Rods:
1. Located in the peripheral region of the retina.
2. More sensitive to low light (night vision).
3. Do not detect colour; only provide black and white vision.
4. About 120 million rods are present in the human eye.
5. Help in detecting motion and peripheral vision.
Cones:
1. Concentrated in the centre (fovea) of the retina.
2. Responsible for colour vision (red, green, and blue cones).
3. Work best in bright light conditions.
4. About 6 million cones are present in the human eye.
5. Help in detecting fine details and sharp vision.
9. Explain image sensing and acquisition using a single sensing element with a
suitable diagram.
1. A single sensor (e.g., a photodiode) measures light intensity at one point at a time.
2. The sensor or the object is moved mechanically in both the x and y directions to scan the whole scene.
3. The sensor output is sampled and quantized to build up the image point by point.
4. Slow, but capable of very high precision; used in microdensitometers for film scanning.
5. A filter placed in front of the sensor can select the wavelength band to be captured.
10. How does image sensing and acquisition using a linear sensing strip work?
1. A linear array of sensors captures an image row-by-row.
2. Commonly used in fax machines and line-scanning cameras.
3. Moves across the image to capture details efficiently.
4. Faster than single sensing elements but requires movement.
5. Produces continuous and high-quality images.
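The row-by-row capture described above can be simulated directly: each step of the strip's motion yields one 1-D readout, and stacking the readouts reconstructs the 2-D image. The scene array below is a hypothetical stand-in for the physical object being scanned:

```python
import numpy as np

def line_scan(scene, n_rows):
    """Simulate a linear sensing strip: read out one 1-D row per step
    as the strip moves across the scene, then stack the rows."""
    rows = [scene[r, :] for r in range(n_rows)]  # one strip readout per step
    return np.stack(rows, axis=0)

scene = np.arange(12).reshape(3, 4)  # hypothetical 3x4 scene
image = line_scan(scene, scene.shape[0])
```

The reconstructed `image` equals the original scene, which is the point of the technique: full 2-D coverage from a 1-D sensor plus motion.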
11. Explain the working principle of a circular sensing strip used in imaging.
1. Sensors are arranged in a circular pattern around the target.
2. Used in CT (Computed Tomography) scanners.
3. Captures image slices from different angles for a detailed 3D image.
4. Provides high-resolution and accurate imaging.
5. Used in medical imaging, security screening, and industrial testing.
12. Describe image acquisition using sensor arrays, and compare CCD and CMOS sensors.
In a sensor array, individual sensors are arranged in a 2-D grid, so the complete image is captured at once without any mechanical motion; this is the arrangement used in digital cameras. The two main array technologies are compared below:

Feature           | CCD (Charge-Coupled Device)     | CMOS (Complementary Metal-Oxide-Semiconductor)
Quality           | High image quality, low noise   | Slightly lower image quality
Speed             | Slower due to charge transfer   | Faster due to individual pixel readout
Power Consumption | Higher power usage              | Low power consumption
Cost              | Expensive                       | Cheaper
Applications      | High-end cameras and telescopes | Smartphones and webcams
13. Explain the image formation model with respect to illumination and reflectance.
The image formation model is based on the interaction between illumination (light
source) and reflectance (object surface properties).
1. Illumination i(x, y)
o The amount of light falling on an object from a source.
o Examples: sunlight, artificial lights, infrared rays.
o Measured in lumens or lux.
2. Reflectance r(x, y)
o The fraction of incident light reflected from an object's surface.
o A perfectly black object has reflectance 0, while an ideal mirror has reflectance 1.
3. Image Function f(x, y)
o Formed as the product of illumination and reflectance: f(x, y) = i(x, y) × r(x, y)
4. Effect on Image Quality
o High illumination = bright images.
o Low illumination = dark images.
o Uneven reflectance can cause shadows or glare.
5. Application in Image Processing
o Used for image enhancement, object recognition, and texture analysis in
various fields.
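The product model f(x, y) = i(x, y) × r(x, y) is easy to evaluate numerically. The sketch below uses made-up illumination and reflectance fields (uneven lighting over a grey surface with one dark patch) purely to show how the two factors combine:

```python
import numpy as np

# Illumination i(x, y): lighting that brightens from left to right
i = np.tile(np.linspace(0.2, 1.0, 5), (4, 1))

# Reflectance r(x, y): mid-grey surface (0.8) with a dark patch (0.1)
r = np.full((4, 5), 0.8)
r[1:3, 1:3] = 0.1

# Image function: pointwise product of illumination and reflectance
f = i * r
```

The brightest pixel (0.8) occurs where full illumination meets the grey surface, and the darkest (0.04) where weak illumination meets the dark patch, matching the points about bright/dark images and shadows above.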
14. What is image sampling, and how does it affect image resolution?
1. Definition:
o Sampling refers to converting a continuous image into a discrete digital
format by taking pixel samples at fixed intervals.
2. Effect on Resolution:
o Higher sampling rate → More pixels → Sharper, high-resolution images.
o Lower sampling rate → Fewer pixels → Blurry, pixelated images.
3. Nyquist Sampling Theorem:
o Sampling frequency should be at least twice the highest frequency in the
image to avoid loss of details.
4. Image Resolution Relation:
o Resolution depends on the number of pixels per unit area.
o Common resolutions: 720p, 1080p, 4K, 8K.
5. Applications:
o Used in digital photography, medical imaging (MRI scans), and satellite
imaging.
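The effect of the sampling rate can be demonstrated by subsampling a finely sampled grid: keeping every second pixel halves the sampling rate and discards the detail in between. The array here is a synthetic example, not real image data:

```python
import numpy as np

def downsample(img, step):
    """Lower the sampling rate by keeping every `step`-th pixel
    in each direction; detail between the kept samples is lost."""
    return img[::step, ::step]

img = np.arange(64).reshape(8, 8)  # finely sampled 8x8 grid
coarse = downsample(img, 2)        # half the sampling rate -> 4x4
```

Note the connection to the Nyquist criterion: any pattern that varies faster than once every two kept samples cannot be represented in `coarse` and will alias.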
15. Explain image quantization and its impact on digital image representation
1. Definition:
o Quantization is the process of reducing the number of intensity levels in an
image for efficient storage and processing.
2. Effect on Image Representation:
o More quantization levels → Smooth, high-quality images.
o Fewer quantization levels → Loss of details, posterization effect.
3. Bit Depth & Image Quality:
o 8-bit image = 256 intensity levels (grayscale).
o 24-bit image = 16.7 million colours (true colour).
4. Trade-offs:
o More quantization levels → better quality but larger file size.
o Fewer quantization levels → smaller file size but visible loss of quality.
5. Applications:
o Used in JPEG compression, medical imaging, and computer vision.
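Quantization to fewer intensity levels can be sketched as integer division and rescaling. Applied to a smooth 0-255 gradient, reducing 256 levels to 4 produces the posterization effect described above; the test image is synthetic:

```python
import numpy as np

def quantize(img, levels):
    """Map a uint8 image onto `levels` evenly spaced intensity levels."""
    step = 256 // levels
    return ((img // step) * step).astype(np.uint8)

gradient = np.arange(256, dtype=np.uint8).reshape(16, 16)
q4 = quantize(gradient, 4)  # only 4 grey levels remain (posterization)
```

With 4 levels the only surviving intensities are 0, 64, 128, and 192, so the smooth gradient collapses into four flat bands.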
17. Describe Gamma Ray Imaging and its applications in medical and industrial fields.
1. Definition:
o Imaging technique using high-energy gamma rays to capture images of
internal structures.
2. Working Principle:
o Gamma rays penetrate objects and detect density variations, producing an
image.
3. Medical Applications:
o Cancer detection (PET scans).
o Radiotherapy (targeting tumours with radiation).
4. Industrial Applications:
o Non-destructive testing (NDT) to detect cracks in materials.
o Security screening (cargo and vehicle inspections).
5. Advantages:
o Provides deep tissue imaging.
o Detects defects without damaging the object.
18. Explain the role of X-ray Imaging in healthcare and security applications
1. Definition:
o X-ray imaging uses short-wavelength radiation to capture images of dense
materials.
2. Healthcare Applications:
o Bone fracture detection (orthopaedics).
o Dental imaging (cavities, root infections).
o Lung & chest scans (pneumonia, tuberculosis).
3. Security Applications:
o Airport baggage scanning (detecting prohibited items).
o Border security (checking concealed weapons).
o Industrial testing (weld integrity checks).
4. Working Principle:
o X-rays pass through soft tissues but are absorbed by bones, metals, and
dense objects, forming an image.
5. Advantages:
o Quick, non-invasive diagnostics.
o Effective security screening tool.
19. What is Synthetic Imaging, and how is it applied in modern AI-based image
processing?
1. Definition:
o Synthetic imaging uses AI algorithms to reconstruct, generate, or enhance
images that may not be captured directly.
2. Working Principle:
o AI-based models analyse existing data and generate new images or
enhance low-quality ones.
3. Applications:
o Medical: AI-generated MRI reconstructions.
o Entertainment: Deepfake technology, CGI in movies.
o Forensics: AI restoration of old/damaged images.
o Autonomous Vehicles: Image enhancement in self-driving cars.
o Satellite Imaging: Generating high-resolution maps from low-quality
images.
4. Advantages:
o Improves image quality and details.
o Reduces manual image processing workload.
5. Challenges:
o Ethical concerns (e.g., deepfakes).
o High computational requirements.
20. Compare visible, infrared, and ultraviolet imaging in terms of applications and
properties.
Imaging Type        | Wavelength Range | Applications                                        | Key Properties
Visible Imaging     | 400-700 nm       | Photography, medical imaging, security cameras      | Captures what human eyes can see
Infrared Imaging    | 700 nm - 1 mm    | Night vision, thermal scanning, remote sensing      | Detects heat and temperature variations
Ultraviolet Imaging | 10-400 nm        | Forensic analysis, skin damage detection, astronomy | Reveals details not visible to the naked eye
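The wavelength ranges in the table translate directly into a small classifier. This is an illustrative helper (the function name and the boundary handling are choices made for the example; 400 nm and 700 nm are assigned to the visible band):

```python
def imaging_band(wavelength_nm):
    """Classify a wavelength (in nm) into the imaging bands from the table."""
    if 10 <= wavelength_nm < 400:
        return "ultraviolet"
    if 400 <= wavelength_nm <= 700:
        return "visible"
    if 700 < wavelength_nm <= 1_000_000:  # 1 mm = 1,000,000 nm
        return "infrared"
    return "out of range"
```

For example, 550 nm (green light) falls in the visible band, 200 nm in the ultraviolet, and 10,000 nm (thermal imaging) in the infrared.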