CV Viva Probable

Image Processing and Computer Vision

(Course Instructor: Chiranjit Pal)


1. What are active contour models in image processing?
2. How do active contour models differ from traditional edge detection methods?
3. Explain the basic concept of pattern recognition in image understanding.
4. What is scene labeling in the context of image understanding?
5. Describe constraint propagation in image understanding.
6. How do semantic image segmentation and traditional image segmentation differ?
7. What is a Hidden Markov Model (HMM) and how is it applied in image
understanding?
8. Can you explain the role of energy minimization in active contour models?
9. What are the primary challenges in pattern recognition for image understanding?
10. Describe the concept of the "snakes" algorithm in active contour models (a code sketch follows this list).
11. How does scene labeling help in high-level image understanding tasks?
12. What are the key components of an active contour model?
13. What are some common evaluation metrics for pattern recognition algorithms in
image understanding?
14. How do constraint propagation techniques improve the accuracy of image
understanding?
15. Describe a practical application of active contour models in medical imaging.
16. What is the role of semantic information in image segmentation?
17. What are the limitations of active contour models?
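For questions 1, 8, 10 and 12 above, here is a minimal sketch of the snakes algorithm, assuming scikit-image is available; the built-in astronaut test image and all parameter values are illustrative choices, not part of the course material.

```python
# Active contour ("snakes") sketch: an initial closed curve is deformed by
# energy minimisation until it locks onto nearby image edges.
import numpy as np
from skimage import data
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = rgb2gray(data.astronaut())              # sample image shipped with scikit-image

# Initial contour: a circle given as (row, col) coordinates.
s = np.linspace(0, 2 * np.pi, 400)
init = np.stack([100 + 100 * np.sin(s),       # rows
                 220 + 100 * np.cos(s)], axis=1)  # cols

# alpha and beta weight the internal energy terms (elasticity and smoothness),
# gamma is the step size; the external energy pulls the curve towards edges
# of the Gaussian-smoothed image.
snake = active_contour(gaussian(img, sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)                            # (400, 2): final contour points
```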

UNIT-II

1. What is thresholding in image processing? (A code sketch follows this list.)
2. Explain the concept of edge-based segmentation.
3. What are the advantages of region-based segmentation over edge-based
segmentation?
4. How does region growing differ from region splitting and merging in segmentation?
5. Discuss the importance of matching in object recognition.
6. What are the challenges associated with thresholding techniques?
7. Compare contour-based and region-based shape representation methods.
8. How do shape descriptors aid in object recognition?
9. What role does feature extraction play in shape representation?
10. Describe the role of shape descriptors in shape representation and recognition, and
provide examples of commonly used descriptors.
11. How do shape classes facilitate object recognition, and what factors influence the
choice of shape classes in a recognition system?
12. How is a homography matrix estimated from corresponding points in two images?
13. In what scenarios is Harris Corner Detection preferred over other feature detection
methods?
14. What are the main steps involved in computing SIFT features?
15. How does the Harris Corner Detector handle noise, and what are some strategies to
improve its robustness?
16. Explain the concept of projective geometry and its relationship with the homography
matrix.
17. Why is SIFT considered scale-invariant, and how does this property benefit image
matching?
18. How does the gradient orientation histogram in HOG features help in capturing the
edge information of an image?
19. How do you choose the cell size and block size in HOG feature extraction, and how
do these parameters affect performance?
20. What is the role of the Difference of Gaussians (DoG) in SIFT, and how does it
contribute to scale-invariance?
21. How does non-maximum suppression work in the context of Harris Corner
Detection?
22. What are the limitations of HOG features?
23. How does the Harris Corner Detection algorithm handle scale variations, and what
are the limitations in this regard?
24. Explain how RANSAC is used in conjunction with the homography matrix for robust
estimation (a code sketch follows this list).
25. What are the key differences between graph cuts and traditional clustering methods
for image segmentation?
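For questions 1 and 6 in this unit, a minimal thresholding sketch, assuming OpenCV (cv2) is installed; the file name input.png is a placeholder.

```python
# Global thresholding sketch: a fixed threshold versus Otsu's method, which
# picks the threshold automatically from the grey-level histogram.
import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Fixed threshold: every pixel above 127 becomes 255, the rest become 0.
_, binary_fixed = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Otsu's method ignores the supplied value (0) and returns the chosen threshold.
t, binary_otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", t)
```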
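For questions 12, 14, 17 and 24, a sketch of homography estimation with RANSAC, assuming OpenCV 4.4+ (which ships SIFT in the main module); the image file names are placeholders.

```python
# Homography estimation sketch: SIFT keypoints, ratio-test matching, then
# RANSAC picks the homography supported by the largest set of inliers.
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)    # placeholder inputs
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on the two nearest neighbours of each descriptor.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC: fit H from random 4-point samples and keep the model whose
# reprojection error stays below 5 px for the most correspondences.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(H)
print(int(inlier_mask.sum()), "inliers out of", len(good), "matches")
```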
UNIT-I
26. What are the essential properties of an image in the context of image analysis?
27. Describe common data structures used in image analysis and their applications.
28. What are geometric transforms in image processing, and why are they important?
29. What are zero-crossings in the context of edge detection, and how are they used?
30. Explain the Canny edge detection algorithm and its key steps (a code sketch follows this list).
31. How do edge detectors differ in their sensitivity to noise, and which methods are
typically more robust?
32. What are the main challenges in edge detection for images with low contrast?
33. How does the choice of kernel size in Gaussian smoothing affect edge detection
results?
34. What role does non-maximum suppression play in edge detection, and why is it
necessary?
35. Describe how edge tracking by hysteresis works in the Canny edge detection
algorithm.
36. Describe the role of affine transforms in geometric transformations and their
application in image registration.
37. How does the Sobel operator function in edge detection, and what are its limitations?
38. What are the advantages of using zero-crossing detection in edge detection
algorithms?
39. In the context of Canny edge detection, why is Gaussian smoothing applied before
gradient computation?
40. What are the key characteristics of a good edge detection algorithm?
41. What is noise?
42. Describe how the Hough Transform is used in detecting geometric shapes within an
image (a code sketch follows this list).
43. Explain the concept of edge linking and its importance in edge detection.
44. What are the advantages and disadvantages of the Prewitt operator compared to the
Sobel operator?
45. How does the selection of threshold values affect the performance of edge detection
algorithms like the Canny edge detector?
46. What is the role of the Laplacian of Gaussian (LoG) in edge detection, and how does it
differ from simple gradient-based methods?
47. What is image digitization, and why is it important in image processing?
48. Explain the process of sampling in image digitization.
49. What is quantization in the context of image digitization?
50. Describe the trade-offs involved in choosing the resolution and bit depth during
image digitization.
51. How does the Nyquist sampling theorem apply to image digitization?
52. What are the key differences between computer graphics and computer vision?
53. How do computer graphics and computer vision complement each other in mixed
reality applications?
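For questions 30, 33, 34, 35, 39 and 45, a minimal Canny sketch, assuming OpenCV; the kernel size, sigma and the two hysteresis thresholds are illustrative values and input.png is a placeholder.

```python
# Canny edge detection sketch: cv2.Canny performs gradient computation,
# non-maximum suppression and hysteresis internally; smoothing is done
# explicitly here so the kernel size can be varied.
import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)    # placeholder file name

# Gaussian smoothing before gradient computation suppresses noise that would
# otherwise create spurious edge responses.
blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)

# Hysteresis: gradients above 150 are strong edges; pixels between 50 and 150
# are kept only if they connect to a strong edge.
edges = cv2.Canny(blurred, 50, 150)
```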
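For question 42, a sketch of straight-line detection with the Hough Transform, again assuming OpenCV; the resolutions and vote threshold are illustrative.

```python
# Hough Transform sketch: each edge pixel votes for every (rho, theta) line
# passing through it; peaks in the accumulator correspond to detected lines.
import cv2
import numpy as np

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)    # placeholder file name
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 1.4), 50, 150)

# rho resolution 1 px, theta resolution 1 degree, at least 200 accumulator votes.
lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)
if lines is not None:
    for rho, theta in lines[:, 0]:
        print(f"line: rho = {rho:.1f}, theta = {np.degrees(theta):.1f} deg")
```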
