Introduction to Computer Vision

This document provides an overview of computer vision, covering its goals, challenges, and applications, as well as fundamental concepts such as image representation, noise filtering, edge detection, and stereo vision. It consists of a series of multiple-choice questions and answers on topics including the Hough transform, camera models, and image noise types. Key insights include the distinction between image processing and computer vision, the importance of smoothing before edge detection, and the principles of stereo vision.
1. What is the primary goal of computer vision?
A) To transform images into text B) To write programs that can interpret images C) To enhance image resolution D) To compress images for storage Answer: B
2. Which of the following is NOT a typical application of computer vision?
A) Optical character recognition (OCR) B) Face detection C) Audio signal processing D) 3D modeling Answer: C
3. Why is computer vision considered a challenging field?
A) Because images are always high resolution B) Because interpreting images requires building percepts from measurements, not just measuring properties C) Because cameras are expensive D) Because humans are better at all vision tasks Answer: B
4. Which statement best describes the difference between image processing and computer vision?
A) Image processing manipulates images, while computer vision extracts meaning from images B) Image processing is hardware-based, while computer vision is software-based C) Image processing is older, while computer vision is newer D) There is no difference Answer: A
5. What does the "state of the art" in computer vision imply?
A) Computers are always better than humans at vision tasks B) Computers can outperform humans in some tasks but struggle with "hard" problems C) Computers can only recognize handwritten digits D) Computers are incapable of real-time processing Answer: B
Images as Functions
6. How can an image be mathematically represented?
A) As a 2D array of numbers B) As a function I(x, y) mapping coordinates to intensity values C) As a vector of pixels D) All of the above Answer: D
7. What is the range of a digital image function?
A) [0, ∞) B) [a, b] × [c, d] → [min, max] C) R² → R³ D) [0, 1] Answer: B
8. A color image can be represented as:
A) A single function f(x, y) B) Three stacked functions for red, green, and blue channels C) A 1D array of intensities D) A binary matrix Answer: B
9. What does the function f(x, y) = [r(x, y), g(x, y), b(x, y)] represent?
A) A grayscale image B) A color image C) A binary image D) A depth map Answer: B
10. In discrete images, sampling refers to:
A) Quantizing intensity values B) Measuring intensity on a regular grid C) Applying a Gaussian filter D) Compressing the image Answer: B
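To make questions 6-10 concrete, here is a minimal sketch (assuming NumPy) that treats a grayscale image as a sampled, quantized function I(x, y) stored in a 2D array, and a color image as three stacked channel functions.

import numpy as np

# A grayscale image as a function I(x, y): intensity values sampled on a
# regular grid (question 10) and stored as a 2D array (question 6).
I = np.zeros((480, 640), dtype=np.uint8)   # 480 rows (y), 640 columns (x)
I[100:200, 300:400] = 255                  # a bright rectangle

# "Evaluating" the image function at a pixel: note the row/column order.
x, y = 350, 150
print("I(x, y) =", I[y, x])                # 255

# A color image stacks three such functions, one per channel:
# f(x, y) = [r(x, y), g(x, y), b(x, y)]  (questions 8 and 9).
rgb = np.dstack([I, I, I])
print("f(x, y) =", rgb[y, x])              # [255 255 255]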
Noise and Filtering
11. What is Gaussian noise?
A) Noise with a uniform distribution B) Noise with values drawn from a Gaussian distribution C) Noise that only affects edges D) Noise caused by lens distortion Answer: B
12. Salt-and-pepper noise is characterized by:
A) Random occurrences of black and white pixels B) Smooth variations in intensity C) Gaussian-distributed pixel values D) Blurring effects Answer: A
13. Which filter is best suited for removing salt-and-pepper noise?
A) Gaussian filter B) Median filter C) Box filter D) Sobel filter Answer: B
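A small illustration of question 13, assuming SciPy's ndimage module: the median filter discards isolated salt-and-pepper outliers, while a Gaussian filter only smears them into their surroundings.

import numpy as np
from scipy import ndimage

# Start from a constant synthetic image and corrupt about 5% of the pixels
# with salt-and-pepper noise (random black and white pixels).
rng = np.random.default_rng(0)
img = np.full((100, 100), 128.0)
mask = rng.random(img.shape) < 0.05
img[mask] = rng.choice([0.0, 255.0], size=mask.sum())

# The median filter replaces each pixel by the median of its neighborhood,
# which rejects isolated outliers almost completely.
median = ndimage.median_filter(img, size=3)

# The Gaussian filter averages the outliers into their surroundings instead,
# leaving visible smudges.
gaussian = ndimage.gaussian_filter(img, sigma=1.0)

print("max error, median  :", np.abs(median - 128).max())
print("max error, gaussian:", np.abs(gaussian - 128).max())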
14. What is the primary purpose of a smoothing filter?
A) To enhance edges B) To reduce noise by averaging pixel values C) To increase image contrast D) To quantize the image Answer: B
15. The Gaussian filter is preferred over a box filter because:
A) It is computationally cheaper B) It provides uniform weights C) It better approximates a blurry spot with higher weights near the center D) It only works in the frequency domain Answer: C
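For question 15, the contrast between the two kernels is easy to see by printing their weights (a sketch assuming NumPy; the 5×5 size and σ = 1 are arbitrary choices).

import numpy as np

size, sigma = 5, 1.0
ax = np.arange(size) - size // 2

# 2D Gaussian kernel: weights decay with distance from the center pixel,
# approximating a blurry spot with the highest weight in the middle.
xx, yy = np.meshgrid(ax, ax)
gauss = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
gauss /= gauss.sum()

# Box kernel: every neighbor gets exactly the same weight.
box = np.full((size, size), 1.0 / size**2)

print(np.round(gauss, 3))   # peaked at the center
print(np.round(box, 3))     # flat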
16. What property must a filter satisfy to be linear?
A) Additivity and homogeneity B) Commutativity C) Associativity D) Differentiability Answer: A
17. Which operation is NOT linear?
A) Sum B) Median C) Average D) Convolution Answer: B
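Question 17's point, that the median is not linear, can be checked directly: additivity fails, so median(a + b) need not equal median(a) + median(b). A tiny NumPy check:

import numpy as np

a = np.array([0, 0, 10])
b = np.array([0, 10, 0])

lhs = np.median(a + b)                # median([0, 10, 10]) = 10
rhs = np.median(a) + np.median(b)     # 0 + 0 = 0
print(lhs, rhs, lhs == rhs)           # 10.0 0.0 False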
18. The impulse response of a system describes:
A) The output when the input is an impulse B) The noise in the system C) The resolution of the system D) The color response of the system Answer: A
19. Convolution is equivalent to correlation if:
A) The kernel is symmetric B) The image is grayscale C) The kernel is flipped D) The image is padded Answer: A
20. The derivative of a convolution can be computed as:
A) The convolution of the derivative B) The derivative of the kernel C) The sum of the derivatives D) The product of the derivatives Answer: A
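To wrap up questions 16-20, a short sketch (assuming SciPy's ndimage module) showing that convolution and correlation agree for a symmetric kernel but differ for an asymmetric one.

import numpy as np
from scipy import ndimage

signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # an impulse

# Symmetric kernel: flipping it changes nothing, so convolution == correlation
# (question 19).
symmetric = np.array([1.0, 2.0, 1.0])
print(ndimage.convolve1d(signal, symmetric))
print(ndimage.correlate1d(signal, symmetric))

# Asymmetric kernel: convolution flips the kernel, correlation does not.
asymmetric = np.array([1.0, 0.0, -1.0])
print(ndimage.convolve1d(signal, asymmetric))    # [0, 1, 0, -1, 0]
print(ndimage.correlate1d(signal, asymmetric))   # [0, -1, 0, 1, 0]

# Because convolution is linear and shift-invariant, the derivative of a
# convolution equals the convolution with the derivative of the kernel
# (question 20) - which is why derivative-of-Gaussian kernels are used.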
Edge Detection and Gradients
21. Edges in an image correspond to:
A) Regions of constant intensity B) Extrema of the image derivative C) Areas of low contrast D) Gaussian noise Answer: B
22. The gradient of an image I(x, y) is given by:
A) ∇I = (∂I/∂x, ∂I/∂y) B) ∇I = ∂I/∂x + ∂I/∂y C) ∇I = I(x+1, y) − I(x, y) D) ∇I = mean(I) Answer: A
23. The Sobel operator is used for:
A) Smoothing B) Edge detection C) Noise removal D) Image compression Answer: B
24. The magnitude of the image gradient represents:
A) The direction of edges B) The strength of edges C) The color of edges D) The curvature of edges Answer: B
25. Why is smoothing necessary before edge detection?
A) To reduce noise that can cause spurious edges B) To increase image brightness C) To quantize the image D) To compress the image Answer: A
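As a summary of questions 21-25, a sketch (assuming NumPy and SciPy) that smooths a noisy step edge before applying the Sobel operator, then reads off edge strength and direction from the gradient.

import numpy as np
from scipy import ndimage

# Synthetic image: a vertical step edge plus Gaussian noise.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, 32:] = 255.0
noisy = img + rng.normal(0, 20, img.shape)

# Smooth first to suppress noise that would otherwise create spurious edges,
# then estimate the gradient with the Sobel operator.
smoothed = ndimage.gaussian_filter(noisy, sigma=1.5)
gx = ndimage.sobel(smoothed, axis=1)   # ∂I/∂x
gy = ndimage.sobel(smoothed, axis=0)   # ∂I/∂y

# Gradient magnitude = edge strength, gradient angle = edge orientation.
magnitude = np.hypot(gx, gy)
orientation = np.arctan2(gy, gx)
print("strongest response near column:", magnitude.sum(axis=0).argmax())  # ≈ 32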
Hough Transform
26. The Hough transform is used for:
A) Detecting lines and shapes in images B) Smoothing images C) Compressing images D) Removing noise Answer: A
27. In the Hough transform, a line in image space corresponds to:
A) A point in Hough space B) A circle in Hough space C) A line in Hough space D) A plane in Hough space Answer: A
28. The polar representation of a line is given by:
A) x cos θ + y sin θ = d B) y = mx + b C) ax + by + c = 0 D) r = x² + y² Answer: A
29. What is the primary advantage of the Hough transform?
A) It can detect lines even with missing or noisy data B) It is computationally inexpensive C) It works only on binary images D) It does not require edge detection Answer: A
30. The complexity of the Hough transform depends on:
A) The number of parameters in the model B) The size of the image C) The resolution of the accumulator array D) All of the above Answer: D
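A minimal, illustrative Hough transform for lines in the polar form x cos θ + y sin θ = d (questions 26-30), assuming NumPy; the accumulator resolution and the synthetic points are arbitrary choices.

import numpy as np

# Edge points on the horizontal line y = 20, plus one outlier.
points = [(x, 20) for x in range(0, 50, 5)] + [(7, 3)]

# Accumulator over the line parameters (theta, d): each edge point votes for
# every line x·cosθ + y·sinθ = d that passes through it.
thetas = np.deg2rad(np.arange(0, 180))          # 1° steps
d_max = 100
accumulator = np.zeros((len(thetas), 2 * d_max), dtype=int)

for x, y in points:
    d = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
    accumulator[np.arange(len(thetas)), d + d_max] += 1

# The accumulator peak gives the dominant line (theta ≈ 90°, d ≈ 20) even
# though one point does not fit and other points could be missing.
t_idx, d_idx = np.unravel_index(accumulator.argmax(), accumulator.shape)
print("theta =", np.rad2deg(thetas[t_idx]), "d =", d_idx - d_max)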
Camera Models and Projections
31. The pinhole camera model assumes:
A) Light passes through a small aperture B) The lens is perfectly focused C) The image plane is behind the aperture D) All of the above Answer: A
32. Perspective projection causes:
A) Parallel lines to converge at vanishing points B) Objects to retain their size regardless of distance C) No distortion D) Uniform scaling Answer: A
33. Orthographic projection assumes:
A) The image plane is at infinity B) The focal length is zero C) Objects are scaled by their distance D) The camera is rotating Answer: A
34. The thin lens equation is:
A) 1/f = 1/z + 1/z′ B) f = z + z′ C) z = f · z′ D) z′ = f/z Answer: A
35. Depth of field is affected by:
A) Aperture size B) Focal length C) Distance to the subject D) All of the above Answer: D
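To connect questions 31, 32 and 44, a tiny sketch of perspective projection under the pinhole model, with an assumed focal length; the project() helper is purely illustrative.

import numpy as np

f = 0.05   # assumed focal length: 50 mm

def project(point):
    # Pinhole / perspective projection: a 3D point (X, Y, Z) in camera
    # coordinates maps to image coordinates (f·X/Z, f·Y/Z).
    X, Y, Z = point
    return np.array([f * X / Z, f * Y / Z])

# The same 1 m offset imaged from 2 m and from 4 m: its image shrinks with
# depth, which is why parallel lines in the world converge at a vanishing point.
print(project((1.0, 0.0, 2.0)))   # [0.025  0.   ]
print(project((1.0, 0.0, 4.0)))   # [0.0125 0.    ]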
Stereo Vision
36. Stereo vision relies on:
A) Two images from slightly different viewpoints B) A single image with depth cues C) Infrared sensors D) Motion blur Answer: A
37. Disparity in stereo vision is defined as:
A) The difference in pixel positions of a point in two images B) The focal length of the camera C) The distance between the cameras D) The angle between the cameras Answer: A
38. The epipolar constraint states that:
A) Corresponding points lie on conjugate epipolar lines B) All points lie on the same plane C) The baseline is perpendicular to the image plane D) The cameras must be parallel Answer: A
39. The fundamental matrix describes:
A) The relationship between two cameras in stereo B) The intrinsic parameters of a camera C) The distortion of a lens D) The focal length Answer: A
40. The correspondence problem in stereo vision refers to:
A) Finding matching points in two images B) Calibrating the cameras C) Removing noise D) Compressing the images Answer: A
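For questions 36-40, the standard depth-from-disparity relation for a rectified stereo pair is Z = f·B/d; a small sketch with assumed focal-length and baseline values:

# With focal length f (in pixels), baseline B (distance between the cameras)
# and disparity d (difference in the x-coordinates of corresponding points),
# depth is Z = f * B / d: large disparity means a nearby point.
f_pixels = 700.0    # assumed focal length in pixels
baseline = 0.12     # assumed baseline in meters

def depth_from_disparity(d_pixels):
    return f_pixels * baseline / d_pixels

print(depth_from_disparity(40.0))   # nearby point: 2.1 m
print(depth_from_disparity(10.0))   # far point:    8.4 m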
Additional Questions
41. Which of the following is NOT a type of image noise?
A) Gaussian noise B) Salt-and-pepper noise C) Edge noise D) Impulse noise Answer: C
42. The median filter is:
A) Linear B) Non-linear C) Only applicable to color images D) A high-pass filter Answer: B
43. The Canny edge detector includes:
A) Smoothing, gradient computation, non-max suppression, and hysteresis thresholding B) Only gradient computation C) Only smoothing D) Only thresholding Answer: A
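A usage sketch for question 43, assuming OpenCV's Python bindings (cv2): smoothing is applied explicitly here, and cv2.Canny then performs the gradient, non-maximum-suppression and hysteresis-thresholding steps.

import numpy as np
import cv2  # OpenCV

# Synthetic test image: a bright square on a dark background.
img = np.zeros((128, 128), dtype=np.uint8)
img[32:96, 32:96] = 200

# Smooth first, then run Canny with low and high hysteresis thresholds.
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
print("edge pixels found:", int((edges > 0).sum()))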
44. The vanishing point in an image is where:
A) Parallel lines in the world appear to converge B) The camera is located C) The image is brightest D) The focal point lies Answer: A
45. Chromatic aberration is caused by:
A) Different wavelengths of light focusing at different points B) Gaussian noise C) Motion blur D) Poor lighting Answer: A