Mod 5 CG
In a right-handed viewing system with the viewing direction along the negative zv axis
(Figure below), a polygon is a back face if the z component, C, of its normal vector N
satisfies C < 0.
Also, we cannot see any face whose normal has z component C = 0, because our viewing
direction grazes that polygon. Thus, in general, we can label any polygon as a back
face if its normal vector has a z component that satisfies the inequality C ≤ 0.
Similar methods can be used in packages that employ a left-handed viewing system.
In these packages, plane parameters A, B, C, and D can be calculated from polygon vertex
coordinates specified in a clockwise direction.
Inequality 1 then remains a valid test for points behind the polygon.
By examining parameter C for the different plane surfaces describing an object, we can
immediately identify all the back faces.
For other objects, such as the concave polyhedron in Figure below, more tests must be carried
out to determine whether there are additional faces that are totally or partially obscured by other
faces.
In general, back-face removal can be expected to eliminate about half of the polygon
surfaces in a scene from further visibility tests.
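The C ≤ 0 test above can be sketched in a few lines of Python. This is a minimal illustration, not a full culling pass: the function names are assumptions, and polygon vertices are assumed to be listed counterclockwise (when viewed from the front) so that the cross product gives an outward normal.

```python
# Back-face test sketch for a right-handed viewing system looking along -z.
# Vertices are assumed counterclockwise when the polygon faces the viewer.
def normal_z(v0, v1, v2):
    # z component C of the normal (v1 - v0) x (v2 - v0); only x and y
    # coordinates of the vertices matter for this component.
    ax, ay = v1[0] - v0[0], v1[1] - v0[1]
    bx, by = v2[0] - v0[0], v2[1] - v0[1]
    return ax * by - ay * bx

def is_back_face(v0, v1, v2):
    # C <= 0 means the face points away from (or grazes) the viewer.
    return normal_z(v0, v1, v2) <= 0

print(is_back_face((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # False: faces the viewer
print(is_back_face((0, 0, 0), (0, 1, 0), (1, 0, 0)))  # True: reversed winding
```

Reversing the vertex order flips the sign of C, which is why consistent winding (counterclockwise here) is essential for the test to work.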
Depth Buffer (Z-Buffer):
It is an image-space approach developed by Catmull. The Z-depth of each surface is tested to
determine the closest surface.
Across the surface, one pixel position is processed at a time. The color that is to be displayed in the
frame buffer is determined by the closest surface (the one with the largest normalized depth value),
by comparing the depth values for each pixel.
The method uses two buffers, namely the frame buffer and the depth buffer. The depth
buffer is used to store a depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).
The frame buffer is used to store the intensity or color value at each position (x, y). The z-
coordinates are usually normalized to the range [0, 1]. The value 0 for the z-coordinate indicates the back
clipping plane and the value 1 indicates the front clipping plane.
Algorithm:
1. Initialize depthbuffer (x, y) = 0 and framebuffer (x, y) = background color for all positions (x, y).
2. For each surface, and for each pixel position (x, y) it covers, calculate the depth z.
3. If z > depthbuffer (x, y), compute the surface color and set depthbuffer (x, y) = z,
framebuffer (x, y) = surfacecolor (x, y).
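A minimal Python sketch of this algorithm follows. The buffer size, color names, and the `plot` function are illustrative assumptions; depth uses the convention above, where 0 is the back clipping plane and 1 is the front, so a larger z is closer.

```python
# Z-buffer sketch: depth 0 = back clipping plane, 1 = front clipping plane.
WIDTH, HEIGHT = 4, 4
depthbuffer = [[0.0] * WIDTH for _ in range(HEIGHT)]           # all depths start at 0
framebuffer = [["background"] * WIDTH for _ in range(HEIGHT)]  # background color

def plot(x, y, z, color):
    # Keep the fragment only if it is closer (larger z) than what is stored.
    if z > depthbuffer[y][x]:
        depthbuffer[y][x] = z
        framebuffer[y][x] = color

plot(1, 1, 0.5, "red")   # stored: closer than the background
plot(1, 1, 0.3, "blue")  # rejected: farther than the red fragment already there
```

Note that surfaces can be processed in any order: the comparison per pixel resolves visibility without sorting, which is the main appeal of the method.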
Advantages:
+ It is easy to implement.
+ The problems related to speed are reduced once it is implemented in hardware.
+ It processes one object at a time.
Scan-Line: This is an image-space method for identifying visible surfaces. All the polygons intersecting
a particular scan line must be grouped and processed before the next scan line is processed.
This is done by maintaining two tables: an edge table and a polygon table.
The Edge Table: It contains coordinate endpoints of each line in the scene, the inverse slope of each
line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table: It contains the plane coefficients, surface material properties, other surface data,
and possibly pointers to the edge table.
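As a rough illustration, the two tables can be pictured as records like the following. All field names and values here are assumptions for the sketch, not a fixed format; the essential point is the cross-linking between edges and polygons.

```python
# Hypothetical scan-line tables; field names and values are illustrative only.
edge_table = [
    {"endpoints": ((2.0, 0.0), (4.0, 5.0)),  # coordinate endpoints of the edge
     "inv_slope": 0.4,                       # dx/dy, used to step x per scan line
     "polygon": 0},                          # pointer into polygon_table
]
polygon_table = [
    {"plane": (0.0, 0.0, 1.0, -3.0),  # plane coefficients A, B, C, D
     "material": "matte_red",         # surface material properties
     "edges": [0]},                   # back-pointers into edge_table
]

# The edge's polygon pointer and the polygon's edge list refer to each other:
print(polygon_table[edge_table[0]["polygon"]]["edges"])  # [0]
```

The inverse slope (dx/dy) is stored because the scan-line loop advances y by one each step, so the next x intersection is found with a single addition.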
1. Light Source: The light source is the light-emitting source. There are three types of light sources:
a. Point sources — the source emits rays in all directions (a bulb in a room).
b. Parallel sources — can be considered a point source that is far from the surface (the sun).
c. Distributed sources — rays originate from a finite area (a tube light).
Their position, electromagnetic spectrum, and shape determine the lighting effect.
2. Surface: When light falls on a surface, part of it is reflected and part of it is absorbed. The
surface structure decides the amount of reflection and absorption of light. The position of the
surface and the positions of all nearby surfaces also determine the lighting effect.
3. Observer: The observer's position and sensor spectrum sensitivities also affect the lighting
effect.
Types of Illumination Models:
1. Ambient Illumination: Assume we are standing on a road, facing a building with a glass exterior;
sun rays fall on that building, reflect back from it, and then fall on the object under observation.
This is ambient illumination. In simple words, ambient illumination is the one where the source of
light is indirect. The reflected intensity Iamb of any point on the surface is:
Iamb = Ka Ia
where Ia is the intensity of the ambient light and Ka is the surface's ambient reflection
coefficient (0 ≤ Ka ≤ 1).
2. Diffuse Reflection: Diffuse reflection occurs on surfaces that are rough or grainy. In this
reflection, the brightness of a point depends upon the angle between the light direction and the
surface normal. The reflected intensity Idiff of a point on the surface is:
Idiff = Kd Ip cos(θ) = Kd Ip (N · L)
where Ip is the point light intensity, Kd is the surface diffuse reflectivity (Kd varies from 0 to 1),
N is the surface normal, and L is the light direction.
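A small Python sketch of the diffuse term follows. The function names are illustrative; N and L are assumed to be unit vectors, and the dot product is clamped at zero so surfaces facing away from the light receive no diffuse contribution.

```python
# Diffuse term: Idiff = Kd * Ip * max(0, N . L), with N and L unit vectors.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(Kd, Ip, N, L):
    # Clamp so that surfaces turned away from the light are not darkened below 0.
    return Kd * Ip * max(0.0, dot(N, L))

print(diffuse(0.8, 1.0, (0, 0, 1), (0, 0, 1)))   # 0.8: light straight overhead
print(diffuse(0.8, 1.0, (0, 0, 1), (0, 0, -1)))  # 0.0: light behind the surface
```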
3. Specular Reflection: When light falls on a shiny or glossy surface, most of it is reflected back; such
reflection is known as specular reflection. The Phong model is an empirical model for specular reflection
which provides the formula for calculating the reflected intensity:
Ispec = W(θ) Ip cos^n(α) = Ks Ip (R · V)^n
where W(θ) ≈ Ks is the specular reflection coefficient, Ip is the intensity of the light source, n is the
specular-reflection (shininess) exponent, R is the reflection direction, V is the viewing direction,
and L is the direction of the light source.
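A Python sketch of the Phong specular term, taking W(θ) as the constant Ks. The reflection vector is computed with the standard identity R = 2(N · L)N - L; the function layout and names are illustrative, and all vectors are assumed to be unit length.

```python
# Phong specular term: Ispec = Ks * Ip * (R . V)**n, all vectors unit length.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(L, N):
    # Mirror the light direction L about the surface normal N: R = 2(N.L)N - L.
    d = dot(N, L)
    return tuple(2 * d * n - l for n, l in zip(N, L))

def specular(Ks, Ip, L, N, V, n):
    # n is the shininess exponent: a larger n gives a tighter highlight.
    return Ks * Ip * max(0.0, dot(reflect(L, N), V)) ** n

print(specular(0.5, 1.0, (0, 0, 1), (0, 0, 1), (0, 0, 1), 10))  # 0.5: mirror view
```

When the viewer looks exactly along the mirror direction, (R · V) = 1 and the full Ks Ip is returned; as V moves away from R the cosine term, raised to the power n, falls off quickly.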
2. Pre-Rendering: This rendering technique is used in environments where speed is not a concern and
the image calculations are performed using multi-core central processing units rather than dedicated
graphics hardware. This rendering technique is mostly used in animation and visual effects, where
photorealism needs to be at the highest standard possible.
Techniques for Computing Rendering:
1. Rasterization and Scanline: Geometrically projects objects in the scene onto an image plane, without
advanced optical effects. There are two approaches: pixel-by-pixel (image order) and primitive-by-
primitive (object order).
accordingly. This method is used by all current graphics cards. Rasterization usually becomes the
choice when interactive rendering is needed; however, the pixel-by-pixel approach can often produce
higher-quality images and is more flexible. In the older form of rasterization, an entire face (primitive)
is rendered with a single color. The alternative is more complicated, because we must first render the
vertices of a face and then render the pixels of that face as a blending of the vertex colors.
2. Ray Casting: The geometry model is parsed pixel by pixel, line by line, from the point of
view outward, as if casting rays out from the point of view. The color value of the object at the point
of intersection may be evaluated using several methods. In the simplest method, the object's color
value at the intersection becomes the value of that pixel. The color may also be determined from a
texture map. A more sophisticated method is to modify the color value by an illumination factor.
To reduce artifacts, a number of rays in slightly different directions may be averaged.
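The pixel-by-pixel idea can be sketched with a single assumed sphere and orthographic rays fired along +z. The scene, resolution, and characters below are all illustrative, and the shading uses the simplest method above: a hit pixel simply takes the object's "color".

```python
# Ray-casting sketch: one ray per pixel, fired along +z from the image plane.
CENTRE, RADIUS = (0.0, 0.0, 5.0), 1.0  # assumed sphere in front of the viewer

def hit_sphere(ox, oy):
    # A ray from (ox, oy, 0) in direction (0, 0, 1) hits the sphere iff it
    # passes within RADIUS of the line through the sphere's centre.
    dx, dy = ox - CENTRE[0], oy - CENTRE[1]
    return dx * dx + dy * dy <= RADIUS * RADIUS

# Simplest shading: a hit pixel takes the object's color ('*'), else background.
image = [["*" if hit_sphere((x - 2) / 2, (y - 2) / 2) else "." for x in range(5)]
         for y in range(5)]
for row in image:
    print("".join(row))
```

Because the rays here are parallel rather than diverging from an eye point, this is an orthographic simplification; a perspective caster would compute a per-pixel ray direction from the eye through the image plane instead.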
3. Ray Tracing: This rendering technique traces the path of light for each pixel in an image plane,
reproducing the path that each light ray follows in reverse, from the eye back to its point of
origin. The process repeats until all pixels are formed. This technique involves reflection,
refraction, and shadow effects from points within the scene. Ray tracing also accumulates the color
value of the light and the value of the reflection coefficient of the object in determining the
color of the depiction on the screen. By using this ray tracing technique, effects such as reflection,
refraction, scattering, and chromatic aberration can be obtained.
divides the field into smaller fields to find color details, so the process is slow, but the resulting
visualization is neat and smooth. Radiosity is more precisely used for the final result of an object.
Rendering Methods:
1. Hidden Line Rendering: This method is used to represent objects whose surfaces are covered or
blocked by other objects with lines representing the edges of the object, but some lines are not
visible because of the surfaces that block them.
2. Ray Tracing Rendering: This method produces photorealistic images. The basic concept of this
method is to follow the process experienced by light on its way from the light source to the screen
and to estimate what color is displayed at the pixel where the light falls. The process is
repeated until all the required pixels are formed. The idea of this method originated from Rene
Descartes's experiment, in which he showed the formation of a rainbow using a glass ball filled with
water by following the direction of light.
3. Shaded Rendering: In this method, the computer is required to perform various calculations
involving lighting, surface characteristics, shadow casting, etc. This method produces a very realistic
image, but the disadvantage is the long rendering time required.
4. Wireframe Rendering: In wireframe rendering, an object is formed only from visible lines that
describe the edges of the object. This method can be done by a computer very quickly; the only
drawback is the absence of surfaces, so that an object looks transparent, and there is often a
misunderstanding between the front and back sides of an object.