Mod 5 CG

Visible-Surface Detection Methods

Classification of Visible-Surface Detection Algorithms


• We can broadly classify visible-surface detection algorithms according to whether they deal with the object definitions or with their projected images.
• Object-space methods compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labeled as visible.
• Image-space methods decide visibility point by point at each pixel position on the projection plane.
• Although there are major differences in the basic approaches taken by the various visible-surface detection algorithms, most use sorting and coherence methods to improve performance.
• Sorting is used to facilitate depth comparisons by ordering the individual surfaces in a scene according to their distance from the view plane.
• Coherence methods are used to take advantage of regularities in a scene.
4.11 Back-Face Detection
• A fast and simple object-space method for locating the back faces of a polyhedron is based on front-back tests. A point (x, y, z) is behind a polygon surface if

Ax + By + Cz + D < 0    (1)

where A, B, C, and D are the plane parameters for the polygon.
• We can simplify the back-face test by considering the direction of the normal vector N for a polygon surface. If Vview is a vector in the viewing direction from our camera position, as shown in the figure below, then a polygon is a back face if Vview · N > 0.

• In a right-handed viewing system with the viewing direction along the negative zv axis (figure below), a polygon is a back face if the z component, C, of its normal vector N satisfies C < 0.
• Also, we cannot see any face whose normal has z component C = 0, because our viewing direction is grazing that polygon. Thus, in general, we can label any polygon as a back face if its normal vector has a z component value that satisfies the inequality C ≤ 0.

• Similar methods can be used in packages that employ a left-handed viewing system. In these packages, plane parameters A, B, C, and D can be calculated from polygon vertex coordinates specified in a clockwise direction.

• Inequality 1 then remains a valid test for points behind the polygon.
• By examining parameter C for the different plane surfaces describing an object, we can immediately identify all the back faces.
• For other objects, such as the concave polyhedron in the figure below, more tests must be carried out to determine whether there are additional faces that are totally or partially obscured by other faces.

• In general, back-face removal can be expected to eliminate about half of the polygon surfaces in a scene from further visibility tests.
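A minimal sketch of this back-face test in Python, assuming a right-handed viewing system with the view along the negative z axis and counter-clockwise vertex ordering (the function names are illustrative, not from any standard library):

```python
def surface_normal(v0, v1, v2):
    """Normal N of a polygon from three counter-clockwise vertices (cross product of two edges)."""
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
    """Back face if Vview . N >= 0; with the view along -z this is exactly C <= 0."""
    return sum(n * v for n, v in zip(normal, view_dir)) >= 0.0

# A triangle facing the camera has normal (0, 0, 1), so C = 1 > 0: a front face.
tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(is_back_face(surface_normal(*tri)))  # False
```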
Depth Buffer (Z-Buffer):

It is an image-space approach developed by Catmull. The Z-depth of each surface is tested to determine the closest surface.

One pixel position across the surface is processed at a time. The color to be displayed in the frame buffer is determined by the closest (smallest depth) surface, found by comparing the depth values for each pixel.

Closer polygons overwrite farther ones through the use of two buffers: the frame buffer and the depth buffer. The depth buffer is used to store the depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).

The frame buffer is used to store the intensity (color) value at each position (x, y). The z-coordinates are usually normalized to the range [0, 1]: a z-coordinate of 0 indicates the back clipping plane and 1 indicates the front clipping plane.

Algorithm:

Step 1: Set the buffer values:

Depthbuffer(x, y) = 0

Framebuffer(x, y) = background color

Step 2: Process each polygon one at a time.

For each projected (x, y) pixel position of a polygon, calculate the depth z.

If z > depthbuffer(x, y): compute the surface color, then set depthbuffer(x, y) = z and framebuffer(x, y) = surfacecolor(x, y).
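A minimal sketch of these two steps, assuming depths normalized to [0, 1] with 0 at the back clipping plane; depth_of, color_of, and projected_pixels are hypothetical helpers standing in for per-pixel depth interpolation and polygon rasterization:

```python
WIDTH, HEIGHT = 640, 480
BACKGROUND = (0, 0, 0)

# Step 1: initialize both buffers.
depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]        # 0 = back clipping plane
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def render(polygons, depth_of, color_of):
    """Step 2: process the polygons one at a time.

    depth_of(poly, x, y) and color_of(poly, x, y) are assumed helpers that
    interpolate the depth z and the surface color at a projected pixel, and
    poly.projected_pixels() is an assumed rasterization of the polygon.
    """
    for poly in polygons:
        for x, y in poly.projected_pixels():
            z = depth_of(poly, x, y)
            if z > depth_buffer[y][x]:       # larger z is closer in this normalization
                depth_buffer[y][x] = z
                frame_buffer[y][x] = color_of(poly, x, y)
```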

Advantages:
• Implementation is easy.
• The problems related to speed are reduced once it is implemented in hardware.
• It processes one object at a time.

Disadvantages:
• A large amount of memory is required.
• It is a time-consuming process.

Scan-Line: This is an image-space method for identifying visible surfaces. All the polygons intersecting a particular scan line are grouped and processed before the next scan line is processed. This is done by maintaining two tables: an edge table and a polygon table.

The Edge Table: It contains the coordinate endpoints of each line in the scene, the inverse slope of each line, and pointers into the polygon table to connect edges to surfaces.

The Polygon Table: It contains the plane coefficients, surface material properties, other surface data, and possibly pointers to the edge table.

The search for the surfaces that cross a given scan line can be facilitated by an active list of edges. Only the edges that cross the current scan line are stored in the active list. A flag is set for each surface to indicate whether a position along the scan line is inside or outside that surface. Each scan line is processed from left to right: the surface flag is turned on at the left intersection and turned off at the right intersection, as sketched below.
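A minimal sketch of the flag toggling on one scan line, assuming the active edge list has already been reduced to the sorted x positions where edges cross the line (all names are illustrative):

```python
def fill_scan_line(y, crossings, set_pixel):
    """Process one scan line left to right, toggling an inside/outside surface flag.

    crossings: sorted x positions where edges on the active list cross this line.
    """
    inside = False
    span_start = 0
    for x in crossings:
        if not inside:
            span_start = int(x)                   # flag turns on at a left intersection
        else:
            for px in range(span_start, int(x)):  # flag turns off at the right intersection
                set_pixel(px, y)
        inside = not inside

# Edges cross scan line 10 at x = 3 and x = 8, so pixels 3..7 are set.
fill_scan_line(10, [3, 8], lambda x, y: print((x, y)))
```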

Illumination Models in Computer Graphics: An illumination model, also known as a shading model or lighting model, is used to calculate the intensity of light that is reflected at a given point on a surface. There are three factors on which the lighting effect depends:

1. Light Source: The light source is the light-emitting source. There are three types of light sources:
a. Point sources — emit rays in all directions (e.g., a bulb in a room).
b. Parallel sources — can be considered as a point source that is far from the surface (e.g., the sun).
c. Distributed sources — rays originate from a finite area (e.g., a tubelight).
The position, electromagnetic spectrum, and shape of the source determine the lighting effect.

2. Surface: When light falls on a surface, part of it is reflected and part of it is absorbed. The surface structure decides the amount of reflection and absorption of light. The position of the surface and the positions of all the nearby surfaces also determine the lighting effect.

3. Observer: The observer's position and sensor spectrum sensitivities also affect the lighting effect.

Types of Illumination Models:

1. Ambient Illumination: Assume we are standing on a road, facing a building with a glass exterior; sun rays fall on that building, reflect back from it, and then fall on the object under observation. This is ambient illumination. In simple words, ambient illumination is illumination whose source of light is indirect. The reflected intensity Iamb of any point on the surface is:

Iamb = Ka Ia

where Ia is the ambient light intensity and Ka is the surface ambient reflectivity; the value of Ka varies from 0 to 1.

2. Diffuse Reflection: Diffuse reflection occurs on surfaces that are rough or grainy. In this reflection the brightness of a point depends upon the angle between the light source and the surface. The reflected intensity Idiff of a point on the surface is:

Idiff = Kd Ip cos(θ) = Kd Ip (N · L)

where Ip is the point light intensity, Kd is the surface diffuse reflectivity (its value varies from 0 to 1), N is the surface normal, L is the light direction, and θ is the angle between N and L.

3. Specular Reflection: When light falls on any shiny or glossy surface, most of it is reflected back; such reflection is known as specular reflection. The Phong model is an empirical model for specular reflection which provides a formula for calculating the reflected intensity:

Ispec = Ks Ip cosⁿ(α) = Ks Ip (R · V)ⁿ

where Ip is the point light intensity, Ks is the surface specular reflectivity, L is the direction of the light source, N is the surface normal, R is the direction of the reflected ray, V is the direction of the observer, α is the angle between R and V, and n is the specular-reflection exponent.
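A minimal sketch combining the three terms above into a single intensity, assuming scalar intensities and normalized vectors; the reflectivity values and the exponent are illustrative defaults:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(n, l, v, ia, ip, ka=0.1, kd=0.7, ks=0.5, n_exp=32):
    """I = Ka*Ia + Kd*Ip*(N . L) + Ks*Ip*(R . V)^n, clamping each cosine at zero."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    n_dot_l = max(dot(n, l), 0.0)
    r = tuple(2.0 * n_dot_l * nc - lc for nc, lc in zip(n, l))  # R = 2(N . L)N - L
    r_dot_v = max(dot(r, v), 0.0)
    return ka * ia + kd * ip * n_dot_l + ks * ip * r_dot_v ** n_exp

# Light and viewer both along the normal give the strongest diffuse and specular terms.
print(phong_intensity((0, 0, 1), (0, 0, 1), (0, 0, 1), ia=1.0, ip=1.0))  # 1.3
```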

Rendering in Computer Graphics: Rendering is the process of generating an image from a 2D or 3D model with a computer program. The rendering process is based on the geometry, viewpoint, texture, lighting, and shading information describing the virtual scene, and is used to give the effect of an artist's impression of a scene. Rendering is mostly used in architectural design, video games, animated movies, simulators, TV special effects, and design visualization. The techniques and features used vary according to the project. Rendering helps increase efficiency and reduce cost in design.

Types of Rendering in Computer Graphics:

1. Real-Time Rendering: The prominent rendering technique used in interactive graphics and gaming, where images must be created at a rapid pace. Because user interaction is high in such environments, real-time image creation is required. Dedicated graphics hardware and pre-compiling of the available information have improved the performance of real-time rendering.

2. Pre-Rendering: This rendering technique is used in environments where speed is not a concern and the image calculations are performed using multi-core central processing units rather than dedicated graphics hardware. It is mostly used in animation and visual effects, where photorealism needs to be at the highest standard possible.

Techniques for Computing Rendering:

1. Rasterization and Scanline: Geometrically projects objects in the scene onto an image plane, without advanced optical effects. There are two approaches: pixel-by-pixel (image order) and primitive-by-primitive (object order).

A high-level representation of an image necessarily contains elements, referred to as primitives, in a different domain from pixels. In a schematic drawing, for instance, line segments and curves might be primitives. In the rendering of 3D models, triangles and polygons in space might be primitives. The pixel-by-pixel approach can be impractical or too slow: for instance, large areas of the image may be empty of primitives, yet this approach must still pass through them. Rasterization ignores those areas; it renders by looping through each of the primitives, determining which pixels in the image it affects, and modifying those pixels accordingly (see the sketch below). This method is used by all current graphics cards. Rasterization is usually the option chosen when interactive rendering is needed; however, the pixel-by-pixel approach can often produce higher-quality images and is more flexible. In the older form of rasterization, an entire face (primitive) is rendered with a single color. The more sophisticated form is more complicated: the vertices of a face are rendered first, and then the pixels of that face are rendered as a blending of the vertex colors.
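A minimal sketch of this primitive-by-primitive loop for one triangle, assuming integer screen coordinates and counter-clockwise winding; the edge-function test is one common way to decide pixel coverage, not the only one:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area: positive when (px, py) lies to the left of the edge a -> b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, set_pixel):
    """Object-order loop: visit only the pixels inside the primitive's bounding box."""
    xs, ys = [v0[0], v1[0], v2[0]], [v0[1], v1[1], v2[1]]
    for y in range(min(ys), max(ys) + 1):
        for x in range(min(xs), max(xs) + 1):
            # Covered if the pixel is on the inner side of all three edges.
            if (edge(*v1, *v2, x, y) >= 0 and
                    edge(*v2, *v0, x, y) >= 0 and
                    edge(*v0, *v1, x, y) >= 0):
                set_pixel(x, y)

# Fill a small counter-clockwise triangle, printing each covered pixel.
rasterize_triangle((0, 0), (4, 0), (0, 4), lambda x, y: print((x, y)))
```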

2. Ray Casting: The geometry model in ray casting is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. The color value of the object at the point of intersection may be evaluated using several methods. In the simplest method, the object's color value becomes the value of that pixel. The color may instead be determined from a texture map. A more sophisticated method is to modify the color value by an illumination factor. To reduce artifacts, a number of rays in slightly different directions may be averaged.

This technique is considered much faster than ray tracing: rays are only traced from the eye of the observer to the first object they hit, whereas in ray tracing the tracing continues from that object in search of the light source where the light originated. However, compared to ray tracing, the images generated with ray casting are not very realistic. Due to the geometric constraints involved in the process, not all shapes can be rendered by ray casting.
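A minimal sketch of ray casting against a scene of spheres, assuming normalized ray directions and the simplest shading rule (the first object's own color becomes the pixel value); the scene layout is illustrative:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere hit, or None (direction must be normalized)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def cast(origin, direction, spheres, background=(0, 0, 0)):
    """Return the color of the first object hit; no rays are traced onward from it."""
    nearest, color = None, background
    for center, radius, sphere_color in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest):
            nearest, color = t, sphere_color
    return color

scene = [((0, 0, -5), 1.0, (255, 0, 0))]             # one red sphere straight ahead
print(cast((0, 0, 0), (0, 0, -1), scene))            # hits the sphere: (255, 0, 0)
```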

3. Ray Tracing: A rendering technique that traces the path of light for each pixel in an image plane, reproducing the path that each light ray follows in reverse, from the eye back to its point of origin. The process repeats until all pixels are formed. This technique accounts for reflection, refraction, and shadow effects from points within the scene. Ray tracing also accumulates the color value of the light and the value of the reflection coefficient of the object in determining the color of the depiction on the screen. By using the ray tracing technique, effects such as reflection, refraction, scattering, and chromatic aberration can be obtained.

Often, ray tracing methods are utilized to approximate the solution to the rendering equation by applying Monte Carlo methods to it. Some of the most used methods are path tracing, bidirectional path tracing, and Metropolis light transport, but semi-realistic methods are also in use, such as Whitted-style ray tracing, or hybrids.
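A minimal sketch of this reverse tracing with a single mirror bounce per recursion, reusing hit_sphere and the sphere scene from the ray-casting sketch above and normalize from the Phong sketch; the shared reflection coefficient is an illustrative assumption:

```python
REFLECTIVITY = 0.5   # illustrative reflection coefficient shared by every object

def reflect(d, n):
    """Mirror direction d about unit normal n: d - 2(d . n) n."""
    k = 2.0 * sum(a * b for a, b in zip(d, n))
    return tuple(a - k * b for a, b in zip(d, n))

def trace(origin, direction, spheres, depth=0, max_depth=3):
    """Follow a ray in reverse from the eye, blending in the color of the mirror bounce."""
    best = None
    for center, radius, color in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, color)
    if best is None:
        return (0.0, 0.0, 0.0)                       # background
    t, center, color = best
    if depth == max_depth:
        return color                                 # stop recursing; keep the local color
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = normalize(tuple(p - c for p, c in zip(point, center)))
    eye = tuple(p + 1e-4 * n for p, n in zip(point, normal))  # nudge off the surface
    bounced = trace(eye, reflect(direction, normal), spheres, depth + 1, max_depth)
    # Accumulate color: weight the local color against what the reflected ray sees.
    return tuple((1 - REFLECTIVITY) * c + REFLECTIVITY * b for c, b in zip(color, bounced))

print(trace((0, 0, 0), (0, 0, -1), scene))           # scene from the ray-casting sketch
```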

4. Radiosity: This technique is not usually implemented as a rendering technique itself; instead, it calculates the passage of light as it leaves the light source and illuminates surfaces, which are then usually rendered to the display using one of the other three techniques.

It is a rendering technique based on a detailed analysis of light reflection from diffuse surfaces. It divides the scene into smaller patches to find color details, so the process is slow, but the resulting visualization is neat and smooth. Radiosity is more precisely used for the final result of an object.

Rendering Methods:

1. Hidden Line Rendering: This method represents objects whose surfaces are covered or blocked by other objects with lines representing the edges of the object; some lines are not drawn because of the surfaces that block them.

2. Ray Tracing Rendering: This method produces photorealistic images. The basic concept of this method is to follow the process experienced by light on its way from the light source to the screen and estimate what color is displayed on the pixel where the light falls. The process is repeated until all the required pixels are formed. The idea of this method originated from Rene Descartes's experiment, in which he showed the formation of a rainbow using a glass ball filled with water by tracing the direction of light.

3. Shaded Rendering: In this method, the computer is required to perform various calculations for lighting, surface characteristics, shadow casting, etc. This method produces a very realistic image, but the disadvantage is the long rendering time required.

4. Wireframe Rendering: In wireframe rendering, an object is formed only from the visible lines that describe its edges. This method can be done by a computer very quickly; the only drawback is the absence of surfaces, so an object looks transparent, and there is often a misunderstanding between the front and back sides of an object.
