Visible Surface Determination

This document discusses algorithms for visible surface determination and hidden surface elimination. It describes the image precision and object precision approaches. The key algorithms covered are the depth-buffer method, scan line method, and depth-sorting method. The depth-buffer method uses two buffers (depth and frame) to determine the visible surface at each pixel by comparing depth values. The scan line method examines surfaces intersecting each scan line to determine the nearest surface. The depth-sorting method sorts surfaces from back to front before rendering.


Visible Surface Determination

Introduction
To determine which lines or surfaces of the
objects are visible, either from the center of
projection or along the direction of projection,
so that we can display only the visible lines or
surfaces. This process is known as visible-surface
detection, or hidden-surface elimination.
Mainly two approaches:
Image precision
Object Precision

Introduction
Image precision and object precision algorithms
Pseudocode for an image precision algorithm:
for (each pixel in the image)
{
determine the object closest to the viewer that is
pierced by the projector through the pixel;
draw the pixel in the appropriate color;
}
Time complexity: proportional to n*p, where p is over
1 million for a high-resolution display and n is the
number of objects.

Image Space Methods


Based on the pixels to be drawn in 2D. Try to
determine which object should contribute to each
pixel.
Running time is proportional to the number of pixels
times the number of objects.
Space complexity is two times the number of
pixels:
one array of pixels for the frame buffer
one array of pixels for the depth buffer

Coherence properties of surfaces can be used.

Examples: the depth-buffer and ray casting methods.

Cont.
Object precision pseudocode:
for (each object in the world)
{
determine those parts of the object whose
view is unobstructed by other parts of it or
any other object;
draw those parts in the appropriate color;
}
Time complexity: proportional to n²
This approach is good when n < p, but its individual
steps are more complex and time consuming.

Object Space Methods


Algorithms to determine which parts of the shapes
are to be rendered in 3D coordinates.
Methods based on comparison of objects for their
3D positions and dimensions with respect to a
viewing position.
For n objects, may require n*n comparison
operations.
Efficient for a small number of objects but difficult to
implement.
Examples: depth sorting, area subdivision methods.

Cont.
Image precision                          Object precision

Performed at the resolution of the       Performed at the precision with
display device                           which each object is defined

Dependent on the resolution              Independent of the resolution

Operates on sampled data                 Operates on original continuous
                                         object data

Aliasing problem                         No aliasing problem

Cont.
Many algorithms combine both
object and image precision
calculations.
Object precision has accuracy;
image precision has speed.

Techniques for Efficient Visible-Surface Algorithms


We must organize visible-surface
algorithms so that costly operations
are performed as efficiently and
infrequently as possible.
Coherence:
The degree to which parts of an
environment, or of its projection, exhibit
local similarities.

Cont.
Different kinds of coherence:
Object Coherence: if one object is
entirely separate from another,
comparisons may need to be done only
between the two objects, and not between
their component faces or edges.
Face Coherence: surface properties
typically vary smoothly across a face,
allowing computations for one part of a
face to be modified incrementally to
apply to adjacent parts.

Cont.
Edge Coherence: An edge may
change visibility only where it
crosses behind a visible edge or
penetrates a visible face.
Area Coherence: a group of
adjacent pixels is often covered by the
same visible face.

Cont.
Depth coherence: once the depth at
one point of a surface is calculated,
the depth of points on the rest of
the surface can often be determined
by a simple difference equation.
Frame coherence: calculations
made for one picture can be reused
for the next in sequence.

Back-Face Detection
Back-face detection of a 3D polygon
surface is easy.
Recall the polygon surface equation:

Ax + By + Cz + D = 0

We also need to consider the viewing
direction when determining whether a
surface is a back face or a front face.
The normal of the surface is given by:

N = (A, B, C)

Back-Face Detection
A polygon surface is a back face if:

Vview · N > 0

However, remember that after
application of the viewing
transformation we are looking down
the negative z-axis. Therefore, with
Vview = (0, 0, -1), a polygon is a
back face if:

Vview · N = Vz · C = -C > 0, i.e. if C < 0

Back-Face Detection

We will also be unable to see surfaces with
C = 0. Therefore, we can identify a polygon
surface as a back face if:

C <= 0

Back-Face Detection
Back-face detection can identify all
the hidden surfaces in a scene that
contains only non-overlapping convex
polyhedra.
For scenes that contain overlapping
objects along the line of sight, we
have to apply further tests to
determine which objects obscure which.
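As a minimal sketch of the test above (assuming counter-clockwise vertex winding for front faces, and viewing coordinates in which we look down the negative z-axis), back-face culling might look like:

```python
def surface_normal(v0, v1, v2):
    # N = (A, B, C) is the cross product of two edge vectors of the polygon.
    ax, ay, az = v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2]
    bx, by, bz = v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2]
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def is_back_face(v0, v1, v2):
    # Looking down the negative z-axis, a polygon is a back face if C <= 0.
    return surface_normal(v0, v1, v2)[2] <= 0
```

A counter-clockwise triangle facing the viewer has C > 0 and is kept; reversing the winding flips the normal, and the test rejects the face.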

Example: back faces
Assume a slice through an object where
N1-N6 are the surface normals of the
boundary polygons (left-handed
system!)

Depth-Buffer Method
Also known as the z-buffer method.
It is an image-space approach.
Each surface is processed separately,
one pixel position at a time across the
surface.
The depth values for a pixel are
compared, and the closest (smallest z)
surface determines the color to be
displayed in the frame buffer.
Applied very efficiently to polygon surfaces.

Depth-Buffer Method
Two buffers are used
Frame Buffer
Depth Buffer

The z-coordinates (depth values) are
usually normalized to the range [0, 1].

Depth-Buffer method,contd
Requirements:
Two buffers are needed representing
all pixel positions:
a depth buffer keeping the (current)
z-value (depth) of each pixel
a refresh buffer, typically the frame
buffer, keeping the (current) intensity
value of each pixel

Depth-Buffer method,contd
Initializations:
the refresh buffer is initiated with the
background color
the depth buffer is initiated with z=0
(corresponds to the background
depth)

Depth-Buffer method,contd
Strategy:
Each object is processed at a time, each of
its projected polygon surfaces is then
processed (scan-converted along scanlines) separately, one point (pixel) at a
time.
A pixel is written in the refresh buffer only if
its depth position is nearer than the
currently registered value for that position
in the depth buffer

Depth-Buffer Method

Depth-Buffer Algorithm
Initialize the depth buffer and frame
buffer so that for all buffer positions
(x, y):
depthBuff(x, y) = 1.0, frameBuff(x, y) = bgColor

Process each polygon in the scene, one
at a time.
For each projected (x, y) pixel position of a
polygon, calculate the depth z.
If z < depthBuff(x, y), compute the surface
color at that position and set
depthBuff(x, y) = z, frameBuff(x, y) = surfColor(x, y).
Calculating depth values efficiently
We know the depth values at the
vertices. How can we calculate the
depth at any other point on the
surface of the polygon?
Using the polygon surface equation:

z = (-Ax - By - D) / C

Calculating depth values efficiently
For any scan line, adjacent horizontal
x positions or vertical y positions
differ by 1 unit.
The depth value of the next position
(x+1, y) on the scan line can be
obtained using:

z' = (-A(x+1) - By - D) / C = z - A/C

Calculating depth values efficiently
For adjacent scan lines we can
compute the x value using the slope m
of the projected edge and the previous
x value:

x' = x - 1/m

z' = z + (A/m + B) / C

Depth-Buffer Method, cont'd
After all objects with their projected
polygon surfaces have been scan-converted
in this way, the correct
image is in the refresh buffer.
Time-consuming (a problem in real-time
applications) and memory-demanding
(less of a problem today).

Depth-Buffer Method, cont'd
Assume plane equation:
Ax + By + Cz + D = 0
Then, the depth in (x1,y1):
z1 = (-Ax1 - By1 -D)/C
Next point on the same scan-line is (x1+1,y1)
with depth:
z2 = [-A(x1+1) - By1 -D]/C
This gives: z2 = z1 - A/C, where A/C is
constant for the whole polygon!

Depth-Buffer Method, cont'd
Similar when changing scan-line
(assume down the left side of the
polygon with an edge slope m):
z1 = (-Ax1 - By1 - D)/C and
z2 = [-A(x1 - 1/m) - B(y1 - 1) - D]/C
that gives: z2 = z1 + (A/m + B)/C
(if vertical edge, then z2 = z1 + B/C)
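These constant-difference relations can be checked numerically; the plane coefficients and edge slope below are arbitrary values chosen only for illustration:

```python
A, B, C, D = 2.0, 3.0, 1.0, -6.0    # hypothetical plane Ax + By + Cz + D = 0

def depth(x, y):
    # Depth from the plane equation: z = (-Ax - By - D) / C
    return (-A * x - B * y - D) / C

x1, y1 = 0.5, 0.25
z1 = depth(x1, y1)

# Next pixel on the same scan line: z2 = z1 - A/C
assert abs(depth(x1 + 1, y1) - (z1 - A / C)) < 1e-9

# Down a left edge with slope m: x2 = x1 - 1/m, y2 = y1 - 1,
# and the depth changes by the constant (A/m + B)/C.
m = 2.0
assert abs(depth(x1 - 1 / m, y1 - 1) - (z1 + (A / m + B) / C)) < 1e-9
```

Since A/C and (A/m + B)/C are constant for the whole polygon, each new depth costs one addition instead of a full plane-equation evaluation.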

Advantages
It is easy to implement.
No sorting of the surfaces is required.
It reduces the speed problem if
implemented in hardware.
It processes one object at a time.
Disadvantages
It requires large memory.
It is a time-consuming process.

Scan Line Method


This is an image-space method used
for removing hidden surfaces.
In this algorithm we deal with multiple
surfaces, not just a single surface.
To determine the visible surfaces, all
polygon surfaces intersecting the scan line
are examined.
Across each scan line, the surface nearest to
the view plane is determined by making
depth calculations.

After that, the intensity value for that position
is entered into the refresh buffer.
An edge table and a polygon table are set up
for the various surfaces.
The edge table contains:
the coordinate endpoints for each line in
the scene
the inverse slope of each line
the polygon identification number
indicating the polygon to which the edge
belongs.

Figure : Edge Table Entry

The polygon table contains:
the coefficients of the plane
equation
shading or color information for the
polygon
pointers to the edge table
an in-out boolean flag, initialized to
FALSE and used during scan-line
processing.

An active list of edges is formed
using the information in the edge
table.
The active list contains the edges
that cross the current scan line.
In addition, we define a flag for each
surface that is set on or off to
indicate whether a position along a
scan line is inside or outside the surface.
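Forming the active list might be sketched as follows; the edge records and their y-range fields are illustrative assumptions, not taken from the text:

```python
def active_edges(edge_table, scan_y):
    # Keep only the edges whose vertical extent crosses the current
    # scan line. The half-open test avoids counting a shared vertex twice.
    return [e for e in edge_table if e["y_min"] <= scan_y < e["y_max"]]
```

In a full implementation the list would be updated incrementally from one scan line to the next (edge coherence) rather than rebuilt from scratch.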

The active list for scan line 1 contains information
from the edge table for edges AB, BD, EH and FG.
For positions along this scan line between edges
AB and BD, only the flag for surface S2 is on.
Therefore, no depth calculation is required, and
the intensity information for surface S2 is entered
into the refresh buffer.

Figure : Scan Line Method for Hidden Surface Removal

Depth-Sorting Method or Painter's Algorithm
This method uses both image-space
and object-space operations.
It is also known as the Painter's Algorithm because
of the similarity between the way a painting is
created and the way this algorithm executes.
In this algorithm we perform these basic
tasks:
1. All the surfaces are sorted in order of
decreasing depth. For this we use the deepest
point on each surface for comparison.
2. Surfaces are scan-converted in order, starting
with the surface of greatest depth.

Referring to the figure shown below, we perform the
following tests for each surface that overlaps
with S.

Figure : Cyclically Overlapping Surfaces

If any of the tests is true, no reordering is
necessary for that surface.

Test 1: The bounding rectangles of the
two surfaces in the xy plane do not
overlap.
Test 2: Surface S is completely
behind the overlapping surface
relative to the viewing position.

Test 3: The overlapping surface is
completely in front of S relative to
the viewing position.
Test 4: The projections of the two
surfaces on the view plane do not
overlap.
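The first basic task, sorting by the deepest point of each surface, might be sketched as follows (assuming depth grows away from the viewer; the overlap tests and any reordering they trigger are omitted):

```python
def painter_order(surfaces):
    # Sort back to front: the surface whose deepest point lies farthest
    # from the viewer is scan-converted first.
    return sorted(surfaces,
                  key=lambda s: max(z for _, _, z in s["vertices"]),
                  reverse=True)
```

Rendering the returned list in order lets nearer surfaces simply paint over farther ones, as the name of the algorithm suggests.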

Binary Space Partition (BSP) Trees
Binary space partitioning is used to calculate
visibility.
To build a BSP tree, one should start with
polygons and label all the edges. Dealing with
only one edge at a time, extend each edge so
that it splits the plane in two.
Place the first edge in the tree as the root. Add
subsequent edges based on whether they are
in front of or behind it.
Edges that span the extension of an edge that
is already in the tree are split in two, and
both parts are added to the tree.

From the above figure, first take A as the root.
Put all the nodes that are in front of root A on the left
side of node A, and put all those nodes that are
behind root A on the right side, as shown in
figure b.
Process all the front nodes first and then the nodes at the
back.
As shown in figure c, we first process node B. As
there is nothing in front of node B, we have
put NIL. However, node C lies behind node
B, so node C goes to the right side of node B.
Repeat the same process for node D.
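A minimal sketch of building and traversing such a tree follows. To keep it short, each polygon is classified by a single representative point and the splitting of spanning polygons is omitted; the record fields ("plane" as a normal/offset pair, "point") are assumptions for illustration:

```python
def side(plane, point):
    # Signed distance from plane (n, d): positive means "in front".
    n, d = plane
    return sum(ni * pi for ni, pi in zip(n, point)) + d

def insert(node, poly):
    # node is None or a dict {"poly": ..., "front": ..., "back": ...}.
    if node is None:
        return {"poly": poly, "front": None, "back": None}
    branch = "front" if side(node["poly"]["plane"], poly["point"]) > 0 else "back"
    node[branch] = insert(node[branch], poly)
    return node

def back_to_front(node, eye):
    # Visit the subtree on the far side of the splitter first, then the
    # splitter itself, then the near side: a back-to-front ordering.
    if node is None:
        return []
    if side(node["poly"]["plane"], eye) > 0:
        far, near = node["back"], node["front"]
    else:
        far, near = node["front"], node["back"]
    return back_to_front(far, eye) + [node["poly"]] + back_to_front(near, eye)
```

The traversal yields a painter's ordering valid for any eye position, which is the main attraction of BSP trees: the tree is built once and reused every frame.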

RAY TRACING

4/13/16


TOPICS
OVERVIEW OF RAY TRACING.

INTERSECTING RAYS WITH OTHER PRIMITIVES.


INTRODUCTION
Ray tracing is a technique for generating an image by
tracing the path of light through pixels in an image plane.
The technique is capable of producing a very high
degree of visual realism, usually higher than that of
typical scan line rendering methods, but at a
greater computational cost.


Cont.,
Ray tracing provides a
related, but even more
powerful, approach to
rendering scenes.
A ray is cast from the eye
through the center of each
pixel and traced to see what
object it hits first and at
what point.

(Figure: a ray cast from the eye through a pixel of the frame buffer)

Cont.,
The resulting color is
then displayed at the
pixel. As the path of a ray
is traced through the scene,
interesting visual effects
such as shadowing,
reflection and refraction
are easy to incorporate,
producing dazzling
images.

PICTURES
Ray tracing can create
realistic images.
In addition to the high
degree of realism, ray
tracing can simulate the
effects of a camera due
to depth of field and
aperture shape


Cont.,
This makes ray tracing
best suited for
applications where the
image can be rendered
slowly ahead of time, such
as in still images and film
and television visual
effects, and more poorly
suited for real-time
applications.

Cont.,
Ray tracing is capable of
simulating a wide variety
of optical effects, such as
reflection and refraction,
scattering, and dispersion
phenomena (such
as chromatic aberration).


Cont.,
Optical ray tracing
describes a method for
producing visual images
constructed in 3D computer
graphics environments, with
more photorealism than
either ray casting
or scanline rendering
techniques.
It works by tracing a path
from an imaginary eye
through each pixel in a
virtual screen, and
calculating the color of the
object visible through it.

Object List
Descriptions of all the
objects are stored in an
object list.
The figure shows a ray that
intersects the sphere and the cylinder.
The hit spot (PHIT) is easily
found with the ray itself.
The ray equation at the
hit time thit:

PHIT = eye + dirrc * thit

(Figure: a ray from the eye hitting the scene at PHIT)

Pseudocode of a Ray Tracer

define the objects and light sources in the scene; set up the camera
for (int r = 0; r < nRows; r++)
  for (int c = 0; c < nCols; c++)
  {
    1. Build the rc-th ray.
    2. Find all intersections of the rc-th ray with objects in the scene.
    3. Identify the intersection that lies closest to, and in front of, the eye.
    4. Compute the hit point.
    5. Find the color of the light returning to the eye along the ray from
       the point of intersection.
    6. Place the color in the rc-th pixel.
  }

INTERSECTION OF A RAY
We need to develop the hit() method for other shape
classes.
Intersecting with a square:
A square is a useful generic shape.
The generic square lies in the z = 0 plane and extends
from -1 to +1 along both the x and y axes.
The implicit form of the equation of the square is
F(P) = Pz for |Px| <= 1 and |Py| <= 1.
The square can be transformed into any parallelogram
positioned in space, providing thin, flat surfaces such as walls,
windows, etc.

Cont.,

The function hit() finds
where the ray hits the
generic plane and
then tests whether the
hit spot lies within
the square.
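A sketch of such a hit() test for the generic square, under the assumptions above (ray P(t) = eye + t·dir, square in the plane z = 0):

```python
def hit_square(eye, direction):
    ex, ey, ez = eye
    dx, dy, dz = direction
    if dz == 0:
        return None                      # ray parallel to the z = 0 plane
    t = -ez / dz                         # parameter where the ray meets z = 0
    if t <= 0:
        return None                      # the plane is behind the eye
    px, py = ex + t * dx, ey + t * dy
    if abs(px) <= 1 and abs(py) <= 1:    # does the hit spot lie in the square?
        return t, (px, py, 0.0)
    return None
```

Returning the hit time t lets the caller keep the smallest t over all objects, which is exactly step 3 of the ray-tracer pseudocode.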


INTERSECTION OF A RAY
Intersecting with a cube (or any convex polyhedron):
A convex polyhedron is useful in many graphics
situations.
The generic cube is centered at the origin and has corners
at all eight combinations of +1 and -1.
Its edges are aligned with the coordinate axes, and its
six faces lie in the planes listed below.


Cont.,

PLANE     EQUATION    OUTWARD NORMAL    SPOT
TOP       y = 1       (0, 1, 0)         (0, 1, 0)
BOTTOM    y = -1      (0, -1, 0)        (0, -1, 0)
RIGHT     x = 1       (1, 0, 0)         (1, 0, 0)
LEFT      x = -1      (-1, 0, 0)        (-1, 0, 0)
FRONT     z = 1       (0, 0, 1)         (0, 0, 1)
BACK      z = -1      (0, 0, -1)        (0, 0, -1)

Cont.,

(Figure: the generic cube and its six face planes)

Cont.,
The generic cube is important for two reasons:
1. A large variety of interesting boxes can be modeled
and placed in a scene by applying an affine
transformation to a generic cube. In ray tracing, each
ray can be inverse-transformed into the generic cube's
coordinate system.
2. The generic cube can be used as an extent for the other
geometric primitives, in the sense of a bounding box.
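A hit() for the generic cube can be sketched with the standard slab method against the six face planes above (this formulation is an assumption consistent with, but not quoted from, the text):

```python
def hit_cube(eye, direction, eps=1e-9):
    # Intersect the ray with each pair of parallel planes (a "slab") and
    # keep the running entry/exit interval [t_in, t_out].
    t_in, t_out = float("-inf"), float("inf")
    for e, d in zip(eye, direction):
        if abs(d) < eps:
            if abs(e) > 1:
                return None              # parallel to the slab and outside it
            continue
        t0, t1 = (-1 - e) / d, (1 - e) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_in, t_out = max(t_in, t0), min(t_out, t1)
        if t_in > t_out:
            return None                  # the ray misses the cube
    if t_in > 0:
        return t_in                      # eye outside: first entry point
    return t_out if t_out > 0 else None  # eye inside the cube
```

The same routine doubles as a bounding-box test: if a ray misses an object's generic-cube extent, the (usually costlier) object intersection can be skipped.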

