Visible Surface Determination
Introduction
To determine which lines or surfaces of the
objects are visible, either from the center of
projection or along the direction of projection,
so that only the visible lines or surfaces are
displayed. This process is known as visible-surface
detection or hidden-surface elimination.
Mainly two approaches:
Image precision
Object precision
Introduction
Image precision and object precision algorithm
Pseudocode for image precision algorithm
for (each pixel in the image) {
    determine the object closest to the viewer that is
    pierced by the projector through the pixel;
    draw the pixel in the appropriate color;
}
Time complexity: proportional to np, where p is over
1 million for a high-resolution display and n is the
number of objects
Cont.
Object precision pseudo code
for (each object in the world) {
    determine those parts of the object whose
    view is unobstructed by other parts of it or
    any other object;
    draw those parts in the appropriate color;
}
Time complexity: proportional to n^2
This approach is good when n < p, but its individual
steps are more complex and time-consuming
Cont.
Image precision: aliasing problem
Object precision: no aliasing problem
Cont.
Many algorithms combine both
object- and image-precision
calculations: object precision offers
accuracy, while image precision
offers speed
Cont.
Different kinds of coherence:
Object Coherence: if one object is
entirely separate from another,
comparisons may need to be done only
between two objects, and not between
their component faces or edges.
Face Coherence: surface properties
typically vary smoothly across a face,
allowing computations for one part of
a face to be modified incrementally to
apply to an adjacent part.
Cont.
Edge Coherence: An edge may
change visibility only where it
crosses behind a visible edge or
penetrates a visible face.
Area Coherence: a group of
adjacent pixels is often covered by
the same visible face.
Cont.
Depth Coherence: once the depth at
one point of a surface is calculated,
the depth of points on the rest of
the surface can often be determined
by a simple difference equation.
Frame Coherence: calculations
made for one picture can be reused
for the next in sequence.
Back-Face Detection
Back-face detection of 3D polygon
surfaces is easy
Recall the polygon surface equation:
Ax + By + Cz + D = 0
N = (A, B, C)
Back-Face Detection
A polygon surface is a back face if:
V_view · N ≥ 0
However, remember that after
application of the viewing
transformation we are looking down
the negative z-axis, so
V_view = (0, 0, -1). Therefore a
polygon is a back face if:
V_view · N = V_z · C ≥ 0
i.e. simply if C ≤ 0
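The C ≤ 0 test above can be sketched in Python. This is a hedged sketch, not code from the slides: the helper names polygon_normal and is_back_face are assumptions, and it assumes vertices are given in viewing coordinates, counterclockwise as seen from the front.

```python
# Hedged sketch of the back-face test (C <= 0) in viewing coordinates.
# Assumption (not from the slides): vertices are listed counterclockwise
# as seen from the front, so a front face has normal component C > 0.

def polygon_normal(v0, v1, v2):
    """Plane normal N = (A, B, C) from three polygon vertices."""
    ux, uy, uz = v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2]
    wx, wy, wz = v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2]
    # Cross product u x w gives the plane normal.
    return (uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx)

def is_back_face(v0, v1, v2):
    """Back face when the z component of the normal satisfies C <= 0."""
    return polygon_normal(v0, v1, v2)[2] <= 0

# A triangle facing the viewer (normal along +z) is a front face:
# is_back_face((0, 0, 0), (1, 0, 0), (0, 1, 0))  ->  False
```

Reversing the vertex order flips the normal, so the same triangle listed clockwise is classified as a back face.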
Back-Face Detection
Back-face detection can identify all
the hidden surfaces in a scene that
contains only non-overlapping convex
polyhedra.
For scenes that contain overlapping
objects along the line of sight, we
have to apply further tests to
determine which objects obscure which.
Example: back faces
Assume a slice through an object where
N1-N6 are the surface normals of the
boundary polygons (left-handed
system!)
Depth-Buffer Method
Also known as z-buffer method.
It is an image-space approach
Each surface is processed separately
one pixel position at a time across the
surface
The depth values for a pixel are
compared and the closest (smallest z)
surface determines the color to be
displayed in the frame buffer.
It is applied very efficiently to polygon surfaces
Depth-Buffer Method
Two buffers are used
Frame Buffer
Depth Buffer
Depth-Buffer Method, cont'd
Requirements:
Two buffers are needed representing
all pixel positions:
a depth buffer keeping the (current)
z-value (depth) of each pixel
a refresh buffer, typically the frame
buffer, keeping the (current) intensity
value of each pixel
Depth-Buffer Method, cont'd
Initializations:
the refresh buffer is initialized with the
background color
the depth buffer is initialized with z=0
(corresponds to the background
depth)
Depth-Buffer Method, cont'd
Strategy:
One object is processed at a time; each of
its projected polygon surfaces is then
scan-converted along scan lines separately,
one point (pixel) at a time.
A pixel is written to the refresh buffer only if
its depth is nearer than the currently
registered value for that position
in the depth buffer
Depth-Buffer Method
Depth-Buffer Algorithm
Initialize the depth buffer and frame
buffer so that for all buffer positions
(x,y),
depthBuff(x,y) = 1.0, frameBuff(x,y) = bgColor
For each projected polygon, the depth at (x, y)
follows from the plane equation:
z = (-Ax - By - D) / C
Stepping along a scan line from x to x + 1:
z' = z - A/C
Stepping down to the next scan line along an
edge with slope m:
z' = z + (A/m + B)/C
Depth-Buffer Method, cont'd
After all objects with their projected
polygon surfaces have been
scan-converted in this way, the correct
image is in the refresh buffer
Time-consuming (a problem in real-time
applications) and memory-demanding
(less of a problem today)
Depth-Buffer Method, cont'd
Assume plane equation:
Ax + By + Cz + D = 0
Then, the depth at (x1,y1):
z1 = (-Ax1 - By1 -D)/C
Next point on the same scan-line is (x1+1,y1)
with depth:
z2 = [-A(x1+1) - By1 -D]/C
This gives: z2 = z1 - A/C, where A/C is
constant for the whole polygon!
Depth-Buffer Method, cont'd
Similar when changing scan-line
(assume down the left side of the
polygon with an edge slope m):
z1 = (-Ax1 - By1 - D)/C and
z2 = [-A(x1 - 1/m) - B(y1 - 1) - D]/C
that gives: z2 = z1 + (A/m + B)/C
(if vertical edge, then z2 = z1 + B/C)
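The incremental update z2 = z1 - A/C can be sketched as the depth-buffer inner loop for one scan line. This is a minimal sketch, not the slides' code: the buffer sizes, the infinity initialization (instead of a normalized 1.0), and the name fill_span are illustrative assumptions.

```python
# Hedged sketch of the depth-buffer inner loop for one polygon span,
# using the incremental depth update z' = z - A/C along a scan line.
# WIDTH/HEIGHT, the span bounds, and the plane are illustrative assumptions.

WIDTH, HEIGHT = 8, 4
INFINITY = float("inf")

depth_buff = [[INFINITY] * WIDTH for _ in range(HEIGHT)]   # farthest depth
frame_buff = [["bg"] * WIDTH for _ in range(HEIGHT)]       # background color

def fill_span(y, x_start, x_end, plane, color):
    """Scan-convert one polygon span on scan line y.

    plane = (A, B, C, D) with Ax + By + Cz + D = 0 and C != 0;
    here smaller z means closer to the viewer.
    """
    a, b, c, d = plane
    z = (-a * x_start - b * y - d) / c      # depth at the left end of the span
    for x in range(x_start, x_end + 1):
        if z < depth_buff[y][x]:            # closer than the stored depth?
            depth_buff[y][x] = z
            frame_buff[y][x] = color
        z -= a / c                          # incremental update: z' = z - A/C

# Example: the plane z = 5 - x, i.e. A=1, B=0, C=1, D=-5, on scan line y=2.
fill_span(2, 1, 4, (1.0, 0.0, 1.0, -5.0), "red")
```

Only one division per span is needed; every following pixel costs a single subtraction, which is the point of the depth-coherence argument above.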
Advantages
It is easy to implement.
No sorting of the surfaces is required.
It reduces the speed problem if
implemented in hardware.
It processes one object at a time.
Disadvantages
It requires large memory.
It is a time-consuming process.
RAY TRACING
4/13/16
TOPICS
OVERVIEW OF RAY TRACING.
INTRODUCTION
Ray tracing is a technique for generating an image by
tracing the path of light through pixels in an image plane.
The technique is capable of producing a very high
degree of visual realism, usually higher than that of
typical scan line rendering methods, but at a
greater computational cost.
Cont.,
[Figure: a ray traced from the eye through a pixel of the frame buffer into the scene]
Cont.,
The resulting color is then displayed
at the pixel. As the path of a ray is
traced through the scene, interesting
visual effects such as shadowing,
reflection, and refraction are easy to
incorporate, producing dazzling
images.
PICTURES
Ray tracing can create
realistic images.
In addition to the high
degree of realism, ray
tracing can simulate the
effects of a camera due
to depth of field and
aperture shape
Cont.,
This makes ray tracing
best suited for
applications where the
image can be rendered
slowly ahead of time, such
as in still images and film
and television visual
effects, and less well suited for
real-time applications.
Cont.,
Ray tracing is capable of
simulating a wide variety
of optical effects, such as
reflection and refraction,
scattering, and dispersion
phenomena (such
as chromatic aberration).
Cont.,
Optical ray tracing
describes a method for
producing visual images
constructed in 3D computer
graphics environments, with
more photorealism than
either ray casting
or scanline rendering
techniques.
It works by tracing a path
from an imaginary eye
through each pixel in a
virtual screen, and
calculating the color of the
object visible through it.
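The "eye through each pixel" step above can be sketched as primary-ray generation in Python. This is a hedged sketch: the eye position, the 2x2 screen at z = 0, and the 4x4 resolution are illustrative assumptions, not values from the slides.

```python
# Hedged sketch: one primary ray per pixel, from an imaginary eye through
# a virtual screen. EYE, screen extent, and NX/NY are assumed values.

EYE = (0.0, 0.0, 5.0)          # eye in front of the screen plane z = 0
NX, NY = 4, 4                  # image resolution in pixels

def pixel_ray(ix, iy):
    """Ray (origin, direction) through the center of pixel (ix, iy)."""
    # Map pixel indices to screen coordinates in [-1, 1] x [-1, 1].
    sx = -1.0 + 2.0 * (ix + 0.5) / NX
    sy = -1.0 + 2.0 * (iy + 0.5) / NY
    # Direction from the eye through the pixel center on the z = 0 screen.
    dx, dy, dz = sx - EYE[0], sy - EYE[1], 0.0 - EYE[2]
    return EYE, (dx, dy, dz)

# One ray per pixel of the image plane.
rays = [pixel_ray(ix, iy) for iy in range(NY) for ix in range(NX)]
```

Each ray would then be intersected with the object list, and the color of the closest hit written to the pixel.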
Object List
Descriptions of all the
objects are stored in an
object list.
The ray shown intersects the
sphere and the cylinder.
The hit spot (P_hit) is easily
found with the ray itself.
The ray equation at the
hit time t_hit:
P_hit = eye + dir · t_hit
[Figure: a ray from the eye through a pixel of the frame buffer, hitting the scene at P_hit]
INTERSECTION OF A RAY
We need to develop the hit() method for other shape
classes.
Intersecting with a square:
A square is a useful generic shape.
The generic square lies in the z = 0 plane and extends
from -1 to +1 along both the x and y axes.
The implicit form of the equation of the square is
F(P) = P_z for |P_x| <= 1 and |P_y| <= 1.
The square can be transformed into any parallelogram
positioned in space, providing thin, flat surfaces like
walls, windows, etc.
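A hit() test for the generic square along these lines might look as follows in Python. This is a hedged sketch: hit_square is an assumed name, not the book's actual shape-class method, and the ray is P(t) = eye + t · dir with t > 0.

```python
# Hedged sketch of intersecting a ray with the generic square:
# the z = 0 plane, clipped to |x| <= 1 and |y| <= 1.

def hit_square(eye, direction):
    """Return the hit time t > 0, or None if the ray misses the square."""
    ex, ey, ez = eye
    dx, dy, dz = direction
    if dz == 0.0:
        return None                 # ray parallel to the z = 0 plane
    t = -ez / dz                    # solve P_z(t) = e_z + t*d_z = 0
    if t <= 0.0:
        return None                 # plane lies behind the ray origin
    px, py = ex + t * dx, ey + t * dy
    if abs(px) <= 1.0 and abs(py) <= 1.0:
        return t                    # hit spot is inside the square
    return None

# A ray straight down -z from (0, 0, 5) hits the square at t = 5.
```

The plane hit costs one division; the extent check is then two absolute-value comparisons at the hit spot.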
INTERSECTION OF A RAY
Intersecting with a cube or any convex polyhedron:
A convex polyhedron is useful in many graphics
situations.
The generic cube is centered at the origin and has
eight corners, using all combinations of +1 and -1 for
the x, y, and z coordinates.
The edges are aligned with the coordinate axes, and its
six faces lie in the planes x = ±1, y = ±1, z = ±1.
Cont.,
PLANE NAME   EQUATION   OUTWARD NORMAL   SPOT
TOP          y = 1      (0, 1, 0)        (0, 1, 0)
BOTTOM       y = -1     (0, -1, 0)       (0, -1, 0)
RIGHT        x = 1      (1, 0, 0)        (1, 0, 0)
LEFT         x = -1     (-1, 0, 0)       (-1, 0, 0)
FRONT        z = 1      (0, 0, 1)        (0, 0, 1)
BACK         z = -1     (0, 0, -1)       (0, 0, -1)
Cont.,
The generic cube is important for 2 reasons:
1. A large variety of interesting boxes can be modeled
and placed in a scene by applying an affine
transformation to a generic cube. In ray tracing, each
ray can be inverse-transformed into the generic cube's
coordinate system.
2. The generic cube can be used as an extent for the other
geometric primitives, in the sense of a bounding box.
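Both uses rely on a fast ray-cube test. One common way to intersect a ray with the generic cube is the slab method, sketched here in Python; hit_cube is an assumed name, and this code is not given in the slides.

```python
# Hedged sketch: intersecting a ray with the generic cube by clipping the
# candidate interval [t_in, t_out] against the three slabs -1 <= x,y,z <= 1.

def hit_cube(eye, direction):
    """Return (t_in, t_out) for the generic cube, or None on a miss."""
    t_in, t_out = 0.0, float("inf")     # start from t >= 0: a ray, not a line
    for e, d in zip(eye, direction):
        if d == 0.0:
            if abs(e) > 1.0:
                return None              # parallel to and outside this slab
            continue
        t0, t1 = (-1.0 - e) / d, (1.0 - e) / d
        if t0 > t1:
            t0, t1 = t1, t0              # order the two face-plane hits
        t_in, t_out = max(t_in, t0), min(t_out, t1)
        if t_in > t_out:
            return None                  # interval became empty: a miss
    return (t_in, t_out)

# A ray from (0, 0, 5) toward -z enters the cube at t = 4 and exits at t = 6.
```

Because an inverse-transformed ray is still a ray, this same test serves both for transformed boxes and for bounding-box extents around other primitives.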