Three-Dimensional Viewing
4.1 Overview of Three-Dimensional Viewing Concepts
When we model a three-dimensional scene, each object in the scene is typically defined
with a set of surfaces that form a closed boundary around the object interior.
In addition to procedures that generate views of the surface features of an object, graphics
packages sometimes provide routines for displaying internal components or cross-
sectional views of a solid object.
Many processes in three-dimensional viewing, such as the clipping routines, are similar
to those in the two-dimensional viewing pipeline.
But three-dimensional viewing involves some tasks that are not present in two-dimensional viewing.
This coordinate reference defines the position and orientation for a view plane (or projection plane) that corresponds to a camera film plane, as shown in the figure below.
✓ One method for getting the description of a solid object onto a view plane is to project points on the object surface along parallel lines. This technique is called parallel projection.
Three parallel-projection views of an object, showing relative proportions from different viewing positions.
✓ Another method, called perspective projection, projects points to the view plane along converging paths; this causes objects farther from the viewing position to be displayed smaller than objects of the same size that are nearer to the viewing position.
Depth Cueing
Depth information is important in a three-dimensional scene so that we can easily identify,
for a particular viewing direction, which is the front and which is the back of each
displayed object.
There are several ways in which we can include depth information in the two-
dimensional representation of solid objects.
A simple method for indicating depth with wire-frame displays is to vary the brightness of line segments according to their distances from the viewing position; this is termed depth cueing.
The lines closest to the viewing position are displayed with the
highest intensity, and lines farther away are displayed with decreasing intensities.
Depth cueing is applied by choosing a maximum and a minimum intensity value
and a range of distances over which the intensity is to vary.
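As a point of reference (this particular linear form is an assumption and is not stated in these notes), depth cueing is commonly applied with an attenuation function of the distance d from the viewing position,
f_depth(d) = (d_max - d) / (d_max - d_min)
which scales an object's intensity from 1.0 at the nearest distance d_min down to 0.0 at d_max.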
Another application of depth cueing is modeling the effect of the atmosphere on the perceived intensity of objects.
We could display the nonvisible lines as dashed lines, or we could remove the nonvisible lines from the display.
Surface Rendering
We set the lighting conditions by specifying the color and location of the light
sources, and we can also set background illumination effects.
Surface properties of objects include whether a surface is
transparent or opaque and whether the surface is smooth or rough.
We set values for parameters to model surfaces such as glass, plastic, wood-grain
patterns, and the bumpy appearance of an orange.
Three-Dimensional and Stereoscopic Viewing
The vibrations of the mirror are synchronized with the display of the scene on the cathode
ray tube (CRT).
As the mirror vibrates, the focal length varies so that each point in the scene is reflected
to a spatial position corresponding to its depth.
Stereoscopic devices present two views of a scene: one for the left eye and the other for
the right eye.
The viewing positions correspond to the eye positions of the viewer. These two views are
typically displayed on alternate refresh cycles of a raster monitor
Clipping windows, viewports, and display windows are usually specified as rectangles
with their edges parallel to the coordinate axes.
The viewing position, view plane, clipping window, and clipping planes are all specified
within the viewing-coordinate reference frame.
Figure above shows the general processing steps for creating and transforming a three-
dimensional scene to device coordinates.
Once the scene has been modeled in world coordinates, a viewing-coordinate system is
selected and the description of the scene is converted to viewing coordinates
Select a world-coordinate position P0 = (x0, y0, z0) for the viewing origin, which is called the view point or viewing position, and specify a view-up vector V, which defines the yview direction.
Figure below illustrates the positioning of a three-dimensional viewing-coordinate frame
within a world system.
✓ A right-handed viewing-coordinate system, with axes xview, yview, and zview, relative to a right-handed world-coordinate frame.
An additional scalar parameter is used to set the position of the view plane at some coordinate value zvp along the zview axis.
This parameter value is usually specified as a distance from the viewing origin along the
direction of viewing, which is often taken to be in the negative zview direction.
Vector N can be specified in various ways. In some graphics systems, the direction for N
is defined to be along the line from the world-coordinate origin to a selected point
position.
Other systems take N to be in the direction from a reference point Pref to the viewing origin P0.
Specifying the view-plane normal vector N as the direction from a selected reference point Pref to
the viewing-coordinate origin P0.
Once we have chosen a view-plane normal vector N, we can set the direction for the
view-up vector V.
This vector is used to establish the positive direction for the yview axis.
Because the view-plane normal vector N defines the direction for the zview axis, vector V
should be perpendicular to N.
But, in general, it can be difficult to determine a direction for V that is precisely
perpendicular to N.
Therefore, viewing routines typically adjust the user-defined orientation of vector V so that V is projected onto a plane that is perpendicular to the view-plane normal vector.
With a left-handed system, increasing zview values are interpreted as being farther from
the viewing position along the line of sight.
But right-handed viewing systems are more common, because they have the same
orientation as the world-reference frame.
Because the view-plane normal N defines the direction for the zview axis and the view-up
vector V is used to obtain the direction for the yview axis, we need only determine the
direction for the xview axis.
Using the input values for N and V, we can compute a third vector, U, that is perpendicular to both N and V.
Vector U then defines the direction for the positive xview axis.
We determine the correct direction for U by taking the vector cross product of V and N
so as to form a right-handed viewing frame.
The vector cross product of N and U also produces the adjusted value for V,
perpendicular to both N and U, along the positive yview axis.
Following these procedures, we obtain the following set of unit axis vectors for a right-
handed viewing coordinate system.
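The unit vectors themselves are not reproduced in these notes; from the cross-product construction just described, they can be written as
n = N / |N|,    u = (V × n) / |V × n|,    v = n × u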
The coordinate system formed with these unit vectors is often described as a uvn
viewing-coordinate reference frame
Alternatively, different views of an object or group of objects can
be generated using geometric transformations without changing the viewing
parameters
For the rotation transformation, we can use the unit vectors u, v, and n to form the composite rotation matrix that superimposes the viewing axes onto the world frame. This transformation matrix is
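The matrix is missing from these notes; with u = (ux, uy, uz), v = (vx, vy, vz), and n = (nx, ny, nz), it has the standard form

$$ R = \begin{bmatrix} u_x & u_y & u_z & 0 \\ v_x & v_y & v_z & 0 \\ n_x & n_y & n_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$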
where the elements of matrix R are the components of the uvn axis vectors.
The coordinate transformation matrix is then obtained as the product of the preceding
translation and rotation matrices:
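Written out (reconstructed here, since the product is not shown), with T the translation that brings the viewing origin P0 to the world origin:

$$ M_{WC,VC} = R \cdot T = \begin{bmatrix} u_x & u_y & u_z & -\mathbf{u} \cdot \mathbf{P_0} \\ v_x & v_y & v_z & -\mathbf{v} \cdot \mathbf{P_0} \\ n_x & n_y & n_z & -\mathbf{n} \cdot \mathbf{P_0} \\ 0 & 0 & 0 & 1 \end{bmatrix} $$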
Translation factors in this matrix are calculated as the vector dot product of each of the u,
v, and n unit vectors with P0, which represents a vector from the world origin to the
viewing origin.
In an oblique parallel projection, the projection paths are at an oblique angle to the view plane; in an orthogonal projection, they are perpendicular to it.
Front, side, and rear orthogonal projections of an object are called elevations, and a top orthogonal projection is called a plan view.
We can also form orthogonal projections that display more than one face of an object.
Such views are called axonometric orthogonal projections.
The most commonly used axonometric projection is the isometric projection, which is
generated by aligning the projection plane (or the object) so that the plane intersects each
coordinate axis in which the object is defined, called the principal axes, at the same
distance from the origin
Orthogonal Projection Coordinates
With the projection direction parallel to the zview axis, the transformation equations for an
orthogonal projection are trivial. For any position (x, y, z) in viewing coordinates, as in
Figure below, the projection coordinates are xp = x, yp = y
For three-dimensional viewing, the clipping window is positioned on the view plane with its edges parallel to the xview and yview axes, as shown in the figure below. If we want to use some other shape or orientation for the clipping window, we must develop our own viewing procedures.
The edges of the clipping window specify the x and y limits for the part of the scene that
we want to display.
These limits are used to form the top, bottom, and two sides of a clipping region called
the orthogonal-projection view volume.
Because projection lines are perpendicular to the view plane, these four boundaries are
planes that are also perpendicular to the view plane and that pass through the edges of the
clipping window to form an infinite clipping region, as in Figure below.
To limit the extent of the view volume in the zview direction, we can also select positions for two boundary planes that are parallel to the view plane; these two planes are called the near-far clipping planes, or the front-back clipping planes.
The near and far planes allow us to exclude objects that are in front of or behind the part
of the scene that we want to display.
When the near and far planes are specified, we obtain a finite orthogonal view volume
that is a rectangular parallelepiped, as shown in Figure below along with one possible
placement for the view plane
Once we have established the limits for the view volume, coordinate descriptions inside
this rectangular parallelepiped are the projection coordinates, and they can be mapped
into a normalized view volume without any further projection processing.
Some graphics packages use a unit cube for this normalized view volume, with each of
the x, y, and z coordinates normalized in the range from 0 to 1.
Also, z-coordinate positions for the near and far planes are denoted as znear and zfar,
respectively. Figure below illustrates this normalization transformation
The normalization transformation for the orthogonal view volume is
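The matrix itself is not reproduced in these notes; assuming the normalized view volume is the symmetric cube with coordinates ranging from -1 to 1, it takes the form

$$ M_{ortho,norm} = \begin{bmatrix} \dfrac{2}{xw_{max}-xw_{min}} & 0 & 0 & -\dfrac{xw_{max}+xw_{min}}{xw_{max}-xw_{min}} \\ 0 & \dfrac{2}{yw_{max}-yw_{min}} & 0 & -\dfrac{yw_{max}+yw_{min}}{yw_{max}-yw_{min}} \\ 0 & 0 & \dfrac{-2}{z_{near}-z_{far}} & \dfrac{z_{near}+z_{far}}{z_{near}-z_{far}} \\ 0 & 0 & 0 & 1 \end{bmatrix} $$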
Objects are then displayed with foreshortening effects, and projections of distant objects
are smaller than the projections of objects of the same size that are closer to the view
plane
The projection line intersects the view plane at the coordinate position (xp, yp, zvp), where
zvp is some selected position for the view plane on the zview axis.
We can write equations describing coordinate positions along this perspective-projection
line in parametric form as
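The parametric equations are omitted in these notes; for a point (x, y, z) projected toward the projection reference point (xprp, yprp, zprp), they can be written as
x' = x - (x - xprp) u,    y' = y - (y - yprp) u,    z' = z - (z - zprp) u,    0 ≤ u ≤ 1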
On the view plane, z’ = zvp and we can solve the z’ equation for parameter u at this
position along the projection line:
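Solving gives (the equation is not shown in the notes):
u = (zvp - z) / (zprp - z)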
Substituting this value of u into the equations for x’ and y’, we obtain the general
perspective-transformation equations
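These equations, reconstructed from the substitution just described, are
xp = x (zprp - zvp) / (zprp - z) + xprp (zvp - z) / (zprp - z)
yp = y (zprp - zvp) / (zprp - z) + yprp (zvp - z) / (zprp - z)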
Case 2:
Sometimes the projection reference point is fixed at the coordinate origin, and
(xprp, yprp, zprp) = (0, 0, 0) :
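Substituting xprp = yprp = zprp = 0 into the general equations above gives (reconstructed, since the result is omitted here)
xp = x (zvp / z),    yp = y (zvp / z)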
Case 3:
If the view plane is the uv plane and there are no restrictions on the placement of the
projection reference point, then we
have zvp = 0:
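Substituting zvp = 0 into the general equations gives (reconstruction)
xp = x (zprp / (zprp - z)) - xprp (z / (zprp - z)),    yp = y (zprp / (zprp - z)) - yprp (z / (zprp - z))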
Case 4:
With the uv plane as the view plane and the projection reference point on the zview axis,
the perspective equations are
xprp = yprp = zvp = 0:
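Substituting these values into the general equations gives (reconstruction)
xp = x (zprp / (zprp - z)),    yp = y (zprp / (zprp - z))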
The view plane is usually placed between the projection reference point and the scene,
but, in general, the view plane could be placed anywhere except at the projection point.
If the projection reference point is between the view plane and the scene, objects are inverted on the view plane (see the figure below).
Perspective effects also depend on the distance between the projection reference point
and the view plane, as illustrated in Figure below.
If the projection reference point is close to the view plane, perspective effects are emphasized; that is, closer objects will appear much larger than more distant objects of the same size. Similarly, as the projection reference point moves farther from the view plane, the difference in the size of near and far objects decreases.
Principal vanishing points for perspective-projection views of a cube: when the cube in (a) is projected to a view plane that intersects only the z axis, a single vanishing point in the z direction (b) is generated; when the cube is projected to a view plane that intersects both the z and x axes, two vanishing points (c) are produced.
Perspective-Projection View Volume
The displayed view of a scene includes only those objects within the pyramid, just as we cannot see objects beyond our peripheral vision, which are outside the cone of vision.
By adding near and far clipping planes that are perpendicular to the zview axis (and parallel to the view plane), we chop off parts of the infinite, perspective-projection view volume to form a truncated pyramid, or frustum, view volume.
But with a perspective projection, we could also use the near clipping plane to take out
large objects close to the view plane that could project into unrecognizable shapes within
the clipping window.
Similarly, the far clipping plane could be used to cut out objects far from the projection reference point that might project to small blots on the view plane.
First, the homogeneous coordinates are calculated using the perspective-transformation matrix, Ph = Mpers · P, where Ph is the column-matrix representation of the homogeneous point (xh, yh, zh, h) and P is the column-matrix representation of the coordinate position (x, y, z, 1).
Second, after other processes have been applied, such as the normalization
transformation and clipping routines, homogeneous coordinates are divided by parameter h to
obtain the true transformation-coordinate positions.
The following matrix gives one possible way to formulate a perspective-
projection matrix.
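The matrix itself is missing from these notes; one formulation consistent with the general perspective-transformation equations given earlier, taking the homogeneous parameter as h = zprp - z, is

$$ M_{pers} = \begin{bmatrix} z_{prp}-z_{vp} & 0 & -x_{prp} & x_{prp}\,z_{vp} \\ 0 & z_{prp}-z_{vp} & -y_{prp} & y_{prp}\,z_{vp} \\ 0 & 0 & s_z & t_z \\ 0 & 0 & -1 & z_{prp} \end{bmatrix} $$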
Parameters sz and tz are the scaling and translation factors for normalizing the projected values of z-coordinates. Specific values for sz and tz depend on the normalization range we select.
Because the frustum centerline intersects the view plane at the coordinate location (xprp,
yprp, zvp), we can express the corner positions for the clipping window in terms of the
window dimensions:
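With width and height denoting the clipping-window dimensions, the implied corner coordinates are
xwmin = xprp - width/2,    xwmax = xprp + width/2
ywmin = yprp - height/2,    ywmax = yprp + height/2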
Another way to specify a symmetric perspective projection is to use parameters that
approximate the properties of a camera lens.
A photograph is produced with a symmetric perspective projection of a scene onto the
film plane.
Reflected light rays from the objects in a scene are collected on the film plane from
within the “cone of vision” of the camera.
This cone of vision can be referenced with a field-of-view angle, which is a measure of
the size of the camera lens.
A large field-of-view angle, for example, corresponds to a wide-angle lens.
In computer graphics, the cone of vision is approximated with a symmetric frustum, and
we can use a field-of-view angle to specify an angular size for the frustum.
For a given projection reference point and view-plane position, the field-of-view angle determines the height of the clipping window. From the right triangles in the diagram of the figure below, we see that
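tan(θ/2) = (height/2) / (zprp - zvp)
so that height = 2 (zprp - zvp) tan(θ/2). (This relation does not appear in the notes and is reconstructed from the frustum geometry, with θ the field-of-view angle.)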
Therefore, the diagonal elements with the value zprp −zvp could be replaced by either of
the following two expressions
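Presumably (reconstructed from the relation above) these are
height / (2 tan(θ/2))    or    (ywmax - ywmin) / (2 tan(θ/2))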
In this case, we can first transform the view volume to a symmetric frustum and then to a
normalized view volume.
An oblique perspective-projection view volume can be converted to a symmetric frustum by applying a z-axis shearing-transformation matrix. This transformation shifts all positions on any plane that is perpendicular to the z axis by an amount that is proportional to the distance of the plane from a specified z-axis reference position.
The computations for the shearing transformation, as well as for the perspective and
normalization transformations, are greatly reduced if we take the projection reference
point to be the viewing-coordinate origin.
Taking the projection reference point as (xprp, yprp, zprp) = (0, 0, 0), we obtain the
elements of the required shearing matrix as
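These elements are not shown in the notes. Writing the shear as x' = x + shzx · z, y' = y + shzy · z and requiring that it move the center of the clipping window (taken here to lie on the near plane at znear) onto the zview axis gives, under these assumptions,
shzx = -(xwmin + xwmax) / (2 znear),    shzy = -(ywmin + ywmax) / (2 znear)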
Similarly, with the projection reference point at the viewing-coordinate origin and with
the near clipping plane as the view plane, the perspective-projection matrix is simplified
to
Concatenating the simplified perspective-projection matrix with the shear matrix we have
Because the centerline of the rectangular parallelepiped view volume is now the zview
axis, no translation is needed in the x and y normalization transformations: We require
only the x and y scaling parameters relative to the coordinate origin.
The scaling matrix for accomplishing the xy normalization is
And the elements of the normalized transformation matrix for a general perspective-
projection are
In normalized coordinates, the znorm = −1 face of the symmetric cube corresponds to
the clipping-window area. And this face of the normalized cube is mapped to the
rectangular viewport, which is now referenced at zscreen = 0.
Thus, the lower-left corner of the viewport screen area is at position (xvmin, yvmin, 0) and
the upper-right corner is at position (xvmax, yvmax, 0).
glMatrixMode (GL_MODELVIEW);
When the viewing parameters are specified, a viewing matrix is formed and concatenated with the current modelview matrix; we therefore first set the modelview mode with the statement above.
gluLookAt (x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);
Viewing parameters are specified with the above GLU function.
This function designates the origin of the viewing reference frame as the world-
coordinate position P0 = (x0, y0, z0), the reference position as Pref =(xref, yref, zref), and
the view-up vector as V = (Vx, Vy, Vz).
If we do not invoke the gluLookAt function, the default OpenGL viewing parameters are
P0 = (0, 0, 0)
Pref = (0, 0, −1)
V = (0, 1, 0)
All parameter values in this function are to be assigned double-precision floating-point numbers.
Function glOrtho generates a parallel projection that is perpendicular to the view plane
Parameters dnear and dfar denote distances in the negative zview direction from the
viewing-coordinate origin
We can assign any values (positive, negative, or zero) to these parameters, so long as
dnear<dfar.
Example: glOrtho (-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
To maintain the proportions of objects in a scene, we set the aspect ratio of the viewport
equal to the aspect ratio of the clipping window.
Display windows are created and managed with GLUT routines. The default viewport in
OpenGL is the size and position of the current display window
#include <GL/glut.h>
GLint winWidth = 600, winHeight = 600; // Initial display-window size.
GLfloat x0 = 100.0, y0 = 50.0, z0 = 50.0; // Viewing-coordinate origin.
GLfloat xref = 50.0, yref = 50.0, zref = 0.0; // Look-at point.
GLfloat Vx = 0.0, Vy = 1.0, Vz = 0.0; // View-up vector.
/* Set coordinate limits for the clipping window: */
GLfloat xwMin = -40.0, ywMin = -60.0, xwMax = 40.0, ywMax = 60.0;
/* Set positions for near and far clipping planes: */
GLfloat dnear = 25.0, dfar = 125.0;
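/* The code between these declarations and the vertex calls below is missing
   from the notes. The following is a plausible reconstruction based on the
   standard OpenGL orthographic-viewing example; the white background, green
   fill color, and wire-frame back face are assumptions. */
void init (void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);   /* White display window. */

    glMatrixMode (GL_MODELVIEW);
    gluLookAt (x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);   /* Viewing parameters. */

    glMatrixMode (GL_PROJECTION);
    glOrtho (xwMin, xwMax, ywMin, ywMax, dnear, dfar);      /* Orthographic projection. */
}
void displayFcn (void)
{
    glClear (GL_COLOR_BUFFER_BIT);

    glColor3f (0.0, 1.0, 0.0);            /* Green fill for the square. */
    glPolygonMode (GL_FRONT, GL_FILL);
    glPolygonMode (GL_BACK, GL_LINE);     /* Wire frame if the back face is visible. */

    glBegin (GL_QUADS);
    glVertex3f (0.0, 0.0, 0.0);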
glVertex3f (100.0, 0.0, 0.0);
glVertex3f (100.0, 100.0, 0.0);
glVertex3f (0.0, 100.0, 0.0);
glEnd ( );
glFlush ( );
}
void reshapeFcn (GLint newWidth, GLint newHeight)
{
glViewport (0, 0, newWidth, newHeight);
winWidth = newWidth;
winHeight = newHeight;
}
void main (int argc, char** argv)
{
glutInit (&argc, argv);
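/* The display-window setup calls are missing here; a typical GLUT sequence
   (the window position, size, and title are illustrative assumptions): */
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
glutInitWindowPosition (50, 50);
glutInitWindowSize (winWidth, winHeight);
glutCreateWindow ("Orthographic Projection of a Square");
init ( );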
glutDisplayFunc (displayFcn);
glutReshapeFunc (reshapeFcn);
glutMainLoop ( );
}
Object-space methods: compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.
Image-space methods: visibility is decided point by point at each pixel position on the
projection plane.
Although there are major differences in the basic approaches taken by the various visible-
surface detection algorithms, most use sorting and coherence methods to improve
performance.
Sorting is used to facilitate depth comparisons by ordering the individual surfaces in a
scene according to their distance from the view plane.
Coherence methods are used to take advantage of regularities in a scene.
We can simplify the back-face test by considering the direction of the normal vector N for a polygon surface. If Vview is a vector in the viewing direction from our camera position, as shown in the figure below, then a polygon is a back face if
Vview · N > 0
In a right-handed viewing system with the viewing direction along the negative zv axis
(Figure below), a polygon is a back face if the z component, C, of its normal vector N
satisfies C < 0.
Also, we cannot see any face whose normal has z component C = 0, because our viewing
direction is grazing that polygon. Thus, in general, we can label any polygon as a back
face if its normal vector has a z component value that satisfies the inequality
C ≤ 0
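A small C sketch of this test is given below. It assumes a right-handed viewing system with the viewing direction along the negative zview axis and counterclockwise vertex ordering for front-facing polygons; the Vec3 type and the function names are only for illustration, not part of these notes.

#include <stdbool.h>

typedef struct { double x, y, z; } Vec3;

/* Cross product of the edge vectors (v2 - v1) and (v3 - v1) gives the
   polygon normal N = (A, B, C) for vertices listed counterclockwise. */
static Vec3 polygonNormal (Vec3 v1, Vec3 v2, Vec3 v3)
{
    Vec3 e1 = { v2.x - v1.x, v2.y - v1.y, v2.z - v1.z };
    Vec3 e2 = { v3.x - v1.x, v3.y - v1.y, v3.z - v1.z };
    Vec3 n;
    n.x = e1.y * e2.z - e1.z * e2.y;   /* A */
    n.y = e1.z * e2.x - e1.x * e2.z;   /* B */
    n.z = e1.x * e2.y - e1.y * e2.x;   /* C */
    return n;
}

/* Back face if the z component C of the normal satisfies C <= 0, that is,
   the face points away from (or is edge-on to) the viewing direction. */
static bool isBackFace (Vec3 v1, Vec3 v2, Vec3 v3)
{
    return polygonNormal (v1, v2, v3).z <= 0.0;
}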
Similar methods can be used in packages that employ a left-handed viewing system. In these packages, plane parameters A, B, C, and D can be calculated from polygon vertex
coordinates specified in a clockwise direction.
Inequality 1 then remains a valid test for points behind the polygon.
By examining parameter C for the different plane surfaces describing an object, we can
immediately identify all the back faces.
For other objects, such as the concave polyhedron in Figure below, more tests must be
carried out to determine whether there are additional faces that are totally or partially
obscured by other faces
In general, back-face removal can be expected to eliminate about half of the polygon
surfaces in a scene from further visibility tests.
This visibility-detection approach is also frequently referred to as the z-buffer method, because object depth is usually measured along the z axis of a viewing system.
Figure above shows three surfaces at varying distances along the orthographic projection line
from position (x, y) on a view plane.
Depth-Buffer Algorithm
➔ Initialize the depth buffer and frame buffer so that for all buffer positions (x, y),
depthBuff (x, y) = 1.0, frameBuff (x, y) = backgndColor
➔ Process each polygon in a scene, one at a time, as follows:
For each projected (x, y) pixel position of a polygon, calculate the depth z (if not already known).
If z < depthBuff (x, y), compute the surface color at that position and set
depthBuff (x, y) = z, frameBuff (x, y) = surfColor (x, y)
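A minimal C sketch of these two steps is given below. The buffer dimensions, the Color type, and the polygonDepth/surfColor helpers are illustrative assumptions, standing in for the plane-equation depth calculation and the surface-shading computation described in the text; the depth values are assumed to be normalized to the range 0 to 1.

#define XDIM 640   /* Assumed frame-buffer width.  */
#define YDIM 480   /* Assumed frame-buffer height. */

typedef struct { float r, g, b; } Color;

extern float polygonDepth (int x, int y);   /* Depth from the polygon's plane equation. */
extern Color surfColor (int x, int y);      /* Shaded surface color at (x, y).          */

static float depthBuff [XDIM][YDIM];
static Color frameBuff [XDIM][YDIM];

/* Step 1: initialize both buffers at every pixel position. */
void initBuffers (Color backgndColor)
{
    for (int x = 0; x < XDIM; x++)
        for (int y = 0; y < YDIM; y++) {
            depthBuff [x][y] = 1.0f;          /* Maximum (farthest) normalized depth. */
            frameBuff [x][y] = backgndColor;
        }
}

/* Step 2: for each polygon, call this at every projected pixel position (x, y). */
void testAndSetPixel (int x, int y)
{
    float z = polygonDepth (x, y);
    if (z < depthBuff [x][y]) {               /* Point is closer than what is stored. */
        depthBuff [x][y] = z;
        frameBuff [x][y] = surfColor (x, y);
    }
}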
Given the depth values for the vertex positions of any polygon in a scene, we can calculate the depth at any other point on the plane containing the polygon.
At surface position (x, y), the depth is calculated from the plane equation as
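The equation is omitted in these notes; with the polygon's plane written as Ax + By + Cz + D = 0, the depth would be
z = (-A x - B y - D) / C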
If the depth of position (x, y) has been determined to be z, then the depth z' of the next position (x + 1, y) along the scan line is obtained as
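(reconstructed from the plane equation above)
z' = (-A (x + 1) - B y - D) / C = z - A/C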
The ratio −A/C is constant for each surface, so succeeding depth values across a scan
line are obtained from preceding values with a single addition.
We can implement the depth-buffer algorithm by starting at a top vertex of the polygon.
Then, we could recursively calculate the x-coordinate values down a left edge of the
polygon.
The x value for the beginning position on each scan line can be calculated from the
beginning (edge) x value of the previous scan line as
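For an edge with slope m, this omitted relation is
x' = x - 1/m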
Depth values down this edge are obtained recursively as
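Substituting x' = x - 1/m and y' = y - 1 into the plane-equation depth (a reconstruction; the notes omit the result) gives
z' = z + (A/m + B) / C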
If we are processing down a vertical edge, the slope is infinite and the recursive
calculations reduce to
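With a vertical edge, x is unchanged from one scan line to the next, so (reconstruction)
z' = z + B/C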
One slight complication with this approach is that while pixel positions are at integer (x, y) coordinates, the actual point of intersection of a scan line with the edge of a polygon may not be.
As a result, it may be necessary to adjust the intersection point by rounding its fractional
part up or down, as is done in scan-line polygon fill algorithms.
An alternative approach is to use a midpoint method or Bresenham-type algorithm for
determining the starting x values along edges for each scan line.
The method can be applied to curved surfaces by determining depth and color values at
each surface projection point.
In addition, the basic depth-buffer algorithm often performs needless calculations.
Objects are processed in an arbitrary order, so that a color can be computed for a surface
point that is later replaced by a closer surface.
OpenGL Depth-Buffer Functions
To use the OpenGL depth-buffer visibility-detection routines, we first need to modify the GL Utility Toolkit (GLUT) initialization function for the display mode to include a request for the depth buffer, as well as for the refresh buffer:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
Depth buffer values can then be initialized with
glClear (GL_DEPTH_BUFFER_BIT);
The preceding initialization sets all depth-buffer values to the maximum value 1.0 by default.
The OpenGL depth-buffer visibility-detection routines are activated with the following
function:
glEnable (GL_DEPTH_TEST);
And we deactivate the depth-buffer routines with
glDisable (GL_DEPTH_TEST);
We can also apply depth-buffer visibility testing using some other initial value for the maximum depth, and this initial value is chosen with the OpenGL function:
glClearDepth (maxDepth);
Parameter maxDepth can be set to any value between 0.0 and 1.0.
Projection coordinates in OpenGL are normalized to the range from −1.0
to 1.0, and the depth values between the near and far clipping planes are
further normalized to the range from 0.0 to 1.0.
As an option, we can adjust these normalization values with
glDepthRange (nearNormDepth, farNormDepth);
By default, nearNormDepth = 0.0 and farNormDepth = 1.0.
But with the glDepthRange function, we can set these two parameters to
any values within the range from 0.0 to 1.0, including nearNormDepth >
farNormDepth
Another option available in OpenGL is the test condition that is to be used for the depth-buffer routines. We specify a test condition with the following function:
glDepthFunc (testCondition);
O Parameter testCondition can be assigned any one of the following eight symbolic
constants: GL_LESS, GL_GREATER, GL_EQUAL, GL_NOTEQUAL,
GL_LEQUAL, GL_GEQUAL, GL_NEVER (no points are processed), and
GL_ALWAYS.
O The default value for parameter testCondition is GL_LESS.
We can also set the status of the depth buffer so that it is in a read-only state or in a read-
write state. This is accomplished with
glDepthMask (writeStatus);
O When writeStatus = GL_TRUE (the default value), we can both read from and
write to the depth buffer.
O With writeStatus = GL_FALSE, the write mode for the depth buffer is disabled
and we can retrieve values only for comparison in depth testing.
OpenGL Depth-Cueing Function
We can vary the brightness of an object as a function of its distance from the viewing
position with
glEnable (GL_FOG);
glFogi (GL_FOG_MODE, GL_LINEAR);
This applies the linear depth function to object colors using dmin = 0.0 and dmax = 1.0. But we
can set different values for dmin and dmax with the following function calls:
glFogf (GL_FOG_START, minDepth);
glFogf (GL_FOG_END, maxDepth);
In these two functions, parameters minDepth and maxDepth are assigned floating-
point values, although integer values can be used if we change the function suffix to i.
We can use the glFog function to set an atmosphere color that is to be combined with the color of an object after applying the linear depth-cueing function.
4.14 Questions
1. Explain the 3D viewing pipeline.
2. Explain the 3D viewing parameters.
3. Explain the process of transformation from world to viewing coordinates.
4. Explain perspective projections.
5. Explain the different perspective-projection view volumes.
6. Explain the OpenGL 3D viewing functions.
7. Explain the classification of visible-surface detection algorithms.
8. Explain the depth-buffer algorithm.
9. Explain the OpenGL functions for visible-surface detection.