
Geometric Objects and Transformations Module 4

Module 4
Viewing
4.1 CLASSICAL AND COMPUTER VIEWING
There are two reasons for examining classical viewing. First, many of the jobs that were formerly done by hand
drawing such as animation in movies, architectural rendering, drafting, and mechanical-parts design are now
routinely done with the aid of computer graphics. Practitioners of these fields need to be able to produce classical
views such as isometrics, elevations, and various perspectives and thus must be able to use the computer system
to produce such renderings. Second, the relationships between classical and computer viewing show many
advantages of, and a few difficulties with, the approach used by most APIs.
We have objects, a viewer, projectors, and a projection plane (Figure 5.1). The projectors meet at the center of
projection (COP). The COP corresponds to the center of the lens in the camera or in the eye, and in a computer
graphics system, it is the origin of the camera frame for perspective views. All standard graphics systems follow
the model that we described in Chapter 1, which is based on geometric optics. The projection surface is a plane,
and the projectors are straight lines. This situation is the one we usually encounter and is straightforward to
implement, especially with our pipeline model.
Both classical and computer graphics allow the viewer to be an infinite distance from the objects. Note that as we
move the COP to infinity, the projectors become parallel and the COP can be replaced by a direction of
projection (DOP), as shown in Figure 5.2. Note also that as the COP moves to infinity, we can leave the
projection plane fixed and the size of the image remains about the same, even though the COP is infinitely far
from the objects. Views with a finite COP are called perspective views; views with a COP at infinity are called
parallel views. For parallel views, the origin of the camera frame usually lies in the projection plane.
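The effect of moving the COP to infinity can be sketched with a few lines of code. This is a minimal illustration (ours, not the text's): a perspective projection through a COP at the origin onto the plane z = d, versus a parallel projection that simply drops the z-coordinate.

```python
def perspective_project(p, d=1.0):
    """Project point p = (x, y, z) onto the plane z = d through a COP at the origin."""
    x, y, z = p
    return (d * x / z, d * y / z)

def parallel_project(p):
    """Orthographic projection onto z = 0: the COP has receded to infinity,
    the projectors are parallel, and z is simply dropped."""
    x, y, z = p
    return (x, y)

# With a finite COP, the same object shrinks as it moves away (diminution):
near = perspective_project((1.0, 1.0, 2.0))   # (0.5, 0.5)
far = perspective_project((1.0, 1.0, 4.0))    # (0.25, 0.25)

# With the COP at infinity, image size is independent of distance:
assert parallel_project((1.0, 1.0, 2.0)) == parallel_project((1.0, 1.0, 4.0))
```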

Dept. of CSE, ATMECE, Mysuru Page No. 1



4.1.1 Classical Viewing


When an architect draws an image of a building, she knows which side she wishes to display and thus where she
should place the viewer in relationship to the building. Each classical view is determined by a specific relationship
between the objects and the viewer.
In classical viewing, there is the underlying notion of a principal face. The types of objects viewed in real-world
applications, such as architecture, tend to be composed of a number of planar faces, each of which can be thought
of as a principal face. For a rectangular object, such as a building, there are natural notions of the front, back, top,
bottom, right, and left faces. In addition, many real-world objects have faces that meet at right angles; thus, such
objects often have three orthogonal directions associated with them.
Figure 5.3 shows some of the main types of views. We start with the most restrictive view for each of the parallel
and perspective types, and then move to the less restrictive conditions.

4.1.2 Orthographic Projections


Our first classical view is the orthographic projection shown in Figure 5.4. In all orthographic (or orthogonal)
views, the projectors are perpendicular to the projection plane. In a multiview orthographic projection, we
make multiple projections, in each case with the projection plane parallel to one of the principal faces of the
object.
Usually, we use three views such as the front, top, and right to display the object. The reason that we produce
multiple views should be clear from Figure 5.5. For a box-like object, only the faces parallel to the projection
plane appear in the image. A viewer usually needs more than two views to visualize what an object looks like
from its multiview orthographic projections. Visualization from these images can require skill on the part of the
viewer. The importance of this type of view is that it preserves both distances and angles, and because there is no
distortion of either distance or shape, multiview orthographic projections are well suited for working drawings.
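A multiview orthographic projection amounts to dropping one coordinate per view. The sketch below (our illustration; the view names are informal) shows the three standard views of a point and why distances parallel to the projection plane are preserved.

```python
def front_view(p):
    """Projection plane parallel to the front face: drop z."""
    x, y, z = p
    return (x, y)

def top_view(p):
    """Projection plane parallel to the top face: drop y."""
    x, y, z = p
    return (x, z)

def right_view(p):
    """Projection plane parallel to the right face: drop x."""
    x, y, z = p
    return (y, z)

# Two corners of a box-like object:
a, b = (0.0, 0.0, 0.0), (2.0, 3.0, 5.0)

# In the front view, the distance between the projected images of a and b
# along x equals the true distance along x: no foreshortening, no distortion.
assert front_view(b)[0] - front_view(a)[0] == b[0] - a[0]
```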

4.1.3 Axonometric Projections

If we want to see more principal faces of our box-like object in a single view, we must remove
one of our restrictions. In axonometric views, the projectors are still orthogonal to the projection
plane, as shown in Figure 5.6, but the projection plane can have any orientation with respect to
the object. If the projection plane is placed symmetrically with respect to the three principal faces
that meet at a corner of our rectangular object, then we have an isometric view. If the projection
plane is placed symmetrically with respect to two of the principal faces, then the view is dimetric.
The general case is a trimetric view. These views are shown in Figure 5.7. Note that in an isometric view, a line
segment's length in the image space is shorter than its length measured in
the object space. This foreshortening of distances is the same in the three principal directions, so
we can still make distance measurements. In the dimetric view, however, there are two different
foreshortening ratios; in the trimetric view, there are three. Also, although parallel lines are
preserved in the image, angles are not. A circle is projected into an ellipse. This distortion is the
price we pay for the ability to see more than one principal face in a view that can be produced
easily either by hand or by computer. Axonometric views are used extensively in architectural
and mechanical design.

4.1.4 Oblique Projections


The oblique views are the most general parallel views. We obtain an oblique projection by allowing the projectors
to make an arbitrary angle with the projection plane, as shown in Figure 5.8. Consequently, angles in planes
parallel to the projection plane are preserved. A circle in a plane parallel to the projection plane is projected into
a circle, yet we can see more than one principal face of the object. Oblique views are the most difficult to construct
by hand. They are also somewhat unnatural. Most physical viewing devices, including the human visual system,
have a lens that is in a fixed relationship with the image plane; usually, the lens is parallel to the plane. Although
these devices produce perspective views, if the viewer is far from the object, the views are approximately parallel,
but orthogonal, because the projection plane is parallel to the lens. The bellows camera that we used to develop
the synthetic-camera model in Chapter 1 has the flexibility to produce such views.
From the application programmer's point of view, there is no significant difference among the different parallel
views. The application programmer specifies a type of view (parallel or perspective) and a set of parameters that
describe the camera. The problem for the application programmer is how to specify these parameters in the
viewing procedures so as best to view an object or to produce a specific classical view.

4.1.5 Perspective Viewing


All perspective views are characterized by diminution of size. When objects are moved farther from the viewer,
their images become smaller. This size change gives perspective views their natural appearance; however,
because the amount by which a line is foreshortened depends on how far the line is from the viewer, we cannot
make measurements from a perspective view. Hence, the major use of perspective views is in applications such
as architecture and animation, where it is important to achieve natural-looking images.

In the classical perspective views, the viewer is located symmetrically with respect to the projection plane, as
shown in Figure 5.9. Thus, the pyramid determined by the window in the projection plane and the center of
projection is a symmetric or right pyramid. This symmetry is caused by the fixed relationship between the back
(retina) and lens of the eye for human viewing, or between the back and lens of a camera for standard cameras,
and by similar fixed relationships in most physical situations. Some cameras, such as the bellows camera, have
movable film backs and can produce general perspective views. The model used in computer graphics includes
this general case.
The classical perspective views are usually known as one-, two-, and three-point perspectives. The differences
among the three cases are based on how many of the three principal directions in the object are parallel to the
projection plane. Consider the three perspective projections of the building shown in Figure 5.10. Any corner of
the building includes the three principal directions. In the most general case, the three-point perspective, parallel
lines in each of the three principal directions converge to a finite vanishing point (Figure 5.10(a)). If we allow
one of the principal directions to become parallel to the projection plane, we have a two-point projection (Figure
5.10(b)), in which lines in only two of the principal directions converge. Finally, in the one-point perspective
(Figure 5.10(c)), two of the principal directions are parallel to the projection plane, and we have only a single
vanishing point.
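The notion of a vanishing point can be checked numerically. Below is a small sketch (ours, not the text's): with the COP at the origin and projection plane z = d, points far along a line with direction (dx, dy, dz), dz ≠ 0, project toward the image point (d·dx/dz, d·dy/dz); a direction with dz = 0 is parallel to the projection plane and has no finite vanishing point.

```python
def project(p, d=1.0):
    """Perspective projection onto the plane z = d, COP at the origin."""
    x, y, z = p
    return (d * x / z, d * y / z)

def along(p0, direction, t):
    """Point at parameter t on the line through p0 with the given direction."""
    return tuple(a + t * b for a, b in zip(p0, direction))

p0 = (0.0, 2.0, 3.0)
dirn = (1.0, 0.0, 1.0)          # not parallel to the projection plane

# Projections of points ever farther along the line approach (1, 0),
# the vanishing point (dx/dz, dy/dz) for this direction:
img = project(along(p0, dirn, 1e6))
assert abs(img[0] - 1.0) < 1e-3 and abs(img[1]) < 1e-3
```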

4.2 VIEWING WITH A COMPUTER


Because viewing in computer graphics is based on the synthetic-camera model, we should be able to construct any of the
classical views. However, there is a fundamental difference. All the classical views were based on a particular
relationship among the objects, the viewer, and the projectors. In computer graphics, we stress the independence
of the object specifications and camera parameters. In OpenGL, we have the choice of a perspective camera or
an orthogonal camera. Whether a perspective view is a one-, two-, or three-point perspective is not something
that is understood by OpenGL, as it would require knowing the relationships between objects and the camera. On
balance, we prefer this independence, but if an application needs a particular type of view, the application
programmer may well have to determine where to place the camera.

In terms of the pipeline architecture, viewing consists of two fundamental operations. First, we must position and
orient the camera. This operation is the job of the model-view transformation. After vertices pass through this
transformation, they are represented in eye or camera coordinates. The second step is the application of the
projection transformation. This step applies the specified projection (orthographic or perspective) to the vertices
and maps objects within the specified clipping volume into a normalized clipping volume.

OpenGL starts with the camera at the origin of the object frame, pointing in the negative z-direction. This camera
is set up for orthogonal views and has a viewing volume that is a cube, centered at the origin and with sides of
length 2. The default projection plane is the plane z = 0 and the direction of projection is along the z-axis.
Thus, objects within this box are visible and projected as shown in Figure 5.11. Until now, we were able to ignore
any complex viewing procedures by exploiting our knowledge of this camera. Thus, we were able to define
objects in the application programs that fit inside this cube and we knew that they would be visible. In this
approach, both the model-view and projection matrices were left as the default identity matrices.

Subsequently, we altered the model-view matrix, initially an identity matrix, by rotations and translations, so as
to place the camera where we desired. The parameters that we set in glOrtho alter the projection matrix, also
initially an identity matrix, so as to allow us to see objects inside an arbitrary right parallelepiped.
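As a sketch of what glOrtho computes (under the standard OpenGL convention; this reconstruction is ours, not the text's), the projection matrix maps the right parallelepiped given by the six clipping parameters to the canonical cube of side 2 centered at the origin:

```python
def ortho(left, right, bottom, top, near, far):
    """The 4x4 matrix set by glOrtho(left, right, bottom, top, near, far),
    row-major: it maps x in [left, right] to [-1, 1], y in [bottom, top]
    to [-1, 1], and z in [-far, -near] to [-1, 1] (note the z sign flip)."""
    return [
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(m, v):
    """Multiply a 4x4 row-major matrix by a homogeneous column vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(4))

# The default viewing volume: a cube of side 2 centered at the origin.
# ortho(-1, 1, -1, 1, -1, 1) reduces to the identity apart from negating z.
m = ortho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0)
assert transform(m, (0.5, 0.5, -0.5, 1.0)) == (0.5, 0.5, 0.5, 1.0)
```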

4.3 LIGHT AND MATTER


From a physical perspective, a surface can either emit light by self-emission, as a light bulb does, or reflect light
from other surfaces that illuminate it. Some surfaces may both reflect light and emit light from internal physical
processes. When we look at a point on an object, the color that we see is determined by multiple interactions
among light sources and reflective surfaces. These interactions can be viewed as a recursive process. Consider
the simple scene illustrated in Figure 6.1. Some light from the source that reaches surface A is scattered. Some
of this reflected light reaches surface B, and some of it is then scattered back to A, where some of it is again
reflected back to B, and so on. This recursive scattering of light between surfaces accounts for subtle shading
effects, such as the bleeding of colors between adjacent surfaces. Mathematically, this recursive process results
in an integral equation, the rendering equation, which in principle we could use to find the shading of all surfaces
in a scene. Unfortunately, this equation generally cannot be solved analytically. Numerical methods are not fast
enough for real-time rendering. There are various approximate approaches, such as radiosity and ray tracing, each
of which is an excellent approximation to the rendering equation for particular types of surfaces.

Rather than looking at a global energy balance, we follow rays of light from light-emitting (or self-luminous)
surfaces that we call light sources. We then model what happens to these rays as they interact with reflecting
surfaces in the scene.
This approach is similar to ray tracing, but we consider only single interactions between light sources and surfaces
and do not consider the possibility that light from a source may be blocked from reaching the surface by another
surface. There are two independent parts of the problem. First, we must model the light sources in the scene. Then
we must build a reflection model that describes the interactions between materials and light.
To get an overview of the process, we can start following rays of light from a point source, as shown in Figure
6.2. As we noted in Chapter 1, our viewer sees only the light that leaves the source and reaches her eyes perhaps
through a complex path and multiple interactions with objects in the scene. If a ray of light enters her eye directly
from the source, she sees the color of the source. If the ray of light hits a surface visible to our viewer, the color
she sees is based on the interaction between the source and the surface material: She sees the color of the light
reflected from the surface toward her eyes.
In terms of computer graphics, we can place the projection plane between the center of projection and the objects,
as shown in Figure 6.3. Conceptually, the clipping window in this plane is mapped to the display; thus, we can
think of the projection plane as ruled into rectangles, each corresponding to a pixel on the display.
Because we only need to consider light that enters the camera by passing through the center of projection, we can
start at the center of projection and follow a projector through each pixel in the clipping window. If we assume
that all our surfaces are opaque, then the color of the first surface intersected along each projector determines the
color of the corresponding pixel in the frame buffer.

The interactions between light and materials can be classified into the three groups depicted in Figure 6.4.
1. Specular surfaces appear shiny because most of the light that is reflected or scattered is in a narrow
range of angles close to the angle of reflection. Mirrors are perfectly specular surfaces; the light from an
incoming light ray may be partially absorbed, but all reflected light emerges at a single angle, obeying
the rule that the angle of incidence is equal to the angle of reflection.
2. Diffuse surfaces are characterized by reflected light being scattered in all directions. Walls painted with
matte or flat paint are diffuse reflectors, as are many natural materials, such as terrain viewed from an
airplane or a satellite. Perfectly diffuse surfaces scatter light equally in all directions and thus appear the
same to all viewers.
3. Translucent surfaces allow some light to penetrate the surface and to emerge from another location on
the object. This process of refraction characterizes glass and water. Some incident light may also be
reflected at the surface.

From a physical perspective, the reflection, absorption, and transmission of light at the surface of a material are
described by a single function called the Bidirectional Reflection Distribution Function (BRDF). Consider a
point on a surface. Light energy potentially can arrive at this location from any direction and be reflected and
leave in any direction. Thus, for every pair of input and output directions, there will be a value that is the fraction
of the incoming light that is reflected in the output direction. The BRDF is a function of five variables: the
frequency of light, the two angles required to describe the direction of the input vector, and the two angles required
to describe the direction of the output vector. For a perfectly diffuse surface, the BRDF is simplified because it
will have the same value for all possible output vectors. For a perfect reflector, the BRDF will only be nonzero
when the angle of incidence is equal to the angle of reflection.
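The two limiting cases of the BRDF described above can be sketched as functions of the four direction angles (the frequency argument is omitted; this illustration is ours, not a formula from the text): a perfectly diffuse BRDF is constant over output directions, while a perfect reflector is nonzero only for the mirror direction.

```python
import math

def lambertian_brdf(albedo):
    """Perfectly diffuse: the same value for every input/output pair
    (the 1/pi factor keeps the total reflected fraction at the albedo)."""
    def brdf(theta_in, phi_in, theta_out, phi_out):
        return albedo / math.pi
    return brdf

def mirror_brdf(theta_in, phi_in, theta_out, phi_out, eps=1e-9):
    """Perfect reflector: nonzero only when the angle of incidence equals
    the angle of reflection (same polar angle, azimuth rotated by pi)."""
    same_polar = abs(theta_in - theta_out) < eps
    opposite_azimuth = abs((phi_in - phi_out) % (2 * math.pi) - math.pi) < eps
    return 1.0 if same_polar and opposite_azimuth else 0.0

diffuse = lambertian_brdf(0.5)
assert diffuse(0.1, 0.2, 0.3, 0.4) == diffuse(1.0, 2.0, 0.5, 3.0)
assert mirror_brdf(0.4, 0.0, 0.4, math.pi) == 1.0
assert mirror_brdf(0.4, 0.0, 0.3, math.pi) == 0.0
```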

4.4 LIGHT SOURCES


Light can leave a surface through two fundamental processes: self-emission and reflection. We usually think of
a light source as an object that emits light only through internal energy sources. However, a light source, such
as a light bulb, can also reflect some light that is incident on it from the surrounding environment. We neglect
this reflective term in our simple models.
If we consider a source such as the one shown in Figure 6.5, we can look at it as an object with a surface. Each
point (x, y, z) on the surface can emit light that is characterized by the direction of emission (θ, φ) and the
intensity of energy emitted at each frequency λ. Thus, a general light source can be characterized by a six-variable
illumination function I(x, y, z, θ, φ, λ). Note that we need two angles to specify a direction, and we are
assuming that each frequency can be considered independently. From the perspective of a surface illuminated by
this source, we can obtain the total contribution of the source (Figure 6.6) by integrating over its surface, a process
that accounts for the emission angles that reach this surface and must also account for the distance between the
source and the surface. For a distributed light source, such as a light bulb, the evaluation of this integral is difficult,
whether we use analytic or numerical methods. Often, it is easier to model the distributed source with polygons,
each of which is a simple source, or with an approximating set of point sources.

There are four basic types of sources: ambient lights, point sources, spotlights, and distant lights. These four
lighting types are sufficient for rendering most simple scenes.

4.4.1 Color Sources


Not only do light sources emit different amounts of light at different frequencies, but their directional properties
can vary with frequency. Consequently, a physically correct model can be complex. However, our model of the
human visual system is based on three-color theory that tells us we perceive three tristimulus values, rather than
a full color distribution. For most applications, we can thus model light sources as having three components (red,
green, and blue) and can use each of the three color sources to obtain the corresponding color component that a
human observer sees. Thus, we can describe a color source through a three-component intensity or illumination
function
L = (Lr , Lg , Lb),
each of whose components is the intensity of the independent red, green, and blue components. Thus, we use the
red component of a light source for the calculation of the red component of the image. Because light-material
computations involve three similar but independent calculations, we tend to present a single scalar equation, with
the understanding that it can represent any of the three color components. Rather than write what will turn out to
be identical expressions for each component of L, we will use the scalar L to denote any of its components. That
is,

L ∈ {Lr, Lg, Lb}.

4.4.2 Ambient Light


Our ambient source has red, green, and blue color components, Lar, Lag, and Lab. We will use the scalar La to
denote any one of the three components. Although every point in our scene receives the same illumination from
La, each surface can reflect this light differently.
Ambient light depends on the color of the light sources in the environment. For example, a red light bulb in
a white room creates red ambient light. Hence, if we turn off the light, the ambient contribution disappears.
OpenGL permits us to add a global ambient term, which does not depend on any of the light sources and is
reflected from surfaces. The advantage of adding such a term is that there will always be some light in the
environment so that objects in the viewing volume that are not blocked by other objects will always appear in the
image.

4.4.3 Point Sources


An ideal point source emits light equally in all directions. We can characterize a point source located at a point p0
by a three-component color function
L(p0) = (Lr(p0), Lg(p0), Lb(p0)).

We use L(p0) to refer to any of the components. The intensity of illumination received from a point source located
at p0 at a point p is proportional to the inverse square of the distance from the source. Hence, at a point p (Figure
6.7), any component of the intensity of light received from the point source is given by a function of the form

L(p, p0) = (1 / |p − p0|^2) L(p0).

The use of point sources in most applications is determined more by their ease of use than by their resemblance
to physical reality. Scenes rendered with only point sources tend to have high contrast; objects appear either
bright or dark. In the real world, it is the large size of most light sources that contributes to softer scenes, as we
can see from Figure 6.8, which shows the shadows created by a source of finite size. Some areas are fully in
shadow, or in the umbra, whereas others are in partial shadow, or in the penumbra. We can mitigate the high-
contrast effect from point source illumination by adding ambient light to a scene.

The distance term also contributes to the harsh renderings with point sources. Although the inverse-square
distance term is correct for point sources, in practice it is usually replaced by a term of the form
1/(a + bd + cd^2), where d is the distance between p and p0. The constants a, b, and c can be chosen to soften the
lighting. In addition, a small amount of ambient light also softens the effect of point sources. Note that if the light
source is far from the surfaces in the scene, then the intensity of the light from the source is sufficiently uniform
that the distance term is almost constant over the surfaces.
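The replacement of inverse-square falloff by the quadratic form can be sketched as follows (the constants here are hypothetical values chosen for illustration):

```python
def attenuation(d, a=1.0, b=0.05, c=0.01):
    """Distance attenuation 1/(a + b*d + c*d^2). With a = b = 0, c = 1 this
    is the physically correct inverse-square law for a point source; a
    nonzero constant term a caps the factor at 1/a near the source."""
    return 1.0 / (a + b * d + c * d * d)

# Softer than pure inverse square near the source:
assert attenuation(0.0) == 1.0

# For a source far from the scene, the factor is nearly constant over
# nearby surfaces, as noted above:
assert abs(attenuation(100.0) - attenuation(101.0)) < 1e-3
```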

4.4.4 Spotlights
Spotlights are characterized by a narrow range of angles through which light is emitted. We can construct a simple
spotlight from a point source by limiting the angles at which light from the source can be seen. We can use a cone
whose apex is at ps, which points in the direction ls, and whose width is determined by an angle θ, as shown in
Figure 6.9. If θ = 180°, the spotlight becomes a point source. More realistic spotlights are characterized by the
distribution of light within the cone, usually with most of the light concentrated in the center of the cone. Thus,
the intensity is a function of the angle φ between the direction of the source and a vector s to a point on the surface
(as long as this angle is less than θ; Figure 6.10). Although this function could be defined in many ways, it is
usually defined by cos^e φ, where the exponent e (Figure 6.11) determines how rapidly the light intensity drops
off. As we will see throughout this chapter, cosines are convenient functions for lighting calculations. If u and v
are any unit-length vectors, we can compute the cosine of the angle between them with the dot product

cos θ = u · v,

a calculation that requires only three multiplications and two additions.
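The cosine-power spotlight falloff can be sketched as follows (our illustration; ls and s are assumed unit-length, so the cosine is a single dot product):

```python
def spot_intensity(ls, s, e):
    """Intensity factor cos(phi)**e, where phi is the angle between the
    spotlight direction ls and the unit vector s from the source to the
    surface point; directions behind the source (cos < 0) get zero."""
    cos_phi = sum(u * v for u, v in zip(ls, s))   # 3 multiplies, 2 adds
    return max(cos_phi, 0.0) ** e

ls = (0.0, 0.0, -1.0)                 # spotlight aimed down the -z axis
on_axis = spot_intensity(ls, (0.0, 0.0, -1.0), e=8)
off_axis = spot_intensity(ls, (0.6, 0.0, -0.8), e=8)
assert on_axis == 1.0
assert off_axis < 0.2   # 0.8**8: intensity drops quickly off the center line
# A larger exponent e concentrates the light toward the cone's center.
```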

4.4.5 Distant Light Sources


Most shading calculations require the direction from the point on the surface to the light source position. As we
move across a surface, calculating the intensity at each point, we should recompute this vector repeatedly, a
computation that is a significant part of the lighting calculation. However, if the light source is far from the
surface, the vector does not change much as we move from point to point, just as the light from the sun strikes
all objects that are in close proximity to one another at the same angle. Figure 6.12 illustrates that we are
effectively replacing a point source of light with a source that illuminates objects with parallel rays of light, a
parallel source. In practice, the calculations for distant light sources are similar to the calculations for parallel
projections; they replace the location of the light source with the direction of the light source. Hence, in
homogeneous coordinates, the location of a point light source at p0 is represented internally as a four-dimensional
column matrix:

p0 = [x  y  z  1]^T.

In contrast, the distant light source is described by a direction vector whose representation in homogeneous
coordinates is the matrix

[x  y  z  0]^T.

The graphics system can carry out rendering calculations more efficiently for distant light sources than for near
ones. Of course, a scene rendered with distant light sources looks different than a scene rendered with near light
sources. Fortunately, OpenGL allows both types of sources.
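The difference between the two homogeneous representations (fourth component 1 for a location, 0 for a direction) can be verified with a translation, which moves point-source locations but leaves distant-source directions unchanged. A minimal sketch:

```python
def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix (row-major)."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def transform(m, v):
    """Multiply a 4x4 row-major matrix by a homogeneous column vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(4))

T = translate(5.0, 0.0, 0.0)
point_source = (1.0, 2.0, 3.0, 1.0)   # location: fourth component is 1
distant_dir = (1.0, 2.0, 3.0, 0.0)    # direction: fourth component is 0

assert transform(T, point_source) == (6.0, 2.0, 3.0, 1.0)   # moves
assert transform(T, distant_dir) == (1.0, 2.0, 3.0, 0.0)    # unchanged
```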

4.5 THE PHONG LIGHTING MODEL


The lighting model that we present was introduced by Phong and later modified by Blinn. It has proved to be
efficient and to be a close enough approximation to physical reality to produce good renderings under a variety
of lighting conditions and material properties. The Blinn-Phong (or modified Phong) model is the basis for
lighting and shading in graphics APIs and is implemented on virtually all graphics cards.
The Phong lighting model uses the four vectors shown in Figure 6.13 to calculate a color for an arbitrary point p
on a surface. If the surface is curved, all four vectors can change as we move from point to point. The vector n is
the normal at p; we discuss its calculation in Section 6.4. The vector v is in the direction from p to the viewer or
COP. The vector l is in the direction of a line from p to an arbitrary point on the source for a distributed light
source or, as we are assuming for now, to the point-light source. Finally, the vector r is in the direction that a
perfectly reflected ray from l would take. Note that r is determined by n and l.
The Phong model supports the three types of light-material interactions: ambient, diffuse, and specular. Suppose
that we have a set of point sources. Each source can have separate ambient, diffuse, and specular components for
each of the three primary colors.
The ambient source color represents the interaction of a light source with the surfaces in the environment,
whereas the specular source color is designed to produce the desired color of a specular highlight.
Thus, if our light-source model has separate ambient, diffuse, and specular terms, we need nine coefficients to
characterize a light source at any point p on the surface. We can place these nine coefficients in a 3 × 3
illumination array for the ith light source:

     | Lira  Liga  Liba |
Li = | Lird  Ligd  Libd |
     | Lirs  Ligs  Libs |

The first row of the matrix contains the ambient intensities for the red, green, and blue terms from source i. The
second row contains the diffuse terms; the third contains the specular terms. We assume that any distance-attenuation
terms have not yet been applied.
We build our lighting model by summing the contributions for all the light sources at each point we wish to light.
For each light source, we have to compute the amount of light reflected for each of the nine terms in the
illumination array. For example, for the red diffuse term from source i, Lird, we can compute a reflection term
Rird, with the resulting intensity on the surface at p being RirdLird. The value of Rird depends on the material
properties, the orientation of the surface, the direction of the light source, and the distance between the light
source and the viewer. Thus, for each point, we have nine coefficients that we can place in an array of reflection
terms:

     | Rira  Riga  Riba |
Ri = | Rird  Rigd  Ribd |
     | Rirs  Rigs  Ribs |

We can then compute the contribution for each color source by adding the ambient, diffuse, and specular
components. For example, the red intensity that we see at p from source i is the sum of red ambient, red diffuse,
and red specular intensities from this source:

Iir = Rira Lira + Rird Lird + Rirs Lirs = Iira + Iird + Iirs.

We obtain the total intensity by adding the contributions of all sources and, possibly, a global ambient term. Thus,
the red term is

Ir = Σi (Iira + Iird + Iirs) + Iar,

where Iar is the red component of the global ambient light.

We can simplify our notation by noting that the necessary computations are the same for each source and for each
primary color. They differ depending on whether we are considering the ambient, diffuse, or specular terms.
Hence, we can omit the subscripts i, r, g, and b. We write

I = Ia + Id + Is = La Ra + Ld Rd + Ls Rs.
4.5.1 Ambient Reflection


The intensity of ambient light La is the same at every point on the surface. Some of this light is absorbed and some
is reflected. The amount reflected is given by the ambient reflection coefficient, Ra = ka. Because only a positive
fraction of the light is reflected, we must have

0 ≤ ka ≤ 1,

and thus

Ia = kaLa.
Here La can be any of the individual light sources, or it can be a global ambient term. A surface has, of course,
three ambient coefficients, kar, kag, and kab, and they can differ. Hence, for example, a sphere appears yellow
under white ambient light if its blue ambient coefficient is small and its red and green coefficients are large.

4.5.2 Diffuse Reflection


A perfectly diffuse reflector scatters the light that it reflects equally in all directions. Hence, such a surface appears
the same to all viewers. However, the amount of light reflected depends both on the material because some of the
incoming light is absorbed and on the position of the light source relative to the surface. Diffuse reflections are
characterized by rough surfaces. If we were to magnify a cross section of a diffuse surface, we might see an image
like that shown in Figure 6.14. Rays of light that hit the surface at only slightly different angles are reflected back
at markedly different angles. Perfectly diffuse surfaces are so rough that there is no preferred angle of reflection.
Such surfaces, sometimes called Lambertian surfaces, can be modelled mathematically with Lambert's law.

Consider a diffuse planar surface, as shown in Figure 6.15, illuminated by the sun. The surface is brightest at
noon and dimmest at dawn and dusk because, according to Lambert's law, we see only the vertical component
of the incoming light.
One way to understand this law is to consider a small parallel light source striking a plane, as shown in Figure
6.16. As the source is lowered in the (artificial) sky, the same amount of light is spread over a larger area, and the
surface appears dimmer. Returning to the point source of Figure 6.15, we can characterize diffuse reflections
mathematically. Lambert's law states that

    Rd ∝ cos θ,

where θ is the angle between the normal at the point of interest n and the direction of the light source l. If both l
and n are unit-length vectors, then

    cos θ = l . n.
If we add in a reflection coefficient kd representing the fraction of incoming diffuse light that is reflected, we
have the diffuse reflection term:
Id = kd(l . n)Ld.
If we wish to incorporate a distance term, to account for attenuation as the light travels a distance d from the
source to the surface, we can again use the quadratic attenuation term:

    Id = (kd / (a + bd + cd^2)) (l . n) Ld.
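A minimal sketch of this diffuse term in C (the function name and the attenuation constants a, b, c are illustrative; l and n are assumed to be unit vectors):

```c
#include <math.h>

/* Dot product of two 3D vectors. */
static double dot3(const double a[3], const double b[3]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

/* Id = (kd / (a + b*d + c*d^2)) * (l . n) * Ld, with l . n clamped to
   zero so that a source below the surface contributes no light. */
double diffuse_term(double kd, double Ld,
                    const double l[3], const double n[3],
                    double a, double b, double c, double d) {
    double ln = dot3(l, n);
    if (ln < 0.0) ln = 0.0;   /* surface faces away from the source */
    return (kd / (a + b*d + c*d*d)) * ln * Ld;
}
```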
4.5.3 Specular Reflection


If we employ only ambient and diffuse reflections, our images will be shaded and will appear three-dimensional,
but all the surfaces will look dull, somewhat like chalk. What we are missing are the highlights that we see
reflected from shiny objects. These highlights usually show a color different from the color of the reflected
ambient and diffuse light. For example, a red plastic ball viewed under white light has a white highlight that is
the reflection of some of the light from the source in the direction of the viewer (Figure 6.17).

Whereas a diffuse surface is rough, a specular surface is smooth. The smoother the surface is, the more it
resembles a mirror. Figure 6.18 shows that as the surface gets smoother, the reflected light is concentrated in a
smaller range of angles centered about the angle of a perfect reflector, that is, a mirror or a perfectly specular surface.
Modeling specular surfaces realistically can be complex because the pattern by which the light is scattered is not
symmetric. It depends on the wavelength of the incident light, and it changes with the reflection angle.
Phong proposed an approximate model that can be computed with only a slight increase over the work done for
diffuse surfaces. The model adds a term for specular reflection. Hence, we consider the surface as being rough
for the diffuse term and smooth for the specular term. The amount of light that the viewer sees depends on the
angle φ between r, the direction of a perfect reflector, and v, the direction of the viewer. The Phong model uses
the equation

    Is = ks Ls cos^α φ.

The coefficient ks (0 ≤ ks ≤ 1) is the fraction of incoming specular light that is reflected, and the exponent α is a
shininess coefficient. As α is increased, the reflected light is concentrated in a narrower region centered on the
angle of a perfect reflector. In the limit, as α goes to infinity, we get a mirror; values in the
range 100 to 500 correspond to most metallic surfaces, and smaller values (< 100) correspond to materials that
show broad highlights.
The computational advantage of the Phong model is that if we have normalized r and v to unit length, we can
again use the dot product, and the specular term becomes

    Is = ks Ls max((r . v)^α, 0).

We can add a distance term, as we did with diffuse reflections. What is referred to as the Phong model, including
the distance term, is written

    I = (1 / (a + bd + cd^2)) (kd Ld max(l . n, 0) + ks Ls max((r . v)^α, 0)) + ka La.
This formula is computed for each light source and for each primary.

4.5.4 The Modified Phong Model


If we use the Phong model with specular reflections in our rendering, the dot product r . v should be recalculated
at every point on the surface. We can obtain a different approximation for the specular term by using the unit
vector halfway between the view vector and the light-source vector, the halfway vector

    h = (l + v) / |l + v|.

Note that if the normal is in the direction of the halfway vector, then the maximum reflection from the surface is
in the direction of the viewer. Figure 6.20 shows all five vectors; ψ is the angle between the normal n and h, the
halfway angle.

When we use the halfway vector in the calculation of the specular term, we are using the Blinn-Phong, or modified
Phong, shading model. This model is the default in OpenGL and is the one carried out on each vertex as it passes
down the pipeline.
If we replace r . v with n . h, we avoid calculation of r. However, the halfway angle ψ is smaller than the angle
between r and v, and if we use the same exponent e in (n . h)^e that we used in (r . v)^e, then the size of the
specular highlights will be smaller. We can mitigate this problem by replacing the value of the exponent e with a
value e' so that (n . h)^e' is closer to (r . v)^e. It is clear that avoiding recalculation of r is desirable.
It is important to keep in mind that both the Phong and Blinn-Phong models were created as computationally
feasible approximations to the BRDF rather than as the best physical models.

4.6 POLYGONAL SHADING


For the most part, interactive computer graphics systems are polygon processors. From the hardware perspective,
systems are optimized for passing polygons down the pipeline. Performance is measured in terms of polygons
per second, a measurement that always includes lighting and shading. From the application perspective, a large
class of CAD applications have the user design polygonal meshes. Even if the software supports curved surfaces,
these surfaces are rendered as polygonal meshes that approximate the surface.
Because a polygon has the same normal over its entire surface, the normal need be computed only once. Often,
normals are stored in an application data structure. Further efficiencies can be obtained for many special
conditions, such as a distant light source, and we must also be careful to avoid visual artifacts. We will investigate
three methods for shading a polygonal mesh such as the one shown in Figure 6.23: flat shading, smooth or
interpolative (Gouraud) shading, and Phong shading.

4.6.1 Flat Shading


The three vectors needed for shading, l, n, and v, can vary as we move from point to point on a surface. For a flat
polygon, however, n is constant. If we assume a distant viewer, v is constant over the polygon. Finally, if the light
source is distant, l is constant. Here distant could be interpreted in the strict sense of meaning that the source is
at infinity. The necessary adjustments, such as changing the location of the source to the direction of the source,
could then be made to the shading equations and to their implementation. Distant could also be interpreted in
terms of the size of the polygon relative to how far the polygon is from the source or viewer, as shown in Figure
6.24. Graphics systems or user programs often exploit this definition.

If the three vectors are constant, then the shading calculation needs to be carried out only once for each polygon,
and each point on the polygon is assigned the same shade. This technique is known as flat, or constant, shading.
In OpenGL, we specify flat shading as follows:
glShadeModel(GL_FLAT);

Flat shading will show differences in shading for the polygons in our mesh. If the light sources and viewer are
near the polygon, the vectors l and v will be different for each polygon. However, if our polygonal mesh has been
designed to model a smooth surface, flat shading will almost always be disappointing because we can see even
small differences in shading between adjacent polygons, as shown in Figure 6.25. The human visual system has
a remarkable sensitivity to small differences in light intensity, due to a property known as lateral inhibition. If we
see an increasing sequence of intensities, as shown in Figure 6.26, we perceive the increases in brightness as
overshooting on one side of an intensity step and undershooting on the other, as shown in Figure 6.27. We see
stripes, known as Mach bands, along the edges. This phenomenon is a consequence of how the cones in the eye
are connected to the optic nerve, and there is little that we can do to avoid it, other than to look for smoother
shading techniques that do not produce large differences in shades at the edges of polygons.

4.6.2 Smooth and Gouraud Shading


OpenGL interpolates colors assigned to vertices across a polygon. Smooth shading is the default
in OpenGL. We can also set the mode explicitly as follows:
glShadeModel(GL_SMOOTH);
Suppose that we have enabled both smooth shading and lighting and that we assign to each vertex
the normal of the polygon being shaded. The lighting calculation is made at each vertex using the
material properties and the vectors v and l computed for each vertex. Note that if the light source
is distant, and either the viewer is distant or there are no specular reflections, then smooth (or
interpolative) shading shades a polygon in a constant color.
If we consider our mesh, the idea of a normal existing at a vertex should cause concern to anyone
worried about mathematical correctness. Because multiple polygons meet at interior vertices of
the mesh, each of which has its own normal, the normal at the vertex is discontinuous. Although
this situation might complicate the mathematics, Gouraud realized that the normal at the vertex
could be defined in such a way as to achieve smoother shading through interpolation. Consider
an interior vertex, as shown in Figure 6.28, where four polygons meet. Each has its own normal.
In Gouraud shading, we define the normal at a vertex to be the normalized average of the normals
of the polygons that share the vertex. For our example, the vertex normal is given by

    n = (n1 + n2 + n3 + n4) / |n1 + n2 + n3 + n4|.
What we need, of course, is a data structure for representing the mesh that contains the information about which
polygons meet at each vertex. Traversing this data structure can generate the averaged normals. Such a data
structure should contain, at a minimum, polygons, vertices, normals, and material properties. One possible
structure is shown in Figure 6.29. The structure is a modified vertex list. Each node on the left points to a list of
the polygons that meet at the vertex. This data structure could be accessed by each polygon or by a data structure
that represents the mesh.

4.6.3 Phong Shading


Even the smoothness introduced by Gouraud shading may not prevent the appearance of Mach bands. Phong
proposed that instead of interpolating vertex intensities, as we do in Gouraud shading, we interpolate normals
across each polygon. The lighting model can then be applied at every point within the polygon. Note that because
the normals give the local surface orientation, by interpolating the normals across the polygon, as far as shading
is concerned, the surface appears to be curved rather than flat. This fact accounts for the smooth appearance of
Phong-shaded images.
Consider a polygon that shares edges and vertices with other polygons in the mesh, as shown in Figure 6.30. We
can compute vertex normals by interpolating over the normals of the polygons that share the vertex. Next, we can
use bilinear interpolation, as we did in Chapter 4, to interpolate the normals over the polygon. Consider Figure
6.31. We can use the interpolated normals at vertices A and B to
interpolate normals along the edge between them:

    n(α) = (1 − α) nA + α nB,  0 ≤ α ≤ 1.
We can do a similar interpolation on all the edges. The normal at any interior point can be obtained from points
on the edges by

    n(α, β) = (1 − β) nC + β nD,  0 ≤ β ≤ 1.
Once we have the normal at each point, we can make an independent shading calculation. Usually, this process
can be combined with rasterization of the polygon. Until recently, Phong shading could only be carried out off-line
because it requires the interpolation of normals across each polygon. In terms of the pipeline, Phong shading
requires that the lighting model be applied to each fragment; hence the name per-fragment shading.
