Overview of Computer Graphics System - 2
The primary use of clipping in computer graphics is to remove objects, lines, or line segments
that are outside the viewing pane. The viewing transformation is insensitive to the position of
points relative to the viewing volume − especially those points behind the viewer − and it is
necessary to remove these points before generating the view.
Point Clipping
Clipping a point against a given window is very easy. Consider the following figure, where the
rectangle indicates the window. Point clipping tells us whether the given point (X, Y) is
within the given window or not; this is decided by comparing the point against the minimum
and maximum coordinates of the window.
The X-coordinate of the given point is inside the window if Wx1 ≤ X ≤ Wx2. Similarly, the
Y-coordinate of the given point is inside the window if Wy1 ≤ Y ≤ Wy2.
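To make the test concrete, here is a minimal point-clipping sketch in Python; the function name and window parameters are illustrative, not from a particular library.

```python
# A minimal point-clipping sketch; the window bounds (wx1, wy1, wx2, wy2)
# correspond to Wx1, Wy1, Wx2, Wy2 in the tests above.
def point_inside_window(x, y, wx1, wy1, wx2, wy2):
    """Return True if point (x, y) lies inside the clip window."""
    return wx1 <= x <= wx2 and wy1 <= y <= wy2

print(point_inside_window(3, 4, 0, 0, 10, 10))   # True: point is inside
print(point_inside_window(12, 4, 0, 0, 10, 10))  # False: x exceeds wx2
```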
Line Clipping
The concept of line clipping is the same as point clipping. In line clipping, we cut away the
portion of the line that is outside the window and keep only the portion that is inside.
Cohen-Sutherland Line Clipping
This algorithm uses the clipping window as shown in the following figure. The minimum
coordinate for the clipping region is (XWmin, YWmin) and the maximum coordinate is
(XWmax, YWmax).
We use 4 bits to divide the entire region. These 4 bits represent the Top, Bottom, Right,
and Left of the region as shown in the following figure. Here, the TOP and LEFT bits are set
to 1 because it is the TOP-LEFT corner.
There are three possibilities for the line −
• The line can be completely inside the window. This line should be accepted.
• The line can be completely outside the window. This line will be completely removed
from the region.
• The line can be partially inside the window. We will find the intersection points and
draw only that portion of the line that is inside the region.
Region code
– A four-digit binary code assigned to every line endpoint in a picture.
– Bit positions in the region code are numbered 1 through 4 from right to left.
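The region-code computation and the trivial accept/reject tests can be sketched as follows; the constant names are illustrative, and the bit values follow the right-to-left numbering described above (bit 1 = left, bit 2 = right, bit 3 = bottom, bit 4 = top).

```python
# Region-code bits, numbered 1..4 from right to left.
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def region_code(x, y, xw_min, yw_min, xw_max, yw_max):
    """Compute the 4-bit region code of a line endpoint."""
    code = 0
    if x < xw_min: code |= LEFT
    if x > xw_max: code |= RIGHT
    if y < yw_min: code |= BOTTOM
    if y > yw_max: code |= TOP
    return code

def classify(c1, c2):
    """Trivial tests on the endpoint codes of a segment."""
    if c1 == 0 and c2 == 0:
        return "completely inside"     # both codes 0000: accept the line
    if c1 & c2:
        return "completely outside"    # shared outside bit: reject the line
    return "needs intersection tests"  # partially inside: clip at the window
```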
Text Clipping
Various techniques are used to provide text clipping in computer graphics. The choice depends
on the methods used to generate characters and the requirements of a particular application.
Three methods for text clipping are listed below −
• All or none string clipping
• All or none character clipping
• Text clipping
The following figure shows all or none string clipping −
In the all-or-none string clipping method, we either keep the entire string or reject it entirely
based on the clipping window. As shown in the above figure, STRING2 is entirely inside the
clipping window, so we keep it; STRING1 is only partially inside the window, so we reject it.
The following figure shows all or none character clipping −
This clipping method is based on characters rather than the entire string. In this method, if the
string is entirely inside the clipping window, we keep it. If it is partially outside the
window, then −
• We reject only the portion of the string that is outside.
• If a character straddles the boundary of the clipping window, we discard that entire
character and keep the rest of the string.
The following figure shows text clipping −
This clipping method is also based on characters rather than the entire string. In this method,
if the string is entirely inside the clipping window, we keep it. If it is partially outside the
window, then −
• We reject only the portion of the string that is outside.
• If a character straddles the boundary of the clipping window, we discard only that
portion of the character that is outside the clipping window.
Curve Clipping
Curve-clipping procedures involve nonlinear equations, which require more processing than
objects with linear boundaries. The bounding rectangle for a circle or other curved object can
be used first to test for overlap with a rectangular clip window.
If the bounding rectangle for the object is completely inside the window, we save the object.
If the rectangle is completely outside the window, we discard the object. In either case, no
further computation is necessary. But if the bounding-rectangle test fails, we can look for
other computation-saving approaches.
For a circle, we can use the coordinate extents of individual quadrants and then octants for
preliminary testing before calculating curve-window intersections.
For an ellipse, we can test the coordinate extents of individual quadrants.
Fig: Clipping a filled circle
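A sketch of the preliminary bounding-rectangle test is shown below; rectangles are represented here as (xmin, ymin, xmax, ymax) tuples, an assumed convention for illustration.

```python
def bbox_vs_window(box, window):
    """Classify a curve's bounding rectangle against a rectangular clip window."""
    bx1, by1, bx2, by2 = box
    wx1, wy1, wx2, wy2 = window
    if bx1 >= wx1 and bx2 <= wx2 and by1 >= wy1 and by2 <= wy2:
        return "save"      # rectangle entirely inside: keep the object
    if bx2 < wx1 or bx1 > wx2 or by2 < wy1 or by1 > wy2:
        return "discard"   # rectangle entirely outside: discard the object
    return "test further"  # overlap: fall back to curve-window intersections
```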
Graphical User Interfaces and Interactive Input Methods: The User Dialogue – Input of
Graphical Data – Input Functions – Interactive Picture Construction Techniques – Three
Dimensional Concepts: 3D-Display Methods – #Three Dimensional Graphics Packages
A GUI is an interface that allows users to interact with electronic devices using icons
and other visual indicators. Graphical user interfaces were created because command-line
interfaces were quite complicated and it was difficult to learn all of their commands.
Today, graphical user interfaces are used in many devices such as mobiles, MP3
players, gaming devices, and smartphones.
The below diagram provides the position of the graphical user interface with respect to the
computer system:
Graphical User Interface makes use of visual elements mostly. These elements define the
appearance of the GUI. Some of these are described in detail as follows:
Window
This is the element that displays the information on the screen. It is very easy to manipulate a
window. It can be opened or closed with the click of an icon. Moreover, it can be moved to any
area by dragging it around. In a multitasking environment, multiple windows can be open at the
same time, all of them performing different tasks.
There are multiple types of windows in a graphical user interface, such as container window,
browser window, text terminal window, child window, message window etc.
Menu
A menu contains a list of choices and allows the user to select one of them. A menu bar is
displayed horizontally across the screen. When any option on this bar is clicked, a pull-down
menu appears.
Another type of menu is the context menu that appears only when the user performs a specific
action. An example of this is pressing the right mouse button. When this is done, a menu will
appear under the cursor.
Icons
Files, programs, web pages etc. can be represented using a small picture in a graphical user
interface. This picture is known as an icon. Using an icon is a fast way to open documents, run
programs etc. because clicking on them yields instant access.
Controls
Information in an application can be directly read or influenced using graphical control
elements, also known as widgets. Normally, widgets are used to display lists of similar items,
navigate the system using links, tabs, etc., and manipulate data using check boxes, radio
buttons, etc.
Tabs
A tab is associated with a view pane. It usually contains a text label or a graphical icon. Tabs
are sometimes related to widgets and multiple tabs allow users to switch between different
widgets. Tabs are used in various web browsers such as Internet Explorer, Firefox, Opera,
Safari etc. Multiple web pages can be opened in a web browser and users can switch between
them using tabs.
In order to interact with a graphical image, input methods are required. These can be used
simply to change the location and orientation of the camera, or to change specific settings of
the rendering itself. Different devices are more suitable for changing some settings than
others. In this chapter we specify the different types of these devices and discuss their
advantages.
Input methods
Input methods can be classified using the following categories −
• Locator
• Stroke
• String
• Valuator
• Choice
• Pick
Locator
A device that allows the user to specify one coordinate position. Different methods can be used,
such as a mouse cursor, where a location is chosen by clicking a button, or a cursor that is
moved using different keys on the keyboard. Touch screens can also be used as locators; the
user specifies the location by pressing on the desired coordinate on the screen.
Stroke
A device that allows the user to specify a set of coordinate positions. The positions can be
specified, for example, by dragging the mouse across the screen while a mouse button is kept
pressed. Alternatively, the coordinate captured on release can be combined with the coordinate
captured at the press to define a rectangular area.
String
A device that allows the user to specify text input. A text input widget in combination with the
keyboard is used to input the text. Also, virtual keyboards displayed on the screen where the
characters can be picked using the mouse can be used if keyboards are not available to the
application.
Valuator
A device that allows the user to specify a scalar value. Similar to string input, numeric values
can be specified using the keyboard. Often, up-down arrows are added to increase or decrease
the current value. Rotary devices such as wheels can also be used for specifying numerical
values. It is often useful to limit the range of values that can be entered, depending on the
application.
Choice
A device that allows the user to specify a menu option. Typical choice devices are menus or
radio buttons which provide various options the user can choose from. For radio buttons, often
only one option can be chosen at a time. Once another option is picked, the previous one gets
cleared.
Pick
A device that allows the user to specify a component of a picture. Similar to locator devices, a
coordinate is specified using the mouse or another cursor input device and then back-projected
into the scene to determine the selected 3-D object. It is often useful to allow a certain "error
tolerance", so that an object is picked even if the user did not click exactly on the object but
close enough to it. Also, highlighting objects within the scene can be used to traverse a list of
objects that fulfill the proximity criterion.
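A pick with an error tolerance might be sketched as follows; the point-based scene representation and the names are assumptions for illustration, not a specific toolkit's API.

```python
import math

def pick(objects, click, tolerance=5.0):
    """Return the object whose anchor point lies nearest the click,
    provided it is within the tolerance radius (in pixels)."""
    best, best_dist = None, tolerance
    for name, (ox, oy) in objects.items():
        dist = math.hypot(click[0] - ox, click[1] - oy)
        if dist <= best_dist:
            best, best_dist = name, dist
    return best

scene = {"sphere": (100, 120), "cube": (300, 80)}
print(pick(scene, (103, 118)))  # "sphere": within tolerance of the click
print(pick(scene, (200, 200)))  # None: nothing close enough to pick
```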
Certain applications do not allow the use of a mouse or keyboard. In particular, in 3-D
environments where the user roams freely within the scene, a mouse or keyboard would
unnecessarily bind the user to a certain location. Other input methods are required in these
cases, such as a wireless gamepad or a 3-D stylus that is tracked to identify its 3-D location.
Input Modes
In addition to the multiple types of logical input devices, we can obtain the measure of a device
in three distinct modes: 1) request mode, 2) sample mode, and 3) event mode. The mode is
defined by the relationship between the measure process and the trigger. Normally, the
initialization of an input device starts a measure process.
1) Request mode:
In this mode the measure of the device is not returned to the program until the device is
triggered. A locator can be moved to different points of the screen. The window system
continuously follows the location of the pointer, but until the button is depressed, the location
is not returned to the program.
2) Sample mode:
In this mode the measure is returned to the program as soon as the function call is made; no
trigger is needed, and the current value of the measure process is sampled immediately.
3) Event mode:
The previous two modes are not sufficient for handling the variety of possible human-computer
interactions that arise in a modern computing environment. This can be handled in three steps:
1) show how event mode can be described as another mode within the measure-trigger
paradigm, 2) learn the basics of client-server interaction, where event mode is preferred, and
3) learn how OpenGL uses GLUT to do this. In an environment with multiple input devices,
each device has its own trigger and each runs a measure process. Each time a device is
triggered, an event is generated. The device measure, together with the identifier for the
device, is placed in an event queue. The user program executes the events from the queue;
when the queue is empty, it waits until an event appears there to execute. Another approach is
to associate a function called a callback with a specific type of event. This is the approach we
are taking.
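The queue-plus-callback structure can be sketched in a few lines of Python; this is a schematic of the idea, not GLUT's actual API, and the event names are illustrative.

```python
from collections import deque

event_queue = deque()   # triggered devices append (identifier, measure) here
callbacks = {}          # callback associated with each type of event

def register_callback(event_type, fn):
    """Associate a callback function with a specific type of event."""
    callbacks[event_type] = fn

def trigger(event_type, measure):
    """A triggered device places its measure, with its identifier, in the queue."""
    event_queue.append((event_type, measure))

def process_events():
    """Execute queued events; a real loop would wait while the queue is empty."""
    while event_queue:
        event_type, measure = event_queue.popleft()
        if event_type in callbacks:
            callbacks[event_type](measure)

register_callback("mouse", lambda pos: print("mouse at", pos))
trigger("mouse", (42, 17))
process_events()  # prints: mouse at (42, 17)
```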
Interactive picture construction techniques
Interactive picture-construction methods are commonly used in a variety of applications,
including design and painting packages. These methods provide the user with the capability to
position objects, to constrain figures to predefined orientations or alignments, to sketch
figures, and to drag objects around the screen. Grids, gravity fields, and rubber-band methods
are used to aid in positioning and other picture-construction operations. Several techniques
used for interactive picture construction that are incorporated into graphics packages are:
(1) Basic positioning methods: - coordinate values supplied by locator input are often used
with positioning methods to specify a location for displaying an object or a character string.
Coordinate positions are selected interactively with a pointing device, usually by positioning
the screen cursor.
(2) Constraints: - A constraint is a rule for altering input coordinate values to produce a
specified orientation or alignment of the displayed coordinates. The most common constraint
is a horizontal or vertical alignment of straight lines.
(3) Grids: - Another kind of constraint is a grid of rectangular lines displayed in some part of
the screen area. When a grid is used, any input coordinate position is rounded to the nearest
intersection of two grid lines (see the snapping sketch after this list).
(4) Gravity field: - When it is needed to connect lines at positions between endpoints, graphics
packages convert any input position near a line to a position on the line. The conversion is
accomplished by creating a gravity area around the line; any input position within the gravity
field of the line is moved to the nearest position on the line. It is usually illustrated with a
shaded boundary around the line.
(5) Rubber-band methods: - Straight lines can be constructed and positioned using rubber-band
methods, which stretch out a line from a starting position as the screen cursor moves.
(6) Dragging: - These methods move an object into position by dragging it with the screen
cursor.
(7) Painting and drawing: - Cursor drawing options can be provided using standard curve
shapes such as circular arcs and splines, or with freehand sketching procedures. Line widths,
line styles, and other attribute options are also commonly found in painting and drawing
packages.
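The grid and gravity-field techniques reduce to simple snapping rules; a minimal sketch follows, with the grid spacing, field width, and horizontal-line representation assumed for illustration.

```python
def snap_to_grid(x, y, spacing=10):
    """Round an input position to the nearest intersection of two grid lines."""
    return round(x / spacing) * spacing, round(y / spacing) * spacing

def gravity_snap(x, y, line_y, field=4):
    """Move a position inside the gravity field of the horizontal line
    y = line_y onto the line; otherwise leave it unchanged."""
    return (x, line_y) if abs(y - line_y) <= field else (x, y)

print(snap_to_grid(23, 48))      # (20, 50): rounded to the nearest grid point
print(gravity_snap(15, 52, 50))  # (15, 50): pulled onto the line
```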
THREE-DIMENSIONAL DISPLAY METHODS
To obtain a display of a three-dimensional scene that has been modeled in world coordinates,
we must first set up a coordinate reference for the "camera". This coordinate reference defines
the position and orientation for the plane of the camera film, which is the plane we want to use
to display a view of the objects in the scene. Object descriptions are then transferred to the
camera reference coordinates and projected onto the selected display plane. We can then
display the objects in wireframe (outline) form, or we can apply lighting and surface-rendering
techniques to shade the visible surfaces.
In the 2D system, we use only two coordinates X and Y but in 3D, an extra coordinate Z is
added. 3D graphics techniques and their application are fundamental to the entertainment,
games, and computer-aided design industries. It is a continuing area of research in scientific
visualization.
Furthermore, 3D graphics components are now a part of almost every personal computer and,
although traditionally intended for graphics-intensive software such as games, they are
increasingly being used by other applications.
Parallel Projection
Parallel projection discards the z-coordinate, and parallel lines from each vertex on the object
are extended until they intersect the view plane. In parallel projection, we specify a direction
of projection instead of a center of projection.
The appearance of the solid object can be reconstructed from the major views.
In parallel projection, the distance from the center of projection to the projection plane is
infinite. In this type of projection, we connect the projected vertices by line segments which
correspond to connections on the original object.
Parallel projections are less realistic, but they are good for exact measurements. In this type
of projections, parallel lines remain parallel and angles are not preserved. Various types of
parallel projections are shown in the following hierarchy.
Orthographic Projection
In orthographic projection, the direction of projection is normal to the projection plane.
There are three types of orthographic projections −
• Front Projection
• Top Projection
• Side Projection
Oblique Projection
In oblique projection, the direction of projection is not normal to the projection plane. In
oblique projection, we can view the object better than in orthographic projection.
There are two types of oblique projections − Cavalier and Cabinet. The Cavalier projection
makes a 45° angle with the projection plane. The projection of a line perpendicular to the view
plane has the same length as the line itself in Cavalier projection; the foreshortening factors
for all three principal directions are equal.
The Cabinet projection makes a 63.4° angle with the projection plane. In Cabinet projection,
lines perpendicular to the viewing surface are projected at ½ their actual length. Both
projections are shown in the following figure −
Isometric Projections
Orthographic projections that show more than one side of an object are called axonometric
orthographic projections. The most common axonometric projection is the isometric
projection, in which the projection plane intersects each coordinate axis of the model
coordinate system at an equal distance. In this projection, parallelism of lines is preserved but
angles are not. The following figure shows an isometric projection −
Perspective Projection
In perspective projection, the distance from the center of projection to the projection plane is
finite, and the size of the object varies inversely with distance, which looks more realistic.
Distances and angles are not preserved, and parallel lines do not remain parallel; instead,
they all converge at a single point called the center of projection or projection reference point.
There are 3 types of perspective projections which are shown in the following chart.
• One point perspective projection is simple to draw.
• Two point perspective projection gives better impression of depth.
• Three point perspective projection is most difficult to draw.
The following figure shows all the three types of perspective projection −
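The contrast between the two projection families can be sketched for a single point; the view plane z = 0 and the distance parameter d are assumptions for illustration.

```python
def orthographic(x, y, z):
    """Parallel (orthographic) projection along z: discard the z-coordinate."""
    return x, y

def perspective(x, y, z, d=1.0):
    """Perspective projection: size varies inversely with distance,
    so divide by z (z > 0 assumed), scaled by the view-plane distance d."""
    return d * x / z, d * y / z

print(orthographic(2, 3, 10))  # (2, 3): unchanged regardless of depth
print(perspective(2, 3, 10))   # (0.2, 0.3): distant point shrinks
```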
DEPTH CUEING
A simple method for indicating depth in wireframe displays is to vary the intensity of objects
according to their distance from the viewing position. Lines closest to the viewing position are
displayed with the highest intensities, and lines farther away are displayed with decreasing
intensities.
Visible Line and Surface Identification
We can also clarify depth relationships in a wireframe display by identifying visible lines in
some way. The simplest method is to highlight the visible lines or to display them in a different
color. Another technique, commonly used for engineering drawings, is to display the
nonvisible lines as dashed lines. Another approach is to simply remove the nonvisible lines.
Surface Rendering
Added realism is attained in displays by setting the surface intensity of objects according to
the lighting conditions in the scene and according to assigned surface characteristics. Lighting
specifications include the intensity and positions of light sources and the general background
illumination required for a scene. Surface properties of objects include degree of transparency
and how rough or smooth the surfaces are to be. Procedures can then be applied to generate the
correct illumination and shadow regions for the scene.
Exploded and Cutaway View
Exploded and cutaway views of such objects can be used to show the internal structure
and relationships of the object's parts.
Three-Dimensional and Stereoscopic View
Three-dimensional views can be obtained by reflecting a raster image from a vibrating flexible
mirror. The vibrations of the mirror are synchronized with the display of the scene on the CRT.
As the mirror vibrates, the focal length varies so that each point in the scene is projected to a
position corresponding to its depth.
Stereoscopic devices present two views of a scene: one for the left eye and the other for the
right eye.
Graphics Packages
A graphics package is an application that can be used to create and manipulate images on a
computer.
There are two main types of graphics package:
• painting packages
• drawing packages
Painting packages
• A painting package produces images by changing the colour of pixels on the screen.
• These are coded as a pattern of bits to create a bitmapped graphics file.
• Bitmapped graphics are used for images such as scanned photographs or pictures taken
with a digital camera.
Advantages
• The main advantage offered by this type of graphic is that individual pixels can be
changed which makes very detailed editing possible.
Disadvantages of painting packages
• Individual parts of an image cannot be resized; only the whole picture can be increased
or decreased in size.
• Information has to be stored about every pixel in an image, which produces files that
use large amounts of backing storage space.
Examples of graphics packages that produce bitmapped images include:- MS Paint, PC
Paintbrush, Adobe Photoshop and JASC’s Paint Shop Pro.
Drawing packages
• A drawing package produces images that are made up from coloured lines and shapes
such as circles, squares and rectangles.
• When an image is saved it is stored in a vector graphics file as a series of instructions,
which can be used to recreate it.
Main advantages of vector graphics are:
• They use less storage space than bitmap graphics;
• Each part of an image is treated as a separate object, which means that individual parts
can be easily modified.
Disadvantages of drawing packages
• They don’t look as realistic as bitmap graphics.
Examples of drawing graphics packages include CorelDraw, Micrographix Designer and
computer aided design (CAD) packages such as AutoCAD.
3D Geometric and Modeling Transformations: Translation – Scaling – Rotation – Other
Transformations. Visible Surface Detection Methods: Classification of Visible Surface
Detection Algorithm –Back face Detection – Depth-Buffer Method – A Buffer Method – Scan-
Line Method –Applications of Computer Graphics.
The point shown in the figure is (x, y, z). It becomes (x1, y1, z1) after translation. Tx, Ty, Tz
are the components of the translation vector.
Example: A point has coordinates (5, 6, 7). The translation is by 3 units in the x-direction, 3
units in the y-direction, and 2 units in the z-direction. Shift the object and find the coordinates
of the new position.
Solution: Co-ordinate of the point are (5, 6, 7)
Translation vector in x direction = 3
Translation vector in y direction = 3
Translation vector in z direction = 2
The translation matrix is

    T = | 1  0  0  0 |
        | 0  1  0  0 |
        | 0  0  1  0 |
        | 3  3  2  1 |

Multiplying the homogeneous coordinates of the point by the translation matrix:

    [5 6 7 1] × T = [5+0+0+3  0+6+0+3  0+0+7+2  0+0+0+1] = [8 9 9 1]

x becomes x1 = 8
y becomes y1 = 9
z becomes z1 = 9
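The worked example can be checked with a short NumPy sketch using the same row-vector convention (translation components in the last row of the matrix):

```python
import numpy as np

# Homogeneous translation matrix with tx = 3, ty = 3, tz = 2.
T = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [3, 3, 2, 1]])

p = np.array([5, 6, 7, 1])  # point (5, 6, 7) in homogeneous form
print(p @ T)                # [8 9 9 1] -> new position (8, 9, 9)
```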
Scaling
Scaling is used to change the size of an object. The size can be increased or decreased. Three
scaling factors are required: Sx, Sy, and Sz.
Sx=Scaling factor in x- direction
Sy=Scaling factor in y-direction
Sz=Scaling factor in z-direction
Reflection
It is also called a mirror image of an object. For reflection, an axis or a plane of reflection is
selected. Three-dimensional reflections are similar to two-dimensional ones; the reflection is
180° about the given axis. For reflection, a plane is selected (xy, xz, or yz). The following
matrix shows reflection relative to the XY plane, which leaves x and y unchanged and negates
z (the matrices for the other two planes are analogous).
Reflection relative to XY plane −

    | 1  0  0  0 |
    | 0  1  0  0 |
    | 0  0 -1  0 |
    | 0  0  0  1 |
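A short sketch of the scaling and XY-plane reflection matrices, in the same row-vector convention as the translation example above (the sample point is illustrative):

```python
import numpy as np

def scaling_matrix(sx, sy, sz):
    """Homogeneous scaling matrix with factors Sx, Sy, Sz on the diagonal."""
    return np.diag([sx, sy, sz, 1.0])

# Reflection relative to the XY plane: x and y unchanged, z negated.
REFLECT_XY = np.diag([1.0, 1.0, -1.0, 1.0])

p = np.array([5.0, 6.0, 7.0, 1.0])
print(p @ scaling_matrix(2, 2, 2))  # [10. 12. 14.  1.]: size doubled
print(p @ REFLECT_XY)               # [ 5.  6. -7.  1.]: mirrored in XY plane
```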
Object-space methods vs. image-space methods (from the classification of visible-surface
detection algorithms):
• Object-space methods were developed for vector graphics systems; image-space
methods are developed for raster devices.
• Vector displays used for object-space methods have a large address space; raster
systems used for image-space methods have a limited address space.
• Object precision is used for applications where speed is required; image-space methods
are suitable for applications where accuracy is required.
• In object-space methods, if the number of objects in the scene increases, computation
time also increases; in image-space methods, complexity increases with the complexity
of the visible parts.
Visible-surface algorithms exploit several types of coherence:
1. Edge coherence: The visibility of an edge changes only when it crosses another edge or
penetrates a visible edge.
2. Object coherence: Each object is considered separate from the others. In object coherence,
comparison is done using whole objects instead of edges or vertices. If object A is entirely
farther away than object B, there is no need to compare their edges and faces.
3. Face coherence: Faces or polygons are generally small compared with the size of the image,
so surface properties vary smoothly across a face and computations for one part of a face can
often be reused for the rest.
4. Area coherence: A group of adjacent pixels is often covered by the same visible face.
5. Depth coherence: The locations of different polygons are separated on the basis of depth.
Once the depth of a surface at one point is calculated, the depth of points on the rest of the
surface can often be determined by a simple difference equation.
6. Scan-line coherence: The object is scanned one scan line at a time, and the intercepts of the
next scan line usually differ little from those of the current one.
7. Frame coherence: It is used for animated objects, when there is little change in the image
from one frame to the next.
8. Implied edge coherence: If one face penetrates another, the line of intersection can be
determined from two points of intersection.
Algorithms used for hidden line surface detection
When we view a picture containing non-transparent objects and surfaces, we cannot see those
objects that are behind objects closer to the eye. We must remove these hidden surfaces to get
a realistic screen image. The identification and removal of these surfaces is called the
hidden-surface problem.
There are two approaches for removing hidden-surface problems − the object-space method
and the image-space method. The object-space method is implemented in the physical
coordinate system, and the image-space method is implemented in the screen coordinate
system.
When we want to display a 3D object on a 2D screen, we need to identify those parts of a screen
that are visible from a chosen viewing position.
Depth Buffer (Z-Buffer) Method
This method was developed by Catmull. It is an image-space approach. The basic idea is to
test the z-depth of each surface to determine the closest visible surface.
In this method each surface is processed separately, one pixel position at a time across the
surface. The depth values for a pixel are compared, and the closest surface determines the
color to be displayed in the frame buffer.
It is applied very efficiently to polygon surfaces. Surfaces can be processed in any order. So
that closer polygons override farther ones, two buffers, named the frame buffer and the depth
buffer, are used.
The depth buffer is used to store depth values for each (x, y) position as surfaces are
processed, with 0 ≤ depth ≤ 1.
The frame buffer is used to store the intensity value of the color at each (x, y) position.
The z-coordinates are usually normalized to the range [0, 1]. A z-value of 0 indicates the back
clipping plane, and a z-value of 1 indicates the front clipping plane.
Algorithm
Step 1 − Set the buffer values −
depthbuffer(x, y) = 0
framebuffer(x, y) = background color
Step 2 − Process each polygon (one at a time) −
For each projected (x, y) pixel position of a polygon, calculate the depth z.
If z > depthbuffer(x, y):
compute the surface color,
set depthbuffer(x, y) = z,
framebuffer(x, y) = surfacecolor(x, y)
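A minimal sketch of these steps follows; surfaces are modeled here as flat depth patches over pixel rectangles, an assumed representation chosen only to keep the example short.

```python
import numpy as np

W, H = 8, 8
depthbuffer = np.zeros((H, W))                      # step 1: depths to 0 (back)
framebuffer = np.full((H, W), "bg", dtype=object)   # step 1: background color

def process_polygon(x0, x1, y0, y1, z, color):
    """Step 2: for each projected (x, y) pixel of the polygon, test depth z."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z > depthbuffer[y, x]:   # closer than what is stored (1 = front)
                depthbuffer[y, x] = z
                framebuffer[y, x] = color

process_polygon(0, 6, 0, 6, 0.4, "red")   # farther surface, drawn first
process_polygon(2, 8, 2, 8, 0.7, "blue")  # nearer surface overwrites overlap
print(framebuffer[3, 3])  # "blue": the closer surface wins at this pixel
```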
Advantages
• It is easy to implement.
• It reduces the speed problem if implemented in hardware.
• It processes one object at a time.
Disadvantages
• It requires large memory.
• It is a time-consuming process.
Scan-Line Method
It is an image-space method to identify visible surfaces. This method keeps depth information
for only a single scan line. In order to acquire one scan line of depth values, we must group
and process all polygons intersecting a given scan line at the same time before processing the
next scan line. Two important tables, the edge table and the polygon table, are maintained for
this.
The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse slope
of each line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other
surface data, and possibly pointers to the edge table.
To facilitate the search for surfaces crossing a given scan-line, an active list of edges is formed.
The active list stores only those edges that cross the scan-line in order of increasing x. Also a
flag is set for each surface to indicate whether a position along a scan-line is either inside or
outside the surface.
Pixel positions across each scan-line are processed from left to right. At the left intersection
with a surface, the surface flag is turned on and at the right, the flag is turned off. You only
need to perform depth calculations when multiple surfaces have their flags turned on at a certain
scan-line position.
Back-Face Detection
A fast and simple object-space method for identifying the back faces of a polyhedron is based
on "inside-outside" tests. A point (x, y, z) is "inside" a polygon surface with plane parameters
A, B, C, and D if Ax + By + Cz + D < 0. When an inside point is along the line of sight to the
surface, the polygon must be a back face (we are inside that face and cannot see the front of it
from our viewing position).
We can simplify this test by considering the normal vector N to a polygon surface, which has
Cartesian components (A, B, C).
In general, if V is a vector in the viewing direction from the eye (or "camera") position, then
this polygon is a back face if
V · N > 0
Furthermore, if object descriptions are converted to projection coordinates and the viewing
direction is parallel to the viewing z-axis, then
V = (0, 0, Vz) and V · N = Vz · C
so that we only need to consider the sign of C, the z component of the normal vector N.
In a right-handed viewing system with viewing direction along the negative zv axis, the
polygon is a back face if C < 0. Also, we cannot see any face whose normal has z component
C = 0, since the viewing direction is grazing that polygon. Thus, in general, we can label any
polygon as a back face if its normal vector has a z component value
C ≤ 0
Similar methods can be used in packages that employ a left-handed viewing system. In these
packages, plane parameters A, B, C, and D can be calculated from polygon vertex coordinates
specified in a clockwise direction (unlike the counterclockwise direction used in a
right-handed system).
Also, back faces have normal vectors that point away from the viewing position and are
identified by C ≥ 0 when the viewing direction is along the positive zv axis. By examining
parameter C for the different planes defining an object, we can immediately identify all the
back faces.
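The V · N test is a one-liner; the vectors below are illustrative, with the viewing direction chosen along the negative z axis as in the right-handed case above.

```python
import numpy as np

def is_back_face(normal, view_dir):
    """normal = (A, B, C); view_dir points from the eye into the scene.
    The polygon is a back face if V . N > 0."""
    return np.dot(view_dir, normal) > 0

N = np.array([0.0, 0.0, -1.0])  # normal with C < 0
V = np.array([0.0, 0.0, -1.0])  # viewing along the negative z axis
print(is_back_face(N, V))       # True: V.N = 1 > 0, so this is a back face
```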
A-Buffer Method
The A-buffer method is an extension of the depth-buffer method. It is a visibility-detection
method developed at Lucasfilm Studios for the rendering system Renders Everything You
Ever Saw (REYES).
The A-buffer expands on the depth buffer method to allow transparencies. The key data
structure in the A-buffer is the accumulation buffer.
If depth >= 0, the number stored at that position is the depth of a single surface overlapping the
corresponding pixel area. The intensity field then stores the RGB components of the surface
color at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field
then stores a pointer to a linked list of surface data. The surface buffer in the A-buffer includes
−
• RGB intensity components
• Opacity Parameter
• Depth
• Percent of area coverage
• Surface identifier
The algorithm proceeds just like the depth buffer algorithm. The depth and opacity values are
used to determine the final color of a pixel.
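One accumulation-buffer entry might be represented as follows; the field names are illustrative, mirroring the list of surface fields above rather than any specific implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SurfaceData:
    rgb: Tuple[float, float, float]  # RGB intensity components
    opacity: float                   # opacity parameter (1.0 = fully opaque)
    depth: float                     # depth of this surface fragment
    coverage: float                  # percent of pixel area coverage
    surface_id: int                  # surface identifier

@dataclass
class ABufferPixel:
    depth: float                     # >= 0: depth of a single overlapping surface
    surfaces: List[SurfaceData] = field(default_factory=list)
    # depth < 0 signals that 'surfaces' holds multiple contributions
```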
Application of Computer Graphics
1. Education and Training: Computer-generated models of physical, financial, and
economic systems are often used as educational aids. Models of physical systems,
physiological systems, population trends, or equipment can help trainees to understand the
operation of the system.
For some training applications, particular systems are designed. For example Flight Simulator.
Flight Simulator: It helps in giving training to the pilots of airplanes. These pilots spend much
of their training not in a real aircraft but on the ground at the controls of a Flight Simulator.
Advantages:
1. Fuel Saving
2. Safety
3. Ability to familiarize the training with a large number of the world's airports.
2. Use in Biology: Molecular biologists can display pictures of molecules and gain insight
into their structure with the help of computer graphics.
3. Computer-Generated Maps: Town planners and transportation engineers can use
computer-generated maps which display data useful to them in their planning work.
4. Architect: Architects can explore alternative solutions to design problems at an interactive
graphics terminal. In this way, they can test many more solutions than would be possible
without the computer.
5. Presentation Graphics: Example of presentation Graphics are bar charts, line graphs, pie
charts and other displays showing relationships between multiple parameters. Presentation
Graphics is commonly used to summarize
o Financial Reports
o Statistical Reports
o Mathematical Reports
o Scientific Reports
o Economic Data for research reports
o Managerial Reports
o Consumer Information Bulletins
o And other types of reports
6. Computer Art: Computer graphics is also used in the field of commercial art. It is used
to generate television and advertising commercials.
7. Entertainment: Computer Graphics are now commonly used in making motion pictures,
music videos and television shows.
8. Visualization: It is used by scientists, engineers, medical personnel, and business analysts
for the study of large amounts of information.
9. Educational Software: Computer Graphics is used in the development of educational
software for making computer-aided instruction.
10. Printing Technology: Computer Graphics is used for printing technology and textile
design.
Example of Computer Graphics Packages:
1. LOGO
2. COREL DRAW
3. AUTO CAD
4. 3D STUDIO
5. CORE
6. GKS (Graphics Kernel System)
7. PHIGS
8. CGM (Computer Graphics Metafile)
9. CGI (Computer Graphics Interface)