
Computer Graphics Chapter 1

Introduction to graphics applications

1
What is computer graphics?

 The computer is an information-processing machine: a tool for storing, manipulating, and correlating data. There are many ways to communicate the processed information to the user.
 Computer graphics has emerged as a sub-field of computer science that studies methods for digitally synthesizing and manipulating visual content. In short, it means drawing pictures on a computer screen with the help of programming.

2
Cont….
 The picture or graphics object may be an engineering drawing, a business graph, an architectural structure, a single frame from an animated movie, or a machine part illustrated for a service manual.
 Computer graphics is one of the most effective and most commonly used ways to communicate processed information to the user. It displays the information in the form of graphics objects such as pictures, charts, graphs, and diagrams instead of simple text. Thus we can say that computer graphics makes it possible to express data in pictorial form.

3
Cont…
Therefore it is important to understand:
 How are pictures or graphics objects represented in computer graphics?
 How are pictures or graphics objects prepared for presentation?
 How are previously prepared pictures or graphics presented?
 How is interaction with the picture or graphics object accomplished?

4
How are images represented?
 In computer graphics, pictures or graphics objects are represented as a collection of discrete picture elements called pixels. A pixel is the smallest visible element that the display hardware can put on the screen and that we can control. Control is achieved by setting the intensity and color of the pixels that compose the screen.

5
Graphics: Terminology

 Pixels -- picture elements in digital images
 Image resolution -- the number of pixels in a digital image (higher resolution generally captures more detail)
 width x height (e.g., 640 x 480)
 Most common aspect ratio: 3:4 (lines:columns), i.e., 4:3 (width:height)
 Dots (pixels) per inch, dpi or ppi (e.g., 72 dpi)
 Bitmap: the two-dimensional array of pixel values that represents the graphics/image data
 Bits/pixel -- also contributes to the quality of the image

6
What is an image?
A rectangular grid of pixels, e.g., a 5 x 5 grid.
If we are using 1 bit per cell, how many bits are needed to represent the picture? (5 x 5 = 25 bits.)
What is a pixel?
A point/cell in the image that contains color data. Each pixel is made up of bits.
Resolution: the detail contained in an image, defined by the number of pixels.
5 x 5 grid:

[0,0] [0,1] [0,2] [0,3] [0,4]

[1,0] [1,1] [1,2] [1,3] [1,4]

[2,0] [2,1] [2,2] [2,3] [2,4]

[3,0] [3,1] [3,2] [3,3] [3,4]

[4,0] [4,1] [4,2] [4,3] [4,4]


Digital images

George Seurat: A Sunday Afternoon on the Island of La Grande Jatte (1884-1886)
Representing Color

Red Green Blue


Basics of Color (…Contd)
 The Human Retina
 The eye functions on the same principle as a camera.
 Each photoreceptor is either a rod or a cone.
 The rods contain the elements that are sensitive to light intensities; rods are not sensitive to color.
 Cones come in 3 types: red, green, and blue. Each responds differently to various frequencies of light. The following figure shows the spectral-response functions of the cones and the luminous-efficiency function of the human eye.

10
Cones and Color

 The cones provide humans with vision during daylight and are believed to be separated into three types, each more sensitive to a particular range of wavelengths.
 The color signal to the brain comes from the response of the 3 cones to the spectra being observed. That is, the signal consists of 3 numbers:

    R = ∫ E(λ) S_R(λ) dλ,   G = ∫ E(λ) S_G(λ) dλ,   B = ∫ E(λ) S_B(λ) dλ

where E is the light (its spectral power distribution) and S is the sensitivity function of each cone type.

11
Color Composition

 A color can be specified as the sum of three colors, so colors form a 3-dimensional vector space.
 The following figure shows the amounts of the three primaries needed to match all the wavelengths of the visible spectrum.

12
Color Models for Images
RGB Additive Model:
 CRT displays have three phosphors (RGB) which produce a combination of wavelengths when excited with electrons.
 A color image is a 2-D array of (R,G,B) integer triplets. These triplets encode how much the corresponding phosphor should be excited in devices such as a monitor.

CMY Subtractive Model:
 Cyan, Magenta, and Yellow (CMY) are the complementary colors of RGB.
 The CMY model is mostly used in printing devices, where the color pigments on the paper absorb certain colors (e.g., no red light is reflected from cyan ink).

[Figure: the RGB color cube with Black at (0,0,0), White at (1,1,1), and Red, Green, Blue, Cyan, Magenta, Yellow at the other corners; and the CMY color cube with White at (0,0,0) and Black at (1,1,1).]
13
Color in Images and Video
Basics of Color
 Light and Spectra
 Visible light is an electromagnetic wave in the 400 nm - 700 nm range.
 Most light we see is not a single wavelength; it is a combination of many wavelengths.

14
Representing Color

 Computer graphics/Images: RGB

R: 0 to 255, G: 0 to 255, B: 0 to 255
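As a concrete illustration (not from the slides), a 24-bit RGB pixel can be modeled in C as three bytes, one per channel; the type and function names below are hypothetical:

#include <stdint.h>

/* One 24-bit RGB pixel: one byte (0-255) per channel. */
typedef struct {
    uint8_t r, g, b;
} Pixel24;

/* Pack the three channels into a single 24-bit value, 0xRRGGBB. */
static uint32_t pack_rgb(Pixel24 p)
{
    return ((uint32_t)p.r << 16) | ((uint32_t)p.g << 8) | (uint32_t)p.b;
}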


Binary Images

 Remember, everything on a computer is stored as 0s and 1s.
 Thus, we must interpret these numbers as different forms of data.
 One bit (binary digit) can be either a 0 or a 1.
 Therefore, it can represent only two possibilities: hot or cold, black or white, on or off, etc.

000000110011100111001
100001100111010000111
000111000110001111000
011100011110000111000
110111001110011011000
101001100010101000110
001010111011101000110
100101010100001110000
101010100000000001110

1 bit per pixel
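To make the "1 bit per pixel" idea concrete, here is a minimal C sketch of reading one pixel from such an image. The row-major, 8-pixels-per-byte, most-significant-bit-first layout is an assumption for illustration, not something the slide specifies:

#include <stdint.h>

/* Return the pixel (0 or 1) at (row, col) of a 1-bit-per-pixel image
   packed 8 pixels per byte, MSB first, rows padded to whole bytes. */
static int get_bit_pixel(const uint8_t *bits, int width, int row, int col)
{
    int bytes_per_row = (width + 7) / 8;
    uint8_t byte = bits[row * bytes_per_row + col / 8];
    return (byte >> (7 - col % 8)) & 1;
}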


Bit Color Depth

1 bit: two values (1 = ON, 0 = OFF)
2 bits: four values (00, 01, 10, 11), which can encode different shades of gray
4 bits: 16 values
8 bits: 256 values
24 bits: True Color, which can represent more than 16.7 million unique colors -- more colors than the human eye can distinguish!
Raster vs Vector Graphics
 Raster graphics: made up of pixels
Resolution dependent
Cannot be scaled without losing quality
Can represent photo realistic elements better
than vector graphics

 Vector graphics: geometric primitives, composed of paths
Mathematical equations
Resolution independent
Can be scaled to any size without losing quality
Best for cartoon-like images
3D modeling
Raster vs Vector Graphics

 Raster graphics - Image formats:


BMP
GIF
JPEG
PNG
 Vector graphics - Image formats:
Flash
Scalable vector graphics (SVG)
CDR (corelDraw)
AI (Adobe Illustrator)
Raster Graphics
 BMP (bitmaps)
 Simple structure
 Pixel color values left to right, top to bottom
 Can be compressed using run-length encoding (a minimal sketch of the idea follows this slide)

 GIF (graphics interchange format)
 8-bit palette (any 256 colors)
 Small size
 Simple images: line art, shapes, logos
 Lossless compression: efficient at covering areas of a single color

 JPEG (joint photographic experts group)
 Is a compression method, stored in JFIF (JPEG file interchange format)
 Lossy compression: averages color hues over short distances
 Takes advantage of limitations of our visual system, discarding invisible information
 Compression ratio is usually around 0.1
 Structure: a sequence of segments, each a marker followed by the marker's definition
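Since run-length encoding was mentioned for BMP above, here is a minimal sketch of the general idea in C, replacing each run of identical bytes with a (count, value) pair. This illustrates the concept only; it is not the actual BMP RLE8 format:

#include <stddef.h>
#include <stdint.h>

/* Encode src[0..n) as (count, value) pairs. Returns the number of bytes
   written to dst, which must hold up to 2*n bytes in the worst case. */
static size_t rle_encode(const uint8_t *src, size_t n, uint8_t *dst)
{
    size_t out = 0;
    for (size_t i = 0; i < n; ) {
        uint8_t value = src[i];
        size_t run = 1;
        while (i + run < n && src[i + run] == value && run < 255)
            run++;
        dst[out++] = (uint8_t)run;   /* run length  */
        dst[out++] = value;          /* pixel value */
        i += run;
    }
    return out;
}

Long runs of a single color compress to just two bytes, which is why the technique suits simple images with large uniform areas.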
Vector Graphics
 SVG (Scalable Vector Graphics)
Text-based scripts, e.g.:
<rect class="bluebox" x="10" y="0" width="460" height="50"/>

Text compression
Compression ratio can be as small as 0.2
Great for web-based imaging
Monochrome vs. Grayscale

Monochrome (1-bit image):
 Each pixel is stored as a single bit (0 or 1)
 A 640 x 480 monochrome image requires 37.5 KB of storage.

Grayscale (8-bit gray-level image):
 Each pixel is usually stored as a byte (value between 0 and 255)
 A 640 x 480 grayscale image requires over 300 KB of storage.
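The storage figures above follow directly from width x height x bits-per-pixel; a small C sketch of the arithmetic (using 1 KB = 1024 bytes):

#include <stdio.h>

/* Storage in bytes for a width x height image at a given bit depth. */
static long image_bytes(long width, long height, long bits_per_pixel)
{
    return width * height * bits_per_pixel / 8;
}

int main(void)
{
    printf("monochrome: %.1f KB\n", image_bytes(640, 480, 1) / 1024.0); /* 37.5 KB */
    printf("grayscale:  %.1f KB\n", image_bytes(640, 480, 8) / 1024.0); /* 300 KB  */
    return 0;
}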

22
Color Images (24 vs. 8 bit)

24-bit:
 Each pixel is represented by three bytes (e.g., RGB)
 Supports 256 x 256 x 256 possible combined colors (16,777,216)
 A 640 x 480 24-bit color image would require 921.6 KB of storage
 Many 24-bit color images are stored as 32-bit images; the extra byte of data for each pixel is used to store an alpha value representing special-effect information

8-bit:
 One byte for each pixel
 Supports 256 out of the millions of colors possible; acceptable color quality
 Requires Color Look-Up Tables (LUTs) -- a palette
 A 640 x 480 8-bit color image requires 307.2 KB of storage (the same as 8-bit grayscale)
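As a hedged sketch of how a color look-up table works: an 8-bit image stores one palette index per pixel, and that index selects an RGB triple from a 256-entry table. The names below are illustrative, not from any particular API:

#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB;

/* 256-entry color look-up table (palette). */
static RGB palette[256];

/* The displayed color of a pixel is found by indexing the LUT with the
   pixel's stored byte, rather than storing a full RGB triple per pixel. */
static RGB pixel_color(const uint8_t *image, int width, int x, int y)
{
    return palette[image[y * width + x]];
}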
23
24-bit color
(60KB jpeg)

8-bit color
(30KB gif)

24
System Independent Formats
GIF (GIF87a, GIF89a):
 Graphics Interchange Format (GIF), devised by the UNISYS Corp. and CompuServe, initially for transmitting graphical images over phone lines via modems.
 Uses the Lempel-Ziv-Welch (LZW) compression algorithm.
 Supports only 8-bit (256-color) images.
 Supports interlacing.
 GIF89a supports simple animation.

JPEG:
 A standard for photographic image compression created by the Joint Photographic Experts Group.
 Takes advantage of limitations in the human vision system to achieve high rates of compression.
 Lossy compression, which allows the user to set the desired level of quality/compression.

25
…Contd
TIFF:
 Tagged Image File Format (TIFF) stores many different types of images (e.g., monochrome, grayscale, 8-bit & 24-bit RGB, etc.)
 Developed by the Aldus Corp. in the 1980's and later supported by Microsoft.
 TIFF is a lossless format (when not utilizing the new JPEG tag, which allows for JPEG compression).
 It does not provide any major advantages over JPEG, is not as user-controllable as it appears to be, and is declining in popularity.

Graphics Animation Files:
 FLC -- main animation or moving-picture file format, originally created by Animation Pro
 FLI -- similar to FLC
 GL -- better-quality moving pictures, usually large file sizes

PostScript / PDF:
 A typesetting language which includes text as well as vector/structured graphics and bit-mapped images
 Used in several popular graphics programs (Illustrator, FreeHand)
 Does not provide compression; files are often large

26
System Dependent Formats
Windows (BMP):
 A system standard graphics file format for Microsoft Windows
 It is capable of storing 24-bit bitmap images
 Used in PC Paintbrush and other programs

X Windows (XBM):
 Primary graphics format for the X Window system
 Supports 24-bit color bitmap
 Many public domain graphic editors, e.g., xv
 Used in X Windows for storing icons, pixmaps, backdrops, etc.

Macintosh (PAINT, PICT):
 PAINT was originally used in the MacPaint program, initially only for 1-bit monochrome images.
 PICT format is used in MacDraw (a vector-based drawing program) for storing structured graphics.
27
PNG: The Future
 The Portable Network Graphics (PNG) format was designed to replace the
older and simpler GIF format and, to some extent, the much more complex
TIFF format.
 Advantages over GIF:
 Alpha channels (variable transparency)
Also known as a mask channel, it is simply a way to associate variable
transparency with an image.
 Gamma correction (cross-platform control of image brightness)
 Two-dimensional interlacing (a method of progressive display)
GIF uses 1-D interlacing. (see the difference in the example at
http://data.uta.edu/~ramesh/multimedia/examples/interlacing.html )
 Better Compression (5-25% better)
 Features:
 Supports three main image types: truecolor, grayscale, and palette-based ("8-bit"). JPEG supports only the first two; GIF only the third.
 Shortcomings:
 No Animation

28
Applications of Computer Graphics
Applications of Computer Graphics are:
1. Display of information
2. Design
3. Simulation
4. User Interfaces

1. Displaying Information
- Architecture - floor plans of buildings
- Geographical information - maps
- Medical diagnoses - medical images
- Design engineering - chip layouts
- etc.

2. Design
- There are two types of design problems: overdetermined (possessing no optimal solution) or underdetermined (having multiple solutions).
- Thus, design is an iterative process.
- Computer-aided design (CAD) tools assist architects and designers of mechanical parts with their designs.
29
Applications of Computer Graphics - cont
3. Simulations
- Real-life examples: flight simulators, command-and-conquer simulators, motion pictures, virtual reality, and medical imaging.

4. User Interface
- Interaction with the computer via a visual paradigm that includes windows, icons, menus, and pointing devices.

A Graphics System
A computer graphics system has the following components:
1. Processor
2. Memory
3. Frame buffer
4. Output devices
5. Input devices

30
A Typical Graphics System

[Figure: a typical graphics system - the image is formed in the output device]

31
Pixels and the Frame Buffer
Virtually all graphics systems are raster-based. A picture is produced as an array - the raster - of picture elements, or pixels. Each pixel corresponds to a small area of the image.

Pixels are stored in a part of memory called the frame buffer.

A) Image of Yeti the cat. B) Detail of the area around one eye, showing individual pixels.

32
A Sample Image in Portable Gray Map (PGM) Format
P2
# test
4 4
255
0 50 127 255
255 200 127 0
10 60 100 250
20 70 80 200

 This is a 4-by-4, 8-bit image: 255 represents white, 0 represents black, and the other numbers are used for different shades of gray.
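A minimal sketch of a C routine that writes such an ASCII (P2) PGM file, matching the layout shown above; error handling is kept to a bare minimum:

#include <stdio.h>

/* Write an 8-bit grayscale image as an ASCII PGM (P2) file. */
static int write_pgm(const char *path, const unsigned char *pix,
                     int width, int height)
{
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    fprintf(f, "P2\n# test\n%d %d\n255\n", width, height);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++)
            fprintf(f, "%d ", pix[y * width + x]);
        fprintf(f, "\n");
    }
    fclose(f);
    return 0;
}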

33
Pixels and the Frame Buffer-cont.
Frame buffers may be implemented with Video Random-access Memory
(VRAM) or Dynamic Random-access Memory (DRAM).

The depth of the frame buffer defines the number of bits that are used for each pixel, and hence the number of colors.

In full-color (true color or RGB) systems, there are 24 (or more) bits per pixel.

A 24 bit system can have up to 16,777,216 different colors.


The resolution is the number of pixels in the frame buffer and determines the
detail that you can see in the image.

The conversion of geometric entities to pixel assignments in the frame buffer is known as rasterization, or scan conversion.
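To make the frame-buffer idea concrete, here is a hedged sketch of a software frame buffer in C; real systems keep this memory in dedicated hardware, and the names here are illustrative:

#include <stdint.h>

/* A software frame buffer: width x height pixels, 32 bits (RGBA) each. */
typedef struct {
    int width, height;
    uint32_t *pixels;   /* row-major, one 32-bit value per pixel */
} FrameBuffer;

/* Rasterization ends here: assigning a color to one pixel. */
static void set_pixel(FrameBuffer *fb, int x, int y, uint32_t rgba)
{
    if (x >= 0 && x < fb->width && y >= 0 && y < fb->height)
        fb->pixels[y * fb->width + x] = rgba;
}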
34
Output Devices
The dominant type of display is the cathode-ray tube (CRT).

How does it work?
- Electrons strike the phosphor coating on the tube, and light is emitted.
- The direction of the beam is controlled by the two pairs of deflection plates.
- The output of the computer (digital) is converted to voltage (analog) across the deflection plates using a digital-to-analog converter.

Since the beam can be moved directly from one position to any other position, this device is sometimes called a random-scan or calligraphic CRT.
35
Output Devices – cont.
A typical CRT emits light for only a short time - a few milliseconds - after the phosphor is excited by the electron beam.
• Human eyes see a steady image on most CRT displays when the same path is retraced, or refreshed, by the beam at least 50 times per second (50 Hz).
• In a raster system, the graphics system takes pixels from the frame buffer and
displays them as points on the surface of the display. The rate of display must
be high enough to avoid flicker. This is known as the refresh rate.
• There are two fundamental ways that pixels are displayed:
1. noninterlaced: pixels are displayed row by row, at a refresh rate of 50-85 times per second.
2. interlaced: odd rows and even rows are refreshed alternately. For a system operating at 60 Hz, the entire display is redrawn 30 times per second.

36
Output Devices – cont.
A typical color CRT has three differently colored phosphors (red, green, and blue) arranged in small groups.

A common style arranges the phosphors in triangular groups called triads, each
consisting of three phosphors (RGB).
The shadow mask ensures that an electron beam excites only phosphors of the
same color.
37
Input Devices
Most graphics systems provide a keyboard and at least one other input device.
The most common devices are the mouse, the joystick, and the data tablet.

Often called pointing devices, these devices allow a user to indicate a particular
location on the display.

Images: Physical and Synthetic


A computer-generated image is synthetic, or artificial, in the sense that the object being imaged does not exist physically.
To understand how synthetic images are generated, we first look at the ways traditional imaging systems such as cameras form images.

38
Objects and Viewers
Our world is a world of three-dimensional (3-D) objects. We refer to the location of a point on an object in terms of some convenient reference coordinate system.
There is a fundamental link between the physics and the mathematics of image
formation. We exploit this link in our development of computer image
formation.

There are two entities that are part of image formation: object and viewer. The
object exists in space independent of any image-formation process, and of any
viewer.
In computer graphics, we form objects by specifying the positions in space of
various geometric primitives, such as points, lines, and polygons.
In most graphics systems, a set of locations in space, or of vertices, is sufficient
to define, or approximate, most objects.
For example: A line can be defined by two vertices.
39
How many circles do you see on the screen?

How many circles are there on the screen?

40
Objects and Viewers – cont.
To form an image, we must have someone, or something, that is viewing our
objects, be it a human, a camera, or a digitizer. It is the viewer that forms the
image of our objects.
In human visual system, the image is formed on the back of the eye, on the
retina. In a camera, the image is formed on the film plane.
An image may be confused with an object.

[Figure: one object and its images as viewed by A, as viewed by B, and as viewed by C]


41
How many circles are there on the screen?

Or you could see:

42
Objects and Viewers – cont.
The object and the viewer exist in a three-dimensional world. The image that they define - what we find on the film - is two-dimensional.

The process by which the specification of the object is combined with the specification of the viewer to produce a two-dimensional image is the essence of image formation.

43
Light and Images
In our previous discussion of image formation, we did not talk about light. If there were no light source, the object would be dark, and there would be nothing visible in the image.

Light usually strikes various parts of the object, and a portion of the reflected light enters the camera through the lens.
The details of the interaction between light and the surfaces of the object determine how much light enters the camera.
Light is electromagnetic radiation, characterized by its wavelength or frequency.

44
Light and Images – cont.
The electromagnetic spectrum includes radio waves, infrared (heat), and a portion that causes a response in our visual systems: the visible-light spectrum.

The color of a light source is determined by the energy that it emits at various wavelengths. In graphics, we use geometric optics, which models light sources as emitters of light energy that have a fixed rate, or intensity.
An ideal point source emits energy from a single location at one or more frequencies equally in all directions. We also assume purely monochromatic lighting - a source of a single frequency.
45
Ray Tracing
We can model an image by following light from a source. The light that reaches the viewer determines the image that is formed.

 A ray is a semi-infinite line that emanates from a point and travels to infinity in a particular direction.
 Light travels in straight lines; thus only a portion of the light will get to the viewer.
 The light from the light source can interact with the surface of the object differently, depending on the orientation and the luminosity of the surface.
 A diffuse surface scatters light in all directions, while a transparent surface passes light through, which may result in the light being bent, or refracted.
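A ray as described here is naturally expressed as a parametric line p(t) = origin + t * direction with t >= 0; a minimal C sketch (the types and names are illustrative):

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3 origin;     /* the point the ray emanates from  */
    Vec3 direction;  /* the direction it travels, t >= 0 */
} Ray;

/* The point on the ray at parameter t: origin + t * direction. */
static Vec3 ray_at(Ray r, float t)
{
    Vec3 p = { r.origin.x + t * r.direction.x,
               r.origin.y + t * r.direction.y,
               r.origin.z + t * r.direction.z };
    return p;
}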
46
Ray Tracing – cont.
Ray tracing is an image-formation technique used as the basis for producing
computer-generated images.

 The ray-tracing model can be used to simulate physical effects as complex as we wish. Bear in mind that added complexity costs computing time and resources.
 Although ray tracing is a close approximation to the physical world, it is not well suited for fast computation.
We simplify our objects by assuming
that all our objects are equally bright.
Why do we need such an assumption?

47
The Human Visual System
 Light enters the eye through the cornea and the lens.
 The iris opens and closes to adjust the amount of light.
 The lens forms an image on a 2-D structure called the retina.
 Rods and cones act as light sensors on the retina.
 Rods are low-level light sensors, responsible for night vision.
 Cones are responsible for our day vision.
48
The Human Visual System – cont.
The sensors in the human eye do not react uniformly to light energy at different wavelengths. There are three types of cones and a single type of rod.
Intensity is a physical measure of light energy; brightness is a measure of how intense we perceive the light emitted from an object to be.
How it is vs. how we see it …

[Figure: relative sensitivity of the human visual system - the CIE standard observer curve]

 Human color-vision capabilities are due to the different sensitivities of the three types of cones. One is centered in the blue range, one in the green, and one in the yellow.
Most sensitive to: green
Least sensitive to: red & yellow
49
The Pinhole Camera

[Figure: a pinhole camera; the pinhole is at the origin and the film plane is at z = -d]

Projection of the point (x, y, z) onto the film plane (the projected point lies on the plane z_p = -d):

    x_p = -x / (z/d)
    y_p = -y / (z/d)

50
The Pinhole Camera – cont.
The field (or angle) of view of our camera is the angle made by the largest object that the camera can image on its film plane. For a film plane of height h at distance d, it can be computed using:

    tan(θ/2) = (h/2) / d
    θ = 2 tan⁻¹( h / (2d) )
The ideal pinhole camera has an infinite depth of field. This camera has two
disadvantages:
1) the pinhole is so small that it admits only a single ray from a point source,
2) the camera cannot be adjusted to have a different angle of view.

For our purposes, we work with a pinhole camera whose focal length is the
distance d from the front of the camera to the film plane. Like the pinhole camera,
computer graphics produces images in which all objects are in focus.
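Both formulas can be captured in a few lines of C; this is a sketch under the convention used above (pinhole at the origin, film plane at z = -d):

#include <math.h>

/* Project (x, y, z) through a pinhole at the origin onto the film plane
   z = -d: x_p = -x/(z/d), y_p = -y/(z/d). */
static void project(double x, double y, double z, double d,
                    double *xp, double *yp)
{
    *xp = -x / (z / d);
    *yp = -y / (z / d);
}

/* Angle of view (radians) for film height h and distance d. */
static double angle_of_view(double h, double d)
{
    return 2.0 * atan(h / (2.0 * d));
}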
51
The Synthetic-Camera Model
We look at creating a computer-generated image as being similar to forming an image using an optical system. This paradigm is known as the synthetic-camera model.
The image is formed on the back of the camera, so we can emulate this process to create artificial images.
Some basic principles:
1) The specification of the objects is independent of the specification of the viewer. Expect separate functions for specifying the objects and the viewers in the graphics library that you will use.

[Figure: an imaging system - a viewer (bellows camera) and an object]

52
The Synthetic-Camera Model - cont.
2) We can compute the image using simple trigonometric calculations.
 We find the image of a point on the object by drawing a line, the projector, from the point to the center of the lens, or the center of projection.
 In our synthetic camera, the film plane is moved in front of the lens and is called the projection plane.

53
[Figure: a projector from the point p passes through the center of projection and intersects the image plane at the projection of p]

54
The Synthetic-Camera Model - cont.
We must consider the limited size of the image. Not all objects can be imaged onto the pinhole camera's film plane. The angle of view expresses this limitation.
In our synthetic camera, we express this limitation by placing a clipping rectangle, or clipping window, in the projection plane.

Given the location of the center of projection, the location and orientation of the projection plane, and the size of the clipping rectangle, we can determine which objects will appear in the image.
55
56
Rendering

 Many think (or thought) of graphics as synonymous with rendering


 Well researched
Working on second and third order effects
Fundamentals largely in place
Rendering

 Major areas:
Earliest: Photorealism
Recent: Non-Photorealistic Graphics (NPR)
Recent: Image-based Rendering (IBR)
Rendering

 Ray Tracing has become practical


Extremely high quality images
Photorealism, animation, special effects
 Accurate rendering, not just pretty
Rendering Realism
Rendering Realism
Rendering Realism
Is this real?
Growth Models
Rendering/Modeling Hair

Is Photorealism Everything?
Is Photorealism Everything?
Non-Photorealistic Rendering
Tone Shading
Non-Photorealistic Rendering
Image Based Rendering

 Model light field


 Do not have to model geometry
 Good for complex 3D scenes
 Can leave holes where no data is available
3D Scene Recreation
360° Scan
Interaction

 Way behind the rest of graphics' spectacular advances


 Still doing WIMP:
Windows, icons, menus, pull-downs/pointing
 Once viewed as “soft” research
Turns out to be one of hardest problems
Interaction still needs...

 Better input devices


 Better output devices
 Better interaction paradigms
 Better understanding of HCI
Bring in psychologists
Modeling

 Many model reps


Bezier, B-spline, box splines, simplex splines, polyhedral
splines, quadrics, super-quadrics, implicit, parametric,
subdivision, fractal, level sets, etc (not to mention
polygonal)
Modeling

 Physically based
Newton
Behavior as well as geometry
 Materials
Metal, cloth, organic forms, fluids, etc
 Procedural (growth) models
Modeling... is hard

 Complexity
 Shape
 Specifying
 Realistic constraints
 Detail vs concept
 Tedious, slow
Modeling is hard

 Mathematical challenge
 Computational challenge
 Interaction challenge
 Display challenge (want 3D)
 Domain knowledge, constraints
Growth Models
Models

D Johnson and
J D St Germain, Utah

Russ Fish et al., Utah


Scientific Visualization


National Library of
Medicine
The Programmer’s Interface
There are many sophisticated commercial software products with nice graphical
interfaces. Every one of them has been developed by someone.
Some of us still need to develop graphics applications to interface with these
sophisticated software products.

83
Application Programmer’s Interfaces
The interface between an application program and a graphics system can be
specified through a set of functions that resides in a graphics library. These
specifications are called the application programmer’s interface (API).

The synthetic-camera model is the basis for a number of popular APIs, including OpenGL, PHIGS, Direct3D, VRML, and Java 3D. In order to implement the synthetic-camera model, we need functions in the API to specify:
Objects
Viewer
Light sources
Material properties
84
The Programmer’s Interface – cont.
Objects are defined by sets of vertices. For simple objects, line, rectangle, and
polygon, there is a simple relationship between a list of vertices and the object.
For more complex objects, there may be multiple ways of defining the object
from a set of vertices.
Most APIs provide similar sets of primitive objects for the user. OpenGL
defines primitives objects through lists of vertices.
Here is an example of how a triangular polygon is defined:
glBegin(GL_POLYGON);         /* begin a polygon primitive */
glVertex3f(0.0, 0.0, 0.0);   /* first vertex              */
glVertex3f(0.0, 1.0, 0.0);   /* second vertex             */
glVertex3f(0.0, 0.0, 1.0);   /* third vertex              */
glEnd();                     /* end the primitive         */
There are five function calls in this code segment.
85
Exercise: create a rectangle of size 2 by 1. The top left (1st point) is still at (0, 0, 0).
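One possible (hedged) answer to the exercise, assuming the rectangle lies in the z = 0 plane with x increasing to the right and y increasing upward, so the bottom corners fall at y = -1; any plane and orientation consistent with the prompt would do:

glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);    /* top left (unchanged) */
glVertex3f(0.0, -1.0, 0.0);   /* bottom left          */
glVertex3f(2.0, -1.0, 0.0);   /* bottom right         */
glVertex3f(2.0, 0.0, 0.0);    /* top right            */
glEnd();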
OpenGL function format

glVertex3f(x, y, z)
 gl -- the function belongs to the GL library
 Vertex -- the function name
 3 -- the number of dimensions (arguments)
 f -- the arguments x, y, z are floats

glVertex3fv(p)
 p is a pointer to an array of three floats
86
The Programmer’s Interface – cont.
We can define a viewer or camera in a variety of ways. Looking at the camera
given here, we can identify four types of necessary specifications:
1. Position: The camera location usually is given by the position of the center
of the lens (center of the projection).
2. Orientation: Once we have positioned the camera, we can place a camera
coordinate system with its origin at the center of projection. We can then rotate
the camera independently around the three axes of this system.
3. Focal length: The focal length of the lens determines the
size of the image on the film plane or, equivalently,
the portion of the world the camera sees.
4. Film plane: The back of the camera has a height, h, and a width, w, on the bellows camera, and in some APIs the orientation of the back of the camera can be adjusted independently of the orientation of the lens.
87


The Programmer’s Interface – cont.
To develop the specifications for the camera location and orientation, one can
use a series of coordinate system transformations.
These transformations convert object positions represented in the coordinate
system that specifies object vertices to object positions in a coordinate system
centered at the center of projection.
This may require setting and adjusting many parameters, which can make it difficult to obtain a desired image. Part of the problem lies with the synthetic-camera model.
Classical viewing techniques stress the relationship between the object and the viewer, rather than the independence that the synthetic-camera model emphasizes.

Two-point perspective
of a box.
88
The Programmer’s Interface – cont.
The OpenGL API allows us to set these transformations with complete freedom.
gluLookAt(cop_x, cop_y, cop_z, at_x, at_y, at_z, …);
This function call points the camera from the center of projection toward a desired point.
gluPerspective(field_of_view, …);
This function selects a lens for a perspective view.
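For illustration, here is a hedged sketch of a typical camera setup using the full (standard) argument lists of these two calls; the numeric values are arbitrary:

/* Lens: 60-degree vertical field of view, 4:3 aspect ratio,
   near and far clipping distances of 0.1 and 100. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);

/* Camera: center of projection at (0, 0, 5), looking at the origin,
   with +y as the up direction. */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,     /* center of projection */
          0.0, 0.0, 0.0,     /* point looked at      */
          0.0, 1.0, 0.0);    /* up vector            */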

A light source can be defined by its location, strength, color, and directionality.
APIs provide a set of functions to specify these parameters for each source.
Material properties are the attributes of an object. Such properties are defined
through a series of functions. Both the light sources and material properties
depend on the models of light-material interactions supported by the API.

89
Sequence of Images and the Modeling-Rendering Paradigm
OpenGL allows you to write graphical application programs. The images
defined by your programs will be formed automatically by the hardware and
software implementation of the image-formation process.
Sometimes a scene consists of several objects. Although the programmer may have used sophisticated data structures to model each object and the relationships among the objects, the rendered scene may show only the outlines of the objects. This type of image is known as a wireframe image, because we can see only the edges of surfaces.
A common approach to developing realistic images is to separate the modeling of the scene from the production of the image - the rendering of the scene.

Thus, modeling and rendering may be done with different software and hardware.

90
Graphics Architectures
On one side of the API is the application program. On the other is some combination of hardware and software that implements the functionality of the API.

The advances in graphics architectures closely parallel the advances in workstations.

Special-purpose VLSI circuits have significantly improved graphics technology.
91
Graphics Architectures – cont.
The most important use of custom VLSI circuits has been in creating pipeline
architectures.
This architecture may not make any difference when computing a single multiplication and addition, but it makes a significant difference when we perform these operations for many sets of data.
A set of primitives is defined by a set of vertices. Because our representation
is given in terms of locations in space, we can refer to the set of primitive types
and vertices as the geometry of the data.
The four major steps in the imaging process form a pipeline:

    Step 1 -> Step 2 -> Step 3 -> Step 4


92
