

Phnom Penh International University

Faculty of Science & Information Technology

Computer Graphics

Slide Handout
Year IV, Semester II
2016-2017
Computer Graphics

Computer Graphics
Contents

Computer Graphics 1: Introduction to Computer Graphics

Computer Graphics 2: Maths Preliminaries

Computer Graphics 3: 2D Transformations

Computer Graphics 4: Viewing In 2D

Computer Graphics 5: Line Drawing Algorithms

Computer Graphics 6: Bresenham Line Drawing Algorithm, Circle Drawing & Polygon Filling

Computer Graphics 7: Viewing In 3D

Computer Graphics 8: Perspective Projections

Computer Graphics 9: Clipping In 3D

Computer Graphics 10: 3D Object Representations 1

Computer Graphics 11: 3D Object Representations 2

Computer Graphics 12: Spline Representations

Computer Graphics 13: Surface Detection Methods

Computer Graphics 14: More Surface Detection Methods

Computer Graphics 15: Illumination


Computer Graphics 1: Introduction

What Is Computer Graphics?
The term computer graphics was coined in 1960 by William Fetter to describe
new design methods he was pursuing at Boeing. Fetter created a series of
widely reproduced images on a pen plotter exploring cockpit design, using a
3D model of a human body.
Contents
Computer Graphics – What’s It All About?
Application Areas
Course Outline
Lectures & Labs
Exams & CA
Books
Miscellanea
Questions?

What Is Computer Graphics? (cont…)
“Perhaps the best way to define computer graphics is to find out what it is
not. It is not a machine. It is not a computer, nor a group of computer
programs. It is not the know-how of a graphic designer, a programmer, a
writer, a motion picture specialist, or a reproduction specialist.

Computer graphics is all these – a consciously managed and documented
technology directed toward communicating information accurately and
descriptively.”
Computer Graphics, by William A. Fetter, 1966
Interactive Computer Graphics
Takes things a step further by allowing users rapid visual feedback from
their actions. Typically we have the following cycle:
    INPUT → PROCESSING → DISPLAY/OUTPUT
Input devices: mouse, tablet and stylus, force feedback device, scanner,
live video streams.
Output devices: screen, paper-based printer, video recorder, projector,
VR/AR headset.
This area is the focus of this course.

So?
If we add interactivity, Fetter’s definition pretty much still holds.
So much of modern computing involves some graphical aspect that computer
graphics is now ubiquitous.
So let’s say computer graphics encompasses anything achieved visually on
computers.

Interactive Computer Graphics (cont…)
Sketchpad, developed in the 1960s, was the first interactive computer
graphics application. Using a light pen, key pad and monitor it allowed
users to create accurate design drawings.
Dr. Ivan E. Sutherland developed Sketchpad as part of his PhD work. He went
on to be a hugely influential computer scientist working in areas as diverse
as graphics, circuit design, robotics and computer hardware.

Some Applications Of Computer Graphics
Some of the application areas which make heavy use of computer graphics are:
– Computer aided design
– Scientific visualisation
– Films
– Games
– Virtual/Augmented Reality
NOTE: There are lots more and there is huge overlap between these different
areas.

Computer Aided Design (image examples)
Scientific Visualisation (image examples)
Films (image examples)
Games (image examples)
Virtual/Augmented Reality (image examples)
Exams & CA
Exams:
– Part A of end of year exam will cover all
theoretical material
CA:
– Marks awarded for completing each lab
exercise
– Two significant graphics projects using
OpenGL

Course Outline
The course will follow this broad-strokes outline:
– Maths Preliminaries
– Viewing in 2D
– Raster Graphics
– Viewing in 3D
– 3D Object Modelling
– Illumination and Surface Rendering
– Animation
Practical work in OpenGL.

Books
“Computer Graphics with OpenGL”, D. Hearn & M. P. Baker, Prentice Hall, 2003
– Most of the course follows this book
“Introduction to Computer Graphics”, J.D. Foley, A. van Dam, S.K. Feiner,
J.F. Hughes & R.L. Phillips, Addison Wesley, 1997
“Computer Graphics: Principles and Practice”, J.D. Foley, A. van Dam,
S.K. Feiner & J.F. Hughes, Addison Wesley, 1995
– Great for really in-depth theory

Questions
Any Questions?
Computer Graphics 2: Maths Preliminaries

Introduction
Computer graphics is all about maths! None of the maths is hard, but we need
to understand it well in order to be able to understand certain techniques.
Today we’ll look at the following:
– Coordinate reference frames
– Points & lines
– Vectors
– Matrices

Big Idea
When setting up a scene in computer graphics we define the scene using
simple geometry.

Coordinate Reference Frames – 2D
For 2D scenes we use simple two dimensional Cartesian coordinates, with an
x axis and a y axis. All objects are defined using simple coordinate pairs
(x, y).

Coordinate Reference Frames – 2D (cont…)
For example, points such as (2, 3), (7, 3), (2, 7) and (7, 7) are each
specified by their x and y coordinates.

Coordinate Reference Frames – 3D
For three dimensional scenes we simply add an extra coordinate, giving a
point P = (x, y, z) with x, y and z axes.

Left Handed Or Right Handed?
There are two different ways in which we can do 3D coordinates – left handed
or right handed. We will mostly use the right-handed system.

Points & Lines
Points:
– A point in two dimensional space is given as an ordered pair (x, y)
– In three dimensions a point is given as an ordered triple (x, y, z)
Lines:
– A line is defined using a start point and an end-point
  • In 2D: (xstart, ystart) to (xend, yend)
  • In 3D: (xstart, ystart, zstart) to (xend, yend, zend)
Points & Lines (cont…)
For example, the line from (2, 7) to (7, 3).

The Equation of A Line
The slope-intercept equation of a line is:
    y = mx + b
where:
    m = (yend - y0) / (xend - x0)
    b = y0 - m·x0
The equation of the line gives us the corresponding y point for every x
point.

A Simple Example
Let’s draw a portion of a line given by its equation: just work out the y
coordinate for each unit x coordinate and plot the resulting points on the
grid.

Vectors
Vectors:
– A vector is defined as the difference between two points
– The important thing is that a vector has a direction and a length
What are vectors for?
– A vector shows how to move from one point to another
– Vectors are very important in graphics - especially for transformations

Vectors (2D)
To determine the vector between two points simply subtract them:
    V = P2 - P1
For example, the vector from P1 (1, 3) to P2 (7, 10) is V = (6, 7).
WATCH OUT: Lots of pairs of points share the same vector between them.

Vectors (3D)
In three dimensions a vector is calculated in much the same way.
So for (2, 1, 3) to (7, 10, 5) we get V = (5, 9, 2).
Vector Operations
There are a number of important operations we need to know how to perform
with vectors:
– Calculation of vector length
– Vector addition
– Scalar multiplication of vectors
– Scalar product
– Vector product

Vector Operations: Vector Length
Vector lengths are easily calculated in two dimensions:
    |V| = sqrt(Vx² + Vy²)
and in three dimensions:
    |V| = sqrt(Vx² + Vy² + Vz²)

Vector Operations: Vector Addition
The sum of two vectors is calculated by simply adding corresponding
components:
    V1 + V2 = (V1x + V2x, V1y + V2y)
Performed similarly in three dimensions.

Vector Operations: Scalar Multiplication
Multiplication of a vector by a scalar proceeds by multiplying each of the
components of the vector by the scalar:
    sV = (sVx, sVy)
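The handout gives no code, but the three operations above are easy to
sketch. The following minimal Python example (ours, not from the slides; the
function names are illustrative) computes vector length, vector addition and
scalar multiplication for 2D or 3D vectors stored as tuples.

import math

def length(v):
    # |V| = sqrt(Vx^2 + Vy^2 [+ Vz^2]) - works for 2D or 3D tuples
    return math.sqrt(sum(c * c for c in v))

def add(v1, v2):
    # Add corresponding components
    return tuple(a + b for a, b in zip(v1, v2))

def scale(s, v):
    # Multiply each component by the scalar s
    return tuple(s * c for c in v)

print(length((3, 4)))        # 5.0
print(add((1, 2), (3, 4)))   # (4, 6)
print(scale(2, (1, 2, 3)))   # (2, 4, 6)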

Other Vector Operations
There are other important vector operations that we will cover as we come to
them. These include:
– Scalar product (dot product)
– Vector product (cross product)

Matrices
A matrix is simply a grid of numbers. However, by using matrix operations we
can perform a lot of the maths operations required in graphics extremely
quickly.

Matrix Operations
The important matrix operations for this course are:
– Scalar multiplication
– Matrix addition
– Matrix multiplication
– Matrix transpose
– Determinant of a matrix
– Matrix inverse

Matrix Operations: Scalar Multiplication
To multiply the elements of a matrix by a scalar simply multiply each one by
the scalar.

Matrix Operations: Addition
To add two matrices simply add together all corresponding elements. Both
matrices have to be the same size.

Matrix Operations: Matrix Multiplication
We can multiply two matrices A and B together as long as the number of
columns in A is equal to the number of rows in B. So, if we have an m by n
matrix A and an n by q matrix B we get the multiplication:
    C = AB
where C is an m by q matrix whose elements are calculated as follows:
    c(i, j) = sum over k of a(i, k)·b(k, j)
Watch Out! Matrix multiplication is not commutative, so in general AB ≠ BA.
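As an illustration of the element formula above, here is a small Python
sketch (ours, not from the slides) that multiplies two matrices stored as
lists of rows and shows that the result depends on the order of the
operands.

def mat_mul(A, B):
    # C = AB is only defined when the number of columns in A
    # equals the number of rows in B
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
print(mat_mul(B, A))  # [[23, 34], [31, 46]] - not commutative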

Matrix Operations: Transpose
The transpose of a matrix M, written as MT, is obtained by simply
interchanging the rows and columns of the matrix.

Other Matrix Operations
There are some other important matrix operations that we will explain as we
need them. These include:
– Determinant of a matrix
– Matrix inverse

Summary
In this lecture we have taken a brief tour through the following:
– Basic idea
– The mathematics of points, lines and vectors
– The mathematics of matrices
These tools will equip us to deal with the computer graphics techniques that
we will begin to look at, starting next time.

Exercises 1
Plot the line y = ½x + 2 from x = 1 to x = 9.

Exercises 2
Perform the matrix additions given on the slide.

Exercises 3
Perform the matrix multiplications given on the slide.

Exercises 4
Perform the multiplication of a matrix by a scalar, and calculate the
transpose of the matrix given on the slide.
Computer Graphics 3: 2D Transformations

Why Transformations?
In graphics, once we have an object described, transformations are used to
move that object, scale it and rotate it.

Contents
In today’s lecture we’ll cover the following:
– Why transformations
– Transformations
  • Translation
  • Scaling
  • Rotation
– Homogeneous coordinates
– Matrix multiplications
– Combining transformations

Translation
Simply moves an object from one position to another:
    xnew = xold + dx        ynew = yold + dy
Note: the house in the example shifts position relative to the origin.
Translation Example
(Figure: a shape with vertices (1, 1), (3, 1) and (2, 3) shown on the grid
before translation.)

Scaling Example
(Figure: the same shape shown on the grid before scaling.)

Scaling
Scaling multiplies all coordinates:
    xnew = Sx × xold        ynew = Sy × yold
WATCH OUT: Objects grow and move!
Note: the house shifts position relative to the origin.

Rotation
Rotates all coordinates by a specified angle:
    xnew = xold × cosθ – yold × sinθ
    ynew = xold × sinθ + yold × cosθ
Points are always rotated about the origin.
Rotation Example
(Figure: a shape with vertices (3, 1), (5, 1) and (4, 3) rotated about the
origin.)

Why Homogeneous Coordinates?
Mathematicians commonly use homogeneous coordinates as they allow scaling
factors to be removed from equations.
We will see in a moment that all of the transformations we discussed
previously can be represented as 3×3 matrices.
Using homogeneous coordinates allows us to use matrix multiplication to
calculate transformations – extremely efficient!

Homogeneous Coordinates
A point (x, y) can be re-written in homogeneous coordinates as (xh, yh, h).
The homogeneous parameter h is a non-zero value such that:
    x = xh / h        y = yh / h
We can then write any point (x, y) as (hx, hy, h).
We can conveniently choose h = 1 so that (x, y) becomes (x, y, 1).

Homogeneous Translation
The translation of a point by (dx, dy) can be written in matrix form as:
    | 1  0  dx |
    | 0  1  dy |
    | 0  0   1 |
Representing the point as a homogeneous column vector we perform the
calculation as:
    | 1  0  dx |   | x |   | x + dx |
    | 0  1  dy | × | y | = | y + dy |
    | 0  0   1 |   | 1 |   |   1    |
Remember Matrix Multiplication
Recall how matrix multiplication takes place:
    | a  b  c |   | x |   | ax + by + cz |
    | d  e  f | × | y | = | dx + ey + fz |
    | g  h  i |   | z |   | gx + hy + iz |

Homogeneous Coordinates (cont…)
To make operations easier, 2-D points are written as homogeneous coordinate
column vectors. The transformation matrices are:
Translation:
    | 1  0  dx |
    | 0  1  dy |
    | 0  0   1 |
Scaling:
    | Sx  0  0 |
    |  0 Sy  0 |
    |  0  0  1 |
Rotation:
    | cosθ  -sinθ  0 |
    | sinθ   cosθ  0 |
    |   0      0   1 |

Inverse Transformations
Transformations can easily be reversed using inverse transformations.
Combining Transformations
A number of transformations can be combined into one matrix to make things
easy – allowed by the fact that we use homogeneous coordinates.
Imagine rotating a polygon around a point other than the origin:
– Translate the centre point to the origin
– Rotate around the origin
– Translate back to the centre point

Combining Transformations (cont…)
The three transformation matrices are combined as follows:
    P’ = T(cx, cy) · R(θ) · T(-cx, -cy) · P
REMEMBER: Matrix multiplication is not commutative so order matters.
A code sketch of this composition follows below.
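To make the combination concrete, here is a short Python sketch (ours, not
from the slides) that builds the three 3×3 homogeneous matrices described
above and composes them to rotate a point about an arbitrary centre. The
helper names are illustrative only.

import math

def translate(dx, dy):
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def rotate_about(theta, cx, cy):
    # Translate centre to origin, rotate, translate back (read right to left)
    return mat_mul(translate(cx, cy),
                   mat_mul(rotate(theta), translate(-cx, -cy)))

# Rotate the point (6, 3) by 90 degrees about (5, 3)
M = rotate_about(math.radians(90), 5, 3)
x, y, _ = [row[0] for row in mat_mul(M, [[6], [3], [1]])]
print(round(x, 6), round(y, 6))   # 5.0 4.0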
Summary
In this lecture we have taken a look at:
– 2D Transformations
  • Translation
  • Scaling
  • Rotation
– Homogeneous coordinates
– Matrix multiplications
– Combining transformations
Next time we’ll start to look at how we take these abstract shapes etc. and
get them on-screen.
Exercises 1
Translate the shape with vertices (1, 2), (2, 1), (3, 2) and (2, 3) by
(7, 2).

Exercises 2
Scale the shape with vertices (1, 2), (2, 1), (3, 2) and (2, 3) by 3 in x
and 2 in y.

Exercises 3
Rotate the shape with vertices (6, 2), (7, 1), (8, 2) and (7, 3) by 30°
about the origin.

Exercise 4
Write out the homogeneous matrices for the previous three transformations:
translation, scaling and rotation.

Exercises 5
Using matrix multiplication calculate the rotation of the shape with
vertices (4, 3), (5, 2), (6, 3) and (5, 4) by 45° about its centre (5, 3).

Equations
Translation:
    xnew = xold + dx        ynew = yold + dy
Scaling:
    xnew = Sx × xold        ynew = Sy × yold
Rotation:
    xnew = xold × cosθ – yold × sinθ
    ynew = xold × sinθ + yold × cosθ

Scratch
(Blank grid for working.)
Computer Graphics 4: Viewing In 2D

Windowing I
A scene is made up of a collection of objects specified in world
coordinates.

Contents
– Windowing Concepts
– Clipping
  • Introduction
  • Brute Force
  • Cohen-Sutherland Clipping Algorithm
– Area Clipping
  • Sutherland-Hodgman Area Clipping Algorithm

Windowing II
When we display a scene only those objects within a particular window
(bounded by wxmin, wxmax, wymin and wymax in world coordinates) are
displayed.

Windowing III
Because drawing things to a display takes time we clip everything outside
the window.

Point Clipping
Easy - a point (x, y) is not clipped if:
    wxmin ≤ x ≤ wxmax AND wymin ≤ y ≤ wymax
otherwise it is clipped.

Clipping
For the image below consider which lines and points should be kept and which
ones should be clipped.

Line Clipping
Harder - examine the end-points of each line to see if they are in the
window or not:
– Both end-points inside the window: don’t clip
– One end-point inside the window, one outside: must clip
– Both end-points outside the window: don’t know!
Brute Force Line Clipping
Brute force line clipping can be performed as follows:
– Don’t clip lines with both end-points within the window
– For lines with one end-point inside the window and one end-point outside,
  calculate the intersection point (using the equation of the line) and clip
  from this point out
– For lines with both end-points outside the window, test the line for
  intersection with all of the window boundaries, and clip appropriately
However, calculating line intersections is computationally expensive.
Because a scene can contain so many lines, the brute force approach to
clipping is much too slow.

Cohen-Sutherland Clipping Algorithm
An efficient line clipping algorithm. The key advantage of the algorithm is
that it vastly reduces the number of line intersections that must be
calculated.
(Dr. Ivan E. Sutherland co-developed the Cohen-Sutherland clipping
algorithm. Sutherland is a graphics giant and includes amongst his
achievements the invention of the head mounted display. Cohen is something
of a mystery – can anybody find out who he was?)

Cohen-Sutherland: World Division
World space is divided into regions based on the window boundaries:
– Each region has a unique four bit region code
– Region codes indicate the position of the regions with respect to the
  window
Region code legend (bits 3, 2, 1, 0 = above, below, right, left):
    1001 | 1000 | 1010
    0001 | 0000 | 0010
    0101 | 0100 | 0110
The centre region 0000 is the window itself.
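A region code is cheap to compute. The following Python sketch (ours, not
from the slides) derives the four bit code for a point using the bit layout
in the legend above; the window bounds are passed in explicitly.

# Bit layout follows the legend above: bit 3 = above, bit 2 = below,
# bit 1 = right, bit 0 = left
ABOVE, BELOW, RIGHT, LEFT = 8, 4, 2, 1

def region_code(x, y, wxmin, wxmax, wymin, wymax):
    code = 0
    if x < wxmin:
        code |= LEFT
    elif x > wxmax:
        code |= RIGHT
    if y < wymin:
        code |= BELOW
    elif y > wymax:
        code |= ABOVE
    return code

# A point above and to the right of the window gets code 1010
print(format(region_code(12, 9, 2, 10, 1, 8), '04b'))  # 1010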
Cohen-Sutherland: Labelling
Every end-point is labelled with the appropriate region code.

Cohen-Sutherland: Lines In The Window
Lines completely contained within the window boundaries have region code
[0000] for both end-points so are not clipped.

Cohen-Sutherland: Lines Outside The Window
Any lines with a common set bit in the region codes of both end-points can
be clipped – the AND operation can efficiently check this.

Cohen-Sutherland: Other Lines
Lines that cannot be identified as completely inside or outside the window
may or may not cross the window interior.
These lines are processed as follows:
– Compare an end-point outside the window to a boundary (choose any order in
  which to consider boundaries e.g. left, right, bottom, top) and determine
  how much can be discarded
– If the remainder of the line is entirely inside or outside the window,
  retain it or clip it respectively

Cohen-Sutherland: Other Lines (cont…)
– Otherwise, compare the remainder of the line against the other window
  boundaries
– Continue until the line is either discarded or a segment inside the window
  is found
We can use the region codes to determine which window boundaries should be
considered for intersection:
– To check if a line crosses a particular boundary we compare the
  appropriate bits in the region codes of its end-points
– If one of these is a 1 and the other is a 0 then the line crosses the
  boundary

Cohen-Sutherland Examples
Consider the line P9 to P10 below:
– Start at P10
– From the region codes of the two end-points we know the line doesn’t cross
  the left or right boundary
– Calculate the intersection of the line with the bottom boundary to
  generate point P10’
– The line P9 to P10’ is completely inside the window so is retained

Cohen-Sutherland Examples (cont…)
Consider the line P3 to P4 below:
– Start at P4
– From the region codes of the two end-points we know the line crosses the
  left boundary so calculate the intersection point to generate P4’
– The line P3 to P4’ is completely outside the window so is clipped

Cohen-Sutherland Examples (cont…)
Consider the line P7 to P8 below:
– Start at P7
– From the two region codes of the two end-points we know the line crosses
  the left boundary so calculate the intersection point to generate P7’
Cohen-Sutherland Examples (cont…)
Consider the line P7’ to P8:
– Start at P8
– Calculate the intersection with the right boundary to generate P8’
– P7’ to P8’ is inside the window so is retained

Calculating Line Intersections
Intersection points with the window boundaries are calculated using the
line-equation parameters. Consider a line with the end-points (x1, y1) and
(x2, y2):
– The y-coordinate of an intersection with a vertical window boundary can be
  calculated using:
      y = y1 + m (xboundary - x1)
  where xboundary can be set to either wxmin or wxmax
– The x-coordinate of an intersection with a horizontal window boundary can
  be calculated using:
      x = x1 + (yboundary - y1) / m
  where yboundary can be set to either wymin or wymax
– m is the slope of the line in question and can be calculated as
  m = (y2 - y1) / (x2 - x1)

Cohen-Sutherland Worked Example
Work through the complete algorithm for the lines shown on the example grid.
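Putting the region codes and the intersection formulas together, here is a
compact Python sketch (ours, not from the slides) of the Cohen-Sutherland
line clipper. It trivially accepts, trivially rejects, or repeatedly clips
an outside end-point against one boundary at a time, exactly as described
above.

ABOVE, BELOW, RIGHT, LEFT = 8, 4, 2, 1

def code(x, y, xmin, xmax, ymin, ymax):
    c = 0
    if x < xmin: c |= LEFT
    elif x > xmax: c |= RIGHT
    if y < ymin: c |= BELOW
    elif y > ymax: c |= ABOVE
    return c

def clip_line(x1, y1, x2, y2, xmin, xmax, ymin, ymax):
    c1 = code(x1, y1, xmin, xmax, ymin, ymax)
    c2 = code(x2, y2, xmin, xmax, ymin, ymax)
    while True:
        if c1 == 0 and c2 == 0:
            return (x1, y1, x2, y2)          # trivially accepted
        if c1 & c2:
            return None                      # trivially rejected
        # Pick an end-point that is outside the window and clip it
        c = c1 if c1 else c2
        if c & LEFT:
            x, y = xmin, y1 + (y2 - y1) * (xmin - x1) / (x2 - x1)
        elif c & RIGHT:
            x, y = xmax, y1 + (y2 - y1) * (xmax - x1) / (x2 - x1)
        elif c & BELOW:
            y, x = ymin, x1 + (x2 - x1) * (ymin - y1) / (y2 - y1)
        else:  # ABOVE
            y, x = ymax, x1 + (x2 - x1) * (ymax - y1) / (y2 - y1)
        if c == c1:
            x1, y1 = x, y
            c1 = code(x1, y1, xmin, xmax, ymin, ymax)
        else:
            x2, y2 = x, y
            c2 = code(x2, y2, xmin, xmax, ymin, ymax)

print(clip_line(0, 0, 10, 10, 2, 8, 2, 8))  # (2, 2.0, 8, 8.0) - clipped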

Area Clipping
Similarly to lines, areas must be clipped to a window boundary.
Consideration must be taken as to which portions of the area must be
clipped.

Sutherland-Hodgman Area Clipping Algorithm
A technique for clipping areas developed by Sutherland & Hodgman.
Put simply the polygon is clipped by comparing it against each boundary in
turn: Original Area → Clip Left → Clip Right → Clip Top → Clip Bottom.
(Sutherland turns up again. This time with Gary Hodgman, with whom he worked
at the first ever graphics company, Evans & Sutherland.)

Sutherland-Hodgman Area Clipping Algorithm (cont…)
To clip an area against an individual boundary:
– Consider each vertex in turn against the boundary
– Vertices inside the boundary are saved for clipping against the next
  boundary
– Vertices outside the boundary are clipped
– If we proceed from a point inside the boundary to one outside, the
  intersection of the line with the boundary is saved
– If we cross from the outside to the inside, both the intersection point
  and the vertex are saved
A code sketch of this per-boundary clip follows below.

Sutherland-Hodgman Example
Each example shows the point being processed (P) and the previous point (S).
Saved points define the area clipped to the boundary in question:
– S and P both inside: save point P
– S inside, P outside: save intersection point I
– S and P both outside: no points saved
– S outside, P inside: save points I and P
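The per-boundary step is easy to express in code. The following Python
sketch (ours, not from the slides) clips a polygon against the left boundary
x = xmin only; clipping against the right, top and bottom boundaries follows
the same pattern, and the S/P case analysis matches the example above.

def intersect(S, P, xmin):
    # Intersection of edge S-P with the vertical boundary x = xmin
    u = (xmin - S[0]) / (P[0] - S[0])
    return (xmin, S[1] + u * (P[1] - S[1]))

def clip_left(polygon, xmin):
    # polygon is a list of (x, y) vertices; S is the previous vertex,
    # P the vertex being processed, exactly as in the example above
    out = []
    S = polygon[-1]
    for P in polygon:
        s_in = S[0] >= xmin
        p_in = P[0] >= xmin
        if s_in and p_in:                 # both inside: save P
            out.append(P)
        elif s_in and not p_in:           # leaving: save intersection I
            out.append(intersect(S, P, xmin))
        elif (not s_in) and p_in:         # entering: save I and P
            out.append(intersect(S, P, xmin))
            out.append(P)
        # both outside: save nothing
        S = P
    return out

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(clip_left(square, 2))   # [(2, 0.0), (4, 0), (4, 4), (2, 4.0)]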

Other Area Clipping Concerns
Clipping concave areas can be a little more tricky as often superfluous
lines must be removed.
Clipping curves requires more work – for circles we must find the two
intersection points on the window boundary.

Summary
Objects within a scene must be clipped to display the scene in a window.
Because there can be so many objects, clipping must be extremely efficient.
The Cohen-Sutherland algorithm can be used for line clipping.
The Sutherland-Hodgman algorithm can be used for area clipping.

Cohen-Sutherland Clipping Algorithm VI
Let’s consider the lines remaining below (end-points labelled with their
region codes).

Cohen-Sutherland Clipping Algorithm
Recap: a point (x, y) is not clipped if:
    wxmin ≤ x ≤ wxmax AND wymin ≤ y ≤ wymax
otherwise it is clipped.

Clipping
Point clipping is easy: for point (x, y) the point is not clipped if
wxmin ≤ x ≤ wxmax AND wymin ≤ y ≤ wymax.
(Figure: the example scene before clipping.)

Computer Graphics 5: Line Drawing Algorithms

Graphics Hardware
It’s worth taking a little look at how graphics hardware works before we go
any further. How do things end up on the screen?

Contents
– Graphics hardware
– The problem of scan conversion
– Considerations
– Line equations
– Scan converting algorithms
  • A very simple solution
  • The DDA algorithm
– Conclusion

Architecture Of A Graphics System
(Diagram: CPU and system memory on the system bus, feeding a display
processor with its own memory, a frame buffer and a video controller, which
drives the monitor.)

Output Devices
There are a range of output devices currently available:
– Printers/plotters
– Cathode ray tube displays
– Plasma displays
– LCD displays
– 3 dimensional viewers
– Virtual/augmented reality headsets
We will look briefly at some of the more common display devices.

Raster Scan Systems
Draw one line at a time.

Basic Cathode Ray Tube (CRT)
Fire an electron beam at a phosphor coated screen.

Colour CRT
An electron gun for each colour – red, green and blue.
Plasma-Panel Displays
Applying voltages to crossing pairs of conductors causes the gas (usually a
mixture including neon) to break down into a glowing plasma of electrons and
ions.

Liquid Crystal Displays
Light passing through the liquid crystal is twisted so it gets through the
polarizer. A voltage is applied using the crisscrossing conductors to stop
the twisting and turn pixels off.

The Problem Of Scan Conversion
A line segment in a scene is defined by the coordinate positions of the line
end-points, e.g. from (2, 2) to (7, 5).

The Problem (cont…)
But what happens when we try to draw this on a pixel based display?
How do we choose which pixels to turn on?
Considerations
Considerations to keep in mind:
– The line has to look good
  • Avoid jaggies
– It has to be lightning fast!
  • How many lines need to be drawn in a typical scene?
  • This is going to come back to bite us again and again

Lines & Slopes
The slope of a line (m) is defined by its start and end coordinates.
Example slopes range from m = 0 (horizontal) through m = 1/3, 1/2, 1, 2, 4
(and their negatives) up to m = ∞ (vertical).

Line Equations
Let’s quickly review the equations involved in drawing lines.
Slope-intercept line equation:
    y = mx + b
where:
    m = (yend - y0) / (xend - x0)
    b = y0 - m·x0

A Very Simple Solution
We could simply work out the corresponding y coordinate for each unit x
coordinate. Let’s consider the following example: the line from (2, 2) to
(7, 5).
4
17 19
of
32
A Very Simple Solution (cont…) of
32
A Very Simple Solution (cont…)

5
Now just round off the results and turn on
these pixels to draw our line
4
7
6
3
5
4
2
3
2
1
1
0
0
0 1 2 3 4 5 6 7 8

0 1 2 3 4 5 6 7

18 20
of
32
A Very Simple Solution (cont…) of
32
A Very Simple Solution (cont…)
y
5
(7, 5)
First work out m and b: However, this approach is just way too slow
In particular look out for:
– The equation y = mx + b requires the
2
(2, 2) multiplication of m by x

2 3 4 5 6 7
x – Rounding off the resulting y coordinates
Now for each x value work out the y value: We need a faster solution

A Quick Note About Slopes
In the previous example we chose to solve the line equation to give us the y
coordinate for each unit x coordinate. What if we had done it the other way
around? So this gives us:
    x = (y - b) / m
where:
    m = (yend - y0) / (xend - x0)    and    b = y0 - m·x0

A Quick Note About Slopes (cont…)
If the slope of a line is between -1 and 1 then we work out the y
coordinates for a line based on its unit x coordinates. Otherwise we do the
opposite – x coordinates are computed based on unit y coordinates.

A Quick Note About Slopes (cont…)
Leaving out the details, stepping in unit x intervals for a steep line gives
a line that doesn’t look very good – there are gaps between the plotted
pixels. We choose which way to work out the line pixels based on the slope
of the line.
The DDA Algorithm
The digital differential analyzer (DDA) algorithm takes an incremental
approach in order to speed up scan conversion: simply calculate yk+1 based
on yk.
(The original differential analyzer was a physical machine developed by
Vannevar Bush at MIT in the 1930’s in order to solve ordinary differential
equations.)

The DDA Algorithm (cont…)
When the slope of the line is between -1 and 1, begin at the first point in
the line and, by incrementing the x coordinate by 1, calculate the
corresponding y coordinates as follows:
    yk+1 = yk + m
When the slope is outside these limits, increment the y coordinate by 1 and
calculate the corresponding x coordinates as follows:
    xk+1 = xk + 1/m

The DDA Algorithm (cont…)
Consider the list of points that we determined for the line in our previous
example:
    (2, 2), (3, 2 3/5), (4, 3 1/5), (5, 3 4/5), (6, 4 2/5), (7, 5)
Notice that as the x coordinates go up by one, the y coordinates simply go
up by the slope of the line. This is the key insight in the DDA algorithm.

The DDA Algorithm (cont…)
Again the values calculated by the equations used by the DDA algorithm must
be rounded to match pixel values:
    stepping in x: from (xk, round(yk)) plot (xk+1, round(yk+m))
    stepping in y: from (round(xk), yk) plot (round(xk+1/m), yk+1)

DDA Algorithm Example
Let’s try out the following examples: the line from (2, 2) to (7, 5), and
the line from (3, 2) to (2, 7).

The DDA Algorithm Summary
The DDA algorithm is much faster than our previous attempt – in particular,
there are no longer any multiplications involved.
However, there are still two big issues:
– Accumulation of round-off errors can make the pixelated line drift away
  from what was intended
– The rounding operations and floating point arithmetic involved are time
  consuming
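For reference, here is a small Python sketch (ours, not from the slides) of
the DDA approach: step along the axis of greatest change, add the per-step
increment, and round at every step. The output matches the worked example
from (2, 2) to (7, 5).

def dda_line(x0, y0, x1, y1):
    # Step along the axis of greatest change so the increment used is <= 1
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    points = []
    for _ in range(steps + 1):
        points.append((round(x), round(y)))   # rounding at every step
        x += x_inc
        y += y_inc
    return points

print(dda_line(2, 2, 7, 5))
# [(2, 2), (3, 3), (4, 3), (5, 4), (6, 4), (7, 5)]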

Conclusion
In this lecture we took a very brief look at how graphics hardware works.
Drawing lines to pixel based displays is time consuming so we need good ways
to do it. The DDA algorithm is pretty good – but we can do better.
Next time we’ll look at the Bresenham line algorithm and how to draw
circles, fill polygons and perform anti-aliasing.
Computer Graphics 6: Bresenham Line Drawing Algorithm, Circle Drawing &
Polygon Filling

The Bresenham Line Algorithm
The Bresenham algorithm is another incremental scan conversion algorithm.
The big advantage of this algorithm is that it uses only integer
calculations.
(Jack Bresenham worked for 27 years at IBM before entering academia.
Bresenham developed his famous algorithms at IBM in the early 1960s.)

Contents
In today’s lecture we’ll have a look at:
– Bresenham’s line drawing algorithm
– Line drawing algorithm comparisons
– Circle drawing algorithms
  • A simple technique
  • The mid-point circle algorithm
– Polygon fill algorithms
– Summary of raster drawing algorithms

The Big Idea
Move across the x axis in unit intervals and at each step choose between two
different y coordinates. For example, from position (2, 3) we have to choose
between (3, 3) and (3, 4). We would like the point that is closer to the
original line.
Deriving The Bresenham Line Algorithm
At sample position xk+1 the vertical separations from the mathematical line
are labelled dupper and dlower. The y coordinate on the mathematical line at
xk+1 is:
    y = m(xk + 1) + b

Deriving The Bresenham Line Algorithm (cont…)
So, dupper and dlower are given as follows:
    dlower = y - yk = m(xk + 1) + b - yk
and:
    dupper = (yk + 1) - y = yk + 1 - m(xk + 1) - b
We can use these to make a simple decision about which pixel is closer to
the mathematical line.

Deriving The Bresenham Line Algorithm (cont…)
This simple decision is based on the difference between the two pixel
positions:
    dlower - dupper = 2m(xk + 1) - 2yk + 2b - 1
Let’s substitute m with Δy/Δx where Δx and Δy are the differences between
the end-points:
    Δx(dlower - dupper) = 2Δy·xk - 2Δx·yk + c

Deriving The Bresenham Line Algorithm (cont…)
So, a decision parameter pk for the kth step along a line is given by:
    pk = Δx(dlower - dupper) = 2Δy·xk - 2Δx·yk + c
The sign of the decision parameter pk is the same as that of
dlower - dupper. If pk is negative, then we choose the lower pixel,
otherwise we choose the upper pixel.
Deriving The Bresenham Line Algorithm (cont…)
Remember coordinate changes occur along the x axis in unit steps so we can
do everything with integer calculations. At step k+1 the decision parameter
is given as:
    pk+1 = 2Δy·xk+1 - 2Δx·yk+1 + c
Subtracting pk from this we get:
    pk+1 - pk = 2Δy(xk+1 - xk) - 2Δx(yk+1 - yk)
But xk+1 is the same as xk + 1, so:
    pk+1 = pk + 2Δy - 2Δx(yk+1 - yk)
where yk+1 - yk is either 0 or 1 depending on the sign of pk.
The first decision parameter p0, evaluated at (x0, y0), is given as:
    p0 = 2Δy - Δx

The Bresenham Line Algorithm
BRESENHAM’S LINE DRAWING ALGORITHM (for |m| < 1.0)
1. Input the two line end-points, storing the left end-point in (x0, y0)
2. Plot the point (x0, y0)
3. Calculate the constants Δx, Δy, 2Δy, and (2Δy - 2Δx) and get the first
   value for the decision parameter as:
       p0 = 2Δy - Δx
4. At each xk along the line, starting at k = 0, perform the following test.
   If pk < 0, the next point to plot is (xk+1, yk) and:
       pk+1 = pk + 2Δy
   Otherwise, the next point to plot is (xk+1, yk+1) and:
       pk+1 = pk + 2Δy - 2Δx
5. Repeat step 4 (Δx – 1) times
ACHTUNG! The algorithm and derivation above assume slopes are less than 1;
for other slopes we need to adjust the algorithm slightly.
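A direct transcription of steps 1-5 into Python (ours, not from the slides)
looks like this; it assumes 0 ≤ m < 1 and x0 < x1, as the ACHTUNG note
requires, and loops Δx times so that the final end-point is also plotted.

def bresenham_line(x0, y0, x1, y1):
    # Assumes 0 <= slope < 1 and x0 < x1, as in the algorithm above
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx                  # initial decision parameter p0
    two_dy, two_dy_dx = 2 * dy, 2 * (dy - dx)
    x, y = x0, y0
    points = [(x, y)]
    for _ in range(dx):
        x += 1
        if p < 0:
            p += two_dy              # keep the same y
        else:
            y += 1
            p += two_dy_dx           # step up to y + 1
        points.append((x, y))
    return points

# Matches the worked example from (20, 10) to (30, 18)
print(bresenham_line(20, 10, 30, 18))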

Bresenham Example
Let’s have a go at this. Let’s plot the line from (20, 10) to (30, 18).
First off calculate all of the constants:
– Δx: 10
– Δy: 8
– 2Δy: 16
– 2Δy - 2Δx: -4
Calculate the initial decision parameter p0:
– p0 = 2Δy – Δx = 6
Then tabulate k, pk and (xk+1, yk+1) for k = 0 … 9 and plot the points.

Bresenham Exercise
Go through the steps of the Bresenham line drawing algorithm for a line
going from (21, 12) to (29, 16), tabulating k, pk and (xk+1, yk+1) at each
step.
Bresenham Line Algorithm Summary
The Bresenham line algorithm has the following advantages:
– A fast incremental algorithm
– Uses only integer calculations
Comparing this to the DDA algorithm, DDA has the following problems:
– Accumulation of round-off errors can make the pixelated line drift away
  from what was intended
– The rounding operations and floating point arithmetic involved are time
  consuming

A Simple Circle Drawing Algorithm (cont…)
(Figure: the points plotted by the simple circle drawing algorithm.)

A Simple Circle Drawing Algorithm
The equation for a circle is:
    x² + y² = r²
where r is the radius of the circle. So, we can write a simple circle
drawing algorithm by solving the equation for y at unit x intervals using:
    y = ±sqrt(r² - x²)

A Simple Circle Drawing Algorithm (cont…)
However, unsurprisingly this is not a brilliant solution!
Firstly, the resulting circle has large gaps where the slope approaches the
vertical.
Secondly, the calculations are not very efficient:
– The square (multiply) operations
– The square root operation – try really hard to avoid these!
We need a more efficient, more accurate solution.

Eight-Way Symmetry
The first thing we can notice to make our circle drawing algorithm more
efficient is that circles centred at (0, 0) have eight-way symmetry: if
(x, y) is on the circle then so are (y, x), (-x, y), (-y, x), (-x, -y),
(-y, -x), (x, -y) and (y, -x).

Mid-Point Circle Algorithm
Similarly to the case with lines, there is an incremental algorithm for
drawing circles – the mid-point circle algorithm.
In the mid-point circle algorithm we use eight-way symmetry so we only ever
calculate the points for the top right eighth of a circle, and then use
symmetry to get the rest of the points.
(The mid-point circle algorithm was developed by Jack Bresenham, who we
heard about earlier. Bresenham’s patent for the algorithm can be viewed
online.)

Mid-Point Circle Algorithm (cont…)
Assume that we have just plotted point (xk, yk). The next point is a choice
between (xk+1, yk) and (xk+1, yk-1). We would like to choose the point that
is nearest to the actual circle. So how do we make this choice?

Mid-Point Circle Algorithm (cont…)
Let’s re-jig the equation of the circle slightly to give us:
    fcirc(x, y) = x² + y² - r²
The equation evaluates as follows:
    fcirc(x, y) < 0 if (x, y) is inside the circle boundary
    fcirc(x, y) = 0 if (x, y) is on the circle boundary
    fcirc(x, y) > 0 if (x, y) is outside the circle boundary
By evaluating this function at the midpoint between the candidate pixels we
can make our decision.

Mid-Point Circle Algorithm (cont…)
Assuming we have just plotted the pixel at (xk, yk) we need to choose
between (xk+1, yk) and (xk+1, yk-1). Our decision variable can be defined
as:
    pk = fcirc(xk + 1, yk - 1/2) = (xk + 1)² + (yk - 1/2)² - r²
If pk < 0 the midpoint is inside the circle and the pixel at yk is closer to
the circle. Otherwise the midpoint is outside and yk-1 is closer.

Mid-Point Circle Algorithm (cont…)
To ensure things are as efficient as possible we can do all of our
calculations incrementally. First consider:
    pk+1 = fcirc(xk+1 + 1, yk+1 - 1/2)
or:
    pk+1 = pk + 2(xk + 1) + (yk+1² - yk²) - (yk+1 - yk) + 1
where yk+1 is either yk or yk-1 depending on the sign of pk.

Mid-Point Circle Algorithm (cont…)
The first decision variable is given as:
    p0 = fcirc(1, r - 1/2) = 5/4 - r
Then if pk < 0 the next decision variable is given as:
    pk+1 = pk + 2xk+1 + 1
If pk > 0 then the decision variable is:
    pk+1 = pk + 2xk+1 + 1 - 2yk+1

The Mid-Point Circle Algorithm
MID-POINT CIRCLE ALGORITHM
1. Input radius r and circle centre (xc, yc), then set the coordinates for
   the first point on the circumference of a circle centred on the origin
   as:
       (x0, y0) = (0, r)
2. Calculate the initial value of the decision parameter as:
       p0 = 5/4 - r
3. Starting with k = 0 at each position xk, perform the following test. If
   pk < 0, the next point along the circle centred on (0, 0) is (xk+1, yk)
   and:
       pk+1 = pk + 2xk+1 + 1
   Otherwise the next point along the circle is (xk+1, yk-1) and:
       pk+1 = pk + 2xk+1 + 1 - 2yk+1
4. Determine symmetry points in the other seven octants
5. Move each calculated pixel position (x, y) onto the circular path centred
   at (xc, yc) to plot the coordinate values:
       x = x + xc,    y = y + yc
6. Repeat steps 3 to 5 until x >= y
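The algorithm above translates almost line for line into code. The following
Python sketch (ours, not from the slides) draws a circle of radius r centred
at (xc, yc) using the integer decision parameter and eight-way symmetry.

def midpoint_circle(xc, yc, r):
    x, y = 0, r
    p = 1 - r                    # integer form of p0 = 5/4 - r
    points = set()
    while x <= y:
        # Eight-way symmetry, shifted to the centre (xc, yc)
        for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                       (-x, -y), (-y, -x), (x, -y), (y, -x)]:
            points.add((xc + px, yc + py))
        x += 1                   # x is now x(k+1)
        if p < 0:
            p += 2 * x + 1
        else:
            y -= 1               # y is now y(k+1)
            p += 2 * x + 1 - 2 * y
    return sorted(points)

# The example circle: centred at (0, 0) with radius 10
print(midpoint_circle(0, 0, 10))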

Mid-Point Circle Algorithm Example
To see the mid-point circle algorithm in action let’s use it to draw a
circle centred at (0, 0) with radius 10: tabulate k, pk, (xk+1, yk+1), 2xk+1
and 2yk+1 for each step and plot the resulting octant.

Mid-Point Circle Algorithm Exercise
Use the mid-point circle algorithm to draw the circle centred at (0, 0) with
radius 15.

Filling Polygons
So we can figure out how to draw lines and circles. How do we go about
drawing polygons? We use an incremental algorithm known as the scan-line
algorithm.

Mid-Point Circle Algorithm Summary
The key insights in the mid-point circle algorithm are:
– Eight-way symmetry can hugely reduce the work in drawing a circle
– Moving in unit steps along the x axis, at each point along the circle’s
  edge we need to choose between two possible y coordinates

Scan-Line Polygon Fill Algorithm
The basic scan-line algorithm is as follows:
– Find the intersections of the scan line with all edges of the polygon
– Sort the intersections by increasing x coordinate
– Fill in all pixels between pairs of intersections that lie interior to the
  polygon

Line Drawing Summary
Over the last couple of lectures we have looked at the idea of scan
converting lines. The key thing to remember is this has to be FAST.
For lines we have either DDA or Bresenham. For circles, the mid-point
algorithm.

Anti-Aliasing
(Figures illustrating aliased versus anti-aliased lines.)
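The three steps of the basic scan-line fill can be sketched as follows
(Python, ours, not from the slides); set_pixel is a caller-supplied plotting
function and the even-odd pairing of sorted intersections fills only
interior spans.

def fill_polygon(vertices, set_pixel):
    # vertices: list of (x, y) tuples; set_pixel(x, y) plots one pixel
    ys = [y for _, y in vertices]
    for scan_y in range(min(ys), max(ys) + 1):
        # 1. Find intersections of the scan line with all polygon edges
        xs = []
        n = len(vertices)
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
            if y1 == y2:
                continue                       # skip horizontal edges
            if min(y1, y2) <= scan_y < max(y1, y2):
                xs.append(x1 + (scan_y - y1) * (x2 - x1) / (y2 - y1))
        # 2. Sort the intersections by increasing x
        xs.sort()
        # 3. Fill pixels between pairs of intersections
        for x_start, x_end in zip(xs[0::2], xs[1::2]):
            for x in range(round(x_start), round(x_end) + 1):
                set_pixel(x, scan_y)

pixels = []
fill_polygon([(2, 2), (8, 2), (8, 6), (2, 6)],
             lambda x, y: pixels.append((x, y)))
print(len(pixels))   # 28 pixels for this rectangle (top edge excluded)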

Summary Of Drawing Algorithms
(Worked grid figures comparing the raster drawing algorithms.)

Mid-Point Circle Algorithm (cont…)
(Figures: the midpoint M between the two candidate pixels plotted on a
grid.)

Blank Grids
(Blank grids for working.)
Computer Graphics 7: Viewing in 3-D

3-D Coordinate Spaces
Remember what we mean by a 3-D coordinate space: a point P is given by
(x, y, z) in a right-hand reference system.

Contents
In today’s lecture we are going to have a look at:
– Transformations in 3-D
  • How do transformations in 3-D work?
  • 3-D homogeneous coordinates and matrix based transformations
– Projections
  • History
  • Geometrical Constructions
  • Types of Projection
  • Projection in Computer Graphics

Translations In 3-D
To translate a point in three dimensions by dx, dy and dz simply calculate
the new points as follows:
    x’ = x + dx    y’ = y + dy    z’ = z + dz

Scaling In 3-D
To scale a point in three dimensions by sx, sy and sz simply calculate the
new points as follows:
    x’ = sx·x    y’ = sy·y    z’ = sz·z

Rotations In 3-D (cont…)
The equations for the three kinds of rotations in 3-D are as follows:
Rotate about the z axis (roll):
    x’ = x·cosθ - y·sinθ
    y’ = x·sinθ + y·cosθ
    z’ = z
Rotate about the x axis (pitch):
    x’ = x
    y’ = y·cosθ - z·sinθ
    z’ = y·sinθ + z·cosθ
Rotate about the y axis (yaw):
    x’ = z·sinθ + x·cosθ
    y’ = y
    z’ = z·cosθ - x·sinθ
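The three sets of equations above map directly onto code. Here is a small
Python sketch (ours, not from the slides) that applies each rotation to a
point; the roll/pitch/yaw labels match the naming used in this lecture.

import math

def rotate_z(p, theta):                       # roll
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c, z)

def rotate_x(p, theta):                       # pitch
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (x, y * c - z * s, y * s + z * c)

def rotate_y(p, theta):                       # yaw
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (z * s + x * c, y, z * c - x * s)

p = (1.0, 0.0, 0.0)
print([round(v, 6) for v in rotate_z(p, math.radians(90))])  # [0.0, 1.0, 0.0]
print([round(v, 6) for v in rotate_y(p, math.radians(90))])  # [0.0, 0.0, -1.0]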

Rotations In 3-D
When we performed rotations in two dimensions we only had the choice of
rotating about the z axis. In the case of three dimensions we have more
options:
– Rotate about x – pitch
– Rotate about y – yaw
– Rotate about z – roll

Homogeneous Coordinates In 3-D
Similar to the 2-D situation we can use homogeneous coordinates for 3-D
transformations – a 4 coordinate column vector P(x, y, z) = (x, y, z, 1).
All transformations can then be represented as matrices.

3D Transformation Matrices
Translation by (dx, dy, dz), scaling by (sx, sy, sz) and rotation about the
x, y and z axes can each be written as a 4×4 homogeneous matrix.

What Are Projections?
Our 3-D scenes are all specified in 3-D world coordinates. To display these
we need to generate a 2-D image – project objects onto a picture plane. So
how do we figure out these projections?

Remember The Big Idea

Converting From 3-D To 2-D
Projection is just one part of the process of converting from 3-D world
coordinates to a 2-D image:
    3-D world coordinate output primitives
    → Clip against view volume
    → Project onto projection plane
    → Transform to 2-D device coordinates

3
13 15
of
22
Types Of Projections of
22
Parallel Projections
There are two broad classes of projection: Some examples of parallel projections
– Parallel: Typically used for architectural and
engineering drawings
– Perspective: Realistic looking and used in
computer graphics

Orthographic Projection

Isometric Projection
Parallel Projection Perspective Projection

14 16
of
22
Types Of Projections (cont…) of
22
Isometric Projections
For anyone who did engineering or technical Isometric projections have been used in
drawing computer games from the very early days of
the industry up to today

Q*Bert Sim City Virtual Magic Kingdom

4
17 19
of
22
Perspective Projections of
22
Elements Of A Perspective Projection

Perspective projections are much more


realistic than parallel projections

Virtual
Camera

18 20
of
22
Perspective Projections of
22
The Up And Look Vectors
There are a number of different kinds of Projection of Up vector The look vector
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

Look vector
perspective views up vector
indicates the direction in
The most common are one-point and two which the camera is
point perspectives Position
pointing
The up vector
determines how the
camera is rotated
For example, is the camera held vertically or
Two-Point horizontally
Perspective
One Point Perspective
Projection
Projection

5
21
of
22
Summary
In today’s lecture we looked at:
– Transformations in 3-D
• Very similar to those in 2-D
– Projections
• 3-D scenes must be projected onto a 2-D image
plane
• Lots of ways to do this
– Parallel projections
– Perspective projections
• The virtual camera

22
of
22
Who’s Choosing Graphics?
A couple of quick questions for you:
– Who is choosing graphics as an option?
– Are there any problems with option time-
tabling?
– What do you think of the course so far?
• Is it too fast/slow?
• Is it too easy/hard?
• Is there anything in particular you want to cover?

6
3
of
18
Perspective Projections
Remember the whole point of perspective

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
projections
Computer Graphics 8:
Perspective Projections

2 4
of
18
Contents of
18
Projection Calculations
In today’s lecture we are going to have a
look at how perspective projections work in y axis
computer graphics
P=(x, y, z)

( x p , y p , zp )
(xprp, yprp, zprp)
z axis x axis

View Plane

1
5 7
of
18
Projection Calculations (cont…) of
18
Projection Calculations (cont…)

Any point along the projector (x’, y’, z’) can Armed with this we can restate the equations
be given as: for x’ and y’ for general perspective:

When u = 0 we are at P, while when u = 1


we are at the Projection Reference Point

6 8 Perspective Projection Transformation


of Projection Calculations (cont…) of
18 18 Matrix
At the view plane z’ = zvp so we can solve the Because the x and y coordinates of a
z’ equation for u: projected point are expressed in terms of z
we need to do a little work to generate a
perspective transformation matrix
First we use a homogeneous representation
to give xvp and yvp as:

where:

2
9
of
Perspective Projection Transformation 11
of
Perspective Projection Transformation
18 Matrix (cont…) 18 Matrix (cont…)
From the previous equations for xvp and yvp Setting up the matrix so that we calculate xh
we can see that: and yh is straightforward
However, we also need to preserve the z
values – depth information
Otherwise the z coordinates are distorted by
the homogeneous parameter h
We don’t need to worry about the details
here, but it means extra parameters (sz and tz)
are added to the matrix

10
of
Perspective Projection Transformation 12
of
Perspective Projection Transformation
18 Matrix (cont…) 18 Matrix (cont…)
Now we can set up a transformation matrix, The following is the perspective projection
that only contains perspective parameters, to matrix which arises:
convert a spatial position to homogeneous
coordinates
First we calculate the homogeneous
coordinates using the perspective-
transformation matrix:

where Ph is the homogeneous point (xh, yh, zh,


h) and P is the coordinate position (x, y, z, 1)
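As a purely numerical illustration (ours, not from the slides), the
following Python sketch projects points onto a view plane assuming the
projection reference point sits on the z axis at z = zprp and the view plane
at z = zvp; the scale factor comes from similar triangles and is consistent
with the xvp and yvp expressions discussed above.

def perspective_project(point, zprp, zvp):
    # Projection reference point assumed at (0, 0, zprp) on the z axis,
    # view plane at z = zvp (an assumption for this sketch)
    x, y, z = point
    d = (zprp - zvp) / (zprp - z)     # scale factor from similar triangles
    return (x * d, y * d, zvp)

# Two points with the same x, y but different depths: the farther point
# projects closer to the centre of the view plane (foreshortening)
print(perspective_project((2.0, 1.0, 0.0), zprp=10.0, zvp=5.0))   # (1.0, 0.5, 5.0)
print(perspective_project((2.0, 1.0, -10.0), zprp=10.0, zvp=5.0)) # (0.5, 0.25, 5.0)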

3
13 15 Setting Up A Perspective Projection
of Setting Up A Perspective Projection of
18 18 (cont…)
A perspective projection Increasing the field of view angle increases
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
can be set up by the height of the view plane and so
specifying the position increases foreshortening
and size of the view
plane and the position
of the projection
reference point
However, this can be
kind of awkward

14
of
Setting Up A Perspective Projection 16
of
Setting Up A Perspective Projection
18 (cont…) 18 (cont…)
The field of view angle can be a more intuitive The amount of foreshortening that is present
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

way to specify perspective projections can greatly affect the appearance of our
This is analogous to choosing a lense for a scenes
camera

Field of view

4
17
of
Setting Up A Perspective Projection
18 (cont…)
We need one more thing to specify a
perspective projections using the filed of
view angle
The aspect ratio gives the ratio between the
width sand height of the view plane

18
of
18
Summary
In today’s class we looked at the detail of
generating a perspective projection of a
three dimensional scene

5
3
of
24
Nate Robins’ OpenGL Tutorials
Nate Robins has a number of great OpenGL
tutorial applications posted on his website
Computer Graphics 9:
Clipping In 3D

Nate Robin’s OpenGL Tutorials available at: http://www.xmission.com/~nate/tutors.html

2 4
of
24
Contents of
24
3-D Clipping
In today’s lecture we are going to have a Just like the case in two dimensions,
look at some perspective view demos and clipping removes objects that will not be
investigate how clipping works in 3-D visible from the scene
– Nate Robins’ OpenGL tutorials The point of this is to remove computational
– The clipping volume effort
– The zone labelling scheme 3-D clipping is achieved in two basic steps
– 3-D clipping – Discard objects that can’t be viewed
• Point clipping • i.e. objects that are behind the camera, outside
• Line clipping the field of view, or too far away
• Polygon clipping – Clip objects that intersect with any clipping
plane

1
5 7
of
24
Discard Objects of
24
The Clipping Volume
Discarding objects that cannot possibly be After the perspective transformation is

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
seen involves comparing an objects complete the frustum shaped viewing
bounding box/sphere against the volume has been converted to a
dimensions of the view volume parallelopiped - remember we preserved all
– Can be done before or after projection
z coordinate depth information

6 8
of
24
Clipping Objects of
24
Normalisation
Objects that are partially within the viewing The transformed volume is then normalised

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
volume need to be clipped – just like the 2D around position (0, 0, 0) and the z axis is
case reversed

2
9 11
of
24
When Do We Clip? of
24
Dividing Up The World (cont..)
We perform clipping after the projection Because we have a normalised clipping
transformation and normalisation are volume we can test for these regions as
complete follows:
So, we have the following:

Rearranging these we get:

We apply all clipping to these homogeneous


coordinates

10 12
of
24
Dividing Up The World of
24
Region Codes
Similar to the case in two dimensions, we

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
divide the world into regions
This time we use a 6-bit region code to give
us 27 different region codes
The bits in these regions codes are as
follows:

bit 6 bit 5 bit 4 bit 3 bit 2 bit 1


Far Near Top Bottom Right Left

3
13 15
of
24
Point Clipping of
24
Line Clipping Example
Point clipping is trivial so we won’t spend

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
any time on it

14 16 The Equation Of The Line For 3D


of Line Clipping of
24 24 Clipping
To clip lines we first label all end points with For clipping equations for three dimensional
the appropriate region codes line segments are given in their parametric
We can trivially accept all lines with both form
end-points in the [000000] region For a line segment with end points P1(x1h,
We can trivially reject all lines whose end y1h, z1h, h1) and P2(x2h, y2h, z2h, h2) the
points share a common bit in any position parametric equation describing any point on
– This is just like the 2 dimensional case as the line is:
these lines can never cross the viewing
volume
– In the example that follows the line from
P3[010101] to P4[100110] can be rejected

4
The Equation Of The Line For 3D Clipping (cont…)
From this parametric equation of a line we can generate the equations for
the homogeneous coordinates:
    xh = x1h + (x2h - x1h)u        yh = y1h + (y2h - y1h)u
    zh = z1h + (z2h - z1h)u        h  = h1 + (h2 - h1)u

3D Line Clipping Example (cont…)
Since the right boundary is at x = 1 we now know the following holds:
    xp = xh / h = (x1h + (x2h - x1h)u) / (h1 + (h2 - h1)u) = 1
which we can solve for u as follows:
    u = (x1h - h1) / ((x1h - h1) - (x2h - h2))
Using this value for u we can then solve for yp and zp similarly.
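Here is a small Python sketch (ours, not from the slides) of that
calculation for the right boundary xp = 1 of the normalised view volume:
solve for u from the homogeneous coordinates of the two end-points and
interpolate to get the new end-point.

def clip_right_boundary(p1, p2):
    # p1, p2 are homogeneous points (xh, yh, zh, h); the right boundary of
    # the normalised view volume is xp = xh / h = 1
    x1h, y1h, z1h, h1 = p1
    x2h, y2h, z2h, h2 = p2
    u = (x1h - h1) / ((x1h - h1) - (x2h - h2))
    return tuple(a + (b - a) * u for a, b in zip(p1, p2))

# Line from (0, 0, 0) to (2, 2, 0) with h = 1: crosses x = 1 halfway along
print(clip_right_boundary((0, 0, 0, 1), (2, 2, 0, 1)))  # (1.0, 1.0, 0.0, 1.0)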

18 20
of
24
3D Line Clipping Example of
24
3D Line Clipping Example (cont…)
Consider the line P1[000010] to P2[001001] When then simply continue as per the two
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

Because the lines have different values in bit dimensional line clipping algorithm
2 we know the line crosses the right boundary

5
21 23
of
24
3D Polygon Clipping of
24
Cheating with Clipping Planes
However the most common case in 3D For far clipping plane introduce something to
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

clipping is that we are clipping graphics obscure far away objects – fog
objects made up of polygons Make objects very near the camera
transparent

22 24
of
24
3D Polygon Clipping (cont…) of
24
Summary
In this case we first try to eliminate the entire In today’s lecture we examined how clipping
object using its bounding volume is achieved in 3-D
Next we perform clipping on the individual
polygons using the Sutherland-Hodgman
algorithm we studied previously

6
3
of
19
Polyhedra
Objects are simply a set of surface polygons
that enclose an object interior
Computer Graphics 10:
Simplest and fastest way to render objects
3D Object
Often referred to as standard graphics
Representations objects
In many cases packages allow us to define
objects as curved surfaces etc but actually
convert these to polygon meshes for display
To define polyhedra we simply define the
vertices of the polygons required

2 4
of
19
Contents of
19
Polyhedra (cont…)
In today’s lecture we are going to start to

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
look at how objects are modelled in 3D
– Polyhedra
– Quadric surfaces
– Sweep representations
– Constructive solid geometry methods

1
5 7
of
19
Quadric Surfaces of
19
Quadric Surfaces – Spheres (cont…)

A frequently used class of objects are


z axis
quadric surfaces
These are 3D surfaces described using
quadratic equations
Quadric surfaces include: P ( x, y, z )
r
– Spheres
φ
– Ellipsoids θ

– Tori
x axis y axis
– Paraboloids
– Hyperboloids

Quadric Surfaces - Spheres
A spherical surface with radius r centred on the origin is defined as the
set of points (x, y, z) that satisfy the equation:
    x² + y² + z² = r²
This can also be done in parametric form using latitude and longitude
angles.

Sweep Representations
Sweep representations are useful for constructing 3 dimensional objects that
possess translational, rotational or other symmetries. Objects are specified
as a 2 dimensional shape and a sweep that moves that shape through a region
of space.
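As a quick check of the parametric form mentioned above, the following
Python sketch (ours, not from the slides; the angle convention is one common
choice) generates a point from latitude and longitude angles and verifies
that it satisfies x² + y² + z² = r².

import math

def sphere_point(r, theta, phi):
    # theta is the longitude angle, phi the latitude angle (a common choice)
    x = r * math.cos(phi) * math.cos(theta)
    y = r * math.cos(phi) * math.sin(theta)
    z = r * math.sin(phi)
    return (x, y, z)

p = sphere_point(2.0, math.radians(30), math.radians(60))
print(round(sum(c * c for c in p), 6))   # 4.0 - the point lies on the sphere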

2
9 11 Constructive Solid Geometry Methods
of Sweep Representations - Examples of
19
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
19 (cont…)

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
10 12 Constructive Solid Geometry Methods
of Constructive Solid Geometry Methods of
19 19 (cont…)
Constructive Solid Geometry (CSG) performs solid modelling by generating a new object from two three dimensional objects using a set operation
Valid set operations include
– Union
– Intersection
– Difference

CSG usually starts with a small set of primitives such as blocks, pyramids, spheres and cones
Two objects are initially created and combined using some set operation to create a new object
This object can then be combined with another primitive to make another new object
This process continues until modelling is complete
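A minimal sketch of the idea, assuming for brevity that the only primitive is a sphere and that the composite solid is queried by point membership rather than rendered:

enum class Op { Union, Intersection, Difference };

struct Sphere { double cx, cy, cz, r; };

bool insideSphere(const Sphere& s, double x, double y, double z)
{
    double dx = x - s.cx, dy = y - s.cy, dz = z - s.cz;
    return dx * dx + dy * dy + dz * dz <= s.r * s.r;
}

// A CSG node is either a primitive (leaf) or a set operation on two children.
struct CSGNode {
    bool isLeaf;
    Sphere prim;                 // used when isLeaf is true
    Op op;                       // used when isLeaf is false
    CSGNode* left;
    CSGNode* right;
};

bool insideCSG(const CSGNode* n, double x, double y, double z)
{
    if (n->isLeaf) return insideSphere(n->prim, x, y, z);
    bool a = insideCSG(n->left, x, y, z);
    bool b = insideCSG(n->right, x, y, z);
    switch (n->op) {
        case Op::Union:        return a || b;
        case Op::Intersection: return a && b;
        case Op::Difference:   return a && !b;
    }
    return false;
}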

3
13 Constructive Solid Geometry Methods 15
of of Ray-Casting
19 (cont…) 19

CSG models are usually represented as CSG trees
[Figure: a CSG tree for a composite object – operator nodes oper1, oper2, oper3 combining primitive objects obj1 … obj4]

Ray-casting is typically used to implement CSG operators when objects are described with boundary representations
Ray casting is applied by determining the objects that are intersected by a set of parallel lines emanating from the xy plane along the z axis
The xy plane is referred to as the firing plane

14 Constructive Solid Geometry Methods 16


of of Ray-Casting (cont…)
19 (cont…) 19

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

4
17 19
of
19
Ray-Casting (cont…) of
19
Summary
Surface intersections along each ray are calculated and these are sorted according to distance from the firing plane
The surface limits for the composite object are then determined by the specified set operation

In today’s lecture we began to look at how objects are modelled in 3D
Polyhedra are by far the most common modelling technique, but there are many others
Often more exotic modelling techniques are used in a modelling phase, but the resultant models are converted to polyhedra before rendering
Next time we will look at more modelling techniques

18
of
19
Ray Casting Example
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

5
3
of
24
Octrees
Octrees are hierarchical tree structures used to represent solid objects
Octrees are particularly useful in applications that require cross sectional views – for example medical applications
Octrees are typically used when the interior of objects is important

Computer Graphics 11: 3D Object Representations – Octrees & Fractals

2 4
of
24
Contents of
24
Octrees & Quadtrees
In today’s lecture we would like to continue on from the last day and look at some more modelling techniques
– Octrees
– Fractals

Octrees are based on a two-dimensional representation scheme called quadtree encoding
Quadtree encoding divides a square region of space into four equal areas until homogeneous regions are found
These regions can then be arranged in a tree

1
5 7
of
24
Quadtree Example 1 of
24
Octrees
Quadtree encodings provide considerable
savings in storage when large colour areas
exist in a region of space
An octree takes the same approach as
quadtrees, but divides a cube region of 3D
space into octants
Each region within an octree is referred to
as a volume element or voxel
Division is continued until homogeneous
regions are discovered
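A minimal sketch of such a subdivision, where the homogeneity test and the stored voxel value are assumed, application-specific callbacks:

#include <functional>

struct Region { double x, y, z, size; };      // cube given by one corner and its edge length

struct OctreeNode {
    bool leaf = false;
    int value = 0;                            // property stored at a homogeneous voxel
    OctreeNode* child[8] = {};                // the eight octants when subdivided
};

OctreeNode* buildOctree(const Region& r, int maxDepth,
                        const std::function<bool(const Region&)>& homogeneous,
                        const std::function<int(const Region&)>& valueOf)
{
    OctreeNode* node = new OctreeNode();
    if (maxDepth == 0 || homogeneous(r)) {    // stop dividing: store the voxel value
        node->leaf = true;
        node->value = valueOf(r);
        return node;
    }
    double h = r.size / 2.0;
    for (int i = 0; i < 8; ++i) {             // recurse into the eight octants
        Region oct = { r.x + ((i & 1) ? h : 0.0),
                       r.y + ((i & 2) ? h : 0.0),
                       r.z + ((i & 4) ? h : 0.0), h };
        node->child[i] = buildOctree(oct, maxDepth - 1, homogeneous, valueOf);
    }
    return node;
}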

6 8
of
24
Quadtree Example 2 of
24
Octrees (cont…)
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

2
9 11
of
24
Octrees (cont…) of
24
Octree Examples (cont…)
In 3 dimensions regions can be considered to be homogeneous in terms of colour, material type, density or any other physical characteristics
Voxels also have the unique possibility of being empty

Taken from http://www-evasion.inrialpes.fr/Membres/Sylvain.Lefebvre/these/

10 12
of
24
Octree Examples of
24
Fractals
All of the modelling techniques covered so far use Euclidean geometry methods
– Objects were described using equations
This is fine for manufactured objects
But what about natural objects that have irregular or fragmented features?
– Mountains, clouds, coral…

“Clouds are not spheres, mountains are not cones, coastlines are not circles and bark is not smooth, nor does lightning travel in a straight line.”
Benoit Mandelbrot

Taken from http://www.unchainedgeometry.com/jbloom/images.html

3
13 Fractal Geometry Methods & Procedural 15
of of Generating Fractals
24 Modelling 24

Natural objects can be realistically described using fractal geometry methods
Fractal methods use procedures rather than equations to model objects - procedural modelling
The major characteristic of any procedural model is that the model is not based on data, but rather on the implementation of a procedure following a particular set of rules
Modelling On The Fly!

A fractal object is generated by repeatedly applying a specified transform function to points in a region of space
If P0 = (x0, y0, z0) is a selected initial position, each iteration of a transformation function F generates successive levels of detail with the calculations:
P1 = F(P0), P2 = F(P1), P3 = F(P2), …
In general the transformation is applied to a specified point set, or to a set of primitives (e.g. lines, curves, surfaces)
14 16
of
24
Fractals of
24
Generating Fractals (cont…)
A fractal object has two basic characteristics:
– Infinite detail at every point
– A certain self similarity between object parts and the overall features of the object
The Koch Curve
Mandelbrot Set Video From: http://www.fractal-animation.net/ufvp.htm

Although fractal objects, by definition, have infinite detail, we only apply the transformation a finite number of times
Obviously objects we display have finite dimension – they fit on a page or a screen
A procedural representation approaches a true representation as we increase the number of iterations
The amount of detail is limited by the resolution of the display device, but we can always zoom in for further detail
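As a concrete sketch of repeatedly applying a transformation to a set of primitives, here is one possible refinement step for the Koch curve (each line segment is replaced by four shorter ones with a 60° bump; the exact construction shown is an illustrative choice):

#include <cmath>
#include <vector>

struct Pt { double x, y; };

std::vector<Pt> kochIteration(const std::vector<Pt>& pts)
{
    std::vector<Pt> out;
    for (std::size_t i = 0; i + 1 < pts.size(); ++i) {
        Pt a = pts[i], b = pts[i + 1];
        double dx = (b.x - a.x) / 3.0, dy = (b.y - a.y) / 3.0;
        Pt p1 = { a.x + dx, a.y + dy };                          // one third along the segment
        Pt p3 = { a.x + 2 * dx, a.y + 2 * dy };                  // two thirds along the segment
        Pt p2 = { p1.x + 0.5 * dx - dy * std::sqrt(3.0) / 2.0,   // apex of the triangular bump
                  p1.y + 0.5 * dy + dx * std::sqrt(3.0) / 2.0 };
        out.push_back(a); out.push_back(p1);
        out.push_back(p2); out.push_back(p3);
    }
    out.push_back(pts.back());
    return out;
}

// Each call adds one level of detail; in practice we stop after a few
// iterations once the segments drop below display resolution.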

4
17 19
of
24
Example: The Koch Snowflake of
24
Fractal Dimension
The amount of variation in the structure of a fractal object is described as the fractal dimension, D
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
– More jagged looking objects have larger
fractal dimensions
Calculating the fractal dimension can be
difficult, especially for particularly complex
fractals
We won’t look at the details of these
calculations

18 20
of
24
Example: Ferns of
24
Types Of Fractals
Very similar techniques can be used to generate vegetation

Fractals can be classified into three groups
– Self similar fractals
• These have parts that are scaled down versions
of the entire object
• Commonly used to model trees, shrubs etc
– Self affine fractals
• Have parts that are formed with different scaling
parameters in each dimension
• Typically used for terrain, water and clouds
– Invariant fractal sets
• Fractals formed with non-linear transformations
• Mandelbrot set, Julia set – generally not so useful

5
21 Random Midpoint Displacement 23
of of Fractals In Film Special Effects
24 Methods For Topography 24

One of the most successful uses of fractal techniques in graphics is the generation of landscapes
One efficient method for doing this is random midpoint displacement

22 Random Midpoint Displacement 24


of of Summary
24 Methods For Topography (cont…) 24

Easy to do in two dimensions
Easily expanded to three dimensions to generate terrain
Can introduce a roughness factor H to control terrain appearance
Control surfaces can be used to start with a general terrain shape (a sketch follows below)

In today’s lecture we looked at how octrees and fractals are used in modelling
Fractals in particular are a fairly exotic modelling technique, but can be extremely effective
Next time we will look at curved surfaces which are extremely important
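A minimal sketch of random midpoint displacement for a two-dimensional terrain profile (the use of H as a halving exponent for the displacement range is an assumption made for illustration):

#include <cmath>
#include <cstdlib>
#include <vector>

std::vector<double> midpointDisplace(double left, double right,
                                     int levels, double H)
{
    std::vector<double> heights = { left, right };
    double scale = 1.0;                                   // current displacement range
    for (int level = 0; level < levels; ++level) {
        std::vector<double> next;
        for (std::size_t i = 0; i + 1 < heights.size(); ++i) {
            double mid    = 0.5 * (heights[i] + heights[i + 1]);
            double offset = scale * (2.0 * std::rand() / RAND_MAX - 1.0);  // random in [-scale, scale]
            next.push_back(heights[i]);
            next.push_back(mid + offset);                 // displaced midpoint
        }
        next.push_back(heights.back());
        heights = next;
        scale *= std::pow(2.0, -H);                       // smaller displacements at finer levels
    }
    return heights;
}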

Terrain generation demo:


http://world.std.com/~bgw/applets/1.02/MtFractal/MtFractal.html

6
3
of
18
Spline Representations
A spline is a smooth curve defined mathematically using a set of constraints
Splines have many uses:
– 2D illustration
– Fonts
– 3D Modelling
– Animation

Computer Graphics 12: Spline Representations

Image credits: “Manifold Splines”, X. Gu, Y. He & H. Qin, Solid and Physics Modeling 2005; ACM © 1987 “Principles of traditional animation applied to 3D computer animation”

2 4
of
18
Contents of
18
Physical Splines
Today we are going to look at Bézier spline curves
– Introduction to splines
– Bézier origins
– Bézier curves
– Bézier cubic splines

Physical splines are used in car/boat design

Pierre Bézier

1
5 7
of
18
Big Idea of
18
Convex Hulls
User specifies control points
Defines a smooth curve

The boundary formed by the set of control points for a spline is known as a convex hull
Think of an elastic band stretched around the control points

[Figure: a curve defined by its control points]
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

6 8
of
18
Interpolation Vs Approximation of
18
Control Graphs
A spline curve is specified using a set of control points
There are two ways to fit a curve to these points:
– Interpolation - the curve passes through all of the control points
– Approximation - the curve does not pass through all of the control points

A polyline connecting the control points in order is known as a control graph
Usually displayed to help designers keep track of their splines

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

2
9 11
of
18
Bézier Spline Curves of
18
Bézier Spline Curves (cont…)
A spline approximation method developed by the French engineer Pierre Bézier for use in the design of Renault car bodies
A Bézier curve can be fitted to any number of control points – although usually 4 are used

The Bézier blending functions BEZk,n(u) are the Bernstein polynomials:
BEZk,n(u) = C(n, k) u^k (1 - u)^(n - k)
where the parameters C(n, k) are the binomial coefficients:
C(n, k) = n! / (k! (n - k)!)

10 12
of
18
Bézier Spline Curves (cont…) of
18
Bézier Spline Curves (cont…)

Consider the case of n+1 control points denoted as pk = (xk, yk, zk) where k varies from 0 to n
The coordinate positions are blended to produce the position vector P(u) which describes the path of the Bézier polynomial function between p0 and pn:
P(u) = Σ (k = 0 to n) pk BEZk,n(u),   0 ≤ u ≤ 1

So, the individual curve coordinates can be given as follows:
x(u) = Σ xk BEZk,n(u),  y(u) = Σ yk BEZk,n(u),  z(u) = Σ zk BEZk,n(u)

3
13 15
of
18
Bézier Spline Curves (cont…) of
18
Cubic Bézier Curve
Many graphics packages restrict Bézier curves to have only 4 control points (i.e. n = 3)
The blending functions when n = 3 are simplified as follows:
BEZ0,3(u) = (1 - u)^3
BEZ1,3(u) = 3u(1 - u)^2
BEZ2,3(u) = 3u^2(1 - u)
BEZ3,3(u) = u^3
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
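Using these four blending functions a cubic Bézier curve can be evaluated directly; the sketch below assumes 2D control points and a parameter u in [0, 1]:

struct Point2 { double x, y; };

Point2 cubicBezier(const Point2 p[4], double u)
{
    double b0 = (1 - u) * (1 - u) * (1 - u);   // BEZ0,3(u)
    double b1 = 3 * u * (1 - u) * (1 - u);     // BEZ1,3(u)
    double b2 = 3 * u * u * (1 - u);           // BEZ2,3(u)
    double b3 = u * u * u;                     // BEZ3,3(u)
    return { b0 * p[0].x + b1 * p[1].x + b2 * p[2].x + b3 * p[3].x,
             b0 * p[0].y + b1 * p[1].y + b2 * p[2].y + b3 * p[3].y };
}

// For instance, the exercise control points (1, 4), (3, 7), (11, 5), (7, 1)
// (an assumed ordering) could be sampled by stepping u from 0 to 1.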

14 16
of
18
Important Properties Of Bézier Curves of
18
Cubic Bézier Blending Functions
The first and last control points are the first
and last point on the curve
– P(0) = p0
– P(1) = pn
The curve lies within the convex hull as the
Bézier blending functions are all positive and
sum to 1

4
17
of
18
Bézier Spline Curve Exercise
[Exercise figure: control points (1, 4), (3, 7), (11, 5) and (7, 1) plotted in the x–y plane]

18
of
18
Summary
Today we had a look at spline curves and in
particular Bézier curves
The whole point is that the spline functions
give us an approximation to a smooth curve

5
3
of
28
Why?
We must determine what is visible within a scene from a chosen viewing position
For 3D worlds this is known as visible surface detection or hidden surface elimination

Computer Graphics 13: Surface Detection Methods

2 4
of
28
Contents of
28
Two Main Approaches
Today we will start to take a look at visible surface detection techniques:
– Why surface detection?
– Back face detection
– Depth-buffer method
– A-buffer method
– Scan-line method

Visible surface detection algorithms are broadly classified as:
– Object Space Methods: Compares objects and parts of objects to each other within the scene definition to determine which surfaces are visible
– Image Space Methods: Visibility is decided point-by-point at each pixel position on the projection plane
Image space methods are by far the more common

1
5 7
of
28
Back-Face Detection of
28
Back-Face Detection (cont…)
The simplest thing we can do is find the faces on the backs of polyhedra and discard them

Ensure we have a right handed system with the viewing direction along the negative z-axis
Now we can simply say that if the z component of the polygon’s normal is less than zero the surface cannot be seen

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

6 8
of
28
Back-Face Detection (cont…) of
28
Back-Face Detection (cont…)

We know from before that a point (x, y, z) is behind a polygon surface if:
Ax + By + Cz + D < 0
where A, B, C & D are the plane parameters for the surface
This can actually be made even easier if we organise things to suit ourselves

In general back-face detection can be expected to eliminate about half of the polygon surfaces in a scene from further visibility tests
More complicated surfaces though scupper us!
We need better techniques to handle these kinds of situations
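A minimal sketch of the basic test, assuming counter-clockwise vertex ordering and a right handed system with the viewing direction along the negative z-axis:

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

bool isBackFace(const Vec3& v0, const Vec3& v1, const Vec3& v2)
{
    Vec3 e1 = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
    Vec3 e2 = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
    Vec3 n  = cross(e1, e2);     // polygon normal from three of its vertices
    return n.z < 0.0;            // normal points away from the viewer
}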

2
9 11
of
28
Depth-Buffer Method of
28
Depth-Buffer Algorithm
Compares surface depth values throughout a scene for each pixel position on the projection plane
Usually applied to scenes only containing polygons
As depth values can be computed easily, this tends to be very fast
Also often called the z-buffer method

1. Initialise the depth buffer and frame buffer so that for all buffer positions (x, y)
   depthBuff(x, y) = 1.0
   frameBuff(x, y) = bgColour

10 12
of
28
Depth-Buffer Method (cont…) of
28
Depth-Buffer Algorithm (cont…)
2. Process each polygon in a scene, one at a time
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
– For each projected (x, y) pixel position of a
polygon, calculate the depth z (if not already
known)
– If z < depthBuff(x, y), compute the surface
colour at that position and set
depthBuff(x, y) = z
frameBuff(x, y) = surfColour(x, y)
After all surfaces are processed depthBuff
and frameBuff will store correct values
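A rough sketch of the algorithm; the covers / surfaceDepth / surfaceColour callbacks are assumed stand-ins for the projection and shading machinery, and a real implementation would only visit the pixels inside each polygon’s projection:

#include <vector>

struct Polygon;                                    // defined elsewhere in the renderer
struct Colour { float r, g, b; };

void depthBufferRender(int width, int height, Colour bgColour,
                       const std::vector<const Polygon*>& polygons,
                       bool   (*covers)(const Polygon*, int, int),
                       double (*surfaceDepth)(const Polygon*, int, int),
                       Colour (*surfaceColour)(const Polygon*, int, int),
                       std::vector<Colour>& frameBuff)
{
    std::vector<double> depthBuff(width * height, 1.0);     // step 1: initialise buffers
    frameBuff.assign(width * height, bgColour);

    for (const Polygon* poly : polygons)                    // step 2: each polygon in turn
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                if (!covers(poly, x, y)) continue;
                double z = surfaceDepth(poly, x, y);
                if (z < depthBuff[y * width + x]) {         // nearer than the stored depth
                    depthBuff[y * width + x] = z;
                    frameBuff[y * width + x] = surfaceColour(poly, x, y);
                }
            }
}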

3
13 15
of
28
Calculating Depth of
28
Iterative Calculations (cont…)
At any surface position the depth is calculated from the plane equation as:
z = (-Ax - By - D) / C
For any scan line adjacent x positions differ by ±1, as do adjacent y positions

Depth values along the edge being considered are calculated using
z' = z + (A/m + B) / C

14 16
of
28
Iterative Calculations of
28
Iterative Calculations (cont…)
The depth-buffer algorithm proceeds by starting at the top vertex of the polygon
Then we recursively calculate the x-coordinate values down a left edge of the polygon
The x value for the beginning position on each scan line can be calculated from the previous one as x' = x - 1/m, where m is the slope of the edge
[Figure: a polygon crossed by successive scan lines, from the top scan line through scan lines y and y - 1 down to the bottom scan line, with edge intersections x and x']
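These incremental updates might look like the following sketch, derived from the plane equation Ax + By + Cz + D = 0:

struct Plane { double A, B, C, D; };

double depthAt(const Plane& p, double x, double y)
{
    return (-p.A * x - p.B * y - p.D) / p.C;     // z straight from the plane equation
}

double depthStepRight(const Plane& p, double z)
{
    return z - p.A / p.C;                        // depth at (x + 1, y)
}

double depthStepDownEdge(const Plane& p, double z, double m)
{
    return z + (p.A / m + p.B) / p.C;            // depth at (x - 1/m, y - 1) down an edge of slope m
}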

4
17 19
of
28
A-Buffer Method of
28
A-Buffer Method (cont…)
The A-buffer method is an extension of the depth-buffer method
The A-buffer method is a visibility detection method developed at Lucasfilm Studios for the rendering system REYES (Renders Everything You Ever Saw)

If depth is >= 0, then the surface data field stores the depth of that pixel position as before
If depth < 0 then the data field stores a pointer to a linked list of surface data

18 20
of
28
A-Buffer Method (cont…) of
28
A-Buffer Method (cont…)
The A-buffer expands on the depth buffer method to allow transparencies
The key data structure in the A-buffer is the accumulation buffer

Surface information in the A-buffer includes:
– RGB intensity components
– Opacity parameter
– Depth
– Percent of area coverage
– Surface identifier
– Other surface rendering parameters
The algorithm proceeds just like the depth
buffer algorithm
The depth and opacity values are used to
determine the final colour of a pixel

5
21 23
of
28
Scan-Line Method of
28
Scan-Line Method (cont…)
An image space method for identifying visible surfaces
Computes and compares depth values along the various scan-lines for a scene

The surface facet table contains:
– The plane coefficients
– Surface material properties
– Other surface data
– Maybe pointers into the edge table

22 24
of
28
Scan-Line Method (cont…) of
28
Scan-Line Method (cont…)
Two important tables are maintained:
– The edge table
– The surface facet table
The edge table contains:
– Coordinate end points of each line in the scene
– The inverse slope of each line
– Pointers into the surface facet table to connect edges to surfaces

To facilitate the search for surfaces crossing a given scan-line an active list of edges is formed for each scan-line as it is processed
The active list stores only those edges that cross the scan-line in order of increasing x
Also a flag is set for each surface to indicate whether a position along a scan-line is either inside or outside the surface

6
25 27
of
28
Scan-Line Method (cont…) of
28
Scan-Line Method Limitations
Pixel positions across each scan-line are processed from left to right
At the left intersection with a surface the surface flag is turned on
At the right intersection point the flag is turned off
We only need to perform depth calculations when more than one surface has its flag turned on at a certain scan-line position

The scan-line method runs into trouble when surfaces cut through each other or otherwise cyclically overlap
Such surfaces need to be divided

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

26 28
of
28
Scan Line Method Example of
28
Summary
We need to make sure that we only draw visible surfaces when rendering scenes
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)


There are a number of techniques for doing
this such as
– Back face detection
– Depth-buffer method
– A-buffer method
– Scan-line method
Next time we will look at some more
techniques and think about which
techniques are suitable for which situations

7
3
of
11
Depth-Sorting Method
A visible surface detection method that uses
both image-space and object-space
operations
Basically, the following two operations are performed
– Surfaces are sorted in order of decreasing depth
– Surfaces are scan-converted in order, starting with the surface of greatest depth
The depth-sorting method is often also known as the painter’s method

Computer Graphics 14: More Surface Detection Methods

2 4
of
11
Contents of
11
Depth-Sorting Method (cont…)
Today we will continue to look at visible surface detection methods:
– Depth-sorting method
– Other methods
We will also compare the different techniques we have studied and suggest when different techniques should be employed

First, assume that we are viewing along the z direction
All surfaces in the scene are ordered according to the smallest z value on each surface
The surface S at the end of the list is then compared against all other surfaces to see if there are any depth overlaps
If no overlaps occur then the surface is scan converted as before and the process repeats with the next surface

1
5 7
of
11
Depth Overlapping of
11
Depth-Sorting Method (cont…)
The tests are performed in the order listed and as soon as one is true we move on to the next surface
If all tests fail then we swap the orders of the surfaces

[Figure: two z–x plots of surfaces S and S’ with depth extents zmax, zmin and z’max, z’min – “No Depth Overlap” (the extents are disjoint) and “Depth Overlap” (the extents overlap)]

6 8
of
11
Depth-Sorting Method (cont…) of
11
Other Techniques
When there is depth overlap, we make the following tests (a sketch of the first test is given below):
– The bounding rectangles for the two surfaces do not overlap
– Surface S is completely behind the overlapping surface relative to the viewing position
– The overlapping surface is completely in front of S relative to the viewing position
– The boundary edge projections of the two surfaces onto the view plane do not overlap

There are a number of other techniques all based around area division
– BSP-Tree Method
– Area-Subdivision Method
– Octree Methods
Ray-casting can also be used
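A minimal sketch of the first test, comparing the x and y extents of the two projected surfaces:

struct Rect { double xmin, xmax, ymin, ymax; };   // projected bounding rectangle

bool boundingRectsOverlap(const Rect& a, const Rect& b)
{
    bool xOverlap = a.xmin <= b.xmax && b.xmin <= a.xmax;
    bool yOverlap = a.ymin <= b.ymax && b.ymin <= a.ymax;
    return xOverlap && yOverlap;                  // false means the surfaces cannot obscure each other
}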

2
9 11
of
11
Ray-Casting of
11
Summary
We need to make sure that we only draw
visible surfaces when rendering scenes

10
of
Comparison Of Visibility-Detection
11 Methods
When few surfaces are present either the depth sorting algorithm or the BSP tree method tends to perform best
Scan-line also performs well in these situations – up to several thousand polygon surfaces
The depth buffer method tends to scale linearly with the number of polygons, so for low polygon counts its relative performance is poor, but it is used for higher numbers of polygons

3
3
of
45
Why Lighting?
If we don’t have lighting effects nothing
looks three dimensional!

Computer Graphics 15:


Illumination

2 4
of
45
Contents of
45
Why Lighting? (cont…)
Today we will start to look at illumination
models in computer graphics
– Why do we need illumination models?
– Different kinds of lights
– Different kinds of reflections
– Basic lighting model

1
5 7
of
45
Point Light Sources of
45
Radial Intensity Attenuation (cont…)
A point source is the simplest model we can use for a light source
We simply define:
– The position of the light
– The RGB values for the colour of the light
Light is emitted in all directions
Useful for small light sources

We instead use an inverse quadratic function of the form:
f_radatten(d) = 1 / (a0 + a1·d + a2·d²)
where the coefficients a0, a1, and a2 can be varied to produce optimal results

6 8
of
45
Radial Intensity Attenuation of
45
Infinitely Distant Light Sources
As light moves from a light source its intensity is diminished
At any distance dl away from the light source the intensity diminishes by a factor of 1/dl²
However, using this factor does not produce very good results so we use something different

A large light source, like the sun, can be modelled as a point light source
However, it will have very little directional effect
Radial intensity attenuation is not used

Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

2
9 11
of
45
Directional Light Sources & Spotlights of
45
Angular Intensity Attenuation
To turn a point light source into a spotlight we simply add a vector direction and an angular limit θl
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

As well as light intensity decreasing as we move away from a light source, it also decreases angularly
A commonly used function for calculating angular attenuation is:
f_angatten(φ) = cos^al φ
where the attenuation exponent al is assigned some positive value and the angle φ is measured from the cone axis
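A small sketch of both attenuation factors, assuming the inverse quadratic and cos^al forms given above:

#include <cmath>

double radialAttenuation(double d, double a0, double a1, double a2)
{
    return 1.0 / (a0 + a1 * d + a2 * d * d);      // 1 / (a0 + a1·d + a2·d²)
}

// cosAngle is the cosine of the angle between the spotlight axis and the
// direction towards the object (e.g. a dot product of unit vectors);
// cosCutoff corresponds to the angular limit θl.
double angularAttenuation(double cosAngle, double cosCutoff, double al)
{
    if (cosAngle < cosCutoff) return 0.0;         // object outside the spotlight cone
    return std::pow(cosAngle, al);                // cos^al of the angle from the cone axis
}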

10 Directional Light Sources & Spotlights 12


of of Reflected Light
45 (cont…) 45

We can denote Vlight as the unit vector in the direction of the light and Vobj as the unit vector from the light source to an object
The dot-product of these two vectors gives us the cosine of the angle between them
If this angle is inside the light’s angular limit then the object is within the spotlight
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

The colours that we perceive are determined by the nature of the light reflected from an object
For example, if white light is shone onto a green object most wavelengths are absorbed, while green light is reflected from the object

3
13 15
of
45
Surface Lighting Effects of
45
Specular Reflection
The amount of incident light reflected by a surface depends on the type of material
Shiny materials reflect more of the incident light and dull surfaces absorb more of the incident light
For transparent surfaces some of the light is also transmitted through the material

In addition to diffuse reflection some of the reflected light is concentrated into a highlight or bright spot
This is called specular reflection

14 16
of
45
Diffuse Reflection of
45
Ambient Light
Surfaces that are rough or grainy tend to reflect light in all directions
This scattered light is called diffuse reflection

A surface that is not exposed to direct light may still be lit up by reflections from other nearby objects – ambient light
The total reflected light from a surface is the sum of the contributions from light sources and reflected light

4
17 19
of
45
Example of
45
Nate Robin’s Tutorial

Nate Robin’s OpenGL Tutorials available at: http://www.xmission.com/~nate/tutors.html

18 20
of
45
Example of
45
Basic Illumination Model
We will consider a basic illumination model which gives reasonably good results and is used in most graphics systems
The important components are:
– Ambient light
– Diffuse reflection
– Specular reflection
For the most part we will consider only monochromatic light

[Figure: the Ambient, Diffuse and Specular contributions combined into the Final Image]

5
21 23
of
45
Ambient Light of
45
Diffuse Reflection (cont…)
To incorporate background light we simply set a general brightness level for a scene
This approximates the global diffuse reflections from various surfaces within the scene
We will denote this value as Ia

A parameter kd is set for each surface that determines the fraction of incident light that is to be scattered as diffuse reflections from that surface
This parameter is known as the diffuse-reflection coefficient or the diffuse reflectivity
kd is assigned a value between 0.0 and 1.0
– 0.0: dull surface that absorbs almost all light
– 1.0: shiny surface that reflects almost all light

22 24
of
45
Diffuse Reflection of
45
Diffuse Reflection – Ambient Light
First we assume that surfaces reflect incident light with equal intensity in all directions
Such surfaces are referred to as ideal diffuse reflectors or Lambertian reflectors

For background lighting effects we can assume that every surface is fully illuminated by the scene’s ambient light Ia
Therefore the ambient contribution to the diffuse reflection is given as:
I_ambdiff = kd·Ia
Ambient light alone is very uninteresting so we need some other lights in a scene as well

6
25 27
of
45
Diffuse Reflection (cont…) of
45
Diffuse Reflection (cont…)
When a surface is illuminated by a light source, the amount of incident light depends on the orientation of the surface relative to the light source direction
So the amount of incident light on a surface is given as:
I_incident = Il·cos θ
So we can model the diffuse reflections as:
I_diff = kd·Il·cos θ

26 28
of
45
Diffuse Reflection of
45
Diffuse Reflection (cont…)
The angle between the incoming light direction and a surface normal is referred to as the angle of incidence, given as θ
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

Assuming we denote the normal for a surface as N and the unit direction vector to the light source as L then:
cos θ = N·L
So:
I_diff = kd·Il·(N·L)

7
29 Combining Ambient And Incident Diffuse 31
of of Specular Reflection
45 Reflections 45

To combine the diffuse reflections arising from ambient and incident light most graphics packages use two separate diffuse-reflection coefficients:
– ka for ambient light
– kd for incident light
The total diffuse reflection equation for a single point source can then be given as:
I_diff = ka·Ia + kd·Il·(N·L)
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

The bright spot that we see on a shiny surface is the result of near total reflection of the incident light in a concentrated region around the specular reflection angle
The specular reflection angle equals the angle of the incident light

30 32
of
45
Examples of
45
Specular Reflection (cont…)
A perfect mirror reflects light only in the specular-reflection direction
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)
Other objects exhibit specular reflections
over a finite range of viewing positions
around vector R

8
33 35 The Phong Specular Reflection Model
of The Phong Specular Reflection Model of
45 45 (cont…)
The Phong specular reflection model or Phong model is an empirical model for calculating the specular reflection range, developed in 1973 by Phong Bui Tuong
The Phong model sets the intensity of specular reflection as proportional to cos^ns Φ, where Φ is the angle between the viewing vector and the specular reflection vector
Images taken from Hearn & Baker, “Computer Graphics with OpenGL” (2004)

The graphs below show the effect of ns on the angular range in which we can expect to see specular reflections

34
of
The Phong Specular Reflection Model 36
of
The Phong Specular Reflection Model
45 (cont…) 45 (cont…)
So, the specular reflection intensity is proportional to cos^ns Φ
The angle Φ can be varied between 0° and 90° so that cos Φ varies from 1.0 to 0.0
The specular-reflection exponent ns is determined by the type of surface we want to display
– Shiny surfaces have a very large value (>100)
– Rough surfaces would have a value near 1

For some materials the amount of specular reflection depends heavily on the angle of the incident light
Fresnel’s Laws of Reflection describe in great detail how specular reflections behave
However, we don’t need to worry about this and instead approximate the specular effects with a constant specular reflection coefficient ks
For an explanation of Fresnel’s laws try here

9
37
of
The Phong Specular Reflection Model 39
of
Combining Diffuse & Specular
45 (cont…) 45 Reflections
So the specular reflection intensity is given as:
I_spec = ks·Il·cos^ns Φ
Remembering that cos Φ = V·R we can say:
I_spec = ks·Il·(V·R)^ns

For a single light source we can combine the effects of diffuse and specular reflections simply as follows:
I = I_diff + I_spec = ka·Ia + kd·Il·(N·L) + ks·Il·(V·R)^ns
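A minimal sketch of this combined model for a single point source, assuming unit-length vectors and the coefficients introduced above:

#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

double illuminate(double Ia, double Il,
                  double ka, double kd, double ks, double ns,
                  const Vec3& N, const Vec3& L, const Vec3& V, const Vec3& R)
{
    double diffuse  = kd * Il * std::max(0.0, dot(N, L));                // kd·Il·(N·L)
    double specular = ks * Il * std::pow(std::max(0.0, dot(V, R)), ns);  // ks·Il·(V·R)^ns
    return ka * Ia + diffuse + specular;                                 // ambient + diffuse + specular
}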

38 40 Diffuse & Specular Reflections From


of Example of
45 45 Multiple Light Sources
We can place any number of light sources in
a scene
We compute the diffuse and specular
reflections as sums of the contributions from
the various sources

Common Exam Question
10
41 43
of
45
Adding Intensity Attenuation of
45
RGB Colour Considerations (cont…)
To incorporate radial and angular intensity attenuation into our model we simply adjust our equation to take these into account
So, light intensity is now given as:
I = ka·Ia + f_radatten·f_angatten·(kd·Il·(N·L) + ks·Il·(V·R)^ns)
where f_radatten and f_angatten are as discussed previously

Each component of the surface colour is then calculated with a separate expression
For example, for the red component:
I_R = kaR·IaR + kdR·IlR·(N·L) + ksR·IlR·(V·R)^ns

42 44
of
45
RGB Colour Considerations of
45
Summary
For an RGB colour description each intensity specification is a three element vector
So, for each light source: Il = (IlR, IlG, IlB)
Similarly all parameters are given as vectors, for example kd = (kdR, kdG, kdB)

To create realistic (or even semi-realistic) looking scenes we must model light correctly
To successfully model lighting effects we need to consider:
– Ambient light
– Diffuse reflections
– Specular reflections

11
45
of
45

12
