Graphics 4
Three-dimensional translation
Three-dimensional rotation
Three-dimensional scaling
2. Three-dimensional rotation
We can rotate an object about any axis in space, but the easiest rotation axes
to handle are those that are parallel to the Cartesian-coordinate axes.
Also, we can use combinations of coordinate-axis rotations (along with
appropriate translations) to specify a rotation about any other line in space.
Therefore, we first consider the operations involved in coordinate-axis
rotations, then we discuss the calculations needed for other rotation axes.
By convention, positive rotation angles produce counterclockwise rotations
about a coordinate axis, assuming that we are looking in the negative
direction along that coordinate axis.
3D Coordinate-axis rotations
The two-dimensional z-axis rotation equations are easily extended to three dimensions, as
follows:
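In the standard form (with θ the counterclockwise rotation angle about the z axis):

$$x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta, \qquad z' = z$$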
Parameter θ specifies the rotation angle about the z axis, and z-coordinate values are
unchanged by this transformation.
In homogeneous-coordinate form, the three-dimensional z-axis rotation equations are:
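A standard homogeneous form is:

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} =
\begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$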
which we can write more compactly as:
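$$\mathbf{P}' = \mathbf{R}_z(\theta)\cdot\mathbf{P}$$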
This composite matrix is of the same form as the two-dimensional transformation sequence for
rotation about an axis that is parallel to the z axis (a pivot point that is not at the coordinate
origin).
When an object is to be rotated about an axis that is not parallel to one of the coordinate axes,
we must perform some additional transformations.
In this case, we also need rotations to align the rotation axis with a selected coordinate axis and
then to bring the rotation axis back to its original orientation.
Given the specifications for the rotation axis and the rotation angle, we can accomplish the
required rotation in five steps:
1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
3. Perform the specified rotation about the selected coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original spatial position.
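A minimal numpy sketch of this five-step composition (an illustrative helper, not part of any particular graphics package; the rotation axis is assumed to be given by two points p1 and p2):

```python
import numpy as np

def rotate_about_axis(points, p1, p2, theta):
    """Rotate an array of points (N x 3) by angle theta about the axis from p1 to p2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    a, b, c = (p2 - p1) / np.linalg.norm(p2 - p1)        # direction cosines of the axis
    d = np.hypot(b, c)                                    # projection of the axis onto the yz plane

    T = np.eye(4); T[:3, 3] = -p1                         # 1. translate the axis to the origin
    Rx = np.eye(4)                                        # 2a. rotate the axis into the xz plane
    if d != 0:
        Rx[1, 1], Rx[1, 2] = c / d, -b / d
        Rx[2, 1], Rx[2, 2] = b / d, c / d
    Ry = np.eye(4)                                        # 2b. rotate the axis onto the z axis
    Ry[0, 0], Ry[0, 2] = d, -a
    Ry[2, 0], Ry[2, 2] = a, d
    Rz = np.eye(4)                                        # 3. the specified rotation about z
    Rz[0, 0], Rz[0, 1] = np.cos(theta), -np.sin(theta)
    Rz[1, 0], Rz[1, 1] = np.sin(theta), np.cos(theta)

    # 4-5. inverse rotations and the inverse translation restore the original axis
    M = np.linalg.inv(T) @ np.linalg.inv(Rx) @ np.linalg.inv(Ry) @ Rz @ Ry @ Rx @ T
    pts = np.c_[np.asarray(points, float), np.ones(len(points))]   # homogeneous coordinates
    return (pts @ M.T)[:, :3]
```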
• We can transform the rotation axis onto any one of the three
coordinate axes.
• The z axis is often a convenient choice, and we next consider a
transformation sequence using the z-axis rotation matrix.
• A rotation axis can be defined with two coordinate positions, or
with one coordinate point and direction angles (or direction
cosines) between the rotation axis and two of the coordinate axes.
• We assume that the rotation axis is defined by two points, and that
the direction of rotation is to be counterclockwise when looking
along the axis from P2 to P1.
• The components of the rotation-axis vector are then computed as:
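With the axis running from P1 = (x1, y1, z1) to P2 = (x2, y2, z2):

$$\mathbf{V} = \mathbf{P}_2 - \mathbf{P}_1 = (x_2 - x_1,\; y_2 - y_1,\; z_2 - z_1)$$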
where the components a, b, and c are the direction cosines for
the rotation axis:
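$$\mathbf{u} = \frac{\mathbf{V}}{|\mathbf{V}|} = (a, b, c), \qquad a = \frac{x_2 - x_1}{|\mathbf{V}|}, \quad b = \frac{y_2 - y_1}{|\mathbf{V}|}, \quad c = \frac{z_2 - z_1}{|\mathbf{V}|}$$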
Now that we have determined the values for cosα and sin α in
terms of the components of vector u, we can set up the matrix
elements for rotation of this vector about the x axis and into
the xz plane:
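With d = √(b² + c²), so that cos α = c/d and sin α = b/d, one standard form of this matrix is:

$$\mathbf{R}_x(\alpha) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & c/d & -b/d & 0 \\ 0 & b/d & c/d & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$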
Quaternions, which are extensions of two-dimensional complex numbers, are useful in a number
of computer-graphics procedures, including the generation of fractal objects.
They require less storage space than 4 × 4 matrices, and it is simpler to write quaternion
procedures for transformation sequences.
This is particularly important in animations, which often require complicated motion sequences
and motion interpolations between two given positions of an object.
One way to characterize a quaternion is as an ordered pair, consisting of a
scalar part and a vector part:
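$$q = (s, \mathbf{v})$$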
A rotation about any axis passing through the coordinate origin is
accomplished by first setting up a unit quaternion with the
scalar and vector parts as follows:
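$$s = \cos\frac{\theta}{2}, \qquad \mathbf{v} = \sin\frac{\theta}{2}\,\mathbf{u}$$

where u is a unit vector along the selected rotation axis and θ is the specified rotation angle.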
For example, we can perform a rotation about the z axis by setting the rotation-axis vector u to the unit z-axis vector (0, 0, 1). Substituting the components of this vector into Matrix 39 (the quaternion-derived rotation matrix), we get the 3×3 version of the z-axis rotation matrix Rz(θ).
3. Three-dimensional scaling
A parameter value greater than 1 moves a point farther from the origin in the corresponding
coordinate direction.
Similarly, a parameter value less than 1 moves a point closer to the origin in that coordinate
direction.
Also, if the scaling parameters are not all equal, relative dimensions of a transformed object are
changed.
We preserve the original shape of an object with a uniform scaling: sx = sy = sz.
Because some graphics packages provide only a routine that scales relative to the coordinate
origin, we can always construct a scaling transformation with respect to any selected fixed
position (xf , yf , zf) using the following transformation sequence:
1. Translate the fixed point to the origin.
2. Apply the scaling transformation relative to the coordinate origin, using the basic scaling matrix.
3. Translate the fixed point back to its original position.
The matrix representation for an arbitrary fixed-point scaling can then be expressed as the
concatenation of these translate-scale-translate transformations:
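With fixed point (xf, yf, zf) and scaling parameters sx, sy, sz, the composite matrix has the standard form:

$$\mathbf{T}(x_f, y_f, z_f)\cdot\mathbf{S}(s_x, s_y, s_z)\cdot\mathbf{T}(-x_f, -y_f, -z_f) =
\begin{bmatrix} s_x & 0 & 0 & (1 - s_x)x_f \\ 0 & s_y & 0 & (1 - s_y)y_f \\ 0 & 0 & s_z & (1 - s_z)z_f \\ 0 & 0 & 0 & 1 \end{bmatrix}$$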
We can set up programming procedures for constructing a three-dimensional scaling matrix using
either a translate-scale-translate sequence or a direct incorporation of the fixed-point
coordinates.
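A minimal numpy sketch of the direct-incorporation approach (the function name and argument layout are illustrative only):

```python
import numpy as np

def scale_about_fixed_point(sx, sy, sz, fixed_point):
    """Build the 4x4 matrix for scaling relative to a fixed point, directly
    incorporating the fixed-point coordinates (equivalent to translate-scale-translate)."""
    xf, yf, zf = fixed_point
    return np.array([[sx, 0.0, 0.0, (1 - sx) * xf],
                     [0.0, sy, 0.0, (1 - sy) * yf],
                     [0.0, 0.0, sz, (1 - sz) * zf],
                     [0.0, 0.0, 0.0, 1.0]])
```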
Composite 3D Transformations:
Three-dimensional Reflections
Three-dimensional Shear
Three-dimensional Reflections
Three-dimensional Shear
In a z-axis shear relative to a reference position zref, plane areas that are perpendicular to the z axis are shifted by an amount proportional to z − zref.
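One common form of the z-axis shearing matrix, with shear parameters shzx and shzy, is:

$$\mathbf{M}_{z\,\text{shear}} = \begin{bmatrix} 1 & 0 & sh_{zx} & -sh_{zx}\,z_{ref} \\ 0 & 1 & sh_{zy} & -sh_{zy}\,z_{ref} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

which gives x' = x + shzx(z − zref), y' = y + shzy(z − zref), and z' = z.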
Affine Transformations
Each of the transformed coordinates x', y', and z' is a linear function of the original coordinates x, y, and z, and the parameters aij and bk are constants determined by the transformation type.
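In three dimensions, the general form is:

$$x' = a_{xx}x + a_{xy}y + a_{xz}z + b_x, \qquad
y' = a_{yx}x + a_{yy}y + a_{yz}z + b_y, \qquad
z' = a_{zx}x + a_{zy}y + a_{zz}z + b_z$$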
Affine transformations (in two dimensions, three dimensions, or higher dimensions) have the
general properties that parallel lines are transformed into parallel lines, and finite points map
to finite points.
Problems
A triangle is defined by three vertices A(0, 2, 1), B(2, 3, 0), and C(1, 2, 1). Find its final coordinates after it is rotated by 45 degrees about the line joining the points (1, 1, 1) and (0, 0, 0).
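A brief numpy sketch of one way to carry out this computation (it uses Rodrigues' rotation formula, which is equivalent to the five-step matrix composition here because the given axis already passes through the origin; the numerical results are left to the reader):

```python
import numpy as np

def rotate_point(p, axis, theta):
    """Rotate point p by angle theta about a unit axis through the origin
    (Rodrigues' rotation formula)."""
    u = np.asarray(axis, float) / np.linalg.norm(axis)
    p = np.asarray(p, float)
    return (p * np.cos(theta)
            + np.cross(u, p) * np.sin(theta)
            + u * np.dot(u, p) * (1 - np.cos(theta)))

theta = np.radians(45)
axis = (1.0, 1.0, 1.0)                       # the line through (0,0,0) and (1,1,1)
for name, v in [("A", (0, 2, 1)), ("B", (2, 3, 0)), ("C", (1, 2, 1))]:
    print(name, np.round(rotate_point(v, axis, theta), 4))
```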
Unlike a camera picture, we can choose different methods for projecting a scene onto the view
plane.
One method for getting the description of a solid object onto a view plane is to project points
on the object surface along parallel lines.
This technique, called parallel projection, is used in engineering and architectural drawings to
represent an object with a set of views that show accurate dimensions of the object.
Another method for generating a view of a three-dimensional scene is to project points to the
view plane along converging paths.
This process, called a perspective projection, causes objects farther from the viewing position
to be displayed smaller than objects of the same size that are nearer to the viewing position.
A scene that is generated using a perspective projection appears more realistic, because this is
the way that our eyes and a camera lens form images.
Parallel lines along the viewing direction appear to converge to a distant point in the
background, and objects in the background appear to be smaller than objects in the foreground.
III. Depth Cueing
More distant objects appear dimmer to us than nearer objects due to light scattering by dust
particles, haze, and smoke.
Some atmospheric effects can even change the perceived color of an object, and we can model
these effects with depth cueing.
IV. Identifying Visible Lines and
Surfaces
We can also clarify depth relationships in a wire-frame display using
techniques other than depth cueing.
One approach is simply to highlight the visible lines or to display them in a
different color.
Another technique, commonly used for engineering drawings, is to display the
non-visible lines as dashed lines.
Or we could remove the non-visible lines from the display.
But removing the hidden lines also removes information about the shape of the back surfaces of
an object, and wire-frame representations are generally used to get an indication of an object’s
overall appearance, front and back.
A wire-frame object can be displayed with depth cueing, so that the brightness of lines decreases from the front of the object to the back.
When a realistic view of a scene is to be produced, back parts of the objects are completely
eliminated so that only the visible surfaces are displayed.
In this case, surface-rendering procedures are applied so that screen pixels contain only the
color patterns for the front surfaces.
V. Surface Rendering
VI. Exploded and Cutaway Views
Many graphics packages allow objects to be defined as hierarchical structures, so that internal
details can be stored.
Exploded and cutaway views of such objects can then be used to show the internal structure and
relationship of the object parts.
An alternative to exploding an object into its component parts is a cutaway view, which removes
part of the visible surfaces to show internal structure.
VII. Three-dimensional and
Stereoscopic Viewing
Three-dimensional views can be obtained by reflecting a raster image from a
vibrating, flexible mirror.
The vibrations of the mirror are synchronized with the display of the scene on
the cathode ray tube (CRT).
As the mirror vibrates, the focal length varies so that each point in the scene
is reflected to a spatial position corresponding to its depth.
Stereoscopic devices present two views of a scene: one for the left eye and
the other for the right eye.
The viewing positions correspond to the eye positions of the viewer.
These two views are typically displayed on alternate refresh cycles of a raster
monitor.
The Three-dimensional Viewing
Pipeline
First of all, we need to choose a viewing position corresponding to where we
would place a camera.
We choose the viewing position according to whether we want to display a front,
back, side, top, or bottom view of the scene.
We could also pick a position in the middle of a group of objects or even inside a
single object, such as a building or a molecule.
Which way do we want to point the camera from the viewing position, and how
should we rotate it around the line of sight to set the “up” direction for the
picture?
Some of the viewing operations for a three-dimensional scene are the same as, or
similar to, those used in the two-dimensional viewing pipeline.
A two-dimensional viewport is used to position a projected view of the three-
dimensional scene on the output device, and a two-dimensional clipping window is
used to select a view that is to be mapped to the viewport.
Three-dimensional Viewing Pipeline
World coordinates (3D) → Viewing coordinates (3D)
We first select a world-coordinate position P0 =(x0, y0, z0) for the viewing
origin, which is called the view point or viewing position. (Sometimes the
view point is also referred to as the eye position or the camera position.)
For three-dimensional space, we also need to assign a direction for one of the
remaining two coordinate axes.
A. The View-Plane Normal Vector
Because the viewing direction is usually along the zview axis, the view plane, also called the
projection plane, is normally assumed to be perpendicular to this axis.
Thus, the orientation of the view plane, as well as the direction for the positive zview axis, can
be defined with a view-plane normal vector N.
An additional scalar parameter is used to set the position of the view plane at some coordinate
value zvp along the zview axis.
This parameter value is usually specified as a distance from the viewing origin along the
direction of viewing, which is often taken to be in the negative zview direction.
Thus, the view plane is always parallel to the xview, yview plane, and the projection of objects
to the view plane corresponds to the view of the scene that will be displayed on the output
device.
Vector N can be specified in various ways. In some graphics systems, the
direction for N is defined to be along the line from the world-coordinate origin to a selected
point position.
Other systems take N to be in the direction from a reference point Pref to the viewing origin P0,
as in Figure 10. In this case, the reference point is often referred to as a look-at point within
the scene, with the viewing direction opposite to the direction of N.
We could also define the view-plane normal vector, and other vector directions, using direction
angles.
These are the three angles, α, β, and γ, that a spatial line makes with the x, y, and z axes,
respectively. But it is usually much easier to specify a vector direction with two point positions
in a scene than with direction angles.
B. The View-up Vector
We can choose any direction for the view-up vector V, so long as it is not parallel to N.
A convenient choice is often a direction parallel to the world yw axis; that is, we could set V =
(0, 1, 0).
Because it can be difficult to orient V exactly perpendicular to N, viewing routines typically
adjust the user-defined orientation of vector V, as shown in Figure 11, so that V is projected
onto a plane that is perpendicular to the view-plane normal vector.
C. The uvn Viewing-Coordinate
Reference Frame
Left-handed viewing coordinates are sometimes used in graphics packages,
with the viewing direction in the positive zview direction.
With a left-handed system, increasing zview values are interpreted as being
farther from the viewing position along the line of sight.
But right-handed viewing systems are more common, because they have the
same orientation as the world-reference frame.
This allows a graphics package to deal with only one coordinate orientation
for both world and viewing references.
Because the view-plane normal N defines the direction for the zview axis and
the view-up vector V is used to obtain the direction for the yview axis, we
need only determine the direction for the xview axis.
We determine the correct direction for U by taking the vector
cross product of V and N so as to form a right-handed viewing
frame.
The vector cross product of N and U also produces the
adjusted value for V, perpendicular to both N and U, along
the positive yview axis. Following these procedures, we
obtain the following set of unit axis vectors for a right-
handed viewing coordinate system.
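$$\mathbf{n} = \frac{\mathbf{N}}{|\mathbf{N}|}, \qquad
\mathbf{u} = \frac{\mathbf{V}\times\mathbf{n}}{|\mathbf{V}\times\mathbf{n}|}, \qquad
\mathbf{v} = \mathbf{n}\times\mathbf{u}$$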
D. Generating Three-dimensional
Viewing Effects
For instance, from a fixed viewing position, we could change the direction of N to display
objects at positions around the viewing-coordinate origin.
We could also vary N to create a composite display consisting of multiple views from a fixed
camera position.
We can simulate a wide viewing angle by producing seven views of the scene from the same
viewing position, but with slight shifts in the viewing direction; the views are then combined to
form a composite display.
Similarly, we generate stereoscopic views by shifting the viewing direction as well as shifting the
view point slightly to simulate the two eye positions.
If we want to simulate an animation panning effect, as when a camera moves through a scene or
follows an object that is moving through a scene, we can keep the direction for N fixed as we
move the view point, as illustrated in Figure 13.
In the next phase of the three-dimensional viewing pipeline, after the transformation to viewing
coordinates, object descriptions are projected to the view plane.
Graphics packages generally support both parallel and perspective projections.
In a parallel projection, coordinate positions are transferred to the view plane along parallel
lines.
A parallel projection preserves relative proportions of objects, and this is the method used in
computer-aided drafting and design to produce scale drawings of three-dimensional objects.
All parallel lines in a scene are displayed as parallel when viewed with a parallel projection.
There are two general methods for obtaining a parallel-projection view of an object: We can
project along lines that are perpendicular to the view plane, or we can project at an oblique
angle to the view plane.
For a perspective projection, object positions are transformed to projection coordinates along
lines that converge to a point behind the view plane.
Unlike a parallel projection, a perspective projection does not preserve relative proportions of
objects.
But perspective views of a scene are more realistic because distant objects in the projected
display are reduced in size.
1. Parallel Projections
A parallel projection is used to display a picture in its true shape and size. When the
projectors are perpendicular to the view plane, it is called an orthographic projection.
The parallel projection is formed by extending parallel lines from each vertex on the object
until they intersect the plane of the screen.
The point of intersection is the projection of the vertex.
Parallel projections are used by architects and engineers to create working drawings of an
object; complete representations require two or more views of the object using different
projection planes.
1.1. Orthogonal Projections
A transformation of object descriptions to a view plane along lines that are all
parallel to the view-plane normal vector N is called an orthogonal projection (or,
equivalently, an orthographic projection).
Orthogonal projections are most often used to produce the front, side, and top
views of an object.
Elevations and plan view: Front, side, and rear orthogonal projections of an
object are called elevations; and a top orthogonal projection is called a plan view.
❖ Mostly used by drafters and engineers to create working drawings of an object
which preserves its scale and shape.
❖ The distance between the COP (center of projection) and the projection plane is infinite,
i.e., the projectors are parallel to each other and have a fixed direction.
Normalization Transformation for an
Orthogonal Projection
Projectors: also called projection vectors, these are rays that start from the object scene and
are used to create an image of the object on the viewing (view) plane.
Perspective projection introduces several anomalies, due to which object shape and appearance
are affected:
Perspective foreshortening: The projected size of an object decreases as its distance from the
center of projection increases.
Vanishing point: Sets of parallel lines appear to meet at some point in the view plane.
Distortion of lines: A line that extends from in front of the viewer to behind the viewer is
projected with distortion.
1. Perspective Projections: Vanishing
points
The point at which a set of projected parallel lines appears to converge is
called a vanishing point.
It is the point where all lines will appear to meet. There can be one point,
two point, and three point perspectives.
There is an illusion that certain sets of parallel lines (that are not parallel to
the view plane ) appear to meet at some point on the view plane.
The vanishing point for any set of parallel lines that are parallel to one of
principal axis is referred to as a principal vanishing point (PVP).
The number of PVPs is determined by the number of principal axes
intersected by the view plane.
One principal vanishing point projection
- occurs when the projection plane is perpendicular to one of the principal axes (x, y or z ).
Three principal vanishing point projection
- occurs when the projection plane intersects all three principal axes, giving three vanishing
points (VP1, VP2, VP3).
Normalized Perspective-Projection
Transformation Coordinates
View Volume
- The view volume bounds the portion of the 3D scene that is to be projected onto the view
plane; everything outside it is clipped away.
View Volume for Perspective Projection
- Its shape is a semi-infinite pyramid with apex at the view point (COP) and edges passing
through the corners of the view window.
(Figure: perspective view volume bounded by the view window, the front clipping plane, and the
back clipping plane.)
View Volume for Parallel Projection
- Its shape is an infinite parallelepiped with sides parallel to the direction of projection.
(Figure: parallelepiped view volume bounded by the view window, the front clipping plane, and
the back clipping plane.)
Producing a Canonical view volume for a perspective projection
(Figure: the perspective view volume, showing the COP, the view window, and the frustum
centerline, before and after normalization.)
Step 2: Scale the view volume inversely proportional to the distance from the view window, so
that the shape of the view volume becomes a rectangular parallelepiped.
Converting object coordinates to view plane coordinates
(Figure: the world coordinate system (xw, yw, zw) with the view reference point VRP, and the
view plane (eye) coordinate system (xv, yv, zv).)
Steps:
3. Align the object coordinates' z axis with the view plane coordinates' z axis (the view-plane
normal):
a) Rotate about the x axis to place the line (i.e., the object coordinates' z axis) in the view
plane coordinates' xz plane.
b) Rotate about the y axis to move the z axis into its proper position.
c) Rotate about the z axis until the x and y axes are in place in the view plane coordinates.
Three-Dimensional Clipping Algorithms
All device-independent transformations (geometric and viewing) are concatenated and applied
before executing the clipping routines.
And each of the clipping boundaries for the normalized view volume is a plane that is parallel to
one of the Cartesian planes, regardless of the projection type and original shape of the view
volume.
Depending on whether the view volume has been normalized to a unit cube or to a symmetric
cube with edge length 2, the clipping planes have coordinate positions either at 0 and 1 or at -1
and 1.
For the symmetric cube, the equations for the three-dimensional clipping planes are:
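$$x = -1, \quad x = 1; \qquad y = -1, \quad y = 1; \qquad z = -1, \quad z = 1$$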
Clipping in 3D Homogeneous
Coordinates
Computer-graphics libraries process spatial positions as four-dimensional homogeneous
coordinates so that all transformations can be represented as 4 by 4 matrices.
As each coordinate position enters the viewing pipeline, it is converted to a four-dimensional
representation:
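For a point initially specified in three dimensions, this representation is typically:

$$\mathbf{P} = \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

with a homogeneous factor h that may become different from 1 after a perspective transformation, so that the Cartesian coordinates are recovered as x = xh / h, y = yh / h, z = zh / h.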
Three-dimensional Region Codes
3D Point and Line Clipping
3D Polygon Clipping
THREE-DIMENSIONAL (3D) OBJECT
REPRESENTATION
General Modeling Techniques
o Particle Systems
(for modeling objects that exhibit ‘fluid-like’ properties
e.g. – smoke, fire, waterfalls etc)
o Volume Rendering
(to show interior information of a data set e.g.- Seismic
data or data set from a CT scanner)
o Physically-based Modeling
(for modeling nonrigid objects and their behavior in terms of the interaction of external and
internal forces, e.g., a rope, a piece of cloth, or a soft rubber ball)
POLYGON MESH MODELS
The various objects that are often useful in graphics applications include
quadric surfaces, superquadrics, polynomial and exponential functions, and
spline surfaces.
A frequently used class of objects are the quadric surfaces, which are described with second-
degree equations (quadratics).
They include spheres, ellipsoids, tori, paraboloids, and hyperboloids. Quadric surfaces,
particularly spheres and ellipsoids, are common elements of graphics scenes, and routines for
generating these
surfaces are often available in graphics packages.
Desirable properties of a curve representation:
Reproducible - the representation should give the same curve every time;
Computationally Quick;
Easy to manipulate, especially important for design purposes;
Flexible;
Easy to combine with other segments of curve.
Categories of Curves
Interpolating Curves
- These curves will pass through the points used to describe it.
- The points through which the curve passes are known as knots.
Approximation Curves
- An approximating curve will get near to the points without necessarily passing through any of
them.
(Figure: an interpolating curve versus an approximating curve.)
Curve Representation
Non-parametric Representation
The Explicit form is satisfactory when the function is single-valued and the curve has no
vertical tangents.
The implicitly defined curves require the solution of a non-linear equation for each point and
thus numerical procedures have to be employed.
Both explicitly and implicitly represented curves are axis-dependent.
Parametric Representation of Curves
Therefore the general equation of a curve in terms of basis functions bi(t) and control points pi
is as follows:
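$$\mathbf{P}(t) = \sum_{i=0}^{n} \mathbf{p}_i\, b_i(t)$$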
Various Sets of Basis functions have been used in Computer Graphics for the specification of
curves.
Among these are the Bézier Basis and B-Spline Basis.
Advantages of Cubic Curves
To ensure a smooth transition from one section of a piecewise parametric curve to the next, we can
impose various continuity conditions at the connection points.
Suppose each section of a connected curve is described with following parametric equations:
x = x(u),   y = y(u),   z = z(u),   u1 ≤ u ≤ u2
We set the parametric continuity by matching the parametric derivatives of adjoining curve sections
at their common boundary.
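For reference, the standard conditions are: zero-order parametric (C0) continuity means the sections simply meet (same coordinate position at the boundary); first-order (C1) continuity means the first parametric derivatives are equal at the joining point; and second-order (C2) continuity means both the first and second parametric derivatives are equal there.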
Contd…
Another method for joining two successive curve sections is to specify conditions
for geometric continuity.
In this case, we require only that the parametric derivatives of the two sections
are proportional to each other at their common boundary, instead of requiring
equality.
Zero-order geometric continuity, described as G0 continuity, is the same as zero-
order parametric continuity. That is, two successive curve sections must have the
same coordinate position at the boundary point.
First-order geometric continuity, or G1 continuity, means that the parametric
first derivatives are proportional at the intersection of two successive sections.
Second-order geometric continuity, or G2 continuity, means that both the first
and second parametric derivatives of the two curve sections are proportional at
their boundary. Under G2 continuity, curvatures of two curve sections will match
at the joining position.
Bézier Curves
(Figure: a Bézier curve and its control polygon.)
Contd…
It is easy to generalize the above to curves of degree n with n+1 control points. Thus:
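In the usual Bernstein-polynomial form, with control points p0, …, pn:

$$\mathbf{P}(u) = \sum_{i=0}^{n} \mathbf{p}_i \binom{n}{i} u^i (1-u)^{n-i}, \qquad 0 \le u \le 1$$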
(Example control-point data for two cubic Bézier sections that join at (1, 0, 0): first section
(0, 0, 1), (0.5, 0, 1), (1, 0, 0.5), (1, 0, 0); second section (1, 0, 0), (1, 0, -0.5),
(2, 0, 0), (2, 0, 0.5).)
Construction of Bezier Curve
Recursive Subdivision approach (De Casteljau algorithm)
(Fig: subdivision of a cubic Bézier curve with control points A, B, C, D; intermediate points
AB, BC, CD, ABC, BCD; and the subdivision point ABCD on the curve.)
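A small Python sketch of one subdivision step at t = 1/2, following the labeling in the figure (the example control points are arbitrary):

```python
import numpy as np

def de_casteljau_subdivide(ctrl):
    """Split a cubic Bezier curve with control points A, B, C, D at t = 1/2,
    returning the control points of the two half-curves."""
    A, B, C, D = [np.asarray(p, float) for p in ctrl]
    AB, BC, CD = (A + B) / 2, (B + C) / 2, (C + D) / 2   # midpoints of the polygon legs
    ABC, BCD = (AB + BC) / 2, (BC + CD) / 2              # second-level midpoints
    ABCD = (ABC + BCD) / 2                               # point on the curve at t = 1/2
    return (A, AB, ABC, ABCD), (ABCD, BCD, CD, D)

left, right = de_casteljau_subdivide([(0, 0), (1, 2), (3, 3), (4, 0)])
```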
Contd…
1. The curve passes through the start and finish points of the
control polygon defining the curve.
2. The tangent to the curve at t = 0 lies in the direction of the
line joining the first point to the second point. Also the
tangent to the curve at t=1 is in the direction of the line
joining the penultimate point to the last point.
3. Any point on the curve lies inside the convex hull of the
control polygon.
4. All control points affect the entire curve.
5. The order of the curve is related to the number of control
points. Hence using many control points to control the
curve shape means evaluating high order polynomials.
6. The curve is transformed by applying any affine
transformation to its control points and generating the
transformed curve from the transformed control points.
7. No line can intersect the curve more than twice if the four
control points form an open polygon. Thus there can be no
loops in the curve and it must be smooth. This is called the
Variation Diminishing Property.
Bezier Surfaces
Hermite splines allow local control of the spline: the user specifies the tangent at each
control point.
A Hermite spline section between control points pk and pk+1 is defined by the boundary
conditions:
P(0) = pk,   P(1) = pk+1,   P'(0) = Dpk,   P'(1) = Dpk+1
Drawback –
It requires input values for the curve derivatives at the
control points.
The tradeoff is that B-splines are more complex than Bézier splines.
APPROXIMATING SPLINES: B-SPLINE CURVES…
B-splines overcome two drawbacks of Bézier curves:
1. Their non-localness: while a control point mainly influences the shape of the curve close to
it, it also affects the entire curve to some extent.
2. The fact that the degree of the curve is related to the number of control points.
Thus either high order polynomials have to be used or multiple low-degree curve
segments have to be used.
Definition –
A B-spline is a set of piecewise (usually cubic) polynomial segments
that pass close to a set of control points.
Here n (where the curve has n + 1 control points), m (the number of knot values), and k (the
order of the curve) satisfy the following condition:
m = n + k + 1
Blending functions Ni,k(u) for a B-spline curve are defined by the Cox-de Boor recursion
formula. When k = 1:

$$N_{i,1}(u) = \begin{cases} 1, & u \in [u_i, u_{i+1}) \\ 0, & \text{otherwise} \end{cases}$$
Contd…
And if k>1
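A standard statement of the recursion (assuming the knot-vector conventions above, with any term whose denominator is zero taken to be zero) is:

$$N_{i,k}(u) = \frac{u - u_i}{u_{i+k-1} - u_i}\, N_{i,k-1}(u)
            + \frac{u_{i+k} - u}{u_{i+k} - u_{i+1}}\, N_{i+1,k-1}(u)$$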
If the knot vector does not have any particular structure, the generated curve will not touch
the first and last legs of the control polygon.
Clamped B-spline
We may want to clamp the curve so that it is tangent to the first and last legs of the control
polygon, just as a Bézier curve is. To do so, the first knot and the last knot must each be
repeated k times (multiplicity equal to the order of the curve). This type of B-spline curve is
called a clamped curve.
Closed B-spline
By repeating some knots and control points, the generated curve can be a closed one.
$$C_i(u) = \frac{1}{6}\,\begin{bmatrix} u^3 & u^2 & u & 1 \end{bmatrix}
\begin{bmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -3 & 0 & 3 & 0 \\ 1 & 4 & 1 & 0 \end{bmatrix}
\begin{bmatrix} P_{i-1} \\ P_i \\ P_{i+1} \\ P_{i+2} \end{bmatrix}$$
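A short numpy sketch that evaluates one uniform cubic B-spline segment with this matrix (the control points are arbitrary example values):

```python
import numpy as np

B_SPLINE_M = np.array([[-1, 3, -3, 1],
                       [ 3, -6, 3, 0],
                       [-3, 0, 3, 0],
                       [ 1, 4, 1, 0]], float) / 6.0

def bspline_segment(P, u):
    """Evaluate C_i(u), 0 <= u <= 1, from four consecutive control points
    P = [P_{i-1}, P_i, P_{i+1}, P_{i+2}]."""
    U = np.array([u**3, u**2, u, 1.0])
    return U @ B_SPLINE_M @ np.asarray(P, float)

control = [(0, 0), (1, 2), (3, 3), (4, 0)]
segment = np.array([bspline_segment(control, u) for u in np.linspace(0, 1, 20)])
```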
Summary of properties of B-spline curve
Contd…
The family of complex shapes found in nature may be described by what the mathematician Benoit
Mandelbrot (in 1975) called fractals.
The word ‘fractal’ derives its origin from the Latin word fractus meaning ‘irregular and
fragmented’.
The fractal geometry has helped to reconnect pure mathematics with natural sciences and
computing.
Euclidean and Fractal Geometry
Euclidean | Fractal
Traditional (over 2000 years old) | Modern 'monsters' (about 20 years old)
Based on a characteristic size and scale | No specific size or scale
Suited to man-made objects | Appropriate for natural shapes
Described by a non-recursive formula | Described by an algorithm or recursive formula
Fractal: Definition & Properties
Fractals are infinitely magnifiable irregular objects with fractional dimension which can be
produced by a small set of instructions and data.
Important properties of fractals are:
➢Self-similarity
➢Fractional Dimensions
➢Ill defined characteristic scale of length
➢Formation by Iteration
1. Self-Similarity
Geometric figures are similar if they have the same shape
i.e. the corresponding sides are in proportion. For example,
the following two squares are similar.
(Figure: a self-similar figure.)
Self-Similarity of Fractals
As fractal structure are examined at smaller and smaller scales i.e. as the
view of them is magnified more and more, the smaller scale versions seem to
resemble the large scale version.
Self-similarity, in other words invariance against change in scale or size, is an
attribute of many laws of nature.
Example: Self-similarity of fractals
Koch Curve
2. Fractional Dimension
Dimension of Geometric Objects
A line has one dimension - length. It has no width and no height, but infinite length.
Again, this model of a line is really not very good, but until we learn how to draw a line with 0
width and infinite length, it'll have to do.
Dimension of Geometric Objects
Contd…
Space, a huge empty box, has three dimensions, length, width, and depth, extending to
infinity in all three directions.
Obviously, a drawing is not a good representation of 3-D space; besides its size, it's just a
hexagon drawn to fool you into thinking it's a box.
Topological Dimension
We observe that all the geometric objects residing in Euclidean space have integer dimension.
Such a dimension is alternatively known as the topological dimension.
In general, Euclidean space Rn has dimension n.
Intuitively, the dimension of the space equals the number of real parameters necessary to
describe different points in the space.
Fractional Dimension
In fractal geometry, there is another concept of this dimension, that is, here
dimension of an object is not an integer rather a fraction like 1.53, 2.71 etc.
It is also known as Hausdorff dimension or Fractal dimension.
Simply put, this implies that objects are possible whose dimension lies between 1 and 2, for
example.
Mathematical Interpretation
If we take an object residing in Euclidean dimension D and reduce its linear size by 1/r in each
spatial direction, its measure (the number of self-similar pieces) increases to N = r^D times
the original (refer to the next figure).
Taking the log of both sides of N = r^D gives log(N) = D log(r). Solving for D, we get
D = log(N) / log(r).
Figure: Understanding the concept of dimension
Fractional Dimension
Contd…
(Figure: the initiator (length 1) and the generator (length 4/3) for constructing the Koch
curve.)
Higher Levels of Koch Curve
The rule says to take each line and replace it with four lines, each one-third the length of the
original.
Here the line is reduced in scale by 3 (i.e., r = 3) and generates 4 equal pieces (i.e., N = 4),
so
4 = 3^D
Solving for D,
D = log 4 / log 3 ≈ 1.2618
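A small Python sketch that applies this replacement rule to generate points of the Koch curve (the segment endpoints and recursion depth are arbitrary example values):

```python
import numpy as np

def koch_curve(p0, p1, level):
    """Return the polyline points of a Koch curve between p0 and p1 after `level`
    applications of the rule: replace each segment by four segments of 1/3 the length."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    if level == 0:
        return [p0, p1]
    a = p0 + (p1 - p0) / 3                      # end of the first third
    b = p0 + 2 * (p1 - p0) / 3                  # start of the last third
    d = b - a
    # apex of the bump: the middle third rotated by +60 degrees about point a
    peak = a + np.array([0.5 * d[0] - np.sqrt(3) / 2 * d[1],
                         np.sqrt(3) / 2 * d[0] + 0.5 * d[1]])
    pts = []
    for q0, q1 in [(p0, a), (a, peak), (peak, b), (b, p1)]:
        seg = koch_curve(q0, q1, level - 1)
        pts.extend(seg[:-1])                    # avoid duplicating shared endpoints
    pts.append(p1)
    return pts

points = koch_curve((0, 0), (1, 0), 3)          # level-3 Koch curve on the unit segment
```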
Interesting Features of Koch Curve
The previous examples were examples of linear fractal geometry. To create such fractals, we
apply a set of rules repeatedly, a large number of times, to obtain the final figure.
On the other hand, in case of Non-Linear fractal, a mathematical formula yields fractal.
The examples are: Julia set and the Mandelbrot set.
Julia Set
(Self-squaring fractal)
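A minimal escape-time sketch of the self-squaring iteration z → z² + c that generates a Julia set image; the constant c, image size, and iteration limit are arbitrary example values:

```python
import numpy as np

def julia_set(c, size=400, bound=2.0, max_iter=100):
    """Return an array of escape-iteration counts for z -> z**2 + c
    over the square [-bound, bound] x [-bound, bound] of the complex plane."""
    xs = np.linspace(-bound, bound, size)
    ys = np.linspace(-bound, bound, size)
    z = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
    counts = np.zeros(z.shape, dtype=int)
    alive = np.ones(z.shape, dtype=bool)        # points that have not yet escaped
    for _ in range(max_iter):
        z[alive] = z[alive] ** 2 + c            # the self-squaring step
        alive &= np.abs(z) <= 2.0
        counts[alive] += 1
    return counts

image = julia_set(c=-0.8 + 0.156j)
```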