Extension: Motion

A. Joshi and S. Alam
1.1.1 SFML

As with our previous applications, we decided to use SFML as our primary graphics library. We kept using SFML so as to finish this project faster and to prevent ourselves from getting bogged down in the details of learning a new library.

Since we are not using OpenGL directly, we render purely on the CPU side, without the help of the GPU for rasterization. This is a trade-off we were willing to make, as we were not looking to make a game, but rather a simulation.

1.1.2 ImGUI

As with SFML, we already had the code infrastructure in place for ImGUI, and we decided to use it for the same reasons.

1.2 Algorithm

1.2.1 Motivation

Our intention is to create a projectile launcher that works on a model of the Earth. We can decompose this problem into the following steps:

1. Be able to render a sphere
2. Be able to map a texture onto the sphere
3. Be able to draw a point on the sphere (as the launch point)
4. Be able to compute and draw the trajectory of the projectile
5. Be able to account for the rotation of the Earth
6. Be able to animate the projectile
1.2.2 Rendering a Sphere

To render a sphere, we must first note that we have no way to render one directly: we do not have access to OpenGL, so we cannot use gluSphere(). SFML does, however, provide a way to render polygons, so we chose to render the sphere in terms of polygons.

To do so, we must first introduce the idea of triangle subdivision: taking a triangle and dividing it into smaller triangles (as the name would suggest). There are multiple ways of doing this (see [3] for more); the general reasoning is that we obtain a better, more refined representation of any polygon without having to store extra information¹. See in figure 1 how, by subdividing a tetrahedron, we can approximate a smooth surface. Usually a few iterations of this process suffice to give a good approximation of the limit surface.

The second idea we must introduce is normalisation with respect to a set distance. Normally, normalisation preserves the direction of a vector but scales it so that its magnitude is 1. Our normalisation is a little different: we do not end up with magnitude 1, but rather with a magnitude equal to a set distance.

Here is a two-dimensional example of normalisation with respect to a distance. Figure 3 shows two points, A and B, and the line drawn between them. Currently, the distance between A and B is 6 units; suppose, however, that one were tasked to find a point on the line AB that is 12 units away from A (shown in figure 4 as point C). More generally, we can say that this point C will always be colinear with A and B, but is not necessarily on the line segment AB.

Staying with the two-dimensional story, if we were to draw a set of points P that all lie on a straight line not passing through the origin, and we were to normalise them with respect to the origin with a certain distance d, we would construct an arc of a circle with radius d, since all this exercise amounts to is drawing a set of points on a circle of radius d. It is then trivial to prove that the same holds in three dimensions².

The reason to go through such an exercise is to realise that we can start with an octahedron and subdivide it, yielding points that lie on its flat faces; we can then normalise these points to obtain points on a sphere, and we can also control the radius of that sphere. To keep things simple, we use an octahedron, because it is comprised of 8 equilateral triangles, which are trivial to subdivide.

Now that we have points that we can render, we need to somehow convert them from 3D to 2D so that we can render them on the screen. This is where GLM does most of the heavy lifting: we do not have to construct the equations for this manually, but can leave it to GLM.

The Rendering Pipeline

Throughout this procedure, we will be using 4D vectors (x, y, z, w) and 4×4 matrices. The reason for this is that we can use the fourth dimension to store information about the vector. I would recommend using [2] as a guide to further understanding the intricate process described here.

¹Though we obviously take a memory and computation penalty for this, we can achieve a smooth limit surface.
²This is an exercise left to the reader.
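The subdivide-then-normalise construction described above can be sketched in a few lines. This is a self-contained illustration, not the project's code: it uses a minimal Vec3 stand-in for glm::vec3, splits each triangle into four via edge midpoints (one of the schemes discussed in [3]), and rescales vertices to a chosen distance d.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal stand-in for glm::vec3 so this sketch is self-contained.
struct Vec3 { float x, y, z; };

Vec3 mid(const Vec3 &a, const Vec3 &b) {
    return {(a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2};
}

// "Normalisation with respect to a distance": keep the direction of v,
// but scale its magnitude to exactly d.
Vec3 normaliseTo(const Vec3 &v, float d) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len * d, v.y / len * d, v.z / len * d};
}

struct Triangle { Vec3 v1, v2, v3; };

// One subdivision pass: split every triangle into four using edge midpoints.
std::vector<Triangle> subdivide(const std::vector<Triangle> &in) {
    std::vector<Triangle> out;
    for (const Triangle &t : in) {
        Vec3 a = mid(t.v1, t.v2), b = mid(t.v2, t.v3), c = mid(t.v3, t.v1);
        out.push_back({t.v1, a, c});
        out.push_back({a, t.v2, b});
        out.push_back({c, b, t.v3});
        out.push_back({a, b, c});
    }
    return out;
}
```

Starting from the 8 triangles of an octahedron, n passes yield 8 · 4ⁿ triangles; normalising every vertex to d then places all points on a sphere of radius d.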
Figure 1: Image shows the tetrahedron being subdivided 0, 1, 2, 6 times
This being said:

- If w = 0, then the vector is a direction vector
- If w = 1, then the vector is a point

To begin, all of the points that describe a sphere are relative to the origin (obviously); however, this origin is not necessarily the origin of the world, but rather the origin of the model. To make the points relative to the world, we can apply a model matrix transformation. The model matrix consists of:

- A translation matrix – which describes the position of the object in the world relative to the origin of the world.
- A rotation matrix – which describes the orientation of the object in the world relative to the basis vectors of the world.
- A scaling matrix – which describes the size of the object in the world relative to the basis vectors of the world.

After applying the model matrix, our coordinates are in world space (points are defined relative to the origin of the world).

To quote Futurama:

'The engines don't move the ship at all. The ship stays where it is and the engines move the universe around it.'

For example, if you want to view a mountain from a different angle, you can either move the camera or move the mountain. Whilst not practical in real life, the latter is easier and simpler in CG than the former. If, initially, your camera is at the origin of world space and you want to move it 3 units to the right, this would be equivalent to moving the entire world 3 units to the left instead. Mathematically, this amounts to describing everything in terms of the basis vectors defined relative to the camera, rather than in world space. This is the idea behind the view matrix.

Now that we are in camera space, we can start to project our points onto the screen. This is done by the projection matrix. We obviously have to use the x and y coordinates of the points to determine where to place them on the screen; however, we must also use the z coordinate to determine which point should appear in front of another. The projection matrix converts the frustum of the camera to a cube, and then scales the cube to the screen. Figure 5 shows the steps described here.

Figure 5: Shows the steps taken to get screen coordinates

Once our coordinates have been projected onto the screen, we can render them using SFML. This is done by creating a vertex array and filling it with the projected points:

<<Rendering a Sphere>>=
<<Get subdivided octahedron>>
<<Map the octahedron onto a sphere>>
<<Get UV coordinate for a point xyz>>

sf::Texture texture = sf::Texture();
for (int i = 0; i < triangles.size(); i++) {
    glm::vec3 v1 = triangles[i].v1;
    glm::vec3 v2 = triangles[i].v2;
    glm::vec3 v3 = triangles[i].v3;

    glm::vec4 p1 = MVP * glm::vec4(v1, 1.0f);
    glm::vec4 p2 = MVP * glm::vec4(v2, 1.0f);
    glm::vec4 p3 = MVP * glm::vec4(v3, 1.0f);
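The role of the w component can be demonstrated with a plain row-major 4×4 translation matrix. This is a standalone sketch (the project itself uses glm, whose matrices are column-major, but the effect is the same): a translation moves a point (w = 1) yet leaves a direction vector (w = 0) untouched.

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>; // row-major

// Multiply a 4x4 matrix by a 4D vector.
Vec4 mul(const Mat4 &m, const Vec4 &v) {
    Vec4 r{};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r[i] += m[i][j] * v[j];
    return r;
}

// Row-major translation by (tx, ty, tz); the offsets sit in the last
// column, so they are multiplied by w.
Mat4 translation(float tx, float ty, float tz) {
    Mat4 m{}; // zero-initialised
    for (int i = 0; i < 4; i++) m[i][i] = 1.0f;
    m[0][3] = tx;
    m[1][3] = ty;
    m[2][3] = tz;
    return m;
}
```

Because the translation column is scaled by w, a point with w = 1 picks up the offset while a direction with w = 0 ignores it entirely.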
    sf::VertexArray triangle(sf::Triangles, 3);
    triangle[0].position = sf::Vector2f(p1.x, p1.y);
    triangle[1].position = sf::Vector2f(p2.x, p2.y);
    triangle[2].position = sf::Vector2f(p3.x, p3.y);

    <<Set UV coordinates>>
    window.draw(triangle, &texture);
}

1.2.3 Mapping a texture onto the sphere

After the arduous task of getting the triangles we want onto the screen, we can now move on to mapping a texture onto the sphere. To do so, we must introduce the idea of uv coordinates. These coordinates specify a location on a 2D source image (or some other 2D parameterised space). We need to find a mapping from a point on a 3D surface (in this case a sphere) onto uv coordinates.

uv coordinates are defined in the range [0, 1], where u is the horizontal coordinate and v is the vertical coordinate. This range allows them to be used with any texture, regardless of size, since they are relative to the size of the texture.

For spheres, surface coordinates are defined in terms of two angles θ and ϕ, where θ measures the angle made between the y axis and the point, and ϕ is the angle about the y axis³. To begin with, then⁴:

y = −cos(θ)
x = −cos(ϕ) sin(θ)
z = sin(ϕ) sin(θ)

From this we can infer that:

θ = arccos(−y)
ϕ = atan2(z, −x)

where atan2 is the four-quadrant inverse tangent function. This returns values in the range [−π, π]; however, these values go from 0 to π, then flip to −π, proceeding back to 0. While mathematically correct, this cannot be used to map uv coordinates, since we want a smooth transition from 0 to 1. Fortunately,

atan2(a, b) = atan2(−a, −b) + π

This formulation gives values in the desired smooth range of [0, 2π], therefore

ϕ = atan2(z, −x) + π

Since we have our θ and ϕ values, we can now convert them to uv coordinates. This is done by:

u = ϕ / (2π)
v = θ / π

Now that we have our uv coordinates, SFML provides interpolation between the coordinates defined at the vertices of each triangle, so we need not worry about interpolating the uv coordinates ourselves:

³Annoyingly, many textbook definitions of ϕ and θ are not only swapped, but the axes of measurement are also changed: we consider the "poles" of our sphere to lie on the y axis, whereas many textbooks consider the "poles" to lie on the z axis, which changes the equations in a subtle, yet frustrating to debug, manner.
⁴Assuming a unit sphere.
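As a quick sanity check of the claimed smooth [0, 2π] range, the equations can be evaluated at a few known points on the unit sphere. This is a standalone sketch using plain floats in place of glm types:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// uv mapping from the equations above, for a point on the unit sphere:
// theta = arccos(-y), phi = atan2(z, -x) + pi, u = phi/(2*pi), v = theta/pi.
std::pair<float, float> uvForPoint(float x, float y, float z) {
    const float PI = std::acos(-1.0f);
    float theta = std::acos(-y);
    float phi = std::atan2(z, -x) + PI;
    return {phi / (2 * PI), theta / PI};
}
```

For example, the equatorial point (−1, 0, 0) maps to (0.5, 0.5), and moving a quarter-turn around the equator to (0, 0, 1) advances u smoothly to 0.75, with no jump.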
<<Get UV coordinate for a point xyz>>=
glm::vec2 getUV(glm::vec3 xyz) {
    float theta = acos(-xyz.y);
    float phi = atan2(xyz.z, -xyz.x) + M_PI;
    return glm::vec2(phi / (2 * M_PI), theta / M_PI);
}

<<Set UV coordinates>>=
glm::vec2 uv1 = getUV(v1);
glm::vec2 uv2 = getUV(v2);
glm::vec2 uv3 = getUV(v3);

triangle[0].texCoords = sf::Vector2f(uv1.x, 1 - uv1.y);
triangle[1].texCoords = sf::Vector2f(uv2.x, 1 - uv2.y);
triangle[2].texCoords = sf::Vector2f(uv3.x, 1 - uv3.y);

Interestingly, we have had to reverse our v coordinate. This is because SFML reads texture coordinates from the top-left corner, rather than the bottom-left corner. This is a common convention in computer graphics, and is something to be aware of.

We can conclude that our y coordinate will be the sine of the latitude. By the same logic, we can infer that the x and z coordinates will involve the cosine of the latitude, since they are the projection onto the xz plane. The longitude also affects the x and z coordinates, since the longitude is the angle about the y axis: the x coordinate involves the cosine of the longitude, and the z coordinate the sine of the longitude. Therefore the equations are:

y = sin(latitude)
x = cos(latitude) cos(longitude)
z = cos(latitude) sin(longitude)

1.2.5 Computing and drawing the trajectory of the projectile

We gave the user the option to select these configuration items for the projectile:

- Latitude
- Longitude
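The latitude/longitude equations above can be turned into a small helper. This is a hypothetical sketch of what a getCartesian-style function might look like (the project's own getCartesian, used later, additionally takes a third argument; here the output is for a unit sphere, with angles given in degrees):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical sketch implementing the equations above:
//   y = sin(lat), x = cos(lat)cos(lon), z = cos(lat)sin(lon).
// Latitude and longitude are in degrees; the result lies on a unit sphere.
Vec3 getCartesian(float latitudeDeg, float longitudeDeg) {
    const float PI = std::acos(-1.0f);
    float lat = latitudeDeg * PI / 180.0f;
    float lon = longitudeDeg * PI / 180.0f;
    return {std::cos(lat) * std::cos(lon),  // x
            std::sin(lat),                  // y
            std::cos(lat) * std::sin(lon)}; // z
}
```

Scaling the result by a radius then gives the launch point on the rendered planet.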
…other basis vectors of the coordinate system. We can define the z′ axis to be the normal to the sphere, and the x′ axis to be the basis vector 'facing' the Westerly direction. The y′ axis is then the basis vector facing the Northerly direction⁵.

We can create a local coordinate system by applying the cross product twice to the normal of the sphere. The implementation we used is a derivation of the one defined in [4]:

<<Get local coordinate system>>=
void CoordinateSystem(const glm::vec3 &v1, glm::vec3 *v2, glm::vec3 *v3)
{
    *v2 = glm::vec3(-v1.z, 0, v1.x) /
        std::sqrt(v1.x * v1.x + v1.z * v1.z);
    *v3 = glm::cross(v1, *v2);
}

We use numerical integration to find the position of the projectile at any given time. We can infer the direction in which gravity will act, since it will point into the centre of the sphere. Since our sphere is centred at (0, 0, 0), we know that the direction is simply the negative of the position vector of the projectile.

Next we can calculate the acceleration due to gravity, using the formula:

a = GM / r²

One inaccuracy of our model is that our mass of the Earth (or Mars, or the Moon) M is scaled, and not accurate to the real mass of the planet. Our scaled mass was calculated as:

M = g r² / G
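A minimal sketch of how the trajectory could be stepped forward under this model, assuming explicit Euler integration and a combined GM constant (the project's own integrator, time step, and scaled constants are configured elsewhere):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// One explicit-Euler step under inverse-square gravity pointing at the
// origin: |a| = GM / r^2, directed along -position/|position|.
void eulerStep(Vec3 &pos, Vec3 &vel, float GM, float dt) {
    float r2 = pos.x * pos.x + pos.y * pos.y + pos.z * pos.z;
    float r = std::sqrt(r2);
    float a = GM / r2; // acceleration magnitude: a = GM / r^2
    Vec3 acc = {-pos.x / r * a, -pos.y / r * a, -pos.z / r * a};
    vel.x += acc.x * dt; vel.y += acc.y * dt; vel.z += acc.z * dt;
    pos.x += vel.x * dt; pos.y += vel.y * dt; pos.z += vel.z * dt;
}
```

Calling this repeatedly with a small dt traces out the trajectory; a projectile released from rest falls straight towards the centre of the sphere, as expected.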
(launchControlSettings.radius * launchControlSettings.radius);

planetMass / (distanceFromCenter * distanceFromCenter);
    getCartesian(glm::degrees(theta),
        glm::degrees(phi), 1) * length;
}

transformedPoint.y = (transformedPoint.y + 1.0f) * 0.5f * RENDER_HEIGHT;
transformedPoint.x = (transformedPoint.x + 1.0f) * 0.5f * RENDER_WIDTH;
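The transformedPoint fragment above is the viewport transform: after the perspective divide, coordinates lie in [−1, 1] on each axis and are mapped to pixels. A standalone sketch (RENDER_WIDTH and RENDER_HEIGHT are assumed screen dimensions; note that a real implementation may also flip y, since SFML's y axis points down):

```cpp
#include <cassert>

// Assumed screen dimensions for this sketch.
const float RENDER_WIDTH = 800.0f;
const float RENDER_HEIGHT = 600.0f;

struct Vec2 { float x, y; };

// Map normalised device coordinates ([-1, 1] per axis) to pixel
// coordinates, mirroring the transformedPoint fragment above.
Vec2 ndcToScreen(float ndcX, float ndcY) {
    return {(ndcX + 1.0f) * 0.5f * RENDER_WIDTH,
            (ndcY + 1.0f) * 0.5f * RENDER_HEIGHT};
}
```

With this mapping, (0, 0) lands at the centre of the screen and (−1, −1) at a corner.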
References

[1] Department of Applied Mathematics and Theoretical Physics. "Deflection of a Projectile due to the Earth's Rotation". url: https://www.damtp.cam.ac.uk/user/reh10/lectures/ia-dyn-handout14.pdf.

[2] Essence of Linear Algebra. url: http://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab. (Visited on 08/09/2024).

[3] Jos Stam. "Evaluation of Loop Subdivision Surfaces". In: 2010. url: https://api.semanticscholar.org/CorpusID:8420692.

[4] Vectors. url: https://pbr-book.org/3ed-2018/Geometry_and_Transformations/Vectors#Coordina. (Visited on 08/12/2024).

[5] Eric W. Weisstein. Spherical Coordinates. url: https://mathworld.wolfram.com/. (Visited on 08/12/2024).