Problem Set 1

The document outlines a problem set for an advanced course on Virtual Earth at Waseda University, focusing on Gaussian elimination and its computational efficiency. It includes a series of tasks related to solving systems of linear equations, analyzing the cost of computations, and understanding the implications of round-off errors in numerical methods. Additionally, it discusses the sensitivity of solutions to changes in input data, highlighting the challenges of ill-conditioned systems.


School of International Liberal Studies

Waseda University
Advanced Course: Virtual Earth
Spring 2025

Problem Set 1
1. For the following system of simultaneous equations:

2x − y + 2z = 6
4x + 3y − 7z = −11
x + y − 3z = −6

(a) Describe (using sketches if you’re artistically inclined) the “row view” and “column view” of this
system.
(b) Write down the system in matrix form.
(c) Solve using Gaussian elimination (keeping track of the multipliers at each step for the question below).
(d) Using the multipliers from the previous question, write down the matrix that represents each Gauss
elimination step. (Recall: if l_ij is the multiplier required to produce a zero in the (i, j) position, the
corresponding elimination matrix is just the identity matrix with the (i, j) element replaced by −l_ij.)
(e) Write down the LU factorization of A (i.e., the matrices L and U such that A = LU).
(f) Solve the linear system of equations described by the same coefficient matrix A but with the right-hand
side given by b = (6, 11, 2). (Hint: use your answer to the previous question!)
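One way to check parts (c) through (f) after working them by hand is the short numpy sketch below (python/numpy is an assumption here; the same steps work in Matlab with loops or the backslash operator). It records the multipliers during elimination, exactly as part (c) asks, and then reuses L and U for the new right-hand side in part (f):

```python
import numpy as np

# System from problem 1.
A = np.array([[2.0, -1.0,  2.0],
              [4.0,  3.0, -7.0],
              [1.0,  1.0, -3.0]])
b = np.array([6.0, -11.0, -6.0])

# Forward elimination without row exchanges, saving each multiplier l_ij
# in the lower triangle of L; what remains of A becomes U, so A = LU.
n = len(b)
U = A.copy()
L = np.eye(n)
for j in range(n - 1):
    for i in range(j + 1, n):
        L[i, j] = U[i, j] / U[j, j]       # multiplier l_ij
        U[i, :] -= L[i, j] * U[j, :]      # row_i <- row_i - l_ij * row_j

# Solve Ax = b as two triangular systems: L y = b, then U x = y.
x = np.linalg.solve(U, np.linalg.solve(L, b))

# Part (f): same A, new right-hand side. Only the cheap triangular
# solves need repeating, since L and U are already known.
x_f = np.linalg.solve(U, np.linalg.solve(L, np.array([6.0, 11.0, 2.0])))

print(x, x_f)
```

The point of the factorization is visible in the last step: once A = LU is in hand, a new b costs only two triangular solves, not a fresh elimination.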
2. Each step of Gaussian elimination involves a sequence of arithmetic operations (multiplications, additions,
etc.). While tedious and time-consuming to do by hand, one would think they wouldn’t pose much of a chal-
lenge for today’s fast computers. And indeed, for moderate-sized problems the computations are practically
instantaneous. But for “real” problems involving tens of thousands of equations the cost of computation
quickly becomes important and sometimes even limits the size of problems that can be solved. The “cost”
is measured not only in terms of time but also in how much energy is expended in performing the calculations.
The earliest processors required a whopping ≈70 joules of energy to perform one arithmetic (floating-point)
operation, roughly what an incandescent lightbulb consumes in a second or comparable to how much heat energy
the average person emits per second (∼100 J/s). To put that number into perspective, at this energy (in)efficiency
the average laptop today would consume as much power as all of humanity combined (roughly 17 TW)!
Luckily, technology has improved dramatically, and today’s most energy-efficient systems, built in Japan, can
perform over 17 billion operations per joule of energy¹. Still, the environmental impact of computing (much
of it involving linear algebra in one form or another) is quite substantial. Data centers run by Google and
Facebook, not to mention the supercomputers used to simulate everything from mantle dynamics and (iron-
ically) climate change to airflow around the turbines of a Rolls-Royce jet engine, now account for roughly
2% of CO2 emissions, about the same as air travel. (Note that energy is used both to power the computers
that actually perform calculations and to cool them down. This, too, has improved in recent years.
In the most advanced data centers, the ratio of energy used to perform computations to that used for cooling
and other secondary uses is now as high as 10.)
¹ https://www.top500.org/green500/lists/2018/11/
Given the above considerations it is therefore important to know the cost of whatever calculation you’re
performing. Typically we don’t care about the exact cost but how it varies as a function of problem size. This
is called “scaling”. For instance, does doubling the number of equations double the number of operations
required to perform Gaussian elimination? If so, we say that the cost scales linearly with n, where n
is the number of equations. If it quadruples then we say that Gaussian elimination scales as n2 . And
so on. Here, you will estimate how Gaussian elimination scales with n by working through the following
examples. For each system, count the total number of arithmetic operations required to solve it via Gaussian
elimination. (It’s enough to keep track of just the multiplications and divisions. You can ignore the additions
and subtractions as these are generally a lot “cheaper” (even for humans!).) Note: For the purpose of this
problem, I don’t really care about the actual solutions to the equations (although it would be good practice).
If you can figure out the number of operations without actually doing the arithmetic then that’s perfectly
fine too.

(a) Find x, y and give the number of steps required to solve:

x − 2y = 1
3x − 2y = 1.

(b) Find x, y, z and give the number of steps required to solve:

2x + 4y − 2z = 2,
4x + 9y − 3z = 8,
−2x − 3y + 7z = 10.

(c) Find x, y, z, t and give the number of steps required to solve:

3x − 2y + 2z = 5,
x + y + z + t = 10,
−x − 4y + 3z = −1,
x + 3z = 13.

(d) Find a, b, c, d, e and give the number of steps required to solve:

a + b − 3c + 3d − 2e = 4,
5a − c + 4d + 4e = 4,
5a − 2b + 5c + 4d + e = −6,
−3b + c − 4d − 5e = −36,
3a + 4b + c − 4d + 3e = 6

(e) Plot the cost of elimination for parts (a) through (d) as a function of n. Do you notice a pattern? Can
you generalize to a system of size n? (It helps to think about how many operations it takes, at any
given stage, to produce a zero below a pivot.)
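The stage-by-stage bookkeeping in parts (a) through (d) can also be scripted. The sketch below uses one common counting convention (computing a multiplier costs one division; updating a row costs one multiplication per remaining entry, including the right-hand side), so hand counts under a slightly different convention may differ by a small amount:

```python
def elimination_cost(n):
    """Multiplications + divisions needed to solve an n-by-n system by
    Gaussian elimination plus back-substitution (additions ignored)."""
    ops = 0
    # Elimination stage k: (n - k) rows get a zero below the k-th pivot.
    # Each such row costs 1 division (the multiplier) plus (n - k) + 1
    # multiplications (remaining row entries plus the right-hand side).
    for k in range(1, n):
        ops += (n - k) * (1 + (n - k) + 1)
    # Back-substitution row i: (n - i) multiplications and 1 division.
    for i in range(1, n + 1):
        ops += (n - i) + 1
    return ops

# Parts (a)-(d) correspond to n = 2, 3, 4, 5. Under this convention the
# exact count is (n^3 - n)/3 + n^2, i.e. the cost grows like n^3/3.
for n in (2, 3, 4, 5):
    print(n, elimination_cost(n))
```

Notice that doubling n multiplies the cost by roughly eight, which is exactly the n³ scaling the part (e) plot should reveal.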
3. Computers only deal with a finite set of numbers that can be represented in binary form. Whether a given
number can be exactly represented on a computer depends on how many binary digits (bits), n, are used on
the machine (and programming language) in question. The number of bits limits both how large a number
can be represented and how precisely. The maximum value an integer can take on is given by 2^n − 1; in
32-bit precision this is 4,294,967,295. (Real or floating-point numbers are treated differently but are similarly
restricted to a finite range.) As an example of inexact representation, in Matlab type the number “9.3577”
and see what Matlab represents the number as. To display the full precision of the number, use Matlab’s
format long command, i.e., type:

format long
9.3577

(In python, numpy.set_printoptions and numpy.printoptions do something similar.) What
happens then if an arithmetic operation results in a number that cannot be represented in binary? If the
number is too large (either positive or negative) for the precision being used–this is known as an overflow–it
may be simply set to the largest number that can be represented. If the number cannot be exactly represented
in binary, some digits after the decimal point will be discarded–a process known as “rounding”–resulting
in a “round-off error”. That error can accumulate and ultimately give you inaccurate results. (See
https://www3.nd.edu/~zxu2/acms40390F15/Lec-1.2.pdf for more information on rounding.) To
illustrate this phenomenon, write a Matlab script to add “0.00001” 100000 times (i.e., 0.00001 + 0.00001 +
· · · ), and then subtract the sum from 1. How does the result compare with what you expect?
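The same experiment in python takes only a few lines (no libraries needed; the Matlab script is an analogous loop):

```python
# 0.00001 has no exact binary representation, so every addition commits
# a tiny round-off error; 100000 of them accumulate into a visible
# residual instead of summing to exactly 1.
s = 0.0
for _ in range(100000):
    s += 0.00001

print(s)         # close to, but not exactly, 1
print(1.0 - s)   # the accumulated round-off error
```

Pure arithmetic says the difference should be exactly zero; the computed residual is the accumulated rounding.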
4. Round-off error can dramatically affect the results of a calculation (see
http://mathworld.wolfram.com/RoundoffError.html for some incredible–and terrifying–examples).
However, while round-off is unavoidable, careful design of the numerical algorithm can minimize the error.
To illustrate this, consider the following system of linear equations Ax = b, with

        [ −0.002   4.000   4.000 ]
    A = [ −2.000   2.906  −5.387 ]   and   b = (7.998, −4.481, −4.143).
        [  3.000  −4.031  −3.112 ]
The exact solution to this problem is x = (1.000, 1.000, 1.000). (Confirm that if you plug in the above A
and b into Matlab and use A\b (scipy.linalg.solve in python) to solve the problem, that is what
you get.)

(a) To exaggerate the effect of round-off, solve the problem by Gaussian elimination using only 4-digit
arithmetic (i.e., rounding to 4 digits after each operation). Computers obey standardized and detailed
rules for rounding known as “IEEE arithmetic” but a skim through the Wikipedia article
(https://en.wikipedia.org/wiki/IEEE_floating_point) is sufficient for our current purpose.
If in doubt, use the Matlab function round(x,n) (numpy.round in python) to round any number
x to n digits (e.g., round(pi,3) will round π to 3 digits). You can do the calculation by any
means you like (hand, calculator, Matlab ...) but you must show each step, particularly the final
(augmented) upper triangular matrix before performing back-substitution (notice its last row!). How
does the solution compare with the exact one?
(b) Now reorder the rows (of both A and b) such that row 1 becomes row 2, row 2 becomes row 3,
and row 3 becomes row 1. The system of equations hasn’t changed; only the order in which the
equations appear has. Now repeat Gaussian elimination on this reordered system (again using 4-digit
arithmetic). Do you see the dramatic improvement in the solution? Simply by reordering equations
(row exchanges in the language of Gauss elimination) we can greatly minimize the effect of round-off.
In this particular case, what we’ve just done is called “pivoting”: reordering rows such that the largest
numbers (in absolute terms) appear in the pivot positions. All (well written) programs for performing
Gauss elimination do pivoting.
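One way to automate (or double-check) the 4-digit experiment is the sketch below. The helper r4 is a hypothetical stand-in for "4-digit arithmetic" (real machines round per the IEEE rules, so the last digit of a careful hand calculation may occasionally differ):

```python
from math import floor, log10

def r4(x):
    """Round x to 4 significant digits (crude model of 4-digit arithmetic)."""
    if x == 0.0:
        return 0.0
    return round(x, 3 - int(floor(log10(abs(x)))))

def gauss_4digit(A, b):
    """Gaussian elimination without row exchanges, rounding after every
    multiplication, subtraction, and division."""
    A = [row[:] for row in A]   # work on copies
    b = list(b)
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = r4(A[i][k] / A[k][k])                 # multiplier
            for j in range(k, n):
                A[i][j] = r4(A[i][j] - r4(m * A[k][j]))
            b[i] = r4(b[i] - r4(m * b[k]))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                    # back-substitution
        s = b[i]
        for j in range(i + 1, n):
            s = r4(s - r4(A[i][j] * x[j]))
        x[i] = r4(s / A[i][i])
    return x

A = [[-0.002,  4.000,  4.000],
     [-2.000,  2.906, -5.387],
     [ 3.000, -4.031, -3.112]]
b = [7.998, -4.481, -4.143]

x_bad = gauss_4digit(A, b)      # tiny pivot -0.002: round-off amplified
reorder = [2, 0, 1]             # rows in the order (3, 1, 2), as in part (b)
x_good = gauss_4digit([A[i] for i in reorder], [b[i] for i in reorder])
print(x_bad, x_good)
```

With the original row order the tiny pivot produces multipliers of order 1000 and a visibly wrong solution; after the part (b) reordering the same 4-digit arithmetic lands essentially on (1, 1, 1).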
5. Sometimes, regardless of how good an algorithm you have, the problem itself may be such that it is highly
sensitive to small errors (such as due to round-off, but also due to those in measurement). To illustrate this,
consider Ax = b, with

    A = [ 1.01   0.99 ]   and   b = (2.00, 2.00).
        [ 0.99   1.01 ]
The exact solution to this problem is clearly x = (1, 1).

(a) Change b to (2.02, 1.98) and solve (without rounding) using Gaussian elimination. Compare with the
original solution.
(b) Change b to (1.98, 2.02) and solve again.

Do you notice how even small changes in the RHS b vector completely change the solution? Think of a
linear system Ax = b as a “machine” (inside which sits A) that takes in b as input and spits out a solution
x as output. For our given matrix, even when the three inputs (the b’s) are “close together”, the outputs are
“far apart”. A system with this property–the solution is extremely sensitive to small changes in the input (or
coefficients of the matrix)–is known as an ill-conditioned system. While this example is obviously artificial,
such systems are distressingly common (particularly in fields such as seismology). Great care–and some
pretty clever maths–is required to deal with ill-conditioned problems.
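A quick numerical check of all three right-hand sides, plus numpy's built-in measure of this sensitivity (the condition number), fits in a few lines; treat it as a sketch for verifying your hand solutions:

```python
import numpy as np

A = np.array([[1.01, 0.99],
              [0.99, 1.01]])

# Feed three nearby inputs b into the "machine" A and compare outputs.
for b in ([2.00, 2.00], [2.02, 1.98], [1.98, 2.02]):
    x = np.linalg.solve(A, np.array(b))
    print(b, "->", x)

# The condition number bounds how much relative errors in b can be
# amplified in the solution x; for this matrix it is about 100.
print(np.linalg.cond(A))
```

Inputs differing only in the third digit produce solutions as far apart as (2, 0) and (0, 2): a condition number near 100 means input errors can be amplified roughly a hundredfold.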
