
Contents

Chapter 1: Programme Learning Outcomes & Course Learning Outcomes
  1.1 PLOs & CLOs - STA 122: CSDA/CMDA I
  1.2 Course Outline
  1.3 References
  1.4 Pre-requisites
  1.5 Administration
  1.6 Expected Learning Outcomes

Chapter 2: Computational Methods & Data Analysis
  2.1 Introduction to CMDA
  2.2 The art & science of problem solving
    2.2.1 Formulation Phase
    2.2.2 Numerical Methods Phase
    2.2.3 Programming Phase

Chapter 3: Number Systems: Errors & Accuracy
  3.1 Introduction
  3.2 Errors
  3.3 Types of Errors
  3.4 *Important* Errors: machine representation & arithmetic operations
    3.4.1 Machine representation
    3.4.2 Arithmetic operations
  3.5 Consequences of Errors
    3.5.1 Loss of Significant Figures/Digits, Precision and/or Resolution
  3.6 Propagation of Errors
    3.6.1 Addition
    3.6.2 Subtraction
    3.6.3 Multiplication
    3.6.4 Division
  3.7 Errors/Noise in Functions

Chapter 4: Interpolation
  4.1 Definition & Application

Chapter 1

Programme Learning Outcomes & Course Learning Outcomes

1.1 PLOs & CLOs - STA 122: CSDA/CMDA I
Welcome to STA 122: Computational Methods and Data Analysis I (CMDA I) at the University of Nairobi.
1.2 Course Outline
The following will be our course outline:
• Number systems - Errors and Accuracy
• Taylor polynomial: order of convergence
• Solution to non-linear equations
• Successive iterative techniques
• Finite differences
• Interpolation
• Difference equations
• Data analysis and computer graphics

1.3 References
The following reference books will be useful during our course:
• deBoor C. R. & Conte S. D. (2017). Elementary Numerical Analysis: An algorithmic approach.
3rd Edition. McGraw-Hill Book Company, New York, USA.
• Hultquist P. F. (). Numerical methods for engineers and computer scientists
• Burden R. L. & Faires J. D. (2011). Numerical Analysis. 9th Edition. Brooks/Cole, Boston,
USA.
• Chapra S. C. & Canale R. P. (2010). Numerical methods for engineers. 6th Edition. McGraw-
Hill Book Company, New York, USA.
• Chapra, S. C. (2012). Applied numerical methods with MATLAB for engineers and scientists.
3rd Edition. McGraw-Hill Book Company, New York, USA.
• Atkinson K., Han W. & Stewart D. (2009). Numerical solution of ordinary differential
equations. John Wiley & Sons, New Jersey, USA.
• Any other online references specific to the topics

1.4 Pre-requisites
Calculus
• Differentiation & Integration
Basic Statistics
First-semester courses in Statistics & Mathematics, including number systems.
Algebra
Basic manipulation of matrices and matrix algebra

1.5 Administration
Class Times
Semester: February - May 2023

Time: Mondays, 1200-1500 HRS

Venue: AGRIC LAB & Online

Lecturer
Timothy K. K. Kamanu, PhD

Consultation Times
Online + On demand

Evaluation

CATs (random n + fixed) & Assignments     30%
Final examination                         70%
Total                                    100%

1.6 Expected Learning Outcomes
• Learn about different numerical methods.
• Introduce key terms as used in Numerical Methods & Numerical Analysis:

  – Errors
  – Accuracy
  – Interpolation
  – Polynomials

Example: Calculate the value of p given that

    δf_p = f_{p+1/2} − f_{p−1/2}

and that δf_p = f_1 − f_0. Matching the indices gives p + 1/2 = 1, so

    p = 1 − 1/2 = 1/2.

(∆ denotes the forward difference operator and δ the central difference operator.)

Additional References
• K. Atkinson: An Introduction to Numerical Analysis, Wiley, (2nd ed.), 1989.
• K. Atkinson and W. Han: Elementary Numerical Analysis, 3rd ed., Wiley, 2003.
• W. Cheney and D. Kincaid: Numerical Mathematics and Computing, 6th ed., Brooks/Cole,
2007.
• S. Conte and C. de Boor: Elementary Numerical Analysis, McGraw-Hill, 1980.
• G. Dahlquist and A. Bjorck: Numerical Methods, Dover, 2003.
• G. Dahlquist and A. Bjorck: Numerical Methods in Scientific Computing, SIAM, 2008.
• P. Deuflhard and A. Hohmann, Numerical Analysis in Modern Scientific Computing, 2nd ed.,
Springer, 2003.
• W. Gautschi: Numerical Analysis: An Introduction, Birkhauser, 1997.
• M. T. Heath: Scientific Computing: An Introductory Survey, 2nd ed., McGraw-Hill, 2002.
• E. Isaacson and H. Keller: Analysis of Numerical Methods, Wiley, 1966 (or Dover 1994).
• D. Kahaner, C. Moler, and S. Nash: Numerical Methods and Software, Prentice-Hall, 1989.
• D. Kincaid and W. Cheney: Numerical Analysis: Mathematics of Scientific Computing, 3rd
edition Brooks/Cole, 2001.
• C. Moler, Numerical Computing with Matlab, SIAM, 2004.
• A. Quarteroni, R. Sacco, and F. Saleri: Numerical Mathematics, 2nd Edition, Springer, 2004.
• A. Ralston and P. Rabinowitz: A First Course in Numerical Analysis, McGraw-Hill, 1978.
• L. Shampine, Allen, and Pruess: Fundamentals of Numerical Computing, Wiley, 1997.
• L. Shampine, I. Gladwell, and S. Thompson: Solving ODEs with MATLAB, Cambridge, 2003.
• G. W. Stewart: Afternotes on Numerical Analysis, SIAM, 1996.
• G. W. Stewart: Afternotes Goes to Graduate School: Lectures on Advanced Numerical Analysis, SIAM, 1998.
• J. Stoer and R. Bulirsch: Introduction to Numerical Analysis, Springer-Verlag, 1993.
• E. Suli and D. Mayers: An Introduction to Numerical Analysis, Cambridge, 2003.
• C. Van Loan: Introduction to Scientific Computing: A Matrix-Vector Approach Using
MATLAB, 2nd ed., Prentice-Hall, 2000.
Matlab-based References
• T. Davis and K. Sigmon: MATLAB Primer, Seventh Edition, Chapman and Hall, 2004.
• D. J. Higham and N. J. Higham: MATLAB Guide, Second Edition, SIAM, 2005.
• C. Moler: Numerical Computing with MATLAB, SIAM, 2004.
• A. Quarteroni and F. Saleri, Scientific Computing with Matlab, Springer, 2003.

Chapter 2

Computational Methods & Data Analysis


2.1 Introduction to CMDA
Recent trends in multiple fields indicate that almost any problem, perceived or real, can be tackled using computers. Computational models such as machine learning methods (including deep learning) and their practical implementations (Artificial Intelligence) are increasingly being used in:
• Understanding historical data on natural phenomena
• Evaluating complex inter-dependencies and inter-relations among variables
• Predictive analytic tasks (evaluating possible outcomes into the future)
2.2 The art & science of problem solving
Solving a problem on a computer can be summarized, approximately, in the following phases, as indicated in Figure 2.1.
1. Formulation
2. Numerical methods
3. Programming
4. Reporting/Interpretation
5. Evaluation - Comparison
This course encompasses part of the "numerical methods" phase. The art and science that define the "formulation" and "programming" phases are very critical to problem solving in practice/industry, but they are beyond the scope of this course. It is therefore very important that you generate interest (individually or as a group), and create the time and purpose to master these techniques in your free time. I will try to walk with you on this as much as I can through the activities at the University of Nairobi DASCLAB: Data Analytics and Scientific Computing Laboratory, at the following address: www.dasclab.uonbi.ac.ke
Figure 2.1: Practical problem solving on computers

2.2.1 Formulation Phase


The formulation phase includes:
• Specification of the objectives for a given problem (e.g., flooding of river Nyando)
• Identification of the proper input data (number of people affected, area affected, levels of the river, flow rate in m³/second, etc.)
• Consideration of potent solution methods (numerical / mathematical / statistical techniques) and associated checks/balances (e.g., identify peak levels using derivatives of functions in time t)
• Determination of the type and amount of output. Output can be
  – Analytical
  – Parameters
  – Graphical, and/or
  – Interactive
• Formulation of an explicit mathematical model
2.2.2 Numerical Methods Phase
The numerical methods phase involves:
• Representing a problem on a computer
• Preliminary "analysis of errors", i.e., the sources and effects of errors on the results
• Determination of algorithm1 design and implementation i.e., the numerical methods to be used
to solve a problem including consideration of:
– How much accuracy is required
– Magnitude of round-off or discretization errors
– Step-size or minimum number of iterations required to achieve convergence to the correct analytical answer or an acceptable answer, i.e., at "a pre-specified accuracy level"
– Provision of adequate checks on the accuracy
– Making allowance for corrective action in case of non-convergence
• Formulation of a mathematical model2. Recent developments indicate that computational models (as opposed to closed-form expressions) often represent industrial/real-life data accurately enough that explicit analytical mathematical models or solutions are not required.
2.2.3 Programming Phase
The programming phase involves conversion of the solution procedure into a set of un-ambiguous step-by-step instructions for a computer. Typically these instructions can be summarized in a flowchart.
A flowchart is a graphical representation of an algorithm (a set of procedural statements in a logical form which a computer will follow). Flowcharting is a program-planning tool used in order to solve a problem. Flowcharts use symbols and their inter-connection(s) to indicate the flow of information and the processing of procedural statements. The complexity of a flowchart depends on that of the problem.
The basic symbols used in flowcharts are
1. Terminal: Oval - start (first node), stop (last node) or halt a program (say under an error
condition)
2. Input and/or output: Parallelogram - take input or display output on output devices
3. Processing: Rectangle - arithmetic (add, subtract, divide, multiply) processing/operations
4. Decision: Diamond - decision point (yes/no or true/false)
5. Connectors: Circle - useful to avoid confusion for complex flowcharts that spread over several
pages
6. Flowline: Directed/undirected lines - indicate direction of flow of control; relationships
between flowchart symbols; and exact sequence of execution of the instructions

Footnotes:
1. An algorithm is a sequence of steps required to solve a problem.
2. All mathematical models are wrong but all are useful in identifying a general tendency/trend.
Figure 2.2: Basic flowchart showing symbols for the terminal, input/output, and decision nodes of a program

The flowchart in Figure 2.2 above can be represented on a computer using the R programming environment as follows:

i = 1;                                ## Initialization
while (i <= 100){
  cat("\tI is equal to ", i, "\n");   ## Output
  if (i == 39){                       ## Decision node
    i = 61;
  } else {
    i = i + 1;                        ## Incrementing the counter "i"
  }
}
Advantages of Flowcharts
• Best way of communicating logic of a system
• Guide for blueprint during program design
• Useful during debugging (error tracking and correction)
• Analysis and comparison of programs
• Act as preferred documentation for programs
Disadvantages of Flowcharts
• Difficult to draw for large and complex programs
• There exists no standard to determine the amount of detail to include
• Difficult to reproduce
• Difficult to modify
The programming phase includes translation of the flowchart instructions into machine code via interpreters/compilers for different machine languages/dialects, i.e.,
• Machine languages
• Assembly languages
• Object-oriented languages

HOMEWORK
Pertinent questions from this section
1. Briefly discuss the phases required during implementation of CMDA in practice
2. Define the term "flowchart" and briefly discuss the advantages and disadvantages of flowcharts
3. List at least 10 different types of programming languages/environments and highlight the domain in which each is most useful
4. Write at least five questions that are relevant to this section.

Chapter 3

Number Systems: Errors & Accuracy


3.1 Introduction
Numerical analysis/methods span all methods of solving a given problem numerically - using
a sequence of numerical calculations based on stepwise repeated iterations towards achieving a
satisfactory answer.
A substantial component of implementing a numerical method involves consideration of the
(1) Sources of errors, i.e., those arising from numerical calculations (arithmetic operations),
(2) Magnitude of errors, and lastly
(3) Satisfactory/acceptable accuracy
while solving problems on computers.
Advancements in computer hardware have ensured the ready implementation of numerical methods, since these require fast, tedious and repetitive calculations in order to achieve high accuracy (near-exact results).
3.2 Errors
Definition 1. Suppose that the true value of some variable x is denoted xT and that xA denotes its approximate value.

1. The error ϵx in the computed value of x is

       ϵx = xT − xA                                        (3.1)

2. The relative error Rx is

       Rx = (xT − xA) / xT = ϵx / xT                       (3.2)

   Often the true value is not known in advance and therefore xA can be used in the denominator.

3. The percentage error is equivalent to Rx × 100:

       Rx% = (xT − xA) / xA × 100                          (3.3)
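
Definition 1 can be wrapped in a small R helper; this is a hedged sketch (the function name err_summary is our own, not part of the course code):

err_summary = function(xT, xA) {
  e = xT - xA                      ## error, equation (3.1)
  c(error = e,
    relative = e / xT,             ## relative error, equation (3.2); xA may replace xT when xT is unknown
    percentage = e / xT * 100)     ## percentage error
}
err_summary(xT = pi, xA = 22/7)    ## the same quantities appear in Example 3.2.1 below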

Example 3.2.1. Calculate the error associated with estimating π to 5 dp.

Solution. True value: xT = π = 3.1415926535897932 · · ·

Figure 3.1: π; see http://www.todayifoundout.com/index.php/2014/07/history-pi/

Approximate value (22/7 rounded to 5 dp): xA = 3.14286. Then

    ϵx = xT − xA = 3.1415926535897932 · · · − 3.14286 = −0.001267346

    Rx = ϵx / xT = −0.001267346 / π ≈ −0.0004034088

In fact, the relative error when π is approximated by the rational 22/7 itself is Rx = −0.0004024994, and the corresponding percentage error is |Rx|% ≈ 0.04025%.

The R code for checking the same is

pi - 22/7              ## Error
(pi - 22/7)/pi         ## Relative error
(pi - 22/7)/pi * 100   ## Percentage error

3.3 Types of Errors


There are several types of errors in the scientific, mathematical and computational spheres. These include
1. Measurement errors
2. Modeling errors
3. Errors in machine representation and/or arithmetic operations
4. Mathematical approximation errors
5. Statistical approximation errors
6. Blunders and mistakes
1. Measurement Error
Physical measurement error is "error in the primary data" that occurs during physical
observation/measurement. Such error cannot be eliminated using numerical methods but analysis
can help in
• Understanding its propagation effect in downstream calculations
• Recommending the best ways of calculations to mitigate the effects of its propagation
2. Modelling Errors
This are mathematical/statistical model errors that arise from the use of incorrect mathematical or
statistical formulation for the data. Mathematical/Statistical modeling seeks to represent physical
reality in closed-form expressions but ”all models are wrong but most are useful”.
Modeling errors results in under- and over-estimation of the dependent variable especially pre- and
post-observation intervals. For instance, a population growth model may lead to overestimation
population demography in the future.
3. Machine representation and arithmetical errors
These errors arise from
• Rounding off
• Truncation
• Representation of numbers on computers using floating point arithmetic.
This group of errors will be highlighted several times during our course.
• How do we mitigate, reduce or even eliminate such errors during arithmetic operations, i.e., summation/addition, subtraction, division & multiplication?
• How do these errors affect our choice among competing algorithms for the solution of a given problem?
• Are these errors inevitable, say when solving systems of linear equations?
4. Mathematical estimation errors
These errors arise from the approximation of mathematical formulations that are not amenable to explicit analytical solutions. For example, some elementary functions1 do not have elementary antiderivatives, but suitable approximations are often used:

    ∫ e^(−x²) dx = (√π / 2) · Erf(x) + C
    ∫ (sin(x) / x) dx = Si(x) + C
    ∫ (e^x / x) dx = Ei(x) + C
    ∫ (1 / ln(x)) dx = Li(x) + C

where the special functions on the RHS (Erf, Si, Ei, Li) have ready approximations, using say Taylor series approximations2.
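
Numerical quadrature provides one such ready approximation. The following is a minimal base-R sketch for the sine-integral example above; the ifelse() guard for the removable singularity at x = 0 is our own choice:

f = function(x) ifelse(x == 0, 1, sin(x) / x)    ## sin(x)/x -> 1 as x -> 0
integrate(f, lower = 0, upper = 1)               ## approximately 0.946083, i.e. Si(1)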
5. Statistical estimation errors
Mathematical models are devoid of a random component. Statistical error estimation arises from
random variation as well as statistical estimation based on Monte Carlo simulation.
Monte Carlo methods are useful for approximating integrals of known functions f(x) based on uniform random draws, but they find their best and most useful setting in estimating integrals of unknown functions such as probability density and distribution functions [see Gibbs Sampler & Importance Sampling].

Footnotes:
1. "Elementary functions" are any functions composed of addition, subtraction, multiplication, division, powers, roots, logarithms, trigonometric functions, inverse trigonometric functions, hyperbolic functions, and inverse hyperbolic functions, together with their finite compositions.
2. The above set is not exhaustive; other functions that do not have elementary antiderivatives include ∫ x^x dx, ∫ √(sin x) dx, ∫ sin(x²) dx, ∫ √(1 + x³) dx, ∫ exp(exp(x)) dx, ∫ ln(ln(x)) dx, ∫ sin(sin(x)) dx, etc.
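
As a small, hedged illustration of statistical (Monte Carlo) estimation error in base R; the integrand x² on [0, 1] and the sample size are our own choices, made for illustration only:

set.seed(122)               ## for reproducibility
n = 1e5
u = runif(n)                ## uniform random draws on [0, 1]
mean(u^2)                   ## Monte Carlo estimate of the integral of x^2 over [0, 1]
1/3                         ## exact value; the difference is the statistical estimation error
sd(u^2) / sqrt(n)           ## Monte Carlo standard error of the estimate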
6. Blunders or mistakes
Blunders or mistakes often arise in simulation studies and comprise programming errors. Logical
program errors are hard to determine without explicit domain knowledge and/or experience.
Blunders and mistakes can be avoided and detected by
• Test data: testing programs using known true values
• Compartmentalization: breaking the problem into small sub-problems solved using subroutines
(small bits of code) that can be tested separately.

3.4 *Important* Errors: machine representation & arithmetic operations
From a numerical point of view, the important errors include errors deriving from
• Machine representation & arithmetic operations
• Mathematical & Statistical estimation
3.4.1 Machine representation
• All numbers are represented in computers in binary.
• Some numbers can be represented exactly, while others cannot and are instead stored as (binary) fractional approximations.

Number (base 10)      Binary representation (base 2)

0                     0
1                     1
2                     10
3                     11
4                     100
5                     101
6                     110
7                     111
8                     1000
9                     1001
10                    1010
..                    ..
1/2                   0.1
1/3                   0.01 01 01 ...  (the block "01" repeats)
1/10                  0.0 0011 0011 ...  (the block "0011" repeats)
2.125                 10.001

Transcendental numbers, e.g. π, e, etc., and transcendental functions (logarithmic, hyperbolic, trigonometric, etc.) cannot be represented finitely in binary, just as they cannot be in decimal.
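
A quick check of the last few rows of the table in base R (a hedged sketch; the printed digits reflect standard double precision):

print(1/3,  digits = 22)    ## 0.3333333333333333148296... (no finite binary expansion)
print(1/10, digits = 22)    ## 0.1000000000000000055511... (no finite binary expansion)
(0.1 + 0.2) == 0.3          ## FALSE, because 1/10 and 2/10 are stored approximately
2.125 == 2 + 0.125          ## TRUE: 2.125 = 10.001 in binary is stored exactly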

Computers can represent numbers using either the
1. Fixed-point representation, or
2. Floating-point representation.
(A) Fixed-point representation
• Allows representation of, and operations on, integers ONLY.
• Often represents numbers as a computer word of 32 binary digits (i.e. on 32-bit devices/computers).
• At most 2^32 = 4,294,967,296 ≈ 4.294 × 10^9 distinct non-negative integers (0 to 2^32 − 1) can be stored.
• With signed integers, both positive (x ∈ Z+) and negative (x ∈ Z−) values can be stored, i.e. x ∈ [−2^31, +2^31 − 1].
• Fixed-point representation offers very limited support for advanced scientific calculations but is often used with indices & counters.
• The disadvantages of fixed-point representation include
  – Only integers are allowed
  – Numbers stored must be equally spaced
  – Only a limited range of numbers can be stored
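
A hedged base-R check of the 32-bit signed-integer limits mentioned above:

.Machine$integer.max                 ## 2147483647, i.e. 2^31 - 1
.Machine$integer.max == 2^31 - 1     ## TRUE
.Machine$integer.max + 1L            ## NA, with an integer-overflow warning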
(B) Floating-point representation
Floating-point representation allows representation of real numbers, i.e. x ∈ R.
Definition 2. Decimal floating-point representation
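
While Definition 2 concerns decimal floating point, the following hedged base-R sketch reports the corresponding constants of the (binary) floating-point system on a typical machine:

.Machine$double.eps        ## machine epsilon, about 2.22e-16
.Machine$double.digits     ## 53 bits in the significand (mantissa)
.Machine$double.xmax       ## largest finite double, about 1.8e308
.Machine$double.xmin       ## smallest positive normalised double, about 2.2e-308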

3.4.2 Arithmetic operations
• Binary (base 2)
– Addition
– Subtraction
– Multiplication
– Division
• Octal (base 8)
– Addition
– Subtraction
– Multiplication
– Division
• Hexadecimal (base 16)
– Addition
– Subtraction
– Multiplication
– Division
• Conversions between number systems
– Decimal (base 10) → Binary & vice versa
– Decimal → Octal & vice versa
– Decimal → Hexadecimal & vice versa
– Octal → Binary & vice versa
– Hexadecimal → Binary & vice versa
– Octal → Hexadecimal & vice versa
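
A minimal base-R sketch of some of these conversions (the calls below are standard base-R utilities; the binary formatting line is our own construction):

strtoi("1010", base = 2)        ## binary 1010    -> decimal 10
strtoi("17",   base = 8)        ## octal 17       -> decimal 15
strtoi("FF",   base = 16)       ## hexadecimal FF -> decimal 255
format(as.octmode(15))          ## decimal 15     -> "17" (octal)
format(as.hexmode(255))         ## decimal 255    -> "ff" (hexadecimal)
## decimal -> binary: read the stored bits (little-endian) and reverse them
paste(rev(as.integer(intToBits(10L))[1:8]), collapse = "")   ## "00001010"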

3.5 Consequences of Errors
The unfortunate consequences of errors in numbers and functions are
1. Loss of significant digits, precision and/or resolution
2. Propagation of errors in arithmetic operations
3. Noise during functional evaluation
4. Overflow & underflow errors
3.5.1 Loss of Significant Figures/Digits, Precision and/or Resolution
The loss of significant digits/figures and of resolution/precision results from operations such as cancellation during subtraction and/or addition, e.g. when solving a quadratic equation.

Example 3.5.1. Solution to Quadratic Equations

The solutions to y = f(x) = ax² + bx + c, when real, are

    x1 = (−b + √(b² − 4ac)) / (2a),      x2 = (−b − √(b² − 4ac)) / (2a)

Cancellation is likely for ........ whenever 4ac ≈ 0. □
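
The cancellation warned about above can be seen directly in R; the numbers below are our own illustrative choices, and the second formula is the standard algebraically equivalent rearrangement:

a = 1; b = 1e8; c = 1                  ## 4ac is tiny relative to b^2
disc = sqrt(b^2 - 4*a*c)
x1_naive  = (-b + disc) / (2*a)        ## -b + disc cancels almost completely
x1_stable = (2*c) / (-b - disc)        ## same root, written to avoid the cancellation
c(naive = x1_naive, stable = x1_stable)
## on IEEE doubles the naive form loses most of its significant digits,
## while the stable form returns approximately -1e-08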

Definition 3. Significant digits/figures

• The significant digits of a number written in positional notation are the digits that carry meaningful contributions to the measurement resolution of the number.
• The significant digits of xA are the number of its leading digits that are correct relative to the corresponding digits in the true value xT.
• If xT and xA are written in decimal notation as

      xT = a1 a2 . a3 · · · am am+1 am+2 am+3 · · ·

  such that the error ϵx has the form

      ϵx = xT − xA = 0 0 . 0 · · · 0 am+1 am+2 am+3 · · ·

  (zeros in the first m digit positions), and if the error is ≤ 5 in the (m + 1)th digit of xT, counting from the first non-zero digit, then xA is said to have at least m significant digits of accuracy relative to xT.

Example 3.5.2. Number of Significant Digits. Determine the number of significant digits in the following:

• xT = 1234.567 and xA = 1234.517

      xT = 1234.5 | 67
      xA = 1234.5 | 17
      |ϵx| = |xT − xA| = 0000.0 | 50

  so xA has 5 significant figures.

• xT = 0.02348 and xA = 0.02351

      xT = 0.023 | 48
      xA = 0.023 | 51
      |xT − xA| = 0.000 | 03

  so xA has .... significant figures.

• xT = π = 3.14159265 and xA = π̂ = 3.1428571 (7 dp)

      xT = 3.14 | 159265
      xA = 3.14 | 28571
      |xT − xA| = 0.00 | 126445

  so xA has .... significant figures.
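
The counting rule in Definition 3 can be scripted; this is a hedged sketch (the function name and the small tolerance guarding the floating-point boundary are our own):

sig_digits = function(xT, xA) {
  err = abs(xT - xA)
  if (err == 0) return(Inf)                     ## exact agreement
  lead = floor(log10(abs(xT)))                  ## position of the leading digit of xT
  m = floor(lead - log10(err / 5) + 1e-9)       ## largest m with err <= 5 * 10^(lead - m)
  max(m, 0)
}
sig_digits(1234.567, 1234.517)                  ## 5, as in the first example above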

3.6 Propagation of Errors


Every floating point operation in a computational process can give rise to an error which, once generated, may then be amplified or reduced in subsequent operations, i.e., propagated.
The result of error propagation is loss of precision/accuracy. The most common, and often avoidable, way in which errors grow in importance is termed loss of significant digits.
NOTE
When an arithmetic operation is performed, the number of significant digits of the result is about the same as that of the operands, except when subtracting two numbers that are of nearly the same magnitude.

Example 3.6.1. Consider x − y as follows.

                                      # Significant digits
    x     = 0.76545423 × 10^1                  7
    y     = 0.76544211 × 10^1                  7
    x − y = 0.00001212 × 10^1                  4

Example 3.6.1 illustrates a loss of precision which should be avoided whenever possible.
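A minimal R sketch reproducing the numbers in Example 3.6.1:

x = 0.76545423e1
y = 0.76544211e1
x - y        ## 0.0001212: the leading digits cancel, so far fewer significant digits survive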
The propagation of errors under addition and other arithmetic operations can be considered by letting

x : True value of x
x̄ : Approximate value of x
ϵx : Error in x i.e., x − x̄

y : True value of y
ȳ : Approximate value of y
ϵy : Error in y i.e., y − ȳ
3.6.1 Addition
Propagation of error under addition is obtained as

    x + y = (x̄ + ϵx) + (ȳ + ϵy)                [true value]
          = (x̄ + ȳ) + (ϵx + ϵy)                [approximate value + error from addition]

Notes
1. The maximum error satisfies

       |ϵx + ϵy| ≤ |ϵx| + |ϵy|

2. The error propagated under addition is

       ϵ(x+y) = ϵx + ϵy

3. The relative error is

       Re(x+y) = (ϵx + ϵy) / (x̄ + ȳ)

3.6.2 Subtraction
Propagation of error under subtraction is obtained as

    x − y = (x̄ + ϵx) − (ȳ + ϵy)
          = (x̄ − ȳ) + (ϵx − ϵy)

Notes
1. The error propagated under subtraction is

       ϵ(x−y) = ϵx − ϵy

2. The relative error is

       Re(x−y) = (ϵx − ϵy) / (x̄ − ȳ)

3.6.3 Multiplication
Propagation of error under multiplication is obtained as

    xy = (x̄ + ϵx)(ȳ + ϵy)
       = x̄ȳ + x̄ϵy + ȳϵx + ϵxϵy        (the last term is of 2nd order, ≈ 0)

so that

    xy − x̄ȳ ≈ x̄ϵy + ȳϵx

After neglecting the cross-product term (i.e., limiting calculations to a first-order approximation under multiplication), it follows that
1. The error propagated under multiplication is

       ϵ(xy) = x̄ϵy + ȳϵx

2. The relative error under multiplication is equivalent to the sum of the relative errors of the operands (x and y):

       Re(xy) = ϵx/x̄ + ϵy/ȳ = Re(x) + Re(y)
3.6.4 Division
Propagation of error under division is obtained as

    x/y = (x̄ + ϵx) / (ȳ + ϵy) = (x̄ + ϵx) · [ȳ(1 + ϵy/ȳ)]^(−1)

But if |h| << 1 then (1 + h)^(−1) = 1 − h + h² − h³ + · · · , hence

    x/y = (x̄/ȳ + ϵx/ȳ) · (1 − ϵy/ȳ + ϵy²/ȳ² − ϵy³/ȳ³ + · · ·)
        = x̄/ȳ + ϵx/ȳ − x̄ϵy/ȳ² − ϵxϵy/ȳ² + · · ·        (the cross term is of 2nd order, ≈ 0)
        ≈ x̄/ȳ + (ϵx/ȳ − x̄ϵy/ȳ²)                         [approximation + error]

That is

    ϵ(x/y) = x/y − x̄/ȳ = (ȳϵx − x̄ϵy) / ȳ²

The relative error is

    Re(x/y) = ϵ(x/y) / (x̄/ȳ) = [(ȳϵx − x̄ϵy) / ȳ²] · (ȳ/x̄)
            = ϵx/x̄ − ϵy/ȳ

Note
1. The relative error under division is equal to the difference of the relative errors of the operands:

       Re(x/y) = ϵx/x̄ − ϵy/ȳ = Re(x) − Re(y)

2. For three (or more) operands, say x, y and z, under

   • Addition, say Q = x + y + z: We let u = x + y with ϵu = ϵx + ϵy, and thus the error under the addition u + z is

         ϵQ = ϵu + ϵz = ϵx + ϵy + ϵz

   • Multiplication, say Q = xyz: We let u = xy; the error is

         ϵQ = ϵ(uz) = z̄ϵu + ūϵz,     but ϵu = x̄ϵy + ȳϵx and ū = x̄ȳ, thus
            = z̄(x̄ϵy + ȳϵx) + x̄ȳϵz
            = x̄z̄ϵy + ȳz̄ϵx + x̄ȳϵz
            = ϵ(xyz)

   • EXERCISE: Division. Suppose Q = xy/z and let u = xy; then the error is ???
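
As a quick numerical check of the two-operand rules derived above, the following hedged base-R sketch uses illustrative values of our own choosing:

x_true = 3.141592653589793; x_bar = 3.1416     ## x and its approximation
y_true = 2.718281828459045; y_bar = 2.7183     ## y and its approximation
ex = x_true - x_bar; ey = y_true - y_bar       ## errors as defined above
## error in the product: exact versus the first-order rule x_bar*ey + y_bar*ex
c(exact = x_true*y_true - x_bar*y_bar, first_order = x_bar*ey + y_bar*ex)
## relative error of the quotient: exact versus Re(x) - Re(y)
c(exact = (x_true/y_true - x_bar/y_bar) / (x_bar/y_bar), rule = ex/x_bar - ey/y_bar)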

3.7 Errors/Noise in Functions
The use of approximate values in functions can lead to loss of precision or noise during functional evaluation. The consequence of the error depends on the functional formula (at some limiting values) adopted during sequential/iterative procedures such as optimization.
Consider a function f (x) evaluated at a point x = a where a is an exact value. If the same function is
evaluated at an approximate value x = ā,
Q1. What is the expression of the error ϵf in the function f ?

ϵf = f (a) − f (ā)

Q2. How can the evaluation of f be optimized to reduce the error ϵf ?


Q3. What is the practical bound M on the maximum error such that ϵf ≤ M ?
The answers to the above questions are as follows
A1. If f(x) is approximated by its Taylor series expansion of degree n about ā, with remainder, i.e.,

    f(x) = Σ_{k=0}^{n} [f^(k)(ā)/k!] (x − ā)^k + [f^(n+1)(ξ)/(n+1)!] (x − ā)^(n+1)
         = f(ā) + [f′(ā)/1!](x − ā) + [f″(ā)/2!](x − ā)² + [f‴(ā)/3!](x − ā)³ + · · ·
           · · · + [f^(n)(ā)/n!](x − ā)^n + [f^(n+1)(ξ)/(n+1)!](x − ā)^(n+1),          (3.4)

then the error ϵf involves the remainder term, in which the (n+1)th derivative is evaluated at some point ξ that lies between ā and x (i.e., ξ ∈ [ā, x]), rather than exactly at ā.

If x = a in (3.4), then

    f(a) = f(ā) + [f′(ā)/1!](a − ā) + [f″(ā)/2!](a − ā)² + [f‴(ā)/3!](a − ā)³ + · · ·
           · · · + [f^(n)(ā)/n!](a − ā)^n + [f^(n+1)(ξ)/(n+1)!](a − ā)^(n+1)            (3.5)
         = f(ā) + ϵa f′(ā) + ϵa² f″(ā)/2! + ϵa³ f‴(ā)/3! + · · ·
           · · · + ϵa^n f^(n)(ā)/n! + ϵa^(n+1) f^(n+1)(ξ)/(n+1)!,                       (3.6)

where ϵa = a − ā and the last term involves some ξ ∈ [ā, a]. For example, to a first-order approximation of f (taking n = 0),

    ϵf = f(a) − f(ā) = ϵa f′(ξ)
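
The first-order relation ϵf ≈ ϵa f′ can be checked numerically; the choice f(x) = exp(x) and the values of a and ā below are our own, made only for illustration:

f = function(x) exp(x)                  ## f'(x) = exp(x) as well
a = 1; a_bar = 1.001                    ## a_bar approximates a
eps_a = a - a_bar
c(exact_error = f(a) - f(a_bar),        ## eps_f
  first_order = eps_a * exp(a_bar))     ## eps_a * f'(a_bar), close to eps_f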

Example 3.7.1. Find an expression for the error associated with the quantity Q = (x + 2y) · z⁴.

Solution. Let u = x + 2y and v = z⁴; then ϵu = ϵx + 2ϵy. □
Figure 3.2: Loss of significant digits during calculations as simulated by the algorithm below (plot of "Differences in f(x)", on the order of ±1e-13, against "Values of x" from 0 to 100).

A2. Some functional formulae do not lend themselves to efficient evaluation that reduces ϵf. Reformulation is necessary in order to reduce the loss of precision following loss-of-significance error.

Example 3.7.2. Consider the function

    f(x) = x(√(x + 1) − √x)

Solution. It is imperative to find an appropriate computable form, to avoid error emanating from cancellation and loss of significant digits, i.e.

    f_true(x) = x(√(x + 1) − √x) · (√(x + 1) + √x) / (√(x + 1) + √x)
              = x / (√(x + 1) + √x)

As x increases, the magnitude of the differences between the two forms rises and falls, as illustrated in Figure 3.2 with x ∈ [1, 100]. As x increases further, there are fewer digits of accuracy, as illustrated in Table 3.1 and Figure 3.3. The above results can be recreated using the following code-block.

x        f̄(x): approx          f(x): true            ϵf = |f̄ − f|
1        0.414213562373095     0.414213562373095     5.55111512312578E-17
2        0.635674490391564     0.317837245195782     0.317837245195782
3        0.803847577293368     0.267949192431123     0.535898384862246
4        0.944271909999159     0.236067977499790     0.708203932499370
5        1.067108826416940     0.213421765283388     0.853687061133552
10       1.543471301870200     0.154347130187021     1.389124171683180
100      4.987562112088990     0.049875621120890     4.937686490968100
1000     15.807437428957600    0.015807437428956     15.791629991528700
10000    49.998750062485400    0.004999875006250     49.993750187479200

Table 3.1: Errors in functions. Inappropriate formulae lead to larger errors and/or noise during functional evaluation.

Figure 3.3: Illustration of the error due to loss-of-significance during functional evaluation (bar plot of "Error in function evaluation", up to about 1e-10, against "Values of x" for x = 1, 2, 3, 4, 5, 10, 100, 1000, 10000, 1e+05).

## Write a function that shows a long-format decimal representation
Specify_Decimal = function(x, k) trimws(format(round(x, k), nsmall = k));
Value = 10.03;   ## Note that numbers are not always represented accurately on computers
Specify_Decimal(Value, 10)
for (i in 1:30) cat("i=", i, "\t", Specify_Decimal(Value, i), "\n");

## Illustration of Errors (STA122)
x = 1:10;
fx = x * (sqrt(x+1) - sqrt(x));
fx_true = x / (sqrt(x+1) + sqrt(x));
cbind(x, fx, fx_true)
Specify_Decimal(cbind(x, fx, fx_true), 20)

x = c(1:5, 10, 100, 1000, 10000, 100000)
fx = x * (sqrt(x+1) - sqrt(x));
fx_true = x / (sqrt(x+1) + sqrt(x));
Diff_fx2 = fx_true - fx;
cbind(x, fx, fx_true)
Specify_Decimal(cbind(x, fx, fx_true), 20)

## Uncomment the pdf() and dev.off() lines hereunder to save the plots to PDF files
#pdf("Loss_of_Significant_Digits2.pdf", height = 6, width = 12)
barplot(Diff_fx2, col = rainbow(length(x)), names.arg = x, cex.axis = 1.2,
        main = "Illustration of loss of significant digits (functional evaluation)",
        xlab = "Values of x",
        ylab = "Error in function evaluation")
#dev.off();

## Development of the illustrative plot
x = 1:100;   ## creating some simulated x values
fx = x * (sqrt(x+1) - sqrt(x));
fx_true = x / (sqrt(x+1) + sqrt(x));
Diff_fx = fx_true - fx;

#pdf("Loss_of_Significant_Digits.pdf", height = 6, width = 12)
plot(x, Diff_fx, 'h', lwd = 5, cex.lab = 1.2,
     xlab = "Values of x", ylab = "Differences in f(x)",
     main = "Illustration of loss of significant digits during functional evaluation")
#dev.off();   ## Closing the graphical device (pdf document)

./R-codes/Chapter1v2.R

Chapter 4

Interpolation
4.1 Definition & Application
Webster Dictionary
... act of introducing something, especially spurious and foreign ...
... act of calculating values of functions between values already known ...

Definition 4. Interpolating Polynomial


A polynomial P(x) is termed an interpolating polynomial if the value of P(x), and/or certain of its derivatives, coincide with those of f(x), and/or its derivatives of the same order, at one or more points termed nodes, base points, tabular points, or arguments.
Applications
• Reconstructing a function f (x), when it is not given explicitly but rather only values of f (x)
and/or its derivatives at a set of known points.
• To replace the function f (x) by the interpolating polynomial P (x) so that the common
operations such as differentiation, integration, etc. which are intended for the function f (x)
are performed using P (x).

Example 4.1.1. Taylor series expansion of f(x) about a point x = x0. If a function f(x) is continuous and differentiable on, say, [a, b] ⊂ R, then the Taylor series expansion Pn(x) of f(x) about a point x = x0, i.e.,

    Pn(x) = Σ_{k=0}^{n} [f^(k)(x0)/k!] (x − x0)^k                                    (4.1)
          = f(x0) + [f′(x0)/1!](x − x0) + [f″(x0)/2!](x − x0)² + · · · + [f^(n)(x0)/n!](x − x0)^n

can be regarded as an interpolating polynomial satisfying the following conditions:
• The derivatives satisfy

      Pn^(k)(x0) = f^(k)(x0) for k = 0, 1, . . . , n, i.e.,

      Pn(x0) = f(x0)      if k = 0
      Pn′(x0) = f′(x0)    if k = 1
      Pn″(x0) = f″(x0)    if k = 2

• a
• b

and Pn(x) is of the form

    Pn(x) = a0 + a1 x + a2 x² + · · · + an x^n
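
A hedged R sketch of Example 4.1.1 for the particular choice f(x) = exp(x), x0 = 0 and n = 3 (all of these choices are ours, for illustration only):

x0 = 0
coefs = exp(x0) / factorial(0:3)          ## f^(k)(x0)/k! for k = 0,...,3, since exp is its own derivative
P3 = function(x) sum(coefs * (x - x0)^(0:3))
P3(x0) == exp(x0)                         ## TRUE: the polynomial matches f at the node x0
c(P3(0.1), exp(0.1))                      ## very close near x0
c(P3(1),   exp(1))                        ## less accurate further away from x0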


Definition 5. Interpolation: problem definition. Given the values of a known function y = f(x) at a sequence of ordered points x0, x1, . . . , xn, find f(x) for an arbitrary x between the base points.
Note
• {xi }, i = 0, 1, 2, . . . , n are termed nodes, base points or tabular points
• We assume that f (x) is continuous and differentiable in [x0 , xn ]
• When x0 ≤ x ≤ xn , the problem is termed interpolation
• When x < x0 or x > xn , the problem is termed extrapolation
• Interpolation is more accurate than extrapolation
• If yi = f (xi ), the problem of interpolation involves drawing a smooth curve through the points
(x0 , y0 ), (x1 , y1 ), · · · , (xn , yn ).
• Interpolation is not equivalent to the least-squares curve-fitting problem
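
A minimal base-R sketch of the interpolation problem using linear interpolation via approx(); the nodes and the function sin are our own illustrative choices:

x_nodes = 0:5
y_nodes = sin(x_nodes)                     ## pretend f is known only at the nodes x0,...,xn
approx(x_nodes, y_nodes, xout = 2.5)$y     ## interpolation: estimate f(2.5) between base points
sin(2.5)                                   ## true value, for comparison
approx(x_nodes, y_nodes, xout = 7)$y       ## x = 7 lies outside [x0, xn]: extrapolation,
                                           ## which approx() does not do by default (returns NA)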

