Approximations and Errors in Numerical Computing

Objective:

Approximations and errors are an inseparable part of numerical work; they are everywhere
and unavoidable. Errors come in a variety of forms and sizes; some are avoidable, some are
not. For instance, data-conversion and round-off errors cannot be avoided, but human error
can be eliminated. Although certain errors cannot be removed completely, we must try to
minimize them in our final solutions. It is therefore essential to know how errors arise, how
they grow during the numerical procedure, and how they affect the accuracy of a solution.
The main objective of this lecture is to understand the major sources of error in numerical
methods.

Numbers and Their Accuracy:


There are two kinds of numbers:
1. Exact numbers
2. Approximate numbers

Exact Numbers:
2, 1/3, 100, etc. are exact numbers because there is no approximation or uncertainty associated
with them. π, √2, etc. are also exact numbers when written in this symbolic form.

Approximate Numbers:
An approximate number is a number used as an approximation to an exact number, differing
only slightly from the exact number for which it stands. For example, an approximate value of π is 3.14
or, if we desire a better approximation, 3.14159265. But we cannot write the exact value of π.

Significant Digits:

The digits used to express a number are called significant digits (or figures).
 A significant digit is one of the digits 1, 2, ..., 9; the digit 0 is also significant except
when it is used to fix the decimal point or to fill the places of unknown or discarded digits.
 The following rules describe how to count significant digits:

1. All non-zero digits are significant.
2. All zeros occurring between non-zero digits are significant.
3. Trailing zeros following a decimal point are significant. For example, 3.50, 65.0 and
0.230 each have three significant digits.
4. Zeros between the decimal point and the first non-zero digit (leading zeros) are not
significant. For example, the following numbers each have four significant digits:
0.0001234, 0.001234, 0.01234.
5. When the decimal point is not written, trailing zeros are ambiguous (generally not
considered significant). We can avoid the ambiguity by writing the number in
scientific notation. For example, 4500 may be written as 45 × 10^2, which contains two
significant digits, whereas 4500.0 contains five significant digits. Further examples:
7.56 × 10^4 has three significant digits
7.560 × 10^4 has four significant digits
7.5600 × 10^4 has five significant digits
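The counting rules above can be sketched in a few lines of Python. This is only an illustration for plain decimal strings (no scientific notation); the helper name count_sig_digits is my own:

```python
def count_sig_digits(s: str) -> int:
    """Count significant digits of a plain decimal string using rules 1-5 above."""
    s = s.lstrip("+-")
    digits = s.replace(".", "").lstrip("0")   # rules 1, 2, 4: leading zeros never count
    if "." not in s:
        digits = digits.rstrip("0")           # rule 5: bare trailing zeros not counted
    return len(digits)

print(count_sig_digits("0.0001234"))  # 4
print(count_sig_digits("4500"))       # 2  (trailing zeros ambiguous)
print(count_sig_digits("4500.0"))     # 5
```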

Accuracy and Precision

The errors associated with both calculations and measurements can be characterized in terms
of their accuracy and precision. Both notions are closely linked to significant digits:
 Accuracy refers to how closely a computed or measured value agrees with the true value. It
can be expressed by the number of significant digits in a value; for example, the number
57.345 is accurate to five significant digits.
 Precision refers to how closely individual computed or measured values agree with each
other. It can be expressed by the number of decimal positions, i.e. the order of magnitude of
the last digit in a value; for example, the number 57.396 has a precision of 0.001.


Rules of Rounding Off:

In numerical computation we come across numbers with large numbers of digits, and it is
often necessary to cut them down to a manageable number of figures. This process is referred
to as rounding off. The error caused by cutting a number down to a usable number of figures
is called a round-off error.
For example, 3.14159265... rounded to the nearest thousandth is 3.142: the third digit after
the decimal point is the thousandths place, and 3.14159265... is closer to 3.142 than to 3.141.

Banker’s Rounding Rule

In the banker’s rounding rule (also known as "Gaussian rounding" or round-half-to-even), a
tie is rounded to the nearest even digit. Banker’s rounding is the default method in Delphi,
VB.NET and VB6, and it follows the specification of IEEE Standard 754. The rule is as follows:
To round off a number to n significant digits, discard all the digits to the right of the nth
significant digit and check the (n+1)th significant digit:

a) if it is less than 5, leave the nth significant digit unaltered;
b) if it is greater than 5, add 1 to the nth significant digit;
c) if it is exactly 5, leave the nth significant digit unaltered if it is even, but
increase it by 1 if it is odd.
 See the following links:
http://www.cs.umass.edu/~weems/CmpSci535/535lecture6.html

http://www.rit.edu/~meseec/eecc250-winter99/IEEE-754references.html

http://www.gotdotnet.com/Community/MessageBoard/Thread.aspx?id=260335

Example: The following numbers are rounded to 4 significant digits.

1.6583 → 1.658
30.0567 → 30.06
0.859458 → 0.8594
3.14159 → 3.142
3.14358 → 3.144
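Python's decimal module implements round-half-to-even natively, so the table above can be reproduced with a short sketch. The helper name round_sig and its string-input convention are my own:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_sig(value: str, n: int) -> Decimal:
    """Round a decimal string to n significant digits, ties to even."""
    d = Decimal(value)
    exp = d.adjusted() - (n - 1)   # exponent of the nth significant digit
    return d.quantize(Decimal(1).scaleb(exp), rounding=ROUND_HALF_EVEN)

print(round_sig("1.6583", 4))    # 1.658  (discarded 3 < 5: unchanged)
print(round_sig("30.0567", 4))   # 30.06  (discarded 7 > 5: round up)
print(round_sig("3.14159", 4))   # 3.142
print(round_sig("2.5", 1))       # 2      (exact tie: round to even)
```

One caveat: quantize inspects all discarded digits, not just the (n+1)th. 0.859458 therefore rounds to 0.8595 here, whereas checking only the single digit 5 against the even fourth digit gives the 0.8594 shown above.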

Error in Numerical Methods:

Numerical errors arise from the use of approximations to represent exact mathematical
operations and quantities:
True value = approximation + error
Error (E) = true value − approximation

Sources of Errors
A number of different types of error arise during the process of numerical computing. All of
these errors contribute to the total error in the final result.

1. Inherent Errors: Inherent errors are those already present in the data supplied to the model
(also known as input errors). They have two components: data errors and conversion errors.

Data Errors:

Data errors (also known as statistical errors) occur when the data for a problem are obtained
by some experimental means and are therefore of limited precision and reliability. They may
be due to limitations in instrumentation and reading, and may therefore be unavoidable.

Conversion Errors:

Conversion errors (also known as representation errors) occur because of the computer's
limited ability to store data exactly. The floating-point representation retains only a specified
number of digits; the digits that are not retained constitute the round-off error.

Example: Represent the decimal numbers 0.1 and 0.4 in binary form with an accuracy of
8 binary digits. Add them, and then convert the result back to decimal form.

Solution:

(0.1)10 = (0.0001 1001)2
(0.4)10 = (0.0110 0110)2
Sum = (0.0111 1111)2
= 2^-2 + 2^-3 + 2^-4 + 2^-5 + 2^-6 + 2^-7 + 2^-8
= 0.25 + 0.125 + 0.0625 + 0.03125 + 0.015625 + 0.0078125 + 0.00390625
= 0.49609375
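The conversion above can be reproduced numerically. In this sketch the helper to_bin_frac is my own; it chops a fraction to a fixed number of binary digits by flooring:

```python
def to_bin_frac(x: float, bits: int) -> int:
    """Chop a fraction 0 <= x < 1 to `bits` binary digits,
    returned as an integer numerator over 2**bits."""
    return int(x * 2**bits)          # int() truncates, i.e. chops

a = to_bin_frac(0.1, 8)              # 25  = 0b00011001
b = to_bin_frac(0.4, 8)              # 102 = 0b01100110
total = (a + b) / 2**8               # convert the sum back to decimal
print(bin(a + b), total)             # 0b1111111 0.49609375
```

Neither 0.1 nor 0.4 is exactly representable in binary, so the 8-bit sum recovers 0.49609375 rather than 0.5: a conversion error of 0.00390625.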

2. Numerical Errors:

Numerical errors are introduced during the process of carrying out a numerical procedure
(they are also known as procedural errors). They come in two forms: round-off errors and
truncation errors.

a) Round-off error:

Round-off errors occur when exact numbers are represented by a fixed number of digits.
Since numbers are stored in this form at every stage of computation, a round-off error is
introduced at the end of every arithmetic operation.
Rounding a number can be done in two ways: chopping and symmetric rounding. Some
systems use the chopping method while others use symmetric rounding.

i) Chopping: In chopping, the extra digits are simply dropped; this is also called truncating
the number. For example, on a computer with a fixed word length of four digits, a number
like 42.7893 will be stored as 42.78 and the digits 93 will be dropped:
x = 42.7893
= 0.427893 × 10^2
= (0.4278 + 0.000093) × 10^2
= (0.4278 + 0.93 × 10^-4) × 10^2
This can be expressed in general form as:
True x = (fx + gx × 10^-d) × 10^E
= fx × 10^E + gx × 10^(E-d)
= approximate x + error
 In chopping, error = gx × 10^(E-d), 0 ≤ gx < 1, where d is the length of the mantissa, E is
the exponent, and gx is the truncated part of the number in normalized form.
 In chopping, absolute error < 10^(E-d).

ii) Symmetric Round-off: In the symmetric round-off method, the last retained significant
digit is "rounded up" by 1 if the first discarded digit is greater than or equal to 5; otherwise,
the last retained digit is left unchanged. For example, the number 42.7893 would become
42.79 and the number 76.5432 would become 76.54.
 In symmetric round-off, when gx < 0.5:
True x = fx × 10^E + gx × 10^(E-d)
Approximate x = fx × 10^E
Error = gx × 10^(E-d)
 In symmetric round-off, when gx ≥ 0.5:
True x = fx × 10^E + gx × 10^(E-d)
Approximate x = (fx + 10^-d) × 10^E = fx × 10^E + 10^(E-d)
Error = [fx × 10^E + gx × 10^(E-d)] − [fx × 10^E + 10^(E-d)]
= (gx − 1) × 10^(E-d)
 In symmetric round-off, absolute error ≤ 0.5 × 10^(E-d).
 Sometimes the banker’s rounding rule is used for symmetric round-off.

Example: Find the round-off error in storing the number 752.6835 using a four-digit
mantissa.

Solution:

True x = 752.6835
= 0.7526835 × 10^3
= (0.7526 + 0.0000835) × 10^3
= (0.7526 + 0.835 × 10^-4) × 10^3
= 0.7526 × 10^3 + 0.835 × 10^-1

Chopping method:
Approximate x = 0.7526 × 10^3
Error = 0.835 × 10^-1 = 0.0835
Symmetric round-off:
Approximate x = 0.7527 × 10^3
Error = (gx − 1) × 10^(E-d) = (0.835 − 1) × 10^-1 = −0.0165
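The two rounding modes can be compared in a short Python sketch. The helper names normalize, chop and sym_round are illustrative, not a standard API, and floating-point noise makes the errors approximate:

```python
import math

def normalize(x: float):
    """Write x as f * 10**E with 0.1 <= f < 1; return (f, E)."""
    E = math.floor(math.log10(abs(x))) + 1
    return x / 10**E, E

def chop(x: float, d: int) -> float:
    """Keep d mantissa digits by dropping the rest."""
    f, E = normalize(x)
    return math.floor(f * 10**d) * 10.0**(E - d)

def sym_round(x: float, d: int) -> float:
    """Keep d mantissa digits with symmetric rounding."""
    f, E = normalize(x)
    return round(f * 10**d) * 10.0**(E - d)

x = 752.6835
print(x - chop(x, 4))       # chopping error, ~0.0835
print(x - sym_round(x, 4))  # symmetric round-off error, ~-0.0165
```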
b) Truncation Errors: Truncation errors arise from using an approximation in place of an
exact mathematical procedure. Typically they occur when an infinite process is cut short,
for example when a finite number of terms is used to approximate the sum of an infinite
series. Consider the following series:
sin(x) = x − x^3/3! + x^5/5! − x^7/7! + ...

When we calculate the sine of an angle using this series, we cannot use all the terms in the
series; we terminate the process after a certain term has been computed. The terms that are
truncated introduce an error, which is called the truncation error.

Example: Find the truncation error in the result of the following function for x = 1/5 when
we use only the first three terms.
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + x^5/5! + x^6/6!
Solution:
Truncation error = x^3/3! + x^4/4! + x^5/5! + x^6/6! = 0.1402755 × 10^-2
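The arithmetic can be checked directly in Python; this is a quick verification sketch:

```python
import math

x = 1 / 5
approx = 1 + x + x**2 / 2      # first three terms of the series
truncated = sum(x**n / math.factorial(n) for n in range(3, 7))
print(truncated)               # ~0.1402755e-2
# math.exp(x) - approx gives nearly the same value,
# since the terms beyond x**6 are tiny.
print(math.exp(x) - approx)
```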

3. Modeling Errors:
Modeling errors occur in the formulation of a mathematical model because of simplifying
assumptions. For example, when designing a model for the force acting on a falling body, we
may not be able to estimate the coefficient of air resistance properly, or determine the
direction and magnitude of the wind force acting on the body. To simplify the model, we may
assume that the force due to air resistance is linearly proportional to the velocity of the falling
body, or that there is no wind force acting on the body at all. All such simplifications
introduce errors into the output of the model; these are called modeling errors.

We can reduce modeling errors by improving or extending the model, for instance by adding
more detail. But the improvement may make the model more difficult to solve, or may make
the solution process more time-consuming, and it is not always true that an enhanced model
will provide better results. On the other hand, an oversimplified model may produce a result
that is unacceptable. It is therefore necessary to strike a balance between the level of
accuracy and the complexity of the model.

4. Blunders: Blunders are errors caused by human imperfection. Since these errors are due to
human mistakes, it should be possible to avoid them to a large extent by acquiring a sound
knowledge of all aspects of the problem as well as of the numerical process. Some common
blunders are:

1. Lack of understanding of the problem
2. Wrong assumptions
3. Selecting a wrong numerical method for solving the mathematical model
4. Making mistakes in the computer program
5. Mistakes in data input
6. Wrong guesses for initial values
Different Types of Error:

Absolute Error:

The absolute error is the absolute difference between the true value of a quantity and its
approximate value as given or obtained by measurement or calculation. Thus, if Xt is the true
value of a quantity and Xa is its approximate value, then the absolute error Ea is given by:

Ea = |Xt − Xa|

Relative Error:

The relative error is simply the "normalized" absolute error. The relative error Er is defined
as follows:

Er = Absolute Error (Ea) / |True Value (Xt)| = |Xt − Xa| / |Xt|

More often, the only quantity available to us is Xa and, therefore, we can modify the above
relation as follows: Er ≈ |Xt − Xa| / |Xa|

Percent Relative Error:

The percent relative error is 100 times the relative error. It is denoted by Ep and defined by:

Ep = Er × 100% = (Absolute Error (Ea) / |True Value (Xt)|) × 100%

 Relative and percent relative errors are independent of the unit of measurement, whereas
absolute errors are expressed in terms of the units used.

Absolute and Relative Accuracy

Let ∆X be a number such that |Xt − Xa| ≤ ∆X. Then ∆X, an upper limit on the magnitude of
the absolute error, is said to measure the absolute accuracy.
Similarly, the quantity ∆X / |Xt| ≈ ∆X / |Xa| measures the relative accuracy:

Relative Accuracy (R) = ∆X / |True Value (Xt)| ≈ ∆X / |Approximate Value (Xa)|

 If the number X is rounded to N decimal places, then
∆X = ½ × 10^-N
 Example: If X = 0.51 is correct to 2 decimal places, then ∆X = 0.005. The relative
accuracy is given by 0.005/0.51 ≈ 0.0098 = 0.98%.

Example:
Suppose that you have the task of measuring the lengths of a bridge and a rivet and come up
with 9999 cm and 9 cm, respectively. If the true values are 10,000 cm and 10 cm, respectively,
compute:
(a) the true absolute error and
(b) the true percent relative error for each case.

Solution:
(a) The absolute error for the bridge is: Et = 10,000 − 9999 = 1 cm.
The absolute error for the rivet is: Et = 10 − 9 = 1 cm.
(b) The percent relative error for the bridge is: εt = 1/10,000 × 100% = 0.01%
For the rivet it is: εt = 1/10 × 100% = 10%

Thus, although both measurements have an error of 1 cm, the relative error for the rivet is much
greater. We would conclude that we have done an adequate job of measuring the bridge, whereas
our estimate for the rivet leaves something to be desired.
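The two computations above can be wrapped in a small helper; the function name errors is an assumption of this sketch:

```python
def errors(true_value: float, approx: float):
    """Return (absolute error, percent relative error)."""
    ea = abs(true_value - approx)
    return ea, ea / abs(true_value) * 100

print(errors(10_000, 9_999))  # the bridge: (1, 0.01)
print(errors(10, 9))          # the rivet:  (1, 10.0)
```

The same 1 cm absolute error yields a percent relative error a thousand times larger for the rivet.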

Machine Epsilon:
The round-off error introduced in a number when it is represented in floating-point form is
given by:
Chopping error = g × 10^(E-d), 0 ≤ g < 1
where
g: truncated part of the number in normalized form,
d: number of digits permitted in the mantissa, and
E: the exponent.
The absolute relative error due to chopping is then given by: Er = |(g × 10^(E-d)) / (f × 10^E)|
The relative error is largest when g is largest and f is smallest. We know that the maximum
possible value of g is just under 1.0 and the minimum possible value of f is 0.1. The absolute
value of the relative error therefore satisfies:
Er ≤ |(1.0 × 10^(E-d)) / (0.1 × 10^E)| = 10^(-d+1)
This maximum relative error is known as the machine epsilon. The name "machine" indicates
that the value is machine dependent: the length d of the mantissa varies from machine to
machine.

For a decimal machine that uses chopping: machine epsilon є = 10^(-d+1)

Similarly, for a machine that uses symmetric round-off:
Er ≤ |(0.5 × 10^(E-d)) / (0.1 × 10^E)| = ½ × 10^(-d+1)
and therefore machine epsilon є = ½ × 10^(-d+1)
It is important to note that the machine epsilon is an upper bound on the relative round-off
error due to floating-point representation. It also tells us that data can be represented in the
machine with d significant decimal digits, and that this relative error does not depend in any
way on the size of the number.
More generally, for a number x represented in a computer:
Absolute error bound = |x| × є
For a computer system with binary representation, the machine epsilon is given by:
Chopping: machine epsilon є = 2^(-d+1)
Symmetric rounding: machine epsilon є = 2^-d
Here we have simply replaced the base 10 by base 2, where d is the length of the binary
mantissa in bits.
We may generalize the expression for the machine epsilon of a machine that uses base b with
a d-digit mantissa as follows:
Chopping: machine epsilon є = b × b^-d
Symmetric rounding: machine epsilon є = (b/2) × b^-d
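For IEEE 754 double precision (binary, d = 53 significand bits including the implicit leading bit), the classic halving loop finds this threshold experimentally. One caveat on conventions: Python's sys.float_info.epsilon is defined as the gap between 1.0 and the next representable float, 2^-52, which is twice the symmetric-rounding bound 2^-53 given by the formula above.

```python
import sys

# Halve eps until adding half of it to 1.0 no longer changes the result.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2

print(eps)                            # 2.220446049250313e-16, i.e. 2**-52
print(eps == sys.float_info.epsilon)  # True
```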

Error Propagation
Numerical computing involves a series of computations consisting of basic arithmetic operations.
It is therefore not the individual round-off errors that matter but the final error in the
result. Our major concern is how an error at one point in the process propagates and how it
affects the final total error.
Addition and Subtraction:
Consider the addition of two numbers, say, x and y:
xt + yt = (xa + ex) + (ya + ey) = (xa + ya) + (ex + ey)
Therefore,
Total error = ex+y = ex + ey
Similarly, for subtraction:
Total error = ex-y = ex − ey
The sum ex + ey does not mean that the error will grow in every case; that depends on the
signs of the individual errors, and similarly for subtraction.
Since we do not normally know the signs of the errors, we can only estimate error bounds. That is,
we can say that
|ex±y| ≤ |ex| + |ey|
Therefore, the rule for addition and subtraction is: the magnitude of the absolute error of a sum
(or difference) is bounded by the sum of the magnitudes of the absolute errors of the operands.
 This inequality is called the triangle inequality.

Multiplication:
Here, we have
xt × yt = (xa + ex) × (ya + ey) = xa·ya + ya·ex + xa·ey + ex·ey
Errors are normally small, so their product is much smaller still. Therefore, if we neglect the
product of the errors, we get
xt × yt ≈ xa·ya + xa·ey + ya·ex = xa·ya + xa·ya (ex/xa + ey/ya)
Then,
Total error = exy = xa·ya (ex/xa + ey/ya)

Division:
We have
xt / yt = (xa + ex) / (ya + ey)
Multiplying both numerator and denominator by (ya − ey) and rearranging the terms, we get
xt / yt = (xa·ya + ya·ex − xa·ey − ex·ey) / (ya^2 − ey^2)
Dropping all terms that involve only products of errors, we have
xt / yt ≈ (xa·ya + ya·ex − xa·ey) / ya^2 = xa/ya + (xa/ya)(ex/xa − ey/ya)
Thus,
Total error = ex/y = (xa/ya)(ex/xa − ey/ya)
 Applying the triangle inequality, we have
|ex/y| ≤ |xa/ya| (|ex/xa| + |ey/ya|)
|exy| ≤ |xa·ya| (|ex/xa| + |ey/ya|)


 Note: The final errors ex+y, ex-y, exy, and ex/y above are expressed in terms of ex and ey
only; they do not include the round-off error introduced by the operation itself. Because the
result of each operation must again be stored in floating-point form, an additional round-off
error e0 is introduced and must be added in each case. For example, ex+y = ex + ey + e0.
 We can now bound the relative errors for all four operations as follows:

Addition & Subtraction:

er,x±y ≤ (|ex| + |ey|) / |xa ± ya|

Multiplication & Division:

|er,xy| ≤ |er,x| + |er,y|
|er,x/y| ≤ |er,x| + |er,y|

Example: Estimate the relative error in z = x − y when x = 0.1234 × 10^4 and y = 0.1232 × 10^4,
as stored in a system with a four-digit mantissa.
Solution: We know
er,z ≤ (|ex| + |ey|) / |xa − ya|
Since the numbers x and y are stored in a four-digit mantissa system, they are properly rounded
off and therefore
er,x ≤ ½ × 10^-3 = 0.05%
er,y ≤ ½ × 10^-3 = 0.05%
Then
ex ≤ 0.1234 × 10^4 × 0.5 × 10^-3 = 0.617
ey ≤ 0.1232 × 10^4 × 0.5 × 10^-3 = 0.616
Therefore
|ez| ≤ |ex| + |ey| = 1.233
|er,z| ≤ 1.233 / |0.1234 × 10^4 − 0.1232 × 10^4| = 1.233 / 2 = 0.6165 = 61.65%
The subtraction of two nearly equal numbers has magnified a 0.05% relative error per operand
into a 61.65% bound on the result.
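The bound in this example can be reproduced numerically; this is a verification sketch with arbitrary variable names:

```python
x, y = 0.1234e4, 0.1232e4
er = 0.5e-3                        # relative round-off bound for a 4-digit mantissa
ex, ey = abs(x) * er, abs(y) * er  # absolute error bounds: 0.617 and 0.616
bound = (ex + ey) / abs(x - y)     # relative error bound of the difference
print(bound)                       # ~0.6165, i.e. about 61.65 %
```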

Example: Let ∆X = 0.005 and ∆Y = 0.001 be the absolute errors in X = 2.11 and Y = 4.15.
Find the relative error in the computation of X + Y.

Solution: Here, X = 2.11, Y = 4.15

∴ X + Y = 2.11 + 4.15 = 6.26

and ∆X = 0.005, ∆Y = 0.001

∴ ∆X + ∆Y = 0.005 + 0.001 = 0.006

The relative error is Er = 0.006 / 6.26 = 0.000958 ≈ 0.001

CONVERGENCE

 Numerical computing is based on the idea of iterative processes.
 An iterative process generates a sequence of approximations, with the hope that the
sequence will converge to the required solution.
 Certain methods converge faster than others.
 It is necessary to know the convergence rate of a method in order to obtain the required
solution efficiently.
 Rapidly convergent methods take less execution time.

References:

1. Balagurusamy, E. Numerical Methods. New Delhi: Tata McGraw-Hill, 2000.

2. Chapra, Steven C., and Raymond P. Canale. Numerical Methods for Engineers. New Delhi:
Tata McGraw-Hill, 2003. ISBN 0-07-047437-0.
