
Spring 2021: Numerical Analysis

Assignment 5 (due Thursday April 22nd 10:00am)

1. Quadratic interpolation and root finding [4+4pts] When solving an equation of the form f(x) = 0,
Newton's method uses explicit derivative information of f to compute the tangent line and its x-intercept
(its root). The secant method uses two points, (x_{k−1}, f(x_{k−1})) and (x_k, f(x_k)), to fit a linear
function approximating the tangent line and then finds its root.

(a) On the other hand, Muller's method uses three points, (x_{k−2}, f(x_{k−2})), (x_{k−1}, f(x_{k−1})),
and (x_k, f(x_k)), to fit a parabola (i.e., interpolate a parabola). It then uses the quadratic
formula to find the root of this parabola that is closest to x_k. Using Lagrange interpolation,
derive Muller's method, i.e., write as compact a formula as you can for obtaining x_{k+1} from
the previous three points.
(b) A core part of the default algorithm in MATLAB's fzero (Brent's method) is an inverse
quadratic interpolation approach, which flips the roles of x and y. The idea is to interpolate
a quadratic polynomial through (f(x_{k−2}), x_{k−2}), (f(x_{k−1}), x_{k−1}), (f(x_k), x_k), and then
simply evaluate this polynomial at y = 0 to estimate the root, thus avoiding the solution of
quadratic equations and possibly complex roots. Derive the iteration formula for this method.
Note: I am aware that both methods are described on Wikipedia, and you are welcome to reference
that. However, you must give complete steps of the derivation with explanations to get credit.
Writing just the final formula will get no points. (A numerical illustration of both update steps
follows below.)
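To make the two fitting steps concrete, here is a minimal MATLAB sketch of a single iteration of each
method. This is an illustration only, not the derivation the problem asks for; the function f and the
three starting iterates are arbitrary placeholders.

    % One step of Muller's method and of inverse quadratic interpolation,
    % starting from three iterates [x_{k-2}, x_{k-1}, x_k].
    f  = @(x) x.^3 - 2;            % placeholder function
    x  = [0.5, 1.5, 1.0];          % placeholder iterates
    fx = f(x);

    % Muller: fit a parabola through the three points (with three points and
    % degree 2, polyfit interpolates exactly), then take the root closest to x(3).
    c = polyfit(x, fx, 2);
    r = roots(c);                  % may be complex
    [~, idx] = min(abs(r - x(3)));
    x_muller = r(idx);

    % Inverse quadratic interpolation: fit x as a quadratic polynomial in y
    % through the points (f(x_i), x_i), then evaluate it at y = 0.
    c2    = polyfit(fx, x, 2);     % requires three distinct f-values
    x_iqi = polyval(c2, 0);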

2. Polynomial interpolation versus least squares fitting [4+2pts] Previously, we have used a least
squares approach to fit functions to data points. We are given the points:

    i   0     1     2     3     4     5
    X   0.0   0.5   1.0   1.5   2.0   2.5
    Y   0.0   0.20  0.27  0.30  0.32  0.33

(a) Write down the least squares problem associated with finding the cubic best-fit polynomial

        Y = aX^3 + bX^2 + cX + d

using (i) all six points, (ii) only the data for i = 0, 1, 2, 3, 4, and (iii) only i = 0, 1, 2, 3. In each
case solve the system and plot both the data points and the polynomial. Why is case (iii) a special
least squares problem? (A sketch of the setup for case (i) appears after this problem.)
(b) What is the degree of the polynomial you would have to use so that the solution interpolates (i.e.,
goes through) all six data points?
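A minimal MATLAB sketch of how the least squares system of case (i) can be assembled and solved; cases
(ii) and (iii) use the same design matrix with fewer rows, and the variable names are placeholders.

    X = [0.0 0.5 1.0 1.5 2.0 2.5]';
    Y = [0.0 0.20 0.27 0.30 0.32 0.33]';

    % Design matrix for Y = a*X^3 + b*X^2 + c*X + d: one column per monomial.
    A = [X.^3, X.^2, X, ones(size(X))];

    % Backslash solves the overdetermined system in the least squares sense.
    coeffs = A \ Y;                % coeffs = [a; b; c; d]

    % Plot the data points and the fitted cubic.
    xx = linspace(0, 2.5, 200);
    plot(X, Y, 'o', xx, polyval(coeffs, xx), '-');
    legend('data', 'cubic fit');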

3. Space of polynomials P_n [3+3+3pts, Code required for c)] Let P_n be the space of functions defined
on [−1, 1] that can be described by polynomials of degree less than or equal to n with coefficients in R.
P_n is a linear space in the sense of linear algebra; in particular, for p, q ∈ P_n and a ∈ R, p + q and
ap are also in P_n. Since the monomials {1, x, x^2, ..., x^n} are a basis for P_n, the dimension of that
space is n + 1.

(a) Show that for pairwise distinct points x_0, x_1, ..., x_n ∈ [−1, 1], the Lagrange polynomials L_k(x)
are in P_n, and that they are linearly independent, that is, whenever a linear combination of the
Lagrange polynomials with coefficients α_k equals the zero polynomial, i.e.,

        ∑_{k=0}^{n} α_k L_k(x) = 0   (the zero polynomial),

it necessarily follows that α_0 = α_1 = ... = α_n = 0. Note that this implies that the (n + 1) Lagrange
polynomials also form a basis of P_n.
(b) Since both the monomials and the Lagrange polynomials are a basis of P_n, each p ∈ P_n can be
written as a linear combination of monomials as well as of Lagrange polynomials, i.e.,

        p(x) = ∑_{k=0}^{n} α_k L_k(x) = ∑_{k=0}^{n} β_k x^k,   (1)

with appropriate coefficients α_k, β_k ∈ R. As you know from basic matrix theory, there exists a basis
transformation matrix that converts the coefficients β = (β_0, ..., β_n)^T into the coefficients
α = (α_0, ..., α_n)^T. Show that this basis transformation matrix is given by the so-called Vandermonde
matrix V ∈ R^{(n+1)×(n+1)},

            ( 1   x_0   x_0^2   ...   x_0^{n−1}   x_0^n )
            ( 1   x_1   x_1^2   ...   x_1^{n−1}   x_1^n )
        V = ( :    :     :      ...       :         :   )
            ( 1   x_n   x_n^2   ...   x_n^{n−1}   x_n^n )

i.e., the relation between α and β in (1) is given by α = V β. An easy way to see this is to choose
appropriate x in (1). (A short numerical check of this relation is sketched below.)
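A quick MATLAB sanity check of the relation α = V β for a random polynomial; this is not a substitute
for the derivation, and the nodes below are arbitrary placeholders. It uses the observation that the
coefficients of p in the Lagrange basis are its nodal values p(x_i).

    n    = 3;
    xi   = linspace(-1, 1, n+1);          % placeholder nodes
    V    = fliplr(vander(xi));            % row i: [1, x_i, x_i^2, ..., x_i^n]
    beta = randn(n+1, 1);                 % random monomial coefficients
    % Lagrange coefficients = nodal values; polyval wants descending powers.
    alpha = polyval(flipud(beta), xi)';
    disp(norm(alpha - V*beta))            % should be near machine precision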
(c) Note that since V transforms one basis into another basis, it must be an invertible matrix. Let us
compute the condition number of V numerically.¹ Compute the 2-norm condition number κ_2(V)
for n = 5, 10, 20, 30 with uniformly spaced nodes x_i = −1 + 2i/n, i = 0, ..., n (a sketch follows
the footnote below). Based on the condition numbers, can this basis transformation be performed
accurately?
¹ MATLAB provides the function vander, which can be used to assemble V (note that vander returns
the columns in reverse, descending-power order). Alternatively, one can use a simple loop to construct V.
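A minimal MATLAB sketch for part (c), assuming the uniformly spaced nodes defined above:

    for n = [5 10 20 30]
        xi = -1 + 2*(0:n)/n;          % uniformly spaced nodes in [-1,1]
        V  = fliplr(vander(xi));      % columns 1, x, x^2, ..., x^n
        fprintf('n = %2d: kappa_2(V) = %.3e\n', n, cond(V, 2));
    end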

4. Interpolation and optimal 2-norm approximation [extra credit, 2+2+2+2pt] For an interval (a, b),
n ∈ N, and pairwise distinct points x_0, ..., x_n in [a, b], we define² for polynomials p, q

        ⟨p, q⟩ := ∑_{i=0}^{n} p(x_i) q(x_i).

(a) Show that ⟨·, ·⟩ is an inner product for each P_k with k ≤ n, where P_k denotes the space of
polynomials of degree k or less.
(b) Why is ⟨·, ·⟩ not an inner product for k > n?
(c) Show that the Lagrange polynomials L_i corresponding to the nodes x_0, ..., x_n are orthonormal
with respect to the inner product ⟨·, ·⟩. (A numerical check of this property is sketched below.)
(d) For a continuous function f : [a, b] → R, compute its optimal approximation in P_n with respect to
the inner product ⟨·, ·⟩ and compare it with the interpolation of f.

² This problem highlights a relationship between optimal 2-norm approximation and interpolation. You
can think of the inner product as obtained from a weighted 2-norm inner product in the limit of weight
functions w that are very large at the node points and small or zero elsewhere.
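As a sanity check for part (c), one can assemble the Gram matrix G with entries G(i, j) = ⟨L_{i−1}, L_{j−1}⟩
numerically and verify that it is the identity. A minimal MATLAB sketch, reusing the lagrange_interpolant
function given in Problem 5; the nodes are arbitrary placeholders.

    n  = 4;
    xk = linspace(0, 1, n+1);      % placeholder distinct nodes in [a,b]
    E  = eye(n+1);                 % row i holds the nodal values defining L_{i-1}
    G  = zeros(n+1);
    for i = 1:n+1
        for j = 1:n+1
            for k = 1:n+1          % G(i,j) = sum_k L_{i-1}(x_k) * L_{j-1}(x_k)
                Li = lagrange_interpolant(xk, E(i,:), xk(k));
                Lj = lagrange_interpolant(xk, E(j,:), xk(k));
                G(i,j) = G(i,j) + Li * Lj;
            end
        end
    end
    disp(G)                        % should be (numerically) the identity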
5. Errors in polynomial interpolation [extra credit, 5pt] Interpolate the function

        f(x) = { 1  if x ≥ 0,
               { 0  if x < 0,

on the domain [−1, 1] using Lagrange polynomials with Chebyshev points.³ You can use the following
MATLAB function lagrange_interpolant to compute the values of the Lagrange interpolants p_n.

    function y0 = lagrange_interpolant(x, y, x0)
    % x  is the vector of abscissas.
    % y  is the matching vector of ordinates.
    % x0 represents the target to be interpolated.
    % y0 represents the solution from the Lagrange interpolation.
    y0 = 0;
    n = length(x);
    for j = 1:n
        t = 1;
        for i = 1:n
            if i ~= j
                t = t * (x0 - x(i)) / (x(j) - x(i));
            end
        end
        y0 = y0 + t * y(j);
    end
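A possible driver for the experiment, assuming the function above and using the Chebyshev points and
error estimates defined in the footnotes below (following the footnotes' indexing, the loop variable n
yields n + 1 nodes):

    f = @(x) double(x >= 0);                  % the step function
    for n = [2 4 8 16 32 64 128 256]
        xk = cos(pi*((0:n) + 0.5)/(n + 1));   % Chebyshev points on [-1,1]
        yk = f(xk);
        xi = -1 + 2*(0:10*n)/(10*n);          % uniform evaluation grid
        p  = arrayfun(@(t) lagrange_interpolant(xk, yk, t), xi);
        err = abs(p - f(xi));
        fprintf('n = %3d: max err = %.3e, L2 err = %.3e\n', ...
                n, max(err), sqrt(2/(10*n) * sum(err.^2)));
    end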

Describe qualitatively what you see for n = 2, 4, 8, 16, 32, 64, 128, 256 interpolation points. Provide a
table of the maximum errors⁴

        ||p_n − f||_∞ = max_{x ∈ [−1,1]} |p_n(x) − f(x)|,

and of the L2-errors⁵

        ||p_n − f||_2 = sqrt( ∫_{−1}^{1} (p_n(x) − f(x))^2 dx )

for each n = 2, 4, 8, 16, 32, 64, 128, 256. Do you expect convergence in the maximum norm? How about
in the L2 norm?

³ Recall that the Chebyshev points on the interval [a, b] are

        x_i = (a + b)/2 + ((b − a)/2) cos( (i + 1/2) π / (n + 1) )   for i = 0, ..., n.

⁴ You can approximate the maximum error by evaluating the error p_n − f at a large number of uniformly
distributed points, e.g., at ∼10n points, and taking the maximum absolute value, i.e.,

        ||p_n − f||_∞ = max_{x ∈ [−1,1]} |p_n(x) − f(x)| ≈ max_{j = 0,...,10n} |p_n(ξ_j) − f(ξ_j)|,

where ξ_j = −1 + (2/(10n)) j for j = 0, ..., 10n.

⁵ You can approximate the L2-error by evaluating the error p_n − f at a large number of uniformly
distributed points, e.g., at ∼10n points, and computing

        ||p_n − f||_2 = sqrt( ∫_{−1}^{1} (p_n(x) − f(x))^2 dx ) ≈ sqrt( (2/(10n)) ∑_{j=0}^{10n} (p_n(ξ_j) − f(ξ_j))^2 ),

where ξ_j = −1 + (2/(10n)) j for j = 0, ..., 10n.
