ECH 3128 Topic 6 Curve Fitting 1

The document describes various techniques for curve fitting discrete data points to obtain estimates of intermediate values. There are two general approaches to curve fitting: fitting a single curve to represent the overall trend of scattered data, or passing curves through each precise data point. Curve fitting has two main applications in engineering: trend analysis to predict dependent variable values, and hypothesis testing to compare mathematical models to measured data.


Curve Fitting 1

Curve Fitting

• Describes techniques to fit curves (curve fitting) to discrete data to obtain intermediate estimates.
• There are two general approaches to curve fitting:
  – Data exhibit a significant degree of scatter. The strategy is to derive a single curve that represents the general trend of the data.
  – Data are very precise. The strategy is to pass a curve or a series of curves through each of the points.

• In engineering, two types of applications are encountered:
  – Trend analysis. Predicting values of the dependent variable; may include extrapolation beyond the data points or interpolation between data points.
  – Hypothesis testing. Comparing an existing mathematical model with measured data.
Curve Fitting
a) Regression
Linear Regression, Polynomial Regression,
Multilinear Regression, Nonlinear Regression

b) Linear Interpolation

c) Polynomial Interpolation
Linear Regression

• Fitting a straight line to a set of paired observations: $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$.

$$y = a_0 + a_1 x + e$$

$a_1$ – slope
$a_0$ – intercept
$e$ – error, or residual, between the model and the observations
Linear Regression
Criteria for a "Best" Fit
• Minimize the sum of the residual errors for all available data:

$$\sum_{i=1}^{n} e_i = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)$$

where $n$ = total number of points.
• However, this is an inadequate criterion (positive and negative errors can cancel), and so is the sum of the absolute values:

$$\sum_{i=1}^{n} |e_i| = \sum_{i=1}^{n} |y_i - a_0 - a_1 x_i|$$
Linear Regression
• The best strategy is to minimize the sum of the squares of the residuals between the measured $y$ and the $y$ calculated with the linear model:

$$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( y_{i,\text{measured}} - y_{i,\text{model}} \right)^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2$$

• This criterion yields a unique line for a given set of data.
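To see why the signed-error criterion fails, the short sketch below (illustrative data and candidate lines, not from the slides) evaluates all three criteria for a reasonable line and a deliberately poor one; both give $\sum e_i = 0$, while $S_r$ clearly separates them.

```python
import numpy as np

# Made-up data scattered around y = 2x + 1 (illustrative only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.2, 4.8, 7.1, 8.9, 11.0])

def criteria(a0, a1):
    e = y - a0 - a1 * x                 # residuals y_i - a0 - a1*x_i
    return e.sum(), np.abs(e).sum(), (e**2).sum()

# A reasonable line, and a flat line through the mean of y:
# both have a signed-error sum of exactly zero.
for a0, a1 in [(1.0, 2.0), (7.0, 0.0)]:
    s, sabs, sr = criteria(a0, a1)
    print(f"a0={a0}, a1={a1}: sum e = {s:+.2f}, sum|e| = {sabs:.2f}, Sr = {sr:.2f}")
```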


Linear Regression

$$\frac{\partial S_r}{\partial a_0} = -2\sum (y_i - a_0 - a_1 x_i) = 0$$

$$\frac{\partial S_r}{\partial a_1} = -2\sum \left[ (y_i - a_0 - a_1 x_i)\, x_i \right] = 0$$

Expanding, and noting that $\sum a_0 = n a_0$, gives the normal equations, which can be solved simultaneously with the linear algebraic equation methods:

$$n a_0 + \left( \sum x_i \right) a_1 = \sum y_i$$
$$\left( \sum x_i \right) a_0 + \left( \sum x_i^2 \right) a_1 = \sum x_i y_i$$

In matrix form:

$$\begin{bmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \end{bmatrix}$$

Solving for the coefficients:

$$a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2}, \qquad a_0 = \bar{y} - a_1 \bar{x}$$

where $\bar{x}$ and $\bar{y}$ are the mean values of $x$ and $y$.
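A minimal Python sketch of these closed-form formulas (the data values are invented for illustration):

```python
import numpy as np

def linear_regression(x, y):
    """Least-squares fit of y = a0 + a1*x using the normal-equation formulas."""
    n = len(x)
    a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / \
         (n * np.sum(x**2) - np.sum(x)**2)
    a0 = np.mean(y) - a1 * np.mean(x)   # a0 = y_bar - a1 * x_bar
    return a0, a1

# Illustrative data scattered around y = 2x + 1.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.2, 4.8, 7.1, 8.9, 11.0])
a0, a1 = linear_regression(x, y)
print(f"a0 = {a0:.4f}, a1 = {a1:.4f}")   # should be close to 1 and 2
```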
Linear Regression
"Goodness" of the fit
If
• the total sum of the squares around the mean for the dependent variable $y$ is $S_t = \sum (y_i - \bar{y})^2$, and
• the sum of the squares of the residuals around the regression line is $S_r$,
then $S_t - S_r$ quantifies the improvement, or error reduction, due to describing the data in terms of a straight line rather than as an average value.

$$r^2 = \frac{S_t - S_r}{S_t}$$

$r^2$ – coefficient of determination
$r = \sqrt{r^2}$ – correlation coefficient
Linear Regression
• For a perfect fit, $S_r = 0$ and $r = r^2 = 1$, signifying that the line explains 100 percent of the variability of the data.
• For $r = r^2 = 0$, $S_r = S_t$ and the fit represents no improvement.
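Continuing the earlier sketch, $S_t$, $S_r$, and $r^2$ can be computed as follows (same illustrative data):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.2, 4.8, 7.1, 8.9, 11.0])

# Fitted coefficients from the normal equations (see previous sketch).
a1 = (len(x) * np.sum(x * y) - np.sum(x) * np.sum(y)) / \
     (len(x) * np.sum(x**2) - np.sum(x)**2)
a0 = np.mean(y) - a1 * np.mean(x)

St = np.sum((y - np.mean(y))**2)        # spread around the mean
Sr = np.sum((y - a0 - a1 * x)**2)       # spread around the regression line
r2 = (St - Sr) / St                     # coefficient of determination
print(f"St = {St:.3f}, Sr = {Sr:.3f}, r^2 = {r2:.4f}")
```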
Linearize a Nonlinear Relationship

Example – Falling film absorption

$$Sh = a\, Re^{m}$$

We want to find $a$ and $m$. Taking the logarithm of both sides linearizes the relationship:

$$\ln(Sh) = m \ln(Re) + \ln(a)$$
$$\;\,y \;=\; m\,x \,+\, c$$
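A hedged sketch of this log-transform fit in Python; the (Re, Sh) values below are hypothetical, not from the slides:

```python
import numpy as np

# Hypothetical (Re, Sh) measurements for a falling-film absorber.
Re = np.array([100.0, 300.0, 1000.0, 3000.0, 10000.0])
Sh = np.array([4.1, 8.0, 15.9, 30.5, 62.0])

# Fit ln(Sh) = m*ln(Re) + ln(a) with a straight-line least-squares fit.
X, Y = np.log(Re), np.log(Sh)
n = len(X)
m = (n * np.sum(X * Y) - np.sum(X) * np.sum(Y)) / \
    (n * np.sum(X**2) - np.sum(X)**2)
ln_a = np.mean(Y) - m * np.mean(X)
a = np.exp(ln_a)                      # back-transform the intercept
print(f"Sh ≈ {a:.3f} * Re^{m:.3f}")
```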
Linearize a Nonlinear Relationship
Polynomial Regression

• Similar to linear regression, we can do the same thing for polynomial regression:

$$y = a_0 + a_1 x + a_2 x^2 + \ldots + a_n x^n + e$$

Example – second order:

$$y = a_0 + a_1 x + a_2 x^2 + e$$

$$S_r = \sum_{i=1}^{n} \left( y_i - (a_0 + a_1 x_i + a_2 x_i^2) \right)^2$$
Polynomial Regression

• Differentiate with respect to the coefficients and equate to zero:

$$\frac{\partial S_r}{\partial a_0} = 0, \quad \frac{\partial S_r}{\partial a_1} = 0, \quad \frac{\partial S_r}{\partial a_2} = 0$$

$$\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix}$$

Solve with the linear algebraic equation methods.

Best fit: $r^2$
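A minimal sketch assembling and solving this 3×3 system in Python (data invented to lie near $y = 1 + 2x + 0.5x^2$):

```python
import numpy as np

# Illustrative data scattered around y = 1 + 2x + 0.5x^2.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 3.4, 7.2, 11.4, 17.1, 23.6])

# Assemble the normal equations for y = a0 + a1*x + a2*x^2.
A = np.array([
    [len(x),       x.sum(),        (x**2).sum()],
    [x.sum(),      (x**2).sum(),   (x**3).sum()],
    [(x**2).sum(), (x**3).sum(),   (x**4).sum()],
])
b = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])

a0, a1, a2 = np.linalg.solve(A, b)   # any linear-equation solver works here
print(f"a0 = {a0:.4f}, a1 = {a1:.4f}, a2 = {a2:.4f}")
```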
Multilinear Regression

• Similar to linear regression, we can do the same thing for multilinear regression:

$$y = a_0 + a_1 x_1 + a_2 x_2 + \ldots + a_n x_n + e$$

Example – two independent variables:

$$y = a_0 + a_1 x_1 + a_2 x_2 + e$$

$$S_r = \sum_{i=1}^{n} \left( y_i - (a_0 + a_1 x_{1,i} + a_2 x_{2,i}) \right)^2$$
Multilinear Regression

• Differentiate with respect to the coefficients and equate to zero:

$$\frac{\partial S_r}{\partial a_0} = 0, \quad \frac{\partial S_r}{\partial a_1} = 0, \quad \frac{\partial S_r}{\partial a_2} = 0$$

$$\begin{bmatrix} n & \sum x_{1,i} & \sum x_{2,i} \\ \sum x_{1,i} & \sum x_{1,i}^2 & \sum x_{1,i} x_{2,i} \\ \sum x_{2,i} & \sum x_{1,i} x_{2,i} & \sum x_{2,i}^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_{1,i} y_i \\ \sum x_{2,i} y_i \end{bmatrix}$$

Solve with the linear algebraic equation methods.

Best fit: $r^2$
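The same pattern for two independent variables; the data below are illustrative (constructed to lie on $y = 5 + 4x_1 - 3x_2$):

```python
import numpy as np

# Illustrative data lying on y = 5 + 4*x1 - 3*x2.
x1 = np.array([0.0, 2.0, 2.5, 1.0, 4.0, 7.0])
x2 = np.array([0.0, 1.0, 2.0, 3.0, 6.0, 2.0])
y  = np.array([5.0, 10.0, 9.0, 0.0, 3.0, 27.0])

# Assemble the 3x3 normal equations for y = a0 + a1*x1 + a2*x2.
A = np.array([
    [len(x1),  x1.sum(),        x2.sum()],
    [x1.sum(), (x1**2).sum(),   (x1 * x2).sum()],
    [x2.sum(), (x1 * x2).sum(), (x2**2).sum()],
])
b = np.array([y.sum(), (x1 * y).sum(), (x2 * y).sum()])

a0, a1, a2 = np.linalg.solve(A, b)
print(f"a0 = {a0:.3f}, a1 = {a1:.3f}, a2 = {a2:.3f}")  # expect 5, 4, -3
```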
General Linear Least-Squares Method

Let us say

$$y = a_0 z_0 + a_1 z_1 + a_2 z_2 + \ldots + a_m z_m + e$$

Linear: $z_0 = 1,\; z_1 = x_1,\; z_2 = x_2,\; \ldots,\; z_m = x_m$
Polynomial: $z_0 = 1,\; z_1 = x,\; z_2 = x^2,\; \ldots,\; z_m = x^m$

General Linear Least-Squares Method

In matrix form:

$$[Y] = [Z][A] + [E]$$

where

$$[Z] = \begin{bmatrix} z_{01} & z_{11} & \cdots & z_{m1} \\ z_{02} & z_{12} & \cdots & z_{m2} \\ \vdots & \vdots & & \vdots \\ z_{0n} & z_{1n} & \cdots & z_{mn} \end{bmatrix}$$

$n$ – number of data points
$m$ – number of independent variables / polynomial order

If we reduce the residual to zero, $[E] \approx 0$, then

$$[Z][A] = [Y]$$

and premultiplying by $[Z]^T$ gives the normal equations:

$$[Z]^T [Z][A] = [Z]^T [Y]$$
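A general sketch: build $[Z]$ from any chosen basis functions $z_k$ and solve the normal equations. The quadratic basis and data below are assumptions for illustration:

```python
import numpy as np

def general_linear_lsq(z_funcs, x, y):
    """Fit y = sum_k a_k * z_k(x) by solving Z^T Z A = Z^T Y."""
    Z = np.column_stack([f(x) for f in z_funcs])   # n rows, m+1 columns
    return np.linalg.solve(Z.T @ Z, Z.T @ y)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 3.4, 7.2, 11.4, 17.1, 23.6])

# Quadratic basis z0 = 1, z1 = x, z2 = x^2 reproduces polynomial regression.
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]
a = general_linear_lsq(basis, x, y)
print("coefficients:", a)
```

Any other linear-in-the-coefficients basis (sinusoids, logarithms, multiple variables) drops into the same `z_funcs` list without changing the solver.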
Nonlinear Regression

• As we have seen, if the model is a polynomial, we can use polynomial regression.
• If the model can be linearized by rearranging it, we can use linear regression.
• If it cannot be linearized by rearranging, we can linearize the model using a Taylor series expansion. This method requires iteration.
Nonlinear Regression

$$y = f(x) + e$$

$$S_r = \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2$$

Example:

$$f(x) = a_0 \left( 1 - e^{a_1 x} \right)$$

Taylor series expansion:

$$f(x_i)_{j+1} = f(x_i)_j + \frac{\partial f(x_i)_j}{\partial a_0} \Delta a_0 + \frac{\partial f(x_i)_j}{\partial a_1} \Delta a_1$$

$i$ – data number
$j$ – iteration number
Nonlinear Regression

$$\Delta a_0 = a_{0,j+1} - a_{0,j}, \qquad \Delta a_1 = a_{1,j+1} - a_{1,j}$$

Insert the Taylor series into the equation $y = f(x) + e$:

$$y_i = f(x_i)_j + \frac{\partial f(x_i)_j}{\partial a_0} \Delta a_0 + \frac{\partial f(x_i)_j}{\partial a_1} \Delta a_1 + e$$

$$\frac{\partial f(x_i)_j}{\partial a_0} \Delta a_0 + \frac{\partial f(x_i)_j}{\partial a_1} \Delta a_1 + e = y_i - f(x_i)_j$$

Change to matrix form:

$$\begin{bmatrix} \dfrac{\partial f(x_1)_j}{\partial a_0} & \dfrac{\partial f(x_1)_j}{\partial a_1} \\ \vdots & \vdots \\ \dfrac{\partial f(x_n)_j}{\partial a_0} & \dfrac{\partial f(x_n)_j}{\partial a_1} \end{bmatrix} \begin{bmatrix} \Delta a_0 \\ \Delta a_1 \end{bmatrix} + \begin{bmatrix} e_1 \\ \vdots \\ e_n \end{bmatrix} = \begin{bmatrix} y_1 - f(x_1)_j \\ \vdots \\ y_n - f(x_n)_j \end{bmatrix}$$
Nonlinear Regression

$$[D_f][\Delta A] + [E] = [Y]$$

If we reduce the residual to zero, $[E] \approx 0$:

$$[D_f][\Delta A] = [Y]$$

$$[D_f]^T [D_f][\Delta A] = [D_f]^T [Y]$$

We can then determine the successive $\Delta a$'s, with the stopping criterion

$$|\varepsilon_{a,k}| = \left| \frac{a_{k,j+1} - a_{k,j}}{a_{k,j+1}} \right| \times 100\% < \varepsilon_s$$

for all $a$'s.
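This iteration is the Gauss-Newton method. A compact sketch for the example model $f(x) = a_0(1 - e^{a_1 x})$, using analytical partial derivatives; the data and initial guesses are made-up assumptions (generated near $a_0 = 10$, $a_1 = -0.5$, for which the model is a saturation curve):

```python
import numpy as np

def f(x, a0, a1):
    return a0 * (1.0 - np.exp(a1 * x))

def jacobian(x, a0, a1):
    # Partial derivatives of f with respect to a0 and a1.
    df_da0 = 1.0 - np.exp(a1 * x)
    df_da1 = -a0 * x * np.exp(a1 * x)
    return np.column_stack([df_da0, df_da1])

# Made-up measurements roughly following a0 = 10, a1 = -0.5.
x = np.array([0.25, 0.75, 1.25, 1.75, 2.25])
y = np.array([1.2, 3.1, 4.6, 5.9, 6.7])

a = np.array([8.0, -0.6])          # initial guesses for a0, a1
es = 0.01                          # stopping criterion, percent
for j in range(50):
    Df = jacobian(x, *a)
    r = y - f(x, *a)               # residuals y_i - f(x_i)_j
    da = np.linalg.solve(Df.T @ Df, Df.T @ r)   # normal equations for dA
    a = a + da
    ea = np.abs(da / a) * 100.0    # |(a_{j+1} - a_j) / a_{j+1}| * 100%
    if np.all(ea < es):
        break

print(f"a0 = {a[0]:.4f}, a1 = {a[1]:.4f}, iterations = {j + 1}")
```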
