NMM2270

There are three types of classifications for a differential equation:

Classification by Type

A differential equation with a single independent variable (for dy/dx, x is the independent variable) is an Ordinary Differential Equation.

A differential equation with two or more independent variables is a Partial Differential Equation.

Classification by Order

The order of a differential equation is the order of the highest derivative in the equation
For example, d²y/dx² + 5(dy/dx) = e^x is of second order

Classification by Linearity

A differential equation is linear if:

The dependent variable and all of its derivatives are of the first degree (e.g., if y is the dependent variable and y² appears in the equation, the equation is nonlinear)

The coefficients of the dependent variable's derivatives do not involve the dependent variable (e.g., if y is the dependent variable, we cannot have (y − x)·dy/dx as one of the terms, as y would be in the coefficient of dy/dx)

No nonlinear functions of the dependent variable appear (e.g., if y is our dependent variable, then sin y cannot be a term in the differential equation)

Forms of DE's
Normal Form: d^n y/dx^n = f(x, y, y', ..., y^(n−1))

General Form: F(x, y, y', ..., y^(n)) = 0

The Solution of a DE

Any function ϕ, defined on an interval I and possessing at least n derivatives that are continuous on I, which when substituted into the nth order ODE reduces the equation to an identity, is said to be a solution of the equation on the interval.

Explicit/Implicit Solutions

Explicit solution: any solution in which the dependent variable is expressed solely in terms of the independent variable and constants

Implicit solution: any solution in which the dependent variable is not expressed solely in terms of the independent variable and a constant, but is rather in a form where no single variable is isolated on one side of the equation. For example: x² + y² = C represents an implicit solution

Family of Curves

A family of curves is essentially a solution containing an arbitrary constant, which can represent a set of solutions.

Initial Value Problem

Trying to solve for the arbitrary constants in a differential equation. Normally presented in this form:

Solve: d^n y/dx^n = f(x, y, y', ..., y^(n−1))
Subject to: y(x_0) = y_0, y'(x_0) = y_1, ..., y^(n−1)(x_0) = y_(n−1)
where y_0, y_1, ..., y_(n−1) are arbitrary constants.

The initial conditions are the values we substitute in.

Existence and Uniqueness

To prove existence and uniqueness, set dy/dx = f(x, y). Essentially, proving existence and uniqueness is proving that there is precisely one solution that exists which passes through the point (x_0, y_0).

It is proven as so:

1. Check if f(x, y) is defined (continuous) at (x_0, y_0)
2. Check if ∂f/∂y is defined (continuous) at (x_0, y_0)
3. If both are, then there must be a unique solution that exists!

Separable Equations

A first order differential equation of the form dy/dx = g(x)h(y) is said to be separable. It is solvable by manipulating the equation into the form dy/h(y) = g(x)dx and integrating both sides to reach a solution as so: ∫ dy/h(y) = ∫ g(x)dx

Linear Equations

A first-order DE of the form a_1(x)(dy/dx) + a_0(x)y = g(x) is said to be a linear equation in the dependent variable y.

To solve this, follow these steps:

1. Make the equation of the form dy/dx + P(x)y = f(x)
2. Plug into the formula d/dx[y·e^(∫P(x)dx)] = f(x)·e^(∫P(x)dx) and integrate both sides
3. Solve for y (multiply through by e^(−∫P(x)dx))

Error Function

By definition, we define:

erf(x) = (2/√π) ∫_0^x e^(−t²) dt

erfc(x) = (2/√π) ∫_x^∞ e^(−t²) dt

Also do note: erf(x) + erfc(x) = 1

Exact Equations

A first order differential equation of the form M(x, y)dx + N(x, y)dy = 0 is said to be an exact equation if the expression on the left is an exact differential.

An exact differential here would be one for which ∂M/∂y = ∂N/∂x.

For all exact equations, follow this method of solution:

1. Check if ∂M/∂y = ∂N/∂x holds
2. If it does, then set ∂f/∂x = M(x, y)
3. Find f by integrating both sides wrt x (holding y constant): f(x, y) = ∫ M(x, y)dx + g(y)
4. Now use ∂f/∂y = N(x, y) = ∂/∂y ∫ M(x, y)dx + g'(y)
5. Integrate the equation in (4) wrt y
6. Substitute the result from (5) into the equation in (3)
7. The implicit solution is now f(x, y) = C (f(x, y) is found from the substitution in step 6)

Integrating Factors

For non-exact equations of the form M(x, y)dx + N(x, y)dy = 0, we would want to find an integrating factor μ(x, y) such that

μ(x, y)M(x, y)dx + μ(x, y)N(x, y)dy = 0

is an exact differential.

We can find these integrating factors from two formulas:

If (M_y − N_x)/N is a function of x alone, then μ(x) = e^(∫((M_y − N_x)/N)dx) is an integrating factor.

If (N_x − M_y)/M is a function of y alone, then μ(y) = e^(∫((N_x − M_y)/M)dy) is an integrating factor, where M_y = ∂M/∂y (and the same notation holds for the rest of the subscripted values here).
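As a quick sanity check on these first-order methods, here is a minimal sympy sketch (the specific equations are illustrative choices, not from the course notes, and sympy is assumed to be available):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Separable: dy/dx = x*y  ->  dy/y = x dx  ->  y = C1*exp(x**2/2)
separable = sp.Eq(y(x).diff(x), x * y(x))
print(sp.dsolve(separable, y(x)))

# Linear: dy/dx + 2y = e^x, integrating factor e^(∫2 dx) = e^(2x)
linear = sp.Eq(y(x).diff(x) + 2 * y(x), sp.exp(x))
print(sp.dsolve(linear, y(x)))   # expect y(x) = C1*exp(-2*x) + exp(x)/3
```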

Solutions by Substitutions

Let's say we have a homogeneous (HMG) equation of the form M(x, y)dx + N(x, y)dy = 0. Then the substitution u = y/x or v = x/y can help simplify the equation so that it is separable.

We practically use v = x/y whenever M(x, y) is simpler than N(x, y), and vice versa.

When using a substitution, let's say v = x/y, follow these steps (see the worked example after the steps):

1. Find x in terms of v and y (x = yv)


2. Find the derivative (dx = ydv + vdy)
3. Substitute for all x and dx terms
4. Find a way to separate, and then solve as separable equation
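For instance, using the other substitution u = y/x (an illustrative equation, not from the notes): for (x² + y²)dx − xy·dy = 0, setting y = ux (so dy = u dx + x du) reduces the equation to u du = dx/x, hence u²/2 = ln|x| + C, i.e. y² = x²(2 ln|x| + C).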

Bernoulli's Equation

A Bernoulli equation is of the form

dy/dx + P(x)y = f(x)·y^n

We solve as so (a quick symbolic check follows these steps):

1. Divide both sides by y^n:

   y^(−n)·(dy/dx) + P(x)·y^(1−n) = f(x)

2. Substitute v = y^(1−n):

   v = y^(1−n)
   v' = (1 − n)·y^(−n)·y'

3. Sub into the original equation:

   (1/(1 − n))·v' + P(x)·v = f(x)

4. Solve as a normal linear equation, and reverse the substitution at the end to solve in terms of y
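A minimal sympy check of the method (the equation below is an illustrative choice, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Bernoulli equation with n = 2: dy/dx - y = exp(x)*y**2
# The substitution v = y**(1 - n) = 1/y turns it into the linear equation v' + v = -exp(x)
ode = sp.Eq(y(x).diff(x) - y(x), sp.exp(x) * y(x)**2)
print(sp.dsolve(ode, y(x)))
```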

Linear Models

Growth: P(t) = P_0·e^(kt)

Decay: P(t) = P_0·e^(−kt)

Where P_0 is the initial population size, t the current time, and k some constant.

Law of Cooling: dT/dt = −k(T − T_env)

Where T is the temperature of the item we are looking at, and T_env is the temperature of the surrounding environment. k represents some constant.

RL Circuits: L(di/dt) + Ri = E_0

Where L = the inductance of the inductor in the circuit, di/dt = the rate of change of current with respect to time, R = the resistance of the resistor in the circuit, i = the current flowing through the circuit at a given time, and E_0 is the electromotive force (voltage source applied to the circuit).
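For instance (an illustrative case assuming a constant source E_0 and i(0) = 0): this is a first-order linear equation with integrating factor e^(Rt/L), so i(t) = E_0/R + C·e^(−Rt/L), and with i(0) = 0 this becomes i(t) = (E_0/R)(1 − e^(−Rt/L)).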

RC Circuits: R(dq/dt) + q/C = E_0

Where R = the resistance of the resistor in the circuit, dq/dt = the rate of change of charge with respect to time, q = the charge stored in the capacitor at a given time, C = the capacitance of the capacitor, and E_0 is the electromotive force of the circuit.

Half Life
The time t at which a decaying quantity has dropped to half of its initial amount.
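For the decay model P(t) = P_0·e^(−kt), setting P(t) = P_0/2 gives e^(−kt) = 1/2, so the half-life is t = ln(2)/k.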

Logistic Equation

Used when the growth/decay rate itself depends on the size of the population (for example, limited resources slow growth as the population increases), rather than being a fixed constant.

dP/dt = P(a − bP)

The solution of this differential equation ends up looking like:

P(t) = aP_0 / (bP_0 + (a − bP_0)e^(−at))

Note that as t → ∞, P(t) → a/b, the carrying capacity.

Boundary Value Problems

Similar to an Initial Value Problem, however you are given multiple points: one condition for each derivative order below the highest. Essentially, if you have a DE of order n, you will be given conditions y(x_0) = y_0, y'(x_1) = y_1, ..., y^(n−1)(x_(n−1)) = y_(n−1)

And you will solve by means of systems of equations

For the following DE:

a_n(x)(d^n y/dx^n) + a_(n−1)(x)(d^(n−1)y/dx^(n−1)) + ... + a_0(x)y = g(x)

Here are some conditions that must be satisfied to solve a BVP:

a_n(x), a_(n−1)(x), ..., a_0(x), g(x) must all be continuous on an interval I

a_n(x) ≠ 0 for all x on I

If x = x_0 is any point on this interval, then a solution y(x) of the initial-value problem exists on the interval and is unique

Differential Operators

Let dy/dx = Dy

So, we have some differential equation like so:

a_n(x)D^n y + a_(n−1)(x)D^(n−1) y + ... + a_0(x)y
= (a_n(x)D^n + a_(n−1)(x)D^(n−1) + ... + a_0(x))y

We can refer to the term (a_n(x)D^n + a_(n−1)(x)D^(n−1) + ... + a_0(x)) as L and refer to the differential equation as a whole as Ly

Superposition Principle

Let y_1, y_2, ..., y_k be solutions of the homogeneous nth order differential equation on an interval I. Then the linear combination

y = c_1 y_1(x) + c_2 y_2(x) + ... + c_k y_k(x)

where the c_i are arbitrary constants, is also a solution of the DE on the interval.

Linear Dependence/Independence

A set of functions f_1(x), f_2(x), ..., f_n(x) is said to be linearly dependent on an interval I if there exist constants c_1, c_2, ..., c_n, not all zero, such that

c_1 f_1(x) + c_2 f_2(x) + ... + c_n f_n(x) = 0

for every x in the interval. If not dependent, it must be linearly independent. We can use the Wronskian to test for linear independence.

Wronskian

Suppose each of the functions f_1(x), f_2(x), ..., f_n(x) possesses at least n − 1 derivatives. The determinant

W(f_1, f_2, ..., f_n) =
| f_1        f_2        ⋯  f_n        |
| f_1'       f_2'       ⋯  f_n'       |
| ⋮          ⋮             ⋮          |
| f_1^(n−1)  f_2^(n−1)  ⋯  f_n^(n−1)  |

is called the Wronskian.

Any set y_1, y_2, ..., y_n of n linearly independent solutions of the homogeneous nth order DE on an interval I is said to be a fundamental set of solutions on the interval. We can determine linear dependence or independence by calculating the Wronskian for the set of functions. If it simplifies to 0, the set is dependent and not a fundamental set of solutions. If it is not 0, the set is independent and is a fundamental set.
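A small sympy check of this test (the function pair is an illustrative choice, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')

# Wronskian of {e^x, e^(2x)}: e^x*2e^(2x) - e^(2x)*e^x = e^(3x) != 0,
# so the pair is linearly independent and forms a fundamental set for y'' - 3y' + 2y = 0
W = sp.wronskian([sp.exp(x), sp.exp(2*x)], x)
print(sp.simplify(W))   # exp(3*x)
```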

Particular Solution

When we have a non-homogeneous equation


a_n(x)y^(n) + a_(n−1)(x)y^(n−1) + ⋯ + a_0(x)y = g(x)

What we need to do is find the fundamental set of solutions of the homogeneous counterpart, which is

a_n(x)y^(n) + a_(n−1)(x)y^(n−1) + ⋯ + a_0(x)y = 0

Then find the PARTICULAR solution to the non-homogeneous equation (how to compute this, we will discuss later), and add the fundamental set of solutions y_1, y_2, ⋯, y_n with the particular solution y_p to get the solution to the DE, as so

y = c_1 y_1(x) + c_2 y_2(x) + ⋯ + c_n y_n(x) + y_p(x)

Reduction of Order

This method is used to solve linear second order differential equations, if we already know one solution.

Essentially, for a formula in the form:


y'' + P(x)y' + Q(x)y = 0

and we already have the solution y_1(x), we can solve for the second solution y_2(x) as so:

y_2(x) = y_1(x) ∫ (e^(−∫P(x)dx) / y_1²(x)) dx
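For example (an illustrative case, not from the notes): for y'' − 4y' + 4y = 0 with y_1(x) = e^(2x), we have P(x) = −4, so e^(−∫P(x)dx) = e^(4x) and y_2(x) = e^(2x) ∫ e^(4x)/e^(4x) dx = e^(2x) ∫ 1 dx = x·e^(2x).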

Linear Equations with Constant Coefficients

Essentially, when we have a higher order equation, we replace each derivative with a power of a variable m (the kth derivative becomes m^k) to obtain the Auxiliary Equation.

So, for the differential equation:


a_2 y'' + a_1 y' + a_0 y = 0

The auxiliary equation is

a_2 m² + a_1 m + a_0 = 0

This should leave us with two solutions m_1 and m_2, which will leave us with an equation in the form:

y(x) = c_1 e^(m_1 x) + c_2 e^(m_2 x)

We have three potential cases:

Case 1

m_1 and m_2 are both real and distinct (so b² − 4ac > 0), which leaves us with

y(x) = c_1 e^(m_1 x) + c_2 e^(m_2 x)

Case 2

Only one root m_1 can be found (b² − 4ac = 0).

We use the reduction of order formula to solve for the second solution in this case:

y_2(x) = e^(m_1 x) ∫ (e^(2m_1 x) / e^(2m_1 x)) dx = x·e^(m_1 x)

⇒ y(x) = c_1 e^(m_1 x) + c_2 x·e^(m_1 x)
Case 3

There are no real solutions (b² − 4ac < 0).

We then solve using the quadratic formula, and utilize i = √(−1) to provide complex solutions.

Assuming our solutions are of the form α ± iβ, we get

y(x) = c_1 e^((α+iβ)x) + c_2 e^((α−iβ)x)

⇒ y(x) = c_1 e^(αx) e^(iβx) + c_2 e^(αx) e^(−iβx)

But remember, e^(iθ) = cos θ + i sin θ.

So we get:

y(x) = c_1 e^(αx)(cos(βx) + i sin(βx)) + c_2 e^(αx)(cos(βx) − i sin(βx))

⇒ y(x) = e^(αx)((c_1 + c_2) cos(βx) + (c_1 − c_2)i sin(βx))

Now say A = c_1 + c_2, B = (c_1 − c_2)i

y(x) = e^(αx)(A cos(βx) + B sin(βx))
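As an illustrative example (not from the notes): y'' + 2y' + 5y = 0 gives m² + 2m + 5 = 0, so m = −1 ± 2i (α = −1, β = 2), and y(x) = e^(−x)(A cos(2x) + B sin(2x)).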

Higher Order DEs

Now, if we have a DE of order higher than 2, how do we solve it?

1. Put it into auxiliary form


2. Utilizing the Remainder Theorem (explained below in case you don't remember), find a factor (m − k) that divides the auxiliary equation without a remainder
3. Divide the auxiliary equation by that factor
4. You now have a lower order auxiliary equation and one of the solutions to the DE

e.g.

m³ + m² + m = 0

⇒ m(m² + m + 1) = 0

We know m_1 = 0

Then the other two we can solve from the quadratic.

Remainder Theorem

For a polynomial f(x) divided by (x − k), the remainder is f(k).

We would use this to find a k where the remainder is 0, and divide by that factor.

Undetermined Coefficients

If we have a DE in the form


a_n y^(n) + a_(n−1) y^(n−1) + ⋯ + a_1 y' + a_0 y = g(x)

The general solution is y(x) = y_c + y_p, where y_c is the solution to the homogeneous component and y_p is the particular solution. Now, how can we find the particular solution?

The method of undetermined coefficients is only doable in the following conditions:

the coefficients a_i, i = 0, 1, 2, ⋯, n are constants

g(x) is a constant, a polynomial function, an exponential function e^(αx), a sine or cosine function, or finite sums and products of these functions

We can essentially GUESS a potential solution very reasonably in this case.

Essentially, imagine we have g(x) = e^(2x). We can guess a solution of y_p = Ae^(2x), find its derivatives up to the highest order in the DE, plug it in, and solve for A to find the actual particular solution.

Some other examples:

g(x) = 1, Guess: A
g(x) = x², Guess: Ax² + Bx + C

So on and so forth. A lot of these will be on your formula sheet so do not fret.

But, what if the guess is the same as one of your homogeneous solutions? In such a case, we must multiply the guess by x (and by a higher power of x if the duplication persists).

Variation of Parameters

For some equation ay'' + by' + cy = f(t):

1. Find y_c by solving ay'' + by' + cy = 0.

2. Find y_p. Say y_p = u_1 y_1 + u_2 y_2, where y_1 and y_2 are the solutions from y_c. Then

   y_p' = u_1 y_1' + u_2 y_2' + u_1' y_1 + u_2' y_2

   Let us impose the condition that u_1' y_1 + u_2' y_2 = 0.

   This leads us to: y_p' = u_1 y_1' + u_2 y_2', and y_p'' = u_1 y_1'' + u_2 y_2'' + u_1' y_1' + u_2' y_2'.

   Now plug these into the original equation:

   a(u_1 y_1'' + u_2 y_2'' + u_1' y_1' + u_2' y_2') + b(u_1 y_1' + u_2 y_2') + c(u_1 y_1 + u_2 y_2) = f(t)

   Now focus on the terms multiplying u_1 and u_2 within each bracket. We can factor them out, then look for all other common terms and factor them out, as shown below:

   u_1(ay_1'' + by_1' + cy_1) + u_2(ay_2'' + by_2' + cy_2) + a(u_1' y_1' + u_2' y_2') = f(t)

   Now notice that y_1 and y_2 are solutions to the differential equation ay'' + by' + cy = 0. Thus, the terms in the brackets MUST equal 0, leaving us with

   a(u_1' y_1' + u_2' y_2') = f(t)

3. Solve the system of linear equations

   y_1 u_1' + y_2 u_2' = 0
   y_1' u_1' + y_2' u_2' = f(t)/a

   This can be solved through Cramer's rule s.t.

   u_1' = W_1/W,  u_2' = W_2/W

   where W is the Wronskian of y_1 and y_2, and

   W = | y_1  y_2 ; y_1'  y_2' |,  W_1 = | 0  y_2 ; f(t)/a  y_2' |,  W_2 = | y_1  0 ; y_1'  f(t)/a |

   Integrate u_1' and u_2' to recover u_1 and u_2.

4. Combine the solutions so that y = y_c + y_p

Cauchy-Euler Equations

A Cauchy-Euler equation is an equation in the form:

a_n x^n y^(n) + a_(n−1) x^(n−1) y^(n−1) + ⋯ + a_2 x² y'' + a_1 x y' + a_0 y = 0

To solve this, we assume y = x^r, so that

y' = r x^(r−1)
y'' = r(r − 1) x^(r−2)

We then substitute these values into the original differential equation and it ends up simplifying nicely. Let us see an example with the second-order equation a x² y'' + b x y' + c y = 0:

a x² · r(r − 1) x^(r−2) + b x · r x^(r−1) + c x^r = 0

ar(r − 1) x^r + br x^r + c x^r = 0

x^r (ar(r − 1) + br + c) = 0

Thus, we know ar(r − 1) + br + c = 0 and this is a quadratic that we can solve for r in.

There are three cases to study:

Case 1: r_1, r_2 real, r_1 ≠ r_2

In this case, we will use x^(r_1) and x^(r_2) as the sol'ns to the ODE.

Case 2: only one root r_1

In this case, we use x^(r_1) as one solution, and x^(r_1)·ln x as the other.

Case 3: r = α ± βi

This gives us the solution x^(α+βi). Then, through this derivation we can get another solution:

x^α x^(βi)
= x^α e^(βi ln x)
= x^α (cos(β ln x) + i sin(β ln x))

With this, we can reach a final solution in the form: y = x^α (C_1 cos(β ln x) + C_2 sin(β ln x))

Another way to solve Cauchy-Euler Equations is by performing the substitution t = ln x, s.t. x = e^t.

We then say

dy/dx = (dy/dt)·(dt/dx) = (1/x)(dy/dt)

And

d²y/dx² = −(1/x²)(dy/dt) + (1/x²)(d²y/dt²)

If we substitute this back into the original Cauchy-Euler equation, they should cancel out and we should be left with a differential
equation with constant coefficients.
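A quick symbolic check of the x^r approach (the specific equation is an illustrative choice, not from the notes, and sympy is assumed to be available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Cauchy-Euler example: x^2 y'' - 2x y' + 2y = 0
# Auxiliary quadratic: r(r - 1) - 2r + 2 = 0  ->  (r - 1)(r - 2) = 0  ->  r = 1, 2
ode = sp.Eq(x**2 * y(x).diff(x, 2) - 2*x*y(x).diff(x) + 2*y(x), 0)
print(sp.dsolve(ode, y(x)))   # expect y(x) = C1*x + C2*x**2 (possibly factored)
```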

Linear Models of Higher Order


For a differential equation of the form

a_2 (d²y/dt²) + a_1 (dy/dt) + a_0 y = g(t),   y(t_0) = y_0, y'(t_0) = y_1

g(t) is called the input or driving function

y(t) is the output or response of the system on the interval I subject to the initial conditions

Spring/Mass Systems: Free Undamped Motion

A spring/mass system consists of a flexible spring suspended vertically from a rigid support with a mass m attached to its free end; the amount of stretch s of the spring will depend on the mass. Hooke's Law states that F = −ks, where k > 0 is a constant characteristic of the spring, called the spring constant, and F is the restoring force exerted by the spring.

Newton's Second Law

To determine the force F, we can say that F = ma, where a is the acceleration of the mass. In the case of a spring/mass system, that would be g, the acceleration due to gravity of 9.8 m/s².

Equilibrium of Spring/Mass System

The point of equilibrium is the position at which the mass hangs at rest (the spring stops oscillating). This is when F_g = F_s, or when mg = ks, or mg − ks = 0.

If the mass is displaced by an amount x from its equilibrium, the restoring force of the spring is then k(x + s)

Assuming no external forces acting on the system, we can equate Newton's second law with the net, or resultant, force of the restoring
force and the weight.
m(d²x/dt²) = −k(s + x) + mg = −kx + mg − ks = −kx

We can remove the second and third terms as we know that mg − ks = 0

The negative sign indicates that the restoring force of the spring acts opposite to the direction of motion.

Differential Equation of Undamped Motion

By dividing the equation m(d²x/dt²) = −kx by the mass m we obtain the second-order differential equation d²x/dt² + (k/m)x = 0, or

d²x/dt² + ω²x = 0,  where ω² = k/m

This equation is said to describe simple harmonic motion or free undamped motion. Two obvious initial conditions associated are x(0) = x_0, the amount of initial displacement, and x'(0) = x_1, the initial velocity of the mass.

Solution and Equation of Motion

To solve d²x/dt² + ω²x = 0, we note that the solutions of the auxiliary equation m² + ω² = 0 are the complex numbers m_1 = ωi, m_2 = −ωi. Thus, the general solution is:

x(t) = c_1 cos ωt + c_2 sin ωt

The period of free vibrations described by the above equation is T = 2π/ω and the frequency is f = 1/T = ω/2π.
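For example (illustrative numbers): if k/m = 16 s⁻², then ω = 4 rad/s, the period is T = 2π/ω = π/2 s, and the frequency is f = ω/2π = 2/π Hz.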

Double Spring Systems

Suppose two parallel springs with constants k_1, k_2 are attached to a common rigid support and then to a metal plate of negligible mass. A single mass m is attached to the center of the plate in the double-spring arrangement (essentially hanging from both springs simultaneously). If the mass is displaced from its equilibrium position by x, the net restoring force is −(k_1 + k_2)x

Thus, we say that

k_eff = k_1 + k_2

is the effective spring constant of the system.

How about springs in series? Essentially, if a mass is hanging from spring 1, which is hanging from another spring 2, and it stretches spring 1 by an amount x_1 and spring 2 by an amount x_2, then we can say that x = x_1 + x_2.

This essentially tells us that since F = −kx, x = F/(−k), and thus:

F/k_eff = F/k_1 + F/k_2, or 1/k_eff = 1/k_1 + 1/k_2

Thus, k_eff = k_1·k_2 / (k_1 + k_2)

Systems with Variable Spring Constants

This will not be on the exam so feel free to skip this section
It is reasonable to expect that when a spring/mass system is in motion for a long period the spring would weaken; in other words, the spring constant would vary/decay with time. In one model for an aging spring, the spring constant k is replaced by the decreasing function K(t) = ke^(−αt). The linear differential equation mx'' + ke^(−αt)x = 0 however cannot be solved with the methods we know.

Free Damped Motion in Spring/Mass Systems

Let us add the term β(dx/dt) to represent friction (of air, for example), like so:

m(d²x/dt²) = −kx − β(dx/dt)

or

d²x/dt² + 2λ(dx/dt) + ω²x = 0

Where 2λ = β/m, ω² = k/m

This is now very easy to solve by turning it into an auxiliary equation.

Some terminology:

When λ² − ω² > 0, the spring/mass system is called "overdamped"

When λ² − ω² = 0, the spring/mass system is called "critically damped"

When λ² − ω² < 0, the spring/mass system is called "underdamped"

DE of Driven Motion with Damping

Now, say we added a driving force, i.e., an external force other than the damping, to the system. This will look like:

m(d²x/dt²) = −kx − β(dx/dt) + F(t), or, after dividing by m,

d²x/dt² + 2λ(dx/dt) + ω²x = F(t)/m

Where F(t) is the external force.

Definition of the Laplace Transform

Let f be a function defined on [0, ∞). Then the function F defined by:

F(s) = ∫_0^∞ e^(−st) f(t) dt

is said to be the Laplace transform of f. The domain of F(s) is the set of values of s for which the improper integral converges. e^(−st) is known as the kernel of the Laplace transform.

The notation is that L{f (t)} = F (s)

A Laplace Transform is a linear transform, which essentially means:


L{c 1 f (t) + c 2 g(t)} = c 1 L{f (t)} + c 2 L{g(t)}

Some basic Laplace transforms to memorize:


L{1} = 1/s,  s > 0

L{t} = 1/s²

L{e^(ct)} = 1/(s − c),  s > c for any constant c

L{sin(ct)} = c/(s² + c²),  s > 0

L{cos(ct)} = s/(s² + c²),  s > 0

L{e^(ict)} = s/(s² + c²) + i·c/(s² + c²)

This comes from the property that e^(ict) = cos(ct) + i sin(ct), hence L{e^(ict)} = L{cos(ct)} + iL{sin(ct)}.

Change of Scale Theorem


L{f(at)} = (1/a)·F(s/a)

For example, since we know L{e^t} = 1/(s − 1), we also know that L{e^(at)} = (1/a) · 1/(s/a − 1) = 1/(s − a).
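A minimal sympy check of a few of the table entries above (assuming sympy is available):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

print(sp.laplace_transform(sp.exp(3*t), t, s, noconds=True))   # 1/(s - 3)
print(sp.laplace_transform(sp.cos(2*t), t, s, noconds=True))   # s/(s**2 + 4)
print(sp.inverse_laplace_transform(1/(s**2 + 1), s, t))        # sin(t)*Heaviside(t)
```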

Inverse Transforms and Transforms of Derivatives


f(t) = L⁻¹{F(s)}

Some examples...

1 = L⁻¹{1/s}

tⁿ = L⁻¹{n!/s^(n+1)}

sin kt = L⁻¹{k/(s² + k²)}

And so on.
Linearity of the Inverse Laplace Transformation
L⁻¹{αF(s) + βG(s)} = αL⁻¹{F(s)} + βL⁻¹{G(s)}

Partial fractions are very useful for finding inverse Laplace transforms, as we can use this linearity to split an expression into simpler fractions, find the inverse transform of each one individually, and sum them.

Transform of a Derivative

L{f'(t)} = sF(s) − f(0)

for a single derivative, and for higher order:

L{f^(n)(t)} = sⁿF(s) − s^(n−1)f(0) − ⋯ − f^(n−1)(0)

Where F(s) = L{f(t)}

Solving Linear ODEs using Laplace Transforms

The laplace transform is ideally suited for solving linear initial value problems in which the differential equation has constant
coefficients.

Let's say we have a differential equation of the form:


a_n (dⁿy/dtⁿ) + a_(n−1) (d^(n−1)y/dt^(n−1)) + ⋯ + a_0 y = g(t)

Take the Laplace transform of both sides to get

a_n L{y^(n)} + a_(n−1) L{y^(n−1)} + ⋯ + a_0 L{y} = G(s)

We can turn this into an algebraic equation using the Laplace transform of a derivative formula derived above:

a_n [sⁿ Y(s) − ⋯ − y^(n−1)(0)] + ⋯ + a_0 Y(s) = G(s)

Y(s) = (Q(s) + G(s)) / P(s),   P(s) = a_n sⁿ + a_(n−1) s^(n−1) + ⋯ + a_0

Where Q(s) is a polynomial in s of degree less than or equal to n − 1 consisting of the various products of the coefficients a_i, i = 1, ..., n, and the prescribed initial conditions.

The laplace transform of a linear differential equation with constant coefficients becomes an algebraic equation in Y (s)

Steps to solving an IVP by the Laplace Transform

1. Find the unknown y(t) that satisfies a DE and initial conditions
2. Apply the Laplace transform L
3. The transformed DE becomes an algebraic equation in Y(s)
4. Solve the transformed equation for Y(s)
5. Apply the inverse transform L⁻¹

And there we go, we have the solution y(t) of the original IVP.
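A small illustrative example (not from the notes): for y' + y = 0 with y(0) = 1, transforming gives sY(s) − 1 + Y(s) = 0, so Y(s) = 1/(s + 1), and applying L⁻¹ gives y(t) = e^(−t).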

Behavior of F(s) as s → ∞

If a function f is piecewise continuous on [0, ∞) and of exponential order c, and L{f(t)} = F(s), then lim_(s→∞) F(s) = 0

Translation Theorems

Evaluating transforms such as L{e^(5t) t³} and L{e^(−2t) cos 4t} is straightforward provided we know L{t³} and L{cos 4t}, which we do. In general, if we know L{f(t)} = F(s), finding L{e^(αt) f(t)} is relatively simple: we translate F(s) to F(s − α). In other words:

If L{f(t)} = F(s) exists for s > c and α is any constant, then L{e^(αt) f(t)} = F(s − α) for s > α + c

This must also mean that

L⁻¹{F(s − a)} = L⁻¹{F(s)|_(s→s−a)} = e^(at) f(t)

Unit Step Function

The unit step function u(t − a) is defined to be

u(t − a) = { 0,  0 ≤ t < a
             1,  t ≥ a

When a function f is multiplied by u(t − a), the unit step function "turns off" a portion of the graph of that function. This is because it makes y = 0 on [0, a).

And,

L{u(t − a)} = e^(−as)/s

For some function f(t), shifting it by a and multiplying it by the unit step function like so:

f(t − a)u(t − a) = { 0,         0 ≤ t < a
                     f(t − a),  t ≥ a

we get the function of t shifted to the right by a units, and having all values on [0, a) be equal to 0.

Second Translation Theorem

If L{f(t)} = F(s) and a is a positive constant, then

L{f(t − a)u(t − a)} = e^(−as) F(s)

Inverse Form of the Second Translation Theorem

L⁻¹{e^(−as) F(s)} = f(t − a)u(t − a)

Additional Operational Properties

Derivatives of Transforms

If the function f is piecewise continuous on [0, ∞) and of exponential order c, and L{f(t)} = F(s), then for n = 1, 2, 3, ... and s > c

L{tⁿ f(t)} = (−1)ⁿ (dⁿ/dsⁿ) F(s)

Transforms of Integrals

If functions f and g are piecewise continuous on the interval [0, ∞), then the convolution of f and g, denoted by the symbol f ∗ g, is a function defined by the integral

f ∗ g = ∫_0^t f(τ)g(t − τ) dτ

Because we are integrating with respect to τ, the convolution is a function of t.

The Laplace transform of the integral of f(t), where L{f(t)} = F(s), is

L{∫_0^t f(τ) dτ} = F(s)/s

Properties of the Convolution

As the notation f ∗ g suggests, the convolution is often interpreted as a generalized product of two functions f and g. Thus, convolution has some properties similar to normal multiplication, as so:

f ∗ g = g ∗ f (commutative law)
f ∗ (g ∗ h) = (f ∗ g) ∗ h (associative law)
f ∗ (g + h) = (f ∗ g) + (f ∗ h) (distributive law)
(cf) ∗ g = f ∗ (cg) = c(f ∗ g), where c is a constant

However, be careful: f ∗ 1 ≠ f.

Convolution Theorem

If f and g are both piecewise continuous for t ≥ 0, then the Laplace transform of a sum f + g is the sum of the individual transforms. However, this does not hold for the product fg. We will see in the next theorem, called the convolution theorem, that for the convolution of f and g we can separate them as so:

L{f ∗ g} = L{f(t)}·L{g(t)} = F(s)G(s)

Inverse Form of Convolution Theorem

L⁻¹{F(s)G(s)} = f ∗ g
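For example (illustrative): with f(t) = t and g(t) = e^t, we have f ∗ g = ∫_0^t τ e^(t−τ) dτ = e^t − t − 1, and indeed L{e^t − t − 1} = 1/(s − 1) − 1/s² − 1/s = 1/(s²(s − 1)) = F(s)G(s).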

Integrodifferential Equations

An equation that combines both integrals and derivatives of a function. These equations often arise in the modeling of systems that depend on both the rate of change and the cumulative effects of variables, such as electrical circuits, population models, and mechanical systems. Taking the Laplace transform of one of these can make it much easier to solve, as it can get rid of the integral (in the form of a convolution) and the derivatives as well.

Transform of a Periodic Function

If a periodic function f has period T, T > 0, then f(t + T) = f(t). The Laplace transform of a periodic function can be obtained by integration over one period:

L{f(t)} = (1/(1 − e^(−sT))) ∫_0^T e^(−st) f(t) dt

Dirac Delta Function

Mechanical systems are often acted on by an external force of large magnitude that acts only for a very short period of time. The graph
of the piecewise function

δ_a(t − t_0) = { 0,       0 ≤ t < t_0 − a
                 1/(2a),  t_0 − a ≤ t < t_0 + a
                 0,       t ≥ t_0 + a

where a > 0, t_0 > 0, could serve as a model of such a force. For a small value of a, δ_a(t − t_0) is essentially a constant function of large magnitude that is "on" for just a very short period of time, around t_0. This function is called a unit impulse since it possesses the integration property ∫_0^∞ δ_a(t − t_0) dt = 1. In practice, it is convenient to work with another type of unit impulse, a "function" that approximates δ_a(t − t_0) and is defined by the limit

δ(t − t_0) = lim_(a→0) δ_a(t − t_0)

The function on the left can be characterized by two properties:

(i) δ(t − t_0) = { ∞, t = t_0 ; 0, t ≠ t_0 }

(ii) ∫_0^∞ δ(t − t_0) dt = 1

The Laplace transform of the Dirac delta function is

L{δ(t − t_0)} = e^(−s t_0)

An additional property that applies to the Dirac delta function is

(iii) ∫_0^∞ f(t) δ(t − t_0) dt = f(t_0)

(iii) is called the shifting property

Orthogonal Functions

A Taylor Series (Remember Calc II) is an infinite expansion in powers of (x − a). A Fourier Series is an infinite expansion of a function
in trigonometric functions.

Our goal is to take a function (preferably periodic), defined over a closed interval [a, b] and write it as the sum of sines and cosines.

f(x) = a_0/2 + Σ_(n=1)^∞ (a_n cos(nπx/p) + b_n sin(nπx/p))

Our hope is that it will be sufficient to take a finite number of terms (partial sum) to properly approximate the function.

To actually find Fourier Series of functions, we need the properties of certain sets of sine/cosine functions, namely Orthogonality and Completeness. Let us start with a generalization:

Compare with vectors: functions are a generalization of vectors


Define inner products
Define Orthogonality

Inner Product

The inner product of vectors u and v is a scalar defined as


(u, v) = u_1 v_1 + u_2 v_2 + u_3 v_3 = Σ_(k=1)^3 u_k v_k

Properties of inner products

(u, v) = (v, u)

(ku, v) = k(u, v) where k is a scalar

(u, u) = 0 if u = 0 and (u, u) > 0 if u ≠ 0

(u + v, w) = (u, w) + (v, w)

The inner product of functions on interval [a, b] is


(f_1(x), f_2(x)) = ∫_a^b f_1(x) f_2(x) dx

Two vectors or functions are said to be orthogonal if their inner product is 0.

Unlike with vectors, orthogonality of functions has no geometrical significance. However, we can still think in terms of vector properties.

How to Determine Orthogonality of Sets

A set of real-valued functions ϕ_0(x), ϕ_1(x), ϕ_2(x), ... is said to be orthogonal on [a, b] if

(ϕ_m, ϕ_n) = ∫_a^b ϕ_m(x) ϕ_n(x) dx = 0,  m ≠ n

This essentially means that for a set to be orthogonal, it must hold for any pair of functions within the set.

Norm of a Function

The norm, or length ||u|| of a vector u can be expressed by ||u|| = √(u, u)

Similarly, the norm of a function ϕ_n(x) in an orthogonal set is ||ϕ_n(x)|| = √(∫_a^b ϕ_n²(x) dx)
a
Orthogonal Series Expansion

An orthogonal series expansion for a generalized Fourier Series is


f(x) = Σ_(n=0)^∞ c_n ϕ_n(x)

where

c_n = (∫_a^b f(x) ϕ_n(x) dx) / ||ϕ_n(x)||²

With inner product notation, f(x) becomes

f(x) = Σ_(n=0)^∞ ((f, ϕ_n) / ||ϕ_n(x)||²) ϕ_n(x)

Linearly Independent Functions

Space vectors in i, j, k are linearly independent vectors

Functions in an orthogonal set are linearly independent functions

Also, the functions appearing in an orthogonal series expansion of a function are linearly independent, and the same holds for the generalized Fourier series

Orthogonal Set/Weight Function

A set of real-valued functions ϕ 0 (x), ϕ 1 (x), ϕ 2 (x), . . . is said to be orthogonal with respect to a weight function w(x) on an interval [a, b] if
∫_a^b w(x) ϕ_m(x) ϕ_n(x) dx = 0,  m ≠ n

Complete Sets

A set of orthogonal functions is said to be complete if it satisfies the following property:

If a function f(x) is orthogonal to every function ϕ_n(x) in the set, then f(x) must be the zero function:

(f, ϕ_n) = ∫_a^b f(x) ϕ_n(x) w(x) dx = 0 for all n  ⟹  f(x) = 0

Why Completeness is Important:

1. If the set of orthogonal functions {ϕ_n(x)} is not complete, it means there exist nonzero functions f(x) that are orthogonal to every member of the set.
2. Such functions would result in the Fourier coefficients c_n being zero for all n, which would render the series representation ineffective:

c_n = (∫_a^b f(x) ϕ_n(x) w(x) dx) / ||ϕ_n(x)||² = 0,  n = 0, 1, 2, …

Assumption of Completeness:

To avoid such scenarios, we assume the set of orthogonal functions is complete. This ensures that any continuous function f(x) on [a, b] can be expressed as an orthogonal series expansion using the set {ϕ_n(x)}:

f(x) = Σ_(n=0)^∞ c_n ϕ_n(x),

where the coefficients c_n are nonzero if f(x) is not orthogonal to ϕ_n(x).

Definition of a Complete Set:

A set of orthogonal functions {ϕ_n(x)} on [a, b] is complete if the only continuous function orthogonal to every member of the set is the zero function f(x) = 0.

Fourier Series
The Fourier Series of a function f defined on the interval (−p, p) is given by

f(x) = a_0/2 + Σ_(n=1)^∞ (a_n cos(nπx/p) + b_n sin(nπx/p))

where

a_0 = (1/p) ∫_(−p)^p f(x) dx

a_n = (1/p) ∫_(−p)^p f(x) cos(nπx/p) dx

b_n = (1/p) ∫_(−p)^p f(x) sin(nπx/p) dx

The General Idea of a Fourier Series

Essentially, a fourier series expresses a periodic function as a sum of sine and cosine terms. This is based on the orthogonality of the
sine and cosine functions. Fourier series are useful for solving boundary-value problems in physics and engineering.

The set of functions

{1, cos(πx/p), sin(πx/p), cos(2πx/p), sin(2πx/p), ...}

is orthogonal over the interval [−p, p]

a_0/2 is the average value of f(x) over [−p, p]

a_n and b_n are Fourier coefficients that capture the projection of f(x) onto the cosine and sine terms.

To compute Fourier coefficients, we use the formulas provided above.
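A minimal sympy sketch of computing these coefficients (the choice f(x) = x on (−π, π) is illustrative, not from the notes; sympy is assumed to be available):

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)
p = sp.pi  # half-period for the interval (-pi, pi)

f = x  # an odd function, so we expect a pure sine series
a0 = sp.integrate(f, (x, -p, p)) / p
an = sp.integrate(f * sp.cos(n * sp.pi * x / p), (x, -p, p)) / p
bn = sp.integrate(f * sp.sin(n * sp.pi * x / p), (x, -p, p)) / p
print(a0, sp.simplify(an), sp.simplify(bn))
# expect a0 = 0, an = 0, bn = 2*(-1)**(n + 1)/n
```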

Fourier Cosine and Sine Series

A function f is even if f (−x) = f (x) and odd if f (−x) = −f (x).

Essentially, an even function is symmetric about the y-axis, and an odd function is symmetric about the origin.

Here are some properties of even and odd functions:


a. The product of two even functions is even
b. The product of two odd functions is even
c. The product of an even function and an odd function is odd
d. The sum of two even functions is even
e. The sum of two odd functions is odd
f. If f is even, then ∫_(−a)^a f(x)dx = 2∫_0^a f(x)dx
g. If f is odd, then ∫_(−a)^a f(x)dx = 0

The Fourier series of an even function on (−p, p) is the cosine series

The Fourier series of an odd function on (−p, p) is the sine series

The cosine series is

f(x) = a_0/2 + Σ_(n=1)^∞ a_n cos(nπx/p)

where a_0 = (2/p) ∫_0^p f(x)dx and a_n = (2/p) ∫_0^p f(x) cos(nπx/p) dx

The sine series is

f(x) = Σ_(n=1)^∞ b_n sin(nπx/p)

where b_n = (2/p) ∫_0^p f(x) sin(nπx/p) dx

If we are interested in a function defined on (0, L) rather than (−p, p), we may supply an arbitrary definition of f on (−L, 0) by either:
i. Reflecting the graph of the function about the y-axis onto (−L, 0), so the function is even on (−L, L)
ii. Reflecting the graph of the function through the origin onto (−L, 0), so the function is odd on (−L, L)
iii. Defining f on (−L, 0) by f(x) = f(x + L)
