NMM2270
Classification by Type
A differential equation with a single independent variable (e.g., $\frac{dy}{dx}$, where x is the independent variable) is an ordinary differential equation.
A differential equation with two or more independent variables is a partial differential equation.
Classification by Order
The order of a differential equation is the order of the highest derivative in the equation. For example,
$\frac{d^2y}{dx^2} + 5\frac{dy}{dx} = e^x$
is of second order.
Classification by Linearity
A differential equation is linear if:
The dependent variable and all of its derivatives are of the first degree (e.g., if y were the dependent variable, then terms such as $y^2$ or $\left(\frac{dy}{dx}\right)^2$ could not appear)
Every term is a linear function of the dependent variable (e.g., if y were our dependent variable, then $\sin y$ cannot be a term in the differential equation)
Forms of DE's
Normal Form:
$\frac{d^ny}{dx^n} = f(x, y, y', \ldots, y^{(n-1)})$
The Solution of a DE
Any function ϕ, defined on an interval I and having at least n derivatives that are continuous on I, which when substituted into the nth-order ODE reduces the equation to an identity, is said to be a solution of the equation on the interval.
Explicit/Implicit Solutions
Explicit solution: any solution in which the dependent variable is expressed solely in terms of the independent variable and constants.
Implicit solution: any solution in which the dependent variable is not expressed solely in terms of the independent variable and a constant, but is rather in a form where there is no single variable on one side of the equation. For example, $x^2 + y^2 = C$ represents an implicit solution.
Family of Curves
A family of curves is essentially a solution containing an arbitrary constant, which can represent a set of solutions.
Initial Value Problems
Trying to solve for the arbitrary constant in a differential equation. Normally presented in this form:
Solve: $\frac{dy}{dx} = f(x, y)$
Subject to: $y(x_0) = y_0$
where $x_0$, $y_0$ are arbitrary constants.
Existence and Uniqueness
To prove existence and uniqueness, set $\frac{dy}{dx} = f(x, y)$. Then:
1. Check if $f(x, y)$ is defined (continuous) on a region containing $(x_0, y_0)$
2. Check if $\frac{\partial f}{\partial y}$ is defined (continuous) on that region
Essentially, proving existence and uniqueness is proving that there is precisely one solution that exists which passes through the point $(x_0, y_0)$.
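These continuity checks can be done symbolically. Below is a minimal sketch (not from the original notes) using Python's sympy; the right-hand side $f(x, y) = x\sqrt{y}$ is a hypothetical example chosen because $\partial f/\partial y$ fails to exist at y = 0:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x * sp.sqrt(y)   # hypothetical right-hand side of dy/dx = f(x, y)

# Existence/uniqueness checks: f and df/dy should be continuous near (x0, y0)
dfdy = sp.diff(f, y)
print(dfdy)          # x/(2*sqrt(y)) -- not defined at y = 0
# f is continuous for y >= 0, but df/dy blows up at y = 0, so uniqueness
# is only guaranteed through points (x0, y0) with y0 > 0.
```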
Separable Equations
A first-order DE of the form
$\frac{dy}{dx} = g(x)h(y)$
is separable. Dividing by $h(y)$:
$\frac{dy}{h(y)} = g(x)\,dx$
and integrating both sides gives the solution:
$\int \frac{dy}{h(y)} = \int g(x)\,dx$
Error Function
By definition, we define:
$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt$
$\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}}\int_x^{\infty} e^{-t^2}\,dt$
Linear Equations
A first-order linear DE has the form
$a_1(x)\frac{dy}{dx} + a_0(x)y = g(x)$
Dividing by $a_1(x)$ gives the standard form
$\frac{dy}{dx} + P(x)y = f(x)$
Multiplying through by the integrating factor $e^{\int P(x)\,dx}$ collapses the left side into a single derivative:
$\frac{d}{dx}\left[y\,e^{\int P(x)\,dx}\right] = f(x)\,e^{\int P(x)\,dx}$
so that
$y = e^{-\int P(x)\,dx}\int f(x)\,e^{\int P(x)\,dx}\,dx + Ce^{-\int P(x)\,dx}$
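As a quick check of both first-order techniques, here is a minimal sympy sketch; the equations dy/dx = xy (separable) and dy/dx + 2y = eˣ (linear) are illustrative choices, not examples from the notes:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Separable: dy/dx = x*y  =>  dy/y = x dx  =>  y = C1*exp(x**2/2)
separable = sp.Eq(y(x).diff(x), x * y(x))
print(sp.dsolve(separable, y(x)))   # Eq(y(x), C1*exp(x**2/2))

# Linear: dy/dx + 2y = exp(x); integrating factor is e^{int 2 dx} = e^{2x}
linear = sp.Eq(y(x).diff(x) + 2 * y(x), sp.exp(x))
print(sp.dsolve(linear, y(x)))      # Eq(y(x), C1*exp(-2*x) + exp(x)/3)
```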
Exact Equations
A first-order differential equation of the form $M(x, y)\,dx + N(x, y)\,dy = 0$ is said to be an exact equation if the expression on the left side is an exact differential.
1. Check if $\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}$
2. If it does, then set $\frac{\partial f}{\partial x} = M(x, y)$ and $\frac{\partial f}{\partial y} = N(x, y)$
3. Integrate the first: $f(x, y) = \int M(x, y)\,dx + g(y)$
4. Now use $\frac{\partial f}{\partial y} = N(x, y)$ to solve for $g'(y)$, and integrate to recover $g(y)$
5. The implicit solution is now $f(x, y) = C$ ($f(x, y)$ is found from the substitutions above)
Integrating Factors
For non-exact equations of the form $M(x, y)\,dx + N(x, y)\,dy = 0$, we would want to find an integrating factor $\mu(x, y)$ such that $\mu M\,dx + \mu N\,dy = 0$ is an exact differential.
If $\frac{M_y - N_x}{N}$ is a function of x alone, then $\mu(x) = e^{\int \frac{M_y - N_x}{N}\,dx}$ is an integrating factor.
If $\frac{N_x - M_y}{M}$ is a function of y alone, then $\mu(y) = e^{\int \frac{N_x - M_y}{M}\,dy}$ is an integrating factor, where $M_y = \frac{\partial M}{\partial y}$ (and the same notation holds for the rest).
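The exactness test and the construction of f(x, y) can be mirrored in sympy. A sketch, using the illustrative equation 2xy dx + (x² − 1) dy = 0 (my choice, not from the notes):

```python
import sympy as sp

x, y = sp.symbols('x y')
M = 2 * x * y
N = x**2 - 1

# Step 1: exactness test dM/dy == dN/dx (both are 2x here)
print(sp.diff(M, y) == sp.diff(N, x))       # True

# Steps 2-4: f = int M dx + g(y); match df/dy against N to find g'(y)
g = sp.Function('g')
f = sp.integrate(M, x) + g(y)               # x**2*y + g(y)
gprime = sp.solve(sp.Eq(sp.diff(f, y), N), g(y).diff(y))[0]
print(gprime)                               # -1, so g(y) = -y
# Step 5: implicit solution x**2*y - y = C
```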
Solutions by Substitutions
Let's say we have a homogeneous equation of the form $M(x, y)\,dx + N(x, y)\,dy = 0$. Then the substitution
$u = \frac{y}{x}$ or $v = \frac{x}{y}$
can help simplify the homogeneous equation so that it is separable.
In practice, we use $v = \frac{x}{y}$ whenever $M(x, y)$ is simpler than $N(x, y)$, and vice versa.
Bernoulli's Equation
$\frac{dy}{dx} + P(x)y = f(x)y^n$
We solve as so:
1. Divide by $y^n$:
$y^{-n}\frac{dy}{dx} + P(x)y^{1-n} = f(x)$
2. Substitute $v = y^{1-n}$
3. Differentiate the substitution:
$v' = (1 - n)y^{-n}y'$
4. Solve as a normal linear equation, and reverse the substitution at the end to solve in terms of y
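A sketch of the substitution on the illustrative Bernoulli equation dy/dx + y = xy² (so n = 2 and v = y^(1−n) = 1/y); the example equation is my own, not from the notes:

```python
import sympy as sp

x = sp.symbols('x')
v = sp.Function('v')

# dy/dx + y = x*y**2; dividing by y**2 and setting v = 1/y (so v' = -y'/y**2)
# turns it into the linear equation v' - v = -x:
linear_in_v = sp.Eq(v(x).diff(x) - v(x), -x)
print(sp.dsolve(linear_in_v, v(x)))   # Eq(v(x), C1*exp(x) + x + 1)
# Reverse the substitution: y = 1/v = 1/(C1*exp(x) + x + 1)
```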
Linear Models
Growth: $P(t) = P_0e^{kt}$
Decay: $P(t) = P_0e^{-kt}$
where $P_0$ is the initial population size, t the current time, and k some constant.
Newton's Law of Cooling: $\frac{dT}{dt} = k(T - T_{env})$
where T is the temperature of the item we are looking at and $T_{env}$ is the temperature of the surrounding environment; k represents some constant.
RL Circuits: $L\frac{di}{dt} + Ri = E_0$
where L is the inductance of the inductor in the circuit, $\frac{di}{dt}$ the rate of change of current with respect to time, R the resistance of the resistor in the circuit, i the current flowing through the circuit at a given time, and $E_0$ the electromotive force (voltage source).
RC Circuits: $R\frac{dq}{dt} + \frac{q}{C} = E_0$
where R is the resistance of the resistor in the circuit, $\frac{dq}{dt}$ the rate of change of charge with respect to time, q the charge stored in the capacitor at a given time, C the capacitance of the capacitor, and $E_0$ the electromotive force of the circuit.
Half Life
The time t at which a decay model has decayed to half of its initial amount.
Logistic Equation
Used when the rate depends on both the part of the population affected by a factor and the part not affected (e.g., infected and not-yet-infected individuals):
$\frac{dP}{dt} = P(a - bP)$
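A short symbolic check of the logistic model; the parameter values a = 1, b = 1/100 (carrying capacity a/b = 100) and the initial condition P(0) = 10 are assumptions for the demo:

```python
import sympy as sp

t = sp.symbols('t')
P = sp.Function('P')
a, b = 1, sp.Rational(1, 100)    # assumed demo parameters

logistic = sp.Eq(P(t).diff(t), P(t) * (a - b * P(t)))
sol = sp.dsolve(logistic, P(t), ics={P(0): 10})
print(sp.simplify(sol.rhs))      # 100*exp(t)/(exp(t) + 9)
# As t -> oo this approaches the carrying capacity a/b = 100.
```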
Boundary Value Problems
Similar to an initial value problem; however, you are given conditions at multiple points. Essentially, if you have a DE of order N, you will be given the conditions
$y(x_0) = y_0,\quad y'(x_1) = y_1,\quad \ldots,\quad y^{(N-1)}(x_{N-1}) = y_{N-1}$
one for each derivative up to order N − 1, possibly specified at different points.
Differential Operators
Let $\frac{dy}{dx} = Dy$. Then higher derivatives become $D^2y, D^3y, \ldots$, and a linear differential equation can be written compactly as
$a_n(x)\frac{d^ny}{dx^n} + \cdots + a_0(x)y = \left(a_n(x)D^n + a_{n-1}(x)D^{n-1} + \cdots + a_0(x)\right)y$
Superposition Principle
Let $y_1, y_2, \ldots, y_k$ be solutions of the homogeneous nth-order differential equation on an interval I. Then the linear combination
$y = c_1y_1(x) + c_2y_2(x) + \cdots + c_ky_k(x)$,
where the $c_i$ are arbitrary constants, is also a solution on the interval to the DE.
Linear Dependence/Independence
A set of functions $f_1(x), f_2(x), \ldots, f_n(x)$ is said to be linearly dependent on an interval I if there exist constants $c_1, c_2, \ldots, c_n$, not all zero, such that
$c_1f_1(x) + c_2f_2(x) + \cdots + c_nf_n(x) = 0$
for every x in the interval. If not dependent, it must be linearly independent. We can use the Wronskian to test for linear dependence.
Wronskian
Suppose each function in a set $f_1(x), f_2(x), \ldots, f_n(x)$ possesses at least $n - 1$ derivatives. The determinant
$W(f_1, f_2, \ldots, f_n) = \begin{vmatrix} f_1 & f_2 & \cdots & f_n \\ f_1' & f_2' & \cdots & f_n' \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)} \end{vmatrix}$
is called the Wronskian of the functions. Any set of n linearly independent solutions of a homogeneous linear nth-order DE on an interval I is a fundamental set of solutions on the interval. We can find out its linear dependence or independence by calculating the Wronskian for the set of functions. If it simplifies to 0, the set is dependent and is not a fundamental set of solutions. If it is not 0, the set is independent and is a fundamental set.
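sympy ships a wronskian helper, which makes the dependence test quick to demo; the sets {e^x, e^{2x}, e^{3x}} and {x, 2x} are my examples:

```python
import sympy as sp
from sympy import wronskian

x = sp.symbols('x')

# Independent set: Wronskian is nonzero everywhere
W = wronskian([sp.exp(x), sp.exp(2 * x), sp.exp(3 * x)], x)
print(sp.simplify(W))              # 2*exp(6*x), never zero

# Dependent set (2x is a multiple of x): Wronskian is identically zero
print(wronskian([x, 2 * x], x))    # 0
```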
Particular Solution
What we need to do is find the fundamental set of solutions of the homogeneous counterpart, which is
$a_n(x)y^{(n)} + a_{n-1}(x)y^{(n-1)} + \cdots + a_0(x)y = 0$
Then find the PARTICULAR solution $y_p$ of the nonhomogeneous equation (how to compute it, we will discuss later), and add the linear combination of the fundamental set of solutions $y_1, y_2, \ldots, y_n$ to the particular solution $y_p$ to get the solution to the DE, as so:
$y = c_1y_1 + c_2y_2 + \cdots + c_ny_n + y_p$
Reduction of Order
This method is used to solve a linear second-order differential equation in standard form, $y'' + P(x)y' + Q(x)y = 0$, if we already know one solution. If we already have the solution $y_1(x)$, we can solve for the second solution $y_2(x)$ as so:
$y_2(x) = y_1(x)\int \frac{e^{-\int P(x)\,dx}}{y_1^2(x)}\,dx$
Auxiliary Equations
Essentially, when we have a higher-order linear equation with constant coefficients, we will replace the order of each derivative with a power, and the derivative itself with a variable m, to obtain the auxiliary equation; for $ay'' + by' + cy = 0$ this is
$am^2 + bm + c = 0$
This should leave us with two roots $m_1$ and $m_2$.
Case 1 (real, distinct roots $m_1 \ne m_2$): this leaves us with a solution of the form
$y(x) = c_1e^{m_1x} + c_2e^{m_2x}$
Case 2 (repeated real root $m_1 = m_2$): we use the reduction-of-order formula to solve for the second solution in this case:
$y_2(x) = e^{m_1x}\int \frac{e^{2m_1x}}{e^{2m_1x}}\,dx = xe^{m_1x}$
$\Rightarrow y(x) = c_1e^{m_1x} + c_2xe^{m_1x}$
Case 3 (complex roots $m = \alpha \pm i\beta$):
$y(x) = c_1e^{\alpha x}e^{i\beta x} + c_2e^{\alpha x}e^{-i\beta x}$
But remember, $e^{i\theta} = \cos\theta + i\sin\theta$. So we get:
$y(x) = c_1e^{\alpha x}(\cos(\beta x) + i\sin(\beta x)) + c_2e^{\alpha x}(\cos(\beta x) - i\sin(\beta x))$
$\Rightarrow y(x) = e^{\alpha x}\big((c_1 + c_2)\cos(\beta x) + (c_1 - c_2)i\sin(\beta x)\big)$
Now say $A = c_1 + c_2$, $B = (c_1 - c_2)i$:
$y(x) = e^{\alpha x}(A\cos(\beta x) + B\sin(\beta x))$
e.g.
$m^3 + m^2 + m = 0 \Rightarrow m(m^2 + m + 1) = 0$
We know $m_1 = 0$; the quadratic formula on $m^2 + m + 1 = 0$ gives $m = -\frac{1}{2} \pm \frac{\sqrt{3}}{2}i$, so
$y(x) = c_1 + e^{-x/2}\left(c_2\cos\frac{\sqrt{3}}{2}x + c_3\sin\frac{\sqrt{3}}{2}x\right)$
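The worked example above can be reproduced in sympy, both by finding the roots of the auxiliary equation and by solving the ODE y''' + y'' + y' = 0 directly:

```python
import sympy as sp

x, m = sp.symbols('x m')
y = sp.Function('y')

# Roots of the auxiliary equation m**3 + m**2 + m = 0
print(sp.roots(m**3 + m**2 + m, m))
# {0: 1, -1/2 - sqrt(3)*I/2: 1, -1/2 + sqrt(3)*I/2: 1}

ode = sp.Eq(y(x).diff(x, 3) + y(x).diff(x, 2) + y(x).diff(x), 0)
print(sp.dsolve(ode, y(x)))
# Eq(y(x), C1 + (C2*sin(sqrt(3)*x/2) + C3*cos(sqrt(3)*x/2))*exp(-x/2))
```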
Remainder Theorem
To factor a higher-degree auxiliary polynomial, divide it by $x - k$. We would use this to find a k where the remainder is 0 (by the remainder theorem, such a k is a root), and divide by that $x - k$ to reduce the polynomial.
Undetermined Coefficients
The general solution is $y(x) = y_c + y_p$, where $y_c$ is the solution to the homogeneous component and $y_p$ is the particular solution. Now, we guess the form of $y_p$ from the form of the forcing function, using undetermined coefficients for these functions. For example (standard guesses):
For a constant forcing term, guess $y_p = A$
For a polynomial such as $x^2$, guess $y_p = Ax^2 + Bx + C$
For $e^{ax}$, guess $y_p = Ae^{ax}$
For $\sin(ax)$ or $\cos(ax)$, guess $y_p = A\cos(ax) + B\sin(ax)$
So on and so forth. A lot of these will be on your formula sheet, so do not fret.
But, what if the guess is the same as one of your homogeneous solutions? In such a case, we must multiply the guess by x (repeating as needed until it no longer duplicates a homogeneous solution).
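A sketch of matching the undetermined coefficients for the illustrative equation y'' + y = x², where the guess is y_p = Ax² + Bx + C (my example, not from the notes):

```python
import sympy as sp

x, A, B, C = sp.symbols('x A B C')

yp = A * x**2 + B * x + C                      # guess for f(x) = x**2
residual = sp.expand(yp.diff(x, 2) + yp - x**2)

# Force each power of x in the residual to vanish
coeffs = sp.solve([residual.coeff(x, k) for k in range(3)], [A, B, C])
print(coeffs)    # {A: 1, B: 0, C: -2}, so yp = x**2 - 2
# General solution: y = c1*cos(x) + c2*sin(x) + x**2 - 2
```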
Variation of Parameters
For an equation $ay'' + by' + cy = f(t)$:
1. Find $y_c$ by solving $ay'' + by' + cy = 0$
2. Find $y_p$. Say $y_p = u_1y_1 + u_2y_2$, where $y_1$ and $y_2$ are the homogeneous solutions. Then
$y_p' = u_1y_1' + u_2y_2' + u_1'y_1 + u_2'y_2$
We impose the condition $u_1'y_1 + u_2'y_2 = 0$, which leaves
$y_p' = u_1y_1' + u_2y_2'$ and $y_p'' = u_1y_1'' + u_2y_2'' + u_1'y_1' + u_2'y_2'$
We then substitute these values into the original differential equation:
$a(u_1y_1'' + u_2y_2'' + u_1'y_1' + u_2'y_2') + b(u_1y_1' + u_2y_2') + c(u_1y_1 + u_2y_2) = f(t)$
We can factor out $u_1$ and $u_2$ to get the equation. We then look for all other common terms and factor them out, as shown below:
$u_1(ay_1'' + by_1' + cy_1) + u_2(ay_2'' + by_2' + cy_2) + a(u_1'y_1' + u_2'y_2') = f(t)$
Since $y_1$ and $y_2$ solve the homogeneous equation, the terms in the brackets MUST equal 0. Thus:
$u_1'y_1' + u_2'y_2' = \frac{f(t)}{a}$
Together with $u_1'y_1 + u_2'y_2 = 0$, this is a linear system in $u_1'$ and $u_2'$, solved by Cramer's rule:
$u_1' = \frac{W_1}{W}, \qquad u_2' = \frac{W_2}{W}$
where
$W = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix}, \qquad W_1 = \begin{vmatrix} 0 & y_2 \\ f(t)/a & y_2' \end{vmatrix}, \qquad W_2 = \begin{vmatrix} y_1 & 0 \\ y_1' & f(t)/a \end{vmatrix}$
Finally, integrate to get $u_1$ and $u_2$, and form $y_p = u_1y_1 + u_2y_2$.
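Here is a sketch of the W, W1, W2 recipe on the classic illustrative equation y'' + y = sec x (so y1 = cos x, y2 = sin x, a = 1); the example is standard but not from the notes:

```python
import sympy as sp

x = sp.symbols('x')

y1, y2, f, a = sp.cos(x), sp.sin(x), sp.sec(x), 1

W  = sp.Matrix([[y1, y2], [y1.diff(x), y2.diff(x)]]).det()   # = 1
W1 = sp.Matrix([[0, y2], [f / a, y2.diff(x)]]).det()         # = -tan(x)
W2 = sp.Matrix([[y1, 0], [y1.diff(x), f / a]]).det()         # = 1

u1 = sp.integrate(W1 / W, x)        # log(cos(x))
u2 = sp.integrate(W2 / W, x)        # x
print(sp.simplify(u1 * y1 + u2 * y2))
# x*sin(x) + cos(x)*log(cos(x))  -- the particular solution yp
```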
Cauchy-Euler Equations
A Cauchy-Euler equation has the form
$a_nx^ny^{(n)} + a_{n-1}x^{n-1}y^{(n-1)} + \cdots + a_2x^2y'' + a_1xy' + a_0y = 0$
For the second-order case $ax^2y'' + bxy' + cy = 0$, we try $y = x^r$, so that
$y' = rx^{r-1}, \qquad y'' = r(r-1)x^{r-2}$
We then substitute these values into the original differential equation and it ends up simplifying nicely:
$ax^2r(r-1)x^{r-2} + bxrx^{r-1} + cx^r = 0$
$\Rightarrow ar(r-1)x^r + brx^r + cx^r = 0$
$\Rightarrow x^r\big(ar(r-1) + br + c\big) = 0$
Thus, we know $ar(r-1) + br + c = 0$, and this is a quadratic that we can solve for r.
Case 1 (real, distinct roots $r_1 \ne r_2$): $y = c_1x^{r_1} + c_2x^{r_2}$
Case 2 (only one repeated root r): $y = c_1x^r + c_2x^r\ln x$
Case 3 (complex roots $r = \alpha \pm \beta i$): note that
$x^{\alpha + \beta i} = x^{\alpha}e^{\beta i\ln x} = x^{\alpha}\big(\cos(\beta\ln x) + i\sin(\beta\ln x)\big)$
so, just as in the constant-coefficient case, $y = x^{\alpha}\big(A\cos(\beta\ln x) + B\sin(\beta\ln x)\big)$
Alternatively, we can use the substitution $x = e^t$ (i.e., $t = \ln x$). We then say
$\frac{dy}{dx} = \frac{dy}{dt}\cdot\frac{dt}{dx} = \frac{1}{x}\frac{dy}{dt}$
And
$\frac{d^2y}{dx^2} = -\frac{1}{x^2}\frac{dy}{dt} + \frac{1}{x^2}\frac{d^2y}{dt^2}$
If we substitute this back into the original Cauchy-Euler equation, the powers of x cancel out and we are left with a differential equation with constant coefficients.
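A sketch checking the recipe on the illustrative Cauchy-Euler equation x²y'' − 2xy' − 4y = 0, whose auxiliary quadratic r(r − 1) − 2r − 4 = 0 factors as (r − 4)(r + 1):

```python
import sympy as sp

r = sp.symbols('r')
x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Auxiliary equation ar(r-1) + br + c = 0 with a=1, b=-2, c=-4
print(sp.solve(r * (r - 1) - 2 * r - 4, r))      # [-1, 4]

ode = sp.Eq(x**2 * y(x).diff(x, 2) - 2 * x * y(x).diff(x) - 4 * y(x), 0)
print(sp.dsolve(ode, y(x)))                      # Eq(y(x), C1/x + C2*x**4)
```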
Spring/Mass Systems
A spring/mass system consists of a flexible spring suspended vertically from a rigid support with a mass m attached to its free end; the amount of stretch s of the spring will depend on the mass. Hooke's law states that $F = -ks$, where $k > 0$ is a constant of proportionality for the spring, called the spring constant, and F is the force exerted by the spring.
To determine the force F, we can say that $F = ma$, where a is the acceleration of the mass. In the case of a spring/mass system hanging at rest, that acceleration is g, the acceleration due to gravity, $9.8\ \mathrm{m/s^2}$.
The point of equilibrium is the position at which the spring stops oscillating. This is when $F_g = F_s$, or when $mg = ks$, or $mg - ks = 0$.
If the mass is displaced by an amount x from its equilibrium, the restoring force of the spring is then k(x + s)
Assuming no external forces acting on the system, we can equate Newton's second law with the net, or resultant, force of the restoring
force and the weight.
$m\frac{d^2x}{dt^2} = -k(s + x) + mg = -kx + mg - ks = -kx$
The negative sign indicates that the restoring force of the spring acts opposite to the direction of motion.
By dividing the equation by the mass m we obtain the second-order differential equation $\frac{d^2x}{dt^2} = -\frac{k}{m}x$, or
$\frac{d^2x}{dt^2} + \frac{k}{m}x = 0$
With $\omega^2 = k/m$, this is written
$\frac{d^2x}{dt^2} + \omega^2x = 0$
This equation is said to describe simple harmonic motion or free undamped motion. Two obvious initial conditions associated with it are $x(0) = x_0$, the amount of initial displacement, and $x'(0) = x_1$, the initial velocity of the mass.
To solve the equation
$\frac{d^2x}{dt^2} + \omega^2x = 0$
we note that the solutions of the auxiliary equation $m^2 + \omega^2 = 0$ are the complex numbers $m_1 = \omega i$, $m_2 = -\omega i$, so the general solution is
$x(t) = c_1\cos\omega t + c_2\sin\omega t$
The period of the motion is $T = \frac{2\pi}{\omega}$ and the frequency is $f = \frac{1}{T} = \frac{\omega}{2\pi}$.
Suppose two parallel springs with constants $k_1, k_2$ are attached to a common rigid support and then to a metal plate of negligible mass. A single mass m is attached to the center of the plate in the double-spring arrangement (essentially hanging from both springs simultaneously). If the mass is displaced from its equilibrium position by x, the net restoring force is $-(k_1 + k_2)x$, so
$k_{\mathrm{eff}} = k_1 + k_2$
How about springs in series? Essentially, if a mass is hanging from spring 1, which is hanging from another spring 2, and it moves spring 1 down by an amount $x_1$ and spring 2 down by an amount $x_2$, then we can say that $x = x_1 + x_2$. Both springs carry the same force F, and thus:
$\frac{F}{k_{\mathrm{eff}}} = \frac{F}{k_1} + \frac{F}{k_2} \quad\text{or}\quad \frac{1}{k_{\mathrm{eff}}} = \frac{1}{k_1} + \frac{1}{k_2}$
Thus, $k_{\mathrm{eff}} = \dfrac{k_1k_2}{k_1 + k_2}$
This will not be on the exam so feel free to skip this section.
It is reasonable to expect that when a spring/mass system is in motion for a long period, the spring will weaken; in other words, the spring constant will vary/decay with time. In one model for an aging spring, the spring constant k is replaced by the decreasing function $K(t) = ke^{-\alpha t}$. The linear differential equation $mx'' + ke^{-\alpha t}x = 0$, however, cannot be solved with the methods we know.
Now suppose we add a damping force proportional to $\frac{dx}{dt}$, to represent friction of air for example, like so:
$m\frac{d^2x}{dt^2} = -kx - \beta\frac{dx}{dt}$
or
$\frac{d^2x}{dt^2} + 2\lambda\frac{dx}{dt} + \omega^2x = 0$
where $2\lambda = \frac{\beta}{m}$, $\omega^2 = \frac{k}{m}$.
Some terminology:
When $\lambda^2 - \omega^2 > 0$, the spring/mass system is called "overdamped"
When $\lambda^2 - \omega^2 = 0$, the spring/mass system is called "critically damped"
When $\lambda^2 - \omega^2 < 0$, the spring/mass system is called "underdamped"
Now, say we added a driving force, i.e., an external force other than the damping, to the system. This will look like:
$\frac{d^2x}{dt^2} + 2\lambda\frac{dx}{dt} + \omega^2x = F(t)$
where $F(t) = f(t)/m$ for an external force f(t).
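A sketch of an underdamped case, with assumed values 2λ = 2 and ω² = 10 (so λ² − ω² = 1 − 10 < 0) and initial conditions x(0) = 1, x'(0) = 0:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

# x'' + 2*x' + 10*x = 0 (lambda = 1, omega**2 = 10: underdamped)
ode = sp.Eq(x(t).diff(t, 2) + 2 * x(t).diff(t) + 10 * x(t), 0)
sol = sp.dsolve(ode, x(t), ics={x(0): 1, x(t).diff(t).subs(t, 0): 0})
print(sol)
# Eq(x(t), (sin(3*t)/3 + cos(3*t))*exp(-t)) -- a decaying oscillation
```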
The Laplace Transform
Let f be a function defined on $[0, \infty)$. Then the function F defined by
$F(s) = \int_0^{\infty} e^{-st}f(t)\,dt$
is said to be the Laplace transform of f. The domain of F(s) is the set of values of s for which the improper integral converges. $e^{-st}$ is known as the kernel of the Laplace transform.
Some common transforms:
$\mathcal{L}\{t\} = \frac{1}{s^2}$
$\mathcal{L}\{e^{ct}\} = \frac{1}{s - c}$, $s > c$, for any constant c
$\mathcal{L}\{\sin(ct)\} = \frac{c}{s^2 + c^2}$, $s > 0$
$\mathcal{L}\{\cos(ct)\} = \frac{s}{s^2 + c^2}$, $s > 0$
$\mathcal{L}\{e^{ict}\} = \frac{s}{s^2 + c^2} + i\,\frac{c}{s^2 + c^2}$
Since $\mathcal{L}\{e^{t}\} = \frac{1}{s - 1}$, we also know that $\mathcal{L}\{e^{at}\} = \frac{1}{a}\cdot\frac{1}{s/a - 1} = \frac{1}{s - a}$
Some examples...
$1 = \mathcal{L}^{-1}\left\{\frac{1}{s}\right\}$
$t^n = \mathcal{L}^{-1}\left\{\frac{n!}{s^{n+1}}\right\}$
$\sin kt = \mathcal{L}^{-1}\left\{\frac{k}{s^2 + k^2}\right\}$
And so on.
Linearity of the Inverse Laplace Transformation
$\mathcal{L}^{-1}\{\alpha F(s) + \beta G(s)\} = \alpha\mathcal{L}^{-1}\{F(s)\} + \beta\mathcal{L}^{-1}\{G(s)\}$
Partial fractions are very useful for finding inverse Laplace transforms, as we can utilize linearity to split a transform up into multiple simpler fractions, find the inverse transform of each one individually, and sum the results.
Transform of a Derivative
$\mathcal{L}\{f'(t)\} = sF(s) - f(0)$
and, more generally,
$\mathcal{L}\{f^{(n)}(t)\} = s^nF(s) - s^{n-1}f(0) - \cdots - f^{(n-1)}(0)$
The Laplace transform is ideally suited for solving linear initial value problems in which the differential equation has constant coefficients.
We can turn such an IVP into an algebraic (polynomial) equation using the transform-of-a-derivative formula derived above:
$a_n[s^nY(s) - s^{n-1}y(0) - \cdots - y^{(n-1)}(0)] + \cdots + a_0Y(s) = G(s)$
Solving for Y(s):
$Y(s) = \frac{Q(s) + G(s)}{P(s)}, \qquad P(s) = a_ns^n + a_{n-1}s^{n-1} + \cdots + a_0$
where Q(s) is a polynomial in s of degree less than or equal to n − 1, consisting of the various products of the coefficients $a_i$, $i = 1, \ldots, n$, and the initial conditions. The Laplace transform of a linear differential equation with constant coefficients becomes an algebraic equation in Y(s). Taking the inverse transform of Y(s), we have the solution y(t) of the original IVP.
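The whole pipeline in sympy, on the illustrative IVP y' − y = 1, y(0) = 0 (transforming gives sY − Y = 1/s, so Y = 1/(s(s − 1)) and y = e^t − 1):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')

# Transformed equation: s*Y - y(0) - Y = 1/s with y(0) = 0
Ys = sp.solve(sp.Eq(s * Y - 0 - Y, 1 / s), Y)[0]
print(Ys)                 # 1/(s*(s - 1))

yt = sp.inverse_laplace_transform(Ys, s, t)
print(sp.simplify(yt))    # exp(t) - 1 (possibly times a Heaviside(t) factor)
```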
Behavior of F(s) as s → ∞
If a function f is piecewise continuous on $[0, \infty)$ and of exponential order with c, and $\mathcal{L}\{f(t)\} = F(s)$, then $\lim_{s\to\infty}F(s) = 0$
Translation Theorems
If $\mathcal{L}\{f(t)\} = F(s)$ exists for $s > c$ and a is any constant, then
$\mathcal{L}\{e^{at}f(t)\} = F(s - a)$ for $s > a + c$
For the inverse:
$\mathcal{L}^{-1}\{F(s - a)\} = \mathcal{L}^{-1}\{F(s)\}\big|_{s \to s - a} = e^{at}f(t)$
The unit step function is defined by
$u(t - a) = \begin{cases}0, & 0 \le t < a\\ 1, & t \ge a\end{cases}$
and its transform, for $s > 0$, is
$\mathcal{L}\{u(t - a)\} = \frac{e^{-as}}{s}$
For some function f(t), shifting it by a and multiplying it by the unit step function like so:
$f(t - a)u(t - a) = \begin{cases}0, & 0 \le t < a\\ f(t - a), & t \ge a\end{cases}$
we get the function f shifted to the right by a units, with all values on $[0, a)$ equal to 0. When a function f is multiplied by $u(t - a)$, the unit step function "turns off" a portion of the graph of that function.
If $\mathcal{L}\{f(t)\} = F(s)$ and a is a positive constant, then
$\mathcal{L}\{f(t - a)u(t - a)\} = e^{-as}F(s)$, and conversely $\mathcal{L}^{-1}\{e^{-as}F(s)\} = f(t - a)u(t - a)$
Derivatives of Transforms
If the function f is piecewise continuous on $[0, \infty)$ and of exponential order with c, and $\mathcal{L}\{f(t)\} = F(s)$, then for $n = 1, 2, 3, \ldots$ and $s > c$:
$\mathcal{L}\{t^nf(t)\} = (-1)^n\frac{d^n}{ds^n}F(s)$
Transforms of Integrals
The Laplace transform of the integral of f(t), where $\mathcal{L}\{f(t)\} = F(s)$, is
$\mathcal{L}\left\{\int_0^t f(\tau)\,d\tau\right\} = \frac{F(s)}{s}$
If functions f and g are piecewise continuous on the interval $[0, \infty)$, then the convolution of f and g, denoted by the symbol $f * g$, is a function defined by the integral
$f * g = \int_0^t f(\tau)g(t - \tau)\,d\tau$
Because we are integrating with respect to τ, the convolution is a function of t. As the notation $f * g$ suggests, the convolution is often interpreted as a generalized product of two functions f and g. Thus, convolution satisfies:
$f * g = g * f$ (commutative law)
$f * (g * h) = (f * g) * h$ (associative law)
$f * (g + h) = (f * g) + (f * h)$ (distributive law)
$(cf) * g = f * (cg) = c(f * g)$, where c is a constant
However, be careful: $f * 1 \ne f$
Convolution Theorem
If f and g are both piecewise continuous for $t \ge 0$, then the Laplace transform of a sum f + g is the sum of the individual transforms. However, this does not hold for the product fg. The convolution theorem says that for the convolution of f and g we can separate them as so:
$\mathcal{L}\{f * g\} = F(s)G(s)$
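A sketch verifying the convolution theorem for f(t) = t and g(t) = e^t, so F(s) = 1/s² and G(s) = 1/(s − 1); the pair of functions is my choice:

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

# (f*g)(t) = integral of f(tau)*g(t - tau) over [0, t]
conv = sp.integrate(tau * sp.exp(t - tau), (tau, 0, t))
print(sp.simplify(conv))                           # exp(t) - t - 1

lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = (1 / s**2) * (1 / (s - 1))                   # F(s)*G(s)
print(sp.simplify(lhs - rhs))                      # 0, so L{f*g} = F(s)G(s)
```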
Integrodifferential Equations
An equation that combines both integrals and derivatives of a function. These equations often arise in the modeling of systems that depend on both the rate of change and the cumulative effects of variables, such as electrical circuits, population models, and mechanical systems. Taking the Laplace transform of one of these can make it much easier to solve, as it can remove the integral (in the form of a convolution) and the derivatives as well.
If a periodic function f has period T, T > 0, then f(t + T) = f(t). The Laplace transform of a periodic function can be obtained by integration over one period:
$\mathcal{L}\{f(t)\} = \frac{1}{1 - e^{-sT}}\int_0^T e^{-st}f(t)\,dt$
The Dirac Delta Function
Mechanical systems are often acted on by an external force of large magnitude that acts only for a very short period of time. The graph of the piecewise function
$\delta_a(t - t_0) = \begin{cases} 0, & 0 \le t < t_0 - a \\ \frac{1}{2a}, & t_0 - a \le t < t_0 + a \\ 0, & t \ge t_0 + a \end{cases}$
where $a > 0$, $t_0 > 0$, could serve as a model of such a force. For a small value of a, $\delta_a(t - t_0)$ is essentially a constant function of large magnitude that is "on" for just a very short period of time, around $t_0$. This function is called a unit impulse since it possesses the integration property $\int_0^{\infty} \delta_a(t - t_0)\,dt = 1$. In practice, it is convenient to work with another type of unit impulse, a "function" that approximates $\delta_a(t - t_0)$ and is defined by the limit
$\delta(t - t_0) = \lim_{a\to 0}\delta_a(t - t_0)$
This is the Dirac delta function, characterized by the two properties:
(i) $\delta(t - t_0) = \begin{cases}\infty, & t = t_0\\ 0, & t \ne t_0\end{cases}$
(ii) $\int_0^{\infty} \delta(t - t_0)\,dt = 1$
Orthogonal Functions
A Taylor Series (Remember Calc II) is an infinite expansion in powers of (x − a). A Fourier Series is an infinite expansion of a function
in trigonometric functions.
Our goal is to take a function (preferably periodic), defined over a closed interval [a, b] and write it as the sum of sines and cosines.
$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi}{p}x + b_n\sin\frac{n\pi}{p}x\right)$
Our hope is that it will be sufficient to take a finite number of terms (partial sum) to properly approximate the function.
To actually find Fourier Series of functions, we need the properties of certain sines/cosines function sets. Namely: Orthogonality and
Completeness. Let us start with a generalization:
Inner Product
$(u, v) = u_1v_1 + u_2v_2 + u_3v_3 = \sum_{k=1}^{3}u_kv_k$
The inner product satisfies:
(u, v) = (v, u)
(u + v, w) = (u, w) + (v, w)
Unlike vectors, orthogonality of functions has no geometric significance. However, we can still think in terms of vector properties. Two functions $\phi_m$ and $\phi_n$ are orthogonal on an interval [a, b] if
$(\phi_m, \phi_n) = \int_a^b \phi_m(x)\phi_n(x)\,dx = 0, \quad m \ne n$
This essentially means that for a set to be orthogonal, it must hold for any pair of functions within the set.
Norm of a Function
The square norm of a function $\phi_n$ on [a, b] is
$\|\phi_n(x)\|^2 = \int_a^b \phi_n^2(x)\,dx$
With respect to an orthogonal set $\{\phi_n(x)\}$, a function f can be expanded as
$f(x) = \sum_{n=0}^{\infty} c_n\phi_n(x)$
where
$c_n = \frac{\int_a^b f(x)\phi_n(x)\,dx}{\|\phi_n(x)\|^2}$
Also, an orthogonal set of (nonzero) functions is linearly independent, and this orthogonal series expansion is called a generalized Fourier series.
A set of real-valued functions $\phi_0(x), \phi_1(x), \phi_2(x), \ldots$ is said to be orthogonal with respect to a weight function w(x) on an interval [a, b] if
$\int_a^b \phi_m(x)\phi_n(x)\,w(x)\,dx = 0, \quad m \ne n$
Complete Sets
If a function f(x) is orthogonal to every function $\phi_n(x)$ in the set, then f(x) must be the zero function:
$(f, \phi_n) = \int_a^b f(x)\phi_n(x)w(x)\,dx = 0 \;\;\forall n \implies f(x) = 0$
1. If the set of orthogonal functions $\{\phi_n(x)\}$ is not complete, it means there exist nonzero functions f(x) that are orthogonal to every member of the set.
2. Such functions would result in the Fourier coefficients $c_n$ being zero for all n, which would render the series representation ineffective:
$c_n = \frac{\int_a^b f(x)\phi_n(x)w(x)\,dx}{\|\phi_n(x)\|^2} = 0, \qquad n = 0, 1, 2, \ldots$
Assumption of Completeness:
To avoid such scenarios, we assume the set of orthogonal functions is complete. This ensures that any continuous function f(x) on [a, b] can be expressed as an orthogonal series expansion using the set $\{\phi_n(x)\}$:
$f(x) = \sum_{n=0}^{\infty}c_n\phi_n(x)$
A set of orthogonal functions $\{\phi_n(x)\}$ on [a, b] is complete if the only continuous function orthogonal to every member of the set is the zero function f(x) = 0.
Fourier Series
The Fourier Series of a function f defined on the interval (−p, p) is given by
$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi}{p}x + b_n\sin\frac{n\pi}{p}x\right)$
where
$a_0 = \frac{1}{p}\int_{-p}^{p} f(x)\,dx$
$a_n = \frac{1}{p}\int_{-p}^{p} f(x)\cos\frac{n\pi}{p}x\,dx$
$b_n = \frac{1}{p}\int_{-p}^{p} f(x)\sin\frac{n\pi}{p}x\,dx$
Essentially, a Fourier series expresses a periodic function as a sum of sine and cosine terms. This is based on the orthogonality of the sine and cosine functions. Fourier series are useful for solving boundary-value problems in physics and engineering.
The underlying orthogonal set on (−p, p) is
$\left\{1, \sin\frac{\pi x}{p}, \cos\frac{\pi x}{p}, \sin\frac{2\pi x}{p}, \cos\frac{2\pi x}{p}, \ldots\right\}$
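sympy can generate these expansions directly. A sketch with f(x) = x on (−π, π) (an odd function, so only sine terms survive; my example, not from the notes):

```python
import sympy as sp

x = sp.symbols('x')

# Fourier series of f(x) = x on (-pi, pi), i.e., p = pi
fs = sp.fourier_series(x, (x, -sp.pi, sp.pi))
print(fs.truncate(3))
# 2*sin(x) - sin(2*x) + 2*sin(3*x)/3 -- a pure sine series, as expected
```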
Even and Odd Functions
Essentially, an even function is symmetric about the y-axis, and an odd function is symmetric about the origin.
In particular, if f is odd, then $\int_{-a}^{a} f(x)\,dx = 0$; if f is even, then $\int_{-a}^{a} f(x)\,dx = 2\int_0^a f(x)\,dx$.
If f is even on (−p, p), its Fourier series contains only cosine terms (a cosine series), where
$a_0 = \frac{2}{p}\int_0^p f(x)\,dx \qquad\text{and}\qquad a_n = \frac{2}{p}\int_0^p f(x)\cos\frac{n\pi}{p}x\,dx$
Similarly, if f is odd, the series contains only sine terms (a sine series), with $b_n = \frac{2}{p}\int_0^p f(x)\sin\frac{n\pi}{p}x\,dx$.
Half-Range Expansions
If we are interested in a function defined on (0, L) rather than (−p, p), we may supply an arbitrary definition of f on (−L, 0) in one of three ways:
i. Reflecting the graph of the function about the y-axis onto (−L, 0), so the function is even on (−L, L)
ii. Reflecting the graph of the function through the origin onto (−L, 0), so the function is odd on (−L, L)
iii. Defining f on (−L, 0) by f(x) = f(x + L)