Mathematical Physics (Part - 03) (BSC Physics Notes)
2. Power of a Matrix:
We have from our previous discussion on the matrix eigenvalue equation that, according to the Cayley–Hamilton theorem, every (n × n) square matrix A satisfies its own characteristic equation

$$\det(A - \lambda I) = 0 \;\Rightarrow\; (-1)^n\lambda^n + a_1\lambda^{n-1} + a_2\lambda^{n-2} + \dots + a_n = 0.$$
The characteristic equation can then be converted into a matrix equation by replacing λ with the matrix A, which gives

$$A^n + a_1 A^{n-1} + a_2 A^{n-2} + \dots + a_n I = 0, \quad\text{or}\quad f(A) = 0,$$

where f(A) is a polynomial in A, namely f(A) = Aⁿ + a₁Aⁿ⁻¹ + a₂Aⁿ⁻² + … + aₙI.
This f(A) is a function of the matrix A, and from this characteristic equation we can obtain the inverse A⁻¹ directly from the Cayley–Hamilton theorem: multiplying f(A) = 0 throughout by A⁻¹ gives

$$A^{-1} = -\frac{1}{a_n}\left(A^{n-1} + a_1 A^{n-2} + \dots + a_{n-1} I\right), \qquad a_n \neq 0.$$

The powers of the matrix appearing here can be obtained by the method of matrix diagonalisation through a similarity transformation.
As an example, consider the (2 × 2) matrix
$$A = \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix},$$
for which the characteristic equation is
$$\det(A - \lambda I) = 0 \;\Rightarrow\; \begin{vmatrix} 1-\lambda & 2 \\ 2 & -1-\lambda \end{vmatrix} = 0 \;\Rightarrow\; (1-\lambda)(-1-\lambda) - 4 = 0 \;\Rightarrow\; \lambda^2 - 5 = 0.$$
By the Cayley–Hamilton theorem, A² − 5I = 0, so that A⁻¹ = A/5.
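As a quick check of this example (a numerical sketch added to the notes, using the matrix values from above):

```python
import numpy as np

# The example matrix from above.
A = np.array([[1.0, 2.0],
              [2.0, -1.0]])
I = np.eye(2)

# Cayley-Hamilton: the characteristic equation lambda^2 - 5 = 0 gives A^2 - 5I = 0.
print(np.allclose(A @ A - 5 * I, 0))          # True

# Hence A^(-1) = A / 5, in agreement with numpy's inverse.
print(np.allclose(np.linalg.inv(A), A / 5))   # True
```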
Before discussing the power of a matrix, we should mention that such a matrix A (with n linearly independent eigenvectors) can be brought to diagonal form as D = P⁻¹AP, where
a) the square matrix P, which diagonalises A, is found by grouping the eigenvectors of A column-wise into a square matrix, and the resulting diagonal matrix has the eigenvalues of A as its diagonal elements.
b) The transformation of a matrix 𝐀 𝐭𝐨 𝐏 −𝟏 𝐀𝐏 is known as a similarity transformation.
c) The reduction of A to a diagonal matrix is, obviously, a particular case of a similarity transformation.
d) The matrix P which diagonalises A is called the modal matrix of A, and the resulting diagonal matrix D is known as the spectral matrix of A.
Since the similarity transformation is very important for diagonalising a given square matrix through the eigenvector (modal) matrix P, we should define it precisely.
Let A and B be two square matrices of order n. Then B is said to be similar to A if there exists a non-singular matrix P such that B = P⁻¹AP, and this equation is called a similarity transformation.
When this non-singular matrix P is itself the modal matrix, the transformed matrix B will be the diagonal (spectral) matrix D of A, with the eigenvalues of A appearing as its diagonal elements; thus D = P⁻¹AP.
Let us now come to the point: how do we get the power of a given matrix through matrix diagonalisation? We have
$$D^2 = (P^{-1}AP)(P^{-1}AP) = P^{-1}A(PP^{-1})AP = P^{-1}A^2P,$$
and, repeating the argument, Dⁿ = P⁻¹AⁿP, so that Aⁿ = PDⁿP⁻¹.
So the steps taken to find any power of a given square (diagonalisable) matrix A are: (a) find the eigenvalues of A; (b) find the eigenvectors to build the modal matrix P; (c) find the diagonal matrix D from the formula D = P⁻¹AP; (d) obtain Aⁿ from the formula Aⁿ = PDⁿP⁻¹, where for an (n × n) square matrix A we simply have
$$D^n = \begin{pmatrix} \lambda_1^n & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n^n \end{pmatrix}.$$
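The steps (a)–(d) can be sketched numerically as follows (an illustrative addition to the notes; the example matrix and the power n are assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, -1.0]])       # assumed example matrix
n = 4                             # assumed power

eigvals, P = np.linalg.eig(A)     # (a) eigenvalues and (b) modal matrix P (eigenvectors as columns)
D_n = np.diag(eigvals**n)         # (c)-(d) D^n is diagonal with lambda_i^n on the diagonal
A_n = P @ D_n @ np.linalg.inv(P)  # A^n = P D^n P^-1

print(np.allclose(A_n, np.linalg.matrix_power(A, n)))   # True
```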
Now we will discuss how to solve coupled linear ordinary differential equations by the matrix method. But before such a discussion we need some knowledge of the exponential of a square matrix. More precisely, if A is a square matrix, then such an exponential may be written as e^A, or e^(xA) where x is a scalar variable.
We all know that the exponential series is given by
$$e^{x} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots = \sum_{n=0}^{\infty}\frac{x^n}{n!},$$
and correspondingly
$$e^{xA} = I + xA + \frac{(xA)^2}{2!} + \frac{(xA)^3}{3!} + \dots = \sum_{n=0}^{\infty}\frac{x^n}{n!}A^n,$$
where Aⁿ is the so-called nth power of the matrix A, which can be obtained by the process of matrix diagonalisation.
Writing Aⁿ = PDⁿP⁻¹ with
$$D = \begin{pmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{pmatrix} \quad\text{and}\quad D^n = \begin{pmatrix} \lambda_1^n & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n^n \end{pmatrix},$$
we get
$$e^{xA} = \sum_{n=0}^{\infty}\frac{x^n}{n!}A^n = \sum_{n=0}^{\infty}\frac{x^n}{n!}\,P D^n P^{-1} = P\left(\sum_{n=0}^{\infty}\frac{(xD)^n}{n!}\right)P^{-1} = P\,e^{xD}\,P^{-1}.$$
But
$$e^{D} = I + D + \frac{D^2}{2!} + \frac{D^3}{3!} + \dots = \begin{pmatrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix} + \begin{pmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{pmatrix} + \frac{1}{2!}\begin{pmatrix} \lambda_1^2 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n^2 \end{pmatrix} + \dots$$
Thus we get
$$e^{D} = \begin{pmatrix} 1+\lambda_1+\frac{\lambda_1^2}{2!}+\dots & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1+\lambda_n+\frac{\lambda_n^2}{2!}+\dots \end{pmatrix} = \begin{pmatrix} e^{\lambda_1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & e^{\lambda_n} \end{pmatrix},$$
and similarly
$$e^{xD} = \begin{pmatrix} e^{x\lambda_1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & e^{x\lambda_n} \end{pmatrix}.$$
So finally we get
$$e^{xA} = P\begin{pmatrix} e^{x\lambda_1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & e^{x\lambda_n} \end{pmatrix}P^{-1}.$$
This is the actual procedure for finding the exponential of any diagonalisable square matrix.
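A small numerical sketch of e^(xA) = P e^(xD) P⁻¹ (an illustrative addition to the notes; the matrix, the value of x, and the use of scipy.linalg.expm as a reference are assumptions):

```python
import numpy as np
from scipy.linalg import expm          # reference matrix exponential for comparison

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # assumed example matrix (also used in the next section)
x = 0.7                                # assumed scalar

eigvals, P = np.linalg.eig(A)          # eigenvalues (here +/- i) and modal matrix P
e_xA = P @ np.diag(np.exp(x * eigvals)) @ np.linalg.inv(P)   # P e^{xD} P^-1

print(np.allclose(e_xA, expm(x * A)))  # True (up to tiny imaginary round-off)
```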
3. Solutions of Coupled Linear Ordinary Differential Equations:
Let us now discuss the process of solving coupled ordinary differential equations, in which two or more variables are coupled through the given differential equations. As an example, consider the two coupled differential equations
$$\frac{dx}{dt} = y \quad\text{and}\quad \frac{dy}{dt} = -x.$$
If we solve these two differential equations by the ordinary method it is very easy, because eliminating y gives
$$\frac{d^2x}{dt^2} + x = 0,$$
whose solution is x(t) = C₁ cos t + C₂ sin t, and then obviously y(t) = dx/dt = −C₁ sin t + C₂ cos t.
But we now want to solve these two coupled differential equations by the matrix method, and to do that it is first necessary to express the coupled equations in matrix form. That can be written as
$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} y \\ -x \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} \quad\text{------------------- (1)}$$
This matrix differential equation can be written as
$$\frac{d}{dt}[X] = A[X], \quad\text{where}\quad X = \begin{pmatrix} x \\ y \end{pmatrix} \quad\text{and}\quad A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$
The solution of this matrix differential equation can now be written as
$$X = \begin{pmatrix} x \\ y \end{pmatrix} = e^{At}\,C,$$
where C = (a, b)ᵀ is a column of constants fixed by the initial conditions, and then from our previous discussion
$$X = P\begin{pmatrix} e^{t\lambda_1} & 0 \\ 0 & e^{t\lambda_2} \end{pmatrix}P^{-1}\,C.$$
But for the square matrix
$$A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
we have the eigenvalues λ = ±i, and the corresponding eigenvectors are
$$X_1 = \begin{pmatrix} 1 \\ i \end{pmatrix} \quad\text{and}\quad X_2 = \begin{pmatrix} 1 \\ -i \end{pmatrix},$$
which give the modal matrix
$$P = \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix}, \quad\text{so that}\quad P^{-1} = \frac{i}{2}\begin{pmatrix} -i & -1 \\ -i & 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & -i \\ 1 & i \end{pmatrix}.$$
Finally we get the solution
$$X = e^{At}\,C = P\begin{pmatrix} e^{it} & 0 \\ 0 & e^{-it} \end{pmatrix}P^{-1}\,C = \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix}\begin{pmatrix} e^{it} & 0 \\ 0 & e^{-it} \end{pmatrix}\frac{i}{2}\begin{pmatrix} -i & -1 \\ -i & 1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix}$$
$$= \frac{i}{2}\begin{pmatrix} e^{it} & e^{-it} \\ i e^{it} & -i e^{-it} \end{pmatrix}\begin{pmatrix} -i & -1 \\ -i & 1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix}.$$
And finally we get the solutions
$$x(t) = a\cos t + b\sin t, \qquad y(t) = -a\sin t + b\cos t,$$
in agreement with the solution obtained above by the ordinary method.
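A brief numerical check of this solution (an added sketch; the initial values a and b are assumed):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
a, b = 1.0, 0.5                        # assumed initial conditions x(0) = a, y(0) = b
C = np.array([a, b])

for t in (0.0, 0.5, 2.0):
    X = expm(A * t) @ C                # matrix-method solution X(t) = e^{At} C
    exact = np.array([a*np.cos(t) + b*np.sin(t),
                      -a*np.sin(t) + b*np.cos(t)])
    print(np.allclose(X, exact))       # True for every t
```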
Linear Vector Space
A vector space is a collection of vectors and scalars that are related to one another through certain mathematical operations and rules. It is also known as a linear vector space, or simply a linear space, because only linear combinations of vectors are considered. A vector space basically consists of a group of vectors and a field of scalars, together with a few mathematical operations.
A vector space over the field F is often denoted as V(F), or simply V. The elements of V are called vectors and the elements of F are called scalars. We should note that four mathematical operations are involved with a vector space: the two field operations '+' and '·', and two operations involving vectors, namely vector-to-vector addition ⊕ and scalar-to-vector multiplication ⊙. The vector space is now defined as follows.
If {F, +, ·} is a field and {V, ⊕} is an abelian group such that, for scalars a, b ∈ F and vectors α, β ∈ V,

(i) (a + b) ⊙ α = (a ⊙ α) ⊕ (b ⊙ α)

(ii) a ⊙ (α ⊕ β) = (a ⊙ α) ⊕ (a ⊙ β)

(iii) (a · b) ⊙ α = a ⊙ (b ⊙ α)

(iv) 1 ⊙ α = α, where 1 is the unity of F,

then {V, ⊕, ⊙} is called a vector space over the field F.
Extension to Subspaces:
If S is a subset of the vectors of the group V and S(F) is itself a vector space with respect to the operations defined on V(F), then S(F) is called a subspace of the vector space V(F). For a subspace of the vector space, or a subset of the abelian group of vectors (a group is called abelian when any two of its elements commute under the group operation), we should note the following.
For a subset S of the vectors of a vector space V, S(F) is a subspace of V(F) if and only if S(F) is closed with respect to vector addition and scalar multiplication as defined in V(F), i.e. all the elements of S(F) must obey the mathematical rules or axioms defined in V(F).
Then for every α ∈ S, (−1) ⊙ α = −α ∈ S, and hence α ⊕ (−α) = θ ∈ S [θ is called the null vector]. The other axioms, viz. associativity and commutativity of vector addition and the properties of scalar multiplication, are inherited by S from V because the operations are the same.
2. Linear Independence and Dependence of Vectors in Vector Space:
The concept of linear dependence or independence of vectors is one of the most
important concepts in the study of vector spaces.
A set {α₁, …, αₙ} of vectors of a vector space V(F) is said to be linearly dependent if and only if there exist scalars a₁, a₂, …, aₙ in F, not all of which are zero, such that

a₁α₁ + a₂α₂ + ⋯ + aₙαₙ = θ ∈ V.

If, say, aᵢ ≠ 0, then αᵢ = −(1/aᵢ)(a₁α₁ + ⋯ + aᵢ₋₁αᵢ₋₁ + aᵢ₊₁αᵢ₊₁ + ⋯ + aₙαₙ); thus αᵢ depends linearly on the other vectors. On the other hand, if the vectors are linearly independent then it is impossible to express any one of the vectors as a linear combination of the others.
(b) Any set which contains the zero vector or null vector is linearly dependent, as
𝐚𝛉 = 𝛉 for any scalar 𝐚 𝛜 𝐅.
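As a numerical aside (not part of the original notes), linear dependence of a concrete set of vectors can be tested from the rank of the matrix formed by stacking them; the sample vectors below are assumed for illustration:

```python
import numpy as np

# Rows are the vectors; the third row is the sum of the first two, so the set is dependent.
vectors = np.array([[1.0, 0.0, 2.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 1.0, 3.0]])

rank = np.linalg.matrix_rank(vectors)
print(rank)                    # 2
print(rank == len(vectors))    # False -> the vectors are linearly dependent
```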
3. Basis and Dimensions of a Vector Space:
A basis for a vector space V(F) is a linearly independent set of vectors which spans V. More clearly, the basis vectors of a given vector space generate all the other vectors of that space, since any vector in the space can be expressed as a linear combination of those linearly independent basis vectors.
The number of basis vectors of a vector space is called the dimension of that vector space. If the number of vectors in a basis of V is n, then V(F) is said to be an n-dimensional vector space and is denoted as Vₙ(F). So if the number of vectors in a basis of V(F) is finite, V(F) is said to be finite-dimensional; otherwise it is said to be infinite-dimensional.
We shall be mainly concerned with finite-dimensional vector spaces. We note that a set of vectors which forms a basis for V has the following properties:
(a) It is linearly independent.
(b) Every vector of 𝐕 can be expressed as a linear combination of the basis vectors of
this set.
(c) No set of more than n vectors in Vₙ(F) can be linearly independent, and Vₙ(F) also cannot be spanned by fewer than n vectors.
4. Inner Product Spaces:

Let V(F) be a vector space over the field of real (F = R) or complex (F = C) numbers. An inner product on V(F) is a rule which assigns to each ordered pair of vectors α, β ∈ V a scalar ⟨α, β⟩ ∈ F such that

(i) ⟨α, β⟩ is the complex conjugate of ⟨β, α⟩,

(ii) ⟨aα + bβ, γ⟩ = a⟨α, γ⟩ + b⟨β, γ⟩ for a, b ∈ F,

(iii) ⟨α, α⟩ ≥ 0, with ⟨α, α⟩ > 0 if α ≠ θ and ⟨θ, θ⟩ = 0.
A vector space 𝐕(𝐅) on which an inner product is defined is called an inner product
space. A real inner product space is called a Euclidean space and a complex inner
product space is called a Unitary space.
Condition (i) states that ⟨α, β⟩ is the complex conjugate of ⟨β, α⟩. For a real space this reduces to ⟨α, β⟩ = ⟨β, α⟩.
For a complex space we take the complex conjugate; otherwise the inequality in (iii)
may not make sense.
Condition (ii) states that an inner product is a linear function of the first component.
Making use of (i) and (ii) we can write
$$\langle\gamma,\, a\alpha + b\beta\rangle = \overline{\langle a\alpha + b\beta,\, \gamma\rangle} = \overline{a\langle\alpha,\gamma\rangle + b\langle\beta,\gamma\rangle} = \bar{a}\,\overline{\langle\alpha,\gamma\rangle} + \bar{b}\,\overline{\langle\beta,\gamma\rangle} = \bar{a}\langle\gamma,\alpha\rangle + \bar{b}\langle\gamma,\beta\rangle.$$
Applications:
a) Norm (or Length) of a Vector:

In any inner product space the non-negative real number ⟨α, α⟩^(1/2) is called the norm or length of the vector α and is denoted by ‖α‖; thus ‖α‖ ≡ ⟨α, α⟩^(1/2).
We should remember that in ordinary vector algebra the length of a vector is expressed in terms of the scalar or dot product as (α · α)^(1/2).
If ‖α‖ = 1 then we say that the vector α is normalised; in more familiar terms, it is a unit vector. For any non-zero vector α, the vector α/‖α‖ is normalised.
b) Distance between Two Vectors:

The non-negative real number ‖α − β‖ is called the distance between the vectors α and β. A vector space in which a distance is defined is called a metric space.
c) Orthogonality:

Two vectors α and β are said to be orthogonal if and only if their inner product is zero, i.e. ⟨α, β⟩ = 0.
d) Schwarz Inequality:

For any two vectors α and β of an inner product space, |⟨α, β⟩|² ≤ ‖α‖² ‖β‖².
Because if α = θ or β = θ the inequality reduces to 0 ≤ 0, which is obviously true, we may assume that α and β are non-zero. For any real number k we have
$$\|\alpha - \langle\alpha,\beta\rangle k\beta\|^2 \ge 0 \quad\text{or}\quad \langle\alpha - \langle\alpha,\beta\rangle k\beta,\; \alpha - \langle\alpha,\beta\rangle k\beta\rangle \ge 0$$
$$\text{or}\quad \langle\alpha,\alpha\rangle - \overline{\langle\alpha,\beta\rangle}\,k\,\langle\alpha,\beta\rangle - \langle\alpha,\beta\rangle\,k\,\langle\beta,\alpha\rangle + \langle\alpha,\beta\rangle\overline{\langle\alpha,\beta\rangle}\,k^2\,\langle\beta,\beta\rangle \ge 0$$
$$\text{or}\quad \|\alpha\|^2 - 2k\,|\langle\alpha,\beta\rangle|^2 + |\langle\alpha,\beta\rangle|^2 k^2 \|\beta\|^2 \ge 0.$$
Let us now put k = 1/‖β‖². Then we get
$$\|\alpha\|^2 - 2\frac{|\langle\alpha,\beta\rangle|^2}{\|\beta\|^2} + \frac{|\langle\alpha,\beta\rangle|^2}{\|\beta\|^2} \ge 0$$
$$\text{or}\quad \|\alpha\|^2 - \frac{|\langle\alpha,\beta\rangle|^2}{\|\beta\|^2} \ge 0 \quad\text{or}\quad \|\alpha\|^2\|\beta\|^2 - |\langle\alpha,\beta\rangle|^2 \ge 0 \quad\text{or}\quad |\langle\alpha,\beta\rangle|^2 \le \|\alpha\|^2\|\beta\|^2.$$
This proves the inequality.
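A one-line numerical illustration of the inequality for the ordinary dot product on R³ (an added sketch; the random sample vectors are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)            # arbitrary sample vectors

print(np.dot(a, b)**2 <= np.dot(a, a) * np.dot(b, b))    # True: |<a,b>|^2 <= |a|^2 |b|^2
```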
5. Gram-Schmidt Orthogonalization:
If we are given a set of linearly independent vectors X₁, X₂, …, Xₙ, we can construct an orthogonal set of vectors from them as follows:
Let Y₁ = X₁ and Y₂ = X₂ − k₁Y₁. Here Y₁, Y₂ are orthogonal ⇒ ⟨Y₁, Y₂⟩ = 0
⇒ ⟨Y₁, X₂ − k₁Y₁⟩ = 0 ⇒ ⟨Y₁, X₂⟩ − k₁⟨Y₁, Y₁⟩ = 0 ⇒ k₁ = ⟨Y₁, X₂⟩/⟨Y₁, Y₁⟩.

We get Y₂ = X₂ − (⟨Y₁, X₂⟩/‖Y₁‖²)Y₁. Again we take Y₃ = X₃ − k₂Y₁ − k₃Y₂. Since Y₃ is orthogonal to Y₁ and Y₂, we have ⟨Y₁, Y₃⟩ = 0, ⟨Y₂, Y₃⟩ = 0, and also ⟨Y₁, Y₂⟩ = 0.

Since ⟨Y₁, Y₃⟩ = 0 ⇒ ⟨Y₁, X₃ − k₂Y₁ − k₃Y₂⟩ = 0 ⇒ ⟨Y₁, X₃⟩ − k₂⟨Y₁, Y₁⟩ − k₃⟨Y₁, Y₂⟩ = 0.

Thus we get ⟨Y₁, X₃⟩ − k₂⟨Y₁, Y₁⟩ = 0 ⇒ k₂ = ⟨Y₁, X₃⟩/⟨Y₁, Y₁⟩ = ⟨Y₁, X₃⟩/‖Y₁‖².

Again, ⟨Y₂, Y₃⟩ = 0 ⇒ ⟨Y₂, X₃ − k₂Y₁ − k₃Y₂⟩ = 0 ⇒ ⟨Y₂, X₃⟩ − k₂⟨Y₂, Y₁⟩ − k₃⟨Y₂, Y₂⟩ = 0
⇒ ⟨Y₂, X₃⟩ − k₃⟨Y₂, Y₂⟩ = 0 [since ⟨Y₂, Y₁⟩ = 0] ⇒ k₃ = ⟨Y₂, X₃⟩/⟨Y₂, Y₂⟩ = ⟨Y₂, X₃⟩/‖Y₂‖².

Thus Y₃ = X₃ − (⟨Y₁, X₃⟩/‖Y₁‖²)Y₁ − (⟨Y₂, X₃⟩/‖Y₂‖²)Y₂, and the construction continues in the same way for the remaining vectors.
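A compact sketch of this construction in code (an addition to the notes; the input vectors are assumed example values):

```python
import numpy as np

def gram_schmidt(X):
    """Return orthogonal vectors Y_i built from the rows of X as described above."""
    Y = []
    for x in X:
        y = x.astype(float).copy()
        for u in Y:
            y -= (u @ x) / (u @ u) * u      # subtract the projection of x on each earlier Y_i
        Y.append(y)
    return np.array(Y)

X = np.array([[1.0, 1.0, 0.0],              # assumed linearly independent vectors
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Y = gram_schmidt(X)
G = Y @ Y.T                                 # Gram matrix of the Y_i
print(np.allclose(G - np.diag(np.diag(G)), 0))   # off-diagonal inner products vanish
```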
6. Unitary Transformation:

For two column matrices X and Y as two vector elements of a vector space, a linear transformation Y = AX, where A is a unitary matrix (i.e. A⁺A = AA⁺ = Iₙ), is called a unitary transformation.
For such a transformation, Y⁺Y = (AX)⁺(AX) = X⁺(A⁺A)X = X⁺X
⇒ ‖Y‖² = ‖X‖² ⇒ ‖Y‖ = ‖X‖ (since A⁺A = AA⁺ = Iₙ).

Thus whenever A is unitary, the lengths of the vectors are preserved. Conversely, let the lengths be preserved, so that ‖X‖ = ‖Y‖.

Now, ‖X‖ = ‖Y‖ ⇒ X⁺X = Y⁺Y ⇒ X⁺X = (AX)⁺(AX) = X⁺(A⁺A)X
⇒ X⁺(A⁺A − Iₙ)X = 0 ⇒ A⁺A − Iₙ = 0 ⇒ A⁺A = Iₙ, i.e. A is unitary.
7. Linear Transformations:
Let U(F) and V(F) be two vector spaces over the same field F. Then a mapping or transformation T which maps U(F) into V(F) is called a linear transformation of U(F) into V(F) if

(i) T(α + β) = T(α) + T(β) for α, β ∈ U, and (ii) T(aα) = aT(α) for a ∈ F, α ∈ U.
The conditions (i) and (ii) above can be combined as

T(aα + bβ) = aT(α) + bT(β) for a, b ∈ F and α, β ∈ U.
a) A linear transformation T is also called a linear operator, and in that case we shall write T ≡ O.
b) A one-to-one linear transformation of U(F) onto V(F) is called an isomorphism. In case there exists an isomorphism of U(F) onto V(F), we say that U(F) is isomorphic to V(F), and we write U(F) ≅ V(F).
c) The zero operator is linear. For if U(F) and V(F) are two vector spaces over the same field F, then O: U → V defined by O(α) = 0 for α ∈ U is the zero operator. With this zero operator, for α, β ∈ U and a, b ∈ F, O(aα + bβ) = 0 = 0 + 0 = aO(α) + bO(β). Hence O is a linear operator.
d) The identity operator is also linear. For if U(F) and V(F) are two vector spaces over the same field F, then O: U → V defined by O(α) = α for α ∈ U is the identity operator. With this identity operator, for α, β ∈ U and a, b ∈ F, O(aα + bβ) = aα + bβ = aO(α) + bO(β). Hence O is a linear operator.
For such a linear transformation T: U → V, U is called the domain and V is called the co-domain of the transformation T. Such a transformation need not act only on vector spaces; it can also be taken from one set to another set, and similarly in that case the former set is called the domain and the latter the co-domain.
As an example, let T: R → C, where R is the set of all real numbers and C is the set of all complex numbers. If we consider T(x) = 2x + 3i for x ∈ R, then for any element, say x = 6 in R, we get an element T(x) = T(6) = 2(6) + 3i = 12 + 3i = z (say), where z ∈ C. In that case the transformation T defined by T(x) = 2x + 3i for x ∈ R is a transformation T: R → C, for which R is the domain and C is the co-domain of T.
As another example, consider the transformation T: R² → R² defined by T(x₁, x₂) = (x₁, 0). For x = (x₁, x₂), y = (y₁, y₂) and scalars a, b we can write down

T(ax + by) = T(ax₁ + by₁, ax₂ + by₂) = (ax₁ + by₁, 0) = a(x₁, 0) + b(y₁, 0) = aT(x₁, x₂) + bT(y₁, y₂) = aT(x) + bT(y).

So T is a linear transformation.
Consider now, for example, the system of equations 2x₁ + 3x₂ − x₃ = 2, 5x₁ + 6x₂ − 3x₃ = 10, x₁ + x₂ + x₃ = 8. Here the left-hand sides of these equations can be considered as the linear transformation T acting on the column of variables:
$$T\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 & 3 & -1 \\ 5 & 6 & -3 \\ 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2x_1 + 3x_2 - x_3 \\ 5x_1 + 6x_2 - 3x_3 \\ x_1 + x_2 + x_3 \end{pmatrix},$$
and this can now be written as T(X) = AX for
$$X = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \quad\text{and}\quad A = \begin{pmatrix} 2 & 3 & -1 \\ 5 & 6 & -3 \\ 1 & 1 & 1 \end{pmatrix}.$$
Such a linear transformation T ≡ A is then called a matrix transformation. For the matrix transformation taken in this example,
$$AX = \begin{pmatrix} 2 & 3 & -1 \\ 5 & 6 & -3 \\ 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 10 \\ 8 \end{pmatrix} = B, \quad\text{i.e.}\quad T(X) = AX = B.$$
An example of a Matrix Transformation:

Consider two column vectors (2, −3) and (1, 1) which are transformed into the vectors (4, 5) and (3, 1) respectively. Let the transformation matrix be
$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad\text{so that}\quad A\begin{pmatrix} 2 \\ -3 \end{pmatrix} = \begin{pmatrix} 4 \\ 5 \end{pmatrix} \quad\text{and}\quad A\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \end{pmatrix}.$$
Solving 2a₁₁ − 3a₁₂ = 4 and a₁₁ + a₁₂ = 3, we get a₁₁ = 13/5, a₁₂ = 2/5.

Again, solving 2a₂₁ − 3a₂₂ = 5 and a₂₁ + a₂₂ = 1, we get a₂₁ = 8/5, a₂₂ = −3/5.

Hence
$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} 13/5 & 2/5 \\ 8/5 & -3/5 \end{pmatrix}.$$
This is the matrix A for the given matrix transformation, and the transformation is given by
$$\begin{pmatrix} 4 \\ 5 \end{pmatrix} = \begin{pmatrix} 13/5 & 2/5 \\ 8/5 & -3/5 \end{pmatrix}\begin{pmatrix} 2 \\ -3 \end{pmatrix} \quad\text{and also}\quad \begin{pmatrix} 3 \\ 1 \end{pmatrix} = \begin{pmatrix} 13/5 & 2/5 \\ 8/5 & -3/5 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
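The same matrix can be recovered numerically by solving AV = W, where the columns of V are the given vectors and the columns of W their images (an added sketch):

```python
import numpy as np

V = np.array([[2.0, 1.0],     # columns: the input vectors (2, -3) and (1, 1)
              [-3.0, 1.0]])
W = np.array([[4.0, 3.0],     # columns: their images (4, 5) and (3, 1)
              [5.0, 1.0]])

A = W @ np.linalg.inv(V)      # from A V = W
print(A)                      # [[ 2.6  0.4]   i.e. [[13/5,  2/5],
                              #  [ 1.6 -0.6]]        [ 8/5, -3/5]]
```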
Two vector spaces V(F) and V′(F) over the same scalar field F are said to be isomorphic to each other if it is possible to establish a one-to-one and onto correspondence (or mapping) f between the elements α, β, … ∈ V and α′, β′, … ∈ V′ such that if f(α) = α′ and f(β) = β′, then

1. f(α + β) = α′ + β′ = f(α) + f(β)   2. f(aα) = aα′ = af(α) for a ∈ F.
The correspondence f is called an isomorphism of V(F) onto V′(F). In other words, an isomorphism of two vector spaces is a one-to-one and onto correspondence which preserves the vector space operations of vector addition and scalar multiplication.
On the other hand, a group homomorphism is a map between groups that preserves the group operation. This implies that a group homomorphism maps the identity element of the first group to the identity element of the second group, and maps the inverse of an element of the first group to the inverse of the image of this element. Thus a semigroup homomorphism between groups is necessarily a group homomorphism.
We all know that for any matrix A, if AAᵀ = I then A is known as an orthogonal matrix. Any matrix transformation through such an orthogonal matrix is called an orthogonal transformation.
More clearly, let us now consider a transformation Y = AX through the matrix A, where X and Y are the two column matrices
$$X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \quad\text{and}\quad Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}.$$
So we get
$$X^{T}X = (x_1, x_2, \dots, x_n)\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = x_1^2 + x_2^2 + \dots + x_n^2,$$
and also
$$Y^{T}Y = (y_1, y_2, \dots, y_n)\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = y_1^2 + y_2^2 + \dots + y_n^2.$$
If the transformation preserves the length of the vector, i.e. y₁² + y₂² + ⋯ + yₙ² = x₁² + x₂² + ⋯ + xₙ², then we have YᵀY = XᵀX. And then we get

YᵀY = XᵀX ⇒ (AX)ᵀ(AX) = XᵀX ⇒ Xᵀ(AᵀA)X = XᵀX ⇒ AᵀA = AAᵀ = I.

Thus, under the above condition, the matrix transformation Y = AX is called an orthogonal transformation. So we can conclude that a matrix transformation is an orthogonal transformation under this specific condition of length preservation.
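For instance, a plane rotation is an orthogonal transformation, and one can check numerically that it preserves length (an added sketch; the angle and the test vector are assumed):

```python
import numpy as np

theta = 0.3                                      # assumed rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A @ A.T, np.eye(2)))           # A A^T = I, so A is orthogonal

X = np.array([3.0, 4.0])                         # assumed vector
Y = A @ X
print(np.allclose(Y @ Y, X @ X))                 # Y^T Y = X^T X: length is preserved
```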
A few examples:
1. Verify that the following is an inner product in R²: ⟨α, β⟩ = x₁y₁ − x₁y₂ − x₂y₁ + 3x₂y₂, where α = (x₁, x₂), β = (y₁, y₂).

Ans: Here it is given that ⟨α, β⟩ = x₁y₁ − x₁y₂ − x₂y₁ + 3x₂y₂.

So we get ⟨β, α⟩ = y₁x₁ − y₁x₂ − y₂x₁ + 3y₂x₂ = ⟨α, β⟩ (the space is real, so no conjugation is needed). The expression is clearly linear in x₁ and x₂, i.e. linear in the first component. Again, we have

⟨α, α⟩ = x₁x₁ − x₁x₂ − x₂x₁ + 3x₂x₂ = x₁² + 3x₂² − 2x₁x₂ = (x₁ − x₂)² + 2x₂² ≥ 0,

with equality only when x₁ = x₂ = 0, i.e. only for α = θ. Hence ⟨α, β⟩ is an inner product on R².
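A quick numerical spot-check of these properties (an added sketch; the sample vectors are assumed):

```python
import numpy as np

def inner(alpha, beta):
    """<alpha, beta> = x1*y1 - x1*y2 - x2*y1 + 3*x2*y2 from example 1."""
    (x1, x2), (y1, y2) = alpha, beta
    return x1*y1 - x1*y2 - x2*y1 + 3*x2*y2

a = np.array([1.0, 2.0])                     # assumed sample vectors
b = np.array([-3.0, 0.5])

print(np.isclose(inner(a, b), inner(b, a)))                              # symmetry
print(np.isclose(inner(2*a + 3*b, b), 2*inner(a, b) + 3*inner(b, b)))    # linearity in 1st slot
print(inner(a, a) > 0)                                                   # positivity for a != 0
```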
2. Find the norm of the vector α = (3, 4) ∈ R² with respect to the inner product defined in the previous question.

Ans: Here the inner product of two vectors, as defined in the previous question, is ⟨α, β⟩ = x₁y₁ − x₁y₂ − x₂y₁ + 3x₂y₂, so

‖α‖ ≡ ⟨α, α⟩^(1/2) = [(x₁ − x₂)² + 2x₂²]^(1/2) = [(3 − 4)² + 2(4)²]^(1/2) = √33.
3. Show that the transformation T defined on the set of complex numbers (regarded as a vector space over the reals) by complex conjugation, T(x + iy) = x − iy, is a linear transformation.

Ans: Let us consider two vectors α and β such that α = x₁ + iy₁, β = x₂ + iy₂.
So we have T(α) = x₁ − iy₁ and T(β) = x₂ − iy₂, and we get, for real scalars a and b,

T(aα + bβ) = (ax₁ + bx₂) − i(ay₁ + by₂) = a(x₁ − iy₁) + b(x₂ − iy₂) = aT(α) + bT(β).

Hence T is a linear transformation.
4. Let S and T be two linear transformations on a two-dimensional vector space, defined by
$$S\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ x \end{pmatrix} \quad\text{and}\quad T\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ 0 \end{pmatrix}.$$
Show that ST ≠ TS and T² = T.

Ans: Here we have
$$ST\begin{pmatrix} x \\ y \end{pmatrix} = S\begin{pmatrix} x \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ x \end{pmatrix} \quad\text{and}\quad TS\begin{pmatrix} x \\ y \end{pmatrix} = T\begin{pmatrix} 0 \\ x \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
Thus ST ≠ TS. On the other hand,
$$T^2\begin{pmatrix} x \\ y \end{pmatrix} = T\!\left(T\begin{pmatrix} x \\ y \end{pmatrix}\right) = T\begin{pmatrix} x \\ 0 \end{pmatrix} = \begin{pmatrix} x \\ 0 \end{pmatrix} = T\begin{pmatrix} x \\ y \end{pmatrix}. \quad\text{So } T^2 = T.$$
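The same operators can be written as matrices and both claims checked directly (an added sketch):

```python
import numpy as np

S = np.array([[0, 0],          # S(x, y) = (0, x)
              [1, 0]])
T = np.array([[1, 0],          # T(x, y) = (x, 0)
              [0, 0]])

print(np.array_equal(S @ T, T @ S))   # False -> ST != TS
print(np.array_equal(T @ T, T))       # True  -> T^2 = T
```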
Integral Transform
The Fourier transform decomposes a function into a sum of sinusoidal basis functions, each of which is a complex exponential. Its importance is due to its practical application in various fields of science and engineering, where many difficult problems are made easier through its powerful tools.
When f(x) is a function of x satisfying Dirichlet's conditions, the Fourier integral or complex Fourier integral of f(x) may be written as
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-isx}\,ds \int_{-\infty}^{\infty} f(t)\,e^{ist}\,dt \quad\text{………… (A)}$$
Splitting the factor 1/(2π) symmetrically, this can be written as
$$f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-isx}\,F(s)\,ds \quad\text{………… (1)}$$
$$\text{where}\quad F(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,e^{ist}\,dt \quad\text{………… (2)}$$
The function F(s) given in (2) is the Fourier transform of f(t), and thus for any such function f(x) its Fourier transform is
$$F(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx}\,f(x)\,dx,$$
as in equation (2), and obviously the corresponding inverse Fourier transform is given by equation (1), which is
$$f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-isx}\,F(s)\,ds.$$
Fourier transform:
$$F[f(x)] = F(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx}\,f(x)\,dx,$$
and inverse Fourier transform:
$$F^{-1}[F(s)] = f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-isx}\,F(s)\,ds.$$
Here the factor e^(isx) is called the kernel function of the Fourier transform, K(s, x) = e^(isx), which connects one variable space to another variable space, and the factor 1/√(2π) is a scale or weight factor of the Fourier transform, which can be chosen arbitrarily when splitting the factor 1/(2π) in the original equation (A).
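As a numerical illustration of this convention (an added sketch, with an assumed Gaussian test function whose transform in this convention is again a Gaussian):

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)                       # assumed test function f(x) = exp(-x^2/2)

def F(s):
    """F(s) = (1/sqrt(2*pi)) * integral e^{isx} f(x) dx, by a simple Riemann sum."""
    return np.sum(np.exp(1j * s * x) * f) * dx / np.sqrt(2 * np.pi)

for s in (0.0, 1.0, 2.5):
    print(np.isclose(F(s), np.exp(-s**2 / 2)))   # transform of the Gaussian is exp(-s^2/2)
```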
Since the function f(x) satisfies Dirichlet's conditions, for f(x) defined in (−λ, λ) it can be expanded in the Fourier series
$$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi x}{\lambda} + \sum_{n=1}^{\infty} b_n\sin\frac{n\pi x}{\lambda} \quad\text{………… (3)}$$
$$\text{where}\quad a_0 = \frac{1}{\lambda}\int_{-\lambda}^{\lambda} f(t)\,dt,\quad a_n = \frac{1}{\lambda}\int_{-\lambda}^{\lambda} f(t)\cos\frac{n\pi t}{\lambda}\,dt,\quad b_n = \frac{1}{\lambda}\int_{-\lambda}^{\lambda} f(t)\sin\frac{n\pi t}{\lambda}\,dt.$$
Substituting these coefficients into the series,
$$f(x) = \frac{1}{2\lambda}\int_{-\lambda}^{\lambda} f(t)\,dt + \frac{1}{\lambda}\sum_{n=1}^{\infty}\int_{-\lambda}^{\lambda} f(t)\cos\frac{n\pi(t-x)}{\lambda}\,dt \quad\text{………… (4)}$$
As λ → ∞ the first term, (1/2λ)∫f(t)dt, vanishes (assuming ∫|f(t)|dt converges). Again, in the remaining sum
$$\lim_{\lambda\to\infty}\frac{1}{\lambda}\sum_{n=1}^{\infty}\int_{-\infty}^{\infty} f(t)\cos\frac{n\pi(t-x)}{\lambda}\,dt,$$
putting π/λ = c, so that c → 0 as λ → ∞, the sum over n becomes an integral over c:
$$\lim_{c\to 0}\frac{1}{\pi}\sum_{n=1}^{\infty} c\int_{-\infty}^{\infty} f(t)\cos nc(t-x)\,dt = \frac{1}{\pi}\int_{0}^{\infty}\!\!\int_{-\infty}^{\infty} f(t)\cos c(t-x)\,dt\,dc.$$
Hence
$$f(x) = \frac{1}{\pi}\int_{0}^{\infty}\!\!\int_{-\infty}^{\infty} f(t)\cos c(t-x)\,dt\,dc \quad\text{………… (5)}$$
Expanding cos c(t − x) = cos cx cos ct + sin cx sin ct, this becomes
$$f(x) = \frac{1}{\pi}\int_{0}^{\infty}\cos cx\,dc\int_{-\infty}^{\infty} f(t)\cos ct\,dt + \frac{1}{\pi}\int_{0}^{\infty}\sin cx\,dc\int_{-\infty}^{\infty} f(t)\sin ct\,dt \quad\text{…… (6)}$$
When f(x) is an odd function, f(t) cos ct is also an odd function and ∫_{−∞}^{∞} f(t) cos ct dt vanishes, while f(t) sin ct is even; then
$$f(x) = \frac{2}{\pi}\int_{0}^{\infty}\sin cx\,dc\int_{0}^{\infty} f(t)\sin ct\,dt \quad\text{……… (7)}$$
Similarly, when f(x) is an even function,
$$f(x) = \frac{2}{\pi}\int_{0}^{\infty}\cos cx\,dc\int_{0}^{\infty} f(t)\cos ct\,dt \quad\text{……… (8)}$$
Thus we can also note that, since ∫_{−∞}^{∞} f(t) sin c(t − x) dt is an odd function of c,
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} f(t)\sin c(t-x)\,dt\,dc = 0 \quad\text{…………….. (9)}$$
So from equation (5), since cos c(t − x) is an even function of c while sin c(t − x) is an odd function of c, we can write
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} f(t)\cos c(t-x)\,dt\,dc + \frac{i}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} f(t)\sin c(t-x)\,dt\,dc,$$
where the first term reproduces (5) and the second term vanishes by (9). Finally we get
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} f(t)\,e^{ic(t-x)}\,dt\,dc,$$
that is,
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-icx}\,dc\int_{-\infty}^{\infty} f(t)\,e^{ict}\,dt \quad\text{--------------- (10)}$$
Replacing the variable c by s,
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-isx}\,ds\int_{-\infty}^{\infty} f(t)\,e^{ist}\,dt.$$
And this is identical with our original equation (A); it is the basic integral underlying both the Fourier transform and the inverse Fourier transform.
Fourier Sine and Cosine Transforms:

The infinite Fourier sine transform of f(x) and its inverse are defined as
$$F_s(s) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} f(x)\sin sx\,dx \quad\text{……… (1)}$$
$$f(x) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} F_s(s)\sin sx\,ds \quad\text{………… (2)}$$
and, similarly, the Fourier cosine transform and its inverse are
$$F_c(s) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} f(x)\cos sx\,dx \quad\text{………… (3)}$$
$$f(x) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} F_c(s)\cos sx\,ds \quad\text{………… (4)}$$
This is because, from the Fourier transform
$$F[f(x)] = F(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx}\,f(x)\,dx,$$
we get from this transformation
$$F(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx}\,f(x)\,dx = \frac{2}{\sqrt{2\pi}}\int_{0}^{\infty}(\cos sx + i\sin sx)\,f(x)\,dx = F_c(s) + iF_s(s),$$
$$\text{where}\quad F_c(s) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} f(x)\cos sx\,dx \quad\text{and}\quad F_s(s) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} f(x)\sin sx\,dx.$$
Properties of the Fourier Transform:

a) Linear Property:

If F₁(s) and F₂(s) are the Fourier transforms of f₁(x) and f₂(x) respectively, then F[af₁(x) + bf₂(x)] = aF₁(s) + bF₂(s). This follows because we have
$$F_1(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx} f_1(x)\,dx \quad\text{and}\quad F_2(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx} f_2(x)\,dx,$$
$$\text{thus}\quad F[af_1(x) + bf_2(x)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx}\,[af_1(x) + bf_2(x)]\,dx$$
$$= a\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx} f_1(x)\,dx + b\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx} f_2(x)\,dx = aF_1(s) + bF_2(s).$$
b) Change of Scale Property:

If F(s) is the Fourier transform of f(x), then F[f(ax)] = (1/a)F(s/a) for a ≠ 0. Because here, as we know,
$$F(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx} f(x)\,dx, \quad\text{so}\quad F\!\left(\frac{s}{a}\right) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \exp\!\left(i\frac{s}{a}x\right) f(x)\,dx.$$
Now, putting y = ax,
$$F[f(ax)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx} f(ax)\,dx = \frac{1}{\sqrt{2\pi}}\,\frac{1}{a}\int_{-\infty}^{\infty} \exp\!\left(i\frac{s}{a}y\right) f(y)\,dy = \frac{1}{a}F\!\left(\frac{s}{a}\right), \quad a \neq 0.$$
c) Shifting Property:

If F(s) is the Fourier transform of f(x), then F[f(x − a)] = e^(isa) F(s) ……… (1)

Because, as we know,
$$F(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx} f(x)\,dx,$$
then
$$F[f(x-a)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isx} f(x-a)\,dx.$$
Thus, putting x − a = y, dx = dy,
$$F[f(x-a)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{is(y+a)} f(y)\,dy = e^{isa}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{isy} f(y)\,dy = e^{isa}F(s).$$
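This property can also be verified numerically under the same convention (an added sketch; the Gaussian test function, the shift a, and the test value of s are assumptions):

```python
import numpy as np

x = np.linspace(-30, 30, 6001)
dx = x[1] - x[0]
f = lambda t: np.exp(-t**2 / 2)          # assumed test function
a, s = 1.5, 0.8                          # assumed shift and frequency

def F(g, s):
    """F(s) = (1/sqrt(2*pi)) * integral e^{isx} g(x) dx, by a simple Riemann sum."""
    return np.sum(np.exp(1j * s * x) * g(x)) * dx / np.sqrt(2 * np.pi)

lhs = F(lambda t: f(t - a), s)           # transform of the shifted function
rhs = np.exp(1j * s * a) * F(f, s)       # e^{isa} times the transform of f
print(np.isclose(lhs, rhs))              # True
```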