
SIGNALS AND SYSTEMS

II B.Tech I Semester, Regulation: 19

PREPARED BY
CH. SURYA BABU, Asst. Professor
SYLLABUS
UNIT- I: INTRODUCTION: Definition of Signals and Systems, Classification of Signals,
Classification of Systems, Operations on signals: time-shifting, time-scaling, amplitude-
shifting, amplitude-scaling. Problems on classification and characteristics of Signals
and Systems. Complex exponential and sinusoidal signals, Singularity functions and
related functions: impulse function, step function, signum function and ramp function.
Analogy between vectors and signals, orthogonal signal space, Signal approximation
using orthogonal functions, Mean square error, closed or complete set of orthogonal
functions, Orthogonality in complex functions. Related Problems.

UNIT –II: FOURIER SERIES AND FOURIER TRANSFORM:


Fourier series representation of continuous time periodic signals, properties of Fourier
series, Dirichlet’s conditions, Trigonometric Fourier series and Exponential Fourier
series, Relation between Trigonometric and Exponential Fourier series, Complex
Fourier spectrum. Deriving Fourier transform from Fourier series, Fourier transform
of arbitrary signal, Fourier transform of standard signals, Fourier transform of
periodic signals, properties of Fourier transforms, Fourier transforms involving
impulse function and Signum function. Introduction to Hilbert Transform.
Related Problems.
SYLLABUS
UNIT-III: ANALYSIS OF LINEAR SYSTEMS: Introduction, Linear system, impulse
response, Response of a linear system, Linear time invariant (LTI) system, Linear
time variant (LTV) system, Concept of convolution in time domain and frequency
domain, Graphical representation of convolution, Transfer function of a LTI
system, Related problems. Filter characteristics of linear systems. Distortion less
transmission through a system, Signal bandwidth, system bandwidth, Ideal LPF,
HPF and BPF characteristics, Causality and Paley-Wiener criterion for physical
realization, relationship between bandwidth and rise time.

UNIT –IV:
CORRELATION: Auto-correlation and cross-correlation of functions, properties
of correlation function, Energy density spectrum, Parseval’s theorem, Power
density spectrum, Relation between Convolution and correlation, Detection of
periodic signals in the presence of noise by correlation, Extraction of signal from
noise by filtering.
SAMPLING THEOREM : Graphical and analytical proof for Band Limited Signals,
impulse sampling, Natural and Flat top Sampling, Reconstruction of signal from
its samples, effect of under sampling – Aliasing, Introduction to Band Pass
sampling, Related problems.
SYLLABUS

UNIT –V:
LAPLACE TRANSFORMS: Introduction, Concept of region of convergence (ROC)
for Laplace transforms, constraints on ROC for various classes of signals,
Properties of L.T’s, Inverse Laplace transform, Relation between L.T’s, and F.T. of
a signal. Laplace transform of certain signals using waveform synthesis.
Z–TRANSFORMS: Concept of Z- Transform of a discrete sequence. Region of
convergence in Z-Transform, constraints on ROC for various classes of signals,
Inverse Z- transform, properties of Z-transforms. Distinction between Laplace,
Fourier and Z transforms.

TEXT BOOKS:
1. Signals, Systems & Communications - B.P. Lathi, BS Publications, 2003.
2. Signals and Systems - A.V. Oppenheim, A.S. Willsky and S.H. Nawab, PHI, 2nd Edn., 1997.
3. Signals & Systems - Simon Haykin and Barry Van Veen, Wiley, 2nd Edition, 2007.

REFERENCE BOOKS:
1. Principles of Linear Systems and Signals - B.P. Lathi, Oxford University Press, 2015.
2. Signals and Systems - T.K. Rawat, Oxford University Press, 2011.
COURSE OBJECTIVES & OUTCOMES
Course Objectives:
The main objectives of this course are given below:
1. To study signals and systems.
2. To analyze the spectral characteristics of signals using Fourier series and
Fourier transforms.
3. To understand the characteristics of systems.
4. To introduce the concept of the sampling process.
5. To know various transform techniques to analyze signals and systems.

Course Outcomes: At the end of this course the student will be able to:
1. Differentiate the various classifications of signals and systems
2. Analyze the frequency domain representation of signals using Fourier
concepts
3. Classify the systems based on their properties and determine the response of
LTI Systems.
4. Know the sampling process and various types of sampling techniques.
5. Apply Laplace and z-transforms to analyze signals and Systems (continuous
&discrete).
UNIT – I
Signal Analysis
Analogy between Vectors and Signals


There is a perfect analogy between vectors and signals, which gives a better
understanding of signal analysis.

A vector has both magnitude and direction.

We shall denote vectors by boldface type and their magnitudes by lightface
type. For example, A is a certain vector with magnitude A.
3
Analogy between Vectors and Signals

● Consider two vectors V1 and V2 as shown in the figure. Let the component of
V1 along V2 be given by C12V2.

● Geometrically, the component of V1 along V2 is obtained by dropping a
perpendicular from the end of V1 onto V2.

● V1 = C12V2 + Ve, where Ve is the error vector.

4
Analogy between Vectors and Signals
The error vector Ve is minimum when C12V2 is obtained by dropping a
perpendicular from the end of V1 onto V2.

[Figure: the same V1 resolved along V2 with two other choices of coefficient,
V1 = C1V2 + Ve1 and V1 = C2V2 + Ve2, gives larger error vectors Ve1 and Ve2.]
5
Analogy between Vectors and Signals

● If C12 is zero, then the vector has no component along the


other vector and hence the two vectors are mutually
perpendicular.

Such vectors are known as orthogonal
vectors.

Orthogonal vectors are thus independent
vectors.

6
Analogy between Vectors and Signals

● A·B = AB cos θ

● A·B = B·A

● Component of A along B = A cos θ = (A·B)/B

● Component of B along A = B cos θ = (A·B)/A

● Component of V1 along V2 = (V1·V2)/V2 = C12 V2

7
Analogy between Vectors and Signals

C12 = (V1·V2)/V2² = (V1·V2)/(V2·V2)

● If V1 and V2 are orthogonal, then V1·V2 = 0 and C12 = 0.

8
Analogy between Vectors and Signals

The concept of vector comparison and orthogonality can
be extended to signals.

● Let us consider two signals f1(t) and f2(t), and approximate f1(t) in terms of
f2(t) over a certain interval (t1 < t < t2):

f1(t) ≈ C12 f2(t)  for (t1 < t < t2)

● The error between the actual and the approximated function,

fe(t) = f1(t) − C12 f2(t),

should be minimum over the interval (t1 < t < t2).
Analogy between Vectors and Signals

● One possible criterion for minimizing the error fe(t) over the interval is to
minimize the average value of fe(t), i.e. to minimize

(1/(t2−t1)) ∫[t1,t2] [f1(t) − C12 f2(t)] dt

● This criterion is inadequate because large positive and negative errors can
cancel one another in the averaging process, making the average error zero.
10
Analogy between Vectors and Signals


This can be corrected if we choose the square of the error instead of the error
itself:

ε = (1/(t2−t1)) ∫[t1,t2] [fe(t)]² dt

ε = (1/(t2−t1)) ∫[t1,t2] [f1(t) − C12 f2(t)]² dt

11
Analogy between Vectors and Signals

● To find the value of C12 which minimizes ε, we must have

dε/dC12 = 0

That is,

d/dC12 [ (1/(t2−t1)) ∫[t1,t2] [f1(t) − C12 f2(t)]² dt ] = 0

12
Analogy between Vectors and Signals

● Changing the order of integration and differentiation, we get

(1/(t2−t1)) [ d/dC12 ∫[t1,t2] f1²(t) dt − 2 ∫[t1,t2] f1(t) f2(t) dt + 2 C12 ∫[t1,t2] f2²(t) dt ] = 0

The first term is obviously zero, and hence

C12 = ( ∫[t1,t2] f1(t) f2(t) dt ) / ( ∫[t1,t2] f2²(t) dt )
13
Analogy between Vectors and Signals

● By analogy with vectors, f1(t) has a component of waveform f2(t), and this
component has a magnitude C12.

● If C12 vanishes, the signal f1(t) contains no component of signal f2(t), and the
two functions are orthogonal over the interval (t1, t2).

Condition for orthogonality:

∫[t1,t2] f1(t) f2(t) dt = 0
14
Analogy between Vectors and Signals

● It can be shown that the functions sin nω0t and sin mω0t are orthogonal over
any interval (t0, t0 + 2π/ω0) for integer values of m and n with m ≠ n.

Consider the integral

I = ∫[t0, t0+2π/ω0] sin(nω0t) sin(mω0t) dt

I = (1/2) ∫[t0, t0+2π/ω0] [cos((n−m)ω0t) − cos((n+m)ω0t)] dt
Analogy between Vectors and Signals

● Since n and m are integers, (n−m) and (n+m) are also nonzero integers when
m ≠ n, so each cosine term integrates to zero over the full period.

● In that case the integral I is zero.

● Hence, the two functions are orthogonal.
● Similarly, it can be shown that sin nω0t and cos mω0t are
orthogonal functions and cos nω0t , cos mω0t are also
mutually orthogonal.

16
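As a quick numerical check of this orthogonality (a minimal sketch using numpy; the fundamental frequency and the start of the interval are arbitrary choices, not from the slides):

```python
import numpy as np

w0 = 2 * np.pi          # fundamental frequency (arbitrary choice)
t0 = 0.3                # arbitrary start of the interval
t = np.linspace(t0, t0 + 2 * np.pi / w0, 100001)

def inner(f, g):
    # approximate the integral of f(t)*g(t) over one period
    return np.trapz(f * g, t)

for n in range(1, 4):
    for m in range(1, 4):
        val = inner(np.sin(n * w0 * t), np.sin(m * w0 * t))
        print(f"n={n}, m={m}: integral ≈ {val:.6f}")
# off-diagonal (m != n) results are ~0, confirming orthogonality;
# for m == n the integral equals half the period (pi/w0).
```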
Analogy between Vectors and Signals
Graphical Evaluation of a Component of one Function in the
other

17
Analogy between Vectors and Signals
Orthogonal Vector Space

Analogy can be extended further to three-dimensional space.

[Figure: a vector A(x0, y0, z0) in rectangular coordinates with components x0,
y0 and z0 along the x, y and z axes.]
18
Analogy between Vectors and Signals
Orthogonal Vector Space
● Component of A along the x axis = A.ax
● Component of A along the y axis = A.ay
● Component of A along the z axis = A.az

A= x0ax+y0ay+z0az

ax.ay=ay.az=az.ax=0 ax.ax=ay.ay=az.az=1

19
Analogy between Vectors and Signals
Orthogonal Vector Space

am.an= 0 m≠n
= 1 m=n
Considering n mutually perpendicular
coordinates

A = C1x1+C2x2+C3x3+.....+Cnxn

xm.xn= 0 m≠n
= 1 m=n 20
Analogy between Vectors and Signals
Orthogonal Vector Space
For an orthogonal vector space, taking the dot product of A with xr gives

A·xr = Cr xr·xr = Cr kr,   so   Cr = (A·xr)/kr

where

xm·xn = 0   for m ≠ n
      = km  for m = n

21
Analogy between Vectors and Signals
Orthogonal Vector Space
If the vector space is complete, any vector F can be expressed as

F = C1x1 + C2x2 + C3x3 + ..... + Crxr + .....

Cr = (F·xr)/kr = (F·xr)/(xr·xr)
22
Analogy between Vectors and Signals
Orthogonal Signal Space
Let us consider a set of n functions g1(t), g2(t), ...., gn(t) which are orthogonal to
one another over an interval t1 to t2:

∫[t1,t2] gj(t) gk(t) dt = 0   for j ≠ k

And let

∫[t1,t2] gj²(t) dt = Kj

23
Analogy between Vectors and Signals
Orthogonal Signal Space
Let an arbitrary function f(t) be approximated over an interval (t1, t2) by a linear
combination of these n mutually orthogonal functions:

f(t) ≈ C1g1(t) + C2g2(t) + ....... + Ckgk(t) + ...... + Cngn(t)

f(t) ≈ Σ[r=1 to n] Cr gr(t)

24
Analogy between Vectors and Signals
Orthogonal Signal
Space

fe(t) = f(t) − Σ[r=1 to n] Cr gr(t)

ε = (1/(t2−t1)) ∫[t1,t2] [ f(t) − Σ[r=1 to n] Cr gr(t) ]² dt

∂ε/∂C1 = ∂ε/∂C2 = ... = ∂ε/∂Cj = ... = ∂ε/∂Cn = 0

25
Analogy between Vectors and Signals
Orthogonal Signal
Space

∂ε/∂Cj = 0

∂/∂Cj [ ∫[t1,t2] ( f(t) − Σ[r=1 to n] Cr gr(t) )² dt ] = 0

When expanded, every term not containing Cj has zero derivative with respect
to Cj:

∂/∂Cj ∫[t1,t2] f²(t) dt = 0,   ∂/∂Cj ∫[t1,t2] Cr² gr²(t) dt = 0,   ∂/∂Cj ∫[t1,t2] Cr f(t) gr(t) dt = 0   (r ≠ j)

26
Analogy between Vectors and Signals
Orthogonal Signal Space

This leaves only two non-zero terms:

∂/∂Cj ∫[t1,t2] [ −2 Cj f(t) gj(t) + Cj² gj²(t) ] dt = 0

Changing the order of integration and differentiation,

2 ∫[t1,t2] f(t) gj(t) dt = 2 Cj ∫[t1,t2] gj²(t) dt

27
Analogy between Vectors and Signals
Orthogonal Signal
Space

Therefore,

Cj = ( ∫[t1,t2] f(t) gj(t) dt ) / ( ∫[t1,t2] gj²(t) dt ) = (1/Kj) ∫[t1,t2] f(t) gj(t) dt

28
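A small numerical sketch of this result (the signal f(t) and the sine basis are my own example choices, not from the slides): each coefficient Cj is obtained by projecting f(t) onto the corresponding orthogonal function.

```python
import numpy as np

t1, t2 = 0.0, 1.0
t = np.linspace(t1, t2, 200001)
f = np.where(t < 0.5, 1.0, -1.0)           # example signal: a square pulse

# mutually orthogonal basis over (t1, t2): sin(2*pi*r*t), r = 1..n
n = 5
g = [np.sin(2 * np.pi * r * t) for r in range(1, n + 1)]

C = []
for gj in g:
    Kj = np.trapz(gj * gj, t)              # ∫ gj^2 dt
    Cj = np.trapz(f * gj, t) / Kj          # Cj = (1/Kj) ∫ f(t) gj(t) dt
    C.append(Cj)

approx = sum(Cj * gj for Cj, gj in zip(C, g))
mse = np.trapz((f - approx) ** 2, t) / (t2 - t1)
print("coefficients:", np.round(C, 4))
print("mean square error with", n, "terms:", round(mse, 4))
```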
Analogy between Vectors and Signals
Orthogonal Signal Space

●Given a set of n functions g1(t),g2(t),.......gn(t)


mutually orthogonal over the interval (t1,t2),it
is possible to approximate an arbitrary
function f(t) over the interval by a linear
combination of these
n functions.

f(t)≈C1g1(t)+C2g2(t)+.......+Ckgk(t)+......Cngn(t)
n 29
f (t )=∑ Cr gr (t )
r =1
Analogy between Vectors and Signals
Orthogonal Signal Space

For the best approximation we have to choose C1, C2, ......, Cn such that they
minimize the mean of the square of the error over the interval.

30
Analogy between Vectors and Signals

Evaluation of Mean Square Error

Let us find the value of ε when the optimum values of the coefficients
C1, C2, ...., Cn are chosen as above:

ε = (1/(t2−t1)) ∫[t1,t2] [ f(t) − Σ[r=1 to n] Cr gr(t) ]² dt

ε = (1/(t2−t1)) [ ∫[t1,t2] f²(t) dt + Σ[r=1 to n] Cr² ∫[t1,t2] gr²(t) dt
    − 2 Σ[r=1 to n] Cr ∫[t1,t2] f(t) gr(t) dt ]
Analogy between Vectors and Signals

Evaluation of Mean Square Error

But from the previous approximation,

∫[t1,t2] f(t) gr(t) dt = Cr ∫[t1,t2] gr²(t) dt = Cr Kr

Substituting this in the above equation,

ε = (1/(t2−t1)) [ ∫[t1,t2] f²(t) dt + Σ[r=1 to n] Cr² Kr − 2 Σ[r=1 to n] Cr² Kr ]

32
Analogy between Vectors and Signals
Evaluation of Mean Square Error

So, the error is

ε = (1/(t2−t1)) [ ∫[t1,t2] f²(t) dt − Σ[r=1 to n] Cr² Kr ]

This implies the mean square error can be evaluated by

ε = (1/(t2−t1)) [ ∫[t1,t2] f²(t) dt − (C1² K1 + C2² K2 + .... + Cn² Kn) ]

33
Analogy between Vectors and Signals

Representation of a Function by a Complete


Set of Mutually Orthogonal Signals


From the above equation it is evident that if we increase n, i.e., if we
approximate f(t) by a larger number of orthogonal functions, the error becomes
smaller.

But by its very definition, ε is a positive quantity; in the limit, as the number of
terms is made infinite, the sum

Σ[r=1 to ∞] Cr² Kr

may converge to the integral

∫[t1,t2] f²(t) dt
Analogy between Vectors and Signals

Representation of a Function by a Complete


Set of Mutually Orthogonal Signals

When the integral and the summation converge to each other, ε vanishes:

∫[t1,t2] f²(t) dt = Σ[r=1 to ∞] Cr² Kr

Under these conditions f(t) is represented by the infinite series

f(t) = C1 g1(t) + C2 g2(t) + .... + Cr gr(t) + ...


35
Analogy between Vectors and Signals

Representation of a Function by a Complete


Set of Mutually Orthogonal Signals

The infinite series on the right-hand side of the above equation converges to
f(t) such that the mean square of the error is zero.

The series is said to converge in the mean.

Note that the representation of f(t) is now exact.

The set {gr(t)} is complete if there is no other function x(t) that is orthogonal
to every gr(t).

Analogy between Vectors and Signals

Representation of a Function by a Complete


Set of Mutually Orthogonal Signals
● Let us now summarize the results. For a set {gr(t)}, (r = 1, 2, ....) mutually
orthogonal over the interval (t1, t2),

∫[t1,t2] gm(t) gn(t) dt = 0    if m ≠ n
                       = Km   if m = n

37
Analogy between Vectors and Signals

Representation of a Function by a Complete


Set of Mutually Orthogonal Signals


If this function set is complete, then any
function f(t), can be expressed as
f (t )=C1 g1 (t )+C2 g2 (t )+.... Cr gr (t )+.. .
t2 t2
wher ∫ f (t ) gr (t ) ∫t f (t ) gr(t )
e
C r= dt = dt
1
t 1
t2
Kr
∫g 2r(t )dt
t1 38
Analogy between Vectors and Signals

Representation of a Function by a Complete Set


of Mutually Orthogonal Signals
●This draws an analogy between vectors and
signals.
● Any vector can be expressed as a sum of its

components along ‘n’ mutually orthogonal vectors,


provided these vectors form a complete set.

Similarly, any function f(t) can be expressed as a sum
of its components along mutually orthogonal
functions, provided these functions form a closed or
complete set.
39
Analogy between Vectors and Signals

A·B  ~  ∫[t1,t2] fA(t) fB(t) dt

A·A = A²  ~  ∫[t1,t2] fA²(t) dt

If a vector is expressed in terms of its mutually orthogonal components, the
square of its length is given by the sum of the squares of the lengths of the
component vectors.
40
Analogy between Vectors and Signals

● Representation of f(t) by a set of infinite mutually orthogonal functions is
called the generalized Fourier series representation of f(t).
41
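To illustrate a generalized Fourier series with a non-sinusoidal orthogonal set (a sketch; the use of Legendre polynomials on (−1, 1) and the function e^t are my choices, not from the slides), the mean square error shrinks as more terms are used and Σ Cr²Kr approaches ∫ f² dt:

```python
import numpy as np
from numpy.polynomial import legendre

t = np.linspace(-1, 1, 200001)
f = np.exp(t)                                  # arbitrary example function

energy = np.trapz(f * f, t)                    # ∫ f^2 dt over (-1, 1)
total = 0.0
for r in range(6):
    coeffs = np.zeros(r + 1); coeffs[r] = 1
    g = legendre.legval(t, coeffs)             # Legendre polynomial P_r(t)
    K = np.trapz(g * g, t)                     # K_r = 2/(2r+1)
    C = np.trapz(f * g, t) / K
    total += C * C * K
    mse = (energy - total) / 2.0               # ε = (1/(t2-t1)) [∫f² - Σ C²K]
    print(f"terms up to r={r}: mean square error ≈ {mse:.6f}")
# the error decreases toward zero: Σ Cr²Kr → ∫ f² dt (convergence in the mean)
```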
Analogy between Vectors and Signals

Orthogonality in Complex Functions

● Let us consider two signals f1(t) and f2(t) that are complex functions of the
real variable t, over a certain interval (t1 < t < t2):

f1(t) ≈ C12 f2(t)  for (t1 < t < t2)

C12 = ( ∫[t1,t2] f1(t) f2*(t) dt ) / ( ∫[t1,t2] f2(t) f2*(t) dt )
Analogy between Vectors and Signals

Orthogonality in Complex Functions

Condition for orthogonality:

∫[t1,t2] f1(t) f2*(t) dt = ∫[t1,t2] f1*(t) f2(t) dt = 0

43
Analogy between Vectors and Signals

Orthogonality in Complex Functions

For a complete set of functions {gr(t)}, (r = 1, 2, ...) mutually orthogonal over the
interval (t1, t2):

∫[t1,t2] gm(t) gn*(t) dt = 0    for m ≠ n

∫[t1,t2] gm(t) gn*(t) dt = Km   for m = n
44
Analogy between Vectors and Signals

Orthogonality in Complex Functions

If this set of functions is complete, then any function f(t) can be expressed as

f(t) ≈ C1g1(t) + C2g2(t) + ....... + Crgr(t) + .....

Cr = (1/Kr) ∫[t1,t2] f(t) gr*(t) dt

45
Analogy between Vectors and Signals

Orthogonality in Complex Functions

● If this set of functions is real, then gr*(t) = gr(t) and all the results for complex
functions reduce to those obtained earlier for real functions.

46
Analogy between Vectors and Signals

Summary
i) With two functions:

Vectors:  C12 = (V1·V2)/V2² = (V1·V2)/(V2·V2);  orthogonal when V1·V2 = 0, so C12 = 0.

Signals:  C12 = ( ∫[t1,t2] f1(t) f2(t) dt ) / ( ∫[t1,t2] f2²(t) dt );  orthogonal when
∫[t1,t2] f1(t) f2(t) dt = 0.
47
Analogy between Vectors and Signals

Summary
ii) With n-dimensional functions:

Vectors:  A = C1x1 + C2x2 + C3x3 + ..... + Cnxn,   Cr = (A·xr)/kr

Signals:  f(t) ≈ C1g1(t) + C2g2(t) + ..... + Cngn(t) = Σ[r=1 to n] Cr gr(t)

Cj = ( ∫[t1,t2] f(t) gj(t) dt ) / ( ∫[t1,t2] gj²(t) dt ) = (1/Kj) ∫[t1,t2] f(t) gj(t) dt
48
Analogy between Vectors and Signals

Summary
iii) For a complete set of mutually orthogonal functions:

Vectors:  F = C1x1 + C2x2 + C3x3 + ..... + Crxr + .....,   Cr = (F·xr)/kr = (F·xr)/(xr·xr)

Signals:  f(t) = C1g1(t) + C2g2(t) + .... + Crgr(t) + ...

Cr = ( ∫[t1,t2] f(t) gr(t) dt ) / ( ∫[t1,t2] gr²(t) dt ) = (1/Kr) ∫[t1,t2] f(t) gr(t) dt

49
Analogy between Vectors and Signals

Summary
iv) For complex functions:

f(t) ≈ C1g1(t) + C2g2(t) + ....... + Crgr(t) + .....

Cr = (1/Kr) ∫[t1,t2] f(t) gr*(t) dt

C12 = ( ∫[t1,t2] f1(t) f2*(t) dt ) / ( ∫[t1,t2] f2(t) f2*(t) dt )
Classification of Signals and Systems

Signals

A signal is a function representing a physical quantity or
variable, and typically it contains information about the behavior
or nature
of the phenomenon.

Signals are represented by real- or complex-valued functions of
one or more independent variables.

They may be one-dimensional, that is, functions of only
one independent variable, or multidimensional.

51
Classification of Signals and Systems

Classification of Signals

Signals can be classified into:

1. Continuous-time and Discrete-time


signals
2. Analog and Digital Signals
3. Real and Complex Signals
4. Deterministic and Random Signals
5. Even and Odd signals
6. Periodic and Non-periodic signals
7. Energy and Power signals
Classification of Signals and Systems

Continuous-time and Discrete-time signals


A signal x(t) is a continuous-time signal if t is a
continuous variable.

If t is a discrete variable-that is, x(t) is defined at discrete
times- then x(t) is a discrete-time signal.

Since a discrete-time signal is defined at discrete times,
a discrete-time signal is often identified as a sequence of
numbers, denoted by {xn} or x[n], where n = integer.
53
Classification of Signals and Systems
Continuous-time and Discrete-time signals

Continuous Time Signal Discrete Time Signal

54
Classification of Signals and Systems
Continuous-time and Discrete-time signals

Representation of discrete signals


55
Classification of Signals and Systems
Analog and Digital Signals


If a continuous-time signal x(t) can take on any value in the
continuous interval (a, b), where a may be -∞ and b may be
+∞ , then the continuous-time signal x(t) is called an analog
signal.

If a discrete-time signal x[n] can take on only a finite number
of distinct values, then we call this signal a digital signal.

56
Classification of Signals and Systems
Analog and Digital Signals

57
Classification of Signals and Systems
Real and Complex Signals

A signal x(t) is a real signal if its value is a real number,


and a signal x(t) is a complex signal if its value is a
complex number.

A general complex signal x(t) is a function of the form

x (t )=x1 (t )+ jx2 (t )

58
Classification of Signals and Systems
Deterministic and Random Signals

Deterministic signals are those signals whose values are
completely specified for any given time. Thus, a deterministic
signal can be modeled by a known function of time t.

Random signals are those signals that take random values at
any given time and must be characterized statistically.

59
Classification of Signals and Systems
Even and Odd Signals

Even signal:  x(−t) = x(t)        x[−n] = x[n]

Odd signal:   x(−t) = −x(t)       x[−n] = −x[n]

60
Classification of Signals and Systems
Even and Odd Signals

61
Classification of Signals and Systems
Even and Odd Signals

Any signal can be split into even and odd
parts
● x(t) = xe(t) + xo(t)

x[n] = xe[n] + xo[n]

62
Classification of Signals and Systems
Even and Odd Signals


xe(t) = (1/2) {x(t) + x(−t)}     even part of x(t)

xe[n] = (1/2) {x[n] + x[−n]}     even part of x[n]

xo(t) = (1/2) {x(t) − x(−t)}     odd part of x(t)

xo[n] = (1/2) {x[n] − x[−n]}     odd part of x[n]

63
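A short numpy sketch of this decomposition (the example sequence is an arbitrary choice, not from the slides):

```python
import numpy as np

n = np.arange(-4, 5)                      # symmetric index range
x = np.where(n >= 0, 0.8 ** n, 0.0)       # example: x[n] = 0.8^n u[n]

x_rev = x[::-1]                           # x[-n] on the same index grid
xe = 0.5 * (x + x_rev)                    # even part
xo = 0.5 * (x - x_rev)                    # odd part

assert np.allclose(xe + xo, x)            # the parts reconstruct the signal
assert np.allclose(xe, xe[::-1])          # even part is symmetric
assert np.allclose(xo, -xo[::-1])         # odd part is antisymmetric
print(np.round(xe, 3))
print(np.round(xo, 3))
```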
Classification of Signals and Systems
Periodic and Non-Periodic Signals

A continuous-time signal x(t) is said to be periodic with period T if there is a
positive nonzero value of T for which

x(t + T) = x(t)    for all t
x(t + mT) = x(t)   for m an integer

● The fundamental period T0 of x(t) is the smallest such positive value of T.
● This definition does not work for a constant signal x(t) (known as a dc signal):
for a constant signal the fundamental period is undefined, since x(t) is periodic
for any choice of T.
Classification of Signals and Systems
Periodic and Non-Periodic Signals

Continuous Periodic Signal


● Any continuous-time signal which is not periodic is
called a
nonperiodic signal. 65
Classification of Signals and Systems
Periodic and Non-Periodic Signals

For a discrete-time signal,


x[n + N] = x[n] all n

x[n +m N] = x[n] for m an integer

The fundamental period N0 of x[n] is the smallest positive integer N.

66
Classification of Signals and Systems
Periodic and Non-Periodic Signals

Periodic
Sequence
67
Classification of Signals and Systems
Periodic and Non-Periodic Signals


Note that a sequence obtained by uniform sampling of a
periodic continuous-time signal may not be periodic.

Note also that the sum of two continuous-time periodic
signals may not be periodic but that the sum of two periodic
sequences is always periodic.

68
Classification of Signals and Systems
Energy and Power Signals

Consider v(t) to be the voltage across a resistor R producing a current i(t).

The instantaneous power p(t) per ohm is defined as

p(t) = v(t) i(t) / R = i²(t)

Total energy:    E = ∫[−∞,∞] i²(t) dt

Average power:   P = lim[T→∞] (1/T) ∫[−T/2, T/2] i²(t) dt
69
Classification of Signals and Systems
Energy and Power Signals

For an arbitrary continuous-time signal x(t), the normalized energy content E of
x(t) is defined as

E = ∫[−∞,∞] |x(t)|² dt

The normalized average power is

P = lim[T→∞] (1/T) ∫[−T/2, T/2] |x(t)|² dt
70
Classification of Signals and Systems
Energy and Power Signals

Similarly, for a discrete-time signal x[n], the normalized energy content E of x[n]
is defined as

E = Σ[n=−∞ to ∞] |x[n]|²

● The normalized average power P of x[n] is defined as

P = lim[N→∞] (1/(2N+1)) Σ[n=−N to N] |x[n]|²
71
Classification of Signals and Systems
Energy and Power Signals
● A signal with finite energy has zero power. (ENERGY SIGNAL)

● A signal with finite power has infinite energy. (POWER SIGNAL)

● A signal cannot both be an energy signal and a power signal.

● There are signals, that are neither energy nor power signals.

● A periodic signal is a power signal if its energy content per period is finite,
and then the average power of this signal need only be calculated over a
period. Not all periodic signals are power signals.

73
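As a numerical sketch (the example signals are my choices, not from the slides): a decaying exponential behaves as an energy signal, while a sinusoid behaves as a power signal.

```python
import numpy as np

T = 2000.0
t = np.linspace(-T / 2, T / 2, 2_000_001)

x_energy = np.exp(-np.abs(t))            # energy signal: finite E, P -> 0
x_power = np.cos(2 * np.pi * t)          # power signal: finite P, E grows with T

def energy(x):
    return np.trapz(np.abs(x) ** 2, t)

def avg_power(x):
    return energy(x) / T                 # (1/T) ∫ |x|^2 dt over a large T

print("exp(-|t|):  E ≈", round(energy(x_energy), 3), " P ≈", round(avg_power(x_energy), 6))
print("cos(2πt):   E ≈", round(energy(x_power), 1),  " P ≈", round(avg_power(x_power), 3))
# E of exp(-|t|) ≈ 1 (finite) with P -> 0 as T grows; cos: E -> ∞ while P ≈ 0.5
```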
Operations on Signals

• Sometimes a given mathematical function may completely describe a signal.
• Different operations are required for different purposes on arbitrary signals.
• The operations on signals include:
  – Time shifting
  – Time scaling
  – Time inversion (time folding)
74
Operations on Signals

Time Shifting
x(t ± t0) is a time-shifted version of the signal x(t).
x(t + t0) → shift to the left (advance)
x(t − t0) → shift to the right (delay)

75
Operations on Signals

Time Scaling
x(At) is a time-scaled version of the signal x(t), where A is always positive.
|A| > 1 → compression of the signal
|A| < 1 → expansion of the signal

76
Operations on Signals

Time Scaling

Example: Given x(t), we are to find y(t) = x(2t).

The period of x(t) is 2 and the period of y(t) is 1.


Operations on Signals

Time Scaling

• Given y(t),
find w(t) = y(3t)
and v(t) = y(t/3)

78
Operations on Signals

Time Reversal (Or) Time Folding

• Time reversal is also called time folding.
• In time reversal, the signal is reversed with respect to time, i.e.
y(t) = x(−t) is obtained for the given function.

79
Operations on Signals

Time Reversal (Or) Time


Folding

80
Operations on Signals

Amplitude Scaling

C x(t) is an amplitude-scaled version of x(t) whose amplitude is scaled by a
factor C.

81
Operations on Signals

Addition

82
Operations on Signals

Subtraction

83
Operations on Signals

Multiplication
Multiplication of the amplitudes of two or more signals is done at each instant
of time (or of any other common independent variable).

84
Operations on Signals

Time Shifting for discrete sequences

Time shifting:  n → n − n0,  with n0 an integer

85
Operations on Signals

Scaling for discrete sequences

n → Kn,  with K an integer > 1

86

13
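A short numpy sketch of these operations on a finite-length example sequence (the triangular sequence and shift amount are arbitrary choices, not from the slides):

```python
import numpy as np

n = np.arange(-5, 6)
x = np.maximum(0, 3 - np.abs(n)).astype(float)    # example triangular sequence

# time shifting: y[n] = x[n - 2] (delay), values move to the right on this grid
y_shift = np.roll(x, 2)
y_shift[:2] = 0                                   # discard wrapped-around samples

# time folding: y[n] = x[-n]
y_fold = x[::-1]

# time scaling (down-sampling): y[n] = x[2n], keep samples at even original indices
mask = (n % 2 == 0)
y_scale = x[mask]                                 # defined on the index set n/2 = -2..2

print("n      :", n)
print("x[n]   :", x)
print("x[n-2] :", y_shift)
print("x[-n]  :", y_fold)
print("x[2n]  :", y_scale)
```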
Classification of Signals and Systems

Systems and Classification


● A system is a mathematical model of a physical process that relates the input
(or excitation) signal to the output (or response) signal.
● Let x and y be the input and output signals, respectively, of a system. Then the
system is viewed as a transformation (or mapping) of x into y:

y = Tx
87
Classification of Signals and Systems

Deterministic and Stochastic Systems



If the input and output signals x and y are deterministic
signals, then the system is called a deterministic system.
● If the input and output signals x and y are random
signals, then the system is called a stochastic system.

88
Classification of Signals and Systems

Continuous-Time and Discrete-Time Systems

● A continuous-time system is characterized by a differential equation.
● A discrete-time system is often expressed by a difference equation.
Classification of Signals and Systems

Systems with Memory and without Memory



A system is said to be memoryless if the output at any
time depends on only the input at that same time.

Otherwise, the system is said to have
memory.

An example of a memoryless system is a resistor R with the input x(t) taken as
the current and the voltage taken as the output y(t):

y(t) = R x(t)

90
Classification of Signals and Systems

Systems with Memory and without Memory

● An example of a system with memory is a capacitor C with the current as the
input x(t) and the voltage as the output y(t):

y(t) = (1/C) ∫[−∞, t] x(τ) dτ

● A discrete-time example with memory:

y[n] = Σ[k=−∞ to n] x[k]
Classification of Signals and Systems

Causal and Non-Causal Systems


A system is called causal if its output at the present time
depends on only the present and/or past values of the
input.
● Thus, in a causal system, it is not possible to obtain
an output before an input is applied to the system.

A system is called noncausal (or anticipative) if its output at
the present time depends on future values of the input.

92
Classification of Signals and Systems

Causal and Non-Causal Systems

Examples of non-causal Systems


y (t )=x (t +1)

y [ n]= x [−n ]


Note that all memoryless systems are causal, but not vice
versa.

93
Classification of Signals and Systems

Linear Systems and Nonlinear Systems


A system is said to be linear if it possesses additivity and homogeneity.

● T{x1 + x2} = y1 + y2            (Additivity)
● T{ax} = ay                      (Homogeneity or Scaling)
● T{a1x1 + a2x2} = a1y1 + a2y2    (Superposition)
94
Classification of Signals and Systems

Linear Systems and Nonlinear Systems

● A consequence of homogeneity is that, for a linear system, zero input yields
zero output.

Examples of nonlinear systems:

y = x²        y = cos x

95
Classification of Signals and Systems

Time In-Variant and Time Varying Systems


A system is called time-invariant if a time shift (delay or
advance) in the input signal causes the same time shift in the
output signal.
T{x(t -τ )} = y(t - τ)
T{x[n - k]} = y[n - k]


To check a system for time-invariance, we can compare the
shifted output with the output produced by the shifted input.
96
Classification of Signals and Systems

Linear Time-Invariant Systems

●If the system is linear and also time-


invariant, then it is called a linear time-
invariant (LTI) system.

97
Classification of Signals and Systems

Stable Systems
● A system is bounded-input/bounded-output (BIBO) stable if for any bounded
input x defined by

|x| ⩽ k1

the corresponding output y is also bounded, defined by

|y| ⩽ k2

where k1 and k2 are finite real constants.

● An unstable system is one in which not all bounded inputs lead to a bounded
output.
Standard Signals

Unit Step Signal

● The unit step function u(t), also known as the Heaviside unit function, is
defined as

u(t) = 1 for t > 0
     = 0 for t < 0

Note that it is discontinuous at t = 0 and that the value at t = 0 is undefined.


99
Standard Signals

Unit Step Signal


Time shifted version of unit step
signal

100
Standard Signals
Unit Impulse Function


The unit impulse function δ(t), also known as the Dirac delta function, is
defined by δ(t) = 0 for t ≠ 0, together with ∫[−∞,∞] δ(t) dt = 1.

101
[Figures: graphical representations of the unit impulse function and its limiting
approximations.]
Standard Signals
Unit Impulse Function

The area under an impulse is called its strength or
weight. It is represented graphically by a vertical arrow.
An impulse with a strength of one is called a unit
impulse.

105
Standard Signals
Unit Impulse Function
The Sampling Property

∫[−∞,∞] g(t) δ(t − t0) dt = g(t0)

The Scaling Property

δ(a(t − t0)) = (1/|a|) δ(t − t0)
106
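The sampling property can be checked numerically by replacing δ(t) with a very narrow unit-area pulse (a sketch; the pulse width and the test function g(t) are arbitrary choices, not from the slides):

```python
import numpy as np

t = np.linspace(-5, 5, 2_000_001)
t0 = 1.3
g = np.cos(t) * np.exp(-0.1 * t ** 2)          # arbitrary smooth test function

eps = 0.001                                     # narrow pulse width
delta_approx = np.where(np.abs(t - t0) < eps / 2, 1.0 / eps, 0.0)  # unit area

sampled = np.trapz(g * delta_approx, t)
print("∫ g(t) δ(t - t0) dt ≈", round(sampled, 5))
print("g(t0)              =", round(np.cos(t0) * np.exp(-0.1 * t0 ** 2), 5))
# the two values agree to within the narrow-pulse approximation
```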
Standard Signals
Unit Impulse Function

107
Standard Signals
Uses of Impulse Function

Modeling of electrical, mechanical, physical


phenomenon:

– point charge,

– impulsive force,

– point mass

– point light
108
Standard Signals
Signum Function

sgn(t) =  1  for t > 0
          0  for t = 0
         −1  for t < 0

sgn(t) = 2u(t) − 1

[Figure: precise and commonly-used graphs of sgn(t).]

The signum function is closely related to the unit-step function.
109
Standard Signals

Rectangular Pulse or Gate Function

Rectangular pulse of width a and unit area:

Π(t) = 1/a  for |t| ≤ a/2
     = 0    for |t| > a/2

110
Standard Signals

Unit Triangular function

111
Standard Signals

Sinc function

sinc(t) = sin(πt) / (πt)

112
Standard Signals

Discrete unit step function

u[n] = 1 for n ≥ 0
     = 0 for n < 0

113
Standard Signals

Discrete unit impulse function

δ[n] = 1 for n = 0
     = 0 for n ≠ 0

114
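A compact sketch generating several of these standard signals with numpy (the index range and time grid are arbitrary choices, not from the slides):

```python
import numpy as np

# discrete-time signals on n = -5..5
n = np.arange(-5, 6)
u = (n >= 0).astype(float)                    # unit step u[n]
d = (n == 0).astype(float)                    # unit impulse δ[n]

# continuous-time signals sampled on a fine grid
t = np.linspace(-3, 3, 601)
sgn = np.sign(t)                              # signum function
gate = np.where(np.abs(t) <= 0.5, 1.0, 0.0)   # unit-width gate function
snc = np.sinc(t)                              # numpy's sinc is sin(pi t)/(pi t)

print("u[n] :", u)
print("δ[n] :", d)
print("sinc at t=0 and t=1:", snc[t == 0][0], snc[t == 1][0])
```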
Module – II

FOURIER SERIES

115
Introduction to Fourier Series

● Fourier Series is a representation of signals as a linear


combination of a set of basic signals(sinusoidal or
exponential).

● Representation of continuous-time and discrete-time


periodic signals is referred as Fourier Series.

● Representation of aperiodic, finite energy signals is done


through Fourier Transform.
116

● Used for analyzing, designing and understanding signals and LTI systems.
Introduction to Fourier
Series

[Figure: a linear circuit handles sinusoidal inputs directly; non-sinusoidal inputs
are decomposed into a sum of sinusoidal inputs and processed component by
component.]
Introduction to Fourier Series

Perception of Fourier Series

● Trigonometric sums – used by the Babylonians to predict astronomical events.
● 1748 – L. Euler examined the motion of a string in terms of normal modes but
discarded trigonometric series.
● 1753 – D. Bernoulli proposed linear combinations of normal modes.
● 1759 – J. L. Lagrange criticized the use of trigonometric series for vibrating
strings.
Introduction to Fourier Series

Perception of Fourier Series

● After a half century later Fourier developed his ideas


on Trigonometric series.

Joseph
Fourier 1768
to 1830 119
Introduction to Fourier Series

Perception of Fourier Series

● 1807 – Fourier presented a series representation for the temperature
distribution through a body.
● He claimed any periodic signal could be represented by such a series.
● For aperiodic signals he used weighted integrals of sinusoids that are not
harmonically related.
● Lagrange rejected this trigonometric series, arguing that discontinuities can
never be represented by sinusoids.
Introduction to Fourier Series

Application areas of Fourier Series

● In Theory of Integration, point-set topology and eigen


function expansion.

● Sinusoidal signals arise naturally in describing the motion


of the planets and periodic behaviour of the earth’s
climate.

● Alternating current sources generate voltages and 122


currents used for describing LTI systems.
Introduction to Fourier Series

Application areas of Fourier Series

● Waves in ocean – linear combination of sinusoidal waves of


diff. wavelengths (or) periods.

123
Introduction to Fourier Series

Application areas of Fourier Series

● Radio signals are sinusoidal in


nature.
● Discrete-time concepts and methods – numerical
analysis.
● Predicting motion of a heavenly body, given a sequence
of observations.

124
● Mid 1960s – FFT was introduced – reduced the time of
computation
● With this tool many interesting but previously impractical ideas
with discrete time Fourier series and transform have come
practical.
Fourier series Representation – CT
Periodic Signals
Linear Combinations of harmonically Related
Complex Exponentials

A periodic signal with period of T , x(t ) = x(t + T ) for all


t,

125

Both these signals are periodic with fundamental frequency


ω0 and fundamental period T = 2 π / ω0 .
Fourier series Representation – CT
Periodic Signals
Linear Combinations of harmonically Related
Complex Exponentials

● The set of harmonically related complex exponentials

126

● Each of these signals is periodic with period of


T
Fourier series Representation – CT
Periodic Signals
Linear Combinations of harmonically Related
Complex Exponentials

Thus, a linear combination of harmonically related


complex exponentials of the form

is also periodic with period of T 127


● k = 0: x(t) is a constant.
● k = +1 and k = −1: both have fundamental frequency ω0 and are collectively
referred to as the fundamental components or the first harmonic components.
● k = +2 and k = −2: the components are referred to as the second harmonic
components.
● k = +N and k = −N: the components are referred to as the Nth harmonic
components.
Fourier series Representation – CT
Periodic Signals
Linear Combinations of harmonically Related
Complex Exponentials
● If x (t ) is real, that is, x ( t ) = x * ( t )

Replacing k by − k in the summation, we


have 128
Fourier series Representation – CT
Periodic Signals
Linear Combinations of harmonically Related
Complex Exponentials

By comparison with the first equation,

ak = a*−k, or equivalently a*k = a−k

To derive the alternative forms of the Fourier series,


129
we rewrite the summation
Fourier series Representation – CT
Periodic Signals
Linear Combinations of harmonically Related
Complex Exponentials
Substituting a * k for a − k , we have

Since the two terms inside the summation are complex


conjugate of each other, this can be expressed as 130
Fourier series Representation – CT
Periodic Signals
Linear Combinations of harmonically Related
Complex Exponentials
If a k is expressed in polar from as

131
Fourier series Representation – CT
Periodic Signals
Linear Combinations of harmonically Related
Complex Exponentials

It is one commonly encountered form for the Fourier series of


real periodic signals in continuous time. 132
Fourier series Representation – CT
Periodic Signals
Linear Combinations of harmonically Related
Complex Exponentials
Another form is obtained by writing ak in rectangular form
as

133
Fourier series Representation – CT
Periodic Signals
Linear Combinations of harmonically Related Complex Exponentials
For real periodic functions, the Fourier series in terms of complex
exponential has the following three equivalent forms:

134
Fourier series Representation – CT
Periodic Signals
Convergence of Fourier Series – Dirichlet Conditions
The Dirichlet conditions for the periodic signal x are as follows:

1) Over a single period, x is absolutely integrable (i.e., ∫_T |x(t)| dt < ∞).

2) Over a single period, x has a finite number of maxima and minima (i.e., x is of
bounded variation).

3) Over any finite interval, x has a finite number of discontinuities, each of which
is finite.
135
Fourier series Representation – CT
Periodic Signals
Convergence of Fourier Series – Dirichlet Conditions

If a periodic signal x satisfies the Dirichlet conditions, then:

1. The Fourier series converges pointwise everywhere to x, except at the points
of discontinuity of x.

2. At each point t = ta of discontinuity of x, the Fourier series converges to

(1/2) [ x(ta−) + x(ta+) ]

where x(ta−) and x(ta+) denote the values of the signal x on the left- and
right-hand sides of the discontinuity, respectively.
Fourier series Representation – CT
Periodic Signals
Convergence of Fourier Series – Dirichlet
Conditions

● Since most signals tend to satisfy the Dirichlet conditions and


the
above convergence result specifies the value of the Fourier
series

at every point, this result is often very useful in practice.


137
Fourier series Representation – CT Periodic Signals

Examples of Functions Violating Dirichlet


Conditions

139
Fourier series Representation – CT
Periodic Signals
Gibbs Phenomenon

● In practice, we frequently encounter signals with


discontinuities.


When a signal x has discontinuities, the Fourier series
representation of does not converge uniformly (i.e., at the same
rate everywhere).

140

The rate of convergence is much slower at points in the vicinity
of a discontinuity.
Fourier series Representation – CT
Periodic Signals
Gibbs Phenomenon
Furthermore, in the vicinity of a discontinuity, the truncated

Fourier series xN exhibits ripples, where the peak amplitude of
the ripples does not seem to decrease with increasing N .

As it turns out, as N increases, the ripples get compressed


towards discontinuity, but, for any finite N , the peak amplitude

of the ripples remains approximately constant.

141
Fourier series Representation – CT
Periodic Signals
Gibbs Phenomenon

● This behavior is known as Gibbs


phenomenon.

● The above behavior is one of the weaknesses of Fourier series


(i.e., Fourier series converge very slowly near discontinuities).

142
Fourier series Representation – CT Periodic Signals

Gibbs Phenomenon

143
Fourier series Representation – CT
Periodic Signals
Determination of the Fourier Series Representation
of a Continuous-Time Periodic Signal

Multiply both side by


of

144
Integrating both sides from 0 to T = 2 π / ω 0 , we
have
Fourier series Representation – CT
Periodic Signals
Determination of the Fourier Series Representation
of a Continuous-Time Periodic Signal

For
k=n
145
Fourier series Representation – CT
Periodic Signals
Determination of the Fourier Series Representation
of a Continuous-Time Periodic Signal

Synthesi
s
Equation

146

Analysis
Equation
Fourier series Representation – CT
Periodic Signals
Determination of the Fourier Series Representation
of a Continuous-Time Periodic Signal


The set of coefficient { a k } are often called the
Fourier series coefficients (or) the spectral
coefficients of x(t).


The coefficient a 0 is the dc or constant component and
is given with k = 0 , that is

147
Fourier series Representation – CT
Periodic Signals
Determination of the Fourier Series Representation
of a Continuous-Time Periodic Signal

Example: consider the signal x(t ) = sin ω0t .

Comparing the right-hand side of this equation


with synthesis equation 148
Fourier series Representation – CT
Periodic Signals
Determination of the Fourier Series Representation of a
Continuous-Time Periodic Signal

Example: The periodic square wave, sketched in the figure


below and define over one period is
The signal has a fundamental period T and fundamental
frequency ω0 = 2 π / T .

149
Fourier series Representation – CT
Periodic Signals
Determination of the Fourier Series Representation
of a Continuous-Time Periodic Signal

● To determine the Fourier series coefficients for x(t ) , we use analysis


equation.

● Because of the symmetry of x(t) about t = 0, we choose −T/2 ≤ t ≤ T/2 as the
interval over which the integration is performed, although any other interval of
length T is valid and leads to the same result.

150

For
k=0
Fourier series Representation – CT
Periodic Signals
Determination of the Fourier Series Representation
of a Continuous-Time Periodic Signal

For k ≠ 0 , we obtain

151
Fourier series Representation – CT Periodic Signals

Determination of the Fourier Series Representation


of a Continuous-Time Periodic Signal

152
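A numerical sketch of this example (assuming the usual square wave that equals 1 for |t| < T1 within each period, for which the coefficients come out as ak = sin(kω0T1)/(kπ)): the coefficients are computed from the analysis equation, and a truncated synthesis shows the Gibbs ripples near the discontinuities.

```python
import numpy as np

T, T1 = 4.0, 1.0                     # period and half-width of the pulse (example values)
w0 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 20001)
x = (np.abs(t) < T1).astype(float)   # one period of the square wave

def a(k):
    # analysis equation: a_k = (1/T) ∫_T x(t) e^{-j k w0 t} dt
    return np.trapz(x * np.exp(-1j * k * w0 * t), t) / T

N = 25
xN = sum(a(k) * np.exp(1j * k * w0 * t) for k in range(-N, N + 1)).real

print("a0 =", round(a(0).real, 4), "(expected 2*T1/T =", 2 * T1 / T, ")")
print("a1 =", round(a(1).real, 4), "(expected sin(w0*T1)/pi =", round(np.sin(w0 * T1) / np.pi, 4), ")")
print("peak overshoot of truncated series ≈", round(xN.max(), 4))  # Gibbs ripple: ≈ 1.09
```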
Fourier series Representation – CT
Periodic Signals
Convergence of the Fourier Series

If a periodic signal x (t ) is approximated by a linear


combination of finite number of harmonically related complex
exponentials

Let eN(t ) denote the approximation error 153

The criterion used to measure quantitatively the


approximation error is the energy in the error over one
period:
Fourier series Representation – CT
Periodic Signals
Convergence of the Fourier
Series

The particular choice for the coefficients that minimize


the energy in the error is

154

The limit of EN as N -> ∞ is


zero.
Fourier series Representation – CT
Periodic Signals
Convergence of the Fourier Series
One class of periodic signals that are representable
through Fourier series is those signals which have finite
energy over a period,

When this condition is satisfied, we can guarantee that 155


the coefficients obtained from are finite. We define

then
Fourier series Representation – CT
Periodic Signals
Convergence of the Fourier Series

● The convergence guaranteed when x(t) has finite energy


over a period is very useful.

● In this case, we may say that x(t) and its Fourier


series representation are indistinguishable.

156
FOURIER TRANSFORM:

•A periodic signal can be represented as linear


combination of complex exponentials which are
harmonically related.
•An aperiodic signal can be represented as linear
combination of complex exponentials, which are
infinitesimally close in frequency. So the representation
take the form of an integral rather than a sum
•In the Fourier series representation, as the period
increases the fundamental frequency decreases and the
harmonically related components become closer in
frequency. As the period becomes infinite, the frequency
components form a continuum and the Fourier series
becomes an integral.
1
FOURIER TRANSFORM:

The main drawback of the Fourier series is that it is only applicable to periodic
signals. There are naturally occurring signals that are nonperiodic (aperiodic),
which we cannot represent using a Fourier series.

To overcome this shortcoming, Fourier developed a mathematical model to
transform signals between the time (or spatial) domain and the frequency
domain and vice versa, which is called the Fourier transform.
Fourier transform has many applications in physics and
engineering such as analysis of LTI systems, RADAR,
astronomy, signal processing etc.

1
Deriving FOURIER TRANSFORM from FOURIER SERIES:

Consider a periodic signal f(t) with period T. The complex Fourier series
representation of f(t) is given as

1
FOURIER TRANSFORM:

1
FOURIER TRANSFORM:

FT of Unit Step Function:


U(ω) = πδ(ω) + 1/jω

1
FOURIER TRANSFORM:

Conditions for Existence of Fourier Transform


Any function f(t) can be represented by using the Fourier transform only when
the function satisfies Dirichlet's conditions, i.e.

 The function f(t) has a finite number of maxima and minima.

 There must be a finite number of discontinuities in the signal f(t) in the given
interval of time.

 It must be absolutely integrable in the given interval of time, i.e.

∫[−∞,∞] |f(t)| dt < ∞

1
DTFT:

The discrete-time Fourier transform (DTFT), or the Fourier transform of a
discrete-time sequence x[n], is a representation of the sequence in terms of the
complex exponential sequence e^{jωn}.

The DTFT of the sequence x[n] is given by

X(ω) = Σ[n=−∞ to ∞] x[n] e^{−jωn}

Here, X(ω) is a complex function of the real frequency variable ω and it can be
written as

X(ω) = Xre(ω) + jXimg(ω)

1
Inverse Discrete-Time Fourier Transform (IDTFT):

Convergence condition:
The infinite series in the DTFT may or may not converge; it converges if x[n] is
absolutely summable.

1
DTFT:

where Xre(ω) and Ximg(ω) are the real and imaginary parts of X(ω), respectively.

1
Linearity Property

169
Time Shifting

170
Frequency Shifting Property

171
Time Reversal Property

172
Time Scaling Property

173
Differentiation and Integration Properties

174
Multiplication and Convolution Properties

175
Differentiation in frequency domain

176
Complex Conjugation

177
Parseval's equation

178
Symmetry (or Duality)

179
UNIT - 3

● Signal transmission through Linear system

180
Linear System

A linear system satisfies the principle of superposition.

The response of a linear system to a weighted sum of input signals is equal to
the same weighted sum of the individual output signals.

xi(t) → yi(t) = T[xi(t)]

x(t) = Σ[i=1 to N] ai xi(t),  where ai is any arbitrary constant

y(t) = T[x(t)] = T[ Σ[i=1 to N] ai xi(t) ] = Σ[i=1 to N] ai T[xi(t)]

y(t) = Σ[i=1 to N] ai yi(t)
Classification of Linear systems

Lumped and distributed systems

Time invariant and variant systems

182
Classification of Linear systems : Lumped
systems
Lumped systems:
Consist of lumped elements connected in a particular way.

The energy in the system is considered to be stored or dissipated in distinct,
isolated elements.

A disturbance initiated at any point propagates instantaneously to every point
in the system.

Dimensions of the elements are very small compared to the signal wavelength.

They obey Ohm's law and Kirchhoff's laws, and such systems are expressed by
ordinary differential equations.
Classification of Linear systems : Distributed systems

Elements are distributed over long distances.

Dimensions of the circuit are comparable to the wavelength of the signals to be
transmitted.

The system takes a finite amount of time for a disturbance at one point to
propagate to another point.

Such systems are expressed with partial differential equations.

Examples are transmission lines, optical fibers, waveguides, antennas,
semiconductor devices, beams, etc.
184
Classification of Linear systems : Linear time invariant
system and variant system
An LTI system satisfies both the linearity and the time-invariance properties.

A system is time invariant if a time shift of the input signal leads to an identical
time shift in the output signal.

y(t) = T[x(t)]

If the input is delayed or advanced by t0 seconds,

y1(t) = T[x(t ∓ t0)]

The system is time invariant if y1(t) = y(t ∓ t0); otherwise it is time variant.
Representation of an Arbitrary Signal

Let us consider an arbitrary signal x(t). An approximation x̂(t) of x(t) can be
expressed as a linear combination of shifted narrow pulses:

x̂(t) = ...... + x(−2Δ) δΔ(t + 2Δ) Δ + x(−Δ) δΔ(t + Δ) Δ + x(0) δΔ(t) Δ
       + x(Δ) δΔ(t − Δ) Δ + x(2Δ) δΔ(t − 2Δ) Δ + ......

x̂(t) = Σ[k=−∞ to ∞] x(kΔ) δΔ(t − kΔ) Δ

x(t) = lim[Δ→0] x̂(t)

As Δ → 0, δΔ(t) → δ(t), the summation becomes an integration, kΔ → τ and Δ → dτ.
Representation of Arbitrary signal

δΔ(t) = 1/Δ for 0 < t < Δ, and 0 otherwise.

x(t) = ∫[−∞,∞] x(τ) δ(t − τ) dτ

A continuous-time signal can be expressed as an integral of weighted, shifted
impulses.

187
Impulse Response of an LTI System

Let y(t) be the response of the system to x(t), where

x(t) = ∫[−∞,∞] x(τ) δ(t − τ) dτ

y(t) = T[x(t)] = T[ ∫[−∞,∞] x(τ) δ(t − τ) dτ ]

y(t) = ∫[−∞,∞] x(τ) T[δ(t − τ)] dτ

h(t − τ) = T[δ(t − τ)]   (this uses the time-invariance property)

h(t) = T[δ(t)]   — the impulse response of the LTI system

The impulse response of an LTI system due to an impulse input applied at t = 0
is h(t), and

y(t) = ∫[−∞,∞] x(τ) h(t − τ) dτ

This is known as the convolution integral, and it gives the relationship among
the input signal, the output signal and the impulse response of the system. An
LTI system is completely characterized by its impulse response.
189
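A discrete-time sketch of this relationship (the sampling interval, input and impulse response are arbitrary choices, not from the slides): the output is approximated by the convolution sum via np.convolve and compared with the exact convolution integral.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 5, dt)

x = (t < 1).astype(float)                 # input: rectangular pulse of width 1
h = np.exp(-2 * t)                        # impulse response: h(t) = e^{-2t} u(t)

# y(t) = ∫ x(τ) h(t-τ) dτ  ≈  Σ x[k] h[n-k] dt
y = np.convolve(x, h)[: len(t)] * dt

# closed form for comparison: y(t) = (1 - e^{-2t})/2 for t < 1,
#                             y(t) = (e^{-2(t-1)} - e^{-2t})/2 for t >= 1
y_exact = np.where(t < 1, (1 - np.exp(-2 * t)) / 2,
                   (np.exp(-2 * (t - 1)) - np.exp(-2 * t)) / 2)
print("max error:", np.max(np.abs(y - y_exact)))   # small discretization error
```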
Frequency Response of an LTI System

Consider an LTI system with impulse response h(t):

y(t) = ∫[−∞,∞] x(τ) h(t − τ) dτ

Let Y(ω), X(ω) and H(ω) be the Fourier transforms of y(t), x(t) and h(t)
respectively.

Y(ω) = ∫[−∞,∞] y(t) e^{−jωt} dt

Y(ω) = ∫[−∞,∞] ∫[−∞,∞] x(τ) h(t − τ) e^{−jωt} dτ dt

Substituting t − τ = λ, dt = dλ,

Y(ω) = [ ∫[−∞,∞] x(τ) e^{−jωτ} dτ ] [ ∫[−∞,∞] h(λ) e^{−jωλ} dλ ]

Y(ω) = H(ω) X(ω)

|H(ω)| is the magnitude response of the LTI system, and it is symmetric.
∠H(ω) is the phase response of the LTI system, and it is antisymmetric.
Response to Eigenfunctions

If the input to the system is a complex exponential,

x(t) = e^{jωt}

y(t) = ∫[−∞,∞] h(τ) x(t − τ) dτ = ∫[−∞,∞] h(τ) e^{jω(t−τ)} dτ

y(t) = e^{jωt} H(ω) = x(t) H(ω)

The output is a complex exponential of the same frequency as the input,
multiplied by the complex constant H(ω).
Properties of LTI Systems
Commutative property:

y(t) = x(t) ∗ h(t) = h(t) ∗ x(t)

y(t) = ∫[−∞,∞] x(τ) h(t − τ) dτ = ∫[−∞,∞] h(τ) x(t − τ) dτ

193
Properties of LTI Systems
Associative property: cascading two or more LTI systems results in a single
system with impulse response equal to the convolution of the impulse
responses of the cascaded systems.

x(t) ∗ h1(t) ∗ h2(t) = x(t) ∗ {h2(t) ∗ h1(t)}

h(t) = h2(t) ∗ h1(t)
Properties of LTI System
Distributive property: two or more LTI systems in parallel, subjected to the
same input, are equivalent to a single system with impulse response equal to
the sum of the individual impulse responses.

x(t) ∗ [h1(t) + h2(t)] = x(t) ∗ h1(t) + x(t) ∗ h2(t)

195
Properties of LTI System
Static and dynamic systems:

A system is static (memoryless) if its output at any time depends only on the
value of its input at that same instant of time.

For an LTI system, this property can hold only if its impulse response is itself an
impulse.

By the convolution property, the output generally depends on previous values
of the input; therefore an LTI system with any other impulse response has
memory and is a dynamic system.
196
Properties of LTI System

Causality: A continuous-time LTI system is causal if and only if its impulse
response satisfies

h(t) = 0 for t < 0

197
Properties of LTI System
Stability: A continuous-time system is BIBO stable if and only if its impulse
response is absolutely integrable.

Consider an LTI system with impulse response h(t). The output y(t) is

y(t) = ∫[−∞,∞] h(τ) x(t − τ) dτ

If the input x(t) is bounded, that is |x(t)| ≤ Mx < ∞, then

|y(t)| ≤ ∫[−∞,∞] |h(τ)| |x(t − τ)| dτ

|y(t)| ≤ Mx ∫[−∞,∞] |h(τ)| dτ

For a bounded output, the impulse response must be absolutely integrable,
that is

∫[−∞,∞] |h(τ)| dτ < ∞

The above equation gives the necessary and sufficient condition for BIBO
stability.
199
Properties of LTI System

Inverse systems: A system T is said to be invertible if and only if there exists an
inverse system T⁻¹ such that T T⁻¹ is an identity system.

200
Transfer function of LTI system
The transfer function of an LTI system is defined as the ratio of the Fourier
transform of the output signal to the Fourier transform of the input signal:

H(ω) = Y(ω) / X(ω)

h(t) is the inverse Fourier transform of H(ω).
Transfer function of LTI system
The input-output relationship of a continuous-time causal LTI system described
by a linear constant-coefficient differential equation with zero initial conditions
is given by

Σ[k=0 to N] ak d^k y(t)/dt^k = Σ[k=0 to M] bk d^k x(t)/dt^k

where ak and bk are arbitrary constants, N > M, and N refers to the highest
derivative of y(t).
Transfer function of LTI system

Applying the Fourier transform to the above equation,

Σ[k=0 to N] ak (jω)^k Y(ω) = Σ[k=0 to M] bk (jω)^k X(ω)

H(ω) = Y(ω)/X(ω) = ( Σ[k=0 to M] bk (jω)^k ) / ( Σ[k=0 to N] ak (jω)^k )
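A sketch computing this frequency response for an example first-order system dy/dt + 2y(t) = x(t) (my choice, not from the slides) using scipy.signal.freqs, where b and a hold the bk and ak coefficients in descending powers of jω:

```python
import numpy as np
from scipy import signal

# example: dy/dt + 2*y(t) = x(t)  ->  H(w) = 1 / (jw + 2)
b = [1.0]            # numerator coefficients (highest power first)
a = [1.0, 2.0]       # denominator coefficients: 1*(jw)^1 + 2*(jw)^0

w = np.linspace(0.01, 20, 500)
w, H = signal.freqs(b, a, worN=w)

print("|H(0.01)| ≈", round(abs(H[0]), 3))                            # ≈ 0.5, the dc gain
print("|H(2)|    ≈", round(abs(H[np.argmin(np.abs(w - 2))]), 3))     # ≈ 0.354 at the 3 dB point
```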
Distortion less Transmission Through LTI
System
Distortionless transmission through an LTI system requires that the response be
an exact replica of the input signal.

The replica may have a different magnitude and may be delayed in time.

For any arbitrary input x(t), the output must be y(t) = k x(t − t0):

Y(ω) = k X(ω) e^{−jωt0}

H(ω) = k e^{−jωt0}

|H(ω)| = k,   ∠H(ω) = nπ − ωt0

The magnitude response of the system |H(ω)| must be constant over the entire
frequency range, and the phase response ∠H(ω) must be linear in frequency.
Signal Band Width

Signal bandwidth:

It is the range of significant frequency components present in the signal.

For practical signals, the energy content decreases with frequency; only some
frequency components of the signal have significant amplitude within a certain
frequency band, and outside this band the components have negligible
amplitude.

The significant frequency components are those whose amplitude is within
1/√2 times the maximum amplitude in the spectrum.
205
System Band Width
The bandwidth of a system is defined as the interval of frequencies over which
the magnitude spectrum |H(ω)| remains within 1/√2 times (3 dB below) its value
at the mid band.

ω1 = lower 3 dB frequency = lower cutoff frequency = the lowest frequency at
which the magnitude of H(ω) is 1/√2 times its mid-band value.

ω2 = upper cutoff frequency = upper 3 dB frequency = the highest frequency at
which the magnitude of H(ω) is 1/√2 times its mid-band value.

System bandwidth = upper 3 dB frequency − lower 3 dB frequency = ω2 − ω1
System Band Width
For distortion less transmission, a system should have infinite bandwidth.
But due to physical limitations it is impossible to design an ideal filters having
infinite bandwidth.

For satisfactory distortion less transmission, an LTI system should have high
bandwidth compared to the signal bandwidth

207
Filter characteristics of linear system
LTI system acts as filter depending on the transfer function of system.

The system modifies the spectral density function of input signal according to
transfer function.

system act as some kind of filter to various frequency components.

Some frequency components are boosted in strength, some are attenuated,


and some may remain unaffected.

each frequency component suffers a different amount of phase shift in the


process of transmission.

208
Types of filters
LTI system may be classified into five types of filter

Low pass filter

High pass filter

Band pass filter

Band reject filter

All pass filter.

209
Types of Ideal filters
Pass Band : Passes all frequency components in its pass band without distortion
.
Stop Band : completely blocks frequency components outside of pass band.
There is discontinuity between pass band and stop band in frequency spectrum.

Transition band : For Practical filters, The range of frequencies over which
there is a gradual Transition between pass band and stop band.

210
Types of Ideal filters : Ideal Low Pass Filter

An ideal low pass filter transmits all frequency components below a certain
frequency ωc rad/sec, called the cutoff frequency, without distortion. The
signal above this frequency is filtered out completely.

Transfer function of the ideal LPF:

H(ω) = e^{−jωt0} for |ω| < ωc
     = 0         for |ω| > ωc
211
Types of Ideal filters : Ideal High Pass Filter
An ideal high pass filter transmits all frequency components above a certain frequency W rad/sec, called the cutoff frequency, without distortion. The signal below this frequency is filtered out completely.
H(ω) = e^{−jω t0}   for |ω| > W
H(ω) = 0            for |ω| < W
Types of Ideal filters : Ideal Band Pass Filter
An ideal band pass filter transmits all frequency components within a certain frequency band W1 to W2 rad/sec without distortion. The signal with frequencies outside this band is stopped completely.
H(ω) = e^{−jω t0}   for W1 < |ω| < W2
H(ω) = 0            otherwise
Types of Ideal filters : Ideal Band Reject Filter
An ideal band reject filter rejects all frequency components within a certain frequency band W1 to W2 rad/sec. The signal outside this band is transmitted without distortion.
H(ω) = 0            for W1 < |ω| < W2
H(ω) = e^{−jω t0}   otherwise
Causality and Physical Realizability: the Paley–Wiener criterion
A physically realizable system cannot have a response before the input signal is applied.
In the time-domain approach, the impulse response of a physically realizable system must be causal.
In the frequency domain, the necessary and sufficient condition for a magnitude response to be physically realizable is known as the Paley–Wiener criterion.
∫_{−∞}^{∞} |ln|H(ω)|| / (1 + ω²) dω < ∞
This condition is known as the Paley–Wiener criterion.
For the criterion to apply, the function H(ω) must be square integrable.
All physically realizable (causal) systems satisfy the Paley–Wiener criterion.
Ideal filters are not physically realizable, but it is possible to construct physically realizable filters close to the ideal filter characteristics, for example
H(ω) = e^{−jω t0}   for |ω| < W
H(ω) = ε            for |ω| > W
where ε is an arbitrarily small value.
Band width and Rise time

The rise time (tr) of the output response is defined as the time the response takes to rise from 10% to 90% of the final value of the signal. The slope of the response at the delay instant t0 is taken as
dy(t)/dt |_{t=t0} = 1/tr
The system bandwidth can be related to the rise time of the output response.
Consider an ideal LPF with transfer function
H(ω) = e^{−jω t0}   for |ω| < ωc
H(ω) = 0            for |ω| > ωc
Rise time and Band width

h(t) = (1/2π) ∫_{−∞}^{∞} H(ω) e^{jωt} dω
h(t) = (1/2π) ∫_{−ωc}^{ωc} e^{jω(t−t0)} dω = (1/π) · sin ωc(t−t0) / (t−t0)
h(t) = (ωc/π) sinc ωc(t−t0)
For a unit step input, the step response is
y(t) = h(t) ∗ u(t) = ∫_{−∞}^{t} h(τ) dτ
dy(t)/dt = h(t) = (ωc/π) sinc ωc(t−t0)
dy(t)/dt |_{t=t0} = ωc/π = 1/tr
tr = π/ωc
The bandwidth of the LPF is ωc rad/sec, so the rise time is inversely proportional to the system bandwidth.
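A quick numerical check (illustrative sketch with an assumed cutoff): building the step response of the ideal LPF by integrating its impulse response, the slope at t = t0 comes out as ωc/π, consistent with tr = π/ωc.

import numpy as np

wc, t0, dt = 100.0, 0.5, 1e-4                # assumed cutoff (rad/s), delay (s), time step (s)
t = np.arange(0, 1, dt)
h = (wc/np.pi) * np.sinc(wc*(t - t0)/np.pi)  # impulse response; np.sinc(u) = sin(pi*u)/(pi*u)

y = np.cumsum(h) * dt                        # step response y(t) = integral of h up to t
i0 = int(t0/dt)
slope = (y[i0+1] - y[i0-1]) / (2*dt)         # dy/dt at t0

print(slope, wc/np.pi, np.pi/wc)             # slope ~ wc/pi = 31.8, so tr = pi/wc = 0.0314 s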
The convolution integral

The process of expressing the output signal in terms of a superposition of weighted and shifted impulse responses is called convolution.
The mathematical tool for evaluating the convolution of continuous-time signals is called the convolution integral; for discrete-time signals it is called the convolution sum.
Convolution characterizes the input–output relationship of LTI systems.
It plays an important role in both time-domain and frequency-domain analysis.
The convolution integral

Let x1(t) and x2(t) be two continuous-time signals. The convolution of x1(t) and x2(t) can then be expressed as
x1(t) ∗ x2(t) = ∫_{−∞}^{∞} x1(τ) x2(t − τ) dτ
where τ is a dummy variable.
The output of any continuous-time LTI system is the convolution of the input x(t) with the impulse response h(t) of the system: y(t) = x(t) ∗ h(t).
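A minimal numerical sketch (with assumed signals, not from the slides): approximating the convolution integral on a time grid for a rectangular input pulse and the impulse response h(t) = e^{−t} u(t); the analytical output at t = 1 s is 1 − e^{−1} ≈ 0.632.

import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)
x = np.where(t < 1.0, 1.0, 0.0)              # assumed input: rectangular pulse of width 1 s
h = np.exp(-t)                               # assumed impulse response: e^{-t} u(t)

y = np.convolve(x, h)[:len(t)] * dt          # Riemann-sum approximation of the convolution integral
print(y[int(1.0/dt)])                        # ~0.632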
The convolution Integral
Case 1:
If the input signal is causal, x(t) = 0 for t < 0, then
y(t) = ∫_{0}^{∞} x(τ) h(t − τ) dτ
Case 2:
If the LTI system is causal, h(t) = 0 for t < 0, then
y(t) = ∫_{−∞}^{t} x(τ) h(t − τ) dτ
Case 3:
If both the input signal and the system are causal, then
y(t) = ∫_{0}^{t} x(τ) h(t − τ) dτ
Properties of convolution integral :
Commutative Property:
Let x1(t) and x2(t) be continuous-time signals. Then
x1(t) ∗ x2(t) = x2(t) ∗ x1(t)
x1(t) ∗ x2(t) = ∫_{−∞}^{∞} x1(τ) x2(t − τ) dτ
Substituting t − τ = λ:
x1(t) ∗ x2(t) = ∫_{−∞}^{∞} x2(λ) x1(t − λ) dλ = x2(t) ∗ x1(t)
Properties of convolution integral :
Distributive Property:
x1(t) ∗ [x2(t) + x3(t)] = x1(t) ∗ x2(t) + x1(t) ∗ x3(t)
Associative Property:
x1(t) ∗ [x2(t) ∗ x3(t)] = [x1(t) ∗ x2(t)] ∗ x3(t)
Shifting Property:
If x1(t) ∗ x2(t) = x(t), then
x1(t) ∗ x2(t − t0) = x(t − t0)
x1(t − t1) ∗ x2(t − t2) = x(t − t1 − t2)
Properties of convolution integral

Convolution with the impulse function:
x(t) ∗ δ(t) = x(t)
x(t) ∗ δ(t − t0) = x(t − t0)
Convolution with the unit step function:
u(t) = ∫_{−∞}^{t} δ(τ) dτ
x(t) ∗ u(t) = ∫_{−∞}^{t} x(τ) dτ
Properties of convolution integral
Width Property:
Let the durations of two finite-duration signals x1(t) and x2(t) be T1 and T2 respectively; then the duration of y(t) = x1(t) ∗ x2(t) is equal to the sum of the durations of x1(t) and x2(t), i.e. T1 + T2.
If the areas under the finite signals x1(t) and x2(t) are A1 and A2 respectively, then the area under y(t) is the product of both areas:
A = area under y(t) = A1 · A2
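A quick numerical check of the width and area properties (illustrative sketch with assumed rectangular pulses):

import numpy as np

dt = 1e-3
x1 = np.ones(int(1.0/dt))                    # assumed pulse: amplitude 1, duration 1 s (area 1)
x2 = 2.0*np.ones(int(2.0/dt))                # assumed pulse: amplitude 2, duration 2 s (area 4)

y = np.convolve(x1, x2) * dt
print(len(y)*dt)                             # ~3 s = 1 s + 2 s   (width property)
print(y.sum()*dt)                            # ~4   = 1 * 4       (area property)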
Convolution property of Fourier Transform

If x(t) ↔ X(ω) and y(t) ↔ Y(ω), then
Convolution in the time domain:  x(t) ∗ y(t) ↔ X(ω) Y(ω)
Convolution in the frequency domain:  x(t) y(t) ↔ (1/2π) [X(ω) ∗ Y(ω)]
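The time-domain convolution property can be verified numerically with the DFT (a sketch; np.convolve computes linear convolution, so the spectra are taken on the zero-padded length):

import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(128), rng.standard_normal(128)

direct = np.convolve(x, y)                   # linear convolution, length 255
N = len(direct)
via_fft = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(y, N)).real

print(np.allclose(direct, via_fft))          # True: convolution <-> multiplication of spectra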
Method of Graphical Convolution
For y(t) = x(t) ∗ h(t), the convolution can be evaluated graphically as follows:
1. Sketch x(τ) and h(τ) as functions of the dummy variable τ.
2. Fold (time-reverse) h(τ) to obtain h(−τ).
3. Shift h(−τ) by t to obtain h(t − τ).
4. Increase the time t along the positive axis, multiply the signals and integrate over the overlap of the two signals to obtain the convolution at that t.
5. Increase the time shift step by step and obtain the convolution at each shift using step 4.
6. Draw the convolution y(t) with the values obtained in steps 4 and 5 as a function of t.
UNIT-IV

LAPLACE TRANSFORM:

The Laplace transform of a function f(t) in the time domain, where t is a real number greater than or equal to zero, is given by F(s):
F(s) = ∫_{0}^{∞} f(t) e^{−st} dt
where s is a complex number in the frequency domain, i.e. s = σ + jω.
The above equation is the unilateral Laplace transform.
When the limits are extended to the entire real axis, the bilateral Laplace transform is defined as
F(s) = ∫_{−∞}^{∞} f(t) e^{−st} dt
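As an illustrative sketch (not from the slides), the unilateral transform of the assumed example f(t) = e^{−2t}, t ≥ 0, can be checked symbolically with SymPy; the expected result is F(s) = 1/(s + 2).

import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
f = sp.exp(-2*t)                             # assumed example signal e^{-2t}, t >= 0

F = sp.laplace_transform(f, t, s, noconds=True)   # unilateral Laplace transform
print(F)                                     # 1/(s + 2)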
LAPLACE TRANSFORM:

The techniques of Laplace transform are not only used in


circuit analysis, but also in
 Proportional-Integral-Derivative (PID) controllers
 DC motor speed control systems
 DC motor position control systems
Second order systems of differential equations (under
damped, over damped and critically damped)

LAPLACE TRANSFORM:
REGION OF CONVERGENCE OF LAPLACE TRANSFORM:

Conditions for Applicability of the Laplace Transform:
Laplace transforms are integral transforms, so there are necessary conditions for the convergence of these transforms:
f must be locally integrable on the interval [0, ∞), and depending on whether σ is positive or negative, e^{−σt} may be decaying or growing.
For bilateral Laplace transforms, rather than converging for a single value, the integral converges over a certain range of values of s known as the Region of Convergence (ROC).
PROPERTIES OF LAPLACE TRANSFORM:

1. LINEARITY:  L{a f1(t) + b f2(t)} = a F1(s) + b F2(s)
PROPERTIES OF LAPLACE TRANSFORM:

First Derivative Property:
The first derivative in time is used in deriving the Laplace transform of the capacitor and inductor impedances. The general formula is
L{df(t)/dt} = s F(s) − f(0)
PROPERTIES OF LAPLACE TRANSFORM:

Second Derivative Property:
The second derivative in time is found by applying the Laplace transform of the first derivative twice. The general formula is
L{d²f(t)/dt²} = s² F(s) − s f(0) − f′(0)
PROPERTIES OF LAPLACE TRANSFORM:

Integration Property:
Applying the Laplace transform definition to the running integral of f(t) gives
L{∫_{0}^{t} f(τ) dτ} = F(s)/s
PROPERTIES OF LAPLACE TRANSFORM:

Time Scaling:  L{f(at)} = (1/a) F(s/a),  a > 0
PROPERTIES OF LAPLACE TRANSFORM:

Time Shift:  L{f(t − a) u(t − a)} = e^{−as} F(s),  a > 0
PROPERTIES OF LAPLACE TRANSFORM:

Frequency Shift:  L{e^{at} f(t)} = F(s − a)
PROPERTIES OF LAPLACE TRANSFORM:

Differentiation in the s-domain:  L{t f(t)} = −dF(s)/ds


PROPERTIES OF LAPLACE TRANSFORM:

Initial value theorem:  f(0⁺) = lim_{s→∞} s F(s)


PROPERTIES OF LAPLACE TRANSFORM:

Final value theorem:  lim_{t→∞} f(t) = lim_{s→0} s F(s)  (valid when all poles of sF(s) lie in the left half-plane)


Relation between FOURIER and LAPLACE TRANSFORM:

The (unilateral) Laplace transform of a function g is F(s) = ∫_{0}^{∞} g(t) e^{−st} dt, where g is assumed to be of bounded variation.
When the region of convergence includes the imaginary axis, the Fourier transform of g is obtained from the Laplace transform by evaluating it on that axis, i.e. by setting s = jω:
G(ω) = F(s) |_{s = jω}
Z-transform

The Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex frequency-domain representation.
The Z-transform can be defined as either a one-sided or a two-sided transform.
Bilateral Z-transform
The bilateral or two-sided Z-transform of a discrete-time signal x[n] is the formal power series X(z) defined as
X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}
Z-TRANSFORM

Unilateral Z-transform
Alternatively, in cases where x[n] is defined only for n ≥ 0, the single-sided or unilateral Z-transform is defined as
X(z) = Σ_{n=0}^{∞} x[n] z^{−n}
In signal processing, this definition can be used to evaluate the Z-transform of the unit impulse response of a discrete-time causal system.
Z-TRANSFORM:

Inverse Z-transform
x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz
where C is a counterclockwise closed path encircling the origin and lying entirely in the region of convergence (ROC).
This contour can be used when the ROC includes the unit circle, which is always guaranteed when X(z) is stable, i.e. when all the poles are inside the unit circle.
Z-TRANSFORM:

Region of convergence:

The region of convergence (ROC) is the set of points in the


complex plane for which the Z-transform summation
converges.
Z-TRANSFORM:

PROPERTIES OF ROC:
 The ROC of a Z-transform is indicated by a circular region in the z-plane.
 The ROC does not contain any poles.
 If x(n) is a finite-duration causal (right-sided) sequence, then the ROC is the entire z-plane except z = 0.
 If x(n) is a finite-duration anti-causal (left-sided) sequence, then the ROC is the entire z-plane except z = ∞.
 If x(n) is an infinite-duration causal sequence, the ROC is the exterior of the circle with radius a, i.e. |z| > a.
 If x(n) is an infinite-duration anti-causal sequence, the ROC is the interior of the circle with radius a, i.e. |z| < a.
 If x(n) is a finite-duration two-sided sequence, then the ROC is the entire z-plane except z = 0 and z = ∞.
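A small numerical sketch (assumed example, not from the slides): for x[n] = aⁿ u[n] the Z-transform sum converges, for |z| > |a| (the ROC), to the closed form 1/(1 − a z⁻¹); a truncated sum at a test point inside the ROC agrees with it.

import numpy as np

a = 0.5
n = np.arange(0, 200)                        # truncation is adequate because (a/z)^n decays
x = a**n                                     # x[n] = a^n u[n]

z = 1.2*np.exp(1j*0.7)                       # test point with |z| = 1.2 > |a|, inside the ROC
X_sum = np.sum(x * z**(-n))                  # truncated Z-transform sum
X_closed = 1.0/(1.0 - a/z)

print(np.isclose(X_sum, X_closed))           # True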
PROPERTIES OF Z-TRANSFORM:

LINEARITY:  a1 x1[n] + a2 x2[n] ↔ a1 X1(z) + a2 X2(z), with an ROC containing the intersection of the individual ROCs
PROPERTIES OF Z-TRANSFORM:

TIME EXPANSION:  if x_K[n] = x[n/K] when n is a multiple of K and 0 otherwise, then x_K[n] ↔ X(z^K)
PROPERTIES OF Z-TRANSFORM:

TIME SHIFTING:
x[n − k] ↔ z^{−k} X(z)
(for the unilateral transform, additional initial-condition terms appear when k > 0)
PROPERTIES OF Z-TRANSFORM:

CONVOLUTION:
x1[n] ∗ x2[n] ↔ X1(z) X2(z)
The ROC of the convolution could be larger than the intersection of the ROCs of X1(z) and X2(z), due to possible pole–zero cancellation caused by the convolution.
PROPERTIES OF Z-TRANSFORM:

Time Reversal:
x[−n] ↔ X(1/z)
PROPERTIES OF Z-TRANSFORM:

Differentiation in the z-Domain:
n x[n] ↔ −z dX(z)/dz
Conjugation:
x*[n] ↔ X*(z*)
UNIT-V

Graphical and analytical proof for Band Limited Signals:

Sampling theorem: A continuous-time signal can be represented by its samples and can be recovered back when the sampling frequency fs is greater than or equal to twice the highest frequency component of the message signal, i.e. fs ≥ 2fm.
Proof: Consider a continuous-time signal x(t). The spectrum of x(t) is band-limited to fm Hz, i.e. the spectrum of x(t) is zero for |ω| > ωm. Sampling of the input signal x(t) can be obtained by multiplying x(t) with an impulse train δ(t) of period Ts. The output of the multiplier is a discrete signal called the sampled signal, which is represented by y(t).
Graphical and analytical proof for Band Limited Signals:

The sampled signal takes the period of the impulse train. The process of sampling can be expressed mathematically as
y(t) = x(t) · δ_{Ts}(t) = Σ_{n=−∞}^{∞} x(nTs) δ(t − nTs)
Graphical and analytical proof for Band Limited Signals:

To reconstruct x(t), you must recover input signal spectrum X(ω)


from sampled signal spectrum Y(ω), which is possible when there
is no overlapping between the cycles of Y(ω).

There are three types of sampling techniques:


Impulse sampling.
Natural sampling.
Flat Top sampling.

Graphical and analytical proof for Band Limited Signals:

Impulse Sampling
Impulse sampling can be performed by multiplying the input signal x(t) with an impulse train of period T. Here, the amplitude of each impulse changes with the amplitude of the input signal x(t) at that instant. The output of the sampler is given by
y(t) = x(t) · δ_T(t) = Σ_{n=−∞}^{∞} x(nT) δ(t − nT)    … (1)
Graphical and analytical proof for Band Limited Signals:

To get the spectrum of the sampled signal, take the Fourier transform of equation (1) on both sides:
Y(ω) = (1/T) Σ_{n=−∞}^{∞} X(ω − nωs),   where ωs = 2π/T
This is called ideal sampling or impulse sampling. It cannot be used practically because the pulse width cannot be zero and the generation of an impulse train is not possible practically.
Natural Sampling:
Natural sampling is similar to impulse sampling, except that the impulse train is replaced by a pulse train of period T, i.e. the input signal x(t) is multiplied by a pulse train.
Graphical and analytical proof for Band Limited Signals:

Flat Top Sampling: During transmission, noise is introduced at the top of the transmission pulse, and it can be easily removed if the pulse has a flat top. Here, the tops of the samples are flat, i.e. they have constant amplitude. Hence, it is called flat top sampling or practical sampling. Flat top sampling makes use of a sample-and-hold circuit.
Graphical and analytical proof for Band Limited Signals:

Nyquist Rate:
It is the minimum sampling rate at which a signal can be converted into samples and recovered back without distortion.
Nyquist rate fN = 2fm Hz
Nyquist interval = 1/fN = 1/(2fm) seconds.
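An illustrative numerical sketch of aliasing (assumed tones, not from the slides): a 13 Hz tone sampled at fs = 20 Hz violates fs ≥ 2fm and aliases to |fs − 13| = 7 Hz, producing exactly the same samples as a 7 Hz tone.

import numpy as np

fs = 20.0                                    # sampling rate, Hz
n = np.arange(0, 40)
t = n/fs

x13 = np.cos(2*np.pi*13*t)                   # under-sampled: 2*13 Hz = 26 Hz > fs
x7  = np.cos(2*np.pi*7*t)                    # tone at the alias frequency 7 Hz

print(np.allclose(x13, x7))                  # True: the 13 Hz tone is indistinguishable from 7 Hz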
Reconstruction of signal from its samples:

Assume that the Nyquist requirement, sampling frequency ωs > 2ωm, is satisfied.
We consider two reconstruction schemes:
• ideal reconstruction (with ideal band-limited interpolation),
• reconstruction with a zero-order hold.
Ideal Reconstruction: Shannon interpolation formula
x_r(t) = Σ_{n=−∞}^{∞} x[n] sinc((t − nT)/T),   where sinc(u) = sin(πu)/(πu) and T is the sampling period.
Reconstruction of signal from its samples:

Our ideal reconstruction filter has the frequency response of an ideal low-pass filter with gain T and cutoff frequency ωc, where ωm < ωc < ωs − ωm:
H_r(ω) = T  for |ω| ≤ ωc,   0 otherwise.
Reconstruction of signal from its samples:

The reconstructed signal x_r(t) is a train of sinc pulses scaled by the samples x[n].
This system is difficult to implement because each sinc pulse extends over a long (theoretically infinite) time interval.
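A brief numerical sketch of Shannon (sinc) interpolation with assumed parameters: a tone sampled above its Nyquist rate is reconstructed on a fine grid from a finite (hence approximate) window of samples.

import numpy as np

fs, fm = 10.0, 1.0                           # assumed sampling rate and tone frequency (fs > 2*fm)
T = 1.0/fs
n = np.arange(-200, 200)                     # finite window of samples (truncation of the infinite sum)
x_n = np.cos(2*np.pi*fm*n*T)

t = np.linspace(-1, 1, 500)                  # fine reconstruction grid
# Shannon interpolation: x_r(t) = sum_n x[n] * sinc((t - nT)/T); np.sinc(u) = sin(pi*u)/(pi*u)
x_r = np.sum(x_n[:, None] * np.sinc((t[None, :] - n[:, None]*T)/T), axis=0)

print(np.max(np.abs(x_r - np.cos(2*np.pi*fm*t))))   # small reconstruction (truncation) error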
Reconstruction of signal from its samples:

A general reconstruction filter


For the development of the theory, it is handy to consider the
impulse-sampled signal xP(t) and its CTFT.

Figure : Reconstruction in the frequency domain is low pass filtering

Effect of under sampling – Aliasing :

The possible forms of the sampled-signal spectrum under the different conditions fs > 2fm, fs = 2fm and fs < 2fm are shown in the corresponding diagrams.
Aliasing Effect:

The overlapped region that appears in the case of under-sampling represents the aliasing effect, which can be removed by
• choosing fs > 2fm
• using anti-aliasing filters.
Sampling of Band Pass Signals:
In the case of band pass signals, the spectrum of the band pass signal X(ω) = 0 for frequencies outside the range f1 ≤ f ≤ f2. The frequency f1 is always greater than zero. Also, there is no aliasing effect when fs > 2f2. But this has two disadvantages:
Samplings of Band Pass Signals:

The sampling rate is large in proportion to f2. This has practical limitations.
The sampled signal spectrum has spectral gaps.
To overcome this, the band pass sampling theorem states that the input signal x(t) can be converted into its samples and recovered back without distortion even when the sampling frequency is fs < 2f2. The minimum rate is fs = 2B, where B = f2 − f1 is the bandwidth, provided the band edges are suitably located (e.g. f2 an integer multiple of B).
Correlation:

Cross Correlation and Auto Correlation of Functions:
Correlation is a measure of similarity between two signals. The general formula for the correlation of two signals x1(t) and x2(t) is
R12(τ) = ∫_{−∞}^{∞} x1(t) x2*(t − τ) dt
There are two types of correlation:
• Auto correlation
• Cross correlation
Auto Correlation Function:

It is defined as the correlation of a signal with itself. The auto correlation function is a measure of similarity between a signal and its time-delayed version. It is represented by R(τ).
Consider a signal x(t). The auto correlation function of x(t) with its time-delayed version is given by
R(τ) = ∫_{−∞}^{∞} x(t) x(t − τ) dt
where τ = searching or scanning or delay parameter.
If the signal is complex, then the auto correlation function is given by
R(τ) = ∫_{−∞}^{∞} x(t) x*(t − τ) dt
Cross Correlation Function:
Cross correlation is a measure of similarity between two different signals.
Consider two signals x1(t) and x2(t). The cross correlation of these two signals, R12(τ), is given by
R12(τ) = ∫_{−∞}^{∞} x1(t) x2*(t − τ) dt
Properties of Auto Correlation Function:
Auto correlation exhibits conjugate symmetry, i.e. R(τ) = R*(−τ).
Properties of Auto Correlation Function:
The auto correlation function of an energy signal at the origin, i.e. at τ = 0, is equal to the total energy of that signal:
R(0) = E = ∫_{−∞}^{∞} |x(t)|² dt
Properties of Auto Correlation Function:
The auto correlation function is maximum at τ = 0, i.e. |R(τ)| ≤ R(0) ∀ τ.
Properties of Auto Correlation Function:
The auto correlation function and the energy spectral density are Fourier transform pairs, i.e.
F.T[R(τ)] = S_XX(ω)
S_XX(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ
R(τ) = x(τ) ∗ x(−τ)
Properties of Cross Correlation Function:
• Cross correlation exhibits conjugate symmetry, i.e. R12(τ) = R21*(−τ).
• Cross correlation is not commutative like convolution, i.e. in general R12(τ) ≠ R21(τ).
• If R12(0) = 0, i.e. if ∫ x1(t) x2*(t) dt = 0 over the interval (−∞, ∞), then the two signals are said to be orthogonal.
• The cross correlation function corresponds to the multiplication of the spectrum of one signal by the complex conjugate of the spectrum of the other signal, i.e.
R12(τ) ↔ X1(ω) X2*(ω)
This is also called the correlation theorem.
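A short numerical sketch (assumed discrete-time signals) checking the conjugate-symmetry property R12(τ) = R21*(−τ); np.correlate computes R12[k] = Σ_t x1[t + k] x2*[t] in 'full' mode, so reversing and conjugating one result should reproduce the other.

import numpy as np

rng = np.random.default_rng(1)
x1 = rng.standard_normal(64) + 1j*rng.standard_normal(64)
x2 = rng.standard_normal(64) + 1j*rng.standard_normal(64)

R12 = np.correlate(x1, x2, mode='full')      # cross correlation of x1 with x2
R21 = np.correlate(x2, x1, mode='full')

print(np.allclose(R12, np.conj(R21)[::-1]))  # True: R12(tau) = R21*(-tau)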
Energy Density Spectrum:

Energy spectral density describes how the energy of a signal or a time series is distributed with frequency. Here, the term energy is used in the generalized sense of signal processing. The energy density spectrum can be calculated using the formula
Ψ(ω) = |X(ω)|²,   with total energy  E = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω
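A numerical sketch of this energy relation (Parseval/Rayleigh) for an assumed finite-energy Gaussian pulse, approximating both integrals by discrete sums; the analytical energy is √(π/2) ≈ 1.2533.

import numpy as np

dt = 1e-3
t = np.arange(-5, 5, dt)
x = np.exp(-t**2)                            # assumed energy signal (Gaussian pulse)

E_time = np.sum(np.abs(x)**2) * dt           # integral of |x(t)|^2

X = np.fft.fft(x) * dt                       # approximation of X(omega) on the FFT grid
dw = 2*np.pi / (len(t)*dt)                   # angular-frequency grid spacing
E_freq = np.sum(np.abs(X)**2) * dw / (2*np.pi)

print(E_time, E_freq)                        # both ~1.2533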
Power Density Spectrum:

The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; the Fourier transforms of such signals generally exist. For continuous signals over all time, such as stationary processes, one must instead define the power spectral density (PSD); this describes how the power of a signal or time series is distributed over frequency. Here, power can be the actual physical power or, more often, for convenience with abstract signals, simply the squared value of the signal.
The power density spectrum can be calculated using the formula
S(ω) = lim_{T→∞} (1/T) |X_T(ω)|²
where X_T(ω) is the Fourier transform of the signal truncated to an interval of length T.
Power Density Spectrum:

The spectrum of a real-valued process (or even of a complex process using the above definition) is real and an even function of frequency: S(−ω) = S(ω).
If the process is continuous and purely non-deterministic, the autocovariance function can be reconstructed by using the inverse Fourier transform.
The PSD can be used to compute the variance (net power) of a process by integrating over frequency:
P = (1/2π) ∫_{−∞}^{∞} S(ω) dω
Relation between Autocorrelation Function and Energy Spectral Density Function:
For an energy signal, the autocorrelation function and the energy spectral density form a Fourier transform pair:
R(τ) ↔ Ψ(ω) = |X(ω)|²
Relation between Autocorrelation Function and Power Spectral Density Function:
For a power signal (or a stationary random process), the autocorrelation function and the power spectral density form a Fourier transform pair (the Wiener–Khinchin relation):
R(τ) ↔ S(ω)
Relation between Convolution and Correlation:
Correlation can be expressed as a convolution with a time-reversed (and conjugated) signal:
R12(τ) = x1(τ) ∗ x2*(−τ)
