Department of Electronics & Telecommunication Engineering Orientation Program. Subject: Digital Communication, Class: T.E, Sem: V
Orientation Program
Subject:Digital Communication
Class: T.E, Sem: V, Academic Year 2023-24
(Rev 2019 ‘C’ Scheme)
PO-8: Ethics
PO-10: Communication
PSO-3: Successful career and social concern. An understanding of the skills to communicate in both oral and written forms to have a successful career, demonstrating the practice of professional ethics and concern for societal and environmental wellbeing by solving real-world problems.
Numericals on Linear block code
10 Marks Numerical
Random variables are denoted by uppercase letters such as X and Y, and the values taken by them are denoted by lowercase letters such as x1, x2, y1, y2.
2. Continuous random variable: a random variable that takes an infinite number of values is called a continuous random variable.
Example: the noise voltage generated by an electronic amplifier has a continuous amplitude. This means the sample space S of noise-voltage amplitudes is continuous, and therefore X has a continuous range of values.
Cumulative distribution function (CDF)
Probability density function (PDF)
Block diagram: information signal (input) -> Transmitter -> Channel -> Receiver -> output
Disadvantages
1. The bit rate of a digital system is high, so it requires a large channel bandwidth.
2. A digital communication system requires synchronisation in the case of synchronous modulation.
Channels for digital communication
Classification of channels:
1. Telephone channels 2. Coaxial cable 3. Optical fibre 4. Microwave link 5. Satellite channel
Comparison between analog modulation and digital modulation
1. In an analog modulation system the transmitted signal is analog in nature, whereas in digital modulation the transmitted signal is in digital format, that is, a train of digital pulses.
2. In analog modulation, the amplitude, frequency or phase variation of the transmitted signal represents the information, whereas in digital modulation the amplitude, width and position of the transmitted pulses are constant and the message is transmitted in the form of code words.
3. Noise immunity is poor in analog modulation but excellent in digital modulation.
4. It is not possible to remove the noise in analog modulation because the analog signal is a function of time and varies continuously, whereas it is possible to remove the noise in digital modulation since the signal is digital in nature, that is, in the form of 1s and 0s.
5. Regenerative repeaters are not used in analog systems, but are used in digital systems.
Performance parameters of a
digital communication system
Bit rate (R): the number of bits transmitted per second.
Baud rate (r): the number of symbols (signalling elements) transmitted per second.
Relation between bit rate and baud rate:
r = R/n
where n is the number of bits per symbol.
Note: the bit rate is always greater than or equal to the baud rate, R >= r.
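The relation r = R/n can be checked with a one-line Python sketch (the QPSK figure of n = 2 bits per symbol is an illustrative assumption):

```python
# Relation between bit rate R and baud rate r for an n-bit-per-symbol scheme.
def baud_rate(bit_rate, bits_per_symbol):
    """r = R / n: symbols per second for a given bit rate."""
    return bit_rate / bits_per_symbol

# Example: QPSK carries n = 2 bits per symbol.
R = 9600              # bits per second
r = baud_rate(R, 2)
print(r)              # 4800.0 symbols per second
```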
Block diagram and sub-system description
of a digital communication system
Information: information is the source of communication.
Example: you are planning a tour to a city located in an area where rainfall is rare. If you want to know the weather forecast, the weather bureau will provide you one of the following pieces of information:
1. It will be a sunny day today (least information)
2. There will be scattered rain (some information)
3. There will be a cyclone (maximum information)
⚫ Information rate: R = r * H
where r = number of messages per second and H = information (entropy) per message.
⚫ Code efficiency = H/L, where L is the average codeword length
⚫ Variance = sum over i of pi * (ni - L)^2
Topic: AWGN channel and Shannon-Hartley channel capacity theorem
AWGN Channel
AWGN is often used as a channel model in which the only
impairment to communication is a linear addition of
wideband or white noise with a constant spectral density
(expressed as watts per hertz of bandwidth) and a Gaussian
distribution of amplitude.
Solution:
To calculate the maximum capacity of a Gaussian channel with a given bandwidth of 3 kHz and a signal-to-noise ratio (SNR) of 30 dB, we use Shannon's channel capacity formula:
C = B * log2(1 + SNR)
where:
C is the channel capacity in bits per second (bps),
B is the bandwidth in hertz (Hz),
SNR is the signal-to-noise ratio (unitless power ratio).
Convert dB to a ratio: SNR = 10^(30/10) = 1000.
C = 3,000 Hz * log2(1 + 1000) = 3,000 * 9.967 = approximately 29,900 bps, i.e. about 29.9 kbps.
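The same computation can be sketched in Python (the function name is illustrative):

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B * log2(1 + SNR), with SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)      # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

C = shannon_capacity(3_000, 30)
print(round(C))   # about 29902 bps, i.e. roughly 29.9 kbps
```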
Symbol m1 m2 m3 m4 m5
Probability 0.4 0.2 0.2 0.1 0.1
https://www.youtube.com/watch?v=_lswtelpWr8
SOURCE CODING ALGORITHM
Solved numerical of 10 marks based on Huffman coding to calculate entropy, average codeword length, code efficiency and variance.
Average codeword length L = 2.2 bits/message
H = 2.1219 bits/message, Efficiency = H/L = 96.45%
Variance = 0.16
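These figures can be cross-checked with a short Python sketch that builds the Huffman codeword lengths and recomputes L, H, efficiency and variance (the probabilities 0.4, 0.2, 0.2, 0.1, 0.1 from the earlier table are assumed):

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Return the codeword lengths produced by Huffman's algorithm."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # every symbol in a merge gains one bit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.4, 0.2, 0.2, 0.1, 0.1]
n = huffman_lengths(probs)
L = sum(p * l for p, l in zip(probs, n))               # average codeword length
H = -sum(p * log2(p) for p in probs)                   # entropy
var = sum(p * (l - L) ** 2 for p, l in zip(probs, n))  # variance of lengths
print(round(L, 2), round(H, 4), round(100 * H / L, 2), round(var, 2))
# 2.2 2.1219 96.45 0.16
```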
⚫ Entropy H = -sum of pi * log2(pi)
⚫ Average codeword length N = sum of pi * ni
⚫ Variance = sum of pi * (ni - N)^2
Example: x1, x2, x3 symbols with probabilities 0.45, 0.35, 0.2 respectively. Calculate the entropy (H) and the average codeword length (N).
Example
Consider the same memoryless source with the same messages and their probabilities. Find the Huffman code by moving the probability of the combined message as low as possible, tracking backward through the various steps. Find the codewords of this Huffman code.
Example: A DMS has the following symbols with probabilities as shown below
Symbol  Probability  Codeword  No. of bits in codeword
S0      0.25         10        2
S1      0.25         11        2
S2      0.125        001       3
S3      0.125        010       3
S4      0.125        011       3
S5      0.0625       0000      4
S6      0.0625       0001      4
JUNE -13
Source coding algorithms
⚫ Huffman code
⚫ Shannon-Fano code
Message  Probability  Col I  Col II  Col III  Col IV  Col V  Codeword  No. of bits
M1       1/2          0                                      0         1
M2       1/8          1      0       0                       100       3
M3       1/8          1      0       1                       101       3
M4       1/16         1      1       0        0              1100      4
M5       1/16         1      1       0        1              1101      4
M6       1/16         1      1       1        0              1110      4
M7       1/32         1      1       1        1       0      11110     5
M8       1/32         1      1       1        1       1      11111     5
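Because the probabilities here are dyadic (each is a power of 1/2), the Shannon-Fano lengths equal -log2(p) and the code is 100% efficient. A quick Python sketch, with the probabilities and lengths taken from the table above:

```python
from math import log2

# Shannon-Fano result: dyadic probabilities give codeword lengths equal
# to -log2(p), so average length equals entropy and efficiency is 100%.
probs   = [1/2, 1/8, 1/8, 1/16, 1/16, 1/16, 1/32, 1/32]
lengths = [1, 3, 3, 4, 4, 4, 5, 5]
L = sum(p * n for p, n in zip(probs, lengths))   # average codeword length
H = -sum(p * log2(p) for p in probs)             # source entropy
print(L, H)   # both 2.3125, so code efficiency H/L = 100%
```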
EXAMPLE-2
Messages m1, m2, m3, m4, m5 with probabilities 0.4, 0.19, 0.16, 0.15, 0.1.
Case 1:
m1 = 0.4 (set 1)
m2 + m3 + m4 + m5 = 0.19 + 0.16 + 0.15 + 0.1 = 0.6 (set 2)
Difference: 0.6 - 0.4 = 0.2
Case 2:
m1 + m2 = 0.59 (set 1)
m3 + m4 + m5 = 0.41 (set 2)
Difference: 0.59 - 0.41 = 0.18
Case 2 gives the more nearly equal partition, so the first split is {m1, m2} versus {m3, m4, m5}.
Message  Probability  Col I  Col II  Col III  Codeword  No. of bits
m1       0.4          0      0                00        2
m2       0.19         0      1                01        2
m3       0.16         1      0                10        2
m4       0.15         1      1       0        110       3
m5       0.1          1      1       1        111       3
https://www.youtube.com/watch?v=we6qxE0rQMc
Example: a discrete memoryless source has 5 symbols X1, X2, X3, X4 and X5 with probabilities
P(X1) = 0.4, P(X2) = 0.19, P(X3) = 0.16, P(X4) = 0.14, P(X5) = 0.11

Message  Probability  Col I  Col II  Col III  Codeword  No. of bits
X1       0.4          0      0                00        2
X2       0.19         0      1                01        2
X3       0.16         1      0                10        2
X4       0.14         1      1       0        110       3
X5       0.11         1      1       1        111       3

Average codeword length L = 2.25 bits per message
Entropy H = 2.1544 bits per message
Code efficiency = H/L = approximately 95.75 percent
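These figures can be verified with a short sketch, with the probabilities and codeword lengths taken from the table above:

```python
from math import log2

# Verify the Shannon-Fano metrics for the five-symbol source above.
probs   = [0.4, 0.19, 0.16, 0.14, 0.11]
lengths = [2, 2, 2, 3, 3]                 # codeword lengths from the table
L = sum(p * n for p, n in zip(probs, lengths))   # average codeword length
H = -sum(p * log2(p) for p in probs)             # source entropy
print(round(L, 2), round(H, 4), round(100 * H / L, 2))
# 2.25 2.1544 95.75
```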
EXAMPLE
Messages m1 to m6 with probabilities 0.3, 0.25, 0.15, 0.12, 0.1, 0.08.

Message  Probability  Col I  Col II  Col III  Codeword  No. of bits
m1       0.3          0      0                00        2
m2       0.25         0      1                01        2
m3       0.15         1      0       0        100       3
m4       0.12         1      0       1        101       3
m6       0.1          1      1       0        110       3
m5       0.08         1      1       1        111       3

Case 1:
Upper set {m1} = 0.3, lower set = 0.7; difference = 0.7 - 0.3 = 0.4
Case 2:
Upper set {m1, m2} = 0.55, lower set = 0.45; difference = 0.55 - 0.45 = 0.1
Case 2 gives the more balanced partition, so the first split is {m1, m2}.
Error control coding
When transmission of a digital signal takes place between two systems, such as computers, the signal gets distorted due to the addition of noise. The noise introduces errors, which may change a bit from 1 to 0 or vice versa.
These errors can become a serious problem in a digital system, so it is necessary to detect and correct them.
Need for error control coding
In digital communication, errors are introduced during the transmission of data, which affects reliability. To improve reliability, the bit energy Eb (signal power) should be increased and the noise spectral density No should be decreased, so as to maximise the ratio Eb/No. But there is a practical limit to increasing Eb/No, so some type of coding must be used to improve the quality of the transmitted signal.
Need of channel coding
❖ Channel coding is done to minimise the effect of channel noise. This process reduces the number of errors in the received data and makes the system more reliable.
Codeword
The channel decoder present at the receiver maps the channel output into a digital signal so as to minimise the effect of channel noise. The channel encoder and decoder provide reliable communication over a noisy channel by introducing redundancy in a prescribed form at the transmitter. The output of the encoder is a series of codewords; the decoder converts these codewords back into the digital message.
1. Codeword (n): the block of n bits produced by the channel encoder.
2. Code rate: the ratio of the number of message bits (k) to the total number of bits (n) in the codeword.
Code rate (r) = k/n
3. Hamming weight of a codeword: defined as the number of non-zero elements in the codeword; it is the distance between that codeword and the all-zero code vector. E.g. for 101100 the weight of the codeword is 3, i.e. the number of 1s.
4. Code efficiency: defined as the ratio of message bits to the number of transmitted bits per block.
Code efficiency = code rate = k/n
5. Hamming distance: consider two code vectors of the same length; the Hamming distance is simply the distance between the two codewords, defined as the number of locations in which their respective elements differ.
E.g. codeword 1: 1 1 1 1 0 1 0 0
     codeword 2: 1 0 0 1 1 0 1 0    Hamming distance = 5
6. Minimum distance (dmin): the dmin of a linear block code is defined as the smallest Hamming distance between any pair of code vectors in the code. Equivalently, it is the smallest Hamming weight of the difference between any pair of code vectors.
E.g. 110001 has weight 3 and 111100 has weight 4, so dmin = 3.
7. Code vector: an n-bit codeword can be visualised in an n-dimensional space as a vector whose elements are the n bits of the codeword.
Steps for linear block code
X = (m1 m2 m3 m4 ... mk, P1 P2 P3 ... Pq)
k = number of message bits
q = number of parity bits
X = codeword
Linear block code
Step 1: X = M . G
where
X = codeword of size 1 x n (n bits)
M = message bits, size 1 x k
G = generator matrix of size k x n
The check bits play the role of error detection and correction; the job of the linear block code is to generate these parity bits.
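A minimal Python sketch of this step, using a systematic generator matrix G = [I_k | P] consistent with the (7,4) codeword table that appears later in this example:

```python
# Codeword generation X = M . G (mod 2) for a (7,4) linear block code.
# G below is the systematic generator matrix implied by the (7,4)
# codeword table in this example (first 4 bits = message, last 3 = check).
G = [
    [1, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 0, 1, 1],
]

def encode(m, G):
    """Multiply the 1xk message by the kxn generator matrix over GF(2)."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

print(encode([1, 0, 1, 1], G))   # [1, 0, 1, 1, 0, 0, 1]
```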
Linear block code (7,4) encoder
Figure: the input bit sequence m4 m3 m2 m1 feeds three modulo-2 adders, which generate the check bits C3, C2, C1.
Example: Linear block code
(7,4)=(n,k)
The parity check matrix of particular (7,4) linear block code is given by
H=
The 16 codewords (first four bits = message, last three = check bits) and their Hamming weights:
Codeword  Weight
0000000   0
0001011   3
0010101   3
0011110   4
0100110   3
0101101   4
0110011   4
0111000   3
1000111   4
1001100   3
1010010   3
1011001   4
1100001   3
1101010   4
1110100   4
1111111   7
Example: LBC (6,3)
For a (6,3) code, the generator matrix G is given by:
n = number of bits in the codeword
k = number of message bits
q = n - k = number of check bits
Y= 1100011 we have transpose of H
Example (if G or H is not given)
A linear block code has check bits C4, C5, C6 given by:
C4 = d1 + d2 + d3
C5 = d1 + d2
C6 = d1 + d3
d1 d2 d3  C4 C5 C6  Codeword  Weight
0  0  0   0  0  0   000000    0
0  0  1   1  0  1   001101    3
0  1  0   1  1  0   010110    3
0  1  1   0  1  1   011011    4
1  0  0   1  1  1   100111    4
1  0  1   0  1  0   101010    3
1  1  0   0  0  1   110001    3
1  1  1   1  0  0   111100    4
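The codeword table can be regenerated directly from the check-bit equations, as a quick sketch:

```python
from itertools import product

# Generate all (6,3) codewords from the check-bit equations above:
# C4 = d1+d2+d3, C5 = d1+d2, C6 = d1+d3 (all modulo 2).
codewords = []
for d1, d2, d3 in product([0, 1], repeat=3):
    c4 = (d1 + d2 + d3) % 2
    c5 = (d1 + d2) % 2
    c6 = (d1 + d3) % 2
    codewords.append((d1, d2, d3, c4, c5, c6))

for cw in codewords:
    print("".join(map(str, cw)), "weight", sum(cw))
```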
Syndrome table for single-bit errors
Error vector      Bit in error  Syndrome vector
0 0 0 0 0 0 0     none          0 0 0
1 0 0 0 0 0 0     first         1 1 1
0 1 0 0 0 0 0     second        1 1 0
0 0 1 0 0 0 0     third         1 0 1
0 0 0 1 0 0 0     fourth        0 1 1
0 0 0 0 1 0 0     fifth         1 0 0
0 0 0 0 0 1 0     sixth         0 1 0
0 0 0 0 0 0 1     seventh       0 1 1
The parity check matrix of a (7,4) linear block code is
H = [1 1 1 0 1 0 0; 1 1 0 1 0 1 1; 1 0 1 1 0 0 1]
Calculate the syndrome vector for single-bit errors.
[S] = [E][H^T]
[S] = [1 0 0 0 0 0 0][H^T] = [1 1 1]
[S] = [0 1 0 0 0 0 0][H^T] = [1 1 0]
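The syndrome computation [S] = [E][H^T] can be sketched as follows, using the parity-check matrix H given in this example; note that the syndrome of a single-bit error is simply the corresponding column of H:

```python
# Syndrome S = E . H^T (mod 2) for each single-bit error pattern.
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 0, 1],
]

def syndrome(e, H):
    """Each syndrome bit is the dot product of the error vector with one row of H."""
    return [sum(eb * hb for eb, hb in zip(e, row)) % 2 for row in H]

for pos in range(7):
    e = [0] * 7
    e[pos] = 1                      # single-bit error in position pos+1
    print(e, "->", syndrome(e, H))  # equals column pos+1 of H
```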
Equalizer and eye pattern
What is inter-symbol interference (ISI) and how is it measured?
When a signal passes through a communication channel, distortion is introduced. To compensate for linear distortion, we can use a network called an equalizer, connected in cascade with the channel.
Figure: channel Hc(f) in cascade with equalizer Heq(f); the output is an ISI-free, delayed version of the channel input.
The following information can be extracted from the eye pattern:
Cyclic code
Example: consider the (7,4) cyclic code generated by the generator polynomial g(D) = 1 + D^2 + D^3. Design an encoder using shift registers, and use it to find the codeword for the message 1001 and for the message 1010. Suppose the received vector is r = 0 0 1 0 1 1 0; find the syndrome using the syndrome circuit, and find the matrix for the above cyclic code.
Numerical on syndrome calculator
Example 2: consider a (7,4) cyclic code. Design the syndrome calculator for the (7,4) Hamming code generated by the generator polynomial G(D) = 1 + D + D^3.
The transmitted and received codewords are given by
X = 0 1 1 1 0 0 1 and Y = 0 1 1 0 0 0 1
g1 = 1, g2 = 0
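A minimal sketch of the syndrome computation by polynomial division over GF(2). The LSB-first bit convention (bit i = coefficient of D^i) is an assumption, chosen because under it the transmitted word X above divides g(D) exactly:

```python
# Syndrome of a received word = remainder of its polynomial divided by g(D).
def poly_mod(bits, g):
    """Remainder of bits(D) / g(D) over GF(2), as len(g)-1 coefficients."""
    r = list(bits)
    for i in range(len(r) - 1, len(g) - 2, -1):   # cancel high-order terms
        if r[i]:
            for j, gb in enumerate(g):
                r[i - (len(g) - 1) + j] ^= gb
    return r[:len(g) - 1]

g = [1, 1, 0, 1]           # g(D) = 1 + D + D^3
X = [0, 1, 1, 1, 0, 0, 1]  # transmitted codeword
Y = [0, 1, 1, 0, 0, 0, 1]  # received word (one bit flipped)
print(poly_mod(X, g))      # [0, 0, 0]: valid codeword, zero syndrome
print(poly_mod(Y, g))      # [1, 1, 0]: non-zero syndrome flags the error
```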
M-ary
M = number of symbols
N = number of bits combined per symbol to reduce the transmission bandwidth
M = 2^N
BW = bit rate / N
QPSK: Quadrature Phase Shift Keying
QPSK constellation diagram
QAM constellation diagram
MSK MODULATION/Shaped
QPSK
Minimum shift keying
1. The MSK bandwidth requirement is less as compared to QPSK.
2. The spectrum of MSK has a much wider main lobe as compared to that of the QPSK system; typically it is 1.5 times wider than the main lobe of QPSK. The side lobes are much smaller as compared to QPSK, so a bandpass filter is not required to remove the side bands.
3. The waveform of MSK has an important property called phase continuity: there are no abrupt changes in the phase of MSK, unlike QPSK. Due to this feature of MSK, the intersymbol interference caused by non-linear amplifiers is avoided completely in the case of MSK.
Effect of inter-symbol interference
1. In the absence of ISI and noise, the transmitted bits can be decoded correctly at the receiver. The presence of ISI introduces errors in the decision device at the receiver output; thus the receiver can make an error in deciding whether it has received a logic 1 or a logic 0.
2. Another effect takes place due to the overlapping or spreading of adjacent pulses. It is necessary to use a special filter called an equalizer in order to reduce ISI and its effects.
Remedy to reduce ISI
the function which produce a zero ISI is sinc function. does instead of a
rectangular pulse if we transmit sinc pulse Then ISI can be reduced to
zero. using Sinc Pulse for a transmission is known as nyquist pulse
Shaping.
this type of filter particularly not available. therefore particular the frequency
response of the filter is modified with the different role of factor that is alpha
to obtain a particular achievable filter response curve.