
Department of Electronics & Telecommunication Engineering

Orientation Program
Subject: Digital Communication
Class: T.E., Sem: V, Academic Year 2023-24
(Rev 2019 ‘C’ Scheme)

Prepared by: Prof. Swapna Patil


Date: 10/7/2023
Time: 9:15 am to 10:15 am
Contents

1. Syllabus scheme (TH and Practical)
2. Prerequisites
3. Course objectives, Course outcomes
4. Detailed syllabus of Digital Communication
5. List of experiments
6. Teaching aids (TH & PRACT)
7. Program Outcomes & Program Specific Outcomes (POs & PSOs)
Syllabus Scheme
Prerequisites:
ECC401 - Engineering Mathematics-IV
ECC404 - Signals and Systems
ECC405 - Principles of Communication Engineering
Course objectives:
1. To describe the basics of information theory and source coding.
2. To illustrate various error control codes.
3. To describe baseband system.
4. To learn different digital modulation and demodulation techniques.
Course outcomes: After successful completion of the course, the student will be able to:
1. Apply the concepts of information theory in source coding.
2. Compare different error control systems and apply various error detection codes.
3. Analyze different error correction codes.
4. Compare various baseband transmission methods for digital signals.
5. Evaluate the performance of optimum baseband detection in the presence of white noise.
6. Compare the performances of different digital modulation techniques.
Detailed syllabus of Dcom
Text Books:
1. H. Taub, D. Schilling, and G. Saha, Principles of Communication Systems, Tata McGraw-Hill, New Delhi, Third Edition, 2012.
2. B. P. Lathi and Z. Ding, Modern Digital and Analog Communication Systems, Oxford University Press, Fourth Edition, 2017.
3. Simon Haykin, Digital Communications, John Wiley and Sons, New Delhi, Fourth Edition, 2014.
4. John G. Proakis, Digital Communications, McGraw-Hill, Fourth Edition.
Reference Books:
1. B. Sklar and P. K. Ray, Digital Communication: Fundamentals and Applications, Pearson, Dorling Kindersley (India), Delhi, Second Edition, 2009.
2. T. L. Singal, Analog and Digital Communication, Tata McGraw-Hill, New Delhi, First Edition, 2012.
3. P. Ramakrishna Rao, Digital Communication, Tata McGraw-Hill, New Delhi, First Edition, 2011.
4. K. Sam Shanmugam, Digital and Analog Communication Systems, John Wiley and Sons.
5. Upamanyu Madhow, Fundamentals of Digital Communication, Cambridge University Press.
6. W. C. Huffman and Vera Pless, Fundamentals of Error-Correcting Codes, Cambridge University Press.
7. Graham Wade, Coding Techniques, Palgrave, New York.
NPTEL / Swayam Course:
1. https://nptel.ac.in/courses/108/101/108101113/
2. https://nptel.ac.in/courses/108/102/108102096/
3. https://nptel.ac.in/courses/108/102/108102120/
ECL501 Digital Communication Lab
Course objectives:
1. To learn source coding and error control coding techniques
2. To compare different line coding methods
3. To distinguish various digital modulations
4. To use different simulation tools for digital communication applications
Course outcomes:
After the successful completion of the course student will be able to
1. Compare various source coding schemes
2. Design and implement different error detection codes
3. Design and implement different error correction codes
4. Compare various line coding techniques
5. Illustrate the impulse response of a matched filter for optimum detection
6. Demonstrate various digital modulation techniques
Suggested list of experiments: (Course teacher can design their own experiments based on the
prescribed syllabus)
1. Huffman code generation
2. Shannon-Fano code generation
3. Vertical redundancy Check (VRC) code generation and error detection
4. Horizontal Redundancy Check (HRC) code generation and error detection
5. Cyclic redundancy Check (CRC) code generation and error detection
6. Checksum code generation and error detection
7. Compare the performances of HRC and Checksum
8. Linear block code generation and error detection
9. Error detection and correction using Hamming code virtual lab: http://vlabs.iitb.ac.in/vlabs-dev/labs/mit_bootcamp/comp_networks_sm/labs/exp1/index.php
10. Cyclic code generation and error detection
11. Convolutional code generation
University of Mumbai-R2019-C-Scheme-TY Electronics and Telecommunication Engineering Page 38 of 101

12. Line codes generation and performance comparison
13. Spectrum of line codes (NRZ unipolar and polar)
14. Impulse responses of ideal (Nyquist filter) and practical (raised-cosine filter) solutions for zero ISI
15. Matched filter impulse response for a given input
16. Generation (and detection) of Binary ASK
17. Generation (and detection) of Binary PSK
18. Generation (and detection) of Binary FSK
19. Generation (and detection) of QPSK
20. Generation (and detection) of M-ary PSK
21. Generation (and detection) of M-ary FSK
22. Generation (and detection) of 16-ary QASK
23. Generation (and detection) of MSK

Term Work, Practical and Oral:
At least 8 experiments covering the entire syllabus must be given batch-wise. The experiments can be conducted with the help of a simulation tool (preferably open source) or with breadboard and components. Teachers should refer to the suggested list of experiments and can design additional experiments to impart practical design skills. The experiments should be student-centric, and an attempt should be made to make them more meaningful, interesting and innovative.
Term work assessment must be based on the overall performance of the student, with every experiment and assignment graded from time to time. The grades will be converted to marks as per the "Credit and Grading System" manual and should be added and averaged. Term work assessment should be done based on this scheme.
The practical and oral examination will be based on the entire syllabus. Students are encouraged to share their experiment codes in an online repository. The practical exam slip should cover all 8 experiments.
Internal Assessment (20 Marks):
Internal Assessment (IA) consists of two class tests of 20 marks each. IA-1 is to be conducted on approximately 40% of the syllabus, and IA-2 on the remaining contents (approximately 40% of the syllabus, excluding the contents covered in IA-1). The duration of each test shall be one hour. The average of the two tests will be considered as the IA marks.
End Semester Examination (80 Marks):
Weightage of each module in the end-semester examination will be proportional to the number of respective lecture hours mentioned in the curriculum.
1. The question paper will comprise a total of 6 questions, each carrying 20 marks.
2. Question No. 1 will be compulsory and based on the entire syllabus, with 4 to 5 sub-questions.
3. The remaining questions will be mixed in nature and randomly selected from all the modules.
4. Weightage of each module will be proportional to the number of respective lecture hours mentioned in the syllabus.
5. A total of 4 questions need to be solved.
DCOM Practicals
https://octave-online.net/
❖ Teaching aids: PPT, Google Classroom, Google Meet

Class code: de2fv6a

❖ Assessment tools: Assignment, Quiz, Term Test, Final Exam
SHREE L.R. TIWARI COLLEGE OF ENGINEERING
Kanakia Park, Mira Road(E), Thane-401107, Maharashtra.
DEPARTMENT OF ELECTRONICS AND TELECOMMUNICATION
Programme Outcome (POs & PSOs)
Programme Outcomes are the skills and knowledge which students have at the time of graduation, indicating what a student can do with the subject-wise knowledge acquired during the programme.

PO Short title of the PO

PO-1 Engineering knowledge

PO-2 Problem analysis

PO-3 Design/development of solutions

PO-4 Conduct investigations of complex problems

PO-5 Modern tool usage

PO-6 The engineer and society

PO-7 Environment and sustainability

PO-8 Ethics

PO-9 Individual and team work

PO-10 Communication

PO-11 Project management and finance

PO-12 Life-long learning


Program Specific Outcomes (PSOs)

PSO-1 Professional Skills: An ability to understand and associate the basic concepts and applications in the field of electronics, communication/networking, signal processing, microwave technology, embedded systems and semiconductor technology in the design of complex systems.

PSO-2 Problem-Solving Skills: A capability to comprehend the technological advancements in the usage of modern design tools to analyze and design subsystems/processes for a variety of applications with cost-effective and appropriate solutions.

PSO-3 Successful Career and Social Concern: An understanding of skills to communicate in both oral and written forms to have a successful career, demonstrating the practice of professional ethics and concern for societal and environmental wellbeing by solving real-world problems.
Thank You
Numericals on Linear block code
10 Marks Numerical



Sampling Theorem
Statement: A continuous-time signal can be represented by its samples and recovered back when the sampling frequency fs is greater than or equal to twice the highest frequency component of the message signal, i.e., fs ≥ 2fm.
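As a quick check of the statement, the short Python sketch below (not part of the syllabus) computes the apparent frequency of a sampled tone: when fs ≥ 2fm the tone survives, and when fs < 2fm it folds (aliases) to a lower frequency. The function name and the example frequencies are illustrative choices.

```python
import math

def alias_frequency(f_signal, f_sample):
    """Apparent frequency of a sinusoid of f_signal Hz when sampled at f_sample Hz.

    The sampled spectrum repeats every f_sample, so the observed frequency is the
    distance from f_signal to the nearest multiple of f_sample."""
    k = round(f_signal / f_sample)          # nearest spectral replica
    return abs(f_signal - k * f_sample)

fm = 3000.0                                  # highest message frequency (Hz)

# Sampling above the Nyquist rate (fs >= 2 fm): the tone is preserved.
print(alias_frequency(fm, 8000.0))           # 3000.0 -> no aliasing

# Sampling below the Nyquist rate: the tone folds down to a lower frequency.
print(alias_frequency(fm, 4000.0))           # 1000.0 -> aliased
```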
Definition of amplitude shift keying / on-off keying:
Amplitude shift keying is a form of digital modulation in which the amplitude of an analog sinusoidal carrier is switched according to the input digital data (in on-off keying, the carrier is transmitted for a 1 and suppressed for a 0).
Definition of frequency shift keying:
Frequency shift keying is a form of digital modulation in which the frequency of a sinusoidal carrier is varied to represent the input digital data; only the frequency of the carrier changes, while amplitude and phase remain constant.

Definition of phase shift keying:
Phase shift keying is a form of digital modulation in which the phase of an analog sinusoidal carrier is varied to represent the input digital data.
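The three definitions can be sketched in a few lines of Python; this is an illustrative toy generator (the carrier frequency, the FSK frequency pair, and the 180° PSK phase shift are arbitrary choices for demonstration, not prescribed by the syllabus).

```python
import math

def modulate(bits, scheme, fc=4.0, samples_per_bit=100):
    """Generate a binary ASK, FSK or PSK waveform for a bit sequence.

    fc is the carrier frequency in cycles per bit interval; FSK uses fc for
    a 1 and fc/2 for a 0 (an arbitrary illustrative pair)."""
    out = []
    for i, b in enumerate(bits):
        for n in range(samples_per_bit):
            t = i + n / samples_per_bit              # time in bit intervals
            if scheme == "ASK":                      # on-off keying: carrier for 1, silence for 0
                out.append(b * math.sin(2 * math.pi * fc * t))
            elif scheme == "FSK":                    # switch the carrier frequency
                f = fc if b else fc / 2
                out.append(math.sin(2 * math.pi * f * t))
            elif scheme == "PSK":                    # switch the carrier phase by 180 degrees
                phase = 0.0 if b else math.pi
                out.append(math.sin(2 * math.pi * fc * t + phase))
            else:
                raise ValueError("unknown scheme")
    return out

bits = [1, 0, 1, 1, 0, 1]
ask = modulate(bits, "ASK")
psk = modulate(bits, "PSK")
```

Plotting `ask`, `psk` or the FSK output against time reproduces the textbook keying waveforms.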
Line codes
Baseband Modulation and Transmission
The first step in digitising a signal is called formatting. It is done to make the message signal compatible with digital processing: the source information is converted into digital symbols. This transformation is achieved by processes such as sampling, quantization and PCM coding. The resulting digital messages are in the form of 1s and 0s. In the next step, called pulse modulation, they are transformed into baseband pulse waveforms; such waveforms can be transmitted over a cable.

[Figure: binary sequence 101001 mapped to its pulse waveform]

LINE CODES
Different formats of line codes:
1. Non-return to zero (NRZ)
2. Return to zero (RZ)

Types of Line Coding


There are 3 types of Line Coding
● Unipolar
● Polar
● Bi-polar
Example
1. Over a long transmission line, draw the following data formats for the binary sequence 10011101011:
i) Unipolar NRZ ii) Polar RZ iii) Manchester
2. Represent the given data sequence 110011010011 with the help of neat waveforms in:
i) Manchester format
ii) NRZ
iii) AMI-RZ
iv) RZ
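The line-code formats in the examples above can be sketched as half-bit level pairs in Python. This is a minimal illustration; each bit is represented as (first-half level, second-half level), and the Manchester polarity convention used here (1 = high-then-low) is an assumption, since the opposite convention is also common.

```python
def unipolar_nrz(bits, A=1):
    # 1 -> +A for the full bit interval, 0 -> 0 for the full bit interval
    return [(A if b else 0, A if b else 0) for b in bits]

def polar_rz(bits, A=1):
    # 1 -> +A for the first half then return to zero; 0 -> -A then zero
    return [(A if b else -A, 0) for b in bits]

def manchester(bits, A=1):
    # Assumed convention: 1 is high-then-low, 0 is low-then-high
    # (always a mid-bit transition, so the clock is easy to recover).
    return [(A, -A) if b else (-A, A) for b in bits]

seq = [1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1]   # the sequence 10011101011 from Example 1
print(manchester(seq)[:3])
```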
Random variable: a function which takes on any value from the sample space, with its range being some set of real numbers, is called a random variable.

Example: the experiment is throwing a die.

The possible outcomes form the sample space S = {1, 2, 3, 4, 5, 6}, and each element of the sample space is called a sample point; the sample space contains 6 sample points for this experiment. Every time the trial is performed, the outcome is one of the sample points between 1 and 6, and in every trial the outcome occurs randomly. It is not fixed; therefore the outcome of a trial or experiment is a variable which can randomly take any value from the sample points.
Types of random variables
1. Discrete random variables
2. Continuous random variables

Random variables are denoted by uppercase letters such as X and Y, and the values taken by them are denoted by lowercase letters such as x1, x2, y1, y2.

1. Discrete random variable: a random variable which takes on only a finite number of values in a finite observation interval. This means that a discrete random variable has a countable number of discrete values.
Example: consider the experiment of tossing 3 coins simultaneously. In such a case there are 8 possible outcomes in the sample space:
S = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}
Let X, the number of heads, be the random variable:
X = {3, 2, 2, 2, 1, 1, 1, 0}
X takes a finite number of values.

2. Continuous random variable: a random variable that takes an infinite number of values is called a continuous random variable.
Example: the noise voltage generated by an electronic amplifier has a continuous amplitude. This means the sample space S of noise voltage amplitudes is continuous; therefore X has a continuous range of values.
Cumulative distribution function (CDF) and probability density function (PDF)

1. Definition of CDF and PDF
2. Properties of CDF and PDF
3. Difference between CDF and PDF
Parameters to study statistical information
The PDF (probability density function) gives some information about a random variable, but it is complex to work with directly. A simpler way to summarize the information about a random variable is through statistical averages: mean (average), moments, standard deviation and variance are the parameters used to measure statistical information.
Statistical average of a random variable (mean)
1. It is given by the summation of the values of a random variable X weighted by their probabilities.
2. The mean is denoted by mX.
3. The mean is nothing but the expected value of the random variable X.
4. For equally likely values, the mean is the ratio of the arithmetic sum of all the values of X to the total number of values of X.
Moments and variance: the nth moment of a random variable X is defined as the mean value of X^n.

1. Central moment: a moment taken about the mean, i.e., of (random variable − mean).
2. The second central moment (n = 2) is known as the variance of the random variable.
3. Variance provides an indication of the randomness of the random variable.
4. Variance = mean square value − square of the mean.
5. Standard deviation = square root of the variance.
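These definitions can be checked on the 3-coin example above, where X is the number of heads over the 8 equally likely outcomes. A small Python sketch (exact arithmetic via `fractions`, for illustration):

```python
from collections import Counter
from fractions import Fraction

# X = number of heads when tossing 3 coins (the example above).
values = [3, 2, 2, 2, 1, 1, 1, 0]          # X over the 8 equally likely outcomes
pmf = {x: Fraction(c, len(values)) for x, c in Counter(values).items()}

mean        = sum(x * p for x, p in pmf.items())        # E[X]
mean_square = sum(x * x * p for x, p in pmf.items())    # E[X^2], the 2nd moment
variance    = mean_square - mean ** 2                   # E[X^2] - (E[X])^2
std_dev     = float(variance) ** 0.5                    # square root of variance

print(mean, mean_square, variance)   # 3/2 3 3/4
```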
Digital Communication System

[Block diagram: information input → Transmitter → Channel → Receiver → output signal]

Digital communication is the transfer of digital information/messages using digital signals over a wired/wireless medium.
Types of communication systems
1. Analog communication: the information varies continuously in amplitude, frequency and phase.
2. Digital communication system: the information is discrete in nature, i.e., in digital format in terms of 1s and 0s; a binary message signal is transmitted in digital format.
Are a digital signal and a discrete signal the same?
CT to DT
Modulation is divided into continuous-wave (CW) modulation and pulse modulation:
- Analog CW modulation, e.g. AM, FM, PM
- Digital CW modulation, e.g. ASK, FSK, PSK
- Analog pulse modulation, e.g. PAM, PWM, PPM
- Digital pulse modulation, e.g. PCM, DM, DPCM
Need for digital communication

1. Digital signals can be easily regenerated.
2. Digital signals are less affected by noise.
3. It is possible to use regenerative repeaters, which increase the range of communication.
4. Digital signals are less affected by distortion and interference.
5. Digital signals have a lower error rate.
6. Digital circuits and systems are more reliable and flexible.
7. It is easy for digital signals to undergo signal processing.
Advantages and disadvantages of digital communication
Advantages
1. Due to the digital nature of the transmitted signal, interference from noise is less, so there are fewer errors in the system.
2. Better noise immunity, because repeaters are used, which also increase the range of communication.
3. Due to channel coding techniques, it is possible to detect errors.
4. Multiplexing is possible.
5. Secure data transmission can be achieved.
6. It is possible to use digital signal processing techniques, image processing and data compression because of the digital nature of the signal.

Disadvantages
1. The bit rates of digital systems are high, so a large channel bandwidth is required.
2. Digital communication systems require synchronisation in the case of synchronous modulation.
Channels for digital communication

The type of modulation and coding depends on the channel characteristics. The important characteristics of a channel are:
1. power required to achieve the desired signal-to-noise ratio
2. bandwidth of the channel
3. amplitude and phase response of the channel
4. type of channel: linear or nonlinear
5. effect of external interference on the channel

Classification of channels:
1. telephone channels 2. coaxial cable 3. optical fibre 4. microwave link 5. satellite channel
Comparison between analog modulation and digital modulation
1. In an analog modulation system the transmitted signal is analog in nature, whereas in digital modulation the transmitted signal is in digital format, i.e., a train of digital pulses.
2. In analog modulation, amplitude, frequency or phase variations of the transmitted signal represent the information, whereas in digital modulation the amplitude, width and position of the transmitted pulses are constant and the message is transmitted in the form of code words.
3. Noise immunity is poor in analog modulation, but excellent in digital modulation.
4. It is not possible to remove the noise in analog modulation, because the analog signal is a function of time and varies continuously, whereas it is possible to remove the noise in digital modulation, as the signal is digital in nature, i.e., in the form of 1s and 0s.
5. Repeaters are not used in analog systems, but repeaters are used in digital systems.
Performance Parameters of Digital Communication Systems
Bit rate (R): the number of bits per second.
Baud rate (r): the number of symbols per second, i.e., the number of signal elements per second.
Relation between bit rate and baud rate:
r = R/n
where n is the number of bits per symbol.
Note: the bit rate is never less than the baud rate, R ≥ r.
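The relation r = R/n in the note above can be illustrated with hypothetical figures (the 9600 bit/s rate and 4 bits/symbol below are illustrative choices, e.g. a 16-level modulation, not values from the syllabus):

```python
# Hypothetical figures: a 9600 bit/s stream carried on a 16-level
# (4 bits/symbol) modulation, so r = R / n.
R = 9600          # bit rate, bits per second
n = 4             # bits per symbol (e.g. 16-ary modulation)
r = R / n         # baud rate, symbols per second

print(r)          # 2400.0
assert R >= r     # the bit rate is never below the baud rate
```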
Block diagram and sub-system description of a digital communication system
Information: information is what a communication system carries from source to destination.
Example: you are planning a tour to a city located in an area where rainfall is rare, and you want to know the weather forecast. The weather bureau may give you one of the following pieces of information:
1. It will be a sunny day today (least information).
2. There will be scattered rain (some information).
3. There will be a cyclone (maximum information).

Classification of information sources:
- memory source (the current symbol depends on previous symbols)
- memoryless source (the symbol produced is independent of previous symbols)

Discrete memoryless source (DMS): consists of a discrete set of letters or alphabet symbols. In general, any message emitted by the source consists of a string or sequence of symbols. A discrete memoryless source can be characterized by the list of symbols, the probabilities of the symbols, and the rate of generation of the symbols by the source.
Probability: in communication systems there are basically two types of signals:
1. deterministic signals (the signal can be described mathematically)
2. random signals (the signal's behaviour or value cannot be predicted in advance)
Random signals are described in terms of their statistical properties. It is possible to analyse a random signal statistically with the help of probability theory, using the following notions:
1. Experiment: the process which is conducted to get some output or result.
E.g. tossing a coin is an experiment which gives the output head or tail, i.e., there are two possible outputs:
chance of getting a head: 50%
chance of getting a tail: 50%
Both have equal probability and are called "equally likely outcomes".
2. Sample space: the set of all possible outcomes of an experiment is called the sample space (S).
E.g. tossing a coin: S = {H, T}; throwing a die: S = {1, 2, 3, 4, 5, 6}.
3. Event: a subset of the sample space is called an event.
E.g. in the die-throwing experiment, the event of all even numbers is {2, 4, 6}.
Definition of probability: probability is the study of random experiments. In any random experiment there is always uncertainty about whether an event will occur or not. As a measure of the probability of occurrence of an event, a number between 0 and 1 is assigned. If the event is certain, its probability is 100%; if it cannot occur, its probability is 0%.

Mathematical expression for the probability of event A:
P(A) = (number of favourable outcomes) / (total number of outcomes)
⚫ Entropy: H = Σ pk log2(1/pk) bits/message

⚫ Information rate: R = r × H
where r = number of messages/sec and H = information/message

⚫ Average codeword length: N = Σ pk nk
where nk is the number of bits in the codeword of the k-th symbol

⚫ Code efficiency: η = H/N

⚫ Variance: σ² = Σ pk (nk − N)²
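The formulas above can be bundled into one small Python helper. The three-symbol source used to exercise it below is an illustrative choice (dyadic probabilities coded with lengths 1, 2, 2, which happens to give 100% efficiency), not an example from the slides.

```python
import math

def source_stats(probs, lengths, rate=None):
    """Entropy H, average codeword length N, efficiency eta and length variance
    for a source with symbol probabilities `probs` coded with `lengths` bits;
    `rate` (messages/sec) optionally gives the information rate R = r*H."""
    H = sum(p * math.log2(1 / p) for p in probs)             # bits/message
    N = sum(p * n for p, n in zip(probs, lengths))           # bits/message
    eta = H / N                                              # code efficiency
    var = sum(p * (n - N) ** 2 for p, n in zip(probs, lengths))
    R = rate * H if rate is not None else None               # information rate
    return H, N, eta, var, R

# Illustrative source: dyadic probabilities with codeword lengths 1, 2, 2.
H, N, eta, var, R = source_stats([0.5, 0.25, 0.25], [1, 2, 2], rate=1000)
print(H, N, eta)   # 1.5 1.5 1.0
```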
Topic
AWGN channel and Shannon-Hartley channel capacity theorem
AWGN Channel
AWGN is often used as a channel model in which the only impairment to communication is a linear addition of wideband or white noise with a constant spectral density (expressed as watts per hertz of bandwidth) and a Gaussian distribution of amplitude.

Why is the AWGN channel used in communication?

The term additive white Gaussian noise (AWGN) originates from the following: [Additive] the noise is additive, i.e., the received signal equals the transmitted signal plus noise, r(t) = s(t) + n(t). This gives the most widely used signal model in communication systems.
Block diagram of the AWGN channel model
[Block diagram: transmitted signal s(t) plus noise n(t) gives received signal r(t)]
Calculate the maximum capacity of a Gaussian channel with a bandwidth of 3 kHz and an SNR of 30 dB. If the bandwidth is doubled, calculate the new channel capacity.

Solution:
To calculate the maximum capacity of a Gaussian channel with a bandwidth of 3 kHz and a signal-to-noise ratio (SNR) of 30 dB, we use Shannon's channel capacity formula:

C = B * log2(1 + SNR)

Where:
C is the channel capacity in bits per second (bps),
B is the bandwidth in hertz (Hz),
SNR is the signal-to-noise ratio (as a linear ratio, not in dB).

First convert the SNR from dB to a linear value: 30 dB → 10^(30/10) = 1,000.

C_initial = 3,000 Hz * log2(1 + 1,000)
C_initial = 3,000 Hz * log2(1,001)
C_initial ≈ 3,000 Hz * 9.9672
C_initial ≈ 29,901.7 bps

If the bandwidth is doubled, the new channel capacity follows from the same formula with the doubled bandwidth:

C_new = 6,000 Hz * log2(1,001)
C_new ≈ 6,000 Hz * 9.9672
C_new ≈ 59,803.4 bps

Therefore, if the bandwidth is doubled to 6 kHz, the new channel capacity is approximately 59,803.4 bps (about 59.8 kbps): doubling the bandwidth at fixed SNR doubles the capacity.
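The calculation is easy to verify numerically; a Python sketch of the Shannon-Hartley formula (the function name is an illustrative choice):

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B log2(1 + SNR), with SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)          # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

c1 = shannon_capacity(3000, 30)   # ~29,901.7 bps
c2 = shannon_capacity(6000, 30)   # ~59,803.4 bps: double B -> double C
print(round(c1, 1), round(c2, 1))
```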
Automatic Repeat ReQuest (ARQ)
Automatic Repeat ReQuest (ARQ) is a group of error-control protocols for transmission of data over noisy or unreliable communication networks. These protocols reside in the Data Link Layer and in the Transport Layer of the OSI (Open Systems Interconnection) reference model. They are named so because they provide for automatic retransmission of frames that are corrupted or lost during transmission. ARQ is also called Positive Acknowledgement with Retransmission (PAR).
ARQs are used to provide reliable transmission over unreliable lower-layer services. They are often used in Global System for Mobile (GSM) communication.
Working Principle
In these protocols, the receiver sends an acknowledgement message back to the sender
if it receives a frame correctly. If the sender does not receive the acknowledgement of a
transmitted frame before a specified period of time, i.e. a timeout occurs, the sender
understands that the frame has been corrupted or lost during transit. So, the sender
retransmits the frame. This process is repeated until the correct frame is transmitted.
Types of ARQ Protocols
There are three ARQ protocols in the data link layer.
● Stop-and-Wait ARQ: provides unidirectional data transmission with flow control and error control mechanisms, appropriate for noisy channels. The sender keeps a copy of the sent frame. It then waits a finite time to receive a positive acknowledgement from the receiver. If the timer expires, the frame is retransmitted. If a positive acknowledgement is received, the next frame is sent.
● Go-Back-N ARQ: provides for sending multiple frames before receiving the acknowledgement for the first frame. It uses the concept of a sliding window, and so is also called a sliding window protocol. The frames are sequentially numbered and a finite number of frames are sent. If the acknowledgement of a frame is not received within the time period, all frames starting from that frame are retransmitted.
● Selective Repeat ARQ: also provides for sending multiple frames before receiving the acknowledgement for the first frame. However, here only the erroneous or lost frames are retransmitted, while the good frames are received and buffered.
Types of sources and their models
1. Analog source, e.g. radio and TV broadcasting systems generate analog signals such as audio and video.
2. Discrete source, e.g. a digital computer gives discrete output.
Discrete sources are classified into discrete memoryless sources and discrete stationary sources:
i) Discrete memoryless source (DMS): if the current output letter produced by the source is independent of all past and future outputs, the source is called a discrete memoryless source.
ii) Stationary source: if the output of a discrete source is statistically dependent on past or future outputs, it is called a discrete stationary source.
Multipath Propagation
Multipath propagation causes intersymbol interference
when a wireless signal being transmitted reaches a receiver
through different paths. This commonly occurs when
reflected signals bounce off of surfaces, when the wireless
signal refracts through obstacles, and because of
atmospheric conditions. These paths have different lengths
before reaching the receiver, thus creating different versions
that reach at different time intervals. The delay in symbol
transmission will interfere with correct symbol detection.
The amplitude and/or phase of the signal can also be distorted when the different paths combine at the receiver, causing additional interference.
Techniques to Counter Intersymbol Interference
⚫ Intersymbol interference can be countered in telecommunications and data storage.
⚫ Systems can be designed with impulse responses short enough to reduce the possibility of signal energy spilling over into neighbouring symbols within the transmission. The energy is then confined within each symbol, free of contributions from other signals in the mix.
⚫ Shaping transmissions with raised-cosine pulses can help prevent intersymbol interference: such signals have zero intersymbol interference at the symbol sampling instants.
⚫ Guard periods can be put in place to separate symbols. This prevents symbols from being received out of order or cluttered, preventing intersymbol interference.
⚫ An equalizer at the receiver can apply an inverse filter. This removes the channel's effects and improves detection of signals sent over bandlimited channels.
⚫ The Viterbi algorithm can be applied to determine the most likely sequence of transmitted symbols from the observed events.
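The zero-ISI property of the raised-cosine pulse mentioned above can be verified numerically: h(0) = 1 and h(kT) = 0 at every other symbol instant, so adjacent pulses do not disturb the samples taken at t = kT. The roll-off factor β = 0.35 below is an illustrative choice (it keeps the pulse's removable singularity at t = T/(2β) away from the symbol instants).

```python
import math

def raised_cosine(t, T=1.0, beta=0.35):
    """Raised-cosine impulse response
    h(t) = sinc(t/T) * cos(pi*beta*t/T) / (1 - (2*beta*t/T)^2)."""
    x = t / T
    sinc = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    denom = 1 - (2 * beta * x) ** 2
    return sinc * math.cos(math.pi * beta * x) / denom

# Zero-ISI check: unity at t = 0, (numerically) zero at the other symbol instants.
print(raised_cosine(0))                                    # 1.0
print([abs(raised_cosine(k)) < 1e-9 for k in (1, 2, 3)])   # [True, True, True]
```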
Source Coding Algorithms
There are two source coding algorithms which use variable-length coding techniques:

1. Huffman coding algorithm

2. Shannon-Fano coding algorithm
Huffman coding steps
1. List the source symbols/messages in order of decreasing probability.
2. The two source symbols of lowest probability are assigned the digits 0 and 1.
3. These two symbols are combined into a new message.
4. The probability of the new message is equal to the sum of the probabilities of the two original symbols.
5. The new message is placed in the list according to its probability value.
6. Repeat this procedure until only two symbols are left, to which 0 and 1 are assigned.
10-mark numericals based on source coding algorithms ask for entropy, average codeword length, code efficiency and variance.

Example 1: consider the 5 source symbols of a DMS with the probabilities given in the table. Follow the Huffman algorithm to find the codeword of each message; also find the average codeword length, entropy (average information per message) and code efficiency.

Symbol m1 m2 m3 m4 m5
Probability 0.4 0.2 0.2 0.1 0.1
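Example 1 can be checked with a short Python sketch of the Huffman steps (a heap-based build; the deterministic tie-breaking via a running counter is an implementation choice, and different tie-breaking can give different, equally optimal codes):

```python
import heapq
import math
from itertools import count

def huffman(prob_by_symbol):
    """Build a Huffman code; returns {symbol: codeword}."""
    tick = count()                                       # breaks probability ties
    heap = [(p, next(tick), {s: ""}) for s, p in prob_by_symbol.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)                  # two least probable entries
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}     # prepend a bit to each side
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tick), merged))
    return heap[0][2]

probs = {"m1": 0.4, "m2": 0.2, "m3": 0.2, "m4": 0.1, "m5": 0.1}
codes = huffman(probs)
lengths = {s: len(w) for s, w in codes.items()}

L = sum(probs[s] * lengths[s] for s in probs)                # average codeword length
H = sum(p * math.log2(1 / p) for p in probs.values())        # entropy
var = sum(probs[s] * (lengths[s] - L) ** 2 for s in probs)   # length variance
print(round(L, 2), round(H, 4), round(var, 2))   # 2.2 2.1219 0.16
```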
https://www.youtube.com/watch?v=_lswtelpWr8
SOURCE CODING ALGORITHM
Solved 10-mark numerical based on Huffman coding: calculating entropy, average codeword length, code efficiency and variance.
Average codeword length: L = 2.2 bits/message

H ≈ 2.1219 bits/message, efficiency = H/L ≈ 96.45%
Variance = 0.16
Example: symbols x1, x2, x3 with probabilities 0.45, 0.35, 0.2 respectively; calculate the entropy (H) and average codeword length (N).
Example
Consider the same memoryless source with the same messages and probabilities. Find the Huffman code by moving the probability of the combined message as low as possible, tracking backward through the various steps. Find the codewords of this Huffman code.
Example (June 2013): A DMS has the following symbols with probabilities as shown below.

Symbol  Probability  Codeword  No. of bits in codeword
S0      0.25         10        2
S1      0.25         11        2
S2      0.125        001       3
S3      0.125        010       3
S4      0.125        011       3
S5      0.0625       0000      4
S6      0.0625       0001      4
Source coding algorithm

⚫ Huffman code
⚫ Shannon-Fano code


Procedure for the Shannon-Fano code

1. List the source symbols/messages in order of descending probability.
2. Partition the set of symbols into two sets that are as close to equally probable as possible.
3. Assign 0 to each message in the upper set and 1 to each message in the lower set.
4. Continue this process, each time partitioning the sets with as nearly equal probabilities as possible, until it is not possible to partition further.
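The four steps above can be sketched as a recursive Python function; this is an illustrative implementation (the split is chosen by minimising the probability difference between the two halves, one reasonable reading of step 2), exercised on the eight-message example that follows.

```python
def shannon_fano(symbols):
    """Shannon-Fano coding: `symbols` is a list of (name, probability) pairs
    sorted in descending probability; returns {name: codeword}."""
    codes = {name: "" for name, _ in symbols}

    def split(group):
        if len(group) < 2:
            return
        total = sum(p for _, p in group)
        # Find the split making the two parts as equally probable as possible.
        best_i, best_diff, running = 1, float("inf"), 0.0
        for i in range(1, len(group)):
            running += group[i - 1][1]
            diff = abs(running - (total - running))
            if diff < best_diff:
                best_i, best_diff = i, diff
        upper, lower = group[:best_i], group[best_i:]
        for name, _ in upper:
            codes[name] += "0"          # 0 for the upper (more probable) set
        for name, _ in lower:
            codes[name] += "1"          # 1 for the lower set
        split(upper)
        split(lower)

    split(symbols)
    return codes

msgs = [("M1", 1/2), ("M2", 1/8), ("M3", 1/8), ("M4", 1/16),
        ("M5", 1/16), ("M6", 1/16), ("M7", 1/32), ("M8", 1/32)]
codes = shannon_fano(msgs)
print(codes["M1"], codes["M4"], codes["M8"])   # 0 1100 11111
```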
EXAMPLE
Message     M1   M2   M3   M4   M5   M6   M7   M8
Probability 1/2  1/8  1/8  1/16 1/16 1/16 1/32 1/32

Message  Probability  I  II  III  IV  V   Codeword  Bits per codeword
M1       1/2          0                   0         1
M2       1/8          1  0   0            100       3
M3       1/8          1  0   1            101       3
M4       1/16         1  1   0   0        1100      4
M5       1/16         1  1   0   1        1101      4
M6       1/16         1  1   1   0        1110      4
M7       1/32         1  1   1   1   0    11110     5
M8       1/32         1  1   1   1   1    11111     5
EXAMPLE 2
Message:     m1   m2    m3    m4    m5
Probability: 0.4  0.19  0.16  0.15  0.1

Choosing the first partition:
Case 1: set 1 = {m1} = 0.4; set 2 = {m2...m5} = 0.19+0.16+0.15+0.1 = 0.6; difference = 0.6 - 0.4 = 0.2
Case 2: set 1 = {m1, m2} = 0.59; set 2 = {m3, m4, m5} = 0.41; difference = 0.59 - 0.41 = 0.18
Case 2 gives the smaller difference, so the first partition is {m1, m2} | {m3, m4, m5}.
Message  Probability  I  II  III  Codeword  Bits per codeword
m1       0.4          0  0        00        2
m2       0.19         0  1        01        2
m3       0.16         1  0        10        2
m4       0.15         1  1   0    110       3
m5       0.1          1  1   1    111       3
https://www.youtube.com/watch?v=we6qxE0rQMc
Example: A discrete memoryless source has five symbols X1, X2, X3, X4 and X5 with probabilities
P(X1) = 0.4, P(X2) = 0.19, P(X3) = 0.16, P(X4) = 0.14, P(X5) = 0.11

Message  Probability  I  II  III  Codeword  Bits per codeword
X1       0.4          0  0        00        2
X2       0.19         0  1        01        2
X3       0.16         1  0        10        2
X4       0.14         1  1   0    110       3
X5       0.11         1  1   1    111       3
Average codeword length L = 2.25 bits per message
Entropy H ≈ 2.15 bits of information per message
Code efficiency η = H/L ≈ 95.6 percent (using H rounded to 2.15; about 95.75 percent with the exact entropy)
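The figures quoted above can be checked numerically; a short sketch using the probabilities and codeword lengths from the table:

```python
from math import log2

probs   = [0.4, 0.19, 0.16, 0.14, 0.11]
lengths = [2, 2, 2, 3, 3]          # codeword lengths from the table

L   = sum(p * n for p, n in zip(probs, lengths))   # average codeword length
H   = -sum(p * log2(p) for p in probs)             # source entropy
eta = H / L                                        # code efficiency

print(f"L = {L:.2f} bits/message")        # 2.25
print(f"H = {H:.4f} bits/message")        # 2.1544
print(f"efficiency = {eta:.2%}")          # 95.75% (≈95.6% if H is first rounded to 2.15)
```

The slide's 95.6 percent comes from rounding the entropy to 2.15 before dividing; keeping full precision gives about 95.75 percent.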
EXAMPLE
Message:     m1   m2    m3    m4    m5    m6
Probability: 0.3  0.25  0.15  0.12  0.08  0.1

Determine the average codeword length, entropy, code efficiency and minimum variance using both the Huffman and Shannon-Fano coding techniques; compare and comment on the results.
Message  Probability  I  II  III  Codeword  Bits per codeword
m1       0.3          0  0        00        2
m2       0.25         0  1        01        2
m3       0.15         1  0   0    100       3
m4       0.12         1  0   1    101       3
m6       0.1          1  1   0    110       3
m5       0.08         1  1   1    111       3
Choosing the first partition:
Case 1: upper set = {m1} = 0.3; lower set = 0.7; difference = 0.7 - 0.3 = 0.4
Case 2: upper set = {m1, m2} = 0.55; lower set = 0.45; difference = 0.55 - 0.45 = 0.1
Case 2 gives the smaller difference, so the first partition is {m1, m2} | {m3, m4, m6, m5}.
Error control coding
When a digital signal is transmitted between two systems, such as computers, the signal gets distorted by the addition of noise. The noise introduces errors, which may change a bit from 1 to 0 or vice versa.
Such errors can become a serious problem in digital systems, so it is necessary to detect and correct them.
Need for error control coding
In digital communication, errors are introduced during the transmission of data, and they affect reliability. To improve reliability, the bit energy Eb (signal power) should be increased and the noise spectral density N0 decreased, i.e. the ratio Eb/N0 should be maximised. But there is a practical limit to increasing Eb/N0, so some form of coding must be used to improve the quality of the transmitted signal.
Need for channel coding
❖ Channel coding is done to minimise the effect of channel noise. This process reduces the number of errors in the received data and makes the system more reliable.

⚫ The channel encoder maps the incoming digital signal into a channel input by introducing some redundant bits along with the message bits. These extra bits are called parity bits.
(The parity bits are redundant in the sense that they do not carry any information.)
The message bits together with the parity bits form a codeword:

Message bits | Parity bits

= Codeword
The channel decoder at the receiver maps the channel output back into a digital signal so as to minimise the effect of channel noise. The channel encoder and decoder together provide reliable communication over a noisy channel by introducing redundancy in a prescribed form at the transmitter. The output of the encoder is a series of codewords; the decoder converts these codewords back into the digital message.

In source coding redundancy is removed, whereas in channel coding redundancy is introduced in a controlled manner. It is possible to use only source coding, only channel coding, or both. These techniques improve system performance but also increase complexity.
Error detection and correction
Error control techniques can be divided into two types:
1. error detection techniques
2. error correction techniques
Errors introduced in the data bits are of two types: 1. content errors 2. flow-integrity errors
e.g. a content error means a particular bit 0 is transmitted and 1 is received, or vice versa
e.g. a flow-integrity error means a missing block of data: a data block may be lost in a network if it is delivered to the wrong destination

Depending on the number of bits in error, there are two types of error:
1. single-bit errors
2. burst errors
Disadvantages of coding
1. it increases the transmission bandwidth requirement
2. the system becomes more complex
Basic definitions
1. Codeword- an n-bit encoded block of bits containing message bits and parity bits.

Message bits (k) | Parity bits (n-k=q)

CODEWORD (n)

2. Code rate- the ratio of the number of message bits (k) to the total number of bits (n) in the codeword.
Code rate (r) = k/n
3. Hamming weight of a codeword- the number of non-zero elements in the codeword; equivalently, its distance from the all-zero code vector. e.g. for 101100 the weight is 3, i.e. the number of 1s.
4. Code efficiency- the ratio of message bits to the number of transmitted bits per block.
Code efficiency = code rate = k/n
5. Hamming distance- for two code vectors of the same length, the Hamming distance is the number of locations in which their respective elements differ.
e.g. codeword 1: 1 1 1 1 0 1 0 0
     codeword 2: 1 0 0 1 1 0 1 0   Hamming distance = 5
6. Minimum distance (dmin)- for a linear block code, dmin is the smallest Hamming distance between any pair of code vectors in the code. Equivalently, it is the smallest Hamming weight of the (non-zero) difference between any pair of code vectors.
e.g. 110001 has weight 3
     111100 has weight 4, so dmin = 3
7. Code vector- an n-bit codeword can be visualised in an n-dimensional space as a vector whose elements are the n bits of the codeword.
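The weight and distance definitions above map directly to code; a small sketch with codewords written as plain bit-strings (examples reuse the values from definitions 3 and 5):

```python
def hamming_weight(cw):
    """Number of non-zero elements in a codeword."""
    return cw.count("1")

def hamming_distance(cw1, cw2):
    """Number of positions in which two equal-length codewords differ."""
    assert len(cw1) == len(cw2)
    return sum(a != b for a, b in zip(cw1, cw2))

print(hamming_weight("101100"))                   # 3
print(hamming_distance("11110100", "10011010"))   # 5
```

Note that hamming_distance(c1, c2) equals the Hamming weight of the bitwise XOR of c1 and c2, which is why dmin for a linear code reduces to the minimum non-zero codeword weight.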
Steps for linear block codes
X = (m1 m2 m3 m4 ... mk, P1 P2 P3 ... Pq)
k = number of message bits
q = number of parity bits
X = codeword
Linear block code
Step 1: X = M . G
where
X = codeword of size 1×n (n bits)
M = message bits, size 1×k (k bits)
G = generator matrix of size k×n

The check (parity) bits play the role of error detection and correction; the job of the linear block code is to generate those parity bits.
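The encoding step X = M·G is an ordinary matrix product taken modulo 2. A sketch, using for illustration a systematic generator matrix G = [I4 | P] consistent with the (7,4) example worked later in this section:

```python
def encode(m, G):
    """Linear block encoding: X = M . G over GF(2).
    m: list of k message bits; G: k x n generator matrix (lists of 0/1)."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

# Illustrative (7,4) systematic generator matrix G = [I4 | P]
G = [[1,0,0,0, 1,1,1],
     [0,1,0,0, 1,1,0],
     [0,0,1,0, 1,0,1],
     [0,0,0,1, 0,1,1]]

print(encode([1,0,0,1], G))   # [1, 0, 0, 1, 1, 0, 0] -> codeword 1001100
```

Because G is systematic, the first k bits of X are the message itself and the last q bits are the parity bits.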
Linear block code (7,4) encoder
[Figure: input bit sequence m4 m3 m2 m1 fed to a register; modulo-2 adders form the check bits C3, C2, C1]
Example: Linear block code
(7,4) = (n,k)
The parity check matrix of a particular (7,4) linear block code is given by

H =

1. Find the generator matrix G.
2. List all the code vectors.
3. What is the minimum distance between code vectors?
4. How many errors can be detected?
5. How many errors can be corrected?
6. Draw the encoder for the above code.
7. Verify whether the above code is a Hamming code.
8. Find the syndrome if Y = 010101.
Here H = parity check matrix (given in this numerical)
G = generator matrix
n = 7 = number of bits in the codeword, k = 4 = number of message bits
so n - k = 7 - 4 = 3 = number of check (parity) bits, q = 3
Condition to check whether the given code is a Hamming code:
m1 m2 m3 m4  C1 C2 C3  Codeword  Weight
0  0  0  0   0  0  0   0000000   0
0  0  0  1   0  1  1   0001011   3
0  0  1  0   1  0  1   0010101   3
0  0  1  1   1  1  0   0011110   4
0  1  0  0   1  1  0   0100110   3
0  1  0  1   1  0  1   0101101   4
0  1  1  0   0  1  1   0110011   4
0  1  1  1   0  0  0   0111000   3
1  0  0  0   1  1  1   1000111   4
1  0  0  1   1  0  0   1001100   3
1  0  1  0   0  1  0   1010010   3
1  0  1  1   0  0  1   1011001   4
1  1  0  0   0  0  1   1100001   3
1  1  0  1   0  1  0   1101010   4
1  1  1  0   1  0  0   1110100   4
1  1  1  1   1  1  1   1111111   7
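For a linear code, dmin equals the minimum Hamming weight over all non-zero codewords, so the table above can be checked mechanically (the sixteen codewords of this (7,4) example):

```python
codewords = [
    "0000000", "0001011", "0010101", "0011110",
    "0100110", "0101101", "0110011", "0111000",
    "1000111", "1001100", "1010010", "1011001",
    "1100001", "1101010", "1110100", "1111111",
]
# minimum weight over the non-zero codewords
dmin = min(cw.count("1") for cw in codewords if "1" in cw)
print("dmin =", dmin)   # dmin = 3: detects dmin-1 = 2 errors, corrects (dmin-1)//2 = 1
```

With dmin = 3 this code can detect up to two errors and correct one, which is exactly the single-error-correcting behaviour expected of a (7,4) Hamming code.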
Example: LBC (6,3)
For a (6,3) code the generator matrix G is given by

1. Find all the codewords.
2. Realise an encoder for this code.
3. Verify that this code is a single-error-correcting code.
4. If the received codeword is Y = 1100011, find the syndrome.
G = generator matrix (transmitter side)
H = parity check matrix (receiver side)

n = number of bits in the codeword
k = number of message bits
q = n - k = number of check bits
Y = 1100011; we have the transpose of H
Example when G or H is not given
A linear block code has check bits C4, C5, C6 given by
C4 = d1 + d2 + d3
C5 = d1 + d2
C6 = d1 + d3

1. Construct the generator matrix.
2. Construct the code generated by this matrix.
3. Determine the error-correcting capability.
4. Prepare the decoding table.
5. Decode the received codewords 101100 and 000110.
C4, C5, C6 = check bits, q = 3
d1, d2, d3 = message bits, k = 3
n = k + q = 6
d1 d2 d3  C4 C5 C6  Codeword  Weight
0  0  0   0  0  0   000000    0
0  0  1   1  0  1   001101    3
0  1  0   1  1  0   010110    3
0  1  1   0  1  1   011011    4
1  0  0   1  1  1   100111    4
1  0  1   0  1  0   101010    3
1  1  0   0  0  1   110001    3
1  1  1   1  0  0   111100    4
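The check equations C4 = d1+d2+d3, C5 = d1+d2, C6 = d1+d3 generate the table above directly; a sketch using XOR for the modulo-2 additions:

```python
def encode63(d1, d2, d3):
    """(6,3) systematic codeword from the check equations (XOR = mod-2 add)."""
    c4 = d1 ^ d2 ^ d3
    c5 = d1 ^ d2
    c6 = d1 ^ d3
    return [d1, d2, d3, c4, c5, c6]

# enumerate all 8 messages in the same order as the table
for bits in range(8):
    d = [(bits >> i) & 1 for i in (2, 1, 0)]   # d1 d2 d3
    print("".join(map(str, encode63(*d))))     # prints the eight codewords above
```

Every codeword in the table is reproduced, and the minimum non-zero weight of 3 confirms the code is single-error-correcting.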
Syndrome table for single-bit errors

Error vector     Bit in error  Syndrome vector
0 0 0 0 0 0 0    -             0 0 0
1 0 0 0 0 0 0    first         1 1 1
0 1 0 0 0 0 0    second        1 1 0
0 0 1 0 0 0 0    third         1 0 1
0 0 0 1 0 0 0    fourth        0 1 1
0 0 0 0 1 0 0    fifth         1 0 0
0 0 0 0 0 1 0    sixth         0 1 0
0 0 0 0 0 0 1    seventh       0 1 1
The parity check matrix of a (7,4) linear block code is
H = [1 1 1 0 1 0 0; 1 1 0 1 0 1 1; 1 0 1 1 0 0 1]
Calculate the syndrome vector for single-bit errors.
[S] = [E] [HT]
[S] = [1 0 0 0 0 0 0][HT] = [1 1 1]
[S] = [0 1 0 0 0 0 0][HT] = [1 1 0]
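The computation S = E·HT can be sketched as follows, using the H stated in the problem; for a single-bit error in position i, S is simply column i of H:

```python
# parity check matrix H from the problem statement (rows of H)
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 0, 1],
]

def syndrome(e, H):
    """S = E . H^T over GF(2); e is the length-n error (or received) vector."""
    return [sum(e[j] * row[j] for j in range(len(e))) % 2 for row in H]

print(syndrome([1, 0, 0, 0, 0, 0, 0], H))   # [1, 1, 1]
print(syndrome([0, 1, 0, 0, 0, 0, 0], H))   # [1, 1, 0]
```

The same function applied to a received vector Y gives its syndrome; a zero syndrome means Y is a valid codeword.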
Equalizer and eye pattern
What is inter-symbol interference (ISI) and how is it measured?
When a signal passes through a communication channel, distortion is introduced. To compensate for linear distortion we can use a network called an equalizer, connected in cascade with the channel.

[Figure: channel input → Channel Hc(f) → Equalizer Heq(f) → ISI-free, delayed version of the channel input]

The equalizer is designed in such a way that, within the operating frequency band, the overall amplitude and phase response of the cascaded system is approximately equal to the amplitude and phase response required for distortionless transmission.

[Figure: channel introducing amplitude and phase changes]
The following information can be extracted from the eye pattern:

1. Width of the eye opening- the time interval over which the received wave can be sampled without error due to ISI.
2. Height of the eye opening- the noise margin.
3. The best instant of sampling is when the eye opening is maximum.
4. The sensitivity of the system to timing error is determined by observing the rate at which the eye closes as the sampling instant is varied.
5. When the effect of ISI is severe, the eye is completely closed and it is impossible to avoid errors due to the combined effect of ISI and noise in the system.
Nyquist criterion for zero ISI
Sampling: fs > 2fm (oversampling); fs = 2fm (Nyquist rate); fs < 2fm (undersampling, causes aliasing)
No-ISI condition:
BW >= Rb/2, where Rb = bit (symbol) rate in symbols/sec
p(t) = 1 at t = 0, and p(t) = 0 at the other sampling instants t = nTb
For error-free transmission the channel capacity must satisfy C >= R.
Cyclic code
A cyclic codeword can be represented by a polynomial; taking the leftmost bit as the highest-degree coefficient,

110010 = D^5 + D^4 + D
Cyclic code
Example: Consider the (7,4) cyclic code generated by the generator polynomial g(D) = 1 + D^2 + D^3. Design an encoder using shift registers, and use it to find the codeword for the message 1001 and for the message 1010. Suppose the received vector is r = 0010110; find the syndrome using a syndrome circuit, and find the generator matrix for the above cyclic code.
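Systematic cyclic encoding computes the parity bits as the remainder of D^(n−k)·m(D) divided by g(D) — the same long division the shift-register encoder performs. A bit-level sketch for g(D) = 1 + D² + D³ (coefficient lists are written lowest degree first; other bit-order conventions are equally common):

```python
def cyclic_encode(msg, gen, n):
    """Systematic cyclic encoding: parity = remainder of D^(n-k)*m(D) / g(D).
    msg, gen: coefficient lists, lowest degree first (bits 0/1)."""
    k = len(msg)
    q = n - k
    # dividend = message polynomial shifted up by q positions
    rem = [0] * q + list(msg)
    for i in range(n - 1, q - 1, -1):      # long division over GF(2)
        if rem[i]:
            for j, g in enumerate(gen):
                rem[i - q + j] ^= g        # subtract (XOR) aligned g(D)
    return rem[:q] + list(msg)             # parity bits, then message bits

g = [1, 0, 1, 1]                           # g(D) = 1 + D^2 + D^3
print(cyclic_encode([1, 0, 0, 1], g, 7))   # [1, 1, 0, 1, 0, 0, 1]
```

The resulting polynomial 1 + D + D³ + D⁶ divides exactly by g(D), which is the defining property of a valid cyclic codeword; the bit pattern reported for the message 1001 will differ between texts depending on the bit-order convention used.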
Numerical on the syndrome calculator
Example 2: Consider a (7,4) cyclic code. Design a syndrome calculator for the (7,4) Hamming code generated by the generator polynomial G(D) = 1 + D + D^3, if the transmitted and received codewords are given by
X = 0111001 and Y = 0110001
(shift-register tap coefficients: g1 = 1, g2 = 0)
M-ary signalling
M = number of symbols
N = number of bits combined per symbol to reduce the transmission bandwidth

M = 2^N
The required bandwidth is reduced by the factor N compared with binary transmission (symbol rate = bit rate / N).
QPSK- Quadrature Phase Shift Keying
[Figure: QPSK constellation diagram]
[Figure: QAM constellation diagram]
MSK Modulation / Shaped QPSK
Minimum shift keying
1. The MSK bandwidth requirement is less than that of QPSK.
2. The spectrum of MSK has a much wider main lobe than that of QPSK: typically it is 1.5 times wider than the main lobe of QPSK. The sidelobes, however, are much smaller than those of QPSK, so a bandpass filter is not required to remove them.
3. The MSK waveform has an important property called phase continuity: there are no abrupt changes in phase, unlike QPSK. Due to this feature, the intersymbol interference caused by nonlinear amplifiers is avoided completely in MSK.
Effect of inter-symbol interference
1. In the absence of ISI and noise, the transmitted bits can be decoded correctly at the receiver. The presence of ISI introduces errors in the decision device at the receiver output; thus the receiver can make an error in deciding whether it has received a logic 1 or a logic 0.
2. Another effect arises due to the overlapping and spreading of adjacent pulses. It is necessary to use a special filter called an equalizer in order to reduce ISI and its effects.
Remedy to reduce ISI
The function which produces zero ISI is the sinc function. Thus, if we transmit sinc pulses instead of rectangular pulses, ISI can be reduced to zero. Using sinc pulses for transmission is known as Nyquist pulse shaping.

The Fourier transform of a sinc pulse is a rectangular function.

To pass all frequency components, the frequency response of the filter must be exactly flat in the passband and zero in the attenuation band.

This type of filter is practically not realisable. Therefore, in practice, the frequency response of the filter is modified with a roll-off factor α to obtain a practically achievable filter response curve.

Nyquist criterion for distortionless baseband binary transmission: the criterion shows that the theoretical minimum system bandwidth needed to detect Rs symbols per second without ISI is Rs/2. This occurs when the filter response is made rectangular.
With this minimum-bandwidth criterion the bandwidth requirement is minimum, but there is no margin for error in the sampling time of the received signal.
Pulse shaping
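The zero-ISI property described above can be checked numerically by evaluating the raised-cosine time pulse, which equals 1 at t = 0 and 0 at every other symbol instant. A sketch (symbol period T = 1 and roll-off α = 0.35 are illustrative choices, not values from the slides):

```python
import math

def raised_cosine(t, T=1.0, alpha=0.35):
    """Raised-cosine pulse p(t): sinc shaping with roll-off factor alpha."""
    if abs(t) < 1e-12:
        return 1.0
    # removable singularity at t = +-T/(2*alpha)
    if alpha > 0 and abs(abs(t) - T / (2 * alpha)) < 1e-12:
        return (alpha / 2) * math.sin(math.pi / (2 * alpha))
    s = math.sin(math.pi * t / T) / (math.pi * t / T)              # sinc part
    c = math.cos(math.pi * alpha * t / T) / (1 - (2 * alpha * t / T) ** 2)
    return s * c

# zero ISI: the pulse vanishes at every other sampling instant nT
print([round(abs(raised_cosine(n)), 6) for n in range(4)])   # [1.0, 0.0, 0.0, 0.0]
```

Larger α widens the occupied bandwidth (up to (1+α)·Rs/2) but makes the pulse decay faster in time, giving more tolerance to sampling-time error — exactly the trade-off the roll-off factor controls.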
IA-2 DCOM on 19/11/2020 at 3:00-4:00 pm
1. QPSK, offset & non-offset QPSK, QAM, MSK, M-ary modulation, comparison
2. ISI remedy, no-ISI condition, Nyquist criterion, pulse shaping
3. Convolution codes
Example- Convolution code
Convolution code = (n, k, L)
where L = constraint length
Ex- The convolution encoder given in the figure has the following two generator sequences, each of length 3: g1 = (1,1,1), g2 = (1,0,1). Obtain the encoded sequence for the given message 10011.
Example
A convolutional encoder has a constraint length of 3 and code rate 1/3; the impulse responses are g1 = 111, g2 = 111, g3 = 101. Draw the encoder diagram, state diagram and tree diagram for the above code.
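The first encoder above (rate 1/2, L = 3, g1 = (1,1,1), g2 = (1,0,1)) can be simulated directly: each input bit is shifted through a 2-bit register and each tap set produces one output bit per input bit. A sketch (flushing zeros are appended to clear the register; whether they are included depends on convention):

```python
def conv_encode(msg, generators, L=3):
    """Convolutional encoding: each generator is a tap vector of length L,
    applied to [current bit, previous bit, ...]."""
    state = [0] * (L - 1)                      # shift register contents
    out = []
    for bit in msg + [0] * (L - 1):            # append flush bits
        window = [bit] + state                 # newest bit first
        for g in generators:
            out.append(sum(b * t for b, t in zip(window, g)) % 2)
        state = window[:-1]                    # shift register by one
    return out

g1, g2 = [1, 1, 1], [1, 0, 1]
bits = conv_encode([1, 0, 0, 1, 1], [g1, g2])
print("".join(map(str, bits)))   # 11101111010111 (pairs: 11 10 11 11 01 01 11)
```

Reading the output in pairs (one bit from g1, one from g2 per input bit) gives 11 10 11 11 01 01 11 for the message 10011 including the two flush intervals.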
Forward error correction (FEC)
Forward error correction (FEC) is an error correction technique to detect and correct a limited
number of errors in transmitted data without the need for retransmission.
In this method, the sender sends a redundant error-correcting code along with the data frame.
The receiver performs necessary checks based upon the additional redundant bits. If it finds
that the data is free from errors, it executes error-correcting code that generates the actual
frame. It then removes the redundant bits before passing the message to the upper layers.

Advantages and Disadvantages


● Because FEC does not require handshaking between the source and the destination, it
can be used for broadcasting of data to many destinations simultaneously from a single
source.
● Another advantage is that FEC saves bandwidth required for retransmission. So, it is
used in real time systems.
● Its main limitation is that if there are too many errors, the frames need to be retransmitted.
Error Correction Codes for FEC
Error correcting codes for forward error corrections can be broadly categorized into two types,
namely, block codes and convolution codes.
● Block codes − The message is divided into fixed-sized blocks of bits to which redundant
bits are added for error correction.
● Convolutional codes − The message comprises data streams of arbitrary length, and parity symbols are generated by the sliding application of a Boolean function to the data stream.
● Hamming Codes − It is a block code that is capable of
detecting up to two simultaneous bit errors and correcting
single-bit errors.
● Binary Convolution Code − Here, an encoder processes an
input sequence of bits of arbitrary length and generates a
sequence of output bits.
● Reed - Solomon Code − They are block codes that are
capable of correcting burst errors in the received data block.
● Low-Density Parity Check Code − It is a block code specified
by a parity-check matrix containing a low density of 1s. They
are suitable for large block sizes in very noisy channels.
Information theory and source coding

Information theory is used for the mathematical analysis and modelling of communication systems.
Topic
AWGN channel, and Shannon-
Hartley channel capacity theorem.
AWGN Channel
AWGN is often used as a channel model in which the only impairment to
communication is a linear addition of wideband or white noise with a
constant spectral density (expressed as watts per hertz of bandwidth) and
a Gaussian distribution of amplitude.

Why is the AWGN channel used in communication?

The term additive white Gaussian noise (AWGN) originates as follows: [Additive] the noise is additive, i.e. the received signal equals the transmitted signal plus noise. This gives the most widely used channel model in communication systems.
[Figure: block diagram of the AWGN channel model — transmitted signal plus Gaussian noise gives the received signal]
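The Shannon-Hartley theorem named in this topic gives the capacity of an AWGN channel of bandwidth B and signal-to-noise ratio S/N as C = B·log2(1 + S/N). A quick numerical sketch (the telephone-channel figures are an illustrative example, not from the slides):

```python
from math import log2

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity of an AWGN channel, in bits per second."""
    return bandwidth_hz * log2(1 + snr_linear)

# e.g. a 3.1 kHz telephone channel at 30 dB SNR (SNR = 10^(30/10) = 1000)
print(round(channel_capacity(3100, 1000)))   # 30898
```

As long as the transmission rate R satisfies R <= C, error-free transmission is theoretically possible with suitable coding; above C it is not, no matter what code is used.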
Automatic Repeat ReQuest (ARQ)
Automatic Repeat ReQuest (ARQ) is a group of error – control protocols for transmission
of data over noisy or unreliable communication network. These protocols reside in the
Data Link Layer and in the Transport Layer of the OSI (Open Systems Interconnection)
reference model. They are named so because they provide for automatic retransmission
of frames that are corrupted or lost during transmission. ARQ is also called Positive
Acknowledgement with Retransmission (PAR).
ARQs are used to provide reliable transmissions over unreliable upper layer services.
They are often used in Global System for Mobile (GSM) communication.
Working Principle
In these protocols, the receiver sends an acknowledgement message back to the sender
if it receives a frame correctly. If the sender does not receive the acknowledgement of a
transmitted frame before a specified period of time, i.e. a timeout occurs, the sender
understands that the frame has been corrupted or lost during transit. So, the sender
retransmits the frame. This process is repeated until the correct frame is transmitted.
Types of ARQ Protocols
There are three ARQ protocols in the data link layer.
● Stop – and – Wait ARQ − Stop – and – wait ARQ provides unidirectional data
transmission with flow control and error control mechanisms, appropriate for
noisy channels. The sender keeps a copy of the sent frame. It then waits for a
finite time to receive a positive acknowledgement from receiver. If the timer
expires, the frame is retransmitted. If a positive acknowledgement is received
then the next frame is sent.
● Go – Back – N ARQ − Go – Back – N ARQ provides for sending multiple frames
before receiving the acknowledgement for the first frame. It uses the concept of
sliding window, and so is also called sliding window protocol. The frames are
sequentially numbered and a finite number of frames are sent. If the
acknowledgement of a frame is not received within the time period, all frames
starting from that frame are retransmitted.
● Selective Repeat ARQ − This protocol also provides for sending multiple frames
before receiving the acknowledgement for the first frame. However, here only the
erroneous or lost frames are retransmitted, while the good frames are received
and buffered.