Digital Communications: Fundamentals and Applications
by Bernard Sklar
Digital communication system
Important features of a DCS:
The transmitter sends a waveform from a finite set
of possible waveforms during a limited time
The channel distorts, attenuates the transmitted
signal
The receiver decides which waveform was
transmitted given the distorted/noisy received
signal. There is a limit to the time it has to do this
task.
The probability of an erroneous decision is an
important measure of system performance
Lecture 1 2
Digital versus analog
Advantages of digital communications:
Regenerator receiver
Original Regenerated
pulse pulse
Propagation distance
Lecture 1 3
Classification of signals
Deterministic and random signals
Deterministic signal: No uncertainty with respect to
the signal value at any time.
Random signal: Some degree of uncertainty in
signal values before it actually occurs.
Thermal noise in electronic circuits due to the random
movement of electrons. See my notes on Noise
Reflection of radio waves from different layers of
ionosphere
Interference
Lecture 1 4
Classification of signals …
Periodic and non-periodic signals
A discrete signal
Analog signals
Lecture 1 5
Classification of signals ..
Energy and power signals
A signal is an energy signal if, and only if, it has nonzero but finite energy for all time:
$$0 < E_x = \int_{-\infty}^{\infty} |x(t)|^2\, dt < \infty$$
A signal is a power signal if, and only if, it has finite but nonzero power for all time:
$$0 < P_x = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2\, dt < \infty$$
Lecture 1 6
Random process
A random process is a collection (ensemble) of time functions,
or signals, corresponding to various outcomes of a random
experiment. For each outcome, there exists a deterministic
function, which is called a sample function or a realization.
Random
variables
Real number
Sample functions
or realizations
(deterministic
function)
time (t)
Lecture 1 7
Random process …
Strictly stationary: none of the statistics of the random process are affected by a shift in the time origin.
Ergodic: the time averages of the mean and autocorrelation equal the ensemble mean and autocorrelation, respectively. In other words, you get the same result from averaging over the ensemble or over all time.
Lecture 1 8
Autocorrelation
Autocorrelation of an energy signal:
$$R_x(\tau) = \int_{-\infty}^{\infty} x(t)\, x^*(t+\tau)\, dt$$
Lecture 1 9
Spectral density
Energy signals: energy spectral density (ESD): $\Psi_x(f) = |X(f)|^2$
Power signals: power spectral density (PSD): $G_x(f) = \lim_{T\to\infty} \frac{1}{T}|X_T(f)|^2$
Random process: power spectral density (PSD): $G_x(f) = \mathcal{F}\{R_x(\tau)\}$
Lecture 1 10
Properties of an autocorrelation function
For real-valued (and WSS in case of random
signals):
1. Autocorrelation and spectral density form a
Fourier transform pair. – see Linear systems,
noise
2. Autocorrelation is symmetric around zero.
3. Its maximum value occurs at the origin.
4. Its value at the origin is equal to the average
power or energy.
Lecture 1 11
Noise in communication systems
Thermal noise is described by a zero-mean, Gaussian random process, n(t).
Its PSD is flat, hence it is called white noise; $\sigma$ is the standard deviation and $\sigma^2$ is the variance of the random process.
$$G_n(f) = \frac{N_0}{2} \;\text{[W/Hz]}, \qquad R_n(\tau) = \frac{N_0}{2}\,\delta(\tau)$$
[Figures: power spectral density, autocorrelation function, and Gaussian probability density function of white noise.]
Lecture 1 12
Signal transmission through linear systems
[Input → linear system h(t), H(f) → output; see my notes on linear systems.]
Deterministic signals: $y(t) = x(t) * h(t)$, $Y(f) = X(f)\,H(f)$
Random signals: $G_y(f) = G_x(f)\,|H(f)|^2$
Lecture 1 13
Signal transmission … - cont’d
Ideal filters:
Non-causal!
Low-pass
Band-pass High-pass
Realizable filters:
RC filters Butterworth filter
Lecture 1 14
Bandwidth of signal
Baseband versus bandpass:
Baseband Bandpass
signal signal
Local oscillator
Bandwidth dilemma:
Bandlimited signals are not realizable!
Realizable signals have infinite bandwidth! We approximate
“Band-Limited” in our analysis!
Lecture 1 15
Bandwidth of signal …
Different definitions of bandwidth:
a) Half-power bandwidth
b) Noise equivalent bandwidth
c) Null-to-null bandwidth
d) Fractional power containment bandwidth
e) Bounded power spectral density (e.g. 50 dB)
f) Absolute bandwidth
[Figure: the definitions (a)-(e) marked on a PSD plot.]
Lecture 1 16
Formatting and transmission of baseband signal
A Digital Communication System
[Block diagram of a DCS: digital, textual, or analog source information is formatted (the analog source is sampled, quantized, and encoded) into a bit stream, pulse modulated into pulse waveforms, and transmitted over the channel. At the receiver, the waveform is demodulated/detected into a bit stream, decoded (low-pass filtered for analog information), and delivered to the information sink.]
Lecture 2 17
Format analog signals
To transform an analog waveform into a form
that is compatible with a digital
communication system, the following steps
are taken:
1. Sampling – See my notes on Sampling
2. Quantization and encoding
3. Baseband transmission
Lecture 2 18
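To make the three formatting steps concrete, here is a minimal end-to-end sketch (sampling, quantizing, and encoding a sine wave into PCM words); the parameter values and names are illustrative, not from the lecture.

```python
import numpy as np

fs, f0 = 8000, 1000                      # sampling rate and tone frequency (Hz)
n_bits = 3                               # bits per sample -> L = 8 levels
t = np.arange(0, 0.002, 1 / fs)          # 1. Sampling
x = np.sin(2 * np.pi * f0 * t)

L = 2 ** n_bits                          # 2. Quantization (uniform, mid-rise)
q = 2.0 / L
levels = np.clip(np.floor(x / q), -L // 2, L // 2 - 1).astype(int)

codes = [format(l + L // 2, f"0{n_bits}b") for l in levels]   # 3. Encoding to PCM words
print(list(zip(np.round(x, 3), codes)))
```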
Sampling
See my notes on Fourier Series, Fourier Transform and Sampling
x (t ) | X ( f ) |
xs (t )
| Xs( f )|
Lecture 2 19
Aliasing effect
LP filter
Nyquist rate
aliasing
Lecture 2 20
Sampling theorem
Out
In
Average quantization noise power
Quantized
Lecture 2 22
Encoding (PCM)
Lecture 2 23
Quantization example
amplitude
x(t)
111 3.1867
100 0.4552
010 -1.3657
Lecture 2 24
Quantization error
Quantizing error: the difference between the input and output of a quantizer: $e(t) = \hat{x}(t) - x(t)$
[Figure: quantizer modeled as an additive noise source, $\hat{x}(t) = x(t) + e(t)$. The noise model is an approximation!]
Lecture 2 25
Quantization error …
Quantizing error:
Granular or linear errors happen for inputs within the dynamic range of the quantizer.
Saturation errors happen for inputs outside the dynamic range of the quantizer.
Saturation errors are larger than linear errors (also known as "overflow" or "clipping").
Saturation errors can be avoided by proper tuning of the AGC.
Saturation errors need to be handled by overflow detection!
Quantization noise variance:
$$\sigma_q^2 = E\{[x - q(x)]^2\} = \int e^2(x)\, p(x)\, dx = \sigma_{Lin}^2 + \sigma_{Sat}^2$$
$$\sigma_{Lin}^2 = \sum_{l=0}^{L/2-1} 2\,\frac{q_l^2}{12}\, p(x_l)\, q_l \;\xrightarrow{\text{Uniform q.}}\; \sigma_{Lin}^2 = \frac{q^2}{12}$$
Lecture 2 26
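As a quick check of the uniform-quantizer result above, the short sketch below (illustrative code, not from the lecture) quantizes a sine wave with L levels and compares the measured noise power with q²/12.

```python
import numpy as np

def uniform_quantize(x, L, xmax):
    """Mid-rise uniform quantizer with L levels over [-xmax, xmax]."""
    q = 2 * xmax / L                      # step size
    idx = np.clip(np.floor(x / q), -L // 2, L // 2 - 1)
    return (idx + 0.5) * q

x = np.sin(2 * np.pi * np.linspace(0, 1, 10000, endpoint=False))
for L in (8, 32, 256):
    xq = uniform_quantize(x, L, xmax=1.0)
    q = 2.0 / L
    print(f"L={L:4d}  measured var={np.var(xq - x):.3e}  q^2/12={q**2/12:.3e}")
```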
Uniform and non-uniform quant.
Uniform (linear) quantizing:
No assumption about amplitude statistics and correlation
properties of the input.
Not using the user-related specifications
Robust to small changes in input statistic by not finely tuned to a
specific set of input parameters
Simple implementation
Application of linear quantizer:
Signal processing, graphic and display applications, process
control applications
Non-uniform quantizing:
Using the input statistics to tune quantizer parameters
Larger SNR than uniform quantizing with same number of levels
Non-uniform intervals in the dynamic range with same quantization
noise variance
Application of non-uniform quantizer:
Commonly used for speech
Lecture 2 27
Non-uniform quantization
It is achieved by uniformly quantizing the “compressed” signal.
(actually, modern A/D converters use Uniform quantizing at 12-13 bits
and compand digitally)
At the receiver, an inverse compression characteristic, called
“expansion” is employed to avoid signal distortion.
compression + expansion = companding
[Figure: transmitter compresses x(t) with y = C(x) and quantizes y(t); the receiver expands ŷ(t) back into x̂(t).]
Lecture 2 28
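A minimal sketch of companded quantization, assuming the standard μ-law characteristic with μ = 255 (illustrative, not code from the lecture):

```python
import numpy as np

MU = 255.0

def mu_law_compress(x):
    """y = sign(x) * ln(1 + mu|x|) / ln(1 + mu), for |x| <= 1."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_expand(y):
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

def quantize(y, bits=8):
    L = 2 ** bits
    return np.round(y * (L / 2 - 1)) / (L / 2 - 1)

x = 0.05 * np.sin(2 * np.pi * np.linspace(0, 1, 1000))   # weak, speech-like signal
x_hat = mu_law_expand(quantize(mu_law_compress(x)))
snr = 10 * np.log10(np.mean(x**2) / np.mean((x - x_hat)**2))
print(f"SNR with mu-law companding: {snr:.1f} dB")
```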
Statistics of speech amplitudes
In speech, weak signals are more frequent than strong ones.
[Figure: probability density of the normalized magnitude of a speech signal, concentrated near zero.]
Using equal step sizes (uniform quantizer) gives a low S/N_q for weak signals and a high S/N_q for strong signals.
Adjusting the step size of the quantizer by taking the speech statistics into account improves the average SNR over the input range.
Lecture 2 29
Baseband transmission
Lecture 2 30
PCM waveforms
+V 1 0 1 1 0 +V 1 0 1 1 0
NRZ-L -V Manchester -V
Unipolar-RZ +V Miller +V
0 -V
+V +V
Bipolar-RZ 0 Dicode NRZ 0
-V -V
0 T 2T 3T 4T 5T 0 T 2T 3T 4T 5T
Lecture 2 31
PCM waveforms …
Criteria for comparing and selecting PCM
waveforms:
Spectral characteristics (power spectral density and
bandwidth efficiency)
Bit synchronization capability
Error detection capability
Interference and noise immunity
Implementation cost and complexity
Lecture 2 32
Spectra of PCM waveforms
Lecture 2 33
M-ary pulse modulation
Lecture 2 34
PAM example
Lecture 2 35
Formatting and transmission of baseband signal
100 0.4552
010 -1.3657
Lecture 3 37
Example of M-ary PAM
3B
A.
‘11’
‘1’ B
T
T T ‘01’
T -B ‘00’ T T
‘0’ ‘10’
-A. -3B
Lecture 3 38
Example of M-ary PAM …
0 Ts 2Ts
2.2762 V 1.3657 V
0 Tb 2Tb 3Tb 4Tb 5Tb 6Tb
1 1 0 1 0 1
Rb=1/Tb=3/Ts
R=1/T=1/Tb=3/Ts
0 T 2T 3T 4T 5T 6T
Rb=1/Tb=3/Ts
R=1/T=1/2Tb=3/2Ts=1.5/Ts
0 T 2T 3T
Lecture 3 39
Today we are going to talk about:
Receiver structure
Demodulation (and sampling)
Detection
First step for designing the receiver
Matched filter receiver
Correlator receiver
Lecture 3 40
Demodulation and detection
[Block diagram: message symbol m_i is formatted and pulse modulated into g_i(t), bandpass modulated into s_i(t), i = 1,...,M, and sent over the channel h_c(t) with additive noise n(t); the receiver demodulates and samples r(t) to obtain z(T), detects, and outputs the estimated symbol m̂_i.]
Lecture 3 42
Example: Channel impact …
$$h_c(t) = \delta(t) + 0.5\,\delta(t - 0.75T)$$
Lecture 3 43
Receiver tasks
Demodulation and sampling:
Waveform recovery and preparing the received
signal for detection:
Improving the signal power to the noise power (SNR)
using matched filter
Reducing ISI using equalizer
Sampling the recovered waveform
Detection:
Estimate the transmitted symbol based on the
received sample
Lecture 3 44
Receiver structure
z (T ) m̂i
r (t ) Frequency Receiving Equalizing Threshold
down-conversion filter filter comparison
Lecture 3 45
Baseband and bandpass
Bandpass model of detection process is
equivalent to baseband model because:
The received bandpass waveform is first
transformed to a baseband waveform.
Equivalence theorem:
Performing bandpass linear signal processing followed by
heterodyning the signal to the baseband, yields the same
results as heterodyning the bandpass signal to the
baseband , followed by a baseband linear signal
processing.
Lecture 3 46
Steps in designing the receiver
Find optimum solution for receiver design with the
following goals:
1. Maximize SNR
2. Minimize ISI
Steps in design:
Model the received signal
Find separate solutions for each of the goals.
First, we focus on designing a receiver which
maximizes the SNR.
Lecture 3 47
Design the receiver filter to maximize the SNR
n(t )
AWGN
Simplify the model:
Received signal in AWGN
n(t )
AWGN
Lecture 3 48
Matched filter receiver
Problem:
Design the receiver filter h(t) such that the SNR is maximized at the sampling time when s_i(t), i = 1,...,M, is transmitted.
Solution:
The optimum filter is the matched filter, given by
$$h(t) = h_{opt}(t) = s_i^*(T - t)$$
$$H(f) = H_{opt}(f) = S_i^*(f)\,\exp(-j 2\pi f T)$$
[Figure: s_i(t) on 0..T and its time-reversed, delayed copy h_opt(t).]
Lecture 3 49
Example of matched filter
y (t ) si (t ) h opt (t )
si (t ) h opt (t ) A2
A A
T T
T t T t 0 T 2T t
y (t ) si (t ) h opt (t )
si (t ) h opt (t ) A2
A A
T T
Lecture 3 50
Properties of the matched filter
The Fourier transform of a matched filter output with the matched signal as input is, except for a time-delay factor, proportional to the ESD of the input signal:
$$Z(f) = |S(f)|^2 \exp(-j 2\pi f T)$$
The output of a matched filter is proportional to a shifted version of the autocorrelation function of the input signal to which the filter is matched:
$$z(t) = R_s(t - T) \;\Rightarrow\; z(T) = R_s(0) = E_s$$
The output SNR of a matched filter depends only on the ratio of the signal energy to the PSD of the white noise at the filter input:
$$\left(\frac{S}{N}\right)_{\max,\, t=T} = \frac{E_s}{N_0/2}$$
Two matching conditions in the matched-filtering operation:
spectral phase matching, which gives the desired output peak at time T
spectral amplitude matching, which gives optimum SNR to the peak value
Lecture 3 51
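A small numerical sketch of the matched-filter idea (names and parameters are illustrative): correlating the received signal against a time-reversed copy of the pulse produces its peak, approximately equal to E_s, at t = T.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                                   # samples per symbol
s = np.ones(T) / np.sqrt(T)               # unit-energy rectangular pulse
h = s[::-1]                               # matched filter: s*(T - t)

r = s + rng.normal(0, 0.5, T)             # received pulse in AWGN
z = np.convolve(r, h)                      # filter output
print("peak index:", np.argmax(np.abs(z)), "expected:", T - 1)
print("sample at t=T (~ Es):", z[T - 1])
```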
Correlator receiver
The matched filter output at the sampling time can be realized as the correlator output:
$$z(T) = r(t) * h_{opt}(t)\big|_{t=T} = \int_0^T r(\tau)\, s_i^*(\tau)\, d\tau = \langle r(t), s_i(t) \rangle$$
Lecture 3 52
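The equivalence can be checked numerically, continuing the sketch above (illustrative code): integrating r(t)·s(t) over the symbol gives the same value as the matched-filter output sampled at t = T.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100
s = np.ones(T) / np.sqrt(T)               # unit-energy pulse
r = s + rng.normal(0, 0.5, T)             # received pulse in AWGN

z_mf = np.convolve(r, s[::-1])[T - 1]     # matched filter sampled at t = T
z_corr = np.sum(r * s)                    # correlator output <r, s>
print(f"matched filter: {z_mf:.6f}  correlator: {z_corr:.6f}")
```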
Implementation of matched filter receiver
[Block diagram: bank of M matched filters s_1^*(T - t), ..., s_M^*(T - t) applied to r(t), each sampled at t = T to form the observation vector z.]
Matched filter output:
$$z_i = r(t) * s_i^*(T - t)\big|_{t=T}, \qquad i = 1, \ldots, M$$
$$\mathbf{z} = (z_1(T), z_2(T), \ldots, z_M(T)) = (z_1, z_2, \ldots, z_M)$$
Lecture 3 53
Implementation of correlator receiver
[Block diagram: bank of M correlators; r(t) is multiplied by s_1(t), ..., s_M(t) and integrated over 0..T to form the observation vector z.]
Correlators output:
$$z_i = \int_0^T r(t)\, s_i(t)\, dt, \qquad i = 1, \ldots, M$$
$$\mathbf{z} = (z_1(T), z_2(T), \ldots, z_M(T)) = (z_1, z_2, \ldots, z_M)$$
Lecture 3 54
Implementation example of matched filter
receivers
s1 (t )
A
Bank of 2 matched filters
T
0 T t A z1 (T )
T z1
r (t ) z
0 T
z
s2 (t )
z2
0 T
z 2 (T )
0 T t
A A
T T
Lecture 3 55
Receiver job
Demodulation and sampling:
Waveform recovery and preparing the received
signal for detection:
Improving the signal power to the noise power (SNR)
using matched filter
Reducing ISI using equalizer
Sampling the recovered waveform
Detection:
Estimate the transmitted symbol based on the
received sample
Lecture 4 56
Receiver structure
Digital Receiver
Step 1 – waveform to sample transformation Step 2 – decision making
z (T ) m̂i
r (t ) Frequency Receiving Equalizing Threshold
down-conversion filter filter comparison
Lecture 4 57
Implementation of matched filter receiver
[Block diagram: bank of M matched filters s_1^*(T - t), ..., s_M^*(T - t) applied to r(t), each sampled at t = T to form the observation vector z.]
Matched filter output:
$$z_i = r(t) * s_i^*(T - t)\big|_{t=T}, \qquad i = 1, \ldots, M, \qquad \mathbf{z} = (z_1, z_2, \ldots, z_M)$$
Lecture 4 58
Implementation of correlator receiver
[Block diagram: bank of M correlators; r(t) is multiplied by s_1(t), ..., s_M(t) and integrated over 0..T to form the observation vector z.]
Correlators output:
$$z_i = \int_0^T r(t)\, s_i(t)\, dt, \qquad i = 1, \ldots, M, \qquad \mathbf{z} = (z_1, z_2, \ldots, z_M)$$
Lecture 4 59
Today, we are going to talk about:
Detection:
Estimate the transmitted symbol based on the
received sample
Signal space used for detection
Orthogonal N-dimensional space
Signal to waveform transformation and vice versa
Lecture 4 60
Signal space
What is a signal space?
Vector representations of signals in an N-dimensional
orthogonal space
Why do we need a signal space?
It is a means to convert signals to vectors and vice versa.
It is a means to calculate signals energy and Euclidean
distances between signals.
Why are we interested in Euclidean distances between
signals?
For detection purposes: The received signal is transformed to
a received vector. The signal which has the minimum
Euclidean distance to the received signal is estimated as the
transmitted signal.
Lecture 4 61
Schematic example of a signal space
[Figure: two-dimensional signal space with basis functions ψ_1(t), ψ_2(t); signal points s_1, s_2, s_3 and received point z = (z_1, z_2).]
Transmitted signal alternatives:
$$s_1(t) = a_{11}\psi_1(t) + a_{12}\psi_2(t) \;\leftrightarrow\; \mathbf{s}_1 = (a_{11}, a_{12})$$
$$s_2(t) = a_{21}\psi_1(t) + a_{22}\psi_2(t) \;\leftrightarrow\; \mathbf{s}_2 = (a_{21}, a_{22})$$
$$s_3(t) = a_{31}\psi_1(t) + a_{32}\psi_2(t) \;\leftrightarrow\; \mathbf{s}_3 = (a_{31}, a_{32})$$
Received signal at the matched filter output:
$$z(t) = z_1\psi_1(t) + z_2\psi_2(t) \;\leftrightarrow\; \mathbf{z} = (z_1, z_2)$$
Lecture 4 62
Signal space
To form a signal space, first we need to know
the inner product between two signals
(functions):
Inner (scalar) product:
$$\langle x(t), y(t) \rangle = \int_{-\infty}^{\infty} x(t)\, y^*(t)\, dt$$
Analogous to the "dot" product of discrete n-space vectors; it is the cross-correlation between x(t) and y(t).
Properties of the inner product:
$$\langle a\,x(t), y(t) \rangle = a \langle x(t), y(t) \rangle, \qquad \langle x(t), a\,y(t) \rangle = a^* \langle x(t), y(t) \rangle$$
$$\langle x(t) + y(t), z(t) \rangle = \langle x(t), z(t) \rangle + \langle y(t), z(t) \rangle$$
Lecture 4 63
Signal space …
The distance in signal space is measured by calculating the norm.
What is a norm?
Norm of a signal:
$$\| x(t) \| = \sqrt{\langle x(t), x(t) \rangle} = \sqrt{\int_{-\infty}^{\infty} |x(t)|^2\, dt} = \sqrt{E_x}$$
= "length" or amplitude of x(t)
$$\| a\,x(t) \| = |a|\, \| x(t) \|$$
Norm between two signals:
$$d_{x,y} = \| x(t) - y(t) \|$$
We refer to the norm between two signals as the Euclidean distance between the two signals.
Lecture 4 64
Example of distances in signal space
2 (t )
s1 (a11 , a12 )
E1 d s1 , z
1 (t )
E3 z ( z1 , z 2 )
d s3 , z E2 d s2 , z
s 3 ( a31 , a32 )
s 2 (a21 , a22 )
Lecture 4 65
Orthogonal signal space
An N-dimensional orthogonal signal space is characterized by N linearly independent functions $\{\psi_j(t)\}_{j=1}^{N}$ called basis functions. The basis functions must satisfy the orthogonality condition
$$\langle \psi_i(t), \psi_j(t) \rangle = \int_0^T \psi_i(t)\, \psi_j^*(t)\, dt = K_i\, \delta_{ij}, \qquad 0 \le t \le T, \quad i, j = 1, \ldots, N$$
where
$$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$$
Lecture 4 66
Example of an orthonormal basis
Example: 2-dimensional orthonormal signal space
$$\psi_1(t) = \sqrt{\tfrac{2}{T}} \cos(2\pi t / T), \qquad \psi_2(t) = \sqrt{\tfrac{2}{T}} \sin(2\pi t / T), \qquad 0 \le t \le T$$
$$\langle \psi_1(t), \psi_2(t) \rangle = \int_0^T \psi_1(t)\, \psi_2(t)\, dt = 0, \qquad \| \psi_1(t) \| = \| \psi_2(t) \| = 1$$
Example: 1-dimensional orthonormal signal space
$$\psi_1(t) = \tfrac{1}{\sqrt{T}}, \quad 0 \le t \le T, \qquad \| \psi_1(t) \| = 1$$
Lecture 4 67
Signal space …
Any arbitrary finite set of waveforms $\{s_i(t)\}_{i=1}^{M}$, where each member of the set is of duration T, can be expressed as a linear combination of N orthogonal waveforms $\{\psi_j(t)\}_{j=1}^{N}$, where $N \le M$:
$$s_i(t) = \sum_{j=1}^{N} a_{ij}\, \psi_j(t), \qquad i = 1, \ldots, M, \quad N \le M$$
where
$$a_{ij} = \frac{1}{K_j} \langle s_i(t), \psi_j(t) \rangle = \frac{1}{K_j} \int_0^T s_i(t)\, \psi_j^*(t)\, dt, \qquad j = 1, \ldots, N,\; i = 1, \ldots, M,\; 0 \le t \le T$$
Vector representation of the waveform: $\mathbf{s}_i = (a_{i1}, a_{i2}, \ldots, a_{iN})$
Waveform energy: $E_i = \sum_{j=1}^{N} K_j\, a_{ij}^2$
Lecture 4 68
Signal space …
N
si (t ) aij j (t ) s i (ai1 , ai 2 ,..., aiN )
j 1
Waveform to vector conversion Vector to waveform conversion
1 (t ) 1 (t )
T ai1 ai1
si (t )
0
ai1 sm
ai1
sm si (t )
N (t ) N (t )
T aiN aiN
0 aiN aiN
Lecture 4 69
Example of projecting signals onto an orthonormal signal space
[Figure: signal points s_1, s_2, s_3 projected onto the plane spanned by ψ_1(t) and ψ_2(t).]
Gram-Schmidt procedure:
1. Define $\psi_1(t) = s_1(t)/\sqrt{E_1} = s_1(t)/\|s_1(t)\|$
2. For $i = 2, \ldots, M$ compute $d_i(t) = s_i(t) - \sum_{j=1}^{i-1} \langle s_i(t), \psi_j(t) \rangle\, \psi_j(t)$; if $d_i(t) \ne 0$, let $\psi_i(t) = d_i(t)/\|d_i(t)\|$
Lecture 4 71
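A short Gram-Schmidt sketch in discrete time (illustrative, assuming sampled waveforms stored as NumPy rows): it builds orthonormal basis vectors exactly as in steps 1-2 above.

```python
import numpy as np

def gram_schmidt(signals, tol=1e-10):
    """Orthonormalize the rows of `signals`; returns the basis vectors as rows."""
    basis = []
    for s in signals:
        d = s.astype(float).copy()
        for psi in basis:
            d -= np.dot(s, psi) * psi        # remove projection on existing basis
        norm = np.linalg.norm(d)
        if norm > tol:                        # keep only non-degenerate directions
            basis.append(d / norm)
    return np.array(basis)

# Example: two rectangular pulses of duration T and T/2 (sampled)
s1 = np.ones(8)
s2 = np.concatenate([np.ones(4), np.zeros(4)])
psi = gram_schmidt([s1, s2])
print(np.round(psi @ psi.T, 6))               # identity matrix => orthonormal
```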
Example of Gram-Schmidt procedure
Find the basis functions and plot the signal space for the
following transmitted signals:
s1 (t ) s2 (t )
A
T 0 T t
A
0 T t T
z1
(T t ) Observation
1
z1 vector
r (t ) z
z
N (T t ) z N
zN
N
si (t ) aij j (t ) i 1,..., M
j 1
z ( z1 , z 2 ,..., z N ) NM
z j r (t ) j (T t ) j 1,..., N
Lecture 4 73
Implementation of the correlator receiver
Bank of N correlators
1 (t )
T z1
r (t )
0
r1
z
z Observation
N (t ) vector
T rN
0 zN
N
si (t ) aij j (t ) i 1,..., M
j 1
z ( z1 , z 2 ,..., z N ) NM
T
z j r (t ) j (t )dt j 1,..., N
0
Lecture 4 74
Example of matched filter receivers using
basic functions
s1 (t ) s2 (t ) 1 (t )
A 1
T T
0 T t
0 T t A 0 T t
T
1 matched filter
1 (t )
1
r (t ) z1 z
T
z1 z
0 T t
Lecture 4 75
White noise in the orthonormal signal space
The part of the noise outside the signal space is orthogonal to every basis function:
$$\langle \tilde{n}(t), \psi_j(t) \rangle = 0, \qquad j = 1, \ldots, N$$
The projections $\{n_j\}_{j=1}^{N}$ onto the basis functions are independent, zero-mean Gaussian random variables with variance $\mathrm{var}(n_j) = N_0/2$.
Lecture 4 76
Detection of signal in AWGN
Detection problem:
Given the observation vector z, perform a mapping from z to an estimate m̂ of the transmitted symbol m_i, such that the average probability of error in the decision is minimized.
[Block diagram: m_i → modulator → s_i → (+ noise n) → z → decision rule → m̂.]
Lecture 5 77
Statistics of the observation Vector
AWGN channel model: $\mathbf{z} = \mathbf{s}_i + \mathbf{n}$
The signal vector $\mathbf{s}_i = (a_{i1}, a_{i2}, \ldots, a_{iN})$ is deterministic.
The elements of the noise vector $\mathbf{n} = (n_1, n_2, \ldots, n_N)$ are i.i.d. Gaussian random variables with zero mean and variance $N_0/2$. The noise vector pdf is
$$p_{\mathbf{n}}(\mathbf{n}) = \frac{1}{(\pi N_0)^{N/2}} \exp\!\left( -\frac{\|\mathbf{n}\|^2}{N_0} \right)$$
The elements of the observed vector $\mathbf{z} = (z_1, z_2, \ldots, z_N)$ are independent Gaussian random variables. Its pdf is
$$p_{\mathbf{z}}(\mathbf{z} \mid \mathbf{s}_i) = \frac{1}{(\pi N_0)^{N/2}} \exp\!\left( -\frac{\|\mathbf{z} - \mathbf{s}_i\|^2}{N_0} \right)$$
Lecture 5 78
Detection
Optimum decision rule (maximum a posteriori probability):
Set $\hat{m} = m_i$ if
$$\Pr(m_i \text{ sent} \mid \mathbf{z}) \ge \Pr(m_k \text{ sent} \mid \mathbf{z}), \qquad \text{for all } k \ne i, \quad k = 1, \ldots, M$$
Applying Bayes' rule gives:
Set $\hat{m} = m_i$ if
$$p_k\, \frac{p_{\mathbf{z}}(\mathbf{z} \mid m_k)}{p_{\mathbf{z}}(\mathbf{z})} \;\text{ is maximum for } k = i$$
Lecture 5 79
Detection …
Lecture 5 80
Detection (ML rule)
For equally probable symbols, the optimum decision rule (maximum a posteriori probability) simplifies to:
Set $\hat{m} = m_i$ if
$$p_{\mathbf{z}}(\mathbf{z} \mid m_k) \;\text{ is maximum for } k = i$$
or equivalently:
Set $\hat{m} = m_i$ if
$$\ln[p_{\mathbf{z}}(\mathbf{z} \mid m_k)] \;\text{ is maximum for } k = i$$
Lecture 5 82
Detection rule (ML)…
It can be simplified to: choose the $m_i$ whose signal vector minimizes the Euclidean distance $\|\mathbf{z} - \mathbf{s}_k\|$,
or equivalently: choose the $m_i$ that maximizes the metric $\langle \mathbf{z}, \mathbf{s}_k \rangle - \tfrac{1}{2} E_k$.
Lecture 5 83
Maximum likelihood detector block
diagram
[Block diagram: compute $\langle \mathbf{z}, \mathbf{s}_1 \rangle - \tfrac{1}{2}E_1$, ..., $\langle \mathbf{z}, \mathbf{s}_M \rangle - \tfrac{1}{2}E_M$ and choose the largest to obtain m̂.]
Lecture 5 84
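A minimal sketch of the ML (minimum Euclidean distance) detector for an AWGN observation vector, using illustrative QPSK-like signal points (not from the lecture):

```python
import numpy as np

def ml_detect(z, constellation):
    """Return the index of the signal vector closest to observation z (ML in AWGN)."""
    d2 = np.sum((constellation - z) ** 2, axis=1)   # squared Euclidean distances
    return int(np.argmin(d2))

# Illustrative 2-D constellation and a noisy observation of the third point
S = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
rng = np.random.default_rng(2)
z = S[2] + rng.normal(0, 0.4, 2)
print("decided symbol index:", ml_detect(z, S))      # expect 2 most of the time
```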
Schematic example of the ML decision regions
2 (t )
Z2
s2
Z1
s3 s1
Z3 1 (t )
s4
Z4
Lecture 5 85
Average probability of symbol error
Erroneous decision: for the transmitted symbol m_i, or equivalently the signal vector s_i, an error in the decision occurs if the observation vector z does not fall inside region Z_i.
Probability of an erroneous decision for a transmitted symbol:
$$P_e(m_i) = 1 - P_c(m_i)$$
Lecture 5 86
Av. prob. of symbol error …
Average probability of symbol error:
$$P_E(M) = \sum_{i=1}^{M} \Pr(\hat{m} \ne m_i)$$
For equally probable symbols:
$$P_E(M) = \frac{1}{M} \sum_{i=1}^{M} P_e(m_i) = 1 - \frac{1}{M} \sum_{i=1}^{M} P_c(m_i) = 1 - \frac{1}{M} \sum_{i=1}^{M} \int_{Z_i} p_{\mathbf{z}}(\mathbf{z} \mid m_i)\, d\mathbf{z}$$
Lecture 5 87
Example for binary PAM
[Figure: antipodal signal points $\mathbf{s}_2 = -\sqrt{E_b}$ and $\mathbf{s}_1 = +\sqrt{E_b}$ on the ψ_1 axis with the conditional pdfs $p_z(z \mid m_1)$, $p_z(z \mid m_2)$.]
$$P_e(m_1) = P_e(m_2) = Q\!\left( \frac{\|\mathbf{s}_1 - \mathbf{s}_2\|/2}{\sqrt{N_0/2}} \right)$$
$$P_B = P_E(2) = Q\!\left( \sqrt{\frac{2E_b}{N_0}} \right)$$
Lecture 5 88
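The bit-error probability above can be evaluated, and checked by simulation, with a few lines of Python; the Q-function is written via the complementary error function (illustrative sketch, not from the lecture):

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

for ebn0_db in (0, 4, 8):
    ebn0 = 10 ** (ebn0_db / 10)
    pb_theory = Q(sqrt(2 * ebn0))                    # binary antipodal PAM / BPSK

    # Monte Carlo check: antipodal signaling +/- sqrt(Eb) in AWGN of variance N0/2
    rng = np.random.default_rng(3)
    bits = rng.integers(0, 2, 200_000)
    s = 2 * bits - 1.0                                # Eb = 1
    n = rng.normal(0, sqrt(1 / (2 * ebn0)), bits.size)
    pb_sim = np.mean((s + n > 0) != bits)
    print(f"Eb/N0={ebn0_db} dB  theory={pb_theory:.4f}  sim={pb_sim:.4f}")
```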
Union bound
Union bound
The probability of a finite union of events is upper bounded
by the sum of the probabilities of the individual events.
Lecture 5 89
Example of union bound
[Figure: four signal points s_1, ..., s_4 with their decision regions. For transmitted m_1,
$$P_e(m_1) = \int_{Z_2 \cup Z_3 \cup Z_4} p_{\mathbf{r}}(\mathbf{r} \mid m_1)\, d\mathbf{r}$$
Union bound:
$$P_e(m_1) \le \sum_{k=2}^{4} P_2(\mathbf{s}_k, \mathbf{s}_1), \qquad P_2(\mathbf{s}_k, \mathbf{s}_1) = \int_{A_k} p_{\mathbf{r}}(\mathbf{r} \mid m_1)\, d\mathbf{r}, \quad k = 2, 3, 4$$
where $A_k$ is the half-plane of points closer to $\mathbf{s}_k$ than to $\mathbf{s}_1$.]
Lecture 5 90
Upper bound based on minimum distance
$$P_E(M) \le \frac{1}{M} \sum_{i=1}^{M} \sum_{\substack{k=1 \\ k \ne i}}^{M} P_2(\mathbf{s}_k, \mathbf{s}_i) \le (M - 1)\, Q\!\left( \frac{d_{\min}/2}{\sqrt{N_0/2}} \right)$$
Minimum distance in the signal space:
$$d_{\min} = \min_{\substack{i,k \\ i \ne k}} d_{ik}$$
Lecture 5 91
Example of upper bound on av. Symbol
error prob. based on union bound
s i Ei Es , i 1,...,4 2 (t )
d i ,k 2 Es
ik Es s2
d min 2 Es d 2,3 d1, 2
s3 s1
1 (t )
Es Es
d 3, 4 d1, 4
s4
Es
Lecture 5 92
Eb/No figure of merit in digital
communications
SNR or S/N is the average signal power to the
average noise power. SNR should be modified
in terms of bit-energy in DCS, because:
Signals are transmitted within a symbol duration
and hence, are energy signal (zero power).
A merit at bit-level facilitates comparison of
different DCSs transmitting different number of bits
per symbol.
Lecture 5 93
Example of symbol error probability for PAM signals
[Figure: binary PAM constellation at $\pm\sqrt{E_b}$ on ψ_1(t); 4-ary PAM constellation s_4, s_3, s_2, s_1 at $-3\sqrt{E_g}, -\sqrt{E_g}, +\sqrt{E_g}, +3\sqrt{E_g}$; basis function $\psi_1(t) = 1/\sqrt{T}$ on 0..T.]
Lecture 5 94
Inter-Symbol Interference (ISI)
ISI in the detection process is due to the filtering effects of the system.
The overall equivalent system transfer function
$$H(f) = H_t(f)\, H_c(f)\, H_r(f)$$
creates echoes and hence time dispersion, which causes ISI at the sampling time:
$$z_k = s_k + n_k + \sum_{i \ne k} \alpha_i\, s_i$$
Lecture 6 95
Inter-symbol interference
Baseband system model:
[Block diagram: symbols x_k (period T) → Tx filter h_t(t), H_t(f) → channel h_c(t), H_c(f) with additive noise n(t) → Rx filter h_r(t), H_r(f) → r(t) sampled at t = kT → z_k → detector → x̂_k.]
Equivalent model:
[Block diagram: x_k → equivalent system h(t), H(f) → z(t) sampled at t = kT → z_k → detector → x̂_k, with filtered noise n̂(t).]
$$H(f) = H_t(f)\, H_c(f)\, H_r(f)$$
Lecture 6 96
Nyquist bandwidth constraint
Nyquist bandwidth constraint:
The theoretical minimum required system bandwidth to detect R_s [symbols/s] without ISI is R_s/2 [Hz].
Equivalently, a system with bandwidth W = 1/(2T) = R_s/2 [Hz] can support a maximum transmission rate of 2W = 1/T = R_s [symbols/s] without ISI:
$$\frac{R_s}{2} \le W \;\Rightarrow\; \frac{R_s}{W} \le 2 \;\text{[symbol/s/Hz]}$$
Bandwidth efficiency, R/W [bits/s/Hz]:
An important measure in DCS representing the data throughput per hertz of bandwidth.
Shows how efficiently the bandwidth resources are used by signaling techniques.
Lecture 6 97
Ideal Nyquist pulse (filter)
[Figure: ideal Nyquist filter H(f) = T for |f| ≤ 1/(2T), zero elsewhere, and the ideal Nyquist pulse h(t) = sinc(t/T) with zero crossings at multiples of T; W = 1/(2T).]
Lecture 6 98
Nyquist pulses (filters)
Nyquist pulses (filters):
Pulses (filters) which result in no ISI at the sampling time.
Nyquist filter:
Its transfer function in the frequency domain is obtained by convolving a rectangular function with any real even-symmetric frequency function.
Nyquist pulse:
Its shape can be represented by a sinc(t/T) function multiplied by another time function.
Example of Nyquist filters: Raised-Cosine filter
Lecture 6 99
Pulse shaping to reduce ISI
Goals and trade-off in pulse-shaping
Reduce ISI
Efficient bandwidth utilization
Robustness to timing error (small side lobes)
Lecture 6 100
The raised cosine filter
Raised-Cosine Filter: a Nyquist pulse (no ISI at the sampling time)
$$H(f) = \begin{cases} 1 & |f| < 2W_0 - W \\ \cos^2\!\left( \dfrac{\pi}{4}\, \dfrac{|f| + W - 2W_0}{W - W_0} \right) & 2W_0 - W < |f| < W \\ 0 & |f| > W \end{cases}$$
$$h(t) = 2W_0\, \mathrm{sinc}(2W_0 t)\, \frac{\cos[2\pi (W - W_0)t]}{1 - [4(W - W_0)t]^2}$$
Excess bandwidth: $W - W_0$; roll-off factor: $r = \dfrac{W - W_0}{W_0}$, with $0 \le r \le 1$.
Lecture 6 101
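A short sketch that evaluates the raised-cosine impulse response above (normalized so h(0) = 1, i.e. omitting the 2W_0 scale factor) for a few roll-off factors and confirms the zero-ISI property h(kT) = 0 for k ≠ 0; illustrative code with W_0 = 1/(2T):

```python
import numpy as np

def raised_cosine(t, T, r):
    """Raised-cosine impulse response h(t) with symbol period T and roll-off r."""
    t = np.asarray(t, dtype=float)
    den = 1.0 - (2.0 * r * t / T) ** 2
    safe = np.abs(den) > 1e-8
    h = np.zeros_like(t)
    # regular points: sinc(t/T) * cos(pi r t/T) / (1 - (2 r t/T)^2)
    h[safe] = np.sinc(t[safe] / T) * np.cos(np.pi * r * t[safe] / T) / den[safe]
    # singular points t = +/- T/(2r): use the limiting value
    if r > 0:
        h[~safe] = (np.pi / 4) * np.sinc(1.0 / (2.0 * r))
    return h

T = 1.0
k = np.arange(-4, 5)                          # sampling instants t = kT
for r in (0.0, 0.5, 1.0):
    print(f"r={r}:", np.round(raised_cosine(k * T, T, r), 6))  # 1 at k=0, 0 elsewhere
```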
The Raised cosine filter – cont’d
r 0.5
r 1
0.5 0.5
r 1 r 0.5
r 0
1 3 1 0 1 3 1 3T 2T T 0 T 2T 3T
T 4T 2T 2T 4T T
Rs
Baseband W sSB (1 r ) Passband W DSB (1 r ) Rs
2
Lecture 6 102
Pulse shaping and equalization to
remove ISI
No ISI at the sampling time:
$$H_{RC}(f) = H_t(f)\, H_c(f)\, H_r(f)\, H_e(f)$$
$$H_e(f) = \frac{1}{H_c(f)} \qquad \text{(takes care of the ISI caused by the channel)}$$
Lecture 6 103
Example of pulse shaping
Square-root Raised-Cosine (SRRC) pulse shaping
Amp. [V]
Third pulse
t/T
First pulse
Second pulse
Data symbol
Lecture 6 104
Example of pulse shaping …
Raised Cosine pulse at the output of matched filter
Amp. [V]
t/T
Lecture 6 105
Eye pattern
Eye pattern: a display on an oscilloscope which sweeps the system response to a baseband signal at the rate 1/T (T = symbol duration).
[Figure: eye diagram showing, on the amplitude scale, distortion due to ISI and the noise margin, and, on the time scale, sensitivity to timing error and timing jitter.]
Lecture 6 106
Example of eye pattern:
Binary-PAM, SRRQ pulse
Perfect channel (no noise and no ISI)
Lecture 6 107
Example of eye pattern:
Binary-PAM, SRRQ pulse …
AWGN (Eb/N0=20 dB) and no ISI
Lecture 6 108
Example of eye pattern:
Binary-PAM, SRRQ pulse …
AWGN (Eb/N0=10 dB) and no ISI
Lecture 6 109
Equalization – cont’d
z (T ) m̂i
r (t ) Frequency Receiving Equalizing Threshold
down-conversion filter filter comparison
Lecture 6 110
Equalization
ISI is due to the filtering effect of the communications channel (e.g. wireless channels).
Channels behave like band-limited filters:
$$H_c(f) = |H_c(f)|\, e^{j\theta_c(f)}$$
Lecture 6 111
Equalization: Channel examples
Example of a frequency selective, slowly changing (slow fading)
channel for a user at 35 km/h
Lecture 6 112
Equalization: Channel examples …
Example of a frequency selective, fast changing (fast fading)
channel for a user at 35 km/h
Lecture 6 113
Example of eye pattern with ISI:
Binary-PAM, SRRQ pulse
Non-ideal channel and no noise
hc (t ) (t ) 0.7 (t T )
Lecture 6 114
Example of eye pattern with ISI:
Binary-PAM, SRRQ pulse …
AWGN (Eb/N0=20 dB) and ISI
hc (t ) (t ) 0.7 (t T )
Lecture 6 115
Example of eye pattern with ISI:
Binary-PAM, SRRQ pulse …
AWGN (Eb/N0=10 dB) and ISI
hc (t ) (t ) 0.7 (t T )
Lecture 6 116
Equalizing filters …
Baseband system model
a1
a (t kT ) Tx filter
k Channel r (t ) Equalizer Rx. filter z (t ) z k âk
k ht (t ) hc (t ) he (t ) hr (t ) Detector
t kT
Ta a Ht ( f ) Hc ( f ) He ( f ) Hr ( f )
2 3
n(t )
Equivalent model
H ( f ) Ht ( f )H c ( f )H r ( f )
a1
Equivalent system zk âk
a (t kT )
k
h(t )
z (t ) x(t ) Equalizer z (t )
k he (t ) Detector
t kT
Ta a H( f ) He ( f )
2 3 nˆ (t )
filtered noise
nˆ (t ) n(t ) hr (t )
Lecture 6 117
Equalization – cont’d
Equalization using
MLSE (Maximum likelihood sequence estimation)
Filtering – See notes on
z-Transform and Digital Filters
Transversal filtering
Zero-forcing equalizer
Decision feedback
Using the past decisions to remove the ISI contributed
by them
Adaptive equalizer
Lecture 6 118
Equalization by transversal filtering
Transversal filter:
A weighted tap-delay line that reduces the effect of ISI by proper adjustment of the filter taps:
$$z(t_k) = \sum_{n=-N}^{N} c_n\, x(t_k - n\tau), \qquad n = -N, \ldots, N, \quad k = -2N, \ldots, 2N$$
[Block diagram: x(t) fed through a delay line with coefficients c_{-N}, ..., c_N whose weighted sum forms z(t); a coefficient-adjustment block updates the taps.]
Lecture 6 119
Transversal equalizing filter …
Zero-forcing equalizer:
The filter taps are adjusted such that the equalizer output is forced to be zero at N sample points on each side of the peak:
$$\text{Adjust } \{c_n\}_{n=-N}^{N} \text{ such that } z(k) = \begin{cases} 1 & k = 0 \\ 0 & k = \pm 1, \ldots, \pm N \end{cases}$$
Minimum mean-square-error (MSE) equalizer:
$$\text{Adjust } \{c_n\}_{n=-N}^{N} \text{ to minimize } E\big\{ (z(kT) - a_k)^2 \big\}$$
Lecture 6 120
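A compact zero-forcing sketch (illustrative, not from the lecture): given samples of the combined channel-plus-filter impulse response x(k), solve for the 2N+1 taps that force z(k) = δ_k at the 2N+1 centre samples.

```python
import numpy as np

def zero_forcing_taps(x, N):
    """Solve for 2N+1 equalizer taps forcing z(k)=delta_k for k=-N..N.

    x: channel-plus-filter samples x(k), assumed centered at index len(x)//2.
    """
    c0 = len(x) // 2
    # Convolution matrix: z(k) = sum_n c_n * x(k - n)
    A = np.array([[x[c0 + k - n] for n in range(-N, N + 1)]
                  for k in range(-N, N + 1)], dtype=float)
    d = np.zeros(2 * N + 1); d[N] = 1.0          # desired z(k) = delta_k
    return np.linalg.solve(A, d)

# Illustrative channel with a postcursor echo: h_c(t) = delta(t) + 0.3 delta(t - T)
x = np.array([0.0, 0.0, 1.0, 0.3, 0.0])          # samples of x(k), centre at index 2
c = zero_forcing_taps(x, N=1)
z = np.convolve(x, c)
print("taps:", np.round(c, 4))
print("equalized samples:", np.round(z, 4))       # ~[... 0 1 0 ...] near the centre
```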
Example of equalizer
2-PAM with SRRQ Matched filter outputs at the sampling time
Non-ideal channel
hc (t ) (t ) 0.3 (t T )
One-tap DFE
ISI-no noise,
No equalizer
ISI-no noise,
DFE equalizer
ISI- noise
No equalizer
ISI- noise
DFE equalizer
Lecture 6 121
Block diagram of a DCS
Digital modulation
Channel
Digital demodulation
Lecture 7 122
Bandpass modulation
Bandpass modulation: The process of converting a
data signal to a sinusoidal waveform where its
amplitude, phase or frequency, or a combination of
them, are varied in accordance with the transmitting
data.
Bandpass signal:
$$s_i(t) = \sqrt{\frac{2E_i}{T}}\, g_T(t)\, \cos\!\big(\omega_c t + (i-1)\Delta\omega\, t + \phi_i(t)\big), \qquad 0 \le t \le T$$
where
g_T(t) is the baseband pulse shape with energy E_g.
We assume here (unless otherwise stated):
g_T(t) is a rectangular pulse shape with unit energy.
Gray coding is used for mapping bits to symbols.
E_s denotes the average symbol energy, given by $E_s = \frac{1}{M} \sum_{i=1}^{M} E_i$.
Lecture 7 123
Demodulation and detection
Demodulation: The receiver signal is converted to
baseband, filtered and sampled.
Detection: Sampled values are used for detection
using a decision rule such as the ML detection rule.
1 (t )
T z1
r (t )
0
z1 Decision
z
z
circuits m̂
N (t ) (ML detector)
T z N
0 zN
Lecture 7 124
Coherent detection
Coherent detection
requires carrier phase recovery at the receiver and
hence, circuits to perform phase estimation.
Sources of carrier-phase mismatch at the receiver:
Propagation delay causes carrier-phase offset in the
received signal.
The oscillators at the receiver which generate the carrier
signal, are not usually phased locked to the transmitted
carrier.
Lecture 7 125
Coherent detection ..
Circuits such as Phase-Locked-Loop (PLL) are
implemented at the receiver for carrier phase
estimation ( ).ˆ
I branch
2 Ei 2
r (t ) gT (t ) cosi t i (t ) n(t ) cosc t ˆ
T T
PLL
Used by
Oscillator 90 deg. correlators
2
sin ct ˆ
T
Q branch
Lecture 7 126
Bandpass Modulation Schemes
One dimensional waveforms
Amplitude Shift Keying (ASK)
M-ary Pulse Amplitude Modulation (M-PAM)
Two dimensional waveforms
M-ary Phase Shift Keying (M-PSK)
M-ary Quadrature Amplitude Modulation (M-QAM)
Multidimensional waveforms
M-ary Frequency Shift Keying (M-FSK)
Lecture 7 127
One dimensional modulation,
demodulation and detection
Amplitude Shift Keying (ASK) modulation:
$$s_i(t) = \sqrt{\frac{2E_i}{T}} \cos(\omega_c t) = a_i\, \psi_1(t), \qquad i = 1, \ldots, M$$
$$\psi_1(t) = \sqrt{\frac{2}{T}} \cos(\omega_c t), \qquad a_i = \sqrt{E_i}$$
On-off keying (M = 2):
[Constellation: s_2 = 0 for "0" and s_1 = √E_1 for "1" on the ψ_1 axis.]
Lecture 7 128
One dimensional mod.,…
M-ary Pulse Amplitude Modulation (M-PAM):
$$s_i(t) = a_i \sqrt{\frac{2}{T}} \cos(\omega_c t) = a_i\, \psi_1(t), \qquad i = 1, \ldots, M$$
$$\psi_1(t) = \sqrt{\frac{2}{T}} \cos(\omega_c t), \qquad a_i = (2i - 1 - M)\sqrt{E_g}$$
$$E_i = |\mathbf{s}_i|^2 = (2i - 1 - M)^2 E_g, \qquad E_s = \frac{(M^2 - 1)}{3}\, E_g$$
4-PAM example:
[Constellation: s_1, ..., s_4 at $-3\sqrt{E_g}, -\sqrt{E_g}, +\sqrt{E_g}, +3\sqrt{E_g}$, Gray-labeled "00", "01", "11", "10".]
Lecture 7 129
Example of bandpass modulation:
Binary PAM
Lecture 7 130
One dimensional mod.,...–cont’d
Coherent detection of M-PAM
[Block diagram: r(t) correlated with ψ_1(t) over 0..T to give z_1, fed to an ML detector (compare with M-1 thresholds) to obtain m̂.]
Lecture 7 131
Two dimensional modulation,
demodulation and detection (M-PSK)
M-ary Phase Shift Keying (M-PSK):
$$s_i(t) = \sqrt{\frac{2E_s}{T}} \cos\!\left( \omega_c t + \frac{2\pi i}{M} \right) = a_{i1}\, \psi_1(t) + a_{i2}\, \psi_2(t), \qquad i = 1, \ldots, M$$
$$\psi_1(t) = \sqrt{\frac{2}{T}} \cos(\omega_c t), \qquad \psi_2(t) = \sqrt{\frac{2}{T}} \sin(\omega_c t)$$
$$a_{i1} = \sqrt{E_s} \cos\!\left( \frac{2\pi i}{M} \right), \qquad a_{i2} = \sqrt{E_s} \sin\!\left( \frac{2\pi i}{M} \right), \qquad E_s = E_i = |\mathbf{s}_i|^2$$
Lecture 7 132
Two dimensional mod.,… (MPSK)
BPSK (M=2)
2 (t )
“0” “1”
8PSK (M=8)
s1 s2
2 (t )
Eb Eb 1 (t ) s3 “011”
“010” “001”
s4 s2
QPSK (M=4) Es
2 (t ) “110” s“000”
1
“01”
s2 “00”
s1 s5 1 (t )
“111” “100”
Es
s6 s8
1 (t )
“101” s7
s3 “11” “10”
s4
Lecture 7 133
Two dimensional mod.,…(MPSK)
Coherent detection of M-PSK:
[Block diagram: r(t) is correlated with ψ_1(t) and ψ_2(t) over 0..T to give z_1 and z_2; compute $\hat{\phi} = \arctan(z_2/z_1)$ and choose the symbol with the smallest $|\phi_i - \hat{\phi}|$ to obtain m̂.]
Lecture 7 134
Two dimensional mod.,… (M-QAM)
M-ary Quadrature Amplitude Modulation (M-QAM):
$$s_i(t) = \sqrt{\frac{2E_i}{T}} \cos(\omega_c t + \phi_i) = a_{i1}\, \psi_1(t) + a_{i2}\, \psi_2(t), \qquad i = 1, \ldots, M$$
$$\psi_1(t) = \sqrt{\frac{2}{T}} \cos(\omega_c t), \qquad \psi_2(t) = \sqrt{\frac{2}{T}} \sin(\omega_c t)$$
where $a_{i1}$ and $a_{i2}$ are PAM symbols and $E_s = \frac{2(M-1)}{3}$ (for unit-spaced levels). For square M-QAM the coordinate pairs $(a_{i1}, a_{i2})$ take the values
$$\{-\sqrt{M}+1, -\sqrt{M}+3, \ldots, \sqrt{M}-1\} \times \{-\sqrt{M}+1, -\sqrt{M}+3, \ldots, \sqrt{M}-1\}$$
Lecture 7 135
Two dimensional mod.,… (M-QAM)
16-QAM
2 (t )
“0000” “0001” “0011” “0010”
s1 s2 3
s3 s4
s13 s14 -3
s15 s
16
“0100” “0101” “0111” “0110”
Lecture 7 136
Two dimensional mod.,… (M-QAM)
1 (t )
T z1
ML detector
0
(Compare with M 1 thresholds)
r (t ) Parallel-to-serial
m̂
converter
2 (t )
T z2
ML detector
0
(Compare with M 1 thresholds)
Lecture 7 137
Multi-dimensional modulation, demodulation &
detection
M-ary Frequency Shift Keying (M-FSK):
$$s_i(t) = \sqrt{\frac{2E_s}{T}} \cos(\omega_i t) = \sqrt{\frac{2E_s}{T}} \cos\!\big(\omega_c t + (i-1)\Delta\omega\, t\big), \qquad \Delta f = \frac{1}{2T}$$
$$s_i(t) = \sum_{j=1}^{M} a_{ij}\, \psi_j(t), \qquad \psi_i(t) = \sqrt{\frac{2}{T}} \cos(\omega_i t), \qquad a_{ij} = \begin{cases} \sqrt{E_s} & i = j \\ 0 & i \ne j \end{cases}$$
$$E_s = E_i = |\mathbf{s}_i|^2$$
[Constellation: orthogonal signal points s_1, s_2, s_3 at distance √E_s along the ψ_1, ψ_2, ψ_3 axes.]
Lecture 7 138
Multi-dimensional mod.,…(M-FSK)
1 (t )
T z1
r (t )
0
z1 ML detector:
z
z Choose
the largest element m̂
M (t ) in the observed vector
T z M
0 zM
Lecture 7 139
Non-coherent detection
Non-coherent detection:
No need for a reference in phase with the received
carrier
Less complexity compared to coherent detection at
the price of higher error rate.
Lecture 7 140
Non-coherent detection …
Differential coherent detection
Differential encoding of the message:
The symbol phase changes if the current bit is different from the previous bit.
$$s_i(t) = \sqrt{\frac{2E}{T}} \cos\!\big(\omega_0 t + \phi_i(t)\big), \qquad 0 \le t \le T, \quad i = 1, \ldots, M$$
$$\theta_k(nT) = \theta_k\big((n-1)T\big) + \phi_i(nT)$$
Example (binary DPSK):
Symbol index k:      0 1 2 3 4 5 6 7
Data bits m_k:         1 1 0 1 0 1 1
Diff. encoded bits:  1 1 1 0 0 1 1 1
Symbol phase θ_k:   0 0 0 π π 0 0 0
[Constellation: s_2 at phase π ("0") and s_1 at phase 0 ("1") on ψ_1.]
Lecture 7 141
Non-coherent detection …
Coherent detection of differentially encoded modulation:
assumes slow variation of the carrier-phase mismatch over two symbol intervals;
correlates the received signal with the basis functions;
uses the phase difference between the current received vector and the previously estimated symbol:
$$r(t) = \sqrt{\frac{2E}{T}} \cos\!\big(\omega_0 t + \phi_i(t)\big) + n(t), \qquad 0 \le t \le T$$
$$\theta_i(nT) - \theta_j\big((n-1)T\big) = \phi_i(nT) \qquad \text{(the common carrier-phase offset cancels in the difference)}$$
[Figure: received vectors (a_1, b_1) and (a_2, b_2) in the (ψ_1, ψ_2) plane separated by the information phase.]
Lecture 7 142 1 (t )
Non-coherent detection …
Optimum differentially coherent detector
1 (t )
T
r (t )
0
Decision m̂
Delay
T
Sub-optimum differentially coherent detector
T
r (t )
0
Decision m̂
Delay
T
Performance degradation about 3 dB by using sub-
optimal detector
Lecture 7 143
Non-coherent detection …
Energy detection
Non-coherent detection for orthogonal signals (e.g. M-
FSK)
Lecture 7 144
Non-coherent detection …
Non-coherent detection of BFSK:
[Block diagram: r(t) feeds four correlators with references $\sqrt{2/T}\cos(\omega_1 t)$, $\sqrt{2/T}\sin(\omega_1 t)$, $\sqrt{2/T}\cos(\omega_2 t)$, $\sqrt{2/T}\sin(\omega_2 t)$, integrated over 0..T to give $z_{11}, z_{12}, z_{21}, z_{22}$; the envelopes $\sqrt{z_{11}^2 + z_{12}^2}$ and $\sqrt{z_{21}^2 + z_{22}^2}$ are subtracted to form z(T).]
Decision stage:
$$\text{if } z(T) > 0 \Rightarrow \hat{m} = 1, \qquad \text{if } z(T) < 0 \Rightarrow \hat{m} = 0$$
Lecture 7 145
Example of two dim. modulation
2 (t )
16QAM 8PSK s3 “011”
“010” “001”
2 (t ) s4 s2
“0000” “0001” “0011” “0010”
s1 s2 s3 s4 Es
3
“110” s“000”
1
“1000” “1001” “1011” “1010” s5 1 (t )
s5 s6 s7 s8
1 “111” “100”
-3 -1 1 3
1 (t )
s6 s8
s9 s10 -1
s11 12s “101” s7
2 (t )
“1100” “1101” “1111” “1110”
QPSK “00”
s 2“01” s1
s13 s14 -3
s15 s
16
“0100” “0101” “0111” “0110” Es
1 (t )
Lecture 8 147
Error probability of bandpass modulation
1 (t )
T r1
r (t )
0
r1 Decision
r
r Circuits
m̂
N (t ) Compare z
T rN with threshold.
0 rN
Lecture 8 148
Error probability …
The matched filter outputs (observation vector r) form the detector input, and the decision variable is a function of r, i.e. z = f(r):
For M-PAM, M-QAM and M-FSK with coherent detection, z = r.
For M-PSK with coherent detection, z = ∠r (the phase of r).
Lecture 8 149
Error probability …
AWGN channel model: $\mathbf{r} = \mathbf{s}_i + \mathbf{n}$
The signal vector $\mathbf{s}_i = (a_{i1}, \ldots, a_{iN})$ is deterministic.
The elements of the noise vector $\mathbf{n} = (n_1, \ldots, n_N)$ are i.i.d. Gaussian random variables with zero mean and variance $N_0/2$. The noise vector's pdf is
$$p_{\mathbf{n}}(\mathbf{n}) = \frac{1}{(\pi N_0)^{N/2}} \exp\!\left( -\frac{\|\mathbf{n}\|^2}{N_0} \right)$$
The elements of the observed vector $\mathbf{r} = (r_1, \ldots, r_N)$ are independent Gaussian random variables. Its pdf is
$$p_{\mathbf{r}}(\mathbf{r} \mid \mathbf{s}_i) = \frac{1}{(\pi N_0)^{N/2}} \exp\!\left( -\frac{\|\mathbf{r} - \mathbf{s}_i\|^2}{N_0} \right)$$
Lecture 8 150
Error probability …
BPSK (antipodal): $\mathbf{s}_1, \mathbf{s}_2$ at $\pm\sqrt{E_b}$ on ψ_1, so $\|\mathbf{s}_1 - \mathbf{s}_2\| = 2\sqrt{E_b}$ and
$$P_B = Q\!\left( \sqrt{\frac{2E_b}{N_0}} \right)$$
BFSK (orthogonal): $\mathbf{s}_1, \mathbf{s}_2$ at $\sqrt{E_b}$ on orthogonal axes, so $\|\mathbf{s}_1 - \mathbf{s}_2\| = \sqrt{2E_b}$ and
$$P_B = Q\!\left( \sqrt{\frac{E_b}{N_0}} \right)$$
Lecture 8 151
Error probability …
Non-coherent detection of BFSK
2 / T cos(1t )
Decision variable:
Difference of envelopes
T r11
2 z z1 z 2
0 2 2
z1 r11 r12
2 / T sin(1t )
T r12
r (t )
0
2 + z
Decision rule:
m̂
2 / T cos( 2t ) if z (T ) 0, mˆ 1
T r21 - if z (T ) 0, mˆ 0
2
0
2 2
2 / T sin(2t ) z 2 r21 r22
T r22
0
2
Lecture 8 152
Error probability – cont’d
Non-coherent detection of BFSK …
$$P_B = \tfrac{1}{2}\Pr(z_1 > z_2 \mid \mathbf{s}_2) + \tfrac{1}{2}\Pr(z_2 > z_1 \mid \mathbf{s}_1)$$
$$\Pr(z_1 > z_2 \mid \mathbf{s}_2) = E\big\{ \Pr(z_1 > z_2 \mid \mathbf{s}_2, z_2) \big\} = \int_0^{\infty} \left[ \int_{z_2}^{\infty} p(z_1 \mid \mathbf{s}_2)\, dz_1 \right] p(z_2 \mid \mathbf{s}_2)\, dz_2$$
$$P_B = \frac{1}{2} \exp\!\left( -\frac{E_b}{2N_0} \right)$$
Lecture 8 153
Error probability ….
Coherent detection of M-PAM
Decision variable: $z = r_1$
[Figure: 4-PAM constellation s_1, ..., s_4 at $-3\sqrt{E_g}, -\sqrt{E_g}, +\sqrt{E_g}, +3\sqrt{E_g}$, Gray-labeled "00", "01", "11", "10"; the receiver correlates r(t) with ψ_1(t) and an ML detector compares r_1 with M-1 thresholds to give m̂.]
Lecture 8 154
Error probability ….
Coherent detection of M-PAM (continued):
An error happens if the noise, $n_1 = r_1 - s_m$, exceeds in amplitude one-half of the distance between adjacent symbols. For symbols on the border, an error can happen only in one direction. Hence:
$$P_e(\mathbf{s}_m) = \Pr\big( |n_1| = |r_1 - s_m| > \sqrt{E_g} \big) \quad \text{for } 2 \le m \le M-1$$
$$P_e(\mathbf{s}_1) = \Pr\big( n_1 > \sqrt{E_g} \big), \qquad P_e(\mathbf{s}_M) = \Pr\big( n_1 < -\sqrt{E_g} \big)$$
$$P_E(M) = \frac{1}{M} \sum_{m=1}^{M} P_e(\mathbf{s}_m) = \frac{M-2}{M} \Pr\big( |n_1| > \sqrt{E_g} \big) + \frac{2}{M} \Pr\big( n_1 > \sqrt{E_g} \big) = \frac{2(M-1)}{M} \Pr\big( n_1 > \sqrt{E_g} \big)$$
Since $n_1$ is Gaussian with zero mean and variance $N_0/2$,
$$P_E(M) = \frac{2(M-1)}{M}\, Q\!\left( \sqrt{\frac{2E_g}{N_0}} \right)$$
Using $E_s = (\log_2 M)\, E_b = \dfrac{(M^2 - 1)}{3} E_g$:
$$P_E(M) = \frac{2(M-1)}{M}\, Q\!\left( \sqrt{\frac{6 \log_2 M}{M^2 - 1} \cdot \frac{E_b}{N_0}} \right)$$
Lecture 8 155
Error probability …
Coherent detection
2 (t )
of M-QAM “0000”
s1
“0001”
s2 s 3“0011”s 4 “0010”
“1000”
s “1001”
s s 7“1011”s8 “1010”
5 6
16-QAM 1 (t )
s9 s10 s11 s12
“1100” “1101” “1111” “1110”
r (t ) Parallel-to-serial m̂
converter
2 (t )
T r2 ML detector
0 (Compare with M 1 thresholds)
Lecture 8 156
Error probability …
Coherent detection of M-QAM …
M-QAM can be viewed as the combination of two √M-ary PAM modulations, on the I and Q branches respectively.
A symbol is correct only if no error is detected on either the I or the Q branch.
Considering the symmetry of the signal space and the orthogonality of the I and Q branches:
$$P_E(M) \approx 4\left( 1 - \frac{1}{\sqrt{M}} \right) Q\!\left( \sqrt{\frac{3 \log_2 M}{M - 1} \cdot \frac{E_b}{N_0}} \right)$$
(based on the average probability of symbol error for √M-ary PAM)
Lecture 8 157
Error probability …
Coherent detection
of MPSK 2 (t )
s 3 “011”
“010”
s4 s“001”
2
Es
“110”
s“000”
1
8-PSK s5 1 (t )
“111” s8“100”
1 (t ) s 6
T r1 “101”s 7
0
r (t ) r1 ˆ m̂
arctan Compute Choose
2 (t ) r2 | i ˆ | smallest
T
0
r2 Decision variable
z ˆ r
Lecture 8 158
Error probability …
Coherent detection of M-PSK (continued):
The detector compares the phase of the observation vector with M-1 thresholds.
Due to the circular symmetry of the signal space, we have:
$$P_E(M) = 1 - P_C(M) = 1 - \frac{1}{M} \sum_{m=1}^{M} P_c(\mathbf{s}_m) = 1 - P_c(\mathbf{s}_1) = 1 - \int_{-\pi/M}^{\pi/M} p_{\hat{\phi}}(\phi)\, d\phi$$
where
$$p_{\hat{\phi}}(\phi) \approx \sqrt{\frac{2E_s}{\pi N_0}}\, \cos(\phi)\, \exp\!\left( -\frac{E_s}{N_0} \sin^2 \phi \right), \qquad |\phi| \le \frac{\pi}{2}$$
It can be shown that
$$P_E(M) \approx 2\, Q\!\left( \sqrt{\frac{2E_s}{N_0}} \sin\frac{\pi}{M} \right) = 2\, Q\!\left( \sqrt{\frac{2 \log_2 M\, E_b}{N_0}} \sin\frac{\pi}{M} \right)$$
Lecture 8 159
Error probability …
Coherent detection of M-FSK
1 (t )
T r1
r (t )
0
r1 ML detector:
r
r Choose
the largest element m̂
M (t ) in the observed vector
T rM
0 rM
Lecture 8 160
Error probability …
Coherent detection of M-FSK (continued):
The dimensionality of the signal space is M. An upper bound for the average symbol error probability can be obtained by using the union bound:
$$P_E(M) \le (M - 1)\, Q\!\left( \sqrt{\frac{E_s}{N_0}} \right) = (M - 1)\, Q\!\left( \sqrt{\frac{\log_2 M\, E_b}{N_0}} \right)$$
Lecture 8 161
Bit error probability versus symbol error
probability
Number of bits per symbol: $k = \log_2 M$
For orthogonal M-ary signaling (M-FSK):
$$\frac{P_B}{P_E} = \frac{2^{k-1}}{2^k - 1} = \frac{M/2}{M - 1}, \qquad \lim_{k \to \infty} \frac{P_B}{P_E} = \frac{1}{2}$$
For M-PSK, M-PAM and M-QAM (with Gray coding):
$$P_B \approx \frac{P_E}{k} \qquad \text{for } P_E \ll 1$$
Lecture 8 162
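A short script collecting the closed-form symbol-error expressions given above (M-PSK, M-PAM, M-QAM, and the M-FSK union bound) and converting them to bit-error probability; illustrative only.

```python
from math import erfc, sqrt, log2, sin, pi

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def pe_mpsk(M, ebn0): return 2 * Q(sqrt(2 * log2(M) * ebn0) * sin(pi / M))
def pe_mpam(M, ebn0): return 2 * (M - 1) / M * Q(sqrt(6 * log2(M) / (M**2 - 1) * ebn0))
def pe_mqam(M, ebn0): return 4 * (1 - 1 / sqrt(M)) * Q(sqrt(3 * log2(M) / (M - 1) * ebn0))
def pe_mfsk(M, ebn0): return (M - 1) * Q(sqrt(log2(M) * ebn0))   # union bound

ebn0 = 10 ** (10 / 10)                                 # Eb/N0 = 10 dB
for name, M, pe_fn in [("PSK", 8, pe_mpsk), ("PAM", 4, pe_mpam),
                       ("QAM", 16, pe_mqam), ("FSK", 8, pe_mfsk)]:
    pe = pe_fn(M, ebn0)
    if name == "FSK":
        pb = pe * (M / 2) / (M - 1)                    # orthogonal signaling
    else:
        pb = pe / log2(M)                              # PB ~ PE/k with Gray coding
    print(f"{M}-{name}: PE = {pe:.2e}, PB ~ {pb:.2e}")
```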
Probability of symbol error for binary
modulation
Note!
• “The same average symbol
PE energy for different sizes of
signal space”
Eb / N 0 dB
Lecture 8 163
Probability of symbol error for M-PSK
Note!
• “The same average symbol
energy for different sizes of
PE signal space”
Eb / N 0 dB
Lecture 8 164
Probability of symbol error for M-FSK
Note!
• “The same average symbol
energy for different sizes of
PE signal space”
Eb / N 0 dB
Lecture 8 165
Probability of symbol error for M-
PAM
Note!
• “The same average symbol
energy for different sizes of
PE signal space”
Eb / N 0 dB
Lecture 8 166
Probability of symbol error for M-
QAM
Note!
• “The same average symbol
Eb / N 0 dB
Lecture 8 167
Example of samples of matched filter output
for some bandpass modulation schemes
Lecture 8 168
Block diagram of a DCS
Digital modulation
Channel
Digital demodulation
Lecture 9 169
What is channel coding?
Channel coding:
Transforming signals to improve communications
performance by increasing the robustness against
channel impairments (noise, interference, fading, ...)
Waveform coding: Transforming waveforms to better
waveforms
Structured sequences: Transforming data sequences into
better sequences, having structured redundancy.
-“Better” in the sense of making the decision process
less subject to errors.
Lecture 9 170
Error control techniques
Automatic Repeat reQuest (ARQ)
Full-duplex connection, error detection codes
The receiver sends feedback to the transmitter indicating whether an error was detected in the received packet (Negative Acknowledgement, NACK) or not (Acknowledgement, ACK).
The transmitter retransmits the previously sent packet if it receives a NACK.
Forward Error Correction (FEC)
Simplex connection, error correction codes
The receiver tries to correct some errors
Hybrid ARQ (ARQ+FEC)
Full-duplex, error detection and correction codes
Lecture 9 171
Why use error correction coding?
Error performance vs. bandwidth
Power vs. bandwidth
Data rate vs. bandwidth
Capacity vs. bandwidth
[Figure: P_B versus E_b/N_0 (dB) for coded and uncoded systems; points A-F illustrate the trade-offs.]
Coding gain:
For a given bit-error probability, the reduction in the E_b/N_0 that can be realized through the use of the code:
$$G\,[\text{dB}] = \left( \frac{E_b}{N_0} \right)_u [\text{dB}] - \left( \frac{E_b}{N_0} \right)_c [\text{dB}]$$
Lecture 9 172
Channel models
Discrete memory-less channels
Discrete input, discrete output
Binary Symmetric channels
Binary input, binary output
Gaussian channels
Discrete input, continuous output
Lecture 9 173
Linear block codes
Lecture 9 174
Some definitions
Binary field:
The set {0, 1}, under modulo-2 addition and multiplication, forms a field.
Addition:  0 ⊕ 0 = 0,  0 ⊕ 1 = 1,  1 ⊕ 0 = 1,  1 ⊕ 1 = 0
Multiplication:  0 · 0 = 0,  0 · 1 = 0,  1 · 0 = 0,  1 · 1 = 1
The binary field is also called the Galois field, GF(2).
Lecture 9 175
Some definitions…
Fields:
Let F be a set of objects on which two operations '+' and '·' are defined.
F is said to be a field if and only if
1. F forms a commutative group under the + operation. The additive identity element is labeled "0": $a, b \in F \Rightarrow a + b = b + a \in F$
2. F − {0} forms a commutative group under the · operation. The multiplicative identity element is labeled "1": $a, b \in F \Rightarrow a \cdot b = b \cdot a \in F$
3. The operations distribute: $a \cdot (b + c) = (a \cdot b) + (a \cdot c)$
Lecture 9 176
Some definitions…
Vector space:
Let V be a set of vectors and F a field of elements called scalars. V forms a vector space over F if:
1. Commutative: $\mathbf{u}, \mathbf{v} \in V \Rightarrow \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} \in V$
2. Closure under scalar multiplication: $a \in F, \mathbf{v} \in V \Rightarrow a \cdot \mathbf{v} \in V$
3. Distributive: $(a + b) \cdot \mathbf{v} = a \cdot \mathbf{v} + b \cdot \mathbf{v}$ and $a \cdot (\mathbf{u} + \mathbf{v}) = a \cdot \mathbf{u} + a \cdot \mathbf{v}$
4. Associative: $a, b \in F, \mathbf{v} \in V \Rightarrow (a \cdot b) \cdot \mathbf{v} = a \cdot (b \cdot \mathbf{v})$
5. Identity: $\forall \mathbf{v} \in V,\; 1 \cdot \mathbf{v} = \mathbf{v}$
Lecture 9 177
Some definitions…
Examples of vector spaces
The set of binary n-tuples, denoted by Vn
Example:
{(0000), (0101), (1010), (1111)} is a subspace of V4 .
Lecture 9 178
Some definitions…
Spanning set:
A collection of vectors $G = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is said to be a spanning set for V, or to span V, if linear combinations of the vectors in G include all vectors in the vector space V.
Example: {(1000), (0110), (1100), (0011), (1001)} spans V_4.
Bases:
The spanning set of V that has minimal cardinality is called the basis for V. (The cardinality of a set is the number of objects in the set.)
Example: {(1000), (0100), (0010), (0001)} is a basis for V_4.
Lecture 9 179
Linear block codes
Linear block code (n, k):
A set $C \subset V_n$ with cardinality $2^k$ is called a linear block code if, and only if, it is a subspace of the vector space $V_n$:
$$V_k \rightarrow C \subset V_n$$
Members of C are called codewords.
The all-zero codeword is a codeword.
Any linear combination of codewords is a codeword.
Lecture 9 180
Linear block codes – cont’d
mapping Vn
Vk
C
Bases of C
Lecture 9 181
Linear block codes – cont’d
The information bit stream is chopped into blocks of k bits.
Each block is encoded to a larger block of n bits.
The coded bits are modulated and sent over the channel.
The reverse procedure is done at the receiver.
Channel
Data block Codeword
encoder
k bits n bits
Lecture 9 182
Linear block codes – cont’d
The Hamming weight of the vector U, denoted by w(U), is the number of non-zero elements in U.
The Hamming distance between two vectors U and V is the number of elements in which they differ:
$$d(\mathbf{U}, \mathbf{V}) = w(\mathbf{U} \oplus \mathbf{V})$$
Lecture 9 183
Linear block codes – cont’d
Error-detecting capability: $e = d_{\min} - 1$
Error-correcting capability: $t = \left\lfloor \dfrac{d_{\min} - 1}{2} \right\rfloor$
Lecture 9 184
Linear block codes – cont’d
For a t-error-correcting (n, k) code over a channel with transition probability p, the bit-error probability is approximately
$$P_B \approx \frac{1}{n} \sum_{j=t+1}^{n} j \binom{n}{j} p^j (1 - p)^{n-j}$$
Lecture 9 185
Linear block codes – cont’d
Discrete, memoryless, symmetric channel model:
[Diagram: Tx bit 1 → Rx bit 1 with probability 1-p, Tx bit 1 → Rx bit 0 with probability p, and symmetrically for Tx bit 0.]
Note that for coded systems, the coded bits are modulated and transmitted over the channel. For example, for M-PSK modulation on AWGN channels (M > 2):
$$p \approx \frac{2}{\log_2 M}\, Q\!\left( \sqrt{\frac{2 \log_2 M\, E_c}{N_0}} \sin\frac{\pi}{M} \right) = \frac{2}{\log_2 M}\, Q\!\left( \sqrt{\frac{2 \log_2 M\, E_b R_c}{N_0}} \sin\frac{\pi}{M} \right)$$
where $E_c$ is the energy per coded bit, given by $E_c = R_c E_b$.
Lecture 9 186
Linear block codes –cont’d
mapping Vn
Vk
C
Lecture 9 187
Linear block codes – cont’d
Encoding in an (n, k) block code:
$$\mathbf{U} = \mathbf{m}\mathbf{G}$$
$$(u_1, u_2, \ldots, u_n) = (m_1, m_2, \ldots, m_k) \begin{bmatrix} \mathbf{V}_1 \\ \mathbf{V}_2 \\ \vdots \\ \mathbf{V}_k \end{bmatrix} = m_1 \mathbf{V}_1 + m_2 \mathbf{V}_2 + \cdots + m_k \mathbf{V}_k$$
The rows of G are linearly independent.
Lecture 9 188
Linear block codes – cont’d
Example: Block code (6, 3)
$$\mathbf{G} = \begin{bmatrix} \mathbf{V}_1 \\ \mathbf{V}_2 \\ \mathbf{V}_3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}$$
Message → codeword:
000 → 000000, 100 → 110100, 010 → 011010, 110 → 101110, 001 → 101001, 101 → 011101, 011 → 110011, 111 → 000111
Lecture 9 189
Linear block codes – cont’d
Systematic block code (n,k)
For a systematic code, the first (or last) k elements in the codeword are the information bits.
$$\mathbf{G} = [\mathbf{P} \mid \mathbf{I}_k]$$
$\mathbf{I}_k$ = k × k identity matrix, $\mathbf{P}$ = k × (n − k) matrix
Lecture 9 190
Linear block codes – cont’d
$$\mathbf{H} = [\mathbf{I}_{n-k} \mid \mathbf{P}^T]$$
Lecture 9 191
Linear block codes – cont’d
$$\mathbf{r} = \mathbf{U} + \mathbf{e}$$
$\mathbf{r} = (r_1, r_2, \ldots, r_n)$: received codeword (vector)
$\mathbf{e} = (e_1, e_2, \ldots, e_n)$: error pattern (vector)
Syndrome testing:
S is the syndrome of r, corresponding to the error pattern e:
$$\mathbf{S} = \mathbf{r}\mathbf{H}^T = \mathbf{e}\mathbf{H}^T$$
Lecture 9 192
Linear block codes – cont’d
Standard array:
For row $i = 2, 3, \ldots, 2^{n-k}$, find a vector in $V_n$ of minimum weight that is not already listed in the array.
Call this pattern $\mathbf{e}_i$ and form the i-th row as the corresponding coset:
Row 1 (zero codeword): $\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_{2^k}$
Row 2 (coset): $\mathbf{e}_2, \; \mathbf{e}_2 + \mathbf{U}_2, \; \ldots, \; \mathbf{e}_2 + \mathbf{U}_{2^k}$
...
Row $2^{n-k}$ (coset): $\mathbf{e}_{2^{n-k}}, \; \mathbf{e}_{2^{n-k}} + \mathbf{U}_2, \; \ldots, \; \mathbf{e}_{2^{n-k}} + \mathbf{U}_{2^k}$
The first column contains the coset leaders.
Lecture 9 193
Linear block codes – cont’d
Note that $\hat{\mathbf{U}} = \mathbf{r} + \hat{\mathbf{e}} = (\mathbf{U} + \mathbf{e}) + \hat{\mathbf{e}} = \mathbf{U} + (\mathbf{e} + \hat{\mathbf{e}})$
If $\hat{\mathbf{e}} = \mathbf{e}$, the error is corrected.
If $\hat{\mathbf{e}} \ne \mathbf{e}$, an undetectable decoding error occurs.
Lecture 9 194
Linear block codes – cont’d
Example: Standard array for the (6,3) code
codewords
Coset leaders
Lecture 9 195
Linear block codes – cont’d
Lecture 9 196
Hamming codes
Hamming codes
Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.
Hamming codes are expressed as a function of a single integer $m \ge 2$:
Code length: $n = 2^m - 1$
Number of information bits: $k = 2^m - m - 1$
Number of parity bits: $n - k = m$
Error correction capability: $t = 1$
Lecture 9 197
Hamming codes
Example: Systematic Hamming code (7, 4)
$$\mathbf{H} = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 & 0 & 1 \end{bmatrix} = [\mathbf{I}_{3\times 3} \mid \mathbf{P}^T]$$
$$\mathbf{G} = \begin{bmatrix} 0 & 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 1 \end{bmatrix} = [\mathbf{P} \mid \mathbf{I}_{4\times 4}]$$
Lecture 9 198
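A compact sketch of encoding and single-error correction with the (7,4) Hamming code, using the G and H matrices shown above (illustrative code; all arithmetic is modulo 2):

```python
import numpy as np

G = np.array([[0,1,1,1,0,0,0],
              [1,0,1,0,1,0,0],
              [1,1,0,0,0,1,0],
              [1,1,1,0,0,0,1]])          # G = [P | I4]
H = np.array([[1,0,0,0,1,1,1],
              [0,1,0,1,0,1,1],
              [0,0,1,1,1,0,1]])          # H = [I3 | P^T]

def encode(m):
    return (np.array(m) @ G) % 2

def decode(r):
    s = (H @ r) % 2                       # syndrome
    if s.any():                           # nonzero syndrome: flip the matching position
        col = np.where((H.T == s).all(axis=1))[0][0]
        r = r.copy(); r[col] ^= 1
    return r[3:]                          # systematic: last 4 bits are the message

m = [1, 0, 1, 1]
U = encode(m)
r = U.copy(); r[5] ^= 1                   # introduce a single bit error
print("codeword:", U, "corrupted:", r, "decoded:", decode(r))
```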
Cyclic block codes
Cyclic codes are a subclass of linear block
codes.
Encoding and syndrome calculation are easily
performed using feedback shift-registers.
Hence, relatively long block codes can be
implemented with a reasonable complexity.
BCH and Reed-Solomon codes are cyclic
codes.
Lecture 9 199
Cyclic block codes
Example: every cyclic shift of a codeword is also a codeword:
$$\mathbf{U} = (1101): \quad \mathbf{U}^{(1)} = (1110),\; \mathbf{U}^{(2)} = (0111),\; \mathbf{U}^{(3)} = (1011),\; \mathbf{U}^{(4)} = (1101) = \mathbf{U}$$
Lecture 9 200
Cyclic block codes
The algebraic structure of cyclic codes implies expressing codewords in polynomial form:
$$\mathbf{U}(X) = u_0 + u_1 X + u_2 X^2 + \cdots + u_{n-1} X^{n-1} \qquad \text{(degree } n-1\text{)}$$
Relationship between a codeword and its cyclic shifts:
$$X\mathbf{U}(X) = u_0 X + u_1 X^2 + \cdots + u_{n-2} X^{n-1} + u_{n-1} X^n = u_{n-1} + u_0 X + \cdots + u_{n-2} X^{n-1} + u_{n-1}(X^n + 1)$$
$$\mathbf{U}^{(1)}(X) = X\mathbf{U}(X) \bmod (X^n + 1)$$
Lecture 9 201
Cyclic block codes
Basic properties of cyclic codes:
Let C be a binary (n, k) linear cyclic code.
1. Within the set of code polynomials in C, there is a unique monic polynomial $\mathbf{g}(X)$ with minimal degree $r < n$; $\mathbf{g}(X)$ is called the generator polynomial:
$$\mathbf{g}(X) = g_0 + g_1 X + \cdots + g_r X^r$$
2. Every code polynomial $\mathbf{U}(X)$ in C can be expressed uniquely as $\mathbf{U}(X) = \mathbf{m}(X)\,\mathbf{g}(X)$
3. The generator polynomial $\mathbf{g}(X)$ is a factor of $X^n + 1$.
Lecture 9 202
Cyclic block codes
The orthogonality of G and H in polynomial form is expressed as $\mathbf{g}(X)\,\mathbf{h}(X) = X^n + 1$. This means $\mathbf{h}(X)$ is also a factor of $X^n + 1$.
The i-th row of the generator matrix, $i = 1, \ldots, k$, is formed by the coefficients of the $(i-1)$-th cyclic shift of the generator polynomial:
$$\mathbf{G} = \begin{bmatrix} \mathbf{g}(X) \\ X\mathbf{g}(X) \\ \vdots \\ X^{k-1}\mathbf{g}(X) \end{bmatrix} = \begin{bmatrix} g_0 & g_1 & \cdots & g_r & & & 0 \\ & g_0 & g_1 & \cdots & g_r & & \\ & & \ddots & & & \ddots & \\ 0 & & & g_0 & g_1 & \cdots & g_r \end{bmatrix}$$
Lecture 9 203
Cyclic block codes
Lecture 9 204
Cyclic block codes
Example: For the systematic (7, 4) cyclic code with generator polynomial $\mathbf{g}(X) = 1 + X + X^3$:
1. Find the codeword for the message $\mathbf{m} = (1011)$:
$$n = 7, \quad k = 4, \quad n - k = 3, \qquad \mathbf{m} = (1011) \Rightarrow \mathbf{m}(X) = 1 + X^2 + X^3$$
$$X^{n-k}\,\mathbf{m}(X) = X^3(1 + X^2 + X^3) = X^3 + X^5 + X^6$$
Divide $X^{n-k}\mathbf{m}(X)$ by $\mathbf{g}(X)$:
$$X^3 + X^5 + X^6 = \underbrace{(1 + X + X^2 + X^3)}_{\text{quotient } \mathbf{q}(X)}\, \underbrace{(1 + X + X^3)}_{\text{generator } \mathbf{g}(X)} + \underbrace{1}_{\text{remainder } \mathbf{p}(X)}$$
Lecture 9 205
Cyclic block codes
Find the generator and parity-check matrices, G and H, respectively:
$$\mathbf{g}(X) = 1 + 1\cdot X + 0\cdot X^2 + 1\cdot X^3 \;\Rightarrow\; (g_0, g_1, g_2, g_3) = (1101)$$
$$\mathbf{G} = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1 \end{bmatrix} \quad \text{(not in systematic form: add row 1 to row 3, and rows 1 and 2 to row 4)}$$
$$\mathbf{G} = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 1 \end{bmatrix} = [\mathbf{P} \mid \mathbf{I}_{4\times 4}], \qquad \mathbf{H} = \begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{bmatrix} = [\mathbf{I}_{3\times 3} \mid \mathbf{P}^T]$$
Lecture 9 206
Cyclic block codes
Syndrome decoding for cyclic codes:
The received codeword in polynomial form is given by
$$\mathbf{r}(X) = \mathbf{U}(X) + \mathbf{e}(X) \qquad \text{(received codeword = codeword + error pattern)}$$
The syndrome is the remainder obtained by dividing the received polynomial by the generator polynomial:
$$\mathbf{r}(X) = \mathbf{q}(X)\,\mathbf{g}(X) + \mathbf{S}(X)$$
With the syndrome and the standard array, the error is estimated.
Lecture 9 207
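A small polynomial-division sketch for the (7,4) code with g(X) = 1 + X + X³ above (illustrative): it computes the parity bits as X^{n-k}m(X) mod g(X) and the syndrome of a received word.

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials as bit lists, LSB first."""
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[:len(divisor) - 1]

g = [1, 1, 0, 1]                      # g(X) = 1 + X + X^3
m = [1, 0, 1, 1]                      # m(X) = 1 + X^2 + X^3
shifted = [0, 0, 0] + m               # X^{n-k} m(X)
p = poly_mod(shifted, g)              # parity bits (remainder)
U = p + m                             # systematic codeword: parity then message
print("parity:", p, "codeword:", U)   # expect parity [1, 0, 0], matching p(X) = 1

r = U[:]; r[4] ^= 1                   # introduce one error
print("syndrome:", poly_mod(r, g))    # nonzero syndrome flags the error
```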
Example of the block codes
PB
8PSK
QPSK
Eb / N 0 [dB]
Lecture 9 208
Convolutional codes
Convolutional codes offer an approach to error control
coding substantially different from that of block codes.
A convolutional encoder:
encodes the entire data stream, into a single codeword.
does not need to segment the data stream into blocks of fixed
size (Convolutional codes are often forced to block structure by periodic
truncation).
is a machine with memory.
This fundamental difference in approach imparts a
different nature to the design and evaluation of the code.
Block codes are based on algebraic/combinatorial
techniques.
Convolutional codes are based on construction techniques.
Lecture 10 209
Convolutional codes-cont’d
A convolutional code is specified by three parameters (n, k, K) or (k/n, K), where
$R_c = k/n$ is the coding rate, determining the number of data bits per coded bit.
In practice, usually k = 1 is chosen, and we assume that from now on.
K is the constraint length of the encoder, where the encoder has K-1 memory elements.
There are different definitions of constraint length in the literature.
Lecture 10 210
Block diagram of the DCS
Channel
Codeword sequence
Lecture 10 212
A Rate ½ Convolutional encoder
u1 u1
u1 u2 u1 u 2
t3 1 0 1 0 0
t4 0 1 0 1 0
u2 u2
Lecture 10 213
A Rate ½ Convolutional encoder
Lecture 10 214
Effective code rate
Initialize the memory before encoding the first bit (all-
zero)
Clear out the memory after encoding the last bit (all-
zero)
Hence, a tail of zero-bits is appended to data bits.
Lecture 10 215
Encoder representation
Vector representation:
We define n binary vector with K elements (one
vector for each modulo-2 adder). The i:th element
in each vector, is “1” if the i:th stage in the shift
register is connected to the corresponding modulo-
2 adder, and “0” otherwise.
Example:
u1
g1 (111)
m u1 u2
g 2 (101)
u2
Lecture 10 216
Encoder representation – cont’d
Impulse response representation:
The response of the encoder to a single "one" bit that goes through it.
Example:
Register contents → branch word (u_1 u_2): 100 → 11, 010 → 10, 001 → 11
Input sequence: 1 0 0  →  Output sequence: 11 10 11
For the input m = 1 0 1:
Input 1 → 11 10 11
Input 0 →     00 00 00
Input 1 →         11 10 11
Modulo-2 sum: 11 10 00 10 11
Lecture 10 217
Encoder representation – cont’d
Polynomial representation:
We define n generator polynomials, one for each modulo-2 adder. Each polynomial is of degree K-1 or less and describes the connections of the shift register to the corresponding modulo-2 adder.
Example (for the encoder with $\mathbf{g}_1 = (111)$, $\mathbf{g}_2 = (101)$):
$$\mathbf{g}_1(X) = 1 + X + X^2, \qquad \mathbf{g}_2(X) = 1 + X^2$$
Lecture 10 218
Encoder representation –cont’d
In more detail, for m = (1 0 1), i.e. $\mathbf{m}(X) = 1 + X^2$:
$$\mathbf{m}(X)\mathbf{g}_1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4 = 1 + X + 0\cdot X^2 + X^3 + X^4$$
$$\mathbf{m}(X)\mathbf{g}_2(X) = (1 + X^2)(1 + X^2) = 1 + X^4 = 1 + 0\cdot X + 0\cdot X^2 + 0\cdot X^3 + X^4$$
$$\mathbf{U}(X) = (1,1) + (1,0)X + (0,0)X^2 + (1,0)X^3 + (1,1)X^4 \;\Rightarrow\; \mathbf{U} = 11\ 10\ 00\ 10\ 11$$
Lecture 10 219
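A minimal rate-1/2 convolutional encoder sketch for the generators g1 = (111), g2 = (101) above, including the K-1 = 2 flushing tail bits (illustrative code):

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder, K=3; appends K-1 zero tail bits to flush."""
    K = len(g1)
    state = [0] * (K - 1)                      # shift-register memory
    out = []
    for b in list(bits) + [0] * (K - 1):       # data bits followed by the tail
        reg = [b] + state                      # current register contents
        u1 = sum(r * g for r, g in zip(reg, g1)) % 2
        u2 = sum(r * g for r, g in zip(reg, g2)) % 2
        out += [u1, u2]
        state = reg[:-1]                       # shift
    return out

print(conv_encode([1, 0, 1]))                  # expect 11 10 00 10 11
```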
State diagram
A finite-state machine only encounters a finite
number of states.
State of a machine: the smallest amount of
information that, together with a current input
to the machine, can predict the output of the
machine.
In a convolutional encoder, the state is represented by the content of the memory. Hence, there are $2^{K-1}$ states.
Lecture 10 220
State diagram – cont’d
A state diagram is a way to represent the
encoder.
A state diagram contains all the states and all
possible transitions between them.
Only two transitions initiating from a state
Only two transitions ending up in a state
Lecture 10 221
State diagram – cont’d
[State diagram for the example encoder: states $S_0 = 00$, $S_1 = 01$, $S_2 = 10$, $S_3 = 11$; branches labeled input/output: from $S_0$, 0/00 to $S_0$ and 1/11 to $S_2$; from $S_1$, 0/11 to $S_0$ and 1/00 to $S_2$; from $S_2$, 0/10 to $S_1$ and 1/01 to $S_3$; from $S_3$, 0/01 to $S_1$ and 1/10 to $S_3$.]
Lecture 10 223
Trellis –cont’d
A trellis diagram for the example code:
Input bits (with tail bits): 1 0 1 0 0
Output bits: 11 10 00 10 11
[Trellis over t_1..t_6: each state has two outgoing branches labeled input/output — 0/00 and 1/11 from state 00, 0/11 and 1/00 from state 01, 0/10 and 1/01 from state 10, 0/01 and 1/10 from state 11.]
Lecture 10 224
Trellis – cont’d
t1 t2 t3 t4 t5 t6
Lecture 10 225
Block diagram of the DCS
Channel
Codeword sequence
2 K 1
Lecture 11 227
State diagram – cont’d
A state diagram is a way to represent the
encoder.
A state diagram contains all the states and all
possible transitions between them.
There can be only two transitions initiating
from a state.
There can be only two transitions ending up in
a state.
Lecture 11 228
State diagram – cont’d
State
S 0 00 0/00
1/11
S 2 10 0/11
1/00
S1 01 0/10
1/01
0/01
S3 11 1/10
ti ti 1 Time
Lecture 11 230
Trellis –cont’d
A trellis diagram for the example code
Input bits Tail bits
1 0 1 0 0
Output bits
11 10 00 10 11
0/00 0/00 0/00 0/00 0/00
1/11 1/11 1/11 1/11 1/11
0/11 0/11 0/11 0/11 0/11
1/00 1/00 1/00 1/00 1/00
0/10 0/10 0/10 0/10 0/10
1/01 1/01 1/01 1/01 1/01
0/01 0/01 0/01 0/01 0/01
t1 t2 t3 t4 t5 t6
Lecture 11 231
Trellis – cont’d
t1 t2 t3 t4 t5 t6
Lecture 11 232
Optimum decoding
If the input sequence messages are equally likely, the
optimum decoder which minimizes the probability of
error is the Maximum likelihood decoder.
Lecture 11 233
ML decoding for memory-less channels
Due to the independent channel statistics of memoryless channels, the likelihood function becomes
$$p(\mathbf{Z} \mid \mathbf{U}^{(m)}) = \prod_{i} p(Z_i \mid U_i^{(m)}) = \prod_{i} \prod_{j=1}^{n} p(z_{ji} \mid u_{ji}^{(m)})$$
The path metric up to time index i is called the partial path metric.
Lecture 11 234
Binary symmetric channels (BSC)
[BSC model: $p = p(1|0) = p(0|1)$, $1 - p = p(1|1) = p(0|0)$.]
If $d_m = d(\mathbf{Z}, \mathbf{U}^{(m)})$ is the Hamming distance between Z and U, then
$$p(\mathbf{Z} \mid \mathbf{U}^{(m)}) = p^{d_m} (1 - p)^{L_n - d_m}, \qquad L_n = \text{size of the coded sequence}$$
$$\log p(\mathbf{Z} \mid \mathbf{U}^{(m)}) = -d_m \log\frac{1 - p}{p} + L_n \log(1 - p)$$
ML decoding rule:
Choose the path with minimum Hamming distance from the received sequence.
Lecture 11 235
AWGN channels
For BPSK modulation the transmitted sequence corresponding to the codeword $\mathbf{U}^{(m)}$ is denoted by $\mathbf{S}^{(m)} = (S_1^{(m)}, S_2^{(m)}, \ldots, S_i^{(m)}, \ldots)$, where $S_i^{(m)} = (s_{1i}^{(m)}, \ldots, s_{ji}^{(m)}, \ldots, s_{ni}^{(m)})$ and $s_{ji}^{(m)} = \pm\sqrt{E_c}$.
The log-likelihood function becomes
$$\gamma^{(m)} = \sum_{i} \sum_{j=1}^{n} z_{ji}\, s_{ji}^{(m)} = \langle \mathbf{Z}, \mathbf{S}^{(m)} \rangle \qquad \text{(inner product, or correlation, between Z and S)}$$
Lecture 11 236
Soft and hard decisions
In hard decision:
The demodulator makes a firm or hard decision
whether a one or a zero was transmitted and
provides no other information for the decoder such
as how reliable the decision is.
Lecture 11 237
Soft and hard decision-cont’d
In Soft decision:
The demodulator provides the decoder with some
side information together with the decision.
The side information provides the decoder with a
measure of confidence for the decision.
The demodulator outputs which are called soft-
bits, are quantized to more than two levels.
Decoding based on soft-bits, is called the
“soft-decision decoding”.
On AWGN channels, a 2 dB and on fading
channels a 6 dB gain are obtained by using
soft-decoding instead of hard-decoding.
Lecture 11 238
The Viterbi algorithm
The Viterbi algorithm performs Maximum likelihood
decoding.
It finds a path through the trellis with the largest
metric (maximum correlation or minimum distance).
It processes the demodulator outputs in an iterative
manner.
At each step in the trellis, it compares the metric of all
paths entering each state, and keeps only the path with
the smallest metric, called the survivor, together with its
metric.
It proceeds in the trellis by eliminating the least likely
paths.
It reduces the decoding complexity to $L \cdot 2^{K-1}$!
Lecture 11 239
The Viterbi algorithm - cont’d
Viterbi algorithm:
A. Do the following set-up:
For a data block of L bits, form the trellis. The trellis has L+K-1 sections or levels, starting at time $t_1$ and ending at time $t_{L+K}$.
Label all the branches in the trellis with their corresponding branch metric.
For each state $S(t_i) \in \{0, 1, \ldots, 2^{K-1} - 1\}$ in the trellis at time $t_i$, define a parameter $\Gamma(S(t_i), t_i)$.
B. Then, do the following:
Lecture 11 240
The Viterbi algorithm - cont’d
1. Set Γ(0, t_1) = 0 and i = 2.
2. At time t_i, compute the partial path metrics for all
the paths entering each state.
3. Set Γ(S(t_i), t_i) equal to the best partial path metric
entering each state at time t_i.
Keep the survivor path and delete the dead paths
from the trellis.
4. If i < L+K, increase i by 1 and return to step 2.
C. Start at state zero at time t_(L+K). Follow the
surviving branches backwards through the
trellis. The path found is unique and
corresponds to the ML codeword.
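To make these steps concrete, here is a minimal hard-decision Viterbi decoder sketch in Python (not part of the original slides). It assumes the rate-1/2, K = 3 code with generators 111 and 101 used in the following example; function and variable names are illustrative.

```python
# Minimal hard-decision Viterbi sketch for an assumed rate-1/2, K = 3 code with
# generators g1 = 111, g2 = 101. Branch metric = Hamming distance; the survivor
# at each state is the entering path with the smallest partial metric.
G = [(1, 1, 1), (1, 0, 1)]                       # assumed generator taps, current bit first

def encode(bits, K=3):
    """Encode a data block, flushing the encoder with K-1 tail zeros."""
    reg, out = [0] * (K - 1), []
    for b in list(bits) + [0] * (K - 1):
        window = [b] + reg
        out += [sum(g[i] & window[i] for i in range(K)) % 2 for g in G]
        reg = [b] + reg[:-1]
    return out

def viterbi_hard(z, L, K=3):
    """Decode 2*(L+K-1) hard bits back to the L data bits (terminated trellis)."""
    n_states = 2 ** (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)      # Gamma(S(t1), t1): start in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(L + K - 1):                   # L+K-1 trellis sections
        zi = z[2 * i: 2 * i + 2]
        new_metric = [INF] * n_states
        new_paths = [[] for _ in range(n_states)]
        for s in range(n_states):
            if metric[s] == INF:
                continue
            reg = [(s >> j) & 1 for j in range(K - 1)]
            for b in (0, 1):                     # the two branches leaving state s
                window = [b] + reg
                branch = [sum(g[k] & window[k] for k in range(K)) % 2 for g in G]
                bm = sum(x != y for x, y in zip(zi, branch))   # Hamming distance
                nxt = ((s << 1) & (n_states - 1)) | b
                if metric[s] + bm < new_metric[nxt]:           # keep the survivor
                    new_metric[nxt] = metric[s] + bm
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0][:L]                          # trace back from state 0, drop tail bits

print(encode([1, 0, 1]))             # [1,1, 1,0, 0,0, 1,0, 1,1] = U in the example
Z = [1, 1, 1, 0, 1, 1, 1, 0, 0, 1]   # received sequence Z = (11 10 11 10 01)
print(viterbi_hard(Z, L=3))          # [1, 0, 0]                = decoded message
```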
Lecture 11 241
Example of Hard decision Viterbi
decoding
m = (101)
U = (11 10 00 10 11)
Z = (11 10 11 10 01)
[Trellis diagram spanning times t1 … t6]
Lecture 11 242
Example of Hard decision Viterbi
decoding-cont’d
Label all the branches with the branch metric
(Hamming distance between the received pair Z_i and the branch word).
[Trellis diagram with branch metrics on every branch and the partial metric
Γ(S(t_i), t_i) at each state, times t1 … t6]
Lecture 11 243
Example of Hard decision Viterbi
decoding-cont’d
i = 2
[Trellis diagram after stage i = 2, times t1 … t6]
Lecture 11 244
Example of Hard decision Viterbi
decoding-cont’d
i = 3
[Trellis diagram after stage i = 3, times t1 … t6]
Lecture 11 245
Example of Hard decision Viterbi
decoding-cont’d
i = 4
[Trellis diagram after stage i = 4, times t1 … t6]
Lecture 11 246
Example of Hard decision Viterbi
decoding-cont’d
i = 5
[Trellis diagram after stage i = 5, times t1 … t6]
Lecture 11 247
Example of Hard decision Viterbi
decoding-cont’d
i = 6
[Trellis diagram after stage i = 6, times t1 … t6]
Lecture 11 248
Example of Hard decision Viterbi decoding-
cont’d
Trace back and then:
m̂ = (100)
Û = (11 10 11 00 00)
[Final trellis diagram with the surviving path, times t1 … t6]
Lecture 11 249
Example of soft-decision Viterbi decoding
m = (101), U = (11 10 00 10 11)
Z = soft demodulator outputs (values of magnitude 1 and 2/3)
m̂ = (101), Û = (11 10 00 10 11)
[Trellis diagram, times t1 … t6: branch metrics are the correlations of Z_i with
the branch words; the partial metric Γ(S(t_i), t_i) is shown at each state]
Lecture 11 250
Trellis of an example rate-1/2 Conv. code
[Figure: codeword sequence passing through the channel and the corresponding trellis]
Lecture 12 252
Soft and hard decision decoding
In hard decision:
The demodulator makes a firm or hard decision
whether one or zero was transmitted and provides
no other information for the decoder such as how
reliable the decision is.
In Soft decision:
The demodulator provides the decoder with some
side information together with the decision. The
side information provides the decoder with a
measure of confidence for the decision.
Lecture 12 253
Soft and hard decision decoding …
ML soft-decision decoding rule:
Choose the path in the trellis with the minimum
Euclidean distance from the received sequence.
Lecture 12 254
The Viterbi algorithm
Lecture 12 255
Example of hard-decision Viterbi
decoding
m = (101), U = (11 10 00 10 11)
Z = (11 10 11 10 01)
m̂ = (100), Û = (11 10 11 00 00)
[Trellis diagram, times t1 … t6: branch metrics are Hamming distances; the
partial metric Γ(S(t_i), t_i) is shown at each state]
Lecture 12 256
Example of soft-decision Viterbi decoding
m = (101), U = (11 10 00 10 11)
Z = soft demodulator outputs (values of magnitude 1 and 2/3)
m̂ = (101), Û = (11 10 00 10 11)
[Trellis diagram, times t1 … t6: branch metrics are the correlations of Z_i with
the branch words; the partial metric Γ(S(t_i), t_i) is shown at each state]
Lecture 12 257
Today, we are going to talk about:
The properties of Convolutional codes:
Free distance
Transfer function
Systematic Conv. codes
Catastrophic Conv. codes
Error performance
Interleaving
Concatenated codes
Error correction scheme in Compact disc
Lecture 12 258
Free distance of Convolutional codes
Distance properties:
Since a Convolutional encoder generates codewords of
various sizes (as opposed to block codes), the following
approach is used to find the minimum distance between all
pairs of codewords:
Since the code is linear, the minimum distance of the code is
the minimum distance between each of the codewords and the
all-zero codeword.
This is the minimum distance in the set of all arbitrarily long
paths along the trellis that diverge from and re-merge with the all-zero
path.
It is called the minimum free distance or the free distance of the
code, denoted by d_free or d_f.
Lecture 12 259
Free distance …
[Trellis diagram, times t1 … t6: the minimum-weight path that diverges from and
re-merges with the all-zero path determines d_f]
Lecture 12 260
Transfer function of Convolutional codes
Transfer function:
The transfer function (generating function) is a
tool which provides information about the weight
distribution of the codewords.
The weight distribution specifies the weights of the different paths
in the trellis (codewords), together with their lengths
and the weight of the corresponding information bits.
$$T(D, L, N) = \sum_{i \ge d_f}\;\sum_{j \ge K}\;\sum_{l \ge 1} D^{i} L^{j} N^{l}$$
D, L, N : placeholders
i : distance of the path from the all-zero path
j : number of branches the path takes until it re-merges with the all-zero path
l : weight of the information bits corresponding to the path
Lecture 12 261
Transfer function …
[Modified state diagram of the example code: the all-zero state is split into a
start node a = 00 and an end node e = 00, with intermediate states b = 10,
c = 01, d = 11 and branch gains D^2LN, DL, D^2L, DLN, DL and DLN]
Lecture 12 262
Transfer function …
Write the state equations (X_a, ..., X_e are dummy variables):
$$X_b = D^2 L N X_a + L N X_c$$
$$X_c = D L X_b + D L X_d$$
$$X_d = D L N X_b + D L N X_d$$
$$X_e = D^2 L X_c$$
Solve for T(D, L, N) = X_e / X_a:
$$T(D, L, N) = \frac{D^5 L^3 N}{1 - DL(1+L)N} = D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + \ldots$$
One path with weight 5, length 3 and data weight 1
One path with weight 6, length 4 and data weight 2
One path with weight 6, length 5 and data weight 2
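These state equations can also be solved symbolically; the sketch below uses sympy (a tool assumed for this write-up, not used in the lectures) to reproduce T(D, L, N) and the first terms of its expansion.

```python
# Minimal sketch: solve the state equations of the example code with sympy and
# expand T(D, L, N) in D to recover the lowest-weight path terms.
import sympy as sp

D, L, N, Xa, Xb, Xc, Xd, Xe = sp.symbols('D L N X_a X_b X_c X_d X_e')

eqs = [
    sp.Eq(Xb, D**2 * L * N * Xa + L * N * Xc),   # X_b = D^2 L N X_a + L N X_c
    sp.Eq(Xc, D * L * Xb + D * L * Xd),          # X_c = D L X_b + D L X_d
    sp.Eq(Xd, D * L * N * Xb + D * L * N * Xd),  # X_d = D L N X_b + D L N X_d
    sp.Eq(Xe, D**2 * L * Xc),                    # X_e = D^2 L X_c
]
sol = sp.solve(eqs, [Xb, Xc, Xd, Xe], dict=True)[0]
T = sp.simplify(sol[Xe] / Xa)
print(T)                        # equivalent to D^5 L^3 N / (1 - D L (1 + L) N)
print(sp.series(T, D, 0, 8))    # D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + ...
```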
Lecture 12 263
Systematic Convolutional codes
A Conv. encoder at rate k/n is systematic if the
k input bits appear unchanged as part of the n-bit branch
word.
[Figure: systematic convolutional encoder, input and output]
Lecture 12 264
Catastrophic Convolutional codes
Catastrophic error propagation in Conv. codes:
A finite number of errors in the coded bits causes an
infinite number of errors in the decoded data bits.
A Convolutional code is catastrophic if its state diagram
contains a closed loop (other than the self-loop at the
all-zero state) with zero output weight.
Systematic codes are not catastrophic:
At least one bit of every branch word is a direct copy of an
input bit.
Only a small fraction of non-systematic codes are
catastrophic.
Lecture 12 265
Catastrophic Conv. …
Example of a catastrophic Conv. code:
Assume the all-zero codeword is transmitted.
Three errors occur on the coded bits such that the decoder
takes the wrong path abdd…ddce.
This path has 6 ones, no matter how many times it stays in the
loop at node d.
It results in many erroneous decoded data bits.
[Encoder and state diagram of the example code, states a = 00, b = 10, d = 11,
c = 01, e = 00; the self-loop at node d has zero output weight]
Lecture 12 266
Performance bounds for Conv. codes
Error performance of Conv. codes is
analyzed based on the average bit error
probability (not the average codeword error
probability), because:
Codewords have variable sizes due to different
sizes of the input.
For large blocks, the codeword error probability may
converge to one, but the bit error probability may
remain constant.
….
Lecture 12 267
Performance bounds …
Analysis is based on:
Assuming the all-zero codeword is transmitted
Evaluating the probability of an “error event”
(usually using bounds such as union bound).
An “error event” occurs at a time instant in the trellis if a
non-zero path leaves the all-zero path and re-merges to it
at a later time.
Lecture 12 268
Performance bounds …
Bounds on bit error probability for
memoryless channels:
Hard-decision decoding:
$$P_B \le \left.\frac{dT(D, L, N)}{dN}\right|_{N=1,\; L=1,\; D=2\sqrt{p(1-p)}}$$
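As a quick numerical illustration (not from the slides), the bound can be evaluated for the example code of the transfer-function slides, whose transfer function with L = 1 reduces to T(D, N) = D^5 N / (1 − 2DN).

```python
# Minimal sketch: evaluate the hard-decision bit error bound for the example
# rate-1/2, K = 3 code using its transfer function with L = 1.
import sympy as sp

D, N, p = sp.symbols('D N p', positive=True)
T = D**5 * N / (1 - 2 * D * N)                   # T(D, N) for the example code

bound = sp.diff(T, N).subs({N: 1, D: 2 * sp.sqrt(p * (1 - p))})  # dT/dN at the stated point
print(sp.simplify(bound))              # symbolic bound on P_B as a function of p
print(bound.subs(p, 0.01).evalf())     # numeric value for an illustrative p = 0.01
```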
Lecture 12 269
Performance bounds …
The error correction capability of Convolutional codes,
given by t = ⌊(d_f − 1)/2⌋, depends on
how long the decoding is performed (a decoding depth of 3 to 5
constraint lengths is typical), and on
how the errors are distributed (bursty or random).
For a given code rate, increasing the constraint
length usually increases the free distance.
For a given constraint length, decreasing the
coding rate usually increases the free distance.
The coding gain is upper bounded by
$$\text{coding gain} \le 10\log_{10}(R_c\, d_f)$$
Lecture 12 270
Performance bounds …
Basic coding gain (dB) for soft-decision Viterbi
decoding
Lecture 12 271
Interleaving
Convolutional codes are suitable for memoryless
channels with random error events.
Lecture 12 272
Interleaving …
Interleaving is achieved by spreading the
coded symbols in time before
transmission.
The reverse is done at the receiver by
deinterleaving the received sequence.
Interleaving makes bursty errors look
random, so Conv. codes can be used on bursty channels.
Types of interleaving:
Block interleaving
Convolutional or cross interleaving
Lecture 12 273
Interleaving …
Consider a code with t = 1 and codewords of 3 coded bits.
Without interleaving, a burst error of length 3 cannot be corrected,
because 2 or more of the errors can fall in the same codeword:
A1 A2 A3 | B1 B2 B3 | C1 C2 C3
With the interleaver, the transmitted order becomes
A1 B1 C1 A2 B2 C2 A3 B3 C3
so after deinterleaving, a burst of 3 channel errors is spread out,
leaving at most 1 error per codeword, which the code can correct:
A1 A2 A3 | B1 B2 B3 | C1 C2 C3
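A minimal block-interleaver sketch reproducing this example (row-wise write, column-wise read; the helper names are illustrative, not from the slides):

```python
# Minimal sketch of block (row-column) interleaving: write the coded symbols
# row-wise into a depth x width array and read them out column-wise, so a burst
# of up to `depth` channel errors lands as at most one error per codeword.
def interleave(symbols, depth, width):
    assert len(symbols) == depth * width
    rows = [symbols[r * width:(r + 1) * width] for r in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]  # column-wise read

def deinterleave(symbols, depth, width):
    return interleave(symbols, width, depth)     # the inverse swaps rows and columns

coded = ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'C1', 'C2', 'C3']   # three codewords, t = 1 each
tx = interleave(coded, depth=3, width=3)
print(tx)                        # ['A1','B1','C1','A2','B2','C2','A3','B3','C3']
print(deinterleave(tx, 3, 3))    # original order; a burst of 3 hits one symbol per codeword
```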
Lecture 12 274
Concatenated codes
A concatenated code uses two levels of coding: an
inner code and an outer code (of higher rate).
Popular concatenated codes: Convolutional codes with
Viterbi decoding as the inner code and Reed-Solomon
codes as the outer code.
The purpose is to reduce the overall complexity while
achieving the required error performance.
[Receiver chain: Channel → Demodulator → Inner decoder → Deinterleaver → Outer decoder → Output data]
Lecture 12 275
Practical example: Compact disc
Lecture 12 276
Compact disc – cont’d
Encoder:  interleave → C2 encode → D* interleave → C1 encode → D interleave
Decoder:  D deinterleave → C1 decode → D* deinterleave → C2 decode → deinterleave
Lecture 12 277
Goals in designing a DCS
Goals:
Maximizing the transmission bit rate
Minimizing probability of bit error
Minimizing the required power
Minimizing required system bandwidth
Maximizing system utilization
Minimizing system complexity
Lecture 13 278
Error probability plane
(example for coherent MPSK and MFSK)
[Two panels of bit error probability versus Eb/N0 [dB]: coherent M-PSK
(bandwidth-efficient) and M-FSK (power-efficient), with curves for
k = log2 M = 1, 2, 3, 4, 5]
Lecture 13 279
Limitations in designing a DCS
Limitations:
The Nyquist theoretical minimum bandwidth
requirement
The Shannon-Hartley capacity theorem (and the
Shannon limit)
Government regulations
Technological limitations
Other system requirements (e.g., satellite orbits)
Lecture 13 280
Nyquist minimum bandwidth requirement
The theoretical minimum bandwidth needed
for baseband transmission of Rs symbols per
second is Rs/2 hertz.
[Figure: the ideal Nyquist filter H(f), a rectangle of height T over |f| ≤ 1/(2T),
and its impulse response h(t) = sinc(t/T), with zero crossings at t = ±T, ±2T, …]
Lecture 13 281
Shannon limit
Channel capacity: the maximum data rate at
which error-free communication over the channel
can be performed.
Channel capacity of the AWGN channel (Shannon-
Hartley capacity theorem):
$$C = W \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{[bits/s]}$$
W [Hz] : bandwidth
S = E_b C [Watt] : average received signal power (when transmitting at R_b = C)
N = N_0 W [Watt] : average noise power
Lecture 13 282
Shannon limit …
The Shannon theorem puts a limit on the
transmission data rate, not on the error
probability:
It is theoretically possible to transmit information at any
rate Rb ≤ C with an arbitrarily small error probability,
by using a sufficiently complicated coding scheme.
Lecture 13 283
Shannon limit …
[Plot of normalized capacity C/W [bits/s/Hz] versus SNR: the region above the
capacity curve is unattainable, the region below it is the practical region]
Lecture 13 284
Shannon limit …
$$C = W\log_2\!\left(1+\frac{S}{N}\right),\quad S = E_b C,\quad N = N_0 W
\;\;\Rightarrow\;\;
\frac{C}{W} = \log_2\!\left(1 + \frac{E_b}{N_0}\,\frac{C}{W}\right)$$
As W → ∞ (i.e. C/W → 0), we get the Shannon limit:
$$\frac{E_b}{N_0} \;\to\; \frac{1}{\log_2 e} = 0.693 = -1.6 \text{ [dB]}$$
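A short numerical check of this limit (not part of the slides): on the capacity bound, Eb/N0 = (2^(C/W) − 1)/(C/W), which tends to ln 2 ≈ −1.6 dB as C/W → 0.

```python
# Minimal sketch: required Eb/N0 on the Shannon bound for a given spectral
# efficiency C/W, approaching the Shannon limit of -1.6 dB as C/W -> 0.
import math

def ebn0_on_capacity_bound(spectral_efficiency):
    """Eb/N0 (linear) needed to operate exactly at capacity for a given C/W."""
    return (2 ** spectral_efficiency - 1) / spectral_efficiency

for cw in (4.0, 2.0, 1.0, 0.5, 0.1, 0.001):
    ebn0 = ebn0_on_capacity_bound(cw)
    print(f"C/W = {cw:6.3f} bits/s/Hz  ->  Eb/N0 = {10 * math.log10(ebn0):6.2f} dB")
# As C/W shrinks, the values approach 10*log10(ln 2) = -1.59 dB.
```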
Lecture 13 285
Shannon limit …
[Bandwidth-efficiency plane: W/C [Hz/bits/s] versus Eb/N0 [dB] at P_B = 10^-5.
The region R > C is unattainable, R < C is the practical region. MPSK/MQAM
operating points (M = 2, 4, 8, 16, 64, 256) lie toward the bandwidth-limited
region, MFSK points (M = 2, 4, 8, 16) toward the power-limited region, with the
Shannon limit marked]
Lecture 13 287
Power and bandwidth limited systems
Bandwidth-limited systems:
save bandwidth at the expense of power (for example by
using spectrally efficient modulation schemes)
Lecture 13 288
M-ary signaling
Bandwidth efficiency:
$$\rho = \frac{R_b}{W} = \frac{\log_2 M}{W T_s} = \frac{1}{W T_b} \quad \text{[bits/s/Hz]}$$
Assuming Nyquist (ideal rectangular) filtering at baseband,
the required passband bandwidth is
$$W = 1/T_s = R_s \quad \text{[Hz]}$$
M-PSK and M-QAM (bandwidth-limited systems):
$$R_b/W = \log_2 M \quad \text{[bits/s/Hz]}$$
Bandwidth efficiency increases as M increases.
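A small sketch of this relation (assuming the ideal Nyquist filtering above, W = Rs; the bit rate 9600 bits/s is borrowed from the design examples that follow):

```python
# Minimal sketch: Nyquist passband bandwidth and bandwidth efficiency of M-PSK
# for a fixed bit rate, illustrating Rb/W = log2(M).
import math

def mpsk_nyquist_bandwidth(Rb, M):
    Rs = Rb / math.log2(M)     # symbol rate
    return Rs                  # W = 1/Ts = Rs [Hz] with ideal Nyquist filtering

Rb = 9600
for M in (2, 4, 8, 16):
    W = mpsk_nyquist_bandwidth(Rb, M)
    print(f"M = {M:2d}:  W = {W:7.1f} Hz,  Rb/W = {Rb / W:.0f} bits/s/Hz")
```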
Lecture 13 289
Design example of uncoded systems
Design goals:
1. The bit error probability at the modulator output must meet the
system error requirement.
2. The transmission bandwidth must not exceed the available
channel bandwidth.
Input → M-ary modulator:  R [bits/s]  →  R_s = R / log2(M) [symbols/s]
Output ← M-ary demodulator:
$$P_E(M) = f\!\left(\frac{E_s}{N_0}\right),\qquad P_B = g\big(P_E(M)\big),\qquad
\frac{P_r}{N_0} = \frac{E_b}{N_0}\,R = \frac{E_s}{N_0}\,R_s$$
Lecture 13 290
Design example of uncoded systems …
Lecture 13 291
Design example of uncoded systems …
Choose a modulation scheme that meets the following
system requirements:
An AWGN channel with W_C = 45 [kHz],
$$\frac{P_r}{N_0} = 48 \text{ [dB-Hz]}, \qquad R_b = 9600 \text{ [bits/s]}, \qquad P_B \le 10^{-5}$$
$$\frac{E_b}{N_0} = \frac{P_r}{N_0}\,\frac{1}{R_b} = 6.61 = 8.2 \text{ [dB]}$$
Since R_b << W_C and E_b/N_0 is relatively small, this is a power-limited channel: choose MFSK.
$$M = 16:\quad W = M R_s = \frac{M R_b}{\log_2 M} = \frac{16\times 9600}{4} = 38.4 \text{ [kHz]} < W_C = 45 \text{ [kHz]}$$
$$\frac{E_s}{N_0} = (\log_2 M)\,\frac{E_b}{N_0} = (\log_2 M)\,\frac{P_r}{N_0}\,\frac{1}{R_b} = 26.44$$
$$P_E(M=16) \le \frac{M-1}{2}\exp\!\left(-\frac{E_s}{2N_0}\right) = 1.4\times 10^{-5},\qquad
P_B = \frac{2^{k-1}}{2^k - 1}\,P_E(M) = 7.3\times 10^{-6} < 10^{-5}$$
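These numbers can be approximately reproduced with a few lines of Python (a sketch, not from the slides; it uses the noncoherent-MFSK bandwidth approximation W ≈ M·Rs stated above, and small differences come from rounding the dB values):

```python
# Minimal sketch of the uncoded 16-FSK design check: bandwidth, Es/N0, the
# symbol error bound PE <= (M-1)/2 * exp(-Es/2N0), and PB from PE.
import math

Pr_N0_dB, Rb, Wc = 48.0, 9600.0, 45e3            # given requirements
Pr_N0 = 10 ** (Pr_N0_dB / 10)                    # 48 dB-Hz -> linear

M, k = 16, 4                                     # 16-FSK, k = log2(M)
Rs = Rb / k                                      # symbol rate
W = M * Rs                                       # approximate MFSK bandwidth
Es_N0 = Pr_N0 / Rs                               # = (log2 M) * Eb/N0

PE = (M - 1) / 2 * math.exp(-Es_N0 / 2)          # union bound on symbol error probability
PB = (2 ** (k - 1)) / (2 ** k - 1) * PE          # orthogonal-signaling symbol-to-bit conversion

print(f"Eb/N0 = {10 * math.log10(Pr_N0 / Rb):.1f} dB, W = {W / 1e3:.1f} kHz (< {Wc / 1e3:.0f} kHz)")
print(f"Es/N0 = {Es_N0:.2f}, PE = {PE:.1e}, PB = {PB:.1e} (target 1e-5)")
```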
Lecture 13 292
Design example of coded systems
Design goals:
1. The bit error probability at the decoder output must meet the
system error requirement.
2. The rate of the code must not expand the required transmission
bandwidth beyond the available channel bandwidth.
3. The code should be as simple as possible. Generally, the shorter
the code, the simpler will be its implementation.
Input → Encoder → M-ary modulator:
R [bits/s]  →  (n/k) R coded [bits/s]  →  R_s = (n/k) R / log2(M) [symbols/s]
Output ← Decoder ← M-ary demodulator:
$$P_B = f(p_c),\qquad P_E(M) = f\!\left(\frac{E_s}{N_0}\right),\qquad p_c = g\big(P_E(M)\big),\qquad
\frac{P_r}{N_0} = \frac{E_b}{N_0}\,R = \frac{E_c}{N_0}\,\frac{n}{k}R = \frac{E_s}{N_0}\,R_s$$
Lecture 13 293
Design example of coded systems …
Choose a modulation/coding scheme that meets the following
system requirements:
An AWGN channel with W_C = 4000 [Hz],
$$\frac{P_r}{N_0} = 53 \text{ [dB-Hz]}, \qquad R_b = 9600 \text{ [bits/s]}, \qquad P_B \le 10^{-9}$$
Lecture 13 294
Design example of coded systems
Uncoded 8-PSK satisfies the bandwidth constraint, but
not the bit error probability constraint; much higher
power would be required:
$$\left(\frac{E_b}{N_0}\right)_{\text{uncoded}} \approx 16 \text{ dB for } P_B = 10^{-9}$$
while the available E_b/N_0 is only P_r/N_0 − 10 log10(R_b) = 53 − 39.8 = 13.2 dB.
Lecture 13 295
Design example of coded systems
For simplicity, we use BCH codes.
The required coding gain is:
$$G \text{ (dB)} = \left(\frac{E_b}{N_0}\right)_{\text{uncoded}}\!\!\text{(dB)} - \left(\frac{E_b}{N_0}\right)_{\text{coded}}\!\!\text{(dB)} = 16 - 13.2 = 2.8 \text{ dB}$$
The maximum allowed bandwidth expansion due to coding is:
$$R_s = \frac{n}{k}\,\frac{R_b}{\log_2 M} \le W_C = 4000
\;\;\Rightarrow\;\; \frac{n}{k} \le \frac{4000 \times 3}{9600} = 1.25$$
The bandwidth of uncoded 8-PSK can thus still be expanded by
25% and remain below the channel bandwidth.
Among the BCH codes, we choose the one which provides the
required coding gain and bandwidth expansion with the minimum
amount of redundancy.
Lecture 13 296
Design example of coded systems …
Bandwidth compatible BCH codes
Lecture 13 297
Design example of coded systems …
Check that the combination of 8-PSK and the (63,51)
BCH code meets the requirements:
$$R_s = \frac{n}{k}\,\frac{R_b}{\log_2 M} = \frac{63}{51}\cdot\frac{9600}{3} = 3953 \text{ [sym/s]} < W_C = 4000 \text{ [Hz]}$$
$$\frac{E_s}{N_0} = \frac{P_r}{N_0}\,\frac{1}{R_s} = 50.47,\qquad
P_E(M) \approx 2Q\!\left(\sqrt{\frac{2E_s}{N_0}}\,\sin\frac{\pi}{M}\right) = 1.2\times 10^{-4}$$
$$p_c \approx \frac{P_E(M)}{\log_2 M} = \frac{1.2\times 10^{-4}}{3} = 4\times 10^{-5}$$
$$P_B \approx \frac{1}{n}\sum_{j=t+1}^{n} j\binom{n}{j}\, p_c^{\,j}\,(1-p_c)^{\,n-j} = 1.2\times 10^{-10} < 10^{-9}$$
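A sketch reproducing this verification (not from the slides; the error-correcting capability t = 2 of the (63,51) BCH code is taken as known):

```python
# Minimal sketch of the coded design check: 8-PSK with a (63,51) BCH code.
import math

def Q(x):                                        # Gaussian tail function
    return 0.5 * math.erfc(x / math.sqrt(2))

Pr_N0 = 10 ** (53 / 10)                          # 53 dB-Hz -> linear
Rb, Wc, M = 9600.0, 4000.0, 8
n, k, t = 63, 51, 2                              # (63,51) BCH corrects t = 2 errors

Rs = (n / k) * Rb / math.log2(M)                 # coded symbol rate
Es_N0 = Pr_N0 / Rs
PE = 2 * Q(math.sqrt(2 * Es_N0) * math.sin(math.pi / M))   # 8-PSK symbol error estimate
pc = PE / math.log2(M)                           # channel (coded) bit error probability
PB = sum(j * math.comb(n, j) * pc**j * (1 - pc)**(n - j)
         for j in range(t + 1, n + 1)) / n       # decoded bit error bound

print(f"Rs = {Rs:.0f} sym/s (< {Wc:.0f} Hz), Es/N0 = {Es_N0:.1f}")
print(f"PE = {PE:.1e}, pc = {pc:.1e}, PB = {PB:.1e} (target 1e-9)")
```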
Lecture 13 298
Effects of error-correcting codes on error
performance
Error-correcting codes at fixed SNR
influence the error performance in two ways:
1. Improving effect:
The larger the redundancy, the greater the error-
correction capability
2. Degrading effect:
For real-time applications, the energy per channel symbol (coded bit)
is reduced, because the redundant bits force faster signaling
at a fixed information rate.
The degrading effect vanishes for non-real-time
applications where delay is tolerable, since the
channel symbol energy then need not be reduced.
Lecture 13 299
Bandwidth efficient modulation schemes
Lecture 13 300
Course summary
In the big picture, we studied:
Fundamental issues in designing a digital
communication system (DCS)
Basic techniques: formatting, coding, modulation
Design goals:
Probability of error and delay constraints
Lecture 13 301
Block diagram of a DCS
[Block diagram: digital modulation → channel → digital demodulation]
Lecture 13 302
Course summary – cont’d
In detail, we studied:
1. Basic definitions and concepts
Signals classification and linear systems
Random processes and their statistics
WSS, cyclostationary and ergodic processes
Autocorrelation and power spectral density
Power and energy spectral density
Noise in communication systems (AWGN)
Bandwidth of signal
2. Formatting
Continuous sources
Nyquist sampling theorem and aliasing
Uniform and non-uniform quantization
Lecture 13 303
Course summary – cont’d
3. Channel coding
Linear block codes (cyclic codes and Hamming codes)
Encoding and decoding structure
Generator and parity-check matrices (or
polynomials), syndrome, standard array
Code properties:
Linear property of the code, Hamming distance,
minimum distance, error-correction capability,
coding gain, bandwidth expansion due to
redundant bits, systematic codes
Lecture 13 304
Course summary – cont’d
Convolutional codes
Encoder and decoder structure
Encoder as a finite state machine, state diagram,
decisions
Coding gain, Hamming distance, Euclidean distance,
effects of free distance, code rate and encoder
memory on the performance (probability of error and
bandwidth)
Lecture 13 305
Course summary – cont’d
4. Modulation
Baseband modulation
Signal space, Euclidean distance
Orthogonal basis functions
Matched filter detection (maximizing SNR)
Equalization to reduce channel induced ISI
Pulse shaping to reduce ISI due to filtering at the
transmitter and receiver
Minimum Nyquist bandwidth, ideal Nyquist pulse
shapes, raised cosine pulse shape
Lecture 13 306
Course summary – cont’d
Baseband detection
Structure of the optimum receiver
Optimum detection (MAP)
Maximum likelihood detection for equally likely
symbols
Average bit error probability
Union bound on error probability
Lecture 13 307
Course summary – cont’d
Passband modulation
Modulation schemes
One dimensional waveforms (ASK, M-PAM)
Lecture 13 308
Course summary – cont’d
5. Trade-off between modulation and coding
Channel models
Discrete inputs, discrete outputs
Memoryless channels : BSC
Channels with memory
Discrete input, continuous output
AWGN channels
Shannon limits for information transmission rate
Comparison between different modulation and coding
schemes
Probability of error, required bandwidth, delay
Trade-offs between power and bandwidth
Uncoded and coded systems
Lecture 13 309
Information about the exam:
Exam date:
8th of March 2008 (Saturday)
Allowed material:
Any calculator (no computers)
Mathematics handbook
Swedish-English dictionary
A list of formulae that will be available with the
exam.
Lecture 13 310