Multilevel coding with iterative multistage decoding can improve error performance over conventional coded modulation schemes such as TCM. It uses multiple parallel encoders whose outputs jointly select the symbols for transmission. Decoding is done in stages, with later decoders using the decisions of earlier decoders; iterative refinement of the symbol decisions approaches maximum-likelihood performance at lower complexity. The rate of each encoder is chosen from the capacity of the corresponding level, and iterative decoding feeds soft symbol decisions back between decoding stages.


Multilevel Coding and

Iterative Multistage Decoding


ELEC 599 Project Presentation

Mohammad Jaber Borran

Rice University
April 21, 2000
Multilevel Coding
• A number of parallel encoders
• The outputs at each instant select one symbol

[Block diagram: data bits from the information source undergo an M-way partitioning; at level i, K_i data bits (stream q_i) enter encoder E_i (rate R_i), which produces the length-N coded sequence x_i; at each instant the M coded bits are mapped to a signal point of the 2^M-point constellation.]

  R = Σ_{i=1}^{M} R_i = (1/N) Σ_{i=1}^{M} K_i   bits/symbol
Distance Properties
• Minimum Hamming distance for encoder i: d_{H,i}; minimum Hamming distance between symbol sequences:

  d_H = min_{i ∈ {1, …, M}} d_{H,i}

• For TCM (because of the parallel transitions), d_H = 1
• MLC is a better candidate for coded modulation on fast fading channels
Probability of Error for Fading Channels
• Rayleigh fading with independent fading coefficients; Chernoff bound:

  P_e(c_i, c_j) ≤ 1 / [ (E_s / (4N_0))^{L'} · ∏_{k=1, d_k² ≠ 0}^{L} d_k²(c_i, c_j) ]

  – L': effective length of the error event (Hamming distance)
  – d_k(c_i, c_j): distance between the kth symbols of the two sequences
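A small sketch (my own, assuming numpy and complex baseband symbols) that evaluates this Chernoff bound for one pair of symbol sequences:

```python
import numpy as np

def chernoff_pep_bound(ci, cj, es_over_n0):
    """Pe(c_i, c_j) <= 1 / [ (Es/4N0)^L' * product of the nonzero d_k^2(c_i, c_j) ]."""
    d2 = np.abs(np.asarray(ci) - np.asarray(cj)) ** 2   # squared symbol distances d_k^2
    nonzero = d2[d2 > 0]
    L_eff = nonzero.size                                 # L': effective length (Hamming distance)
    return 1.0 / ((es_over_n0 / 4.0) ** L_eff * np.prod(nonzero))

# Example: two 8-PSK sequences differing in two symbols, at Es/N0 = 15 dB
ci = np.exp(2j * np.pi * np.array([0, 1, 2, 3]) / 8)
cj = np.exp(2j * np.pi * np.array([0, 5, 2, 7]) / 8)
print(chernoff_pep_bound(ci, cj, 10 ** (15 / 10)))
```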
Design Criterion for Fading Channels
• For a fast fading channel, or a slowly fading channel with interleaving/deinterleaving, design criterion (Divsalar):

  max_{c_1, c_2, …, c_n}  min_{i,j}  d_H(c_i, c_j)
  max_{c_1, c_2, …, c_n}  min_{i,j}  d_P(c_i, c_j),   where  d_P(c_i, c_j) = ∏_{k=1, d_k² ≠ 0}^{L} d_k²(c_i, c_j)

• For a slowly fading channel without interleaving/deinterleaving, design criterion:

  max_{c_1, c_2, …, c_n}  min_{i,j}  d_E(c_i, c_j)
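As a quick illustration of the quantities appearing in these criteria, a short sketch (assuming complex baseband symbols and numpy) computing the three distances for a pair of symbol sequences:

```python
import numpy as np

def sequence_distances(ci, cj):
    """Hamming distance d_H, product distance d_P, and squared Euclidean distance d_E^2."""
    d2 = np.abs(np.asarray(ci) - np.asarray(cj)) ** 2    # d_k^2(c_i, c_j)
    d_H = int(np.count_nonzero(d2))                      # number of differing symbols
    d_P = float(np.prod(d2[d2 > 0])) if d_H else 0.0     # product over nonzero d_k^2
    d_E2 = float(d2.sum())                               # sum of all d_k^2
    return d_H, d_P, d_E2
```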
Decoding Criterion

• For a fast fading channel, or a slowly fading channel with interleaving/deinterleaving:

  min_i  Σ_{k=1}^{L} |α_k|² d_k²(ỹ, c_i),   where  ỹ_k = y_k / α_k

  (α_k is the fading coefficient for the kth symbol)

  – Maximizes the likelihood function
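A sketch of this decoding metric (my own, assuming the fading coefficients α_k are known at the receiver and that candidate codewords are arrays of constellation points):

```python
import numpy as np

def fading_metric(y, alpha, c):
    """sum_k |alpha_k|^2 * |y_k/alpha_k - c_k|^2, equivalent to sum_k |y_k - alpha_k c_k|^2."""
    y_tilde = y / alpha                      # y~_k = y_k / alpha_k
    return float(np.sum(np.abs(alpha) ** 2 * np.abs(y_tilde - c) ** 2))

def decode(y, alpha, codebook):
    """Pick the codeword minimizing the metric (i.e., maximizing the likelihood)."""
    return min(codebook, key=lambda c: fading_metric(y, alpha, np.asarray(c)))
```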
Decoding

• Optimum decoder: maximum-likelihood decoder

• If the encoder memories are ν_1, ν_2, …, ν_M, the total number of states is 2^ν, where ν = ν_1 + ν_2 + … + ν_M.

• Complexity ⇒ need to look for suboptimum decoders
• If A and Y denote the transmitted and received symbol sequences respectively, using the chain rule for mutual information:

  I(Y; A) = I(Y; X_1, X_2, …, X_M)
          = I(Y; X_1) + I(Y; X_2 | X_1) + … + I(Y; X_M | X_1, X_2, …, X_{M-1})

• Suggests a rule for a low-complexity staged decoding procedure
Multistage Decoding
[Block diagram: the received sequence Y enters decoders D_1, D_2, …, D_M in sequence; decoder D_i outputs X̂_i.]

• At stage i, decoder D_i processes not only the sequence of received signal points Y, but also the decisions X̂_1, X̂_2, …, X̂_{i-1} of decoders D_j, j = 1, 2, …, i-1.

• The decoding at stage i is usually done in two steps:
  – point-in-subset decoding
  – subset decoding

• This method is not optimal in the maximum-likelihood sense, but it is asymptotically optimal at high SNR.
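Structurally, the multistage decoder is just a loop that passes earlier decisions forward. A sketch with a hypothetical interface (each level decoder is a function of Y and the previous decisions):

```python
def multistage_decode(y, level_decoders):
    """Stage i uses the received sequence y plus the decisions of stages 1..i-1."""
    decisions = []
    for decode_i in level_decoders:          # decoders D_1, ..., D_M in order
        x_hat_i = decode_i(y, decisions)     # subset / point-in-subset decoding given earlier levels
        decisions.append(x_hat_i)
    return decisions                         # [X^_1, ..., X^_M]
```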
Optimal Decoding
  M_i(x̂_1, …, x̂_{i-1}, x_i) = Σ_{a ∈ A_i(x̂_1, …, x̂_{i-1}, x_i)} f_{Y|A}(y | a) · Pr{a} / Σ_{b ∈ A_{i-1}(x̂_1, …, x̂_{i-1})} Pr{b}

  – A_i(x_1, …, x_i) is the subset determined by x_1, …, x_i
  – f_{Y|A}(y | a) is the transition probability (determined by the channel)
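For an AWGN channel this metric can be written out directly. A sketch under my own simplifying assumptions (per-symbol decisions, the subsets supplied as lists of points, and f_{Y|A} taken as the complex Gaussian density):

```python
import numpy as np

def f_y_given_a(y, a, n0):
    """Complex AWGN transition density with noise variance N0."""
    return np.exp(-np.abs(y - a) ** 2 / n0) / (np.pi * n0)

def stage_metric(y, subset_i, parent_prior, n0):
    """M_i = sum_{a in A_i} f(y|a) * Pr{a} / sum_{b in A_{i-1}} Pr{b}.
    subset_i: points of A_i(x^_1, ..., x^_{i-1}, x_i);
    parent_prior: dict point -> Pr{point} over the parent subset A_{i-1}(x^_1, ..., x^_{i-1})."""
    parent_mass = sum(parent_prior.values())
    return sum(f_y_given_a(y, a, n0) * parent_prior[a] for a in subset_i) / parent_mass
```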
Rate Design Criterion
[Block diagram: multistage decoder with decoders D_1, …, D_M producing X̂_1, …, X̂_M from Y, as above.]

• With the level capacities

  C_1 = I(Y; X_1)
  C_2 = I(Y; X_2 | X_1)
  ⋮
  C_M = I(Y; X_M | X_1, X_2, …, X_{M-1})

  the rate of the code at level i, R_i, should satisfy R_i ≤ C_i.
[Plot: capacity C and level capacities C_1, C_2 (bits/symbol) vs SNR (dB), -5 to 20 dB; two-level, 8-ASK, AWGN channel.]
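The level capacities can be estimated numerically. Below is a Monte Carlo sketch under my own simplified setup (4-ASK split into two one-bit levels rather than the 8-ASK case plotted here, real AWGN with noise power N0) estimating C_1 = I(Y;X_1) and C_2 = I(Y;X_2|X_1), against which the level rates R_i ≤ C_i would be checked.

```python
import numpy as np

points = np.array([-3.0, -1.0, 1.0, 3.0])      # 4-ASK; label = 2*x1 + x2 (x1 = coarse partition bit)
n0, n = 1.0, 200_000
rng = np.random.default_rng(1)

labels = rng.integers(0, 4, n)
x1 = labels >> 1
y = points[labels] + rng.normal(scale=np.sqrt(n0 / 2), size=n)

lik = np.exp(-(y - points[:, None]) ** 2 / n0)             # unnormalized p(y | point), shape (4, n)
cols = np.arange(n)

p_y = lik.mean(axis=0)                                      # p(y), equiprobable points
p_y_x1 = 0.5 * (lik[2 * x1, cols] + lik[2 * x1 + 1, cols])  # p(y | x1)
p_y_x1x2 = lik[labels, cols]                                # p(y | x1, x2)

C1 = np.mean(np.log2(p_y_x1 / p_y))                         # I(Y; X1)
C2 = np.mean(np.log2(p_y_x1x2 / p_y_x1))                    # I(Y; X2 | X1)
print(f"C1 ~ {C1:.3f} bits, C2 ~ {C2:.3f} bits, sum ~ {C1 + C2:.3f} bits/symbol")
```

With equiprobable points, C_1 + C_2 equals the constellation-constrained capacity I(Y;A), consistent with the chain-rule decomposition given earlier.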


Rate Design Criterion

• Using the multiple-access channel analogy, if optimal decoding is used:

  R_i ≤ I(Y; X_i | {X_k}_{k ≠ i})
  R_i + R_j ≤ I(Y; X_i, X_j | {X_k}_{k ≠ i,j})
  ⋮
  Σ_i R_i ≤ I(Y; X_1, …, X_M) = I(Y; A)

[Rate-region sketch for two levels: R_1 ranges between I(Y;X_1) and I(Y;X_1|X_2), R_2 between I(Y;X_2) and I(Y;X_2|X_1).]
[Plot: C, C_1, C_2, and I(Y;X_1|X_2) (bits/symbol) vs SNR (dB), -5 to 20 dB; two-level, 8-ASK, AWGN channel.]


Iterative Multistage Decoding
• Assuming
  – a two-level code,
  – R_1 ≤ I(Y; X_1 | X_2), and
  – decoder D_1 provides Pr{x_1 | x̂_1},

  then the a posteriori probabilities are

  Pr{a | x̂_1} = Pr{A_1(x_1) | x̂_1} · Pr{a | A_1(x_1)}
              = Pr{x_1 | x̂_1} · Pr{a} / Σ_{b ∈ A_1(x_1)} Pr{b}

• This expression can then be used as the a priori probability of point a for the second decoder.
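A sketch of this update with a hypothetical interface (the subsets A_1(x_1) given as lists of points, decoder D_1 supplying Pr{x_1 | x̂_1}, and prior holding the current point probabilities Pr{a}):

```python
def update_point_priors(prior, subsets, p_x1_given_xhat1):
    """Return Pr{a | x^_1} for every point a, used as a priori info by the second decoder."""
    new_prior = {}
    for x1, subset in subsets.items():                    # subset = A_1(x1)
        subset_mass = sum(prior[b] for b in subset)       # sum_b Pr{b} over A_1(x1)
        for a in subset:
            new_prior[a] = p_x1_given_xhat1[x1] * prior[a] / subset_mass
    return new_prior
```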
Probability Mass Functions

[Plots: probability mass functions of the signal points under error-free first-level decoding and under non-zero symbol error probability.]

[Plot: C, C_1, C_2, I(Y;X_1|X_2), and I(Y;X_2|partial X_1) (bits/symbol) vs SNR (dB), -5 to 20 dB; two-level, 8-ASK, AWGN channel.]


[Plot: C, C_1, C_2, I(Y;X_1|X_2), and I(Y;X_2|partial X_1) (bits/symbol) vs SNR (dB), -5 to 35 dB; two-level, 8-ASK, fast Rayleigh fading channel.]


[Plot: error probability vs SNR per bit for the overall, encoded, and uncoded levels; 8-PSK, 2-level, 4-state, uncoded, AWGN channel.]


[Plot: error probability vs SNR per bit for the overall, encoded, and uncoded levels; 8-PSK, 2-level, 4-state, uncoded, fast Rayleigh fading channel.]


[Plot: error probability vs SNR per bit for the overall, first-level, and second-level codes; 8-PSK, 2-level, 4-state, zero-sum, fast Rayleigh fading channel.]


[Plot: error probability vs SNR per bit for the overall, first-level, and second-level codes; 8-PSK, 2-level, 4-state, 2-state, fast Rayleigh fading channel.]


[Plot: error probability vs SNR per bit for the 4-state zero-sum code and the 4-state/2-state code after 1 and 2 iterations; 8-PSK, 2-level, fast Rayleigh fading channel.]


Higher Constellation Expansion Ratios

• For AWGN, CER is usually 2
  – Further expanding ⇒ smaller MSED ⇒ reduced coding gain

• For fading channels,
  – Further expanding ⇒ smaller product distance ⇒ reduced coding gain
  – Further expanding ⇒ larger Hamming distance ⇒ increased diversity gain
[Plot: error probability vs SNR per bit, 0 to 14 dB, for TCM with 8-PSK and the 2-level, 1-iteration scheme with 16-PSK.]
[Plot: error probability vs SNR per bit, 14 to 20 dB, for TCM with 8-PSK and the 2-level scheme with 16-PSK after 1 and 2 iterations.]
Conclusion
• Using iterative MSD with updated a priori probabilities after the first iteration, a broader subregion of the capacity region of the MLC scheme can be achieved.

• Lower-complexity multilevel codes can be designed to achieve the same performance.
• Coded modulation schemes with constellation
expansion ratio greater than two can achieve better
performance for fading channels.
Coding Across Time
• If channels are encoded separately, assuming
  – a slowly fading channel in each frequency bin, and
  – independent fades for different channels (interleaving/deinterleaving across frequency bins is used)

  Pr{c → ĉ | h} ≤ exp( − (E_s |h|² / (4N_0)) Σ_n |c_n − ĉ_n|² )

  E_h[ Pr{c → ĉ | h} ] ≤ 1 / ( 1 + (E_s / (4N_0)) Σ_n |c_n − ĉ_n|² )
Coding Across Frequency Bins
• If coding is performed across frequency bins, assuming independent fades for different channels (interleaving/deinterleaving across frequency bins is used)

  Pr{c → ĉ | h} ≤ exp( − (E_s / (4N_0)) Σ_n |h_n|² |c_n − ĉ_n|² )

  E_h[ Pr{c → ĉ | h} ] ≤ ∏_n 1 / ( 1 + (E_s / (4N_0)) |c_n − ĉ_n|² )
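A small numerical sketch (my own, not from the presentation) comparing the two averaged bounds for a single error event makes the diversity advantage of coding across frequency bins visible:

```python
import numpy as np

def avg_pep_across_time(d2, g):
    """1 / (1 + g * sum_n |c_n - c^_n|^2), g = Es/(4 N0): one fading coefficient per codeword."""
    return 1.0 / (1.0 + g * d2.sum())

def avg_pep_across_frequency(d2, g):
    """prod_n 1 / (1 + g * |c_n - c^_n|^2): independent fading per symbol."""
    return float(np.prod(1.0 / (1.0 + g * d2)))

d2 = np.array([2.0, 2.0, 0.0, 2.0])            # squared symbol distances of an error event
for snr_db in (10, 20, 30):
    g = 10 ** (snr_db / 10) / 4
    print(snr_db, avg_pep_across_time(d2, g), avg_pep_across_frequency(d2, g))
```

The across-time bound decays roughly as SNR^(-1), while the across-frequency bound decays as SNR^(-L') with L' the number of nonzero symbol distances.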
[Plot: error probability vs SNR per bit for coding across time and coding across frequency, each after 1 and 2 iterations; 8-PSK, 2-level, 4-state, 2-state.]
