
TURBO CODES

Turbo Codes
 Background
◦ Proposed by Berrou and Glavieux at the 1993
International Conference on Communications (ICC).
◦ Demonstrated performance within 0.5 dB of the
channel capacity limit for BPSK.
 Features of turbo codes
◦ Parallel concatenated coding
◦ Recursive convolutional encoders
◦ Pseudo-random interleaving
◦ Iterative decoding
Encoder
[Figure: Turbo encoder block diagram. The systematic input Xk passes directly to the systematic output Xk. The same input drives the “upper” RSC encoder, which produces the uninterleaved parity output Zk. The input also passes through an interleaver to give the interleaved input X’k, which drives the “lower” RSC encoder and produces the interleaved parity output Z’k.]
Recursive Systematic Convolutional Encoding
[Figure: A standard constraint length K = 3 convolutional encoder (input mi, two delay elements D, outputs xi(0) and xi(1)), and the RSC encoder obtained from it by feeding one output back to the input (feedback bit ri); the systematic output xi(0) equals the input.]
 An RSC encoder can be constructed from a standard convolutional encoder by feeding back one of the outputs.
 An RSC encoder has an infinite impulse response.
 An arbitrary input will cause a “good” (high weight) output with high probability.
 Some inputs will cause “bad” (low weight) outputs.
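
To make the structure concrete, here is a minimal Python sketch of a K = 3 RSC encoder. The generator choice (feedback 1 + D + D², forward 1 + D², i.e. the octal (7, 5) pair common in textbook examples) is an assumption; the slides do not specify the taps.

    def rsc_encode(bits):
        """Rate-1/2 recursive systematic convolutional (RSC) encoder, K = 3.

        Assumed generators (not given in the slides): feedback 1 + D + D^2,
        parity 1 + D^2. Returns (systematic, parity) bit lists.
        """
        s1 = s2 = 0                       # shift-register contents
        systematic, parity = [], []
        for b in bits:
            fb = b ^ s1 ^ s2              # feedback bit ri (recursive part)
            p = fb ^ s2                   # parity output from taps 1 + D^2
            systematic.append(b)          # systematic output equals the input
            parity.append(p)
            s2, s1 = s1, fb               # shift the register
        return systematic, parity

    # A single 1 followed by zeros keeps generating nonzero parity bits:
    # the infinite impulse response noted above.
    print(rsc_encode([1, 0, 0, 0, 0, 0]))  # parity: [1, 1, 1, 0, 1, 1]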
Pseudo Random Interleaver
▪ Random interleavers are constructed as block interleavers.
▪ The data positions are determined by a pseudo-random permutation.
▪ If the expected burst length b exceeds the error-correcting capability t of the code, an interleaver is used to break up the bursts (a sketch follows this list).
▪ In this work, an interleaver of depth 155 x 1240 is used.
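
A minimal sketch of a pseudo-random block interleaver in Python; the fixed seed and the flat (one-dimensional) layout are illustrative assumptions rather than details from this work.

    import numpy as np

    def make_interleaver(n, seed=0):
        """Pseudo-random interleaver: one fixed random permutation of n positions.

        Encoder and decoder must share the permutation, hence the fixed seed
        (an assumption made here for reproducibility).
        """
        rng = np.random.default_rng(seed)
        perm = rng.permutation(n)
        inv = np.empty(n, dtype=int)
        inv[perm] = np.arange(n)          # inverse permutation for deinterleaving
        return perm, inv

    perm, inv = make_interleaver(8)
    data = np.arange(8)
    interleaved = data[perm]              # data positions scattered pseudo-randomly
    assert np.array_equal(interleaved[inv], data)  # deinterleaving restores order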


Interleaving and Recursive Encoding
 In a coded system:
◦ Performance is dominated by low weight code words.
 A “good” code:
◦ will produce low weight outputs with very low probability.
 An RSC code:
◦ Produces low weight outputs with fairly low probability.
◦ However, some inputs still cause low weight outputs.
 Because of the interleaver:
◦ The probability that both encoders have inputs that cause low
weight outputs is very low.
◦ Therefore the parallel concatenation of both encoders will
produce a “good” code.
Iterative Decoding
[Figure: Iterative turbo decoder. A DeMUX splits the received stream into systematic data and the two parity streams. Decoder #1 computes APP values that are interleaved and passed to Decoder #2; Decoder #2’s APP values are deinterleaved and fed back to Decoder #1. The systematic data is interleaved before entering Decoder #2, and hard bit decisions are taken after the final iteration.]

 There is one decoder for each elementary encoder.


 Each decoder estimates the a posteriori probability (APP) of each
data bit.
 The APPs are used as a priori information by the other decoder.
 Decoding continues for a set number of iterations.
◦ Performance generally improves from iteration to iteration, but follows a
law of diminishing returns.
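
The iteration can be sketched as a simple loop. Here map_decode is a hypothetical stand-in for the constituent MAP (BCJR) decoder developed later in these slides; it is passed in as an argument and is expected to map (systematic LLRs, parity LLRs, a priori LLRs) to extrinsic LLRs.

    def turbo_decode(sys_llr, par1_llr, par2_llr, perm, inv, map_decode, n_iter=8):
        """Skeleton of the iterative exchange of APP/extrinsic information."""
        n = len(sys_llr)
        apriori = [0.0] * n                       # a priori L(d) starts at zero
        for _ in range(n_iter):                   # fixed number of iterations
            ext1 = map_decode(sys_llr, par1_llr, apriori)
            # Decoder 2 works in interleaved order: interleave both the
            # systematic LLRs and decoder 1's extrinsic output.
            ext2 = map_decode([sys_llr[p] for p in perm], par2_llr,
                              [ext1[p] for p in perm])
            apriori = [ext2[i] for i in inv]      # deinterleave, feed back
        # Final LLR: channel value plus both extrinsic contributions.
        total = [s + e1 + a for s, e1, a in zip(sys_llr, ext1, apriori)]
        return [1 if L > 0 else 0 for L in total] # hard bit decisions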
Apriori Probability Estimate
If ‘d’ is the data and ‘x’ is its noisy observation:

L(x|d) = LLR of the test statistic ‘x’ for transmitted data d = +1 or d = -1:
    L(x|d) = ln[ p(x | d = +1) / p(x | d = -1) ]
L(d) = a priori LLR of the data ‘d’:
    L(d) = ln[ P(d = +1) / P(d = -1) ]

Using Bayes’ rule, the a posteriori LLR L(d|x) = ln[ P(d = +1 | x) / P(d = -1 | x) ] can be rewritten as
    L(d|x) = L(x|d) + L(d)
Apriori Probability Estimate
Assume two observations x1 and x2 are available for a noisy data bit. If x1 and x2 are independent,
    L(d | x1, x2) = L(x1|d) + L(x2|d) + L(d)

Le(d): extrinsic information about the data obtained from decoding a parity bit related to the data.
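
A quick numerical check of this additivity for BPSK over AWGN, where L(x|d) reduces to Lc·x with Lc = 2/σ²; the noise variance and observation values below are made up for illustration.

    sigma2 = 0.5                     # assumed noise variance (illustrative)
    Lc = 2 / sigma2                  # channel reliability for BPSK over AWGN

    def llr_given_d(x):
        """L(x|d) = ln p(x|d=+1) - ln p(x|d=-1) for x = d + n, n ~ N(0, sigma2)."""
        return ((x + 1) ** 2 - (x - 1) ** 2) / (2 * sigma2)  # simplifies to Lc*x

    x1, x2 = 0.9, 1.2                # two independent observations of one bit
    L_total = llr_given_d(x1) + llr_given_d(x2) + 0.0       # L(d) = 0 a priori
    print(L_total, Lc * (x1 + x2))   # equal: independent LLRs simply add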
Product code example

ph: horizontal parity bits
pv: vertical parity bits

Consider the following example:

    d1 = 1    d2 = 0    p12 = 1
    d3 = 0    d4 = 1    p34 = 1
    p13 = 1   p24 = 1

Parity bit pij = di ⊕ dj, or equivalently di = pij ⊕ dj.

Let xi = di + n be a received data bit and xij = pij + n be a received parity bit.
In bipolar form, d1 ⊕ d2 = +1 when d1 ≠ d2; else d1 ⊕ d2 = -1.
The LLR of the modulo-2 sum of two bits can be approximated (written ⊞) as
    L(d1) ⊞ L(d2) = L(d1 ⊕ d2) ≈ (-1) · sgn[L(d1)] · sgn[L(d2)] · min(|L(d1)|, |L(d2)|)

Example: extrinsic information for data bit d1 can be obtained from the observations of d2 and p12:
    Leh(d1) = [Lc(x2) + L(d2)] ⊞ Lc(x12)
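
This approximation is one line of code; boxplus is a common nickname for the ⊞ operation, not a name from the slides.

    def boxplus(L1, L2):
        """Approximate LLR of d1 ⊕ d2, using the slide's sign convention
        (bipolar value +1 when the two bits differ)."""
        sgn = lambda v: 1.0 if v >= 0 else -1.0
        return -sgn(L1) * sgn(L2) * min(abs(L1), abs(L2))

    # Extrinsic information for d1 from its partner d2 and parity p12:
    # Leh(d1) = boxplus(Lc(x2) + L(d2), Lc(x12))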
For the example product code, the iterative computation of soft values is shown below.

Apriori L(d) values are initially ‘0’.

Horizontal extrinsic computations (row partner dj, row parity pij):
    Leh(di) = [Lc(xj) + L(dj)] ⊞ Lc(xij)

Vertical extrinsic computations (column partner dj, column parity pij, with L(dj) updated by the horizontal extrinsic values):
    Lev(di) = [Lc(xj) + L(dj)] ⊞ Lc(xij)
     1.5   0.1       -0.1  -1.5        1.4  -1.4
     0.2   0.3       -0.3  -0.2       -0.1   0.1

    Initial Lc(xk)   Leh(d) after 1st   Improved LLR
    values           horizontal         (Lc + Leh)
                     decoding

     0.1  -0.1        1.5  -1.5
    -1.4   1.0       -1.5   1.1

    Lev(d) after 1st   Improved LLR
    vertical decoding  (horizontal + vertical)
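
Putting the pieces together, the sketch below (reusing boxplus from above) reproduces these tables. The parity-bit channel LLRs did not survive in the text, so the values used here, Lc(x12) = 2.5, Lc(x34) = 2.0, Lc(x13) = 6.0, Lc(x24) = 1.0, are assumptions chosen to be consistent with the extrinsic values shown.

    # Channel LLRs for the four data bits (from the table) and the four
    # parity bits (assumed -- see above). Requires boxplus() defined earlier.
    Ld = {1: 1.5, 2: 0.1, 3: 0.2, 4: 0.3}
    Lp = {(1, 2): 2.5, (3, 4): 2.0, (1, 3): 6.0, (2, 4): 1.0}
    apriori = {k: 0.0 for k in Ld}            # a priori L(d) initially 0

    def extrinsic(i, j, pair):
        """Extrinsic information for bit i from partner j and their parity."""
        return boxplus(Ld[j] + apriori[j], Lp[pair])

    # Horizontal pass: row partners (d1, d2) and (d3, d4).
    Leh = {1: extrinsic(1, 2, (1, 2)), 2: extrinsic(2, 1, (1, 2)),
           3: extrinsic(3, 4, (3, 4)), 4: extrinsic(4, 3, (3, 4))}
    apriori = Leh                              # becomes a priori for the next pass
    print(Leh)   # ~ {1: -0.1, 2: -1.5, 3: -0.3, 4: -0.2}

    # Vertical pass: column partners (d1, d3) and (d2, d4).
    Lev = {1: extrinsic(1, 3, (1, 3)), 3: extrinsic(3, 1, (1, 3)),
           2: extrinsic(2, 4, (2, 4)), 4: extrinsic(4, 2, (2, 4))}
    print(Lev)   # ~ {1: 0.1, 2: -0.1, 3: -1.4, 4: 1.0}

    # Improved LLR after one horizontal + vertical iteration:
    print({k: Ld[k] + Leh[k] + Lev[k] for k in Ld})
    # ~ {1: 1.5, 2: -1.5, 3: -1.5, 4: 1.1}  (up to float rounding)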
Turbo decoder
[Figure: Turbo decoder structure. MAP decoder 1 operates on the received systematic bits d’k and parity p’1k and produces extrinsic information L1e(dk); this is interleaved (P) and used as a priori input by MAP decoder 2, which operates on the interleaved systematic bits and parity p’2k. The extrinsic output L2e(dk) of decoder 2 is deinterleaved (P⁻¹) and fed back to decoder 1.]
For a convolutional coder
[Figure: One trellis section with states S0…S3 at times k-1 and k; pi,j denotes the transition probability from state Si to state Sj (e.g. p0,0 and p3,3).]

S+: set of all possible state transitions (s’, s) associated with a data bit dk = 1
S-: set of all possible state transitions (s’, s) associated with a data bit dk = 0

The LLR of dk is obtained by summing the joint probability p(s’, s, x) of a transition and the observed sequence over these two sets:
    L(dk) = ln[ Σ_(s’,s)∈S+ p(s’, s, x) / Σ_(s’,s)∈S- p(s’, s, x) ]
3,3
(s3) (s3)

(s2) (s2) : Probability of arriving at a branch


in a particular state and sequence of
(s1) (s1) observations are given out

(s0) 0,0 (s0)


By summing over all paths leading into that state, we get
a forward recursion for calculating

k-1(s3) 3,3 k(s3)

k-1(s1)

To initialize, we know all coders start with initial state s 0
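
A numpy sketch of this forward recursion; it assumes the branch metrics are supplied as an (N, S, S) array gamma, with gamma[k, s_prev, s] as derived later, and adds a scaling step (not in the slides) to avoid numerical underflow.

    import numpy as np

    def forward_recursion(gamma, s0=0):
        """alpha[k, s] = p(state s at time k, observations x1..xk) (scaled)."""
        N, S, _ = gamma.shape
        alpha = np.zeros((N + 1, S))
        alpha[0, s0] = 1.0                     # all coders start in state s0
        for k in range(N):
            alpha[k + 1] = alpha[k] @ gamma[k] # sum over predecessor states s'
            alpha[k + 1] /= alpha[k + 1].sum() # scaling (practical addition)
        return alpha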


βk(s): probability of exiting a branch through a particular state ‘s’, giving out the remaining observations:
    βk(s) = p(xk+1, …, xN | s)

By summing over all paths exiting that state, we get the backward recursion formula:
    βk(s) = Σ_s’ γk+1(s, s’) · βk+1(s’)

[Figure: Trellis section illustrating the backward recursion from βk+1(s3) and βk+1(s2) back to βk(s3) via the branch probability γ3,3.]
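
The matching backward pass, under the same assumed gamma layout and with the termination condition from the next slide:

    def backward_recursion(gamma, sN=0):
        """beta[k, s] = p(future observations | state s at time k) (scaled)."""
        N, S, _ = gamma.shape
        beta = np.zeros((N + 1, S))
        beta[N, sN] = 1.0                      # trellis terminated in state s0
        for k in range(N - 1, -1, -1):
            beta[k] = gamma[k] @ beta[k + 1]   # sum over successor states s'
            beta[k] /= beta[k].sum()           # scaling (practical addition)
        return beta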
Considering coder 1 and coder 2 terminated with state s0:
    βN(s0) = 1,  βN(s) = 0 for s ≠ s0

uk: event that there is a transition from state s’ to s at step k (i.e. the data bit driving that transition).

Let γk(s’, s) = p(s, xk | s’) be the branch metric. Then the joint probability of a transition and the observed sequence factors as
    p(s’, s, x) = αk-1(s’) · γk(s’, s) · βk(s)
We know that the branch metric factors into an a priori term and a channel term:
    γk(s’, s) = P(uk) · p(xk | uk)

Then the a priori probability can be written in terms of the LLR L(uk) as
    P(uk) = [ e^(-L(uk)/2) / (1 + e^(-L(uk))) ] · e^(uk·L(uk)/2)
or
    P(uk) = Ak · e^(uk·L(uk)/2)                ……(1)

Ak: constant w.r.t. uk.

xk = (xk_s, xk_p) means two observations for a coder:
    xk_s - systematic
    xk_p - parity
For the AWGN channel,
    p(xk | uk) = Bk · e^(Lc·xk_s·uk/2) · e^(Lc·xk_p·vk/2)    …(2)
where vk is the parity bit on the transition and Bk is constant w.r.t. uk.

Substituting (1) and (2) into L(dk): since terms independent of uk come out of the summations, they cancel in the numerator and denominator. Therefore

    L(dk) = Lc·xk_s + L(uk) + Le(uk)

    L(uk): a priori information
    Le(uk): information through the parity constraint,
    Le(uk) = ln[ Σ_S+ αk-1(s’) e^(Lc·xk_p·vk/2) βk(s) / Σ_S- αk-1(s’) e^(Lc·xk_p·vk/2) βk(s) ]
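
Finally, a sketch of the per-bit LLR computation that ties γ, α and β together, reusing forward_recursion and backward_recursion from above. The trellis is assumed to be described by a list transitions of (s_prev, s, u, v) tuples with the data bit u and parity bit v in bipolar {+1, -1} form; this representation is an assumption, not slide content.

    def map_llr(sys_llr, par_llr, apriori, transitions, S):
        """A posteriori LLRs L(dk) for one constituent code (BCJR sketch).

        sys_llr[k] = Lc*xk_s, par_llr[k] = Lc*xk_p, apriori[k] = L(uk).
        """
        N = len(sys_llr)
        gamma = np.zeros((N, S, S))
        for k in range(N):
            for sp, s, u, v in transitions:
                # gamma up to factors constant in uk (they cancel in the ratio):
                # exp(u*L(uk)/2) * exp(u*Lc*xk_s/2) * exp(v*Lc*xk_p/2)
                gamma[k, sp, s] = np.exp(0.5 * (u * (apriori[k] + sys_llr[k])
                                                + v * par_llr[k]))
        alpha = forward_recursion(gamma)
        beta = backward_recursion(gamma)
        llr = []
        for k in range(N):
            num = sum(alpha[k, sp] * gamma[k, sp, s] * beta[k + 1, s]
                      for sp, s, u, v in transitions if u == +1)   # set S+
            den = sum(alpha[k, sp] * gamma[k, sp, s] * beta[k + 1, s]
                      for sp, s, u, v in transitions if u == -1)   # set S-
            llr.append(float(np.log(num / den)))
        # Extrinsic part for the turbo loop: llr[k] - sys_llr[k] - apriori[k].
        return llr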
