
Turbo Codes for PCS Applications

D. Divsalar and F. Pollara*

Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109

*The research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

ABSTRACT: Turbo codes are the most exciting and potentially important development in coding theory in many years. They were introduced in 1993 by Berrou, Glavieux and Thitimajshima [1], and claimed to achieve near Shannon-limit error correction performance with relatively simple component codes and large interleavers. A required Eb/N0 of 0.7 dB was reported for a BER of 10^-5 and a code rate of 1/2 [1]. However, some important details that are necessary to reproduce these results were omitted. This paper confirms the accuracy of these claims, and presents a complete description of an encoder/decoder pair that could be suitable for PCS applications. We describe a new simple method for trellis termination, we analyze the effect of interleaver choice on the weight distribution of the code, and we introduce the use of unequal rate component codes, which yields better performance. Turbo codes are extended to encoders with multiple codes, and a suitable decoder structure is developed which is substantially different from the decoder for two-code based encoders.

I. INTRODUCTION

Coding theorists have traditionally attacked the problem of designing good codes by developing codes with a lot of structure, which lends itself to feasible decoders, although coding theory suggests that codes chosen "at random" should perform well if their block size is large enough. The challenge of finding practical decoders for "almost" random, large codes was not seriously considered until recently. Perhaps the most exciting and potentially important development in coding theory in recent years has been the dramatic announcement of "turbo codes" by Berrou et al. in 1993 [1]. The announced performance of these codes was so good that the initial reaction of the coding establishment was deep skepticism, but recently researchers around the world have been able to reproduce those results [3]-[4]. The introduction of turbo codes has opened a whole new way of looking at the problem of constructing good codes and decoding them with low complexity.

These codes are claimed to achieve near Shannon-limit error correction performance with relatively simple component codes and large interleavers. A required Eb/N0 of 0.7 dB was reported for a BER of 10^-5 [1]. However, some important details that are necessary to reproduce these results were omitted. The purpose of this paper is to shed some light on the accuracy of these claims, and to present a complete description of an encoder/decoder pair that could be suitable for PCS applications, where lower rate codes can be used.

For example, in multiple-access schemes like CDMA the capacity (maximum number of users per cell) can be expressed as C = η/(Eb/N0) + 1, where η is the processing gain and Eb/N0 is the required signal-to-noise ratio to achieve a desired bit error rate (BER) performance. For a given BER, a smaller required Eb/N0 implies a larger capacity or cell size. Unfortunately, to reduce Eb/N0 it is necessary to use very complex codes (e.g., large constraint length convolutional codes). In this paper, we design turbo codes suitable for CDMA and PCS applications that can achieve superior performance with limited complexity. For example, if a (7,1/2) convolutional code is used at BER = 10^-3, the capacity is C = 0.5η. However, if two (5,1/3) punctured convolutional codes or three (4,1/3) punctured codes are used in a turbo encoder structure, the capacity can be increased to C = 0.8η (with 192-bit and 256-bit interleavers, which correspond to 9.6 Kbps and 13 Kbps with roughly 20 ms frames). Higher capacity can be obtained with larger interleavers. Note that low rate codes can be used for CDMA since an integer number of chips per coded symbol is used and bandwidth is defined mainly by the chip rate.
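The arithmetic behind these capacity figures is simple. The short sketch below (ours, not from the paper) converts a required Eb/N0 in dB into the capacity per unit processing gain implied by C = η/(Eb/N0) + 1, neglecting the additive 1; roughly 3 dB corresponds to the quoted C = 0.5η and roughly 1 dB to C = 0.8η (the operating point reported later in Sec. IV).

```python
def capacity_per_eta(ebno_db: float) -> float:
    """C/eta implied by C = eta/(Eb/N0) + 1, neglecting the additive 1 (valid for large eta)."""
    ebno_linear = 10 ** (ebno_db / 10.0)
    return 1.0 / ebno_linear

# ~3 dB gives the C = 0.5*eta quoted for the (7,1/2) convolutional code at BER = 1e-3;
# ~1 dB gives the C = 0.8*eta reported for the turbo codes in Sec. IV.
for ebno_db in (3.0, 1.0):
    print(f"Eb/N0 = {ebno_db:.1f} dB  ->  C = {capacity_per_eta(ebno_db):.2f} * eta")
```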
Three new contributions are reported in this paper: a new simple method for trellis termination, the use of unequal rate component codes, which results in better performance, and the development of decoders for multiple-code encoders. The original turbo decoder scheme operates in serial mode, while for multiple-code encoders we found that the decoder for the whole turbo code based on the optimum MAP rule must operate in parallel mode, and we derived the appropriate metric, as illustrated in Sec. III.

II. PARALLEL CONCATENATION OF CONVOLUTIONAL CODES

The codes considered in this paper consist of the parallel concatenation of multiple convolutional codes with random interleavers (permutations) at the input of each encoder. Fig. 1 illustrates a particular example that will be used in this paper to verify the performance of these codes. The encoder contains three recursive binary convolutional encoders with M1, M2 and M3 memory cells, respectively. In general, the three component encoders may not be identical. The first component encoder operates directly on the information bit sequence u = (u1, ..., uN) of length N, producing the two output sequences y1r and y1p. The second component encoder operates on a reordered sequence of information bits u2 produced by an interleaver π2 of length N, and outputs the sequence y2p. Similarly, subsequent component encoders operate on a reordered sequence of information bits uj produced by interleaver πj and output the sequence yjp. The interleaver is a pseudo-random block scrambler defined by a permutation of N elements with no repetitions: a complete block is read into the interleaver and read out in a specified (fixed) random order. The same interleaver is used repeatedly for all subsequent blocks.

[Figure 1: Example of encoder with three codes]

Figure 1 shows an example where a rate r = 1/n = 1/4 code is generated by three component codes with M1 = M2 = M3 = M = 2, producing the outputs y1r = u, y1p = u · gb/ga, y2p = u2 · gb/ga, and y3p = u3 · gb/ga, where the generator polynomials ga and gb have octal representations (7)octal and (5)octal, respectively. Note that various code rates can be obtained by proper puncturing of y1p, y2p and y3p. The design of the constituent convolutional codes, which are not necessarily optimum convolutional codes, is still under investigation. It was suggested in [5] that good codes are obtained if gb is a primitive polynomial.
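To make the structure of Fig. 1 concrete, here is a minimal Python sketch (ours, not the paper's) of the rate 1/4 example: three identical two-memory-cell recursive systematic encoders with feedback ga = (7)octal and feedforward gb = (5)octal, the second and third fed through interleavers. Plain random permutations stand in for the interleavers, puncturing and tail bits are omitted, and the helper names rsc_parity and turbo_encode are ours.

```python
import random

M = 2  # memory cells per component encoder in the Fig. 1 example

def rsc_parity(bits):
    """Parity stream of the recursive systematic code with feedback g_a = (7)octal = 1+D+D^2
    and feedforward g_b = (5)octal = 1+D^2 (the systematic output is the input itself)."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2        # feedback bit: g_a taps on input and register
        parity.append(a ^ s2)  # parity bit: g_b taps (1 and D^2)
        s1, s2 = a, s1         # shift the register
    return parity

def turbo_encode(u, perms):
    """Rate 1/(1+len(perms)) parallel concatenation: one systematic stream y_1r = u plus
    one parity stream per component encoder, each fed the permuted input (no tail bits)."""
    streams = [list(u)]
    for pi in perms:
        streams.append(rsc_parity([u[i] for i in pi]))
    return streams

N = 16
u = [random.randint(0, 1) for _ in range(N)]
perms = [list(range(N)),                # encoder 1 sees the data directly
         random.sample(range(N), N),    # pi_2
         random.sample(range(N), N)]    # pi_3
y1r, y1p, y2p, y3p = turbo_encode(u, perms)
print(f"{N} information bits -> {sum(map(len, (y1r, y1p, y2p, y3p)))} coded bits (rate 1/4)")
```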


Trellis Termination - We use the encoder in Fig. 1 to generate an (n(N + M), N) block code, where the M tail bits of code 2 and code 3 are not transmitted. Since the component encoders are recursive, it is not sufficient to set the last M information bits to zero in order to drive the encoder to the all-zero state, i.e. to terminate the trellis. The termination (tail) sequence depends on the state of each component encoder after N bits, which makes it impossible to terminate both component encoders with just M bits. This issue has not been resolved in previously proposed turbo code implementations. Fortunately, the simple stratagem illustrated in Fig. 2 is sufficient to terminate the trellis at the end of the block. (The specific code shown is not important.) Here the switch is in position "A" for the first N clock cycles and is in position "B" for M additional cycles, which will flush the encoders with zeros. The decoder does not assume knowledge of the M tail bits. The same termination method is used for all encoders.

[Figure 2: Trellis Termination]
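A minimal sketch of the termination stratagem, assuming the same (5/7)octal component code as above: with the switch in position "B" the encoder input is taken equal to the feedback term, so the effective register input is zero and the M = 2 memory cells are flushed in M clock cycles. The class and method names are hypothetical, for illustration only.

```python
class RSCEncoder:
    """Two-memory-cell recursive systematic encoder, feedback (7)octal, feedforward (5)octal."""

    def __init__(self):
        self.s1 = self.s2 = 0

    def clock(self, u):
        """Switch in position 'A': encode one information bit, return (systematic, parity)."""
        a = u ^ self.s1 ^ self.s2
        p = a ^ self.s2
        self.s1, self.s2 = a, self.s1
        return u, p

    def terminate(self):
        """Switch in position 'B' for M = 2 extra cycles: the input is chosen equal to the
        feedback term, so the register input is zero and the encoder returns to state 0."""
        tail = []
        for _ in range(2):
            u = self.s1 ^ self.s2       # cancels the feedback
            tail.append(self.clock(u))
        assert (self.s1, self.s2) == (0, 0), "trellis not terminated"
        return tail

enc = RSCEncoder()
coded = [enc.clock(u) for u in (1, 0, 1, 1, 0, 1)]
print("tail symbols:", enc.terminate(), "final state:", (enc.s1, enc.s2))
```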
Weight Distribution - In order to estimate the performance of a code it is necessary to have information about its minimum distance, weight distribution, or actual code geometry, depending on the accuracy required for the bounds or approximations. The challenge is in finding the pairing of codewords from each individual encoder, induced by a particular set of interleavers. Intuitively, we would like to avoid joining low-weight codewords from one encoder with low-weight words from the other encoders. In the example of Fig. 1, the component codes have minimum distances 5, 2 and 2. This will produce a worst-case minimum distance of 9 for the overall code. Note that this would be unavoidable if the encoders were not recursive since, in this case, the minimum weight word for all three encoders is generated by the input sequence u = (00...0000100...000) with a single "1", which will appear again in the other encoders, for any choice of interleavers. This motivates the use of recursive encoders, where the key ingredient is the recursiveness and not the fact that the encoders are systematic. For our example, the input sequence u = (00...00100100...000) generates a low-weight codeword with weight 6 for the first encoder. If the interleavers do not "break" this input pattern, the resulting codeword's weight will be 14. In general, weight-2 sequences with 2 + 3t zeros separating the two 1's would result in a total weight of 14 + 6t if there were no permutations.

With permutations before the second and third encoders, a weight-2 sequence with its 1's separated by 2 + 3t1 zeros will be permuted into two other weight-2 sequences with 1's separated by 2 + 3ti zeros, i = 2, 3, where each ti is defined as a multiple of 1/3. If any ti is not an integer, the corresponding encoded output will have a high weight, because then the convolutional code output is non-terminating (until the end of the block). If all ti's are integers, the total encoded weight will be 14 + 2 Σ_{i=1}^{3} ti. Thus, one of the considerations in designing the interleaver is to avoid integer triplets (t1, t2, t3) that are simultaneously small in all three components. In fact, it would be nice to design an interleaver that guarantees that the smallest value of Σ_{i=1}^{3} ti (for integer ti) grows with the block size N.

For comparison, we consider the same encoder structure in Fig. 1, except with the roles of ga and gb reversed. Now the minimum distances of the three component codes are 5, 3, and 3, producing an overall minimum distance of 11 for the total code without any permutations. This is apparently a better code, but it turns out to be inferior as a turbo code. This paradox is explained by again considering the critical weight-2 data sequences. For this code, weight-2 sequences with 1 + 2t1 zeros separating the two 1's produce terminating output and hence low-weight encoded words. In the turbo encoder, such sequences will be permuted to have separations 1 + 2ti, i = 2, 3, for the second and third encoders, where now each ti is defined as a multiple of 1/2. But now the total encoded weight for integer triplets (t1, t2, t3) is 11 + Σ_{i=1}^{3} ti. Notice how this weight grows only half as fast with ti as the previously calculated weight for the original code. If Σ_{i=1}^{3} ti can be made to grow with block size by proper choice of interleaver, then clearly it is important to choose component codes that cause the overall weight to grow as fast as possible with the individual separations ti. This consideration outweighs the criterion of selecting component codes that would produce the highest minimum distance if unpermuted.

There are also many weight-n, n = 3, 4, 5, ..., data sequences that produce terminating output and hence low encoded weight. However, as argued below, these sequences are much more likely to be broken up by the random interleavers than the weight-2 sequences, and are therefore likely to produce non-terminating output from at least one of the encoders. Thus, turbo code structures which have low minimum distances (if unpermuted) due strictly to higher-weight input sequences are often superior to other turbo codes with higher unpermuted minimum distances that are caused by weight-2 input sequences.

Weight Distribution with Random Interleavers - Now we briefly examine the issue of whether one or more random interleavers can avoid matching small separations between the 1's of a weight-2 data sequence with equally small separations between the 1's of its permuted version(s). Consider for example a particular weight-2 data sequence (...001001000...), which corresponds to a low-weight codeword in each of the encoders of Fig. 1. If we randomly select an interleaver of size N, the probability that this sequence will be permuted into another sequence of the same form is roughly 2/N (assuming that N is large, and ignoring minor edge effects). The probability that such an unfortunate pairing happens for at least one possible position of the original sequence (...001001000...) within the block of size N is approximately 1 - (1 - 2/N)^N ≈ 1 - e^{-2}. This implies that the minimum distance of a two-code turbo code constructed with a random permutation is not likely to be much higher than the encoded weight of such an unpermuted weight-2 data sequence, e.g. 14 for the code in Fig. 1. (For the worst-case permutations, the dmin of the code is still 9, but these permutations are highly unlikely if chosen randomly.) By contrast, if we use three codes and two different interleavers, the probability that a particular sequence (...001001000...) will be reproduced by both interleavers is only (2/N)^2. Now the probability of finding such an unfortunate data sequence somewhere within the block of size N is roughly 1 - [1 - (2/N)^2]^N ≈ 4/N. Thus it is probable that a three-code turbo code using two random interleavers will see an increase in its minimum distance beyond the encoded weight of an unpermuted weight-2 data sequence. This argument can be extended to account for other weight-2 data sequences which may also produce low-weight codewords, e.g. (...00100(000)^t 1000...), for the code in Fig. 1.

For comparison, let us consider a weight-3 data sequence such as (...0011100...), which for our example corresponds to the minimum distance of the code (using no permutations). The probability that this sequence is reproduced by one random interleaver is roughly 6/N^2, and the probability that some sequence of the form (...0011100...) is paired with another of the same form is 1 - (1 - 6/N^2)^N ≈ 6/N. Thus, for large block sizes, the bad weight-3 data sequences have a small probability of being matched with bad weight-3 permuted data sequences, even in a two-code system. For a turbo code using q codes and q - 1 random interleavers this probability is even smaller, 1 - [1 - (6/N^2)^(q-1)]^N ≈ N (6/N^2)^(q-1). This implies that the minimum distance codeword of the turbo code in Fig. 1 is more likely to result from a weight-2 data sequence of the form (...001001000...) than from the weight-3 sequence (...0011100...) that produces the minimum distance in the unpermuted version of the same code. Higher weight sequences have an even smaller probability of reproducing themselves after being passed through a random interleaver.

For a turbo code using q codes and q - 1 interleavers, the probability that a weight-n data sequence will be reproduced somewhere within the block by all q - 1 permutations is of the form 1 - [1 - (β/N^(n-1))^(q-1)]^N, where β is a number that depends on the weight-n data sequence but does not increase with block size N. For large N, this probability is proportional to (1/N)^(nq-n-q), which falls off rapidly with N when n and q are greater than two. Furthermore, the symmetry of this expression indicates that increasing either the weight of the data sequence n or the number of codes q has roughly the same effect on lowering this probability.

In summary, from the above arguments we conclude that weight-2 data sequences are an important factor in the design of the component codes, and that higher weights have decreasing importance. Also, increasing the number of codes may result in better turbo codes. More accurate results and derivations are discussed in [6].

The minimum distance is not the most important quantity of the turbo code, except for its asymptotic performance at very high Eb/N0. At moderate SNRs, the weight distribution for the first several possible weights is necessary to compute the code performance. Estimating the complete weight distribution of these codes for large N and fixed interleavers is still an open problem. However, it is possible to estimate the weight distribution for large N for random interleavers by using probabilistic arguments. (See [4] for further considerations on the weight distribution.)
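The matching probabilities above are easy to check numerically. The short sketch below (ours) evaluates 1 - (1 - p^(q-1))^N for a weight-2 pattern with per-position probability p = 2/N, comparing the two-code case (one interleaver) with the three-code case (two interleavers) for the block sizes used later in the paper.

```python
def p_match(N: int, per_position: float, num_interleavers: int) -> float:
    """Probability that some shift of a bad input pattern is reproduced, in the same form,
    by every one of the random interleavers (union over the N possible positions)."""
    return 1.0 - (1.0 - per_position ** num_interleavers) ** N

for N in (256, 4096, 16384):
    two_codes = p_match(N, 2.0 / N, 1)    # one interleaver: ~ 1 - e^-2, independent of N
    three_codes = p_match(N, 2.0 / N, 2)  # two interleavers: ~ 4/N, vanishes with block size
    print(f"N = {N:5d}:  two codes {two_codes:.3f},  three codes {three_codes:.5f}")
```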
Interleaver Design - Interleavers should be capable of spreading low-weight input sequences so that the resulting codeword has high weight. Block interleavers, defined by a matrix with νr rows and νc columns such that N = νr × νc, may fail to spread certain sequences. For example, the weight-4 sequence shown in Fig. 3 cannot be broken by a block interleaver. In order to break such sequences, random interleavers are desirable. (A method for the design of interleavers is discussed in [3].)

[Figure 3: Example where a block interleaver fails to "break" the input sequence.]

Block interleavers are effective if the low-weight sequence is confined to a row. If low-weight sequences (which can be regarded as the combination of lower weight sequences) are confined to several consecutive rows, then the νc columns of the interleaver should be sent in a specified order to spread the low-weight sequence as much as possible. A method for reordering the columns is given in [8]. This method guarantees that, for any number of columns νc = aq + r (r ≤ a - 1), the minimum separation between data entries is q - 1, where a is the number of columns affected by a burst. However, as can be observed in the example in Fig. 3, the sequence 1001 will still appear at the input of the encoders for any possible column permutation. Only if we permute the rows of the interleaver in addition to its columns is it possible to break the low-weight sequences. The method in [8] can be used again for the permutation of rows. Appropriate selection of a and q for rows and columns depends on the particular set of codes used and on the specific low-weight sequences that we would like to break.

We designed random permutations (interleavers) by generating random integers i, 1 ≤ i ≤ N, without replacement. We define an "S-random" permutation as follows: each randomly selected integer is compared to the S previously selected integers; if the current selection is within a distance of ±S of any of those S previous selections, it is rejected. This process is repeated until all N integers are selected. While the searching time increases with S, we observed that choosing S < √(N/2) usually produces a solution in reasonable time. (For S = 1 we have a purely random interleaver.) In the simulations we used S = 11 for N = 256 and S = 31 for N = 4096.

The advantage of using three or more constituent codes is that the corresponding two or more interleavers have a better chance of breaking sequences that were not taken care of by another interleaver. The disadvantage is that, for an overall desired code rate, each code must be punctured more, resulting in weaker constituent codes. In our experiments, we have used randomly selected interleavers and interleavers based on the row-column permutation described above. In general, randomly selected permutations are good for low SNR operation (e.g., PCS applications), where the overall weight distribution of the code is more important than the minimum distance.
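A sketch of the S-random construction just described, assuming the rejection test is applied against the S most recently accepted integers and that a fresh random ordering is tried whenever the greedy pass gets stuck; the function name and the restart loop are ours.

```python
import random

def s_random_interleaver(N, S, max_restarts=100, seed=None):
    """S-random permutation of range(N): each accepted integer must differ by more than S
    from each of the S most recently accepted integers."""
    rng = random.Random(seed)
    for _ in range(max_restarts):
        candidates = list(range(N))
        rng.shuffle(candidates)
        perm = []
        while candidates:
            for idx, cand in enumerate(candidates):
                if all(abs(cand - prev) > S for prev in perm[-S:]):
                    perm.append(candidates.pop(idx))
                    break
            else:
                break                      # dead end: no admissible integer left, restart
        if len(perm) == N:
            return perm
    raise RuntimeError("no S-random permutation found; try a smaller S")

# Parameters used in the paper's simulations: S = 11 for N = 256, S = 31 for N = 4096.
pi = s_random_interleaver(256, 11)
print(len(set(pi)) == 256, pi[:8])
```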
III. TURBO DECODING CONFIGURATION

The turbo decoding configuration proposed in [1] for two codes is shown schematically in Fig. 4. This configuration operates in serial mode, i.e. "Dec1" processes data before "Dec2" starts its operation, and so on.

[Figure 4: Decoding structure for two codes.]

An obvious extension of this configuration to three codes is shown in Fig. 5(a), which also operates in serial mode. But with more than two codes there are other possible configurations, such as that shown in Fig. 5(b), where "Dec1" communicates with the other decoders, but these decoders do not exchange information among each other. This "Master & Slave" configuration operates in a mixed serial-parallel mode, since all decoders except the first operate in parallel. Another possibility, shown in Fig. 5(c), is that all decoders operate in parallel at any given time. Note that self-loops are not allowed in these structures since they cause degradation or divergence in the decoding process (positive feedback). We are not considering other possible hybrid configurations. Which configuration performs better? Our selection of the best configuration and its associated decoding rule is based on a detailed analysis of the minimum bit error decoding rule (MAP algorithm), as described below.

[Figure 5: Different decoding structures for three codes: (a) serial, (b) master & slave, (c) parallel.]

Turbo Decoding for Multiple Codes - Let uk be a binary random variable taking values in {0, 1}, representing the sequence of information bits u = (u1, ..., uN). The MAP algorithm [7] provides the log-likelihood ratio Lk given the received symbols y:

    L_k = \log \frac{P(u_k = 1 \mid y)}{P(u_k = 0 \mid y)}                                                        (1)

        = \log \frac{\sum_{u: u_k = 1} P(y \mid u) \prod_{j \ne k} P(u_j)}{\sum_{u: u_k = 0} P(y \mid u) \prod_{j \ne k} P(u_j)} + \log \frac{P(u_k = 1)}{P(u_k = 0)}      (2)

For efficient computation of eq. (2) when the a-priori probabilities P(uk) are non-uniform, the modified MAP algorithm in [2] is simpler to use than the version considered in [1]. Therefore, in this paper we use the modified MAP algorithm of [2], as we did in [4].

The channel model is shown in Fig. 6, where the n_{ik}'s and the n_{pk}'s are i.i.d. zero-mean Gaussian random variables with unit variance, and ρ = \sqrt{2 r E_b / N_0} is the signal-to-noise ratio. (The same model is used for each encoder.) To explain the basic decoding concept we restrict ourselves to three codes, but the extension to several codes is straightforward.

[Figure 6: Channel model.]

In order to simplify the notation, consider the combination of permuter and encoder as a block code with input u and outputs xi, i = 1, 2, 3, and the corresponding received sequences yi, i = 1, 2, 3. The optimum MAP decision metric on each bit is (for data with uniform probabilities)

    L_k = \log \frac{\sum_{u: u_k = 1} P(y_1 \mid u)\, P(y_2 \mid u)\, P(y_3 \mid u)}{\sum_{u: u_k = 0} P(y_1 \mid u)\, P(y_2 \mid u)\, P(y_3 \mid u)}                      (3)

but in practice we cannot compute eq. (3) for large N. Suppose that we evaluate P(yi | u), i = 2, 3, in eq. (3) using Bayes' rule and using the following approximation for k = 1, 2, ..., N:

    P(u \mid y_i) \approx \prod_{k=1}^{N} \tilde{P}_i(u_k)                                                         (4)

Note that P(u | yi) is not separable in general. A reasonable criterion for this approximation is to choose \prod_{k=1}^{N} \tilde{P}_i(u_k) such that it minimizes the Kullback distance or free energy [9, 10]. Define

    \tilde{P}_i(u_k) = \frac{e^{u_k \tilde{L}_{ik}}}{1 + e^{\tilde{L}_{ik}}}                                        (5)

where uk ∈ {0, 1}. Then the Kullback distance is given by

    F_i = \sum_{u} \prod_{k=1}^{N} \tilde{P}_i(u_k) \, \log \frac{\prod_{k=1}^{N} \tilde{P}_i(u_k)}{P(u \mid y_i)}   (6)

Such minimization involves forward and backward recursions analogous to the MAP decoding algorithm! Therefore, if such an approximation can be obtained, we can use it in eq. (3) for i = 2 and i = 3 (by Bayes' rule) to complete the algorithm. Now, instead of using eq. (6) to obtain {P̃i}, or equivalently {L̃i}, we use (4) and (5) for i = 2, 3 (by Bayes' rule) to express (3) as

    L_k = f(y_1, \tilde{L}_2, \tilde{L}_3, k) + \tilde{L}_{2k} + \tilde{L}_{3k}                                     (7)

where

    f(y_1, \tilde{L}_2, \tilde{L}_3, k) = \log \frac{\sum_{u: u_k = 1} P(y_1 \mid u) \prod_{j \ne k} e^{u_j (\tilde{L}_{2j} + \tilde{L}_{3j})}}{\sum_{u: u_k = 0} P(y_1 \mid u) \prod_{j \ne k} e^{u_j (\tilde{L}_{2j} + \tilde{L}_{3j})}}      (8)

We can use (4) and (5) again, but this time for i = 1, 3, to express (3) as

    L_k = f(y_2, \tilde{L}_1, \tilde{L}_3, k) + \tilde{L}_{1k} + \tilde{L}_{3k}                                     (9)

and similarly

    L_k = f(y_3, \tilde{L}_1, \tilde{L}_2, k) + \tilde{L}_{1k} + \tilde{L}_{2k}                                     (10)

A solution to eqs. (7), (9), and (10) is

    \tilde{L}_{1k} = f(y_1, \tilde{L}_2, \tilde{L}_3, k), \quad \tilde{L}_{2k} = f(y_2, \tilde{L}_1, \tilde{L}_3, k), \quad \tilde{L}_{3k} = f(y_3, \tilde{L}_1, \tilde{L}_2, k)      (11)

for k = 1, 2, ..., N, provided that a solution does indeed exist. The final decision is then based on

    L_k = \tilde{L}_{1k} + \tilde{L}_{2k} + \tilde{L}_{3k}                                                          (12)

which is passed through a hard limiter with zero threshold. We attempted to solve the nonlinear equations in (11) for L̃1, L̃2, and L̃3 by using the iterative procedure

    \tilde{L}_{1k}^{(m+1)} = \alpha_1^{(m)} f(y_1, \tilde{L}_2^{(m)}, \tilde{L}_3^{(m)}, k)                          (13)

for k = 1, 2, ..., N, iterating on m. Similar recursions hold for L̃2k and L̃3k. The gain α^(m) should be equal to one, but we noticed experimentally that better convergence can be obtained by optimizing this gain for each iteration, starting from a value slightly less than one and increasing toward one with the iterations, as is often done in simulated annealing methods. We start the recursion with the initial condition² L̃1^(0) = L̃2^(0) = L̃3^(0) = 0. For the computation of f(·) we use the modified MAP algorithm with permuters (direct and inverse) where needed, as shown in Fig. 7 for block decoder 2. The MAP algorithm always starts and ends at the all-zero state since we used perfect termination. Similar structures apply for block decoder 1 (π1 = I, the identity) and block decoder 3. The overall decoder is composed of block decoders connected as in Fig. 5(c), which can be implemented as a pipeline or by feedback.

²Note that the components of the L̃'s corresponding to the tail bits are set to zero for all iterations.

[Figure 7: Structure of block decoder 2.]
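The parallel update of eq. (13) fits in a few lines. This is only a structural sketch: map_extrinsic is a stand-in for the modified MAP computation f(·) of eq. (8), which is not reproduced here, and the gain schedule (slightly below one, rising toward one) is an assumption consistent with the text.

```python
from typing import Callable, List, Sequence

def turbo_decode_parallel(ys: Sequence, map_extrinsic: Callable, n_bits: int,
                          n_iter: int = 10) -> List[int]:
    """Parallel decoding of eqs. (11)-(13): every block decoder is updated at each iteration
    from the other decoders' previous-iteration extrinsic LLRs; final decision is eq. (12)."""
    q = len(ys)                                      # number of constituent codes
    L = [[0.0] * n_bits for _ in range(q)]           # initial condition: all L-tildes zero
    for m in range(n_iter):
        alpha = 1.0 - 0.1 / (m + 1)                  # gain slightly below 1, rising toward 1 (assumed schedule)
        L = [[alpha * map_extrinsic(ys[i], [L[j] for j in range(q) if j != i], k)
              for k in range(n_bits)]
             for i in range(q)]
    totals = [sum(L[i][k] for i in range(q)) for k in range(n_bits)]
    return [1 if t > 0 else 0 for t in totals]       # hard limiter with zero threshold
```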
Multiple Code Algorithm Applied to Two Codes - For turbo codes with only two constituent codes, eq. (13) reduces to

    \tilde{L}_{1k}^{(m+1)} = \alpha_1^{(m)} f(y_1, \tilde{L}_2^{(m)}, k), \qquad \tilde{L}_{2k}^{(m+1)} = \alpha_2^{(m)} f(y_2, \tilde{L}_1^{(m)}, k)      (14)

where, for each iteration, α1^(m) and α2^(m) can be optimized (simulated annealing) or set to 1 for simplicity. The decoding configuration for two codes, according to the previous section, is shown in Fig. 8. In this special case, since the two paths in Fig. 8 are disjoint, the decoder structure reduces to that of Fig. 4, i.e. to the serial mode.

[Figure 8: Parallel structure for two codes.]

If we optimize α1^(m) and α2^(m), our method for two codes is similar to the decoding method proposed in [1], which requires estimates of the variances of L̃1k and L̃2k for each iteration in the presence of errors. In the method proposed in [2], the received "systematic" observation was subtracted from L̃1k, which results in performance degradation. In [3] the method proposed in [2] was used, but the received "systematic" observation was interleaved and provided to decoder 2. In [4] we argued that there is no need to interleave the received "systematic" observation and provide it to decoder 2, since L̃1k does this job. It seems that our proposed method with α1^(m) and α2^(m) equal to 1 is the simplest, and it achieves the same performance reported in [3] for rate 1/2 codes.

Terminated Parallel Convolutional Codes as Block Codes - Consider the combination of permuter and encoder as a linear block code. Define Pi as the parity matrix of the terminated convolutional code i. Then the overall generator matrix for three parallel codes is

    G = [ I   P_1   \pi_2 P_2   \pi_3 P_3 ]

where πi are the permutations (interleavers). In order to maximize the minimum distance of the code given by G, we should maximize the number of linearly independent columns of the corresponding parity check matrix H. This suggests that the designs of Pi (code) and πi (permutation) are closely related, and it does not necessarily follow that optimum component codes (maximum dmin) yield optimum parallel concatenated codes. For very small N we used this concept to design jointly the permuter and the component convolutional codes.
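To illustrate the block-code view, the sketch below (our construction, not the paper's) assembles G = [I P1 π2P2 π3P3] over GF(2) for a toy block length, taking Pi to be the N x N matrix whose k-th row is the parity response of the (5/7)octal component code to a single 1 in position k; trellis termination is ignored to keep the example small.

```python
import random

def rsc_parity(bits):
    """Parity response of the (5/7)octal recursive systematic component code (no tail)."""
    s1 = s2 = 0
    out = []
    for u in bits:
        a = u ^ s1 ^ s2
        out.append(a ^ s2)
        s1, s2 = a, s1
    return out

def encoder_block(N, pi):
    """N x N block pi_i*P_i of G: row k is the parity response to a unit impulse in
    position k, first interleaved by pi and then encoded."""
    rows = []
    for k in range(N):
        e = [0] * N
        e[k] = 1
        rows.append(rsc_parity([e[pi[j]] for j in range(N)]))
    return rows

N = 8
identity = list(range(N))
pi2, pi3 = random.sample(range(N), N), random.sample(range(N), N)
I_block = [[int(i == j) for j in range(N)] for i in range(N)]
P1, B2, B3 = encoder_block(N, identity), encoder_block(N, pi2), encoder_block(N, pi3)
G = [I_block[k] + P1[k] + B2[k] + B3[k] for k in range(N)]   # G = [ I | P1 | pi2 P2 | pi3 P3 ]
print(f"G is {len(G)} x {len(G[0])} over GF(2) for the rate 1/4 example")
```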
IV. PERFORMANCE

Two Codes - The performance obtained by turbo decoding the code with two constituent codes (1, gb/ga), where ga = (37)octal and gb = (21)octal, and with random permutations of lengths N = 4096 and N = 16384, is compared in Fig. 9 to the capacity of a binary-input Gaussian channel for rate r = 1/4. The best performance curve in Fig. 9 is approximately 0.7 dB from the Shannon limit at BER = 10^-4.

[Figure 9: Turbo codes performance, r = 1/4.]

Unequal Rate Encoders - We now extend the results to encoders with unequal rates, with two K = 5 constituent codes (1, gb/ga, gc/ga) and (gb/ga), where ga = (23)octal, gb = (33)octal and gc = (25)octal. This structure improves the performance of the overall rate 1/4 code, as shown in Fig. 9. This improvement is due to the fact that we can avoid using the interleaved information data at the second encoder, and that the rate of the first code is lower than that of the second code. For PCS applications, short interleavers should be used, since the vocoder frame is usually 20 ms. Therefore we selected 192- and 256-bit interleavers as an example, corresponding to 9.6 and 13 Kbps. (Note that this small difference in interleaver size does not significantly affect the performance.) The performance of codes with short interleavers is shown in Fig. 10 for the K = 5 codes described above, for random permutation and for row-column permutation with a = 2 for rows and a = 4 for columns.

Three Codes - The performance of a three-code turbo code with random interleavers is shown in Fig. 11 for N = 4096. The three recursive codes shown in Fig. 1 were used for K = 3. Three recursive codes with ga = (13)octal and gb = (11)octal were used for K = 4. Note that the non-systematic version of this encoder is catastrophic, but the recursive systematic version is non-catastrophic. We found that this K = 4 code has better performance than several others. Although it was suggested in [5] that ga be a primitive polynomial, we found several counterexamples that show better performance; e.g., the ga for K = 5 proposed in [1] is not primitive.

In Fig. 11, the performance of the K = 4 code was improved by going to 30 iterations and using an S-random interleaver with S = 31. For shorter blocks (192 and 256 bits), the results are shown in Fig. 10, where it can be observed that approximately 1 dB SNR is required for BER = 10^-3, which implies a CDMA capacity of C = 0.8η.
We have noticed that the slope of the BER curve changes around BER = 10^-5 (flattening effect) if the interleaver is not designed properly to maximize dmin or is chosen at random.

[Figure 10: Performance with short block sizes. Curves: two K = 5 codes and three K = 3 codes with random interleaver (N = 192), compared with the rate 1/4 Galileo code.]

[Figure 11: Three-code performance (random interleaving, N = 4096, code rate 1/4).]

V. CONCLUSIONS

We have shown how turbo codes and decoders can be used to improve the coding gain for PCS applications. These are just preliminary results that require extensive further analysis. In particular, we need to improve our understanding of the influence of the interleaver choice on the code performance, and to explore the sensitivity of the decoder performance to the precision with which we can estimate Eb/N0.

An interesting theoretical question is to determine "how random" these codes can be, so as to draw conclusions on their performance based on comparison with random coding bounds. In [4] we obtained the complete weight distribution of a turbo code, calculated the upper bound on BER, and compared it with maximum-likelihood (ML) decoding. Those results showed that the performance of turbo decoding is close to ML decoding and to optimum MAP decoding. However, the approximation used in eq. (4) implies that turbo decoding is only close to, but not equal to, MAP decoding.

VI. ACKNOWLEDGMENTS

The authors are grateful to S. Dolinar and R. J. McEliece for their helpful comments.

REFERENCES

[1] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding: Turbo Codes," Proc. 1993 IEEE International Conference on Communications, pp. 1064-1070.
[2] J. Hagenauer and P. Robertson, "Iterative (Turbo) decoding of systematic convolutional codes with the MAP and SOVA algorithms," Proc. ITG Conference "Source and Channel Coding," Frankfurt, Oct. 1994.
[3] P. Robertson, "Illuminating the structure of code and decoder of parallel concatenated recursive systematic (Turbo) codes," Proceedings GLOBECOM '94, Dec. 1994, pp. 1298-1303.
[4] D. Divsalar and F. Pollara, "Turbo Codes for Deep-Space Communications," JPL TDA Progress Report 42-120, Feb. 15, 1995.
[5] G. Battail, C. Berrou, and A. Glavieux, "Pseudo-random recursive convolutional coding for near-capacity performance," Comm. Theory Mini-Conference, GLOBECOM '93, Dec. 1993.
[6] D. Divsalar, S. Dolinar, and F. Pollara, "Weight distribution of multiple turbo codes," JPL TDA Progress Report (in preparation).
[7] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Trans. Inform. Theory, vol. IT-20, 1974, pp. 284-287.
[8] E. Dunscombe and F. C. Piper, "Optimal interleaving scheme for convolutional codes," Electronics Letters, vol. 25, no. 22, 26 Oct. 1989, pp. 1517-1518.
[9] M. Moher, "Decoding via cross-entropy minimization," Proceedings GLOBECOM '93, Dec. 1993, pp. 809-813.
[10] G. Battail and R. Sfez, "Suboptimum decoding using the Kullback principle," Lecture Notes in Computer Science, vol. 313, 1988, pp. 93-101.
