Digital Communication Systems by Simon Haykin-117

This document discusses trellis-coded modulation techniques. It provides two examples of partitioning constellations for 8-PSK and 16-QAM modulation schemes to increase the minimum distance between signal points. The document then describes Ungerboeck codes, a class of trellis codes that combine convolutional encoding with signal-point partitioning. Examples of Ungerboeck codes for 8-PSK modulation are shown, including the trellis diagrams and encoder states. The asymptotic coding gain of Ungerboeck codes is defined in terms of the ratio of the free Euclidean distance to a minimum reference distance.



10.15 Trellis-Coded Modulation

In Figure 10.37, we illustrate the partitioning procedure by considering a circular constellation that corresponds to 8-PSK. The figure depicts the constellation itself and the two and four subsets resulting from two levels of partitioning. These subsets share the common property that the minimum Euclidean distances between their individual points follow an increasing pattern, namely:
d0 < d1 < d2
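As a numerical check on this ordering (not part of the original text), the intra-subset minimum distances of a unit-radius 8-PSK constellation can be computed directly; two levels of partitioning give d0 = 2 sin(pi/8) ≈ 0.765, d1 = √2 ≈ 1.414, and d2 = 2. A minimal Python sketch, assuming the standard even/odd split of signal numbers at each level:

import numpy as np

# Unit-energy 8-PSK constellation, signal numbers 0..7
points = np.exp(1j * 2 * np.pi * np.arange(8) / 8)

def min_distance(subset):
    """Minimum Euclidean distance between distinct points of a subset."""
    return min(abs(a - b) for i, a in enumerate(subset)
                          for b in subset[i + 1:])

# Level 0: full constellation; level 1: one QPSK subset; level 2: one antipodal pair
d0 = min_distance(list(points))
d1 = min_distance(list(points[::2]))
d2 = min_distance(list(points[::4]))

print(d0, d1, d2)   # ~0.765 < 1.414 < 2.0, i.e., d0 < d1 < d2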

EXAMPLE 12 Three-level Partitioning of QAM Constellation


For a different two-dimensional example, Figure 10.38 illustrates the partitioning of a
rectangular constellation corresponding to 16-QAM. Here again, we see that the subsets
have increasing within-subset Euclidean distances, as shown by
d0 < d1 < d2 < d3

Figure 10.38 Partitioning of the 16-QAM constellation: a partition tree whose branches are labeled by bits 0/1 at each level, with intra-subset distances d0, d1, d2, d3, leaf labels 000 through 111, and signal numbers 0, 4, 2, 6, 1, 5, 3, 7; the partitioning shows that d0 < d1 < d2 < d3.

Based on the subsets resulting from successive partitioning of a two-dimensional constellation, illustrated in Examples 11 and 12, we may devise relatively simple, yet highly effective coding schemes. Specifically, to send n bits/symbol with quadrature modulation (i.e., one that has in-phase and quadrature components), we start with a two-dimensional constellation of 2^(n+1) signal points appropriate for the modulation format of interest; a circular grid is used for M-ary PSK and a rectangular one for M-ary QAM. In any event, the constellation is partitioned into four or eight subsets. One or two incoming message bits per symbol enter a rate-1/2 or rate-2/3 binary convolutional encoder, respectively; the resulting two or three coded bits per symbol determine the selection of a particular subset. The remaining uncoded message bits determine which particular signal point from the selected subset is to be signaled over the AWGN channel. This class of trellis codes is known as Ungerboeck codes in recognition of their originator.
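As a rough illustration of this division of labor (a hypothetical sketch, not taken from the text), the following Python fragment shows the mapping step for the 8-PSK case with n = 2: the two coded bits select one of the four two-point subsets of Figure 10.37, and the single uncoded bit selects a point inside that subset. The assignment of coded-bit labels to subsets used here is an assumption made only for illustration.

# Two-point subsets of 8-PSK after two levels of partitioning (cf. Figure 10.37)
SUBSETS_8PSK = {                # 2 coded bits -> antipodal pair of signal numbers
    (0, 0): (0, 4),
    (0, 1): (2, 6),
    (1, 0): (1, 5),
    (1, 1): (3, 7),
}

def map_symbol(coded_bits, uncoded_bit):
    """Select an 8-PSK signal number from two coded bits and one uncoded bit."""
    subset = SUBSETS_8PSK[tuple(coded_bits)]
    return subset[uncoded_bit]

# Example: coded bits (1, 0) select the subset {1, 5}; uncoded bit 1 picks point 5
print(map_symbol((1, 0), 1))    # -> 5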
Since the modulator has memory, we may use the Viterbi algorithm (discussed in
Section 10.8) to perform maximum likelihood sequence estimation at the receiver. Each
branch in the trellis of the Ungerboeck code corresponds to a subset rather than an
individual signal point. The first step in the detection is to determine the signal point
within each subset that is closest to the received signal point in the Euclidean sense. The
signal point so determined and its metric (i.e., the squared Euclidean distance between it
and the received point) may be used thereafter for the branch in question, and the Viterbi
algorithm may then proceed in the usual manner.
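A small sketch of this first detection step (an illustration assuming unit-energy 8-PSK points, not the book's own code): for each branch, the point of the branch's subset closest to the received sample is found, and its squared Euclidean distance serves as the branch metric.

import numpy as np

PSK8 = np.exp(1j * 2 * np.pi * np.arange(8) / 8)   # unit-energy 8-PSK, signal numbers 0..7

def branch_metric(received, subset_signal_numbers):
    """Return the closest point of the branch's subset and its squared Euclidean distance."""
    candidates = PSK8[list(subset_signal_numbers)]
    distances_sq = np.abs(received - candidates) ** 2
    best = int(np.argmin(distances_sq))
    return subset_signal_numbers[best], float(distances_sq[best])

# Example: a noisy sample near signal point 4, on a branch labeled with subset {0, 4}
point, metric = branch_metric(-0.9 + 0.1j, (0, 4))
print(point, metric)    # -> 4 and a small squared distance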

Ungerboeck Codes for 8-PSK


The scheme of Figure 10.39a depicts the simplest Ungerboeck 8-PSK code for the transmission of 2 bits/symbol. The scheme uses a rate-1/2 convolutional encoder; the corresponding trellis of the code is shown in Figure 10.39b, which has four states. Note that the most significant bit of the incoming message sequence is left uncoded. Therefore, each branch of the trellis may correspond to two different output values of the 8-PSK modulator or, equivalently, to one of the four two-point subsets shown in Figure 10.37. The trellis of Figure 10.39b also includes the minimum distance path.
The scheme of Figure 10.40a depicts another Ungerboeck 8-PSK code for transmitting 2 bits/symbol; it is next in the level of increased complexity, compared with the scheme of Figure 10.39a. This second scheme uses a rate-2/3 convolutional encoder. Therefore, the corresponding trellis of the code has eight states, as shown in Figure 10.40b. In this latter scheme, both bits of the incoming message sequence are encoded. Hence, each branch of the trellis corresponds to a specific output value of the 8-PSK modulator. The trellis of Figure 10.40b also includes the minimum distance path.
Figures 10.39b and 10.40b also include the pertinent encoder states. In Figure 10.39a, the state of the encoder is defined by the contents of the two-stage shift register. On the other hand, in Figure 10.40a, it is defined by the content of the single-stage (top) shift register followed by that of the two-stage (bottom) shift register.
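The following sketch models one symbol interval of the four-state scheme of Figure 10.39a. The natural-binary 8-PSK mapper and the uncoded most significant bit follow the figure; the particular shift-register taps (c1 = m0 XOR s1, c0 = s0) are an assumption made only so the example runs, and may differ from the taps actually drawn in the figure.

def four_state_tcm_step(state, m1, m0):
    """One symbol interval of an illustrative four-state 8-PSK Ungerboeck scheme.

    m1 is the uncoded most significant bit; m0 enters the rate-1/2 encoder,
    whose state (s1, s0) is the content of the two-stage shift register.
    """
    s1, s0 = state
    c1 = m0 ^ s1                            # first coded bit (assumed tap)
    c0 = s0                                 # second coded bit (assumed tap)
    signal_number = 4 * m1 + 2 * c1 + c0    # natural-binary 8-PSK mapper
    new_state = (m0, s1)                    # shift the register contents
    return new_state, signal_number

state = (0, 0)
for m1, m0 in [(0, 1), (1, 0), (1, 1)]:     # three incoming 2-bit message symbols
    state, signal = four_state_tcm_step(state, m1, m0)
    print(state, signal)

Because m1 only adds 4 to the signal number, the two points reachable on any branch form one of the antipodal pairs {0,4}, {1,5}, {2,6}, {3,7}, consistent with the subsets of Figure 10.37.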

Asymptotic Coding Gain


Following the discussion in Section 10.8 on maximum likelihood decoding of convolutional codes, we define the asymptotic coding gain of Ungerboeck codes as follows:

$$G_a = 10 \log_{10}\!\left(\frac{d_{\mathrm{free}}^{2}}{d_{\mathrm{ref}}^{2}}\right) \qquad (10.149)$$

where dfree is the free Euclidean distance of the code and dref is the minimum Euclidean distance of an uncoded modulation scheme operating with the same signal energy per bit.
For example, by using the Ungerboeck 8-PSK code of Figure 10.39a, the signal constellation has eight message points and we send two message bits per signal point. Hence, uncoded transmission requires a signal constellation with four message points.

Figure 10.39 (a) Four-state Ungerboeck code for 8-PSK: a rate-1/2 convolutional encoder (input, two flip-flops, modulo-2 adder) drives the 8-PSK signal mapper, whose three bits, with the most significant bit uncoded, select signal numbers 0 through 7; the mapper follows Figure 10.37. (b) Trellis of the code, with the encoder states labeled.

We may therefore regard uncoded 4-PSK as the frame of reference for the Ungerboeck 8-PSK code of Figure 10.39a.
The Ungerboeck 8-PSK code of Figure 10.39a achieves an asymptotic coding gain of 3
dB, which is calculated as follows:
1. Each branch of the trellis in Figure 10.39b corresponds to a subset of two antipodal
signal points. Hence, the free Euclidean distance dfree of the code can be no larger
than the Euclidean distance d2 between the antipodal signal points of such a subset.
We may therefore write
dfree = d2 = 2
where the distance d2 is defined in Figure 10.41a.

Figure 10.40 (a) Eight-state Ungerboeck code for 8-PSK: a rate-2/3 convolutional encoder (single-stage and two-stage shift registers with modulo-2 adders) drives the 8-PSK signal mapper, which follows Figure 10.37 in assigning signal numbers 0 through 7. (b) Trellis of the code, with only some of the branches shown.

Figure 10.41 Signal-space diagrams, drawn in the in-phase/quadrature plane, for calculation of the asymptotic coding gain of the Ungerboeck 8-PSK code: (a) definition of distance d2; (b) definition of reference distance dref.

2. From Figure 10.41b, we see that the minimum Euclidean distance of uncoded QPSK, viewed as the frame of reference operating with the same signal energy per bit, assumes the following value:
dref = √2
Hence, as previously stated, the use of (10.149) yields an asymptotic coding gain of 10 log10(4/2) = 10 log10 2 ≈ 3 dB.
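The same arithmetic, written out as a quick numerical check of (10.149):

import math

d_free = 2.0                # antipodal points of the selected subset (Figure 10.41a)
d_ref = math.sqrt(2.0)      # adjacent points of uncoded QPSK (Figure 10.41b)

G_a = 10 * math.log10(d_free ** 2 / d_ref ** 2)    # (10.149)
print(round(G_a, 2))        # -> 3.01, quoted as 3 dB in the text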
The asymptotic coding gain achievable with Ungerboeck codes increases with the number
of states in the convolutional encoder. Table 10.8 presents the asymptotic coding gain (in
dB) for Ungerboeck 8-PSK codes for increasing number of states, expressed with respect
to uncoded 4-PSK. Note that improvements on the order of 6 dB require codes with a very
large number of states.

Table 10.8 Asymptotic coding gain of Ungerboeck 8-PSK codes, with respect to uncoded 4-PSK

Number of states:    4     8     16    32    64    128   256   512
Coding gain (dB):    3.0   3.6   4.1   4.6   4.8   5.0   5.4   5.7

10.16 Turbo Decoding of Serial Concatenated Codes

In Section 10.12 we pointed out that there are two types of concatenated codes: parallel and serial. The original turbo coding scheme involved a parallel concatenated code, since the two encoders operate in parallel on the same set of message bits. We now turn our attention in this section to a serial concatenation scheme, as depicted in Figure 10.42, comprising an "outer" encoder whose output feeds an "inner" encoder. Whereas the serial concatenation idea can be traced to as early as Shannon's seminal work, the connection with turbo coding occurred only after the parallel concatenated scheme of Berrou et al. (see Section 10.12) gained widespread acclaim. The iterative decoding algorithm for the serial concatenated scheme was first analyzed in detail by Benedetto and coworkers (Benedetto and Montorsi, 1996; Benedetto et al., 1998); the algorithm follows a logic similar to that of the parallel concatenated scheme, in the form of information exchange between the two decoders, as in Figure 10.43. This iterative information exchange is observed to significantly improve the overall error-correction ability of the decoder, just as in the conventional turbo decoder. We shall review the basics of the iterative decoding algorithm in what follows in order to emphasize the common points with the iterative algorithm described in Section 10.12.

Figure 10.42 Serial concatenated codes: the message vector m enters the outer encoder, whose code vector c passes through the interleaver π to the inner (trellis) encoder and then over the channel, producing the channel output r; as usual, π denotes an interleaver.
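To make that exchange concrete, here is a highly schematic Python sketch of the loop in Figure 10.43. The decoder callables inner_bcjr and outer_bcjr, the interleaving functions, and the zero (log-domain uniform) initialization are placeholders introduced only for illustration; the actual decoders are BCJR stages of the kind described in Section 10.12.

import numpy as np

def turbo_decode_serial(channel_likelihoods, inner_bcjr, outer_bcjr,
                        interleave, deinterleave, iterations=8):
    """Schematic iterative decoder for a serially concatenated code.

    Extrinsic information produced by one decoder becomes the a priori
    information of the other, as in Figure 10.43.
    """
    # Uniform a priori information (all-zero in the log domain) for the first pass
    a_priori_inner = np.zeros_like(channel_likelihoods)
    for _ in range(iterations):
        extrinsic_inner = inner_bcjr(channel_likelihoods, a_priori_inner)
        a_priori_outer = deinterleave(extrinsic_inner)
        extrinsic_outer, message_estimate = outer_bcjr(a_priori_outer)
        a_priori_inner = interleave(extrinsic_outer)
    return message_estimate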
The particular interest in the serial concatenated scheme, however, becomes apparent
once we recognize that the inner encoder–decoder pair need not be a conventional error-
correction code, but in fact may assume more general forms that are often encountered in
communication systems. A few examples may be highlighted as follows:
1. The inner encoder may in fact be a TCM stage, as studied in Section 10.15. The
iterative decoding algorithm connecting the trellis-coded demodulator with the outer
error-correction code leads to turbo TCM.23
2. The inner encoder may be the communication channel itself, which is of interest
when the channel induces ISI. The output symbols of the channel may then be
expressed as a convolution between the input symbol sequence and the channel
impulse response, and the decoder operation corresponds to channel equalization
(Chang and Hancock, 1966). Combining the equalizer with the outer channel
decoder gives rise to turbo equalization.24

Figure 10.43 Iterative decoder structure: the inner BCJR decoder operates on the channel likelihoods ℙ(r | c) and exchanges information with the outer BCJR decoder through the interleaver π and deinterleaver π⁻¹; the extrinsic probabilities (E) produced by each decoder serve as the a priori probabilities (A) of the other, and the outer decoder delivers the output φ(c). Key: L = likelihood function, A = a priori probabilities, E = extrinsic probabilities.
