TTS Chapter8
Telecommunications
Networks
Anton Čižmár
Ján Papaj
Department of Electronics and Multimedia Telecommunications
CONTENTS
Preface ....................................................................................................................................... 5
1 Introduction ...................................................................................................................... 6
1.1 Mathematical models for communication channels .................................................... 8
1.2 Channel capacity for digital communication ............................................................ 10
1.2.1 Shannon Capacity and Interpretation ................................................................ 10
1.2.2 Hartley Channel Capacity .................................................................................. 12
1.2.3 Solved Problems ................................................................................................. 13
1.3 Noise in digital communication system ..................................................................... 15
1.3.1 White Noise ........................................................................................................ 17
1.3.2 Thermal Noise .................................................................................................... 18
1.3.3 Solved Problems ................................................................................................. 19
1.4 Summary .................................................................................................................... 20
1.5 Exercises .................................................................................................................... 21
2 Signal and Spectra .......................................................................................................... 23
2.1 Deterministic and random signals ............................................................................. 23
2.2 Periodic and nonperiodic signals .............................................................................. 23
2.3 Analog and discrete Signals ...................................................................................... 23
2.4 Energy and power Signals ......................................................................................... 23
2.5 Spectral Density ......................................................................................................... 25
2.5.1 Energy Spectral Density ..................................................................................... 25
3.2.2 Statistical Averages of Random Variables ......................................................... 37
3.2.3 Some Useful Probability Distributions .............................................................. 38
3.3 Stochastic processes .................................................................................................. 41
3.3.1 Stationary Stochastic Processes ......................................................................... 41
3.3.2 Statistical Averages ............................................................................................ 41
3.3.3 Power Density Spectrum .................................................................................... 43
3.3.4 Response of a Linear Time-Invariant System (channel) to a Random Input Signal ......... 43
3.3.5 Sampling Theorem for Band-Limited Stochastic Processes .............................. 44
3.3.6 Discrete-Time Stochastic Signals and Systems .................................................. 45
3.3.7 Cyclostationary Processes ................................................................................. 46
3.3.8 Solved Problems ................................................................................................. 47
3.4 Summary .................................................................................................................... 50
3.5 Exercises .................................................................................................................... 52
4 Signal space concept ....................................................................................................... 55
4.1 Representation Of Band-Pass Signals And Systems .................................................. 55
4.1.1 Representation of Band-Pass Signals ................................................................ 55
4.1.2 Representation of Band-Pass Stationary Stochastic Processes ......................... 58
4.2 Introduction of the Hilbert transform ........................................................................ 59
4.3 Different look at the Hilbert transform...................................................................... 59
4.3.1 Hilbert Transform, Analytic Signal and the Complex Envelope ........................ 59
5.2.2 Phase-modulated signal (PSK) .......................................................................... 85
5.2.3 Quadrature Amplitude Modulation (QAM)........................................................ 86
5.3 Multidimensional Signals .......................................................................................... 88
5.3.1 Orthogonal multidimensional signals ................................................................ 88
5.3.2 Linear Modulation with Memory ....................................................................... 92
5.3.3 Non-Linear Modulation Methods with Memory................................................. 95
5.4 Spectral Characteristic Of Digitally Modulated Signals ........................................ 101
5.4.1 Power Spectra of Linearly Modulated Signals ................................................ 101
5.4.2 Power Spectra of CPFSK and CPM Signals .................................................... 102
5.4.3 Solved Problems ............................................................................................... 106
5.5 Summary .................................................................................................................. 110
5.6 Exercises .................................................................................................................. 110
6 Optimum Receivers for the AWGN Channel ............................................................ 113
6.1 Optimum Receivers For Signals Corrupted By Awgn ............................................. 113
6.1.1 Correlation demodulator.................................................................................. 114
6.1.2 Matched-Filter demodulator ............................................................................ 116
6.1.3 The Optimum detector ...................................................................................... 118
6.1.4 The Maximum-Likelihood Sequence Detector ................................................. 120
6.2 Performance Of The Optimum Receiver For Memoryless Modulation .................. 123
6.2.1 Probability of Error for Binary Modulation .................................................... 123
6.2.2 Probability of Error for M-ary Orthogonal Signals ........................................ 126
7.5.1 Bandwidth Efficiency of MPSK and MFSK Modulation .................................. 151
7.5.2 Analogies Between Bandwidth-Efficiency and Error-Probability Planes ....... 152
7.6 Modulation And Coding Trade-Offs ........................................................................ 153
7.7 Defining, Designing, And Evaluating Digital Communication Systems ................. 154
7.7.1 M-ary Signaling................................................................................................ 154
7.7.2 Bandwidth-Limited Systems ............................................................................. 155
7.7.3 Power-Limited Systems .................................................................................... 156
7.7.4 Requirements for MPSK and MFSK Signaling ................................................ 157
7.7.5 Bandwidth-Limited Uncoded System Example ................................................ 158
7.7.6 Power-Limited Uncoded System Example ....................................................... 160
7.8 Solved Problems ...................................................................................................... 162
7.9 Summary .................................................................................................................. 165
7.10 Exercise ................................................................................................................... 166
8 Why use error-correction coding ................................................................................ 167
8.1 Trade-Off 1: Error Performance versus Bandwidth ............................................... 167
8.2 Trade-Off 2: Power versus Bandwidth .................................................................... 168
8.3 Coding Gain ............................................................................................................ 168
8.4 Trade-Off 3: Data Rate versus Bandwidth .............................................................. 168
8.5 Trade-Off 4: Capacity versus Bandwidth ................................................................ 169
8.6 Code Performance at Low Values of Eb/N0 ............................................................. 169
8.7 Solved problem ........................................................................................................ 170
PREFACE
Providing the theory of digital communication systems, this textbook prepares senior undergraduate
and graduate students for the engineering practices required in the real world.
With this textbook, students can understand how digital communication systems operate in practice,
learn how to design subsystems, and evaluate end-to-end performance.
The book contains many examples to help students achieve an understanding of the subject. The
problems at the end of each chapter follow closely the order of the sections.
The entire book is suitable for a one-semester course in digital communication.
All materials for these teaching texts were drawn from the sources listed in References.
Chapter VIII Why use error-correction coding
addition of redundant bits dictates a faster rate of transmission, which of course means more
bandwidth.
8.2 TRADE-OFF 2: POWER VERSUS BANDWIDTH
Suppose that a system providing a bit-error probability of $10^{-6}$ has been delivered to a customer. The customer has no complaints about the quality of the data, but the equipment is having some reliability problems as a result of providing an $E_b/N_0$ of 14 dB. In other words, the equipment keeps breaking down. If the requirement on $E_b/N_0$ or power could be reduced, the reliability difficulties might also be reduced. Figure 8.1 suggests a trade-off by moving the operating point from point D to point E. That is, if error-correction coding is introduced, a reduction in the required $E_b/N_0$ can be achieved. Thus, the trade-off is one in which the same quality of data is achieved, but the coding allows for a reduction in power or $E_b/N_0$. What is the cost? The same as before - more bandwidth.
Notice that for non-real-time communication systems, error-correction coding can be used with a somewhat different trade-off. It is possible to obtain improved bit-error probability or reduced power (similar to trade-off 1 or 2 above) by paying the price of delay instead of bandwidth.
TT
8.3 CODING GAIN
The trade-off example described in the previous section has allowed a reduction in $E_b/N_0$ from 14 dB to 9 dB, while maintaining the same error performance. In the context of this example and Figure 8.1, we now define coding gain. For a given bit-error probability, coding gain is defined as the “relief” or reduction in $E_b/N_0$ that can be realized through the use of the code. Coding gain G is generally expressed in dB, such as
\[ G\,[\mathrm{dB}] = \left(\frac{E_b}{N_0}\right)_{\!u}[\mathrm{dB}] - \left(\frac{E_b}{N_0}\right)_{\!c}[\mathrm{dB}] \qquad (8.1) \]
where $(E_b/N_0)_u$ and $(E_b/N_0)_c$ represent the required $E_b/N_0$, uncoded and coded, respectively.
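The following short Python sketch is an illustrative addition (not part of the original text): it simply evaluates Equation 8.1 for the trade-off described above, where the required $E_b/N_0$ drops from 14 dB to 9 dB.

```python
def coding_gain_db(ebn0_uncoded_db, ebn0_coded_db):
    """Coding gain G [dB] = (Eb/N0)_u [dB] - (Eb/N0)_c [dB], Equation 8.1."""
    return ebn0_uncoded_db - ebn0_coded_db

# Trade-off discussed in the text: 14 dB required uncoded, 9 dB required coded
print(coding_gain_db(14.0, 9.0))  # -> 5.0 dB of coding gain
```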
8.4 TRADE-OFF 3: DATA RATE VERSUS BANDWIDTH
Suppose that a system providing a bit-error probability of $10^{-6}$ has been developed. Assume that there is no problem with the data quality and no particular need to reduce power. However, in this example, suppose that the customer's data rate requirement increases. Recall the relationship:
\[ \frac{E_b}{N_0} = \frac{P_r}{N_0}\cdot\frac{1}{R} \qquad (8.2) \]
If we do nothing to the system except increase the data rate R, the above expression shows that the received $E_b/N_0$ would decrease, and in Figure 8.1, the operating point would move upwards from point D to, let us say, some point F. Now, envision "walking" down the vertical line to point E on the curve that represents coded modulation. Increasing the data rate has degraded the quality of the data. But the use of error-correction coding brings back the same quality at the same power level $P_r/N_0$. The $E_b/N_0$ is reduced, but the code facilitates getting the same error probability with a lower $E_b/N_0$. What price do we pay for getting this higher data rate or greater capacity? The same as before - increased bandwidth.
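As an illustrative sketch (an addition to the text, using the $P_r/N_0$ value assumed in the solved problem of Section 8.7), the following Python lines evaluate Equation 8.2 and show how raising the data rate at fixed received power lowers the available $E_b/N_0$:

```python
import math

def ebn0_db(pr_n0, rate_bps):
    """Eb/N0 = (Pr/N0) * (1/R), Equation 8.2, returned in dB."""
    return 10 * math.log10(pr_n0 / rate_bps)

pr_n0 = 43776.0               # received Pr/N0 (assumed value, taken from Section 8.7)
for rate in (4800, 9600):     # doubling the data rate at fixed power
    print(rate, "bit/s ->", round(ebn0_db(pr_n0, rate), 2), "dB")
# 4800 bit/s -> 9.6 dB, 9600 bit/s -> 6.59 dB:
# the operating point in Figure 8.1 moves up from D towards F as R increases.
```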
8.5 TRADE-OFF 4: CAPACITY VERSUS BANDWIDTH
In a code-division multiple access (CDMA) cellular system, each user's signal appears as an interferer to each of the other users in the same cell or nearby cells. Hence, the capacity (maximum number of users) per cell is inversely proportional to $E_b/N_0$. In this application, a lowered $E_b/N_0$ results in a raised capacity; the code achieves a reduction in each user's power, which in turn allows for an increase in the number of users. Again, the cost is more bandwidth. But, in this case, the signal-bandwidth expansion due to the error-correcting code is small compared with the more significant spread-spectrum bandwidth expansion, and thus, there is no impact on the transmission bandwidth.
In each of the above trade-off examples, a "traditional" code involving redundant bits and faster signaling (for a real-time communication system) has been assumed; hence, in each case, the cost was expanded bandwidth. However, there exists an error-correcting technique, called trellis-coded modulation, that does not require faster signaling or expanded bandwidth for real-time systems.
8.6 CODE PERFORMANCE AT LOW VALUES OF $E_b/N_0$
In part a) of Exercise 2, where the received $E_b/N_0$ is 14 dB, an error-performance improvement is obtained with coding. However, in part b) of Exercise 2, where the $E_b/N_0$ has been reduced to 10 dB, coding provides no improvement; in fact, there is a degradation. One might ask, why does part b) of Exercise 2 manifest a degradation? After all, the same procedure is used for applying the code in both parts of the problem. The answer can be seen in the coded-versus-uncoded pictorial shown in Figure 8.1. Even though Exercise 2 deals with message-error probability and Figure 8.1 displays bit-error probability, the following explanation still applies. In all such plots, there is a crossover between the curves (usually at some low value of $E_b/N_0$). The reason for such a crossover (threshold) is that every code system has some fixed error-correcting capability. If there are more errors within a block than the code is capable of correcting, the system will perform poorly. Imagine that $E_b/N_0$ is continually reduced. What happens at the output of the demodulator? It makes more and more errors. Therefore, such a continual decrease in $E_b/N_0$ must eventually cause some threshold to be reached where the decoder becomes overwhelmed with errors. When that threshold is crossed, we can interpret the degraded performance as being caused by the redundant bits consuming energy but giving nothing beneficial in return.
Does it strike the reader as a paradox that operating in a region (low values of $E_b/N_0$) where one would best like to see an error-performance improvement is where the code makes things worse? There is, however, a class of powerful codes called turbo codes that provide error-performance improvement at low values of $E_b/N_0$; the crossover point is lower for turbo codes compared with conventional codes.
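The threshold (crossover) effect described above can be reproduced numerically. The sketch below is an illustration only: it assumes the double-error-correcting (24, 12) code and noncoherently detected BFSK of Exercise 2, models the channel-symbol error probability as $p = \frac{1}{2}\exp(-E/2N_0)$, and treats the message block as the 12 information bits.

```python
from math import comb, exp

def p_bfsk(ebn0):
    """Channel-symbol error probability for noncoherent orthogonal BFSK."""
    return 0.5 * exp(-ebn0 / 2)

def msg_error_uncoded(ebn0_db, k=12):
    p = p_bfsk(10 ** (ebn0_db / 10))
    return 1 - (1 - p) ** k          # at least one of the k message bits in error

def msg_error_coded(ebn0_db, n=24, k=12, t=2):
    pc = p_bfsk(10 ** (ebn0_db / 10) * k / n)   # fixed power: Ec = Eb * k/n per code bit
    return sum(comb(n, j) * pc**j * (1 - pc) ** (n - j) for j in range(t + 1, n + 1))

for db in (14, 10):
    print(db, "dB  uncoded:", msg_error_uncoded(db), " coded:", msg_error_coded(db))
# At 14 dB the code improves the message-error probability; at 10 dB the
# decoder is overwhelmed and the coded system is worse - the crossover effect.
```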
8.7 SOLVED PROBLEM
Compare the message error probability for a communications link with and without the use of error-correction coding. Assume that the uncoded transmission characteristics are: BPSK modulation, Gaussian noise, $P_r/N_0 = 43\,776$, and data rate $R = 4800\ \mathrm{bits/s}$. For the coded case, also assume the use of a (15, 11) error-correcting code that is capable of correcting any single-error pattern within a block of 15 bits. Consider that the demodulator makes hard decisions and thus feeds the demodulated code bits directly to the decoder, which in turn outputs an estimate of the original message.
Solution
Following the expression for the BPSK bit-error probability, let $p_u = Q\!\left(\sqrt{2E_b/N_0}\right)$ and $p_c = Q\!\left(\sqrt{2E_c/N_0}\right)$ be the uncoded and coded channel-symbol error probabilities, respectively, where $E_b/N_0$ is the bit energy per noise spectral density and $E_c/N_0$ is the code-bit energy per noise spectral density.
Without coding
\[ \frac{E_b}{N_0} = \frac{P_r}{N_0}\cdot\frac{1}{R} = 9{,}12\ (9{,}6\ \mathrm{dB}) \]
and
\[ p_u = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right) = Q\!\left(\sqrt{18{,}24}\right) = 1{,}02\cdot10^{-5} \qquad (8.3) \]
where we have used the approximation
\[ Q(x) \approx \frac{1}{x\sqrt{2\pi}}\exp\!\left(-\frac{x^{2}}{2}\right), \quad \text{for } x > 3 \]
The probability that the uncoded message block $P_M^u$ will be received in error is 1 minus the product of the probabilities that each bit will be detected correctly. Thus,
\[ P_M^u = 1 - (1-p_u)^k = 1 - (1-p_u)^{11} = 1{,}12\cdot10^{-4} \qquad (8.4) \]
where $(1-p_u)^{11}$ is the probability that all 11 bits in the uncoded block are correct, so that $1-(1-p_u)^{11}$ is the probability that at least 1 bit out of 11 is in error.
With coding
Assuming a real-time communication system, such that delay is unacceptable, the channel-symbol rate or code-bit rate $R_c$ is 15/11 times the data bit rate:
\[ R_c = 4800 \cdot \frac{15}{11} \approx 6545\ \mathrm{bps} \]
and
\[ \frac{E_c}{N_0} = \frac{P_r}{N_0}\cdot\frac{1}{R_c} = 6{,}69\ (8{,}3\ \mathrm{dB}) \]
The $E_c/N_0$ for each code bit is less than that for the data bit in the uncoded case because the channel-bit rate has increased, but the transmitter power is assumed to be fixed:
\[ p_c = Q\!\left(\sqrt{\frac{2E_c}{N_0}}\right) = Q\!\left(\sqrt{13{,}38}\right) = 1{,}36\cdot10^{-4} \qquad (8.5) \]
It can be seen by comparing the results of Equation 8.3 with those of Equation 8.5 that, because redundancy was added, the channel bit-error probability has degraded. More bits must be detected during the same time interval and with the same available power; the performance improvement due to the coding is not yet apparent. We now compute the coded message error rate $P_M^c$:
\[ P_M^c = \sum_{j=2}^{n=15} \binom{15}{j}\, p_c^{\,j}\,(1-p_c)^{15-j} \]
The summation is started with $j = 2$, since the code corrects all single errors within a block of n = 15 bits. An approximation is obtained by using only the first term of the summation. For $p_c$, we use the value computed in Equation 8.5:
\[ P_M^c \approx \binom{15}{2} p_c^{\,2}\,(1-p_c)^{13} = 1{,}94\cdot10^{-6} \qquad (8.6) \]
By comparing the results of Equation 8.4 with 8.6, we can see that the probability of message error has improved by a factor of 58 due to the error-correcting code used in this example. This example illustrates the typical behavior of all such real-time communication systems using error-correction coding. Added redundancy means faster signaling, less energy per channel symbol, and more errors out of the demodulator. The benefits arise because the behavior of the decoder will (at reasonable values of $E_b/N_0$) more than compensate for the poor performance of the demodulator.
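The arithmetic of this solved problem is easy to check with a few lines of Python. The sketch below is an illustrative addition that re-evaluates Equations 8.3 through 8.6 using the exact Q-function from the standard library (the text uses the approximate Q(x), so the last digits differ slightly).

```python
from math import comb, erfc, sqrt

def Q(x):
    """Exact Gaussian tail probability, Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

pr_n0, R = 43776.0, 4800       # received Pr/N0 and data rate from the problem statement
n, k, t = 15, 11, 1            # (15, 11) single-error-correcting block code

# Uncoded case (Equations 8.3 and 8.4)
pu = Q(sqrt(2 * pr_n0 / R))
Pm_uncoded = 1 - (1 - pu) ** k

# Coded case (Equations 8.5 and 8.6): same power, code-bit rate Rc = R*n/k
pc = Q(sqrt(2 * pr_n0 / (R * n / k)))
Pm_coded = sum(comb(n, j) * pc**j * (1 - pc) ** (n - j) for j in range(t + 1, n + 1))

print(pu, Pm_uncoded)          # roughly 1e-5 and 1.1e-4
print(pc, Pm_coded)            # roughly 1.3e-4 and 1.7e-6
print(Pm_uncoded / Pm_coded)   # improvement factor on the order of 60 (the text, using the approximate Q, quotes 58)
```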
8.8 EXERCISE
1. For a fixed probability of channel symbol error, the probability of bit error for a Hamming (15, 11) code is worse than that for a Hamming (7, 4) code. Explain why. What, then, is the advantage of the (15, 11) code? What basic trade-off is involved?
2. Consider a (24, 12) linear block code capable of double-error correction. Assume that a noncoherently detected binary orthogonal frequency-shift keying (BFSK) modulation format is used and that the received $E_b/N_0 = 14\ \mathrm{dB}$.
a. Does the code provide any improvement in probability of message error? If it does, how much? If it does not, explain why not.
b. Repeat part a) with $E_b/N_0 = 10\ \mathrm{dB}$.
3. Information from a source is organized in 36-bit messages that are to be transmitted over an AWGN channel using noncoherently detected BFSK modulation.
a. If no error control coding is used, compute the required $E_b/N_0$ to provide a message error probability of $10^{-3}$.
b. Consider the use of a (127, 36) linear block code (minimum distance is 31) in the transmission of these messages. Compute the coding gain for this code for a message error probability of $10^{-3}$. (Hint: The coding gain is defined as the difference between the $E_b/N_0$ required without coding and the $E_b/N_0$ required with coding.)
4. A message consists of English text (assume that each word in the message contains six letters). Each letter is encoded using the 7-bit ASCII character code. Thus, each word of text consists of a 42-bit sequence. The message is to be transmitted over a channel having a symbol error probability of $10^{-1}$.
a. What is the probability that a word will be received in error?
b. If a repetition code is used such that each letter in each word is repeated three times and, at the receiver, majority voting is used to decode the message, what is the probability that a decoded word will be in error?
c. If a (126, 42) BCH code with error-correcting capability of t = 14 is used to encode each 42-bit word, what is the probability that a decoded word will be in error?
d. For a real system, it is not fair to compare uncoded versus coded message error performance on the basis of a fixed probability of channel symbol error, since this implies a fixed level of received $E_c/N_0$ for all choices of coding (or lack of coding). Therefore, repeat parts (a), (b), and (c) under the condition that the channel symbol error probability is determined by a received $E_b/N_0$ of 12 dB, where $E_b/N_0$ is the information bit energy per noise spectral density. Assume that the information rate must be the same for all choices of coding or lack of coding. Also assume that noncoherent orthogonal binary FSK modulation is used over an AWGN channel.
e. Discuss the relative error performance capabilities of the above coding schemes under the two postulated conditions: fixed channel symbol error probability, and fixed $E_b/N_0$. Under what circumstances can a repetition code offer error performance improvement? When will it cause performance degradation?
APPENDIX A
THE Q-FUNCTION
Computation of probabilities that involve a Gaussian process requires finding the area under the tail of the Gaussian (normal) probability density function, as shown in Figure A.1.
Figure A.1 Gaussian probability density function. Shaded area is 𝑃𝑟(𝑥 ≥ 𝑥0 ) for a Gaussian random variable.
Figure A.1 illustrates the probability that a Gaussian random variable x exceeds $x_0$, $\Pr(x \ge x_0)$, which is evaluated as:
M
∞
1 𝑥−𝑚𝑥 2
−� �
𝑃𝑟(𝑥 ≥ 𝑥0 ) = � 𝑒 𝜎√2 (A.1)
𝜎√2𝜋
𝑥0
The Gaussian probability density function in Equation A.1 cannot be integrated in closed form. Any Gaussian probability density function may be rewritten through use of the substitution
\[ y = \frac{x - m_x}{\sigma} \qquad (A.2) \]
to yield
\[ \Pr\!\left(y > \frac{x_0 - m_x}{\sigma}\right) = \frac{1}{\sqrt{2\pi}} \int_{\frac{x_0-m_x}{\sigma}}^{\infty} e^{-y^{2}/2}\,dy \qquad (A.3) \]
where the kernel of the integral on the right-hand side of Equation A.3 is the normalized Gaussian
probability density function with mean of 0 and standard deviation of 1. Evaluation of the integral in
Equation A.3 is designated as the Q-function, which is defined as
\[ Q(z) = \frac{1}{\sqrt{2\pi}} \int_{z}^{\infty} e^{-y^{2}/2}\,dy \qquad (A.4) \]
Hence
\[ \Pr\!\left(y > \frac{x_0 - m_x}{\sigma}\right) = Q\!\left(\frac{x_0 - m_x}{\sigma}\right) = Q(z) \qquad (A.5) \]
The Q-function is bounded by two analytical expressions as follows:
\[ \left(1 - \frac{1}{z^{2}}\right)\frac{1}{z\sqrt{2\pi}}\,e^{-z^{2}/2} \;\le\; Q(z) \;\le\; \frac{1}{z\sqrt{2\pi}}\,e^{-z^{2}/2} \qquad (A.6) \]
For values of z greater than 3.0, both of these bounds closely approximate Q(z). Two important properties of Q(z) are:
\[ Q(-z) = 1 - Q(z), \qquad Q(0) = \frac{1}{2} \qquad (A.7) \]
TS
The error function, denoted by erf(x), is defined in a number of different ways in the literature. We shall use the following definition:
\[ \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-y^{2}}\,dy \qquad (A.8) \]
The error function has two useful properties:
\[ \operatorname{erf}(\infty) = \frac{2}{\sqrt{\pi}} \int_{0}^{\infty} e^{-y^{2}}\,dy = 1 \]
and
\[ \operatorname{erfc}(x) = 1 - \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} e^{-y^{2}}\,dy \qquad (A.9) \]
where erfc(x) denotes the complementary error function. The Q-function is related to the complementary error function as follows:
\[ Q(x) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{x}{\sqrt{2}}\right) \qquad (A.10) \]
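As an illustrative addition, the following Python sketch implements Q(z) via Equation A.10 using the standard-library complementary error function and checks it against the bounds of Equation A.6 for a few values of z > 3.

```python
from math import erfc, exp, pi, sqrt

def Q(z):
    """Q-function via the complementary error function, Equation A.10."""
    return 0.5 * erfc(z / sqrt(2))

def q_bounds(z):
    """Lower and upper bounds on Q(z) from Equation A.6 (tight for z > 3)."""
    tail = exp(-z * z / 2) / (z * sqrt(2 * pi))
    return (1 - 1 / z**2) * tail, tail

for z in (3.0, 4.0, 5.0):
    lower, upper = q_bounds(z)
    print(z, lower, Q(z), upper)   # Q(z) lies between the bounds and approaches them as z grows
```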
APPENDIX B
COMPARISON OF M-ARY SIGNALING TECHNIQUES
Modulation M-ASK and M-QAM:
\[ \frac{r}{B} = 2\log_2 M \qquad (B.1) \]
Modulation M-PSK:
\[ \frac{r}{B} = \log_2 M \qquad (B.2) \]
Modulation M-FSK:
\[ \frac{r}{B} = \frac{2\log_2 M}{M} \qquad (B.4) \]
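The bandwidth-efficiency expressions above are easy to tabulate. The short Python sketch below is an illustrative addition that evaluates Equations B.1, B.2, and B.4 for a few alphabet sizes M:

```python
from math import log2

def bandwidth_efficiency(M, scheme):
    """r/B for 'ask_qam' (B.1), 'psk' (B.2) or 'fsk' (B.4)."""
    if scheme == "ask_qam":
        return 2 * log2(M)
    if scheme == "psk":
        return log2(M)
    if scheme == "fsk":
        return 2 * log2(M) / M
    raise ValueError(scheme)

for M in (2, 4, 16, 64):
    print(M, {s: round(bandwidth_efficiency(M, s), 3) for s in ("ask_qam", "psk", "fsk")})
# Larger alphabets raise r/B for ASK/QAM and PSK but lower it for orthogonal FSK,
# whose occupied bandwidth grows with M.
```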
ERROR PERFORMANCE OF M-ARY SIGNALING TECHNIQUES
Modulation M-ASK:
\[ P_e(s) = \frac{2(M-1)}{M}\, Q\!\left(\sqrt{\frac{6\log_2 M}{M^{2}-1}\cdot\frac{E_b}{N_0}}\right) \qquad (B.5) \]
\[ P_e(b) = \frac{P_e(s)}{\log_2 M} \]
Modulation B-PSK:
\[ P_e(s) = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right) \qquad (B.6) \]
Modulation M-PSK:
\[ P(\text{bit error}) \cong \frac{1}{\log_2 M}\, Q\!\left(\sqrt{k\,\sin^{2}\!\left(\frac{\pi}{M}\right)\frac{2E_b}{N_0}}\right) \qquad (B.7) \]
Modulation M-FSK:
\[ P_e < M\,e^{-E_s/2N_0} \qquad (B.8) \]
Modulation M-QAM:
\[ P_e < 4\,Q\!\left(\sqrt{\frac{3k}{M-1}\cdot\frac{E_b}{N_0}}\right) \qquad (B.9) \]
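For quick comparisons, the error-performance expressions can be evaluated directly. The Python sketch below is an illustrative addition (the function names are mine) that computes the M-PSK bit-error approximation of Equation B.7 and the M-QAM bound of Equation B.9, with $k = \log_2 M$:

```python
from math import erfc, log2, pi, sin, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def psk_bit_error(M, ebn0):
    """Approximate M-PSK bit-error probability, Equation B.7."""
    k = log2(M)
    return Q(sqrt(k * sin(pi / M) ** 2 * 2 * ebn0)) / k

def qam_symbol_error_bound(M, ebn0):
    """Upper bound on M-QAM symbol-error probability, Equation B.9."""
    k = log2(M)
    return 4 * Q(sqrt(3 * k / (M - 1) * ebn0))

ebn0 = 10 ** (10 / 10)          # Eb/N0 = 10 dB expressed as a linear ratio
for M in (4, 16, 64):
    print(M, psk_bit_error(M, ebn0), qam_symbol_error_bound(M, ebn0))
# Prints the two expressions side by side (bit-error for M-PSK, symbol-error bound for M-QAM).
```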
REFERENCES
Proakis, J. G.: Digital Communications, 4th edition, 2001, ISBN 978-0-07-118183-9
Sklar, B.: Digital Communications: Fundamentals and Applications, 2nd edition, 2001, ISBN 0-13-084788-7
Ziemer, R. E.: Principles of Communications: Systems, Modulation, and Noise, 2009, ISBN 978-0-470-25254-3
Das, A.: Digital Communication: Principles and System Modelling, 2010, ISBN 978-3-642-12743-4
Ha, T. T.: Theory and Design of Digital Communication Systems, 2011, ISBN 978-0-521-76174-1
Lecture notes available at: http://kemt.fei.tuke.sk/tts/teoria-telekomunikacnych-systemov-o-predmete/
Department of Electronics and Multimedia Telecommunications