
University of Technology

Department of Electrical Engineering


Electrical Engineering Division
Fourth Year – Semester 2 (2023-2024)
Communication Engineering III
Prepared by: Dr. Ikhlas M. Farhan

Lecture Six: Communication Channels and Channel Capacity

6.1 Introduction
In telecommunications and computer networking, a communication channel refers to
a physical transmission medium, such as a wire, or a logical connection over a
multiplexed medium, such as a radio channel. A channel is used to convey an
information signal, for example, a digital bit stream, from one or several senders (or
transmitters) to one or several receivers.
6.2 Types of channel
A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.
1. Symmetric channel
The symmetric channel has the following conditions:
a- Equal number of symbols in X and Y, i.e., P(Y|X) is a square matrix.
b- Every row of the P(Y|X) matrix is a permutation of the other rows.
For example, consider the conditional probability matrices of the following channels:

a- P(Y|X) = [0.9  0.1]
            [0.1  0.9]

It is a BSC because it is a square matrix and the 1st row is a permutation of the 2nd row.

b- P(Y|X) = [0.9   0.05  0.05]
            [0.05  0.9   0.05]
            [0.05  0.05  0.9 ]

It is a ternary symmetric channel (TSC) because it is a square matrix and each row is a permutation of the others.


c- P(Y|X) = [0.8  0.1  0.1]
            [0.1  0.8  0.1]

It is non-symmetric since it is not square, although each row is a permutation of the others.

d- P(Y|X) = [0.8  0.1  0.1]
            [0.1  0.7  0.2]
            [0.1  0.1  0.8]

It is non-symmetric: although it is square, the 2nd row is not a permutation of the other rows.
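These two conditions can be checked numerically. The sketch below is an illustrative helper, not part of the lecture (it assumes NumPy is available): it tests whether P(Y|X) is square and whether every row is a permutation of the first row.

```python
import numpy as np

def is_symmetric_channel(P, tol=1e-9):
    """Check the lecture's two conditions on a transition matrix P(Y|X):
    (a) P is square, (b) every row is a permutation of every other row."""
    P = np.asarray(P, dtype=float)
    if P.shape[0] != P.shape[1]:              # condition (a): square matrix
        return False
    first_sorted = np.sort(P[0])
    # condition (b): every row, once sorted, matches the sorted first row
    return all(np.allclose(np.sort(row), first_sorted, atol=tol) for row in P)

print(is_symmetric_channel([[0.9, 0.1],
                            [0.1, 0.9]]))          # True  (BSC, example a)
print(is_symmetric_channel([[0.8, 0.1, 0.1],
                            [0.1, 0.7, 0.2],
                            [0.1, 0.1, 0.8]]))     # False (example d)
```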

2. Binary symmetric channel (BSC)


It is a common communication channel model used in coding and information theory.
In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver
receives a bit. It is assumed that the bit is usually transmitted correctly but that it will
be "flipped" with some probability (the "crossover probability").

A binary symmetric channel with crossover probability p, denoted by BSCp, is a


channel with binary input and binary output and probability of error p; that is, if X is
the transmitted random variable and Y the received variable, then the channel is
characterized by the conditional probabilities:

P(Y = 0 | X = 0) = 1 − p
P(Y = 0 | X = 1) = p
P(Y = 1 | X = 0) = p
P(Y = 1 | X = 1) = 1 − p
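As a quick illustration (not from the lecture; a minimal sketch assuming NumPy), a BSC can be simulated by flipping each transmitted bit independently with the crossover probability p:

```python
import numpy as np

def bsc(bits, p, seed=0):
    """Pass a binary sequence through a BSC with crossover probability p."""
    rng = np.random.default_rng(seed)
    bits = np.asarray(bits)
    flips = rng.random(bits.shape) < p        # each bit flips independently with prob. p
    return bits ^ flips                       # XOR applies the flips

tx = np.random.default_rng(1).integers(0, 2, size=100_000)
rx = bsc(tx, p=0.1)
print("measured crossover rate:", np.mean(tx != rx))   # close to 0.1
```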


3. Ternary symmetric channel (TSC)


The transitional probability of TSC is:

The TSC is symmetric but impractical since practically 𝑥1 and 𝑥3 are not affected as
much as 𝑥2. The interference between 𝑥1 and 𝑥3 is much less than the interference
between 𝑥1 and 𝑥2 or 𝑥2 and 𝑥3.

Hence, the more practical but non-symmetric channel has the transition probability:

where x1 interferes with x2 exactly as x2 interferes with x3, while x1 and x3 do not interfere with each other.


4. Discrete Memoryless Channel


The Discrete Memoryless Channel (DMC) has an input X and an output Y. A DMC is represented by the conditional probability of the output Y = y given the input X = x, i.e., P(Y|X). At any given time t, the channel output Y = y depends only on the input X = x at that time t, not on the input's past history.

5. Binary Erasure Channel (BEC)


The Binary Erasure Channel (BEC) model is widely used to represent channels or links that "lose" data. Prime examples of such channels are Internet links and routes.
A BEC channel has a binary input X and a ternary output Y.

Note that the BEC's probability of “bit error” is zero. In other words, the following
conditional probabilities hold for any BEC model:

P(Y = "erasure" | X = 0) = p
P(Y = "erasure" | X = 1) = p
P(Y = 0 | X = 0) = 1 − p
P(Y = 1 | X = 1) = 1 − p
P(Y = 0 | X = 1) = 0
P(Y = 1 | X = 0) = 0
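A BEC can be simulated in the same spirit (an illustrative sketch; the erasure marker -1 is a hypothetical convention, not from the lecture):

```python
import numpy as np

ERASURE = -1    # hypothetical marker for the third (erasure) output symbol

def bec(bits, p, seed=0):
    """Pass a binary sequence through a BEC with erasure probability p."""
    rng = np.random.default_rng(seed)
    out = np.asarray(bits).copy()
    erased = rng.random(out.shape) < p        # each bit is erased independently with prob. p
    out[erased] = ERASURE                     # erased bits become the erasure symbol
    return out                                # bits are never flipped, only erased

tx = np.random.default_rng(1).integers(0, 2, size=12)
print(tx)
print(bec(tx, p=0.3))
```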


6. Special Channels
A. Lossless channel: It has only one nonzero element in each column of the transitional matrix P(Y|X).

This channel has H(X|Y) = 0 and I(X,Y) = H(X), i.e., zero losses entropy.

B. Deterministic channel: It has only one nonzero element in each row of the transitional matrix P(Y|X), for example:

This channel has H(Y|X) = 0 and I(Y,X) = H(Y), i.e., zero noise entropy.
C. Ideal channel: It has only one nonzero element in each row and each column of the transitional matrix P(Y|X), i.e., it is an identity matrix. For example:

P(Y|X) = [1  0  0]
         [0  1  0]    (rows: x1, x2, x3; columns: y1, y2, y3)
         [0  0  1]

This channel has H(Y|X) = H(X|Y) = 0 and I(Y,X) = H(Y) = H(X).

D. Noisy channel: There is no relation between input and output (X and Y are independent):

H(X|Y) = H(X),  H(Y|X) = H(Y),
I(X,Y) = 0,  C = 0,
H(X,Y) = H(X) + H(Y)
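The entropy relations listed for these special channels can be verified numerically. The following sketch (illustrative, not part of the lecture) computes H(X), H(Y), the losses entropy H(X|Y), the noise entropy H(Y|X), and I(X,Y) from P(X) and P(Y|X); the ideal-channel call at the end uses a hypothetical input distribution.

```python
import numpy as np

def entropies(p_x, p_y_given_x):
    """Return H(X), H(Y), H(X|Y), H(Y|X) and I(X,Y) in bits,
    given the input distribution P(X) and the transition matrix P(Y|X)."""
    p_x = np.asarray(p_x, dtype=float)
    p_yx = np.asarray(p_y_given_x, dtype=float)
    p_joint = p_x[:, None] * p_yx             # joint probabilities P(x, y)
    p_y = p_joint.sum(axis=0)                 # marginal P(y)

    def H(p):                                 # entropy in bits, skipping zero entries
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    H_x, H_y, H_xy = H(p_x), H(p_y), H(p_joint.ravel())
    return {"H(X)": H_x, "H(Y)": H_y,
            "H(X|Y)": H_xy - H_y,             # losses entropy
            "H(Y|X)": H_xy - H_x,             # noise entropy
            "I(X,Y)": H_x + H_y - H_xy}       # transinformation

# Hypothetical ideal channel: identity P(Y|X), so H(X|Y) = H(Y|X) = 0 and I = H(X) = H(Y)
print(entropies([0.5, 0.3, 0.2], np.eye(3)))
```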

6.3 Channel Capacity of Discrete channel


This is defined as the maximum of I(X,Y):

C = channel capacity = max[I(X,Y)]    bits/symbol          (1)

Physically, it is the maximum amount of information each symbol can carry to the receiver. Sometimes this capacity is also expressed in bits/sec when related to the rate r of producing symbols:

R(X,Y) = r × I(X,Y)    bits/sec    or    R(X,Y) = I(X,Y)/τ    bits/sec,

where τ = 1/r is the symbol duration.

6.3.1 Channel capacity of Symmetric channels


The channel capacity is defined as max[I(X,Y)]:

I(X,Y) = H(Y) − H(Y|X)

I(X,Y) = H(Y) + Σj Σi P(xi, yj) log2 P(yj|xi)

(the sums run over the n inputs i = 1..n and the m outputs j = 1..m). But we have P(xi, yj) = P(xi) P(yj|xi); substituting in the above equation yields:

I(X,Y) = H(Y) + Σj Σi P(xi) P(yj|xi) log2 P(yj|xi)

If the channel is symmetric, the quantity

Σj P(yj|xi) log2 P(yj|xi) = K

where K is a constant independent of the row number i, so the equation becomes:

I(X,Y) = H(Y) + K Σi P(xi)          (2)

Hence, for symmetric channels, since Σi P(xi) = 1:

I(X,Y) = H(Y) + K          (3)


max I(X,Y) = max[H(Y) + K] = max[H(Y)] + K
When the Y symbols are equiprobable, max[H(Y)] = log2 m, then:

C = max I(X,Y) = log2 m + K          (4)

• Channel efficiency and redundancy


The channel efficiency:

η = I(X,Y) / [I(X,Y)]max = I(X,Y) / C          (5)

To find the channel redundancy:


𝑅=1−𝜂 (6)

Example 1: For the BSC shown:

Find the channel capacity and efficiency if I(𝑥1 )=2 bits.


Solution:

P(Y|X) = [0.7  0.3]
         [0.3  0.7]

Since the channel is symmetric, C = log2 m + K and n = m,
where n and m are the number of rows and columns, respectively.

K = 0.7 log2 0.7 + 0.3 log2 0.3 = −0.88129
C = 1 − 0.88129 = 0.1187 bits/symbol


The channel efficiency: η = I(X,Y)/C

I(x1) = −log2 P(x1) = 2

P(x1) = 2⁻² = 0.25, then P(X) = [0.25  0.75]ᵀ

And we have P(xi, yj) = P(xi) P(yj|xi), so that:

P(X,Y) = [0.7×0.25  0.3×0.25] = [0.175  0.075]
         [0.3×0.75  0.7×0.75]   [0.225  0.525]

P(Y) = [0.4  0.6] → H(Y) = −(0.4 log2 0.4 + 0.6 log2 0.6) = 0.97095 bits/symbol

I(X,Y) = H(Y) + K = 0.97095 − 0.88129 = 0.0896 bits/symbol
Then
η = I(X,Y)/C × 100% = 0.0896/0.1187 × 100% = 75.6%
To find the channel redundancy: 𝑅=1−𝜂=1−0.756=0.244 𝑜𝑟 24.4%.
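A short numerical check of Example 1 (a sketch assuming NumPy; the numbers are rounded as in the text):

```python
import numpy as np

P_yx = np.array([[0.7, 0.3],
                 [0.3, 0.7]])                 # BSC transition matrix of Example 1
p_x  = np.array([0.25, 0.75])                 # I(x1) = 2 bits  ->  P(x1) = 2**-2 = 0.25

K = np.sum(P_yx[0] * np.log2(P_yx[0]))        # row constant K (same for every row)
C = np.log2(P_yx.shape[1]) + K                # C = log2(m) + K

p_y = p_x @ P_yx                              # P(Y) = [0.4, 0.6]
H_y = -np.sum(p_y * np.log2(p_y))
I   = H_y + K                                 # I(X,Y) = H(Y) + K for a symmetric channel

print(f"C = {C:.4f} bits/symbol")             # ≈ 0.1187
print(f"I(X,Y) = {I:.4f} bits/symbol")        # ≈ 0.0897
print(f"efficiency = {I / C:.1%}")            # ≈ 75.6%
```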

Example 2: For the BSC below, determine the channel capacity and channel efficiency if P(x1) = 0.4.

Solution:

P(Y|X) = [0.9  0.1]
         [0.1  0.9]

Since the channel is symmetric:
K = 0.9 log2 0.9 + 0.1 log2 0.1 = −0.469
C = log2 2 + K = 1 − 0.469 = 0.531 bits/symbol
P(X) = [0.4  0.6], so P(Y) = [0.9×0.4 + 0.1×0.6   0.1×0.4 + 0.9×0.6] = [0.42  0.58]
H(Y) = −(0.42 log2 0.42 + 0.58 log2 0.58) = 0.982 bits/symbol
I(X,Y) = H(Y) + K = 0.982 − 0.469 = 0.513 bits/symbol


η = I(X,Y)/C × 100% = 0.513/0.531 × 100% = 96.61%

6.3.2 Cascading of Channels


If two channels are cascaded, then the overall transition matrix is the product of the
two transition matrices.


For a cascade (series) of information channels, the overall capacity cannot exceed the capacity of any individual channel. A matrix-product sketch is given below.
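In matrix form, cascading is a matrix product: P(Z|X) = P(Y|X) · P(Z|Y). A minimal sketch with hypothetical transition matrices (the matrices of Example 3 are given in the figure and are not reproduced here):

```python
import numpy as np

# Hypothetical transition matrices of two cascaded binary channels (not Example 3's values)
P_y_given_x = np.array([[0.9, 0.1],
                        [0.2, 0.8]])
P_z_given_y = np.array([[0.7, 0.3],
                        [0.1, 0.9]])

P_z_given_x = P_y_given_x @ P_z_given_y       # overall transition matrix P(Z|X)
print(P_z_given_x)                            # [[0.64 0.36]
                                              #  [0.22 0.78]]
print(P_z_given_x.sum(axis=1))                # each row still sums to 1
```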

Example 3: Find the transition matrix P(Z|X) for the cascaded channel shown:

Solution:

Homework: For the previous example (3), find P(Y) and P(Z) if P(X) = [0.7  0.3].


6.4 Shannon’s Theorem


a) A given communication system has a maximum rate of information C, known as
the channel capacity.
b) If the transmitted information rate R is less than C, one can approach arbitrarily
small error probabilities using intelligent coding techniques.
c) To achieve lower error probabilities, the encoder must work on longer blocks of signal data. This entails longer delays and higher computational requirements.
Thus, if R ≤ C, the transmission may be accomplished without error in the presence
of noise; the negation of this theorem is also true: if R > C, then errors cannot be
avoided regardless of the coding technique used.
Consider a band-limited Gaussian channel operating in the presence of additive
Gaussian noise:

The Shannon-Hartley theorem states that the channel capacity is given by:

C = B log2(1 + S/N)          (7)

Where C is the capacity in bits per second, B is the channel's bandwidth in Hertz,
and S/N is the signal-to-noise ratio.

6.5 Channel capacity of Gaussian channels


Definition: A Gaussian channel is a channel affected by Gaussian noise.
• Review of Gaussian signal:
If the noise signal n(t) is Gaussian, then its PDF (probability density function) is:


p(n) = (1/(√(2π) σ)) e^(−0.5((n−μ)/σ)²)          (8)

where μ is the mean of n(t) and σ² is the variance of n(t).


If n(t) is a thermal noise, then we can assume that 𝜇 =0, and the frequency spectrum of
this noise is flat over a wide range of frequencies, as shown.

This noise has a two-sided power spectral density Gn(f) = η/2 W/Hz and a one-sided power spectral density Gn(f) = η W/Hz.

Since the spectrum is flat, we call this noise white noise. This white noise affects the signal x(t) as an additive term, i.e., the received signal is y(t) = x(t) + n(t). Such thermal noise is commonly called Additive White Gaussian Noise (AWGN). The figure below shows how this AWGN affects an equiprobable bipolar ±A signal.


[Figure: the received signal y(t) in the time domain and its amplitude PDF for an equiprobable bipolar ±A signal in AWGN.]

• The entropy of Gaussian noise:


Mathematically, we can prove that if x(t) is a random variable, then the entropy of x is maximum if x(t) has a Gaussian PDF. To find this entropy (assuming μ = 0):

H(X) = −∫ p(x) ln[ (1/(√(2π) σ)) e^(−x²/(2σ²)) ] dx    nats/sample

H(X) = ∫ p(x) ln(√(2π) σ) dx + (1/(2σ²)) ∫ x² p(x) dx

But:

∫ x² p(x) dx = mean square of x = σ² + μ² = σ²    (since μ = 0)

and ∫ p(x) dx = 1

Then:

H(X) = ln(√(2π) σ) + 0.5 = ln(√(2π) σ) + ln √e

H(X) = ln(√(2πe) σ)    nats/sample

or H(X) = log2(√(2πe) σ)    bits/sample
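The closed form H(X) = log2(√(2πe) σ) can be checked against a Monte-Carlo estimate of the differential entropy (an illustrative sketch, not part of the lecture):

```python
import numpy as np

sigma = 2.0
closed_form = np.log2(sigma * np.sqrt(2 * np.pi * np.e))    # H(X) in bits/sample

# Monte-Carlo estimate: average of -log2 p(x) over samples drawn from N(0, sigma^2)
rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma, size=1_000_000)
p = np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
estimate = np.mean(-np.log2(p))

print(closed_form, estimate)    # both ≈ 3.05 bits/sample for sigma = 2
```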



• Channel capacity of Gaussian channels


A Gaussian channel is affected by Gaussian noise n(t). Then:

C = max[H(Y) − H(Y|X)] = max[receiver entropy − noise entropy]

It should be noted that the maximization is already included when we take the case of Gaussian noise, then:

C = log2(√(2πe) σy) − log2(√(2πe) σn)

using the previous expression of H(X) for a Gaussian signal, applied to the received signal y with variance σy² and to the noise n(t) with variance σn² (noise power). Hence:

C = log2(σy/σn) = (1/2) log2(σy²/σn²)

But σy² = σx² + σn² (sum of powers), and σx² = S = signal power, σn² = N = noise power, then:

C = (1/2) log2((S + N)/N) = (1/2) log2(1 + S/N)    bits/sample

For an analog signal sampled at the Nyquist rate, the sampling frequency is fs = 2B samples/sec, where B is the bandwidth of the signal; hence:

C = (1/2) log2(1 + S/N) × 2B    bits/sec

or:

C = B log2(1 + S/N)    bps

This is a very important formula, the Shannon Equation, named after C.E. Shannon. It is sometimes called the Shannon-Hartley equation.

Notes on Shannon equation:

• Care must be taken regarding the units: here B is in Hz, SNR is the signal-to-noise power ratio in absolute terms, and C is in bits/sec. If SNR is given in dB, then:

SNR (absolute) = 10^(0.1 × SNR in dB)

• The ratio C/B = log2(1 + SNR) gives what is called the channel utilization ratio (bps per Hz), which increases with SNR, as shown in the figure.

• The equation C = B log2(1 + SNR) gives the maximum theoretical performance, in terms of the maximum bit rate that can be transmitted over a channel having bandwidth B and a given SNR. A small helper based on it is sketched below.
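A small helper (a sketch, not from the lecture) that applies the Shannon equation together with the dB conversion from the first note; Examples 5 to 10 below can be checked with it:

```python
import numpy as np

def shannon_capacity(bandwidth_hz, snr_db):
    """Channel capacity in bits/sec from bandwidth (Hz) and SNR (dB)."""
    snr = 10 ** (0.1 * snr_db)                # dB -> absolute power ratio
    return bandwidth_hz * np.log2(1 + snr)

print(shannon_capacity(3500, 15))             # ≈ 17.6 kbps (Example 5 rounds SNR to 31 -> 17500 bps)
```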

Example 4: Find the channel capacity of a Gaussian channel if its bandwidth increases
without a limit.

Solution:
B increasing without limit means B → ∞, then:

lim(B→∞) C = lim(B→∞) B log2(1 + SNR)

Note that SNR itself is a function of B:

N = ηB (η is the one-sided noise power spectral density), then:

lim(B→∞) C = lim(B→∞) B log2(1 + S/(ηB))

To find this limit, let x = S/(ηB), then:


C = (S/η) (1/x) log2(1 + x) = (S/η) log2(1 + x)^(1/x)

As B → ∞, x → 0 and (1 + x)^(1/x) → e, so:

lim(B→∞) C = (S/η) log2 e ≈ 1.44 S/η

Note:

The result of the previous example indicates that the channel capacity C approaches a limit of 1.44 S/η even if B is very large (see the numerical sketch below). This result is very important for bandwidth-unlimited but power-limited channels, such as satellite channels, where the bandwidth may be large but the signal power is the critical parameter.
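The limit can also be seen numerically. The sketch below uses illustrative values for S and η (not from the lecture) and shows C approaching 1.44 S/η as B grows:

```python
import numpy as np

S   = 1e-3      # signal power in W           (illustrative value)
eta = 1e-9      # one-sided noise PSD in W/Hz (illustrative value)

for B in (1e6, 1e7, 1e8, 1e9):
    C = B * np.log2(1 + S / (eta * B))        # Shannon capacity for this bandwidth
    print(f"B = {B:.0e} Hz  ->  C = {C / 1e6:.3f} Mbps")

print("limit 1.44*S/eta =", 1.44 * S / eta / 1e6, "Mbps")
```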

Example 5: Find the maximum theoretical information rate that can be transmitted over a telephone channel having 3.5 kHz bandwidth and 15 dB SNR.

Solution:
C is the maximum theoretical information rate, using Shannon eq., then:

C = B log2(1 + SNR), where SNR = 15 dB; converting to absolute, SNR = 10^(0.1×15) ≈ 31,

then:

C = 3500 log2(1 + 31) = 3500 × 5 = 17500 bps.

Example 6: A source produces 16 equiprobable symbols at a rate of 500 symbols/sec;


check the possibility of transmitting this rate over the telephone channel of the previous
example (example 5).

Solution:

First, we find the rate of information from the source, which is the source entropy rate
R(X):

R(X) = H(X)* rate of symbols.

H(X) =H(X)|max=log 2 16=4bits (equiprobable case)

Then: R(X) =4 * 500= 2000 bps. Now, since R(X) < 17500, it is possible to transmit
source output over this channel.

Example 7: Find the minimum theoretical time it would take to transmit 2500 octal
digits over the telephone channel of the previous example (example 6).

Solution:
From the previous example, then C=17500 bps. A minimum theoretical transmission
time is obtained if the channel operates at the maximum rate, which is C, then:

Tmin= [amount of information to be transmitted]/C

Amount of information = 2500 * log 2 8 =7500 bits (note each octal digit has
log 2 8=3bits of information), then:

Tmin= 7500/17500=0.428sec.

Example 8: Find the minimum theoretical SNR required to transmit compressed video
information at a rate of 27Mbps over a channel having 5MHz bandwidth.

Solution:
For the minimum theoretical SNR, set C = source bit rate = 27 Mbps, then:

C = B log2(1 + S/N)

27 × 10⁶ = 5 × 10⁶ log2(1 + SNR), or

log2(1 + SNR) = 5.4 → 1 + SNR = 2^5.4 ≈ 42.2 → SNR ≈ 41.2 (absolute), or SNR ≈ 16.1 dB
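Solving the Shannon equation for the SNR, as done in Example 8, can be wrapped in a one-line helper (an illustrative sketch, not from the lecture):

```python
import numpy as np

def min_snr_db(rate_bps, bandwidth_hz):
    """Minimum SNR (dB) needed to support rate_bps over bandwidth_hz at the Shannon limit."""
    snr = 2 ** (rate_bps / bandwidth_hz) - 1   # invert C = B*log2(1 + SNR)
    return 10 * np.log10(snr)

print(min_snr_db(27e6, 5e6))                   # ≈ 16.1 dB, matching Example 8
```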


Example 9: If the channel capacity is 2000 bps, find its bandwidth if the SNR is 3.
Solution:
C = B log2(1 + S/N)
2000 = B log2(1 + 3)
2000 = B log2 4 = 2B
B = 1000 Hz = 1 kHz

Example 10: Determine the SNR required to achieve a channel capacity of 1585 bps
if the bandwidth is 1 kHz.
Solution:
C = B log2(1 + S/N)
1585 = 1000 log2(1 + S/N)
1585/1000 = 1.585 = log2(1 + SNR)
1 + SNR = 2^1.585 = 3
SNR = 2
H.W
Q1: For the channel below having a transition matrix:

P(Y|X) = [0.7  0.2  0.1]
         [0.1  0.7  0.2]
         [0.2  0.1  0.7]
Find the channel capacity and efficiency if p(x1) =p(x2) =0.25.

Q2: For the channel model shown below, with P(x1) = 0.6 and P(x2) = 0.4, determine:
1- Marginal entropy.
2- Joint entropy.
3- Noise entropy H(Y|X).
4- Losses entropy H(X|Y).


Q3: A binary source sends x1 with probability 0.4 and x2 with probability 0.6 through a channel with error probabilities of 0.1 for x1 and 0.2 for x2. Determine:
1-Source entropy.
2- Marginal entropy.
3- Joint entropy.
4- Conditional entropy (Y|X).
5- Losses entropy (X|Y).
6- Transinformation.

Q4: Draw the channel model for the transition matrix:

P(Y|X) = [2/3  1/3]
         [ 0    1 ]
         [0.5  0.5]

Q5: Consider a communication system that operates over an Additive White Gaussian Noise (AWGN) channel with a bandwidth of 4 kHz. The received signal power S is 10 mW, and the noise power spectral density is N₀/2 = 10⁻⁹ W/Hz.

1. Calculate the signal-to-noise ratio (SNR) of the channel.


2. Determine the channel capacity using the Shannon-Hartley theorem.
3. Discuss how increasing the bandwidth would affect the channel capacity.

Q6: What is the capacity of an AWGN channel with a bandwidth of 1 MHz, signal power of 10 W, and noise power spectral density N₀/2 = 10⁻⁹ W/Hz?

