
Information Theory II

Prof. Dr. A. Lapidoth

Model Answers to Exercise 4 of March 24, 2010


http://www.isi.ee.ethz.ch/teaching/courses/it2

Double-Gaussian Channel

Problem 1
1. The capacity of this channel is given by

   C = \frac{1}{2} \log\left(1 + \frac{1}{N}\right).

Figure 1: Channel capacity C (in bits per channel use) versus variance of the noise N.
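
The curve in Figure 1 is easy to reproduce numerically; here is a minimal sketch (the grid of noise variances and the use of matplotlib are my own choices, not part of the original):

```python
import numpy as np
import matplotlib.pyplot as plt

# C = (1/2) log2(1 + 1/N) in bits per channel use, as a function of the
# noise variance N (unit signal power, cf. Figure 1).
N = np.linspace(0.05, 10, 500)
C = 0.5 * np.log2(1 + 1 / N)

plt.plot(N, C)
plt.xlabel("N")
plt.ylabel("C [bits per channel use]")
plt.show()
```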

2. (a) Since the total power is divided equally between the two channels,

   C = \frac{1}{2} \log\left(1 + \frac{P}{2N_1}\right) + \frac{1}{2} \log\left(1 + \frac{P}{2N_2}\right).

   For scheme A we therefore get

   C_A = 2 \cdot \frac{1}{2} \log\left(1 + \frac{P}{2N}\right) = \log\left(1 + \frac{P}{2N}\right).


   As seen in Figure 1, f(x) = log(1 + P/(2x)) is a convex function of x. Therefore, using
   Jensen's inequality,

   C_B = \frac{1}{2} \log\left(1 + \frac{P}{2N_1}\right) + \frac{1}{2} \log\left(1 + \frac{P}{2N_2}\right)
       \geq \log\left(1 + \frac{P}{2(N_1/2 + N_2/2)}\right)
       = \log\left(1 + \frac{P}{2N}\right)
       = C_A.

   Thus, scheme B is better.
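
The comparison can be sanity-checked numerically. A minimal sketch, with hypothetical values for P, N_1 and N_2 (any values with (N_1 + N_2)/2 = N will do):

```python
import numpy as np

P, N1, N2 = 4.0, 1.0, 3.0       # hypothetical power and noise variances
N = (N1 + N2) / 2               # average noise variance seen by scheme A

C_A = np.log2(1 + P / (2 * N))  # scheme A: equal power, average noise
C_B = 0.5 * np.log2(1 + P / (2 * N1)) + 0.5 * np.log2(1 + P / (2 * N2))

print(C_A, C_B)                 # C_B >= C_A, as Jensen's inequality predicts
```
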
   (b) Note first that f(x) = log(1 + x/N) is a concave function of x. Therefore, for any power
   split P_1 + P_2 = P,

   C_A = \frac{1}{2} \log\left(1 + \frac{P_1}{N}\right) + \frac{1}{2} \log\left(1 + \frac{P_2}{N}\right)
       = \frac{1}{2} f(P_1) + \frac{1}{2} f(P_2)
       \leq f\left(\frac{1}{2} P_1 + \frac{1}{2} P_2\right)
       = f\left(\frac{1}{2} P\right)
       = \log\left(1 + \frac{P}{2N}\right)
       = \frac{1}{2} \log\left(1 + \frac{P}{2N}\right) + \frac{1}{2} \log\left(1 + \frac{P}{2N}\right),

   which means that for scheme A an equal input power (P/2 on each channel) achieves the highest
   capacity. Now, since scheme B is better even under equal input power (see part (a)), if we
   optimize over the input power we will find that scheme B is still better.
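
The concavity argument can be checked the same way; the sketch below sweeps the power split for scheme A (the values of P and N are again hypothetical):

```python
import numpy as np

P, N = 4.0, 2.0                 # hypothetical total power and noise variance
P1 = np.linspace(0, P, 1001)    # power on the first channel; P - P1 on the second

# Scheme A capacity as a function of the power split.
C = 0.5 * np.log2(1 + P1 / N) + 0.5 * np.log2(1 + (P - P1) / N)

print(P1[np.argmax(C)])         # ~ P/2: the equal split maximizes capacity
```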

Additive Noise Channel

Problem 2

1. The probability density of the random variables Z_k, for all k, is given as follows:

   f_Z(z) = \frac{1}{10} \delta(z) + \frac{9}{10} f_B(z),

   where \delta(\cdot) denotes the Dirac delta and f_B(z) is the probability density of the
   normal distribution \mathcal{N}(0, N). Hence,
   h(Z) = -\int f_Z(z) \log f_Z(z) \, dz
        = -\int \frac{1}{10} \delta(z) \log\left(\frac{1}{10}\delta(z) + \frac{9}{10} f_B(z)\right) dz
          - \int \frac{9}{10} f_B(z) \log\left(\frac{1}{10}\delta(z) + \frac{9}{10} f_B(z)\right) dz
        = -\frac{1}{10} \log\left(\frac{1}{10}\delta(0) + \frac{9}{10} f_B(0)\right)
          - \frac{9}{10} \int f_B(z) \log\left(\frac{1}{10}\delta(z) + \frac{9}{10} f_B(z)\right) dz
        = -\infty - \frac{9}{10} \int f_B(z) \log\left(\frac{1}{10}\delta(z) + \frac{9}{10} f_B(z)\right) dz.

Since

   \log\left(\frac{1}{10}\delta(z) + \frac{9}{10} f_B(z)\right) \geq \log\left(\frac{9}{10} f_B(z)\right) \quad \forall z,

we get

   \int f_B(z) \log\left(\frac{1}{10}\delta(z) + \frac{9}{10} f_B(z)\right) dz
     \geq \int f_B(z) \log\left(\frac{9}{10} f_B(z)\right) dz
     = \int f_B(z) \log\frac{9}{10} \, dz + \int f_B(z) \log f_B(z) \, dz
     = \log\frac{9}{10} - \frac{1}{2} \log(2\pi e N),

and thus

   h(Z) \leq -\infty - \frac{9}{10} \left( \log\frac{9}{10} - \frac{1}{2} \log(2\pi e N) \right) = -\infty,

i.e., h(Z) = -\infty. This is actually clear, as Z_k is deterministic with non-zero probability.
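
The Gaussian integral used in the bound, namely \int f_B(z) \log((9/10) f_B(z)) dz = \log(9/10) - (1/2)\log(2\pi e N), can be confirmed by numerical quadrature; a minimal sketch (the value N = 2 is a hypothetical choice, and natural logarithms are used):

```python
import numpy as np

N = 2.0                                        # hypothetical noise variance
z = np.linspace(-30, 30, 600001)               # wide grid; the tails are negligible
dz = z[1] - z[0]
f_B = np.exp(-z**2 / (2 * N)) / np.sqrt(2 * np.pi * N)

lhs = np.sum(f_B * np.log(0.9 * f_B)) * dz     # quadrature of f_B log((9/10) f_B)
rhs = np.log(0.9) - 0.5 * np.log(2 * np.pi * np.e * N)
print(lhs, rhs)                                # agree to numerical precision
```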


Further, we have
   f_Y(y) = \frac{1}{10} f_{G_P}(y) + \frac{9}{10} f_{G_{P+N}}(y),

where f_{G_{\sigma^2}}(\cdot) denotes the Gaussian probability density with variance \sigma^2
(f_Y is the convolution of f_X(\cdot) and f_Z(\cdot)). Thus,

   h(Y) = -\int f_Y(y) \log f_Y(y) \, dy
clearly is finite. Therefore,


   C \geq I(X;Y) = h(Y) - h(Y|X)
                 = h(Y) - h(X+Z|X)
                 = h(Y) - h(Z|X)
                 = h(Y) - h(Z)
                 = h(Y) - (-\infty) = \infty,

and the capacity is infinite.
2. In order for a coding scheme to achieve capacity, we must show that for any \epsilon > 0 there
   exists an n_0 such that for any n > n_0 we can send an arbitrary number of bits using n channel
   uses with probability of error less than \epsilon. Our coding scheme is the following:
   Map the message to a real number with absolute value less than \sqrt{P} and transmit this
   number repeatedly. The receiver looks at the output sequence to see if any value is repeated.
   If yes, it declares that value to be the input; otherwise it declares an error. With
   probability one, the only repeated received symbols are the noise-free copies of the input.
   Thus, the probability of error for this scheme is the probability that Z_k = 0 in at most one
   transmission, i.e.,

   (0.9)^n + \binom{n}{1} (0.1)(0.9)^{n-1}.

   We can choose n_0 large enough to ensure a probability of error less than any \epsilon > 0,
   and since there are infinitely many real numbers between -\sqrt{P} and \sqrt{P}, an arbitrary
   number of bits can be sent.
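
The error analysis can be verified by simulation. A minimal Monte Carlo sketch (block length, trial count and seed are my own choices); note that only the indicator of Z_k = 0 matters for the error event, since the Gaussian component is almost surely nonzero:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 40, 200_000                  # block length and Monte Carlo runs

# Each Z_k equals 0 with probability 1/10.
noise_free = rng.random((trials, n)) < 0.1

# The scheme fails iff fewer than two transmissions are noise-free, since with
# probability one only the noise-free copies of the input repeat at the output.
p_err_sim = (noise_free.sum(axis=1) < 2).mean()
p_err_formula = 0.9**n + n * 0.1 * 0.9**(n - 1)
print(p_err_sim, p_err_formula)          # the two agree up to Monte Carlo noise
```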

Exponential Noise Channel

Problem 3
We shall use the following capacity formula without proving it:

   C = \max_{E[X] \leq \alpha} I(X;Y),

where \alpha is the constraint on the expected value of the (nonnegative) channel input.

We can rewrite I(X;Y) as

   I(X;Y) = h(Y) - h(Y|X) = h(Y) - h(Z|X) = h(Y) - h(Z).
Recall that, for a nonnegative random variable W satisfying E[W] = \mu, the entropy-maximizing
distribution is the exponential distribution with mean \mu:

   f(w) = \frac{1}{\mu} e^{-w/\mu}, \quad w \geq 0,

and that the corresponding differential entropy is

   h(W) = \log(e\mu).
(See Example 12.2.5 in Cover & Thomas, 2nd edition.)
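
Both facts can be confirmed numerically; a minimal quadrature sketch with a hypothetical mean \mu = 3 (entropy in nats):

```python
import numpy as np

mu = 3.0                                  # hypothetical mean of W
w = np.linspace(1e-8, 40 * mu, 1_000_000)
dw = w[1] - w[0]
f = np.exp(-w / mu) / mu                  # Exp(mu) density

h = -np.sum(f * np.log(f)) * dw           # differential entropy in nats
print(h, np.log(np.e * mu))               # both equal 1 + ln(mu)
```
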
Thus, writing \beta for the mean of the exponential noise Z, we can bound I(X;Y) under the
condition E[X] \leq \alpha as

   I(X;Y) = h(Y) - h(Z)
          = h(Y) - \log(e\beta)
          \leq \log(e \, E[Y]) - \log(e\beta)
          = \log\bigl(e(E[X] + E[Z])\bigr) - \log(e\beta)
          \leq \log\bigl(e(\alpha + \beta)\bigr) - \log(e\beta)
          = \log\left(1 + \frac{\alpha}{\beta}\right).

Therefore we obtain

   C \leq \log\left(1 + \frac{\alpha}{\beta}\right).    (1)

We next show that equality in (1) can be achieved. To this end, we choose X to be equal to 0 with
probability \beta/(\alpha+\beta) and to be exponentially distributed with mean \alpha+\beta
otherwise. In other words, we choose X in such a way that

   \Pr(X > x) = \frac{\alpha}{\alpha+\beta} \, e^{-x/(\alpha+\beta)}, \quad x \geq 0.

It can be easily verified that the distribution of Y induced by the above choice of X is
exponential with mean \alpha + \beta. Hence, this choice achieves equality in (1), and therefore

   C = \log\left(1 + \frac{\alpha}{\beta}\right).
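
That this input meets the constraint with equality and makes Y exponential with mean \alpha + \beta can be checked by simulation; a sketch with hypothetical values \alpha = 2, \beta = 1:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, n = 2.0, 1.0, 1_000_000     # hypothetical constraint, noise mean, samples

# X = 0 with probability beta/(alpha+beta), else exponential with mean alpha+beta.
atom = rng.random(n) < beta / (alpha + beta)
X = np.where(atom, 0.0, rng.exponential(alpha + beta, n))
Z = rng.exponential(beta, n)             # exponential noise with mean beta
Y = X + Z

print(X.mean())                          # ~ alpha: the constraint holds with equality
for t in (1.0, 3.0, 6.0):                # compare Pr(Y > t) with the Exp(alpha+beta) tail
    print(t, (Y > t).mean(), np.exp(-t / (alpha + beta)))
```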

Remark: A more rigorous derivation of this result uses Fano's inequality to prove the converse
and a joint-typicality decoder to prove achievability; it proceeds similarly to the derivation
of the capacity of the Gaussian channel.

© Amos Lapidoth, 2010

