Information Theory II: Model Answers to Exercise 4 of March 24, 2010
Double-Gaussian Channel
Problem 1
1. The capacity of this channel is given by
\[
C = \frac{1}{2}\log\left(1 + \frac{1}{N}\right).
\]
Figure 1: Channel capacity C (in bits per channel use) versus variance of the noise N.
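The curve in Figure 1 can be reproduced with a few lines of Python. This is a minimal sketch, assuming unit signal power (so that C = ½ log₂(1 + 1/N)) and an illustrative range of noise variances; the plotting range is not fixed by the exercise.

```python
import numpy as np
import matplotlib.pyplot as plt

# Capacity of the Gaussian channel with unit signal power,
# C = (1/2) * log2(1 + 1/N), in bits per channel use.
N = np.linspace(0.05, 10, 500)   # noise variance (illustrative range)
C = 0.5 * np.log2(1 + 1 / N)

plt.plot(N, C)
plt.xlabel("N (variance of the noise)")
plt.ylabel("C (bits per channel use)")
plt.show()
```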
2. (a) Since the total power is divided equally between the two channels,
\[
C = \frac{1}{2}\log\left(1 + \frac{P}{2N_1}\right) + \frac{1}{2}\log\left(1 + \frac{P}{2N_2}\right).
\]
For scheme A we therefore get
\[
C_A = 2 \cdot \frac{1}{2}\log\left(1 + \frac{P}{2N}\right) = \log\left(1 + \frac{P}{2N}\right).
\]
As can be seen in Figure 1, f(x) = log(1 + P/(2x)) is a convex function of x. Therefore, using Jensen's inequality (and recalling that N = (N₁ + N₂)/2),
\[
C = \frac{1}{2}\log\left(1 + \frac{P}{2N_1}\right) + \frac{1}{2}\log\left(1 + \frac{P}{2N_2}\right)
\geq \log\left(1 + \frac{P}{2(N_1/2 + N_2/2)}\right)
= \log\left(1 + \frac{P}{2N}\right) = C_A.
\]
Thus, scheme B is better.
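As a quick numerical sanity check of this comparison, the following sketch evaluates both schemes for arbitrary example values N₁ = 1, N₂ = 3 (so N = 2) and P = 4; these numbers are not from the exercise.

```python
import math

# Arbitrary example values: noise variances of scheme B's two channels,
# their average N (the common variance in scheme A), and the total power P.
N1, N2, P = 1.0, 3.0, 4.0
N = (N1 + N2) / 2

C_A = math.log2(1 + P / (2 * N))
C_B = 0.5 * math.log2(1 + P / (2 * N1)) + 0.5 * math.log2(1 + P / (2 * N2))
print(f"C_A = {C_A:.4f} bits, C_B = {C_B:.4f} bits")  # prints C_B > C_A
```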
(b) Here f(x) = log(1 + x/N) is a concave function of x. Therefore, with P₁ + P₂ = P,
\[
\begin{aligned}
\frac{1}{2}\log\left(1 + \frac{P_1}{N}\right) + \frac{1}{2}\log\left(1 + \frac{P_2}{N}\right)
&= \frac{1}{2}f(P_1) + \frac{1}{2}f(P_2) \\
&\leq f\left(\frac{1}{2}P_1 + \frac{1}{2}P_2\right) \\
&= f\left(\frac{1}{2}P\right) \\
&= \log\left(1 + \frac{P}{2N}\right) \\
&= \frac{1}{2}\log\left(1 + \frac{P}{2N}\right) + \frac{1}{2}\log\left(1 + \frac{P}{2N}\right),
\end{aligned}
\]
which means that for scheme A an equal power split (P/2 on each channel) achieves the highest capacity. Now, since scheme B is better even under an equal power split (see (a)), if we optimize over the input powers we will find that scheme B is still better.
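This too is easy to check numerically. The following sketch (again with arbitrary example values P = 4 and N = 2) scans the split P₁ + P₂ = P and confirms that the scheme-A capacity peaks at P₁ = P/2.

```python
import numpy as np

P, N = 4.0, 2.0                      # arbitrary example values

# Scan the power split P1 + P2 = P and evaluate the scheme-A capacity.
P1 = np.linspace(0.0, P, 1001)
C = 0.5 * np.log2(1 + P1 / N) + 0.5 * np.log2(1 + (P - P1) / N)
print(f"maximizing P1 = {P1[np.argmax(C)]:.3f}  (P/2 = {P / 2})")
```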
Problem 2
1. The probability density of the random variables Z_k for all k is given as follows:
\[
f_Z(z) = \frac{1}{10}\,\delta(z) + \frac{9}{10}\,f_B(z),
\]
where δ(·) is the Dirac delta and f_B(z) is the probability density of the normal distribution N(0, N). Hence,
\[
\begin{aligned}
h(Z) &= -\int f_Z(z) \log f_Z(z)\, dz \\
&= -\int \frac{1}{10}\,\delta(z) \log\left(\frac{1}{10}\,\delta(z) + \frac{9}{10}f_B(z)\right) dz
   - \int \frac{9}{10}f_B(z) \log\left(\frac{1}{10}\,\delta(z) + \frac{9}{10}f_B(z)\right) dz \\
&= -\frac{1}{10}\log\left(\frac{1}{10}\,\delta(0) + \frac{9}{10}f_B(0)\right)
   - \frac{9}{10}\int f_B(z) \log\left(\frac{1}{10}\,\delta(z) + \frac{9}{10}f_B(z)\right) dz \\
&= -\infty - \frac{9}{10}\int f_B(z) \log\left(\frac{1}{10}\,\delta(z) + \frac{9}{10}f_B(z)\right) dz.
\end{aligned}
\]
Since
\[
\log\left(\frac{1}{10}\,\delta(z) + \frac{9}{10}f_B(z)\right) \geq \log\left(\frac{9}{10}f_B(z)\right) \qquad \text{for all } z,
\]
we get
\[
\begin{aligned}
-\int f_B(z) \log\left(\frac{1}{10}\,\delta(z) + \frac{9}{10}f_B(z)\right) dz
&\leq -\int f_B(z) \log\left(\frac{9}{10}f_B(z)\right) dz \\
&= -\int f_B(z) \log\frac{9}{10}\, dz - \int f_B(z) \log\left(f_B(z)\right) dz \\
&= \log\frac{10}{9} + \frac{1}{2}\log(2\pi e N),
\end{aligned}
\]
and thus
\[
h(Z) \leq -\infty + \frac{9}{10}\left(\log\frac{10}{9} + \frac{1}{2}\log(2\pi e N)\right) = -\infty.
\]
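The conclusion h(Z) = −∞ reflects the atom that the noise places at zero, and can be illustrated numerically. A minimal sketch (with the assumed value N = 1, which the exercise leaves symbolic): quantize Z to bins of width Δ and compute the plug-in estimate H(⌊Z/Δ⌋) + log₂ Δ, which would converge to h(Z) for a purely continuous law but here drifts off to −∞ as Δ shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1.0                              # assumed variance of the Gaussian part
n = 10**6

# Sample Z: exactly 0 with probability 1/10, N(0, N) with probability 9/10.
Z = np.where(rng.random(n) < 0.9, rng.standard_normal(n) * np.sqrt(N), 0.0)

for delta in [1.0, 0.1, 0.01, 0.001]:
    # Plug-in estimate: entropy of the quantized samples plus log2(delta).
    _, counts = np.unique(np.floor(Z / delta), return_counts=True)
    p = counts / n
    h_est = -(p * np.log2(p)).sum() + np.log2(delta)
    print(f"delta = {delta:6.3f}: estimated h(Z) = {h_est:7.3f} bits")
```

The estimate keeps dropping by roughly (1/10)·log₂ 10 bits per decade of Δ (the atom's mass times log₂ Δ), with no floor.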
2. The probability density of Y is given by
\[
f_Y(y) = \frac{1}{10}\,f_{G,P}(y) + \frac{9}{10}\,f_{G,P+N}(y),
\]
where f_{G,σ²}(·) denotes the Gaussian probability density with variance σ² (f_Y(y) is the convolution of f_X(·) and f_Z(·)). Thus,
\[
h(Y) = -\int f_Y(y) \log f_Y(y)\, dy.
\]
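The mixture form of f_Y can be checked by simulation. A sketch with assumed example values P = 2 and N = 1 (the exercise leaves these symbolic): sample Y = X + Z and compare a normalized histogram to the two-component Gaussian mixture.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
P, N = 2.0, 1.0                      # assumed example variances
n = 10**6

# Y = X + Z with X ~ N(0, P) and Z the mixed noise from part 1.
X = rng.standard_normal(n) * np.sqrt(P)
Z = np.where(rng.random(n) < 0.9, rng.standard_normal(n) * np.sqrt(N), 0.0)
Y = X + Z

# Compare the empirical density with (1/10) N(0, P) + (9/10) N(0, P + N).
hist, edges = np.histogram(Y, bins=200, density=True)
mid = (edges[:-1] + edges[1:]) / 2
mix = 0.1 * norm.pdf(mid, scale=np.sqrt(P)) + 0.9 * norm.pdf(mid, scale=np.sqrt(P + N))
print("max absolute deviation:", np.max(np.abs(hist - mix)))   # small
```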
Problem 3
We shall use the following capacity formula without proving it:
\[
C = \max_{E[X] \leq \alpha} I(X; Y).
\]
The noise W is exponentially distributed with mean μ,
\[
f_W(w) = \frac{1}{\mu}\, e^{-w/\mu}, \qquad w \geq 0,
\]
so h(W) = log(eμ). Since Y = X + W is nonnegative with E[Y] ≤ α + μ, and since among nonnegative random variables of a given mean the exponential distribution maximizes differential entropy,
\[
I(X; Y) = h(Y) - h(W) \leq \log\bigl(e(\alpha + \mu)\bigr) - \log(e\mu) = \log\left(1 + \frac{\alpha}{\mu}\right).
\]
Therefore we obtain
\[
C \leq \log\left(1 + \frac{\alpha}{\mu}\right). \tag{1}
\]
We next show that equality in (1) can be achieved. To this end, we choose X to be equal to 0 with probability μ/(α + μ) and to be exponentially distributed with mean α + μ otherwise. In other words, we choose X in such a way that
\[
\Pr(X > x) = \frac{\alpha}{\alpha + \mu}\, e^{-\frac{x}{\alpha + \mu}}, \qquad x \geq 0.
\]
It can easily be verified that the distribution on Y induced by the above choice of X is exponential with mean α + μ. Hence, this choice achieves equality in (1), and therefore we have
\[
C = \log\left(1 + \frac{\alpha}{\mu}\right).
\]
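That Y is exponential with mean α + μ can also be verified by simulation. A sketch with assumed values α = 2 and μ = 1:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)
alpha, mu = 2.0, 1.0                 # assumed input-mean constraint and noise mean
n = 10**6

# X = 0 with probability mu/(alpha+mu), exponential with mean alpha+mu otherwise.
is_zero = rng.random(n) < mu / (alpha + mu)
X = np.where(is_zero, 0.0, rng.exponential(alpha + mu, size=n))
W = rng.exponential(mu, size=n)      # exponential noise with mean mu
Y = X + W

print("E[Y] =", Y.mean(), "vs alpha + mu =", alpha + mu)
# Kolmogorov-Smirnov test against Exp(mean alpha + mu); a large p-value
# is consistent with Y being exponential with that mean.
print(kstest(Y, "expon", args=(0, alpha + mu)))
```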
Remark: A more rigorous derivation of this result uses Fano's inequality to prove the converse and a joint-typicality decoder to prove the achievability. This is done in a similar way to the derivation of the capacity of the Gaussian channel.