Tutorial 4

The document is a tutorial from the Department of Electronics & Communication Engineering at IIT Roorkee, focusing on digital signal processing and communication. It includes various problems related to discrete memoryless sources, channel capacity, coding efficiency, quantization, and filter bank design. Each problem requires calculations and analysis of concepts such as entropy, mutual information, coding redundancy, and quantization noise.


Department of Electronics & Communication Engineering

Indian Institute of Technology Roorkee

ECC 511 – Digital Signal Processing & Digital Communication


Tutorial #4

1. Consider a discrete memoryless binary source that outputs bit 1 three times more
frequently than bit 0. The source outputs are carried over a discrete memoryless
binary channel having transition probabilities P(Y = 0 | X = 0) = 0.7, P(Y = 1 | X = 0)
= 0.3, P(Y = 0 | X = 1) = 0.4, P(Y = 1 | X = 1) = 0.6, where X and Y are binary-valued
r.v.’s representing the input and output of the channel, respectively. Calculate
(a) Self-information I(0) and I(1) at the source.
(b) Source entropy H(X) and the entropy at the channel output H(Y).
(c) Conditional self-information and mutual information for all combinations of
channel input and output.
(d) Average information carried by the channel per transition. Hence verify that
I(X; Y) = H(X) − H(X | Y) = H(X) + H(Y) − H(X, Y)
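
The quantities in parts (b) and (d) can be checked numerically; a minimal sketch in Python (variable names are illustrative):

```python
import math

px = {0: 0.25, 1: 0.75}            # bit 1 is three times as frequent as bit 0
pyx = {(0, 0): 0.7, (1, 0): 0.3,   # P(Y = y | X = x), keyed as (y, x)
       (0, 1): 0.4, (1, 1): 0.6}

def entropy(p):
    """Entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

pxy = {(x, y): px[x] * pyx[(y, x)] for x in (0, 1) for y in (0, 1)}
py = {y: pxy[(0, y)] + pxy[(1, y)] for y in (0, 1)}

HX, HY, HXY = entropy(px), entropy(py), entropy(pxy)
IXY = HX + HY - HXY                 # mutual information I(X; Y)
print(f"H(X)={HX:.4f}  H(Y)={HY:.4f}  H(X,Y)={HXY:.4f}  I(X;Y)={IXY:.4f}")
```

Since H(X | Y) = H(X, Y) − H(Y), the same numbers verify the identity I(X; Y) = H(X) − H(X | Y).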

2. In Problem 1 above, the source output bits are transmitted as they are. What is the
percentage coding redundancy?
In order to increase the coding efficiency, it is recommended that the source outputs
be encoded using variable length codes by taking a block of N bits at a time. Use
Huffman encoding procedure to determine a binary code for N = 2. Hence, calculate
the increase in coding efficiency.
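
The N = 2 construction can be checked mechanically; a sketch (the `huffman_lengths` helper is illustrative, not part of the tutorial):

```python
import heapq
import itertools
import math

p1 = 0.75                                   # P(bit = 1) from Problem 1
blocks = {b: (p1 if b[0] == '1' else 1 - p1) *
             (p1 if b[1] == '1' else 1 - p1)
          for b in ('00', '01', '10', '11')}

def huffman_lengths(probs):
    """Binary Huffman procedure; returns {symbol: codeword length}."""
    tie = itertools.count()                 # tie-breaker so dicts never compare
    heap = [(p, next(tie), {s: 0}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        pa, _, a = heapq.heappop(heap)
        pb, _, b = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**a, **b}.items()}
        heapq.heappush(heap, (pa + pb, next(tie), merged))
    return heap[0][2]

lengths = huffman_lengths(blocks)
L = sum(blocks[s] * lengths[s] for s in blocks) / 2   # bits per source bit
H = -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))
print(f"avg length = {L} bits/bit, efficiency = {H / L:.4f}")
```

The average length per source bit drops from 1 (uncoded) toward the source entropy H ≈ 0.811 bits, which is where the efficiency gain comes from.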

3. Check whether the following sets of codewords are uniquely decodable. If so, then
check whether they are also instantaneously decodable.
(a) 0, 01, 011
(b) 0, 10, 110, 1110, 1011, 1101
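
Both properties can be checked mechanically: a code is instantaneous iff it is prefix-free, and unique decodability can be tested with the Sardinas–Patterson procedure. A sketch:

```python
def is_prefix_free(code):
    """Instantaneous (prefix) condition: no codeword is a prefix of another."""
    return not any(a != b and b.startswith(a) for a in code for b in code)

def residuals(A, B):
    """Dangling suffixes left when a word of A is a proper prefix of one in B."""
    return {b[len(a):] for a in A for b in B
            if len(b) > len(a) and b.startswith(a)}

def is_uniquely_decodable(code):
    """Sardinas-Patterson test."""
    code = set(code)
    dangling, seen = residuals(code, code), set()
    while dangling:
        if dangling & code:
            return False                    # a dangling suffix is a codeword
        seen |= dangling
        dangling = (residuals(dangling, code) | residuals(code, dangling)) - seen
    return True

set_a = ['0', '01', '011']
set_b = ['0', '10', '110', '1110', '1011', '1101']
print(is_uniquely_decodable(set_a), is_prefix_free(set_a))
print(is_uniquely_decodable(set_b), is_prefix_free(set_b))
```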

4. A binary source emits bits 0 and 1 at a rate 16 kbps with probabilities 0.25 and 0.75,
respectively. If these bits are to be transmitted over a binary symmetric channel with
bit error probability p = 0.25, determine the minimum transmission bit-rate necessary
for reliable error-free transmission.
Suppose the channel coder at the transmitting end maps a block of 10 bits to N bits.
Determine the minimum value of N necessary for reliable error-free transmission.
Hence, calculate the coding efficiency.
In order to increase the coding efficiency, we first apply some source coding scheme
followed by channel coding at the transmitter. If the Huffman coding scheme in
Problem 2 is applied here prior to the channel coding scheme as above, calculate the
increase in coding efficiency.
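
A numerical sketch of the rate and block-length bounds (reading "minimum rate for reliable transmission" as information rate divided by the BSC capacity per use):

```python
import math

def Hb(p):
    """Binary entropy function, bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

Rs = 16e3                       # source bit rate, bits/s
p1 = 0.75                       # P(bit = 1)
pe = 0.25                       # BSC crossover probability

info_rate = Rs * Hb(p1)         # information rate of the source, bits/s
C = 1 - Hb(pe)                  # BSC capacity, bits per channel use
R_min = info_rate / C           # minimum channel bit rate, bits/s
N_min = math.ceil(10 * Hb(p1) / C)   # channel bits per block of 10 source bits
print(f"C = {C:.4f} bit/use, R_min = {R_min:.0f} bit/s, N_min = {N_min}")
```
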
5. A telephone channel has a bandwidth B = 3 kHz and a signal-to-noise power ratio 26
dB. The channel is characterized as a zero-mean AWGN channel with noise psd 10⁻²⁰
Watts per Hz.
(a) Calculate the channel capacity in bits per second.
(b) Determine the minimum bit-energy necessary for reliable transmission over this
channel.
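
A numerical sketch, taking the 10⁻²⁰ W/Hz figure as the one-sided noise psd N₀ (an assumption; only part (a) is independent of it):

```python
import math

B = 3e3                         # bandwidth, Hz
snr = 10 ** (26 / 10)           # 26 dB -> ~398.1
N0 = 1e-20                      # noise psd, W/Hz (taken here as one-sided)

C = B * math.log2(1 + snr)      # Shannon capacity, bits/s
S = snr * N0 * B                # signal power consistent with the given SNR
Eb_min = S / C                  # minimum bit energy when signalling at R = C
print(f"C = {C / 1e3:.2f} kbps, Eb_min = {Eb_min:.2e} J")
```
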
6. Determine the entropy of a zero-mean memoryless source with uniform pdf. Hence,
calculate the coding efficiency when the symbols of such a source, with outputs in
the range −1 to +1, are encoded in an optimal PCM quantizer using fixed-length
codes of 3 bits per symbol.
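
A quick check of the two numbers involved. For a uniform input, the optimal 3-bit PCM quantizer is uniform with 8 equiprobable cells, so the output entropy equals log₂ 8:

```python
import math

A = 1.0                          # source outputs uniform on (-A, A)
h = math.log2(2 * A)             # differential entropy = log2(2A) = 1 bit

# 8 equiprobable quantizer outputs -> output entropy log2(8) = 3 bits
H_out = -sum((1 / 8) * math.log2(1 / 8) for _ in range(8))
efficiency = H_out / 3.0         # fixed-length code spends exactly 3 bits
print(h, H_out, efficiency)
```
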

7. A baseband signal, band-limited to 4 kHz, is sampled at the Nyquist rate. The
samples x are uniformly quantized at 2 bits per sample followed by Huffman coding.
The pdf of the r.v. X at the sampler output is shown below.

8. If the encoder outputs are transmitted over an AWGN channel with noise psd
η / 2 = 2.5 × 10⁻³ Watts per Hz, calculate the following.
(a) Compression ratio and the coding efficiency of the Huffman encoder,
(b) Minimum value of the transmitted bit-energy Eb necessary for supporting
error-free transmission if the channel is band-limited to 40 kHz,
(c) Minimum value of the transmitted bit-energy Eb necessary for supporting
error-free transmission if the channel is not band-limited.

9. Consider a bandpass signal with center frequency 4W and bandwidth W.
(a) Determine the Nyquist rate (minimum sampling rate) for this signal.
(b) Determine the frequencies that are good for sampling the signal so as to achieve
alias-free reconstruction.
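
The valid sampling ranges follow from the classical bandpass-sampling condition 2fH/k ≤ fs ≤ 2fL/(k − 1); a sketch (W normalized to 1):

```python
# Band edges for centre frequency 4W, bandwidth W
W = 1.0
fL, fH = 3.5 * W, 4.5 * W
B = fH - fL

# Alias-free ranges: 2*fH/k <= fs <= 2*fL/(k-1), for k up to floor(fH / B)
ranges = []
for k in range(1, int(fH // B) + 1):
    lo = 2 * fH / k
    hi = 2 * fL / (k - 1) if k > 1 else float('inf')
    if lo <= hi:
        ranges.append((lo, hi))

fs_min = min(lo for lo, _ in ranges)    # minimum alias-free sampling rate
print(ranges, fs_min)                    # fs_min = 2.25 W
```
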

10. Consider a uniform R-bit midrise quantizer with overload point xol, and a uniform
input pdf in the range (−xm, +xm). Calculate the total quantization noise when (a) xm > xol,
(b) xm = xol, and (c) xm < xol.
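
The three cases can be checked numerically, assuming the usual midrise convention with step size Δ = 2·xol/2^R (for xm ≤ xol only granular noise Δ²/12 remains; for xm > xol overload noise is added):

```python
def midrise(x, R, xol):
    """Uniform R-bit midrise quantizer with overload point xol."""
    delta = 2 * xol / (1 << R)
    i = int(x // delta)                                    # cell index
    i = min(max(i, -(1 << (R - 1))), (1 << (R - 1)) - 1)   # clamp: overload
    return (i + 0.5) * delta

def total_noise(R, xol, xm, n=100001):
    """E[(X - Q(X))^2] for X uniform on (-xm, xm), by the midpoint rule."""
    dx = 2 * xm / n
    acc = 0.0
    for j in range(n):
        x = -xm + (j + 0.5) * dx
        acc += (x - midrise(x, R, xol)) ** 2
    return acc / n

R, xol = 3, 1.0
delta = 2 * xol / (1 << R)
print(total_noise(R, xol, 1.0), delta ** 2 / 12)   # xm = xol: granular only
print(total_noise(R, xol, 2.0))                    # xm > xol: overload added
```
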
11. Determine the decision levels and the reconstruction levels of a 2-level pdf-optimized
non-uniform quantizer designed for a Gaussian input with zero mean and unit
variance. Hence, calculate the noise that will be introduced by the quantizer.
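
A numerical check of the symmetric 2-level solution: the decision level sits at 0 and each reconstruction level is the centroid of its half, y = E[X | X > 0]. Since E[|X|] = y here, the MSE simplifies to 1 − 2y·E[|X|] + y² = 1 − y²:

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# y = E[X | X > 0] = 2 * integral_0^inf x phi(x) dx, by the midpoint rule
n, hi = 100000, 8.0
dx = hi / n
moment = sum((i + 0.5) * dx * phi((i + 0.5) * dx) for i in range(n)) * dx
y = 2 * moment                   # = sqrt(2/pi) ≈ 0.7979
mse = 1 - y * y                  # quantizer noise = 1 - 2/pi ≈ 0.3634
print(y, mse)
```
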

Hint: The optimum decision levels d_k and reconstruction levels y_k satisfy
d_k = (y_k + y_{k+1}) / 2 and y_k = ∫ x p(x) dx / ∫ p(x) dx,
with both integrals taken over (d_{k−1}, d_k),
where p(x) is the input pdf.

12. In an adaptive quantization backward (AQB) system, the variance of the input signal
at the nth sampling instant is estimated with an exponential window as
σ²(n) = (1 − α) Σ_{j=1}^{n} α^{j−1} y²(n − j), 0 < α < 1,
where y(n) is the quantized version of the input sample x(n). Show that a one-word
memory is enough for realizing effective adaptation.
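
Assuming the usual exponential-window estimator σ²(n) = (1 − α) Σ_{j≥1} α^{j−1} y²(n − j), the one-word-memory claim can be checked numerically: the full window sum equals the recursion σ²(n) = α σ²(n − 1) + (1 − α) y²(n − 1), which needs only the previous estimate:

```python
import random

random.seed(1)
alpha = 0.9
y = [random.gauss(0, 1) for _ in range(400)]   # stand-in quantized samples

def window_estimate(n):
    """Exponential-window variance estimate using the whole past."""
    return (1 - alpha) * sum(alpha ** (j - 1) * y[n - j] ** 2
                             for j in range(1, n + 1))

# One-word-memory recursion: var(n) = alpha*var(n-1) + (1-alpha)*y(n-1)^2
var = 0.0
for n in range(1, len(y)):
    var = alpha * var + (1 - alpha) * y[n - 1] ** 2

print(abs(var - window_estimate(len(y) - 1)))   # agree to rounding error
```
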

13. Consider an input {X(n)} of variance σX² and normalized adjacent-sample correlation
ρ, contaminated by additive white Gaussian noise of variance σN². Determine the
prediction coefficient in a first-order predictor that minimizes the prediction error.
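
Under one standard reading of the problem (predict x(n) from the noisy observation y(n − 1) = x(n − 1) + N(n − 1)), the orthogonality principle gives a = ρσX² / (σX² + σN²); a Monte-Carlo sketch with an AR(1) process:

```python
import math
import random

random.seed(0)
rho, var_x, var_n = 0.9, 1.0, 0.25         # illustrative parameter values
a_opt = rho * var_x / (var_x + var_n)      # orthogonality principle -> 0.72

# Monte-Carlo check with a unit-variance AR(1) process of correlation rho
num = den = 0.0
x = random.gauss(0, 1)
for _ in range(200000):
    x_next = rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    y_obs = x + math.sqrt(var_n) * random.gauss(0, 1)   # noisy observation
    num += x_next * y_obs          # -> E[X(n) Y(n-1)] = rho * var_x
    den += y_obs ** 2              # -> E[Y^2] = var_x + var_n
    x = x_next
print(a_opt, num / den)
```
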

14. Show analytically that the C-means (or K-means)-based LBG algorithm always
converges to a (local) minimum of the distortion.
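
The mechanism behind the proof (each of the two LBG steps can only lower the average distortion) can be seen in a 1-D numerical illustration:

```python
import random

random.seed(3)
data = [random.gauss(0, 1) for _ in range(500)]
codebook = [-2.0, -0.5, 0.5, 2.0]          # arbitrary initial codebook

def distortion(cb):
    """Average nearest-codeword squared error."""
    return sum(min((x - c) ** 2 for c in cb) for x in data) / len(data)

history = [distortion(codebook)]
for _ in range(15):
    # nearest-neighbour partition, then centroid (mean) update
    cells = [[] for _ in codebook]
    for x in data:
        i = min(range(len(codebook)), key=lambda k: (x - codebook[k]) ** 2)
        cells[i].append(x)
    codebook = [sum(c) / len(c) if c else codebook[i]
                for i, c in enumerate(cells)]
    history.append(distortion(codebook))
print(history[0], history[-1])             # distortion never increases
```
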

15. It is required to design a four-band filter bank [0 – 0.5], [0.5 – 1.0], [1.0 – 2.0], and
[2.0 – 3.0] kHz for 16 kbps coding of speech.
(a) Show the band-division tree that realizes this partition using repeated quadrature
mirror filtering of the [0 – 4.0] kHz band.
(b) If the maximum and minimum numbers of bits allocated for coding one sample in
each sub-band are 5 and 2, respectively, determine a suitable bit assignment that
realizes a total bit-rate of 16 kbps (assume fixed-length coding for each band).
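
Assuming each sub-band is critically sampled at twice its width (sample rates 1, 1, 2 and 4 ksamples/s), a brute-force search over the allowed allocations shows how tight the 16 kbps budget is:

```python
from itertools import product

# Critically sampled sub-bands of widths 0.5, 0.5, 1.0, 2.0 kHz
rates = [1, 1, 2, 4]                                         # ksamples/s
valid = [bits for bits in product(range(2, 6), repeat=4)     # 2..5 bits each
         if sum(b * r for b, r in zip(bits, rates)) == 16]   # 16 kbps total
print(valid)
```
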

16. Consider the encoding of the random variables x1 and x2 that are characterized by the
joint pdf p(x1, x2) = 1/(ab) for (x1, x2) ∈ C and 0 elsewhere,
where C is the rectangle with sides a and b, oriented at 45° w.r.t. the horizontal
axis, as shown in the figure below. Evaluate the bit-rates required for uniform
quantization of x1 and x2 separately (scalar quantization), and combined (vector)
quantization of (x1, x2). Determine the difference in bit-rate when a = 4b.

(Figure: rectangle C with sides a and b in the (x1, x2)-plane, oriented at 45° to the x1-axis.)
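
One common way to set up the comparison: with step size q in both schemes (q cancels), scalar quantization must cover each marginal, of span (a + b)/√2, while vector quantization only needs cells inside the rectangle, of area ab:

```python
import math

def scalar_minus_vector_bits(a, b):
    """Extra bits of scalar over vector quantization for a uniform pdf on a
    45-degree rectangle with sides a and b (quantizer step size cancels)."""
    return 2 * math.log2((a + b) / math.sqrt(2)) - math.log2(a * b)

print(scalar_minus_vector_bits(4.0, 1.0))   # a = 4b -> log2(25/8) ≈ 1.64 bits
```
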

17. Consider the orthogonal transform as follows.

(a) For unit variance and adjacent sample correlation ρ, determine the geometric
mean of coefficient variances.
(b) Find the value of a for which the geometric mean is minimized.

18. Consider KL transform of order 2, and an input with unit variance and adjacent
sample correlation ρ = 0.85.
(a) Calculate the coefficient variances.
(b) Determine the number of bits to be allocated for coding each of the coefficients
for an average bit-rate R = 1.5 bits. Hence, calculate the reconstruction error
variance for uniform quantization of the coefficients.
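
A sketch of parts (a) and (b), using the high-rate optimal allocation rule b_k = R + ½ log₂(σ_k² / geometric mean) before rounding to integers:

```python
import math

rho, R = 0.85, 1.5
# Covariance [[1, rho], [rho, 1]] has eigenvalues 1 +/- rho, so the order-2
# KLT coefficient variances are:
variances = [1 + rho, 1 - rho]                 # 1.85 and 0.15
gmean = math.sqrt(variances[0] * variances[1])

# High-rate allocation: b_k = R + 0.5 * log2(var_k / geometric mean)
bits = [R + 0.5 * math.log2(v / gmean) for v in variances]
print(variances, [round(b, 2) for b in bits])  # roughly 2.41 and 0.59 bits
```

The unrounded allocations sum to 2R by construction; a practical assignment rounds them to integers (e.g. 2 and 1 bits) while keeping the total at 2R = 3 bits.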
