DC Theory Question Bank 24-25
Question bank
Digital Communication
Unit 1
Random Processes & Noise
1. State the properties of the autocorrelation function. Show that a wide sense stationary process passed through an LTI filter with impulse response h(t) produces a constant mean square value.
2. State the properties of the in-phase and quadrature-phase components of narrowband noise, and explain the generation process along with the PSD.
3. A random process X(t) = A cos(ωct + θ), where A and ωc are constants while θ is a random variable with uniform PDF
fθ(θ) = 1/2π, -π < θ < π
1) Find the mean, autocorrelation function, and PSD of X(t). (Show that X(t) is WSS before finding the PSD.)
2) Find the autocorrelation function by time averaging and show that <RXX(τ)> = RXX(τ).
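Hint: for this standard process the expected results are E[X(t)] = 0, RXX(τ) = (A^2/2) cos(ωcτ), and SX(f) = (A^2/4)[δ(f - fc) + δ(f + fc)], where ωc = 2πfc. The time average <RXX(τ)> works out to the same expression, which is what part 2 asks you to confirm.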
4. Consider a random process V(t) = A cos(ωot + Φ), where Φ is a random variable with probability density
fΦ(φ) = 1/2π, -π < φ < π
= 0 elsewhere
1) Show that the first and second moments of V(t) are independent of time.
2) If the random variable is replaced by a fixed angle θo, will the ensemble mean of V(t) be time-independent?
5. Show that the random process X(t) = A cos(ωot + Φ), where Φ is a random variable uniformly distributed in the range (0, 2π), is a wide sense stationary process.
6. With the help of mathematical expressions, explain stationary random processes, non-stationary random processes, wide sense stationary processes, and ergodic processes.
7. A wide sense stationary random process X(t) is applied to the input of an LTI system with impulse response h(t) = 3e^(-2t)u(t). Find the mean value of the output Y(t) of the system if E[X(t)] = 2.
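Hint: for a WSS input to an LTI system the output mean is constant: E[Y(t)] = E[X(t)]·H(0), where H(0) = ∫h(t)dt. Here H(0) = ∫0 to ∞ of 3e^(-2t)dt = 3/2, so E[Y(t)] = 2 × 3/2 = 3.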
8. Define the autocorrelation function. State and explain any three properties of the autocorrelation function.
9. Two jointly wide sense stationary random processes have sample functions of the form x(t) = A cos(ωot + θ) and y(t) = B cos(ωot + θ + Φ), where A, B, and Φ are constants and θ is a random variable uniformly distributed between 0 and 2π. Find the cross-correlation function Rxy(τ).
10. What are the conditions for a random process to be wide sense stationary? What is ergodicity?
11. A continuous random variable X has probability density given by
f(x) = 2e^(-2x), x > 0
= 0, x ≤ 0
Find the mean and variance of the random variable. Explain the significance of these terms in a communication system.
12. What is power spectral density? Derive the expression for the PSD when a random process is transmitted through an LTI filter.
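Hint: the derivation should arrive at the standard input-output relation SY(f) = |H(f)|^2 SX(f), obtained by taking the Fourier transform of RY(τ) = h(τ) * h(-τ) * RX(τ).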
13. Given the random process X(t) = K, where K is a random variable uniformly distributed in the range (-1, 1):
1. Sketch the ensemble of this process.
2. Determine the mean E[X(t)].
3. Determine RX(t1, t2).
4. Is the process WSS?
5. Is the process ergodic?
6. If the process is WSS, what is its power PX?
14. Classify random processes with mathematical expressions.
15. A WSS random process X(t) is applied to the input of an LTI system with impulse response h(t) = a·exp(-at)u(t). Find the mean value of the output Y(t) of the system if E[X(t)] = 6 and a = 2.
16. Two random processes z(t) and y(t) are given by z(t) = A cos(ωt + θ) and y(t) = A sin(ωt + θ), where A and ω are constants and θ is a uniform random variable over (0, 2π). Find the cross-correlation of z(t) and y(t).
17. Show that the output of the LTI system is WSS if the input applied to it is WSS.
18. If the process X(t) = A cos(2πfct + Φ), where Φ is a random variable uniformly distributed in the range (0, 2π), is passed through a filter with H(f) = j2πf, find the output PSD.
19. The random variable X has a uniform distribution over 0 ≤ x ≤ 2. Find the mean and mean square value for the random process V(t) = 6e^(Xt).
20. Define a random process. What are the time averages associated with random processes?
21. If X(t) = A cos(ωot + Φ) is a random process where Φ is a random variable uniformly distributed in the range (0, 2π), prove that X(t) is ergodic in the mean.
22. Explain the Gaussian process with its properties in detail.
23. Explain the concept of representation of narrowband noise in terms of in-phase and quadrature components, with their properties.
24. Explain the concept of representation of narrowband noise in terms of envelope and phase components.
25. Explain the concept of a sine wave plus narrowband noise.
26. The output of an oscillator is described by X(t) = A cos(2πfct - Φ), where A is a constant and fc and Φ are independent random variables. The probability density function of Φ is defined by
fΦ(φ) = 1/2π, 0 ≤ φ ≤ 2π
= 0 otherwise
Find the PSD of X(t) in terms of the PDF of the frequency fc. What happens to this PSD when the frequency fc assumes a constant value?
27. A random process g(t) has power spectral density G(f) = η/2 for -∞ < f < ∞. The random process is passed through a low pass filter with transfer function H(f) = 2 for -fm ≤ f ≤ fm and H(f) = 0 otherwise. Find the PSD of the wave at the output of the filter.
28. Find the power spectral density of the random process X(t) defined by
X(t) = A cos(2πfct - Φ)
where Φ is a uniformly distributed random variable over the interval (0, 2π).
29. Define the mean, correlation, standard deviation, and variance of a random process.
30. Let X(t) be a zero-mean, stationary, Gaussian process with autocorrelation function RX(τ). This process is applied to a square-law device, defined by the input-output relation Y(t) = X(t)^2, where Y(t) is the output. Show that the mean of Y(t) is RX(0).
31. Explain stationary, non-stationary, wide sense stationary, and ergodic processes with the help of mathematical expressions.
32. Show that the random process X(t) = A cos(ωct + Φ), where A and ωc are constants while Φ is a uniformly distributed random variable in the range (-π, π), is a wide sense stationary process.
33. Show that the total normalised noise power can be obtained by superposition of the powers of the individual noise components.
34. Define ergodic random process.
35. What is narrowband noise? Explain generation of narrowband noise from its in-phase and
quadrature components.
36. Find the PSD at the output of a filter with H(f) = j2πf when a random process X(t) = A cos(ωct + θ) is applied at the input, where θ is a uniformly distributed random variable over the interval (0, 2π).
37. Explain thermal noise.
Unit 2
Digital Modulation-I
10. A received signal has an amplitude of ±2 V for a time T. The signal is corrupted by white Gaussian noise having PSD 10^-4 V^2/Hz. If the signal is processed by the MF receiver, what should be the minimum time T during which the signal must be sustained so that the probability of error does not exceed 1.1×10^-5?
11. Binary data is transmitted over an RF bandpass channel with a usable bandwidth of 10 MHz at a rate of 4.8×10^6 bits/sec using an ASK signalling method. The carrier amplitude at the receiver antenna is 1 mV and the noise spectral density at the receiver input is 10^-5 Watt/Hz. Find the error probability of a coherent and a non-coherent receiver.
Error function values: Q(3.10) = 0.00097, Q(3.15) = 0.00082, Q(5.00) = 2×10^-7, Q(5.20) = 10^-7.
12. Binary data is transmitted over a telephone link that has a usable bandwidth of 3000 Hz and a maximum achievable signal-to-noise power ratio of 6 dB at its output.
1) Determine the maximum signalling rate and the probability of error if a coherent ASK scheme is used for transmitting binary data through this channel.
2) If the data rate is maintained at 300 bits/sec, find the error probability.
Error function values: Q(3.40) = 0.00034, Q(3.45) = 0.00028, Q(6.30) = 10^-10.
13. An FSK system transmits binary data at a rate of 2.5×10^6 bits/sec. During transmission, Gaussian noise having zero mean and power spectral density 10^-20 Watt/Hz is added to the signal. In the absence of noise, the amplitude of the received sinusoidal signal for a 1 or 0 is 1 μV. Determine the average probability of symbol error assuming coherent detection.
14. Consider the signal S(t) shown in the figure. Determine the impulse response of a filter matched to this signal and sketch it as a function of time. Plot the matched filter output as a function of time.
15. Derive the error probability of the matched filter.
16. Show that the performance of the matched filter and the correlator are identical.
17. State the various properties of the matched filter. Explain its impulse response in detail.
18. Derive the expression for the output of an optimum filter using a correlator.
19. Derive the expression for the signal-to-noise ratio of the integrate-and-dump filter.
20. Explain the minimum error test and error probability.
21. Explain the concept of signal space representation (geometric representation) of signals.
22. Explain the conversion of a continuous AWGN channel to a vector channel.
23. A matched filter has a time response given by (triangular shape)
h(t) = 1000t volts, 0 ≤ t ≤ 0.01 ms
= 0 otherwise
1) What should be the shape of the input signal for this matched filter?
2) What is the sampling instant for decision purposes?
3) What is the maximum possible data rate at the input?
4) If N0 = 0.625×10^-10 W/Hz, estimate the probability of error.
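Hint: a filter matched to a signal s(t) of duration T has h(t) = k·s(T - t), so the required input here is the time-reversed ramp s(t) = h(T - t) with T = 0.01 ms; the decision sample is taken at t = T, giving a maximum data rate of 1/T = 100 kbps. For part 4, the error probability follows from Pe = Q(√(2E/N0)) once the signal energy E = ∫s^2(t)dt is computed (this assumes binary antipodal signalling).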
24. Show that the impulse response of a matched filter is a time-reversed and delayed version of the input signal.
25. Binary data is transmitted at a rate of 10 Mbps over a channel whose bandwidth is 6 MHz. Find the signal energy per bit at the receiver input for coherent BPSK and DPSK to achieve a probability of error Pe ≤ 10^-4. Assume N0/2 = 10^-10 W/Hz.
26. Binary data is transmitted over a telephone link that has a usable bandwidth of 3000 Hz and a maximum achievable signal-to-noise power ratio of 6 dB at its output.
1) Determine the maximum signalling rate and the probability of error if a coherent ASK scheme is used for transmitting binary data through this channel.
2) If the data rate is maintained at 300 bits/sec, calculate the error probability.
27. Binary data is transmitted over an RF bandpass channel with a usable bandwidth of 10 MHz at a rate of 4.8×10^6 bits/sec using the ASK signalling method. The carrier amplitude at the receiver antenna is 1 mV and the noise power spectral density at the receiver input is 10^-15 W/Hz. Calculate the error probability for coherent and non-coherent receivers.
28. A BPSK signal is received at the input of a coherent optimum receiver with amplitude 10 mV and frequency 1 MHz. The signal is corrupted with white noise of PSD 10^-9 W/Hz, and the data rate is 10^4 bits/sec.
1) Find the error probability.
2) Find the error probability if the local oscillator has a phase shift of π/6 radian with the input signal.
3) Find the error probability if there is 10% mistiming in bit synchronization while sampling.
4) Find the error probability when both 2) and 3) occur.
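Hint: for coherent BPSK, Pe = Q(√(2Eb/N0)). A local-oscillator phase error φ scales the correlator output by cos φ, so part 2 becomes Pe = Q(cos(π/6)·√(2Eb/N0)); mistiming similarly reduces the effective bit energy, and the two degradations combine when both occur.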
29. Derive the expression for the error probability if the BPSK signal is detected using the optimum filter.
30. Calculate the symbol error probability of QPSK.
31. Derive the error probability expression for BPSK & BFSK.
32. Binary data is transmitted using PSK at a rate of 2 Mbps over an RF link having a bandwidth of 2 MHz. Find the signal power required at the receiver input so that the error probability is less than or equal to 10^-4. Assume N0/2 = 10^-10 W/Hz and Q(3.71) = 10^-4.
33. Binary data is transmitted using M-ary PSK at a rate of 2 Mbps over an RF link having a bandwidth of 2 MHz. Find the signal power required at the receiver input so that the bit error probability is less than or equal to 10^-5; the channel noise PSD is 10^-8 W/Hz. Calculate for M = 16 and M = 32.
Given erf(0.99996) = 3.1 and erf(0.99995) = 3.2.
34. Show that the probability of error of QPSK is the same as that of BPSK for 1-bit duration.
35. A QPSK signal is received at the input of a coherent optimum receiver with an amplitude of 10 mV and a frequency of 2 MHz. The signal is corrupted with white noise of PSD 10^-11 W/Hz. If the data rate is 10^4 bits/sec, find the probability of error; also find the probability of error for the BPSK system if the local oscillator has a phase shift of π/6 rad with the input signal.
36. A system transmits binary data at the rate of 2.5×10^6 bits/sec. During transmission, white Gaussian noise of zero mean and power spectral density 10^-20 W/Hz is added to the signal. In the absence of noise, the amplitude of the received sinusoidal wave for digit 1 or 0 is 1 mV. Determine the average probability of symbol error for the following system configurations:
1. Coherent binary FSK
2. Non-coherent binary FSK
3. MPSK (use the following table)
38. State the properties of the PN sequence. Draw a circuit diagram of a shift register with m = 4 and taps at (4, 1). Find the generated output sequence if the initial contents of the shift register are 1000. If the chip rate is 10^7 chips/sec, calculate the chip and PN sequence durations and the period of the output sequence.
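A minimal simulation sketch in Python, assuming "taps at (4, 1)" means the feedback bit is the XOR of stages 4 and 1 (primitive polynomial x^4 + x + 1); it can be used to check the hand-generated sequence:

    state = [1, 0, 0, 0]              # initial contents 1000 (stage 1 ... stage 4)
    sequence = []
    for _ in range(15):               # an m=4 m-sequence repeats every 2^4 - 1 = 15 chips
        sequence.append(state[-1])    # output is taken from the last stage
        fb = state[3] ^ state[0]      # feedback = stage 4 XOR stage 1 (assumed tap meaning)
        state = [fb] + state[:-1]     # shift one place, feedback enters stage 1
    print(sequence)

At a chip rate of 10^7 chips/sec the chip duration is Tc = 0.1 μs and the sequence period is 15 Tc = 1.5 μs.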
39. For all the shift registers given in the problem, demonstrate the balance property of the PN sequence. Also calculate and plot the autocorrelation function of the PN sequence produced by this shift register.
40. Consider a slow-hop spread spectrum system with binary FSK, two symbols per frequency hop, and a PN sequence generator, with the binary message 011011011000. The message is transmitted using the following PN sequence with k = 3: {010, 110, 101, 100, 000, 101, 011, 001, 001, 111, 011, 001}. Plot the output frequencies for the input message.
41. Explain the spread spectrum transmission and reception process in SS communication with a neat block diagram.
42. The information bit duration in a DS-BPSK spread spectrum communication system is 4 ms while the chipping rate is 1 MHz. Assuming an average error probability of 10^-5 is required for proper detection of the message signal, calculate the jamming margin. Given Q(4.25) = 10^-5.
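Hint (one standard route, assuming the usual definition of jamming margin as processing gain minus the required Eb/NJ): the processing gain is PG = Tb/Tc = (4×10^-3 s)(10^6 chips/s) = 4000 ≈ 36.02 dB; from Pe = Q(√(2Eb/NJ)) = 10^-5 and Q(4.25) = 10^-5, Eb/NJ = (4.25)^2/2 ≈ 9.03 ≈ 9.56 dB; hence the jamming margin ≈ 36.02 - 9.56 ≈ 26.5 dB.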
43. With a neat block schematic and waveforms, explain DSSS generation and detection.
44. What is a PN sequence? Explain the properties of a PN sequence with a 3-stage shift register.
45. Write a short note on: i) CDMA ii) FHSS
46. With the help of a neat schematic, describe ranging using DSSS in detail.
Unit 5
Information Theoretic Approach to Communication System
1. A DMC having channel transition matrix [0.6, 0.4; 0.4, 0.6] emits equiprobable messages X1 and X2. Draw the channel diagram and find H(X), H(Y), H(X,Y), H(X/Y), and I(X;Y). Comment on the type of channel.
2. List various source coding techniques. Explain the need for source coding with an example.
3. A zero-memory source emits six messages (N, I, R, K, A, T) with probabilities {0.30, 0.10, 0.02, 0.15, 0.40, 0.03} respectively. Find:
1. The entropy of the source.
2. The Shannon-Fano code.
4. Define Entropy and Mutual Information.
5. A source emits 1000 samples/sec from a set of 5 symbols with probabilities {1/2, 1/4, 1/8, 1/16, 1/16}. Find the entropy and the information rate.
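Hint: H = Σ pi log2(1/pi) = (1/2)(1) + (1/4)(2) + (1/8)(3) + 2(1/16)(4) = 1.875 bits/symbol, so the information rate is R = 1000 × 1.875 = 1875 bits/sec.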
6. What is mutual information? Calculate all entropies and the mutual information for the channel with channel matrix P(Y/X) = [0.9, 0.1, 0; 0, 0.8, 0.2; 0, 0.3, 0.7], given P(x1) = 0.3, P(x2) = 0.25, P(x3) = 0.45.
7. A DMS has the following symbols and probabilities. Apply the Huffman coding technique to generate a code with minimum variance, and calculate the code efficiency. [X] = [S0, S1, S2, S3, S4, S5, S6], [P] = [0.125, 0.0625, 0.25, 0.0625, 0.125, 0.125, 0.25].
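A compact Python sketch for checking Huffman codeword lengths and efficiency on problems like this one (tie-breaking may differ from the minimum-variance hand construction, but the average length is the same):

    import heapq
    from math import log2

    def huffman_lengths(probs):
        # heap entries: (subtree probability, tiebreaker, symbol indices in subtree)
        heap = [(p, i, [i]) for i, p in enumerate(probs)]
        heapq.heapify(heap)
        lengths = [0] * len(probs)
        tiebreak = len(probs)
        while len(heap) > 1:
            p1, _, s1 = heapq.heappop(heap)   # merge the two least probable subtrees
            p2, _, s2 = heapq.heappop(heap)
            for s in s1 + s2:                 # each merge adds one bit to these symbols
                lengths[s] += 1
            heapq.heappush(heap, (p1 + p2, tiebreak, s1 + s2))
            tiebreak += 1
        return lengths

    P = [0.125, 0.0625, 0.25, 0.0625, 0.125, 0.125, 0.25]
    L = huffman_lengths(P)
    Lavg = sum(p * l for p, l in zip(P, L))
    H = -sum(p * log2(p) for p in P)
    print(L, Lavg, H, 100 * H / Lavg)         # lengths, average length, entropy, efficiency %

For these dyadic probabilities the average length equals the entropy (2.625 bits), so the efficiency is 100%.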
8. A zero-memory source emits seven symbols with probabilities {0.2, 0.15, 0.02, 0.1, 0.4, 0.08, 0.05}. Compute the coding efficiency when these symbols are encoded by the Shannon-Fano source coding technique.
9. List the properties of mutual information.
10. Encode the following symbols using the Huffman source coding technique and calculate the coding efficiency. [P] = [1/4, 1/8, 1/16, 1/16, 1/16, 1/4, 1/16, 1/8].
11. State objectives of Source coding.
12. Apply Huffman coding for the symbols (A, E, H, N, G, S) generated by a DMS with probabilities {0.19, 0.15, 0.2, 0.16, 0.4, 0.08}; also calculate the coding efficiency.
13. A discrete source emits messages X1 and X2 with probabilities 3/4 and 1/4 through a BSC with transition probability p = 1/3. Find H(X), H(Y), H(X,Y), and I(X;Y), and draw the channel diagram.
14. A discrete memoryless source has four symbols X1, X2, X3, and X4 with probabilities 0.3, 0.2, 0.4, and 0.1 respectively. Construct the Huffman code and calculate the code efficiency and redundancy.
15. Calculate H(X), H(Y), H(X,Y), and I(X;Y) for a channel with three inputs X1, X2, and X3 and three outputs Y1, Y2, and Y3, with the noise matrix given below:
P[Y/X] = [0.9, 0.1, 0.0; 0.0, 0.8, 0.2; 0.0, 0.3, 0.7]
where P(X1) = 0.3, P(X2) = 0.25, P(X3) = 0.45.
16. Prove that self-information is always positive.
17. List various source coding techniques. Explain the need for source coding with an example.
18. An ideal communication system with average power limitation and white Gaussian noise has a bandwidth of 1 MHz and an S/N ratio of 10:1.
1. Determine the channel capacity.
2. If the S/N ratio drops to 5, what bandwidth is required for the same channel capacity?
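Hint: C = B log2(1 + S/N) = 10^6 log2(11) ≈ 3.46 Mbps; holding C fixed with S/N = 5 requires B = C / log2(6) ≈ 3.46×10^6 / 2.585 ≈ 1.34 MHz.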
19. Define entropy with its properties. Show that the entropy of a DMS is maximum when all the messages are equiprobable.
20. A zero-memory source emits six messages with probabilities {0.30, 0.15, 0.12, 0.25, 0.08, 0.10}. Find the Shannon-Fano code sequence, the entropy of the source, the average codeword length, the efficiency, and the redundancy.
21. Find the mutual information for the channel matrix given as P(X,Y) = [0.3, 0.05, 0; 0, 0.25, 0; 0, 0.15, 0.05; 0, 0.05, 0.15].
22. State & explain all three of Shannon’s theorems of information theory.
23. A voice-grade telephone channel has a bandwidth of 3400 Hz and an SNR of 30 dB. Calculate the channel capacity. If the channel is to be used to transmit 48000 bps of data, find the minimum SNR needed.
24. What is mutual information? Show that mutual information is always non-negative.
25. Explain the different types of discrete memoryless channels.
26. Find the coding efficiency of a source encoder generating messages with probabilities 1/4, 1/8, 1/2, 1/8 using the Shannon-Fano coding technique.
27. A 3-bit PCM system generates 1000 samples per second. If the quantized samples produced by the system have probabilities [1/4, 1/4, 1/8, 1/8, 1/16, 1/16, 1/16, 1/16], find the rate of information. If the samples are equiprobable, what will be the rate of information?
28. Apply Huffman coding for the following message ensemble: X = [x1 x2 x3 x4 x5 x6 x7], P = [0.45 0.15 0.1 0.1 0.08 0.08 0.04].
29. Define entropy. Show that the entropy of a binary source is maximum when the two messages each have a 50% probability of occurrence.
30. Prove that H(X,Y) = H(X/Y) + H(Y) and H(X,Y) = H(Y/X) + H(X).
31. Compare the Shannon-Fano and Huffman coding techniques.
32. A source emits one of six messages during each message interval with probabilities [1/2, 1/4, 1/8, 1/16, 1/32, 1/32]. Find the entropy of the system. Also find the rate of information if there are 16 outcomes per second.
33. Apply Huffman coding for the following message ensemble: X = [x1 x2 x3 x4 x5 x6 x7], P = [0.45 0.15 0.1 0.1 0.08 0.08 0.04], and find the coding efficiency with M = 2.
Unit 6
Error-Control Coding
1. A voice-grade telephone channel has a bandwidth of 3400 Hz and an SNR of 30 dB. Calculate the channel capacity. If the channel is to be used to transmit 48000 bps of data, find the minimum SNR needed.
2. Explain the linearity property of linear block codes with an example.
3. For a systematic LBC, the parity check bits are C1 = M1 ⊕ M2 ⊕ M3; C2 = M2 ⊕ M3 ⊕ M4; C3 = M1 ⊕ M2 ⊕ M4. Find:
1. The generator matrix.
2. The error detecting and correcting capabilities.
3. The parity check matrix.
4. The corrected codeword for the received codeword [1101001].
4. State the channel coding theorem.
5. For a systematic (6,3) LBC, the parity matrix is given by P = [1 0 1; 0 1 1; 1 1 0]. 1) Find all possible code vectors. 2) Find the error detecting and correcting capabilities.
6. State and explain 1) Shannon's channel coding theorem and 2) Shannon's information capacity theorem.
7. Define channel capacity. State channel coding theorem.
8. Obtain the code words for the (6,3) LBC which has the generator matrix G = [1 0 0 1 0 1; 0 1 0 0 1 1; 0 0 1 1 1 0]. Find all possible code words and obtain the corrected code word if the received code word is r = 0 0 1 1 1 0.
9. The parity matrix of a (7,4) LBC is as follows: P = [1 0 1; 1 1 1; 1 1 0; 0 1 1]. Find the code words for the messages 1) 0101 and 2) 1010.
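A small Python sketch for systematic encoding in problems like the one above; the layout codeword = [message | parity] is an assumed convention and may be reversed in your text:

    import numpy as np

    def encode(message, P):
        m = np.array(message)
        parity = m @ np.array(P) % 2          # parity bits from the parity matrix
        return np.concatenate([m, parity])    # systematic layout [message | parity] (assumed)

    P = [[1, 0, 1],                           # parity matrix of the (7,4) code above
         [1, 1, 1],
         [1, 1, 0],
         [0, 1, 1]]
    for msg in ([0, 1, 0, 1], [1, 0, 1, 0]):
        print(msg, '->', encode(msg, P))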
10. Explain the following terms with reference to linear block codes: 1. Hamming weight, 2. Hamming distance.
11. An ideal communication system has an SNR of 10 and a bandwidth of 1 MHz.
1. Determine the channel capacity.
2. If the SNR drops to 5, what bandwidth is required for the same channel capacity?
3. If the bandwidth is dropped to half, what will be the new SNR for the same channel capacity?
12. For a systematic (7,4) LBC, the parity matrix is as follows: P = [1 1 0; 0 1 1; 1 1 1; 1 0 1].
1. Construct the generator matrix.
2. Find the code words for the messages 1) 1100 and 2) 0011.
3. If the received code vector is R = 0111101, find the corrected code word.
13. State the information capacity theorem. A channel has a bandwidth of 5 kHz and a signal-to-noise power ratio of 63. Determine the bandwidth needed if the SNR is reduced to 31.
14. Define and give an example of: a) Hamming weight, b) Hamming distance, c) code rate, and d) minimum Hamming distance.
15. For a (6,3) systematic linear code, the three parity digits are given by C4 = M1 ⊕ M2; C5 = M1 ⊕ M2 ⊕ M3; C6 = M1 ⊕ M3.
1. Determine the generator matrix.
2. Comment on the error detection and correction ability of the code.
3. If the received sequence is 101101, determine the message word.
16. Draw an encoder for the cyclic code having generator polynomial g(x) = 1 + x^2 + x^3, and generate the code word for the message [1011].
17. Define the terms: 1. minimal polynomial, 2. generator polynomial.
18. Find all elements of GF(8) with a primitive polynomial and hence compute the minimal polynomial for α^2 + α + 1.
19. Draw a cyclic code decoder for g(x) = 1 + x + x^3.
20. Obtain the generator matrix and parity check matrix for the (7,4) cyclic code using the generator polynomial g(x) = x^3 + x + 1.
21. Explain the cyclic property of cyclic codes. Generate a systematic (7,4) cyclic code for the messages 1) 1010 and 2) 1000.
22. Draw a syndrome calculator for the (7,4) cyclic encoder and obtain the syndrome for the received code word [1001001].
23. For a cyclic code with generator polynomial g(x) = x^3 + x^2 + 1, obtain the code words for [1011], [1010], and [1100].
24. Draw the hardware arrangement for a (7,4) cyclic encoder using g(x) = 1 + x^2 + x^3.
25. Using the polynomial division method, obtain the code vector for d = [1010]. Assume the generating polynomial g(x) = 1 + x^2 + x^3.
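A Python sketch of the systematic cyclic encoding by polynomial division used in problems like this one; bits are listed lowest degree first, which is an assumed convention:

    def cyclic_encode(data, gen):
        # Systematic cyclic code: parity = remainder of x^(n-k) d(x) divided by g(x).
        n_k = len(gen) - 1                    # number of parity bits
        reg = [0] * n_k + list(data)          # coefficients of x^(n-k) d(x), lowest degree first
        for i in range(len(reg) - 1, n_k - 1, -1):
            if reg[i]:                        # cancel the highest remaining term with g(x)
                for j, g in enumerate(gen):
                    reg[i - n_k + j] ^= g
        return reg[:n_k] + list(data)         # codeword = [parity | message], lowest degree first

    g = [1, 0, 1, 1]                          # g(x) = 1 + x^2 + x^3
    d = [0, 1, 0, 1]                          # d(x) = x + x^3, i.e. message 1010 highest bit first
    print(cyclic_encode(d, g))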
26. Explain the following terms: 1) Galois field, 2) primitive element.
27. Construct the GF(2^3) finite field for the primitive polynomial g(x) = x^3 + x + 1, and find the minimal polynomials for all elements.
28. Draw the cyclic encoder structure for the systematic (7,4) cyclic code with g(x) = 1 + x^2 + x^3, and obtain the code word for the message [1001].
29. For a (6,3) LBC with parity bits C4 = d1 + d2 + d3, C5 = d1 + d2, and C6 = d1 + d3:
1. Find the code words.
2. Determine the error-correcting capability.
30. Show that Shannon's limit for the AWGN channel is -1.6 dB.
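Hint: from C = B log2(1 + (Eb/N0)(C/B)), letting B → ∞ gives Eb/N0 → ln 2 ≈ 0.693, i.e. 10 log10(0.693) ≈ -1.6 dB.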
31. Design a (3,1) cyclic repetition code and its decoding method. Find the corrected code words for 1) 010 and 2) 110.
32. Sketch the syndrome calculator for the generator polynomial g(x) = x^3 + x^2 + 1 to obtain the syndrome for the received code word 1 0 0 1 0 1 1.
33. Find the elements of the GF(8) field using the primitive polynomial.
34. State the properties of the finite field and explain the concepts of the primitive polynomial and the minimal polynomial.
35. Write a note on the single parity check code.
36. Find all elements of GF(8) with a primitive polynomial and hence compute the minimal polynomial for α^2 + α + 1.
37. For a systematic LBC, the three parity check bits are C4 = d1 ⊕ d2 ⊕ d3; C5 = d1 ⊕ d2; C6 = d1 ⊕ d3.
1. Construct the generator matrix.
2. Construct all code words generated by this matrix.
3. Determine the error correcting capabilities.
4. Prepare a suitable decoding table.
5. Decode the received word [000110].
38. For the given convolutional encoder, draw the three graphical representations (code tree, state diagram, and trellis diagram).
39. Draw the encoder and syndrome calculator for the generator polynomial g(x) = 1 + x^2 + x^3 and obtain the syndrome for the received codeword 1001011.
40. Explain the properties of linear block codes and cyclic codes with suitable examples.
41. Define and explain the following terms:
i) Hamming distance
ii) Hamming weight
iii) Code rate
iv) Constraint length
v) Generator polynomial
42. The generator matrix for the (7,4) linear block code is given below. Find all code vectors and calculate the syndrome for code vector C4 received without error.
G = [1 0 0 0 : 1 1 0
     0 1 0 0 : 0 1 1
     0 0 1 0 : 1 0 1
     0 0 0 1 : 1 1 1]
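Hint: each code vector is c = mG (mod 2) taken over all 16 messages; with H = [P^T : I3], an error-free received vector satisfies s = cH^T = 0, which is what the syndrome for C4 without error should confirm.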