
CE00036-3 Data Communication System Individual Assignment

Answer the following questions:

a. Explain the measurement of information entropy.

Aim: The aim of this question is to briefly describe information, information theory and the entropy of information, and to demonstrate the concept using an example.

Theory:

Information theory: It is a branch of applied mathematics, electrical engineering, and computer science involving the quantification, storage, and communication of information. Information theory was originally developed by Claude E. Shannon to find fundamental limits on signal processing and communication operations such as data compression.

Information theory is based on probability theory and statistics. Information theory often
concerns itself with measures of information of the distributions associated with random
variables. Important quantities of information are entropy and mutual information.

Let there be a source with n symbols, S = {x1, x2, x3, …, xn}. Then the information (self-information) of a symbol xi of the source is I(xi) = -logb P(xi)    … (i)

Entropy of the information: Entropy is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. It can be said that it is a measure of the information in a single random variable. It is calculated as

H = -Σi pi log2(pi)    … (ii)

This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of
base 2, and this base-2 measure of entropy has sometimes been called the "Shannon" in his
honor.

For example, we consider the string 10101. The alphabet of symbols in the string is {0, 1}, and the relative frequencies of the symbols are:

P(0) = 0.4, P(1) = 0.6


Shannon entropy can be calculated as follows:


H(X) = -(0.4 log2 0.4 + 0.6 log2 0.6) = -((-0.52877) + (-0.44218)) = 0.97095 bits per symbol.
Shannon entropy indicates the minimal average number of bits per symbol needed to encode the information in binary form (when the logarithm base is 2). Rounding the calculated entropy up, each symbol has to be encoded with 1 bit.
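As an illustration (an added sketch, not part of the original answer), the same calculation can be reproduced in MATLAB for an arbitrary binary string; the variable names are assumed for this example:

s = '10101';                              % the example string from the text
p = [sum(s=='0') sum(s=='1')]/length(s);  % relative frequencies of the symbols 0 and 1
p = p(p > 0);                             % drop zero-probability symbols (0*log2(0) is taken as 0)
H = -sum(p .* log2(p))                    % Shannon entropy in bits per symbol (about 0.971)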

b. What do you understand by Channel model & Channel capacity?

Aim: The aim of this question is to describe the channel models along with
their capacities with the help of diagrams for better understanding of the
concept.
Theory:
Channel capacity: The capability of a channel to transfer information in the presence of noise is known as the channel capacity. It can be formulated as follows:

C = B log2(1 + S/N)  b/s    … (iii)

where C = channel capacity, B = channel bandwidth in Hz, S = signal power, and N = noise power. This equation is valid only for white Gaussian noise, but it can be modified for other types of noise. The channel models and their capacities are described below:
Channel Models:

Lossless channel:

In this channel, every symbol of the input alphabet may map to several outputs (one-to-many), but each output corresponds to only one input. It is described by a channel matrix with only one non-zero element in each column.


Fig.1 Channel diagram and channel matrix of a lossless channel


For a lossless channel, H(X|Y) = 0 and I(X;Y) = H(X)
So, Cs = max H(X) = log2 m; where m = number of symbols in X.
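As a small illustrative example (an addition, not one of the document's figures): the channel matrix P(Y|X) = [3/4 1/4 0 0; 0 0 1/3 2/3] has only one non-zero element in each column, so whichever output symbol is received the transmitted input symbol is known with certainty; hence H(X|Y) = 0 and Cs = log2 2 = 1 bit per symbol.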

Deterministic channel:

In this channel, every output symbol may be produced by several inputs (many-to-one), and each input produces a single output with certainty. It is described by a channel matrix with only one non-zero element (equal to 1) in each row.


Fig2. Channel diagram and channel matrix of a deterministic channel
For a deterministic channel, H(Y|X) = 0 and I(X;Y) = H(Y)
So, Cs = max H(Y) = log2 n; where n = number of symbols in Y.

Noiseless channel:

In this type of channel, every symbol of the input alphabet maps to exactly one output symbol and vice versa (one-to-one). It satisfies the properties of a lossless channel as well as those of a deterministic channel.


Fig3. Channel diagram and channel matrix of a noiseless channel
Since a noiseless channel is both lossless and deterministic, I(X;Y) = H(X) = H(Y). So Cs = log2 m = log2 n
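As an illustration (an added sketch with assumed example values, not part of the original answer), the Shannon capacity of equation (iii) can be evaluated in MATLAB:

B   = 3000;                   % channel bandwidth in Hz (e.g. a voice-grade telephone line)
SNR = 1000;                   % linear signal-to-noise ratio S/N (equivalent to 30 dB)
C   = B * log2(1 + SNR)       % channel capacity, roughly 29.9 kb/s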

c. How do analog modulation techniques differ from digital modulation techniques?


Aim: The aim of this question is to describe modulation and to differentiate between analog and digital modulation techniques with examples and diagrams.

Theory:

Modulation is a very important step in communication. The process by which a carrier wave is altered so that information can be carried on it is known as modulation. We need modulation for the following basic purposes:

1. Facilitates multiple access: By translating the baseband spectra of signals from various users to different frequency bands, multiple users can be accommodated within a band of the electromagnetic spectrum.
2. Increases the range of communication: Low-frequency baseband signals suffer from attenuation and hence cannot be transmitted over long distances, so translation to a higher frequency band enables long-distance transmission.
3. Reduces antenna size: The required antenna height and aperture are inversely proportional to the radiated signal frequency, so radiating at a higher frequency results in a smaller antenna.
The differences between analog and digital modulation techniques are given below:

DIGITAL MODULATION TECHNIQUES vs. ANALOG MODULATION TECHNIQUES

1. Input format: digital modulation requires the input in digital format (e.g. 100101), whereas analog modulation requires the input in analog format (e.g. the human voice).
2. Signals involved: in digital modulation the message signal is digital but the carrier signal is analog; in analog modulation both the message and the carrier signals are analog.
3. Measurement errors: digital instruments are free from observational errors such as parallax and approximation errors; analog instruments usually have a scale which is cramped at the lower end and give considerable observational errors.
4. Valid values: in digital modulation only two values are considered valid, one representing 1 and the other representing 0, and all other values are treated as noise and rejected; in analog modulation any value between the maximum and minimum is considered valid.
5. Noise immunity: in digital modulation noise is virtually eliminated once the receiver decides whether a 0 or a 1 was transmitted, so it is well immune to noise; in analog modulation any noise or interference that falls within the signal bandwidth gets mixed with the actual signal, so it is less noise immune.
6. Cost: digital modulation is expensive to implement in comparison to analog techniques; analog modulation is cheaper to implement.
7. Complexity: digital modulation is complex to execute, since the signal must pass through an analog-to-digital converter before transmission and a digital-to-analog converter at the receiver to recover the original signal; analog modulation is easy to execute as most signals we use are analog in nature.
8. Modulation index: in digital modulation the modulation index remains above 1, while in analog modulation it remains below 1.
9. Techniques: ASK, FSK, PSK and QAM fall under digital modulation; AM, FM and PM fall under analog modulation.
10. Applications: digital modulation is used in wireless LANs, RFID and Bluetooth communication, amateur radio, caller ID and emergency broadcasts, and transmission of digital data over optical fiber; analog modulation is used in computer modems, VHF aircraft radio, portable two-way radio, broadcasting of music and speech, magnetic tape recording systems, two-way radio and video transmission systems, and signal generation in musical synthesizers such as the Yamaha DX7 to implement FM synthesis.


Table1. Difference between analog and digital modulation techniques
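To make the contrast concrete, the following minimal MATLAB sketch (an addition, with assumed carrier and message frequencies) modulates the same carrier once with an analog message (AM) and once with a digital message (BPSK):

t  = 0:1e-4:1;                        % one second of time
fc = 100;                             % carrier frequency in Hz (assumed value)
carrier = cos(2*pi*fc*t);
m_analog  = sin(2*pi*2*t);            % analog message: a 2 Hz tone
m_digital = square(2*pi*2*t);         % digital message: bipolar (+/-1) square wave
am   = (1 + 0.5*m_analog).*carrier;   % analog modulation (AM, modulation index 0.5)
bpsk = m_digital.*carrier;            % digital modulation (BPSK: carrier phase flips with each bit)
subplot(2,1,1); plot(t, am);   title('Analog modulation (AM)');
subplot(2,1,2); plot(t, bpsk); title('Digital modulation (BPSK)');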

d. What do you understand by channel coding systems? Explain their types.

Aim: The aim of this question is to describe channel coding theory and its various types. The methods are elaborated using algorithms and demonstrated with examples of the coding methods.
Theory: Coding theory is the study of the properties of codes and their fitness
for a specific application. Codes are used for data compression, cryptography,
error-correction and more recently also for network coding. This typically
involves the removal of redundancy and the correction (or detection) of errors
in the transmitted data. There are mainly two types of coding:

1. Shannon-Fano Coding
2. Huffman Coding


Shannon-Fano Coding: In the field of data compression, Shannon-Fano coding, named after Claude Shannon and Robert Fano, is a technique for constructing a prefix code based on a set of symbols and their probabilities (estimated or measured).

4. Let's take an example with symbols of the following probabilities:

5. a1 = 0.36, a2 = 0.18, a3 = 0.18, a4 = 0.12, a5 = 0.09, a6 = 0.07

6. Algorithm for Shannon-Fano Coding

1. First of all, the probabilities are written in descending order.
2. Next, we divide them into two groups such that the two groups have almost equal sums; this gives the two groups {a1, a2} and {a3, a4, a5, a6} here.
3. Then we assign 0 to the upper group and 1 to the lower group.
4. We repeat steps 2 and 3 within each group until every group contains only one symbol.
5. Then we read off the codes for the symbols.

8. Fig4. Shannon Fano Coding method
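As a worked check (an addition; the codewords below follow from applying the splitting steps above to the given probabilities), the resulting code and its average length can be verified in MATLAB:

p = [0.36 0.18 0.18 0.12 0.09 0.07];   % probabilities, already in descending order
% Codewords from the recursive splits: a1=00, a2=01, a3=10, a4=110, a5=1110, a6=1111
L = [2 2 2 3 4 4];                     % corresponding codeword lengths
H    = -sum(p .* log2(p));             % source entropy, about 2.37 bits/symbol
Lavg = sum(p .* L);                    % average code length, 2.44 bits/symbol
eff  = H / Lavg                        % coding efficiency, about 97%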

Huffman coding: It is a lossless data compression algorithm. The idea is to assign variable-length codes to input characters, where the lengths of the assigned codes are based on the frequencies of the corresponding characters. The most frequent character gets the smallest code and the least frequent character gets the largest code. Let's take an example with the following symbols and their probabilities:

9. a1=0.4 a2=0.35 a3=0.2 a4=0.05


10. Algorithm for Huffman Coding

1. We build the tree from the bottom up, first arranging the probabilities in descending order.
2. We create a leaf node for each unique character and build a min-heap of all leaf nodes.
3. We extract the two nodes with the minimum frequency from the min-heap and create a new internal node with frequency equal to the sum of the two nodes' frequencies.
4. We make the first extracted node its left child and the other extracted node its right child, and add this new node to the min-heap.
5. We repeat steps 3 and 4 until the heap contains only one node; then we have the codes corresponding to each symbol.

12. Fig5. Huffman Coding method
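As a cross-check (an added sketch that assumes the Communications Toolbox function huffmandict is available; it is not part of the original figure), the same example can be reproduced in MATLAB:

symbols = [1 2 3 4];                        % a1, a2, a3, a4
p = [0.4 0.35 0.2 0.05];                    % their probabilities
[dict, avglen] = huffmandict(symbols, p);   % build the Huffman dictionary (codewords in dict)
avglen                                      % average code length: 0.4*1 + 0.35*2 + 0.2*3 + 0.05*3 = 1.85 bits/symbol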

13. Comparison of Shannon-Fano and Huffman Coding Methods

SYMBOL    COUNT    ASCII BIT SIZE    SHANNON BIT SIZE    HUFFMAN BIT COUNT
A         14       8                 2                   1
B         7        8                 2                   3
C         5        8                 3                   3
D         5        8                 3                   3
E         4        8                 3                   3

44. Table2. Comparison of Shannon-Fano and Huffman coding methods

45. The adjustment in code size from the Shannon-Fano to the Huffman encoding scheme
results in an increase of 7 bits to encode B, but a saving of 14 bits when coding the A
symbol, for a net savings of 7 bits.
46. In general, Shannon-Fano and Huffman coding will always be similar in size. However,
Huffman coding will always at least equal the efficiency of the Shannon-Fano method, and
thus has become the preferred coding method of its type.

e. Explain the applications where digital modulation techniques are used with the help of
diagram.

47. Aim: The aim of this question is to mention various applications of digital modulation
techniques with the help of their diagrams for better understanding of the concept.

48. Voice grade modems: In order to convert digital data into an analog signal compatible with the telephone line, some sort of modulation is needed.

BFSK is preferred for modems where simplicity and economy are of greater importance.
DPSK is used for rates of 2400 bits per second (4-phase DPSK) and 4800 bits per second (8-phase DPSK). One disadvantage is that it is susceptible to phase jitter.
Phase jitter problems are solved using 16-QAM, which extends the bit rate up to 9600 bits per second.


50. Fig6. Modulation need in telephone lines

51. Digital radio: Conventional radio transmission is analog; it can be either AM or FM. In digital radio, the information obtained from a source is transmitted to the receiver using a digital modulation technique. Microwave links are used for the communication, with line-of-sight propagation, as described in the figure below:

53. Fig7. Digital radio uses line of sight transmission.

54. Digital communication by satellite: The broadcast capability of a satellite channel can be exploited by using multiple access techniques. One of the most common and well known is the CDMA technology used in mobile phones. TDMA enables different users to access the satellite in different time slots, as described in the figure below: (Sanjay Sharma, 2015)


56. Fig8. Structure of a TDMA frame

57. CONCLUSION:

58. This question described the basic concepts of a digital communication system: information, information theory and entropy, the channel models and their capacities, modulation techniques and the differences between analog and digital modulation, channel coding systems and their types, and, most importantly, the wide application areas of digital modulation techniques.

60. Q2. Write a MATLAB program to determine the free space path loss and power
received by antenna.

61. Aim: The aim of this question is to demonstrate the theory of free space path loss with the
help of diagrams and draft a MATLAB code for calculating the loss. Also it is required to
plot a graph for showing the degradation in signal strength.

62. Theory: The free space path loss, also known as FSPL is the loss in signal strength that
occurs when an electromagnetic wave travels over a line of sight path in free space. In these
circumstances there are no obstacles that might cause the signal to be reflected or refracted, or
that might cause additional attenuation. The free space path loss calculations only look at
the loss of the path itself and do not contain any factors relating to the transmitter power,
antenna gains or the receiver sensitivity levels.

63. To understand the reasons for the free space path loss, it is possible to imagine a signal
spreading out from a transmitter. It will move away from the source spreading out in the
form of a sphere. As it does so, the surface area of the sphere increases. As this will follow
the law of the conservation of energy, as the surface area of the sphere increases, so the
intensity of the signal must decrease.


65. Fig9. Free space path loss depiction

66. As a result of this it is found that the signal decreases in a way that is inversely proportional to the square of the distance from the source of the radio signal in free space, so it can be concluded that the signal follows the inverse square law:

signal ∝ 1 / distance²    … (iv)

67. Thus the formula for free space path loss is given by:

FSPL = (4πd / λ)²    … (v)

68. where FSPL is the free space path loss, λ is the signal wavelength (metres), and d is the distance of the receiver from the transmitter (metres).

69. Decibel Version of Free Space Path Loss Equation

70. Most RF comparisons and measurements are performed in decibels. This gives an easy and
consistent method to compare the signal levels present at various points. Accordingly, it is
very convenient to express the free space path loss formula, FSPL, in terms of decibels.

71. FSPL (dB) = 20 log10 (d) + 20 log10 (f) + 32.44 .. (vi)

72. Where: d is the distance of the receiver from the transmitter (km), f is the signal frequency
(MHz)
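As a quick illustrative check (with assumed values, not taken from the assignment): for a 2400 MHz signal received 1 km from the transmitter, FSPL (dB) = 20 log10(1) + 20 log10(2400) + 32.44 ≈ 0 + 67.6 + 32.44 ≈ 100 dB.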

73. Algorithm:

74. The free space path loss can be calculated by following the steps below:

1. Define the distance over which the power is to be transmitted.
2. Take the signal frequency, the transmitter antenna gain and the receiver antenna gain as inputs.
3. Transmit the intended power.
4. At the receiver's end, calculate the power received, and thus the loss suffered.
5. Use the formula given above to find the free space path loss.
6. From the calculated loss, the transferability of the signal can be assessed using graphs and charts.

75. MATLAB Code:

clc;                        % clear the command window display
clear all;                  % remove all variables from memory
close all;                  % close all open figure windows
D = 0.1:0.1:10;             % distance in km (starts at 0.1 to avoid log10(0))
F = input('Frequency in MHz ');              % user input: signal frequency in MHz
Pt = input('Transmitted power ');            % user input: transmitted power
Gt = input('Gain of transmitter antenna ');  % user input: transmitter antenna gain
Gs = input('Gain of receiver antenna ');     % user input: receiver antenna gain
FSPL = 20*log10(D) + 20*log10(F) + 32.44     % free space path loss formula, in dB
Pr = Pt + Gt + Gs - FSPL                     % received power (dB arithmetic)
plot(D, Pr);                % plot received power against distance
title('Free Space Path Loss');
xlabel('D (km)');
ylabel('Pr');

91. RESULT:

Frequency in MHz 15
Transmitted power 35
Gain of transmitter antenna 75
Gain of receiver antenna 65
The path loss is: 9.596183e+001 dB = 26.085 dB
The received power is: 8.013412e+007 watts = 28.783 watts.

93. Fig10. Free Space Path Loss curve

94. CONCLUSION: This question has demonstrated that the signal strength reduces as it travels over the prescribed distance. It was concluded from the code that the transmitted power of 35 units was reduced by 17.76 units.

95. Q3. Write a MATLAB code for BPSK, QPSK, & 16QAM digital modulation schemes.

96. Aim: The aim of this question is to study in detail the various digital modulation schemes along with their MATLAB codes, and to verify the theoretical results against the simulation results.

Binary Phase Shift Keying

97. Theory:

98. The simplest PSK technique is called binary phase-shift keying (BPSK). It uses two opposite signal phases (0 and 180 degrees). The digital signal is broken up timewise into individual bits (binary digits), and each bit sets the phase of the transmitted carrier. In the differentially encoded form, the state of each bit is determined according to the state of the preceding bit: if the phase of the wave does not change, the signal state stays the same (0 or 1), and if the phase of the wave changes by 180 degrees -- that is, if the phase reverses -- the signal state changes (from 0 to 1, or from 1 to 0). Because there are two possible wave phases, BPSK is sometimes called bi-phase modulation.

99. In this process the phase of the sinusoidal carrier is changed according to the data bit to be transmitted.

100. Mathematical expression: s(t) = b(t) √(2P) cos(2π fc t)    … (vii)

101. Here, b(t) = +1 when binary 1 is to be transmitted,

102. b(t) = -1 when binary 0 is to be transmitted.


104. Figure11. PSK waveform for the given signal

105. Algorithm for BPSK:

106. The following steps are needed to generate a BPSK signal:

A binary data sequence is fed to a bipolar NRZ level encoder.
The bipolar NRZ signal is then applied to a balanced (product) modulator.
In the modulator it is multiplied with the sinusoidal carrier, which yields the binary phase shift keying signal.

107. The below given block diagram will demonstrate the algorithm.

109. Fig12. Generation of BPSK

110. MATLAB Code:

clc;                              % clear the command window display
clear all;                        % remove all variables from memory
close all;                        % close all open figure windows
set(0,'defaultlinelinewidth',2);  % set the default line width
A = 5;                            % amplitude of the signals
t = 0:.001:1;                     % time vector for the signals
f1 = input('Carrier Sine wave frequency ='); % user input: carrier frequency
f2 = input('Message frequency =');           % user input: message frequency
x = A.*sin(2*pi*f1*t);            % carrier signal
subplot(3,1,1);                   % first of three stacked plots
plot(t,x);                        % plot the carrier signal against time
xlabel('time');
ylabel('Amplitude');
title('Carrier');
grid on;
u = square(2*pi*f2*t);            % message signal (bipolar square wave)
subplot(3,1,2);                   % second plot
plot(t,u);                        % plot the message signal against time
xlabel('time');
ylabel('Amplitude');
title('Message Signal');
grid on;
v = x.*u;                         % carrier multiplied by the bipolar message
subplot(3,1,3);                   % third plot
plot(t,v);                        % plot the BPSK signal against time
axis([0 1 -6 6]);                 % set the axis limits
xlabel('t');
ylabel('y');
title('PSK');
grid on;
142. RESULT
The frequency of carrier signal: 10
The frequency of message signal: 5


144. Fig13. Graphs for BPSK

145. CONCLUSION:

146. From this question it was concluded that for BPSK the phase of the carrier is the variable characteristic; the scheme has very high system complexity, high noise immunity and a low probability of error, and it is well suited to high bit rates.
Quadrature Phase Shift Keying

147. Theory:

148. QPSK (Quadrature Phase Shift Keying) is a type of phase shift keying. Like BPSK, it is a DSBSC modulation scheme carrying digital information in the message, but it sends two bits of digital information at a time (without the use of another carrier frequency). The amount of radio frequency spectrum required to transmit QPSK reliably is half that required for BPSK signals, which in turn makes room for more users on the channel. The figure below shows a QPSK modulated waveform.


150. Fig14. QPSK waveform
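For example (an illustrative figure, not from the original text): sending 9600 b/s with BPSK requires 9600 symbols per second, whereas QPSK carries two bits per symbol and therefore needs only 4800 symbols per second, roughly halving the occupied bandwidth.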

151. Algorithm:

152. The following steps are needed to generate a QPSK signal:

1. A binary data sequence is fed to a binary NRZ encoder.
2. This signal is then divided into even and odd bit streams.
3. The even stream is multiplied with the carrier signal √Ps cos(ωc t).
4. The odd stream is passed through a time delay and then multiplied with the quadrature carrier √Ps sin(ωc t).
5. These two signals are then added to give the QPSK signal.

153. The below given block diagram will demonstrate the algorithm.

155. Fig15. An offset QPSK transmitter

156. MATLAB Code:

clc;                          % clear the command window display
clear all;                    % remove all variables from memory
close all;                    % close all open figure windows
msg = round(rand(1,20));      % 20 random message bits
data = [];
t = 0:.01:.99;
c = cos(2*pi*10*t);           % carrier signal, 100 samples per symbol period (fc = 10 Hz)
for i = 1:20                  % build a bipolar NRZ version of the message (used only for plotting)
    if msg(i) == 0
        d = -1*ones(1,10);
    else
        d = ones(1,10);
    end;
    data = [data d];
end;
disp('length of t,c,data');
a = [length(t); length(c); length(data)];
disp(a);
qpsk = [];
for i = 1:2:20                % map each pair of bits (dibit) to one of four carrier phases
    if msg(i)==1 && msg(i+1)==0
        qpsk = [qpsk cos(2*pi*10*t+(pi/4))];
    elseif msg(i)==0 && msg(i+1)==0
        qpsk = [qpsk cos(2*pi*10*t+(3*pi/4))];
    elseif msg(i)==0 && msg(i+1)==1
        qpsk = [qpsk cos(2*pi*10*t+(5*pi/4))];
    elseif msg(i)==1 && msg(i+1)==1
        qpsk = [qpsk cos(2*pi*10*t+(7*pi/4))];
    end;
end;
%plot(qpsk);
modsig = [];
for i = 1:100:1000            % take 10 samples from each modulated symbol for display
    for j = 1:10
        p = qpsk(i+j);
        modsig = [modsig p];
    end;
end;
subplot(311);                 % first of three stacked plots
plot(data);                   % plot the NRZ message data
axis([0 100 -1.5 1.5]);
title('Digital Message signal');
subplot(313);                 % third plot
plot(modsig);                 % plot the modulated (QPSK) signal
axis([0 100 -1.5 1.5]);
title('QPSK signal');
subplot(312);                 % second plot
plot(c);                      % plot the unmodulated carrier
axis([0 100 -1.5 1.5]);
title('Unmodulated carrier');


216. RESULT:


218. Fig16. QPSK waveforms

219. CONCLUSION:

220. From this simulation it could be concluded that for QPSK the phase is the variable characteristic of the carrier, the scheme is very complex to design, and it is suited to applications with very high bit rates.

16 Quadrature Amplitude Modulation

221. Theory:
222. Quadrature Amplitude Modulation, QAM is a signal in which two carriers shifted in
phase by 90 degrees are modulated and the resultant output consists of both amplitude and
phase variations. In view of the fact that both amplitude and phase variations are present it
may also be considered as a mixture of amplitude and phase modulation. Quadrature
amplitude modulation, QAM may exist in what may be termed either analogue or digital
formats. The analogue versions of QAM are typically used to allow multiple analogue
signals to be carried on a single carrier. Digital formats of QAM are often referred to as
"Quantised QAM" and they are being increasingly used for data communications often
within radio communications systems.
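As a quick illustration (an addition, not part of the original text): a 16-QAM symbol carries log2(16) = 4 bits, so a link signalling at 2400 symbols per second carries 2400 × 4 = 9600 b/s, which matches the voice-grade modem rate quoted in part (e).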


224. Algorithm:

225. The following steps are needed to generate a 16 QAM signal:

1. We use a single channel as two separate channels that are orthogonal to each other.
2. The information is divided into in-phase and quadrature components.
3. The outputs of both modulators are algebraically summed and the result is then transmitted.

226. The below given block diagram will demonstrate the algorithm.


228. Fig17. 16 QAM modulator

229. MATLAB Code:

clc;                          % clear the command window display
clear all;                    % remove all variables from memory
close all;                    % close all open figure windows
M = 16;                       % number of possible symbols
k = log2(M);                  % number of bits per symbol
no_of_bits = 100000;          % total number of bits = 100,000
EbNo = 10;                    % Eb/No in dB
Fs = 2;                       % output message sampling frequency

% The Transmitter
x = randi([0 1], no_of_bits, 1);   % 100,000 random binary 1's and 0's (randi supersedes the older randint)
figure;
subplot(211);
stem(x(1:40));                     % stem plot of the first 40 bits
title('(1st 40 out of 100,000) Message Bits');
xlabel('Bits-->');
ylabel('Bit value');
% symbol generation: group the bits k at a time and convert to decimal symbols
r = reshape(x, k, length(x)/k)';
xsym = bin2dec(num2str(r));
% stem plot of the first 10 symbols
subplot(212);
stem(xsym(1:10));
title('(1st 10 out of 25,000) Message Symbols');
xlabel('Symbols-->');
ylabel('Magnitude');

% 16-QAM
t_x = qammod(xsym, M);        % transmitted signal s(t) (qammod supersedes the older dmodce)

% The Channel
SNR = EbNo + 10*log10(k);     % convert Eb/No to SNR (one sample per symbol)
r_x = awgn(t_x, SNR, 'measured');   % received signal r(t) = s(t) + n(t)

% Scatter plot of the received signal
h = scatterplot(r_x, 1, 1);
grid
title('Received Signal Constellation');
axis([-5 5 -5 5]);

269. RESULT:


271. Fig18. Graph for message bit and symbols

Constellation Diagram:

273. Fig19. Constellation diagram

274. CONCLUSIONS:

275. In QAM, two DSB signals are transmitted using carriers of the same frequency but in phase quadrature. Both halves of the channel are used, so the bandwidth efficiency is increased. With this scheme, signals can be transmitted at twice the symbol rate compared with a simple baseband communication system.

276. Q4. Generate the constant-envelope PSK signal waveform for M = 8; for convenience the signal amplitude is normalized to unity.

277. In mobile radio, since multipath fading distorts the amplitude of the carrier, the signal is sent by modulating the phase or frequency of the carrier, which has no impact on the amplitude. We call these modulations constant envelope modulations; that is, no information is carried on the amplitude.

278. Distortion of the carrier amplitude by other factors such as fading or nonlinear amplification will therefore not affect the signal, so it is possible to use a nonlinear amplifier: a non-linear amplifier may distort the amplitude but not the phase.


280. Algorithm:

281. The following steps are needed to generate a constant envelope PSK signal:

1. First of all, we assign a fixed amplitude for the signal.
2. Then we input the carrier signal and message signal frequencies.
3. The message signal (a square wave) is then multiplied with the carrier.
4. At the receiver's side we obtain the modulated signal without any change in the amplitude.

282. MATLAB Code:

clc;                              % clear the command window display
clear all;                        % remove all variables from memory
close all;                        % close all open figure windows
set(0,'defaultlinelinewidth',2);  % set the default line width
A = 5;                            % amplitude of the signal
t = 0:.001:1;                     % time vector
f1 = input('Carrier Sine wave frequency ='); % user input: carrier frequency
f2 = input('Message frequency =');           % user input: message frequency
x = A.*sin(2*pi*f1*t);            % carrier sine wave
subplot(3,1,1);                   % first of three stacked plots
plot(t,x);                        % plot the carrier against time
xlabel('time');
ylabel('Amplitude');
title('Carrier');
grid on;
u = square(2*pi*f2*t);            % message signal (bipolar square wave)
subplot(3,1,2);                   % second plot
plot(t,u);                        % plot the message signal against time
xlabel('time');
ylabel('Amplitude');
title('Message Signal');
grid on;
v = x.*u;                         % carrier multiplied by the bipolar message
subplot(3,1,3);                   % third plot
plot(t,v);                        % plot the modulated signal against time
axis([0 1 -6 6]);                 % set the axis limits
xlabel('t');
ylabel('y');
title('PSK');
grid on;
318. RESULT:

Message frequency: 8Hz


Carrier frequency: 1Hz


320. Fig 20. Constant envelope PSK signal

321. CONCLUSION: It could be concluded that constant envelope modulation involves modulating the phase or frequency of the carrier without affecting its amplitude.
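For completeness, a minimal sketch (an addition, not the assignment's own code) of how an M = 8 constant-envelope PSK waveform with unit amplitude could be generated from random symbols; the carrier frequency, symbol count and samples per symbol below are assumed values:

M = 8;                                % number of phases (8-PSK)
fc = 10;                              % carrier frequency in Hz (assumed)
Ns = 100;                             % samples per symbol (assumed)
t = (0:Ns-1)/Ns;                      % time axis over one symbol period
sym = randi([0 M-1], 1, 8);           % eight random 8-PSK symbols
s = [];
for n = 1:length(sym)
    phase = 2*pi*sym(n)/M;            % one of eight equally spaced phases
    s = [s cos(2*pi*fc*t + phase)];   % unit-amplitude (constant-envelope) carrier segment
end
plot(s); axis([0 length(s) -1.5 1.5]);
title('Constant-envelope 8-PSK waveform (amplitude normalized to unity)');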

323. REFERENCES

1. Abu, B. (2014). Analog and Digital Modulation. Available:


http://www.differencebetween.net/technology/communication-technology/difference-
between-analog-and-digital-modulation/. Last accessed 16th Mar 2016.
2. Barnwal, A. (2015). Greedy Algorithms. Available: http://www.geeksforgeeks.org/greedy-
algorithms-set-3-huffman-coding/. Last accessed 20th Mar 2016.
3. Dolors, M. (2013). Information Theory. Available: http://crackingthenutshell.com/what-is-
information-part-2a-information-theory/. Last accessed 6th Mar 2016.
4. Jasmeen, VP. (2014). Phase Shift Keying (PSK). Available:
http://www.circuitsgallery.com/2013/03/psk-matlab-code2.html. Last accessed 14th Mar
2016.
5. Panda, S.K.. (2013). QPSK signal generation. Available:
http://www.mathworks.com/matlabcentral/fileexchange/43698-qpsk-signal-
generation/content/qpskfinal.m. Last accessed 20th Mar 2016
6. Poole, I. (2012). Quadrature Amplitude Modulation. Available: http://www.radio-
electronics.com/info/rf-technology-design/quadrature-amplitude-modulation-qam/what-is-
qam-tutorial.php. Last accessed 14th Mar 2016.
7. Poole, I. (2013). Free Space Path Loss. Available: http://www.radio-
electronics.com/info/propagation/path-loss/free-space-formula-equation.php. Last accessed
11th Mar 2016.
8. Pulickal, N . (2012). New Line Code. Available:
http://newlinecode.blogspot.in/2012/11/qpsk-matlab-2012a.html. Last accessed 14th Mar
2016.
9. Sakshat Virtual Labs. (2012). QPSK Modulation. Available: http://iitg.vlab.co.in/?
sub=59&brch=163&sim=1065&cnt=2404. Last accessed 14th Mar 2016.
10. Sara, V. (2014). 16 QAM simulation in MATLAB. Available:
http://anengineerzdiary.blogspot.in/2014/01/16-qam-simulation-in-matlab.html. Last
accessed 28th Mar 2016.
11. Tone, S. (2012). Difference Between Analog and Digital Modulation Techniques. Available:
http://www.diffen.com/difference/Analog_vs_Digital. Last accessed 6th Mar 2016.
12. William C.Y.. (2015). Constant Envelope Modulation. Available:
http://www.globalspec.com/reference/70996/203279/8-4-constant-envelope-modulation.
Last accessed 20th Mar 2016.
13. Yin Chen, Xuguang Huang. (2015). 16-QAM systems. Available:
http://www.sciencedirect.com/science/article/pii. Last accessed 28th Mar 2016.
14. Roberts, E.. (2012). Data Compressions. Available:
http://cs.stanford.edu/people/eroberts/courses/soco/projects/2000-01/data-
compression/lossless/huffman/comparison.htm. Last accessed 2nd May 2016.
15. Drek, T. (2014). Shannon-Fano versus Huffman. Available:
http://www.binaryessence.com/dct/en000049.htm. Last accessed 2nd May 2016.

