Pulse code modulation (PCM) involves three main steps: 1) sampling an analog signal, 2) quantizing the sample amplitudes into discrete levels, and 3) encoding the quantized levels into binary digits. The quantization process introduces quantization noise but allows the signal to be represented digitally. More quantization levels provide better signal approximation but require more bits and higher transmission bandwidth. The required transmission bandwidth for PCM is proportional to the product of the sampling rate and the number of bits used to encode each sample.


Communication Engineering

Prof. Surendra Prasad


Department of Electrical Engineering
Indian Institute of Technology, Delhi

Lecture - 41
Pulse Code Modulation (PCM)

So, if you remember, we have been talking about digital transmission, transmission of information in the form of pulses, and our reason for doing that was discussed a couple of lectures earlier: we are able to trade off bandwidth for performance against noise. We can really get very good performance if we do this. Our immediate concern was how to represent our analog information via a digital representation of the same.

We discussed one simple method of doing that in the previous class, namely delta modulation, and its improved version, adaptive delta modulation. There are many variants of these delta modulation schemes, but the purpose is the same: to achieve a good representation of the basic analog information as a sequence of binary pulses, two-level pulses. These two levels are typically of opposite polarity, positive and negative. Now, this is one way of doing things.

Another way of converting analog information into digital form is through what is called pulse code modulation, which is essentially a quantization process followed by an encoding process. So, we will discuss pulse code modulation in the context of communication theory in this class.
(Refer Slide Time: 02:50)

So, if you look at this second method of doing things, basically we have this block diagram for a pulse code modulator; you will see where the name comes from in a few minutes. You start with a message signal as before, and the first thing you do is sample the message signal in accordance with the sampling theorem, that is, at a sufficient rate, assuming of course that the message signal has some band limitation, that it lies within some bandwidth.

And therefore, it should be possible to sample it adequately at a suitable rate: if it has a bandwidth of W, you can sample it at 2W samples or more per second. That will be an adequate representation of the continuous signal m(t), because of the fact that we can recover the continuous message signal from the discrete sampled signal.

This is fed to a block or device which we call a quantizer; I will describe the quantizer in a few minutes. The quantizer essentially carries out a mapping, some kind of a manipulation: for the particular sample value that has come in, it determines which interval of the amplitude range the sample lies in. Basically, the amplitude range of the signal is divided into certain intervals, and the output of the quantizer is simply which interval the sample lies in.

And then we convert, or encode, that information into binary form; that is the encoder, and that in fact is the reason why we call it pulse code modulation: the encoded pulses are typically binary in nature. This mapping of the level from the quantizer output to a binary code, which is the final output, is the job of the encoder, and this is transmitted in the form of binary pulses. So, the final output of the pulse code modulator is a sequence of binary pulses; this is the PCM output.

Let me discuss the quantizer in a little more detail. Let us say this is a message waveform; what you are really doing is dividing the amplitude range under consideration into intervals. You are sampling the signal at a certain rate, you have the sample values, and you look at each sample value. Let us attach some numbers to these levels: 0, 1, 2, 3, 4, 5, 6, 7.

So, basically what you do is the following: look at each sample value. Let us say at this time instant you have this sample value, and you find that it lies between level 4 and level 5 as drawn in this picture. Right in the middle of these two levels you have a corresponding quantized level, so any sample that falls between the 4th and the 5th level will be represented by the amplitude corresponding to that quantized level.

So, effectively, we are once again doing some kind of approximation: you are not really sending the true sample value of the signal, you are sending a value which is close to the true value, close in the sense that both lie in the same small interval around the quantized level. So, the process of quantization creates an error, but we decide to live with this error; that is an essential point to note about PCM.

There is an error, just like in delta modulation there was an error in the representation; after all, ultimately you are reconstructing only a staircase approximation to the actual signal. Similarly, in PCM you are going to create an approximation, you are going to create an error, and this issue is important: in both cases we are actually introducing a kind of noise. Both in delta modulation as well as in PCM, in the process of converting the information into digital form a certain kind of noise gets introduced; we call it quantization noise.

This is distinct from the noise that is added by the channel. So you may well ask what the use is: you are replacing one kind of noise with another kind of noise. The answer is that, fortunately, this kind of noise is under our control; we can reduce it to the level required by choosing the parameters of the modulator, whereas channel noise is not really in our control. So anyway, this is a quantized level.

So, let us say this is level number 5; the quantizer output would then be the index of the level to which the sample gets quantized, namely 5. The encoder will produce a binary representation of the number 5 at its output, and that is what will actually be transmitted; that is the basic idea. So, you will be transmitting the number 5 in binary form, using an appropriate number of bits. Now, it is important to appreciate that between one sample and the next, you will have to transmit not just one pulse.

Suppose you are transmitting the number 5, and suppose the total number of levels, as I have shown here, is 8. With 8 possible levels you will require a minimum of 3 bits to represent these 8 levels. So I need 3 bits to represent just this one sample value; that is, I need to send a sequence of 3 pulses with a particular 1-0 combination to represent the sample value.
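As a concrete sketch of this encoding step, here is a minimal Python fragment; the 8-level setup matches the example above, but the amplitude range of -1 to +1 and the function name are illustrative assumptions, not from the lecture.

```python
def pcm_encode_sample(x, q=8, lo=-1.0, hi=1.0):
    """Map one sample to its quantization-interval index and a binary codeword.

    q levels spanning [lo, hi]; q is assumed to be a power of 2."""
    delta = (hi - lo) / q                  # quantization interval width
    i = min(int((x - lo) / delta), q - 1)  # interval index 0 .. q-1, clamped
    n_bits = q.bit_length() - 1            # 3 bits for q = 8
    code = format(i, f"0{n_bits}b")        # e.g. level 5 becomes '101'
    return i, code
```

For example, a sample of 0.4 falls in interval 5 of 8 and is sent as the 3-pulse pattern '101'.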

So, between one sample and the next I have to transmit 3 pulses, and therefore the pulse rate at which you will be making the transmission is much larger than the sampling rate. The bandwidth requirement, obviously, will be correspondingly high; there is a cost in terms of bandwidth, and it depends on the number of quantization levels. For example, if you have a larger number of quantization levels in a given amplitude range, you will obviously have a better approximation of the signal by the quantization process, which is nice.

But the price that you pay is that, because you have a larger number of quantization levels, you will require a larger number of bits to be transmitted between two successive samples, and the rate of transmission will be correspondingly much higher. So, if this is understood, let me proceed further.
(Refer Slide Time: 11:37)

Let us look at the bandwidth issue: how much is the bandwidth requirement, and can we quantify it? To do that, let us denote by q the number of quantization levels; I showed 8 quantization levels in the example that I drew in the figure a few minutes ago. In general, q is much larger, because you want a good approximation of the signal; the approximation error has to be rather small.

Typically, for convenience, you will choose q to be a power of 2, so that you can represent every level exactly with a finite number of bits, say n bits where q = 2^n. So typically it will be chosen as a power of 2: 4, 8, 16, 32, etcetera. It therefore follows that you need to transmit n bits per sample: for every sample you have, you will have to transmit n pulses.

So, let the message bandwidth of m(t) be W. Then your sampling rate 1/T_s should be a minimum of 2W samples per second; that is the sampling rate. The rate at which you will have to transmit pulses, the pulse transmission rate, would therefore be equal to 2nW, because for every sample you are transmitting n pulses; that is the number of pulses you will have to transmit per second.

And it is more or less common sense to electrical engineers that the bandwidth required to transmit pulses will depend on the duration of the pulses. So, if I transmit 2nW pulses per second, what is the maximum duration of each pulse?
Student: ((Refer Time: 14:04))

The maximum value that you can have is 1/(2nW), and the corresponding bandwidth will be proportional to the reciprocal of this duration. If the pulse duration is tau, the spectrum of the pulse is a sinc spectrum, and the first zero crossing of the sinc spectrum is proportional to 1/tau: the main lobe runs from -1/tau to +1/tau, roughly. Of course, the actual bandwidth is a little larger; in fact, strictly speaking it is infinite, but even if we assume that most of the energy lies in a certain finite band, that band will be proportional to 1/tau.

So, the actual bandwidth will therefore be some constant times 2nW, where this constant is chosen appropriately, say so that 99 percent of the energy of the signal lies in that band; it will be proportional to 2nW. Typically this constant lies between 1 and 2. That is the required bandwidth, and as you increase the number of quantization levels, you increase the value of n, and that increases the required bandwidth.

So, the pros and cons are: we must use a large number of levels if we want a good approximation, and that is achieved at the cost of more bandwidth. Any questions so far? We can go one step further and write the result as B = k * 2nW, where this n can also be written as log of q to the base 2; that is another way of writing the same result.
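The bandwidth rule just stated can be sketched as a small Python helper; the function name is illustrative, and the constant k between 1 and 2 is the pulse-shaping constant quoted above.

```python
from math import log2

def pcm_bandwidth(W, q, k=1.0):
    """Approximate PCM transmission bandwidth B = k * 2 * n * W.

    W: message bandwidth in Hz; q: number of quantization levels
    (a power of 2); k: pulse-shaping constant, typically 1 to 2."""
    n = int(log2(q))         # bits per sample, n = log2(q)
    pulse_rate = 2 * W * n   # pulses per second at Nyquist-rate sampling (2W)
    return k * pulse_rate

# e.g. a 4 kHz message with 256 levels (8 bits) needs at least
# 2 * 4000 * 8 = 64 kHz of bandwidth when k = 1
```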

The next thing we try to understand about this process: we have said that there is an error involved in the quantization process, so we need to appreciate what it is. This introduces a kind of noise right at the transmitter itself; if you were to transmit an analog signal, that kind of thing would not happen. You are introducing some noise deliberately here; of course, you have no choice if you want to do quantization.
(Refer Slide Time: 16:43)

So, one of the important things to understand is the magnitude of this quantization error as a function of the number of quantization levels. We would like to quantify the quantization error, which in turn will allow us to quantify the quantization noise. Let us calculate the signal-to-noise ratio that you can expect from a PCM system which uses a certain number of quantization levels; that is the analysis I would like to take you through.

We need some notation for that. Let me denote by X the value of the message at a given sampling instant, and by delta the quantization interval. We will assume that your signal amplitude varies between some fixed values, say from some value a to some value b, and you divide that range of values into q sub-intervals.

So, the mapping is: if X falls in the i-th quantization interval, then we convert X into a value X_q. Going back to the picture, ((Refer Time: 18:29)) the actual value is converted to this level, which I am calling X_q. What is this value? It is the midpoint of the interval, which I will denote by m_i. So, m_i is the midpoint of the i-th interval. Therefore the quantization interval is delta = (b - a)/q, where b is the maximum value of X, a is the minimum value of X, and q is the number of sub-intervals. So this is the notation; let me depict it in the form of a picture.

(Refer Slide Time: 19:48)

Let us label these boundaries x_0, x_1, x_2, x_3, and so on; note that these are not the levels, these are the interval boundaries, and the midpoints are denoted m_1, m_2, m_3, and so forth. So, we said that X_q = m_i when X lies between x_{i-1} and x_i: you map the value of X to the value X_q = m_i, where the i-th interval is defined as running from x_{i-1} to x_i. This is what I have been saying so far.

Let me write an expression for x_i. The smallest value is a, which we can denote x_0, so x_0 = a, and x_i = a + i*delta for i from 1 to q.

Student: ((Refer Time: 21:35))

I think I am defining from x_1 onwards; x_0 we define to be equal to a, it does not really matter. And m_i is the middle of these two boundaries, so m_i = (x_{i-1} + x_i)/2. With this notation we are able to compute the quantization error. How will you define the quantization error? You have a sample value X, and you are converting it into a value X_q.
So, the error is X - X_q, and what I want is the average quantization error, or average quantization noise power; I am thinking of the error as some kind of noise. Of course, when you talk about error, the error can be positive or negative; when you talk about power, we do not worry about the sign, we talk about the squared value. Essentially we are talking about the mean-square value of the error. So we take the expected value of (X - X_q)^2; that is the definition of the average quantization noise power, a kind of mean-square error.

So, how will you compute this average? X can take any value; we treat X as a random variable, and at different time instants different values of X fall in different intervals. So you have to take the average over the distribution of X; X lies between a and b with some probability density function, so the domain of X is a to b, the lowest value is a, the largest is b.

So you have the integral over [a, b] of (x - x_q)^2 times the density function of x, which I am denoting f_X(x) dx; that is the mean-square error. Everyone agree so far? To proceed further, please note that the interval from a to b is composed of a set of sub-intervals, so I can carry out this integration over each of the sub-intervals separately and add up the results. Instead of directly integrating from a to b, I can integrate over this interval, then over this interval, and so on; it is the same thing.
(Refer Slide Time: 25:01)

So, you can also represent it as a sum of integrals, where the i-th integral runs from x_{i-1} to x_i, and when x lies in that region the corresponding value of X_q is m_i. That is, N_q = sum over i from 1 to q of the integral from x_{i-1} to x_i of (x - m_i)^2 f_X(x) dx. Any questions on this? Let me denote this quantity by N_q; this is the average quantization noise power. Similarly, I can define the signal power of the quantizer output.

So, the signal power of the quantizer output would be the expected value of X_q^2, since the quantizer produces the value X_q. What is this equal to? In the same way as we did for the noise, we take the integral from a to b of x_q^2 f_X(x) dx, which again splits into q integrals; in the i-th interval the value of x_q^2 is m_i^2, and because this is a constant within that interval, I can take it outside the integral: S_q = sum over i from 1 to q of m_i^2 times the integral from x_{i-1} to x_i of f_X(x) dx.

So, these are the two expressions, (1) and (2): one gives you the average signal power, the other the average noise power, and the ratio of the two gives you a measure of the fidelity of the quantizer output to the original signal which you are quantizing. So, S_q / N_q is a measure of the fidelity, or accuracy, of the quantizer.
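The quantizer mapping X to X_q = m_i described above can be sketched in Python; the range [a, b] = [-1, 1] and q = 8 are illustrative choices, not fixed by the lecture.

```python
def quantize(x, a=-1.0, b=1.0, q=8):
    """Uniform quantizer: map x in [a, b] to the midpoint m_i of its interval."""
    delta = (b - a) / q                    # interval width, delta = (b - a)/q
    i = min(int((x - a) / delta), q - 1)   # interval index, clamped at the top
    return a + (i + 0.5) * delta           # midpoint m_i of that interval

# Quantization error for one sample; its magnitude never exceeds delta/2:
x = 0.30
xq = quantize(x)     # nearest midpoint, here 0.375
err = x - xq         # the error we decided to live with
```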

Student: Sir
Yes

Student: ((Refer Time: 28:05))

This one: in the same way, you are taking the average value of the quantizer output squared, X_q^2, and then

Student: How did we separate ((Refer Time: 28:16))

So, ((Refer Time: 28:19)) the integral from a to b can be broken up into the integral from x_0 to x_1, then x_1 to x_2, then x_2 to x_3; that is all I am doing, and then summing them all up. This m_i^2 is actually part of the integral, but since it is independent of x within that interval, I have taken it outside the integral; in the noise expression I cannot pull the squared term out, because it depends on x.

Student: ((Refer Time: 28:46))

Please speak out the doubt, if you still have the doubt

Student: ((Refer Time: 28:51))

Sure. Now, to proceed further: this expression unfortunately is not in a form which makes much sense to us; it is too abstract, it simply does not give us a physical picture. To get a physical picture we have to assume a certain distribution for the signal samples, because without a distribution I cannot simplify these expressions any further. To keep the picture simple, let us assume the distribution to be one of the simplest we can have: x lies between a and b with a uniform distribution. It is the simplest picture you can have.
(Refer Slide Time: 29:48)

So, let us take an example that will give us a better picture. Let us say the lower limit is -a and the upper limit is +a, so I am assuming the signal lies between -a and +a with a uniform density of 1/(2a). I have taken these limits again for simplicity. Then, for N_q, please note that in this case you can write down m_i explicitly.

The i-th boundary is x_i = -a + i*delta, which takes you to the upper edge of the i-th interval, and the middle level is delta/2 lower, so m_i = -a + i*delta - delta/2. I will substitute this m_i into the expression for N_q. So, let us now look at the expression for N_q, which was: sum over i from 1 to q of the integral from x_{i-1} to x_i of (x - m_i)^2 times f_X(x) = 1/(2a), dx.

Substituting for m_i, ((Refer Time: 31:45)) the integrand becomes (x + a - i*delta + delta/2)^2 times 1/(2a) dx. Now, this is very easy; it looks cumbersome because of the constants, but the integration is very straightforward. Let me evaluate the integral and skip the intermediate steps, if you permit me.
(Refer Slide Time: 32:24)

You can see that N_q becomes: sum over i from 1 to q of (1/(2a)) times delta^3/12; the 1/(2a) simply comes outside the integral, and each integral evaluates to delta^3/12. Can I skip that? You can check the evaluation; it is straightforward. So each term becomes delta^3/(12 * 2a), and since this is independent of i, we can write the sum as q times delta^3/(12 * 2a), because basically we have q identical terms. So you simply multiply by q. But what is the value of q times delta?

Student: ((Refer Time: 33:20))

2a, the total interval in which the signal lies, the dynamic range of the signal in some sense. So q*delta = 2a, and you are left with N_q = delta^2/12; a very simple, neat expression for the quantization noise. Now you can see that the smaller your quantization interval, the smaller the power of the noise process, which is intuitively expected. Similarly, we can compute S_q; the integral in this case is even simpler ((Refer Time: 34:15)), I think it is obvious.

Let me just go through that expression. The density is 1/(2a), so the integral of f_X(x) between x_{i-1} and x_i, an interval of width delta, is delta/(2a); that is the answer. So S_q = sum over i of m_i^2 times delta/(2a). It is again very simple to check that if you substitute for m_i and evaluate this summation, you get a nice closed form: S_q = (q^2 - 1) * delta^2 / 12; the factor (q^2 - 1) essentially comes from the summation of the m_i^2.

Substitute for m_i as given and you can check it out; again I am leaving that out for reasons of time, as it is straightforward. This brings us to S_q / N_q = q^2 - 1, very simple, or approximately q^2 when q is much greater than 1. That is the expression you should use for finding the signal-to-quantization-noise ratio you can expect from your PCM quantizer.

Of course, remember that there are certain assumptions we have made in deriving this expression; the most important is that the signal samples have a uniform distribution between -a and +a. With that assumption, this is the answer, which is useful as a rule of thumb, though not exactly accurate for all kinds of signals.
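As a sanity check on the derivation, here is a small Monte-Carlo sketch; the function name, sample count, and seed are arbitrary choices. It estimates S_q / N_q for samples uniform on [-a, a] and should land near the predicted value q^2 - 1.

```python
import random

def snr_uniform(q, n_samples=200_000, a=1.0, seed=0):
    """Estimate S_q / N_q for samples uniform on [-a, a] with q levels.

    The derivation above predicts q**2 - 1, roughly q**2 for large q."""
    rng = random.Random(seed)
    delta = 2 * a / q                       # quantization interval width
    s = n = 0.0
    for _ in range(n_samples):
        x = rng.uniform(-a, a)
        i = min(int((x + a) / delta), q - 1)
        xq = -a + (i + 0.5) * delta         # midpoint m_i of the interval
        s += xq * xq                        # signal power of quantizer output
        n += (x - xq) ** 2                  # quantization noise power
    return s / n

# snr_uniform(16) should come out close to 16**2 - 1 = 255
```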

(Refer Slide Time: 36:27)

If you express it in dB, the rule of thumb becomes SNR = 20 log10 q dB. So, if you double the value of q, how many dB do you gain in signal-to-noise ratio? 6 dB, since 20 log10 2 is about 6 dB. So every doubling of q gives you a 6 dB increment in SNR, and what is a doubling in terms of the number of bits? One more bit.
So you get a 6 dB increment for every additional bit that you introduce in the representation; that is the general rule of thumb. If you want something like 40 or 50 dB of SNR, say 40 dB, you must use at least 7 bits. Typically 7 to 8 bits are used in practical applications; 7 to 8 bits means the number of quantization levels is 128 or 256.
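The 6-dB-per-bit rule of thumb can be written as a one-line Python helper; the function name is illustrative.

```python
from math import log10

def pcm_snr_db(n_bits):
    """Rule-of-thumb quantizer SNR for n_bits bits: 20 * log10(q), q = 2**n_bits."""
    return 20 * log10(2 ** n_bits)   # about 6.02 dB per bit

# Each extra bit doubles q and adds about 6 dB; 7 bits give roughly 42 dB,
# which is why a 40 dB target calls for at least 7 bits.
```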

The rms value of the error, if you are interested in that rather than the signal-to-noise ratio, is simply delta/sqrt(12), because the noise power is delta^2/12. Any questions so far? Now, this picture is good, but the example that I took was somehow matched to the kind of quantization that I did; let me elaborate on what I mean by that.

We took a uniform distribution, which means the signal has equal probability of being in any of the intervals, and these intervals are uniformly spaced. But suppose the probability of the signal lying in one interval is very different from the probability of it lying in some other interval; this can happen.

If small values of the signal are, say, much more likely than large values, then the kind of quantization we have discussed is not a very good idea. What we have discussed so far is called uniform quantization, and uniform quantization is quite appropriate if the signal samples are uniformly distributed, as in the uniform distribution we discussed. But more likely, your samples are not going to be uniformly distributed.

You may have a Gaussian distribution; actually, speech signals have a very different kind of distribution, a Laplacian kind of distribution. So in that case you can intuitively expect that the uniform quantization process is not the best thing to do; it is probably better to do something else.
(Refer Slide Time: 40:30)

Let us take an example: suppose x is Gaussian, that is, the samples are distributed according to this bell-shaped probability density curve. From this picture it is clear that smaller values of x are much more likely than larger values: x lying in an interval near zero has a very large probability, while x lying in an interval far out has a small probability. So, if you are going to use your bits for representation efficiently, you should crowd the quantization intervals in the neighborhood near zero.

And space them out sparsely for larger values of x, rather than keeping all the quantization intervals of uniform size. Whenever a larger quantization interval is used, the error there is going to be large, but fortunately that error will occur with much lower probability, so its contribution to the average noise power will be much smaller. That is a reasonable thing to do.

So, whenever you have a situation like this you do not use uniform quantization; you use what is called non-uniform quantization, and non-uniform quantization will typically improve the effective value of S_q / N_q for a given number of bits, a given value of n. The design problem is basically to find out how you should distribute the interval boundaries and the quantization levels.

How should you space the values x_0, x_1, x_2, x_3, x_4, etcetera, and what should the corresponding levels be? It is also clear that you should no longer use the midpoint of two boundaries as the quantization level; you should use something biased towards the region where the signal samples are more likely to be located, so instead of the midpoint it sits slightly shifted towards the higher-probability side.

So, these will be m_1, m_2, etcetera. But this is a difficult problem: find the set of x_i's and m_i's so that you get the best possible signal-to-noise ratio. That is the job of designing a non-uniform quantizer, or at least one way of designing one: taking into account the distribution function that you have, find the best pair of values x_i and m_i for all values of i, so as to produce the smallest value of error, the least mean-square error.

That is a good, interesting problem, and it has been solved in the literature, but it is rather involved. Because of the difficulty associated with this approach, there is a simpler approach to carrying out non-uniform quantization, one which is actually widely used in practice and which I would like to discuss.

(Refer Slide Time: 44:43)

So, in practice, non-uniform quantization is achieved via what is called sample compression followed by uniform quantization; let me explain what I mean. Let us say you have this signal which has a non-uniform distribution. What I will do is pass the signal sample values through a non-linear amplifier; the signal values are denoted by x and the amplifier outputs by y. It is not a linear amplifier: it does not produce y simply proportional to x.
It is a non-linear amplifier in the sense that it produces a response something like this curve. The basic idea of using this non-linearity is that sample values which occur with lower probability, which are typically the higher values of x, get compressed into smaller output regions; that is, this large region of x is compressed into this small region of y.

Whereas the corresponding output region here, for an input interval of the same width, occupies much more; because x has a much smaller probability of lying in this far interval, I compress that entire interval into a much smaller one. And in the y-domain I do uniform quantization, now giving equal importance to all the values of y that I get; you are effectively doing the same thing in a different way.

A larger interval of x is compressed into a smaller interval in terms of y, but having done this compression I treat y with uniform quantization. This is what we mean by sample compression followed by uniform quantization. At the receiver, of course, your signal values come out distorted: you are not transmitting the actual signal value, you are transmitting some alternative value.

Student: ((Refer Time: 47:50))

No, but this is what I am finally transmitting: I am not transmitting this value, I am transmitting some value which lies in this interval, so instead of transmitting this value I am transmitting this one, and there is a non-linear relationship between the two.

Student: ((Refer Time: 48:08))

So, I must convert them back into x values. What should I do at the receiver? An inverse operation: I have a compressor amplifier at the transmitter, and at the receiver an amplifier with the inverse characteristic, which we call the expander. So we carry out compression at the transmitter and expansion at the receiver, and for various reasons this combined operation of compressing and expanding is called companding.
It is a short form for compressor plus expander. This companding is a very common practice used in PCM, pulse code modulation, to utilize the number of bits you allocate in a much more effective way. There are two common compression laws in the industry. One is called the mu-law compression. These laws are a bit complicated, but they are easy to implement, and they have been arrived at through a lot of empirical research.

The other is called A-law compression. These are industry standards; you all know what an industry standard is. One comes from the European standardization process, the other from the American standardization process.

(Refer Slide Time: 49:46)

And this is what they look like. Do not ask me too many questions on this, because we do not have time for it; I am giving the expressions just for the sake of completeness. These basically specify the non-linear characteristic of the amplifier that you need to have at the compressor. So this is the mu-law compressor; it is basically a logarithmic compression, since the logarithmic relationship is itself a kind of compression relationship, and this law is based on that.

A typical value of the parameter mu is taken to be of the order of 100, and the A-law is given by a similarly complicated expression; here A is also typically taken to be of the order of 100. I am doing this just for the sake of completeness; let us not worry too much about where these expressions come from. Anyway, both are some kind of logarithmic compression, arrived at independently by different people. The important feature of these compression characteristics is that, incidentally, they are largely independent of the distribution function that the signal might have.
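To make the two laws concrete, here is a small sketch of the standard mu-law and A-law compressor characteristics, together with the inverse (expander) for the mu-law. The parameter values of about 100 follow what is said in the lecture; note, as an aside, that the telephony standards actually use mu = 255 and A = 87.6.

```python
import math

def mu_law_compress(x, mu=100.0):
    """mu-law compressor: maps x in [-1, 1] to y in [-1, 1]."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_expand(y, mu=100.0):
    """The inverse characteristic (the expander) used at the receiver."""
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

def a_law_compress(x, A=100.0):
    """A-law compressor for x in [-1, 1]: linear near zero, logarithmic above 1/A."""
    ax = abs(x)
    if ax < 1.0 / A:
        y = A * ax / (1.0 + math.log(A))
    else:
        y = (1.0 + math.log(A * ax)) / (1.0 + math.log(A))
    return math.copysign(y, x)
```

Small amplitudes come out boosted and large amplitudes come out relatively smaller, which is exactly the compression of the high-amplitude intervals described above; the expander undoes this at the receiver.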

Suppose I ask you the question: I have some random variable x with a given distribution, and I want to carry out a transformation y = g(x) such that y has a uniform distribution. What kind of function will do the job? Do you know that? If you choose y = F_x(x), the cumulative distribution function of x, this mapping will convert x into a uniformly distributed random variable.

So, ideally I should choose this compression ((Refer Time: 52:15)) based on the distribution function of x. What the A-law and mu-law do is make it more or less independent of this. They are fairly robust to the distribution function; of course there will be some variation, it will not really be optimum, but they come pretty close to the optimum.
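The claim just discussed, that mapping a random variable through its own CDF uniformizes it, can be checked numerically. Here, purely as an illustration, with an exponential random variable (the distribution and rate are made-up choices for this example):

```python
import math
import random

random.seed(0)
lam = 2.0
# Exponential samples; the CDF of this distribution is F(x) = 1 - exp(-lam * x)
xs = [random.expovariate(lam) for _ in range(100_000)]
ys = [1.0 - math.exp(-lam * x) for x in xs]  # apply y = F_x(x)

mean_y = sum(ys) / len(ys)
var_y = sum((y - mean_y) ** 2 for y in ys) / len(ys)
# A Uniform(0, 1) variable has mean 1/2 and variance 1/12
```

The sample mean and variance of y come out very close to 1/2 and 1/12, as they should for a uniform variable.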

That holds for most of the kinds of distributions that you see in real signals, the ones you use for speech and music and things like that. Now, one final thing before I finish with PCM. I have discussed the transmitter, but there are a few things that you should know about the receiver; I have not discussed anything about the receiver. What should you have at the receiver, and what does a receiver look like?

(Refer Slide Time: 53:02)


Before we talk about anything else, please remember that there is already one complication at the receiver: we are transmitting groups of bits to represent each sample, and these groups of bits follow one another. Now I have an additional problem which I never had in any analog system, not even in the pulse amplitude modulation systems that I discussed so far.

Namely, I must know which group of bits represents which sample; I must have synchronization. I must know precisely, given a continuous stream of bits coming along, whether it is these 3 bits which represent a sample or those 3 bits which represent a sample; this has to be specified. So you need to introduce some way to tell the receiver how the framing has been done, and that introduces the very important problem of synchronizing at the bit level as well as the block level.

You must know precisely the time instants at which bit levels might change and also how the bits are grouped: which group of 7 bits represents a sample. Of course, once you establish this once, there is no problem after that. So one additional complication is that of synchronization; now let us say we have sorted that out, and I need not go through a detailed discussion of it.
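As an illustration of the block-level synchronization just described, the sketch below groups an already bit-synchronized stream into 7-bit words. The word length and the stream contents are made-up values; the point is simply that a wrong boundary offset produces entirely different words.

```python
def frame_bits(bitstream, word_len=7, offset=0):
    """Group a bit stream into words of word_len bits, starting at the
    word boundary `offset` that synchronization must first establish."""
    bits = bitstream[offset:]
    n_words = len(bits) // word_len
    return [bits[i * word_len:(i + 1) * word_len] for i in range(n_words)]

# Two 7-bit samples back to back (the bit values are arbitrary):
stream = [1, 0, 1, 1, 0, 0, 1,  0, 1, 1, 1, 0, 1, 0]
right = frame_bits(stream, word_len=7, offset=0)  # correct framing
wrong = frame_bits(stream, word_len=7, offset=1)  # off-by-one framing
```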

You will learn more about all these things in a course on digital communication, if you decide to take it. But suppose we have taken care of this problem; this was an additional problem. On top of ((Refer Time: 54:57)) what you are doing, there will be noise coming in from all directions: what you really get is not the clean samples that I have drawn here, but noisy versions of them. What are the jobs that you have to do at the receiver now?

Before you can do the decoding and things like that, you simply have to be able to decide whether, in a particular interval, the signal that you have seen is a pulse of some amplitude, say 5 volts, or a pulse of amplitude 0: a positive pulse or a zero pulse. It is not going to be 5 volts; it may be 5 microvolts or 50 microvolts, along with noise. Is it supposed to be a 1 level or a 0 level? How do you decide that?

Student: ((Refer Time: 55:36))

No, you cannot do a straightforward quantization, because the noise ((Refer Time: 55:41)) can easily make you take a wrong decision. So somehow you have to filter at the receiver first, which will average out the noise, remove the effect of noise within the pulse, and only then take a decision whether it is 1 or 0. This decision process is very important.

The decision-making process, which looks so trivial, is actually a fully fledged subject by itself in communication theory; we call it detection theory. And one of the simplest results is that you use a filter at the front end, called a matched filter, which will suppress the noise; you then follow the matched filter output by a D to A converter.

Follow this by a low-pass filter. The matched filter output will give you these groups of bits; the D to A converter will convert each group of bits into a sample value. The sample values represent the signal in pulse amplitude modulated form; pass that through a low-pass filter and you are able to recover the message signal m(t). So you require all of this, plus synchronization: that is your PCM receiver.
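The receiver chain just described can be sketched end to end as a toy example: an idealised threshold detector standing in for the matched-filter-plus-decision stage, followed by D to A conversion of 3-bit groups back into sample values. All of the numbers here (the threshold, the word length, the noisy waveform) are made-up for illustration.

```python
def detect_bits(noisy_pulses, threshold=0.5):
    """Decide 1 or 0 for each pulse interval; an idealised stand-in for
    the matched-filter-plus-decision stage."""
    return [1 if p > threshold else 0 for p in noisy_pulses]

def bits_to_samples(bits, n_bits=3, v_max=1.0):
    """D to A conversion: each group of n_bits becomes one quantized
    sample value in [0, v_max]."""
    levels = 2 ** n_bits
    samples = []
    for i in range(0, len(bits) - n_bits + 1, n_bits):
        code = 0
        for b in bits[i:i + n_bits]:
            code = (code << 1) | b
        samples.append(code / (levels - 1) * v_max)
    return samples

# A noisy received waveform carrying the 3-bit codes 101 and 011:
received = [0.9, 0.1, 0.8, 0.2, 1.1, 0.7]
bits = detect_bits(received)
samples = bits_to_samples(bits)
```

In a real receiver these sample values would then pass through the low-pass filter to reconstruct m(t); that final filtering step is omitted from the sketch.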

So that was, in very short form, an overview of pulse code modulation, which I normally do in two lectures; I have done it in one as we do not have time, but I think I have still told you most of the things. What we have missed out on, for delta modulation and to some extent for pulse code modulation, is the quantization noise performance. For PCM I have covered the quantization noise performance, at least for the simple case of uniform quantization, but for delta modulation it is left as a self-reading exercise.

Thank you very much.
