Lecture Note v1.2

WELCOME TO BEEE 212

DIGITAL COMMUNICATION
SYSTEMS
ABOUT THE COURSE LECTURER

❑KENNETH COKER
• PHD ONGOING, INFORMATION AND COMMUNICATION ENG., UESTC, CHINA
• MENG., INFORMATION AND COMMUNICATION ENG., UESTC, CHINA
• BSC., TELECOMMUNICATIONS ENGINEERING, KNUST, GHANA

❑ELECTRICAL AND ELECTRONIC ENGINEERING DEPT, HTU


• EMAIL: KENNETHCOKER25@GMAIL.COM
• CONTACT: 0246230604/0553252305
• OFFICE: ELECTRICAL LAB 1, ROOM 1
• RESEARCH AREAS: ACOUSTICS, SIGNAL PROCESSING, SPECIAL LASERS AND
MICROCAVITIES
About this Course
• Digital Communication Systems (DCS, BEEE 212)

• Important course:

– DCS is an indispensable course in modern Engineering.

– DCS is important both in theory and practice.


Syllabus
• Signal sources and types.
• Random signals and noise: power spectral density and
autocorrelation.
• Gaussian processes.
• Discrimination between a finite number of possible signals.
• The optimal receiver for known signals embedded in additive
white Gaussian noise.
SYLLABUS

• The correlation receiver, matched filter, and the associated probability of error.
• Modulation methods: ASK, PSK, FSK, MSK.
• Applications to PCM and radar.
• Efficient signal design for binary communication.
• Detection of signals with unknown phase.
Prerequisites
Related Courses:

1. Analog Communication Systems, very important!


2. Advanced Mathematics, Linear Algebra
3. Stochastic Processes
4. Statistics and Probability
5. Information Theory
Textbook and References

Digital Communications Fundamentals and Applications, Bernard Sklar


Reference Books
• Communication Systems Engineering (2nd Edition), Proakis

• Communication Systems, Haykin

• Information Transmission, Modulation and Noise, Schwartz

• Digital Communications, Proakis


COURSE ASSESSMENT

• HOMEWORK – 10%
• UNANNOUNCED QUIZZES + ATTENDANCE – 10%
• MIDSEMESTER EXAMS – 20%
• FINAL EXAMS – 60%

• TOTAL – 100%
What is the fundamental
problem of communication?

Overview of Information Theory
The roots of modern digital communication stem from the groundbreaking
paper "A Mathematical Theory of Communication" by Claude Elwood
Shannon in 1948.
From Shannon: "The fundamental problem of communication is that of
reproducing at one point either exactly or approximately a message
selected at another point."

In all forms of communication, information is transmitted from a
source through a channel to a receiver.
What is Information? How do we measure it?
Hartley’s Definition
• Consider a random variable S(or an experiment) with discrete
equiprobable events, e.g. throwing a fair die.
• Let the possible outcomes be {s1 , s2 , . . . , si, . . . , sn }.
• The amount of information is proportional to the number of possible
outcomes, I ∝ n.
• Therefore, the information content in a message from this
source is
I_H = \log_b n (1)
• Units of I: bits, nats and Hartleys, if b = 2, e, or 10, respectively.
For example, a throw of the fair die carries I_H = \log_2 6 ≈ 2.58 bits.
Assumptions in Hartley’s Measure
1. All symbols and/or messages are equiprobable.
2. All symbols and/or messages are independent(i.e. the occurrence
of one does not depend on the other).
Shannon factored in the assumptions:
I ∝ 1/p (2)
1. A higher probability of occurrence carries less information.
2. A lower probability of occurrence carries more information.
For a discrete random variable S with outcomes or possibilities
{s1, s2, · · · , si, · · · , sn}, each having probability pi, the average
information or entropy associated with the random variable S is given by
Shannon's definition:
H(S) = -\sum_{i=1}^{n} p_i \log_b p_i (3)
Measure of Information
Hartley’s Definition
For a discrete random variable X with outcomes or possibilities
{x1 , x2 , · · · , xi, · · · , xn },(representing the source alphabet), each having
probability pi, the (self) information associated with the ith outcome is
I(x_i) = \log_b \frac{1}{p_i} = -\log_b p_i (4)
Example (1). In a coin throw, there are two possible outcomes
{H, T }. What is the information associated with an outcome.
Solution:
The probability of an outcome x, assuming a fair coin, is p(x) = 1/2.
Hence the information associated with that outcome is
I(x) = \log_b \frac{1}{p(x)} = \log_b 2

For b = 2, e, or 10, respectively, I(x) = 1 bit, 0.69 nat or 0.3 Hartley.

What is the information associated with an outcome if the coin is
thrown two times and also three times?
Solution
• For two throws, the possible outcomes are {HH, HT, TH, TT};
each is equiprobable with p = 1/4, so I(x) = \log_2 4 = 2 bits.
• For three throws, the possible outcomes are
{HHH, HHT, HTH, HTT, THH, THT, TTH, TTT};
each has p = 1/8, so I(x) = \log_2 8 = 3 bits.
Hartley's definition assumed

1. All events (symbols/messages) are equiprobable.
2. All events (symbols/messages) are independent (i.e. the occurrence
of one does not depend on the other).
Shannon’s Definition
Let X be a random variable with alphabet χ, where x ∈ χ, and
probability mass function p(x) = Pr{X = x}. The average
information (entropy) associated with the alphabet is
H(X) ≡ E\{I(X)\} = \sum_{x \in \chi} p(x) \log_b \frac{1}{p(x)} = -\sum_{x \in \chi} p(x) \log_b p(x)
Example (2). Find the entropy of X
Solution:
The entropy of X is
Example (3).

• Suppose we have a horse race with eight horses taking part.
Assume that the probabilities of winning for the eight horses are
(1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64). What is the entropy of
the horse race?
Solution:
H(X) = \frac{1}{2}\log_2 2 + \frac{1}{4}\log_2 4 + \frac{1}{8}\log_2 8 + \frac{1}{16}\log_2 16 + 4 \cdot \frac{1}{64}\log_2 64 = 2 bits
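As a numerical check (a minimal Python sketch added for illustration, not part of the original notes), Equation (3) applied to these probabilities gives exactly 2 bits:

```python
import math

def entropy(probs, base=2):
    """Average information H = -sum(p * log_b p) over outcomes with p > 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Win probabilities of the eight horses from Example (3).
p = [1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64]
print(entropy(p))  # 2.0 bits
```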
Example (4).

• Consider a random variable that has a uniform distribution over
32 outcomes. How many messages do we need and how long
should the messages be?
Solution:

• How many messages? 32.

• How long should the messages be? Each outcome has probability
1/32, so each message needs \log_2 32 = 5 bits.

Shannon used entropy as a measure of how predictable information
might be.
He wanted to find a way to transmit data "reliably" through the
channel at the "maximal" possible rate.
Coding Theory
Coding theory is concerned with finding explicit methods for increasing
the efficiency and reducing the error rate of data communication over
noisy channels, at rates approaching the channel capacity.

It can be subdivided into source coding theory and channel coding
theory. Using a statistical description for data, information theory
quantifies the number of bits needed to describe the data, which is
the information entropy of the source.
Digital and Analog Signal Transmission
Communication systems can be analog or digital. There are many
reasons why communication systems are going digital. The primary
advantage is the ease with which digital signals, compared with analog
signals, are regenerated.
Why Digital?
Advantages over Analog:

• More reliable, less sensitive to component tolerances and to
environmental effects such as temperature, noise etc.;
• Higher accuracy, can be integrated on a single chip;
• Ease of regeneration, regenerative repeaters are used for this.
• Distortions in analog systems cannot be removed by amplification.
Ease of regeneration:

• As all transmission lines and circuits have some non-ideal frequency
transfer function, the pulse is distorted.
• Unwanted electrical noise or other interferences further distorts
the pulse.
Figure 2: Pulse degradation and regeneration
(1) Programmability

• Analog Systems: You have to modify hardware design.

• Digital Systems: Only modify software.


(2) Precision
• Analog System, by component specification

– Resistors have a tolerance of 5%.

– Capacitors 20% or worse.


• Digital System, by ADC bits, CPU word width (or word length)
and algorithm;
(3) Stability
• Analog System:

– The characteristics of analog system components (e.g. resistors,
capacitors and operational amplifiers) change with temperature,
humidity, etc.
• Digital System:

– Digital systems show no variation with temperature throughout
their guaranteed operating range.
(4) Anti-noise:
(5) Repeatability
(6) VLSI(Very Large Scale Integrated Circuit)
(7) Error Correcting Codes:
• Data retrieval and transmission systems suffer from a number
of potential forms of error.
• With information in a digital or binary form, we may easily build
into the data stream additional "redundant bits" that are used
to detect when an error has occurred.
• An error-correcting code is an algorithm for expressing a sequence
of numbers such that any errors which are introduced can be
detected and corrected.
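As a toy illustration of such redundant bits (a minimal Python sketch; practical systems use far stronger codes such as Hamming or Reed-Solomon codes), a rate-1/3 repetition code corrects any single bit error per group by majority vote:

```python
def encode(bits):
    """(3,1) repetition code: transmit every data bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority vote over each group of three corrects a single flipped bit."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1]
tx = encode(data)
tx[4] ^= 1                  # simulate a channel error: flip one transmitted bit
assert decode(tx) == data   # the error is corrected at the receiver
```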
(8) Data Transmission and Storage:
• The fidelity of the digital medium is greater than that of the
analog one.
• The Internet, Compact Disc (CD) and Digital Video Disc (DVD)
brought trouble-free, high-quality text, audio and video into
offices and homes.
(9) Data Compression
• The cost of information channels and transmission bottlenecks
make compression necessary for real-time processing.
• With analog compression some information is lost. An example is
the bandwidth limiting applied to analog telephone lines, which
limits the bandwidth to 3 kHz.
• Digital technology makes lossless compression possible.
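As a small demonstration (a Python sketch using the standard-library zlib module; the byte stream is an arbitrary example), lossless compression removes redundancy and recovers the original exactly:

```python
import zlib

data = b"ababababababab" * 64               # a highly redundant byte stream
packed = zlib.compress(data)

assert zlib.decompress(packed) == data      # lossless: recovered bit-for-bit
print(len(data), "->", len(packed))         # the redundant stream shrinks a lot
```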
(10) Signal Combination and transmission is easier in Digital
• Combining digital signals using TDM is easier.

• All signals are treated as 1s and 0s in digital and require no
special treatment in transmission.
• Signal encryption is much easier in digital than analog.
Disadvantages of Digital
• Digital systems allocate a significant amount of their resources
to synchronization.
• Signal processing in DCS can be intensive.
• Digital communication systems do not degrade gracefully. Below
a certain S/N ratio the quality of the signal can change from
very good to very bad.
APPLICATIONS OF DCS
Information Transmission
Formatting Block

Figure 3: Transforming the source information into bits.


This transforms the source information into bits, thus ensuring
compatibility between the information and the signal processing within
the DCS. From this point on, the signal exists in the form of a bit stream.
Source Encoding

The goal here is to remove redundancy, making the message as
small (in bits) as possible.
Encryption
Channel Encoding
Multiplexing
Pulse Modulation

This is an essential step because each symbol to be transmitted must
first be transformed from a binary representation (voltage levels
representing 1s and 0s) to a baseband waveform.
Bandpass Modulation
• For applications involving RF transmission, the next important
step is bandpass modulation.
• This is required when the transmission medium will not support
the propagation of pulse-like waveforms.
• In such cases the baseband waveform gi(t) is translated to a
bandpass waveform si(t), and the received signal is

r(t) = s_i(t) ∗ h_c(t) + n(t),  i = 1, . . . , M

where h_c(t) is the impulse response of the channel, ∗ denotes
convolution, and n(t) is additive noise.
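A discrete-time sketch of this received-signal model in Python (the waveform, the toy channel impulse response and the noise level are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000                                 # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
s = np.cos(2 * np.pi * 100 * t)           # a transmitted bandpass waveform s_i(t)
h_c = np.array([1.0, 0.5, 0.25])          # toy multipath channel impulse response
n = 0.1 * rng.standard_normal(t.size)     # additive white Gaussian noise n(t)

r = np.convolve(s, h_c, mode="same") + n  # r(t) = s_i(t) * h_c(t) + n(t)
```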
Frequency Spread
Spread-spectrum techniques are methods by which a signal (e.g.
an electrical, electromagnetic, or acoustic signal) generated with a
particular bandwidth is deliberately spread in the frequency domain,
resulting in a signal with a wider bandwidth.
Multiple Access
Multiple access is a technique that lets multiple users share the allotted spectrum
in the most effective manner.

Depending on the channel type, a specific multiple access technique
can be used for communication. The channel types and the associated
multiple access techniques are as follows:

• Frequency Channels [Frequency Division Multiple Access
(FDMA)] - The frequency band is split into small frequency
channels, and different channels are assigned to different users.
One example is FM radio, where multiple users can transmit
simultaneously, but on different frequency channels.
• Time-slot Within Frequency Bands [Time Division Multiple
Access (TDMA)] - Every user is permitted to transmit only in
specific time slots using a common frequency band. Various users
can transmit on the same frequency band at different times.
• Distinct Codes [Code Division Multiple Access (CDMA)]
- Users can transmit simultaneously using the same frequency
band, but with the help of different codes so that they can be
decoded to recognize a specific user.
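A minimal Python sketch of the CDMA idea (length-4 Walsh codes and one ±1 data bit per user, chosen purely for illustration): because the spreading codes are orthogonal, correlating the received sum with each user's code recovers that user's bit:

```python
import numpy as np

# Two mutually orthogonal length-4 Walsh spreading codes.
c1 = np.array([1, 1, 1, 1])
c2 = np.array([1, -1, 1, -1])

b1, b2 = 1, -1                 # one data bit per user, mapped to +1/-1
channel = b1 * c1 + b2 * c2    # both users transmit at once in the same band

print(channel @ c1 / len(c1))  # 1.0  -> user 1 sent +1
print(channel @ c2 / len(c2))  # -1.0 -> user 2 sent -1
```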
Information Reception
Demodulation
Demodulation refers to the recovery of a waveform(baseband pulse).
Detection
Detection is a decision-making technique regarding the digital meaning
of a recovered waveform.
Demultiplexing
De-multiplexing is the reverse of the multiplexing (MUX) process, which
combines multiple unrelated analog or digital signal streams into one
signal over a single shared medium; de-multiplexing separates them again.
Decrypting
Decryption is the process of taking encoded or encrypted text or other
data and converting it back into text that you or the computer can
read and understand.
Outline
▪ Classification of Signals

▪ Fourier Series

▪ Spectral Density

▪ Autocorrelation

▪ Random Signals
Classification of Signals
Deterministic and Random Signals
A signal can be classified as deterministic, meaning that there is no
uncertainty with respect to its value at any time, or as random, meaning that
there is some degree of uncertainty before the signal actually occurs.
Deterministic signals or waveforms are modeled by explicit mathematical
expressions, such as x(t) = 5 sin(8t).

For a random waveform it is not possible to write an explicit expression.
However, when examined over a long period, a random waveform, also
referred to as a random process, may exhibit certain regularities that
can be described in terms of probabilities and statistical averages. Such
a model, in the form of a probabilistic description of the random process,
is particularly useful for characterizing signals and noise in
communication systems.
Analog and Discrete Signals
An analog signal x(t) is a continuous function of time; that is, x(t)
is uniquely defined for all t. An electrical analog signal arises when a
physical waveform(e.g. speech) is converted into an electrical signal by
means of a transducer. An example of a speech waveform is shown in Figure 1.
Figure 1: An example of a speech signal
Figure 2: A continuous time signal
By comparison, discrete-time signals are defined only at discrete
times, and consequently for these signals, the independent variable takes
only a discrete set of values.

The discrete-time independent variable is denoted by n. An illustration
of a discrete-time signal x[n] is shown in Figure 3.
A very important class of discrete-time signals arises from sampling a
continuous-time signal. In this case, the discrete-time signal x[n]
represents successive samples of an underlying phenomenon for which the
independent variable is continuous.
Figure 3: A discrete-time signal
Periodic and Non-Periodic Signals
A signal x(t) is called periodic in time if there exists a constant
T > 0 such that
x(t) = x(t + T) for −∞ < t < ∞ (1)
where t denotes time. The smallest value of T that satisfies the
condition is called the period of x(t). The period T defines the
duration of one complete cycle of x(t). An example of a periodic
signal is shown in Figure 4.
A signal for which there is no value of T that satisfies Equation (1)
is called a non-periodic signal.
Figure 4: A continuous-time periodic signal
Energy and Power Signals
An electrical signal can be represented as a voltage v(t) or a current i(t)
with instantaneous power p(t) across a resistor R defined by
p(t) = \frac{v^2(t)}{R} (2)
or
p(t) = i^2(t)\,R (3)
In communication systems it is conventional to normalize by assuming
R = 1 Ω. The normalization convention allows us to express the
instantaneous power as
p(t) = x^2(t) (4)
where x(t) is either a voltage or a current signal. The energy dissipated
during the time interval (−T/2, T/2) by a real signal with instantaneous
power expressed by Equation (4) can be written as
E_x^T = \int_{-T/2}^{T/2} x^2(t)\,dt (5)

and the average power dissipated by the signal during the interval is

P_x^T = \frac{1}{T} E_x^T = \frac{1}{T}\int_{-T/2}^{T/2} x^2(t)\,dt (6)
We classify x(t) as an energy signal if, and only if, it has non-zero
but finite energy (0 < Ex < ∞) for all time, where
E_x = \lim_{T \to \infty} \int_{-T/2}^{T/2} x^2(t)\,dt (7)
    = \int_{-\infty}^{\infty} x^2(t)\,dt (8)
A signal is defined as a power signal if, and only if, it has finite but
nonzero power (0 < Px < ∞) for all time, where

P_x = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} x^2(t)\,dt (9)

An energy signal has finite energy but zero average power, whereas
a power signal has finite average power but infinite energy.

A waveform in a system may be constrained in either its power or
energy values.
As a general rule, periodic and random signals are classified as
power signals, while signals that are both deterministic and
non-periodic are classified as energy signals.
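To make the distinction concrete, here is a small Python sketch (the two waveforms are arbitrary choices): as the observation window T grows, the energy of a decaying pulse converges while the power of a periodic cosine converges instead:

```python
import numpy as np

dt = 1e-3
for T in (10.0, 100.0, 1000.0):
    t = np.arange(-T / 2, T / 2, dt)
    pulse = np.exp(-np.abs(t))        # deterministic, non-periodic: energy signal
    cosine = np.cos(2 * np.pi * t)    # periodic: power signal
    E = np.sum(pulse ** 2) * dt       # converges to 1 joule as T grows
    P = np.sum(cosine ** 2) * dt / T  # converges to 0.5 watt as T grows
    print(T, E, P)
```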
The Unit Impulse Function
The impulse function (Dirac delta function δ(t)) is an abstraction:
an infinitely large amplitude pulse with zero pulse width and unity
weight (area under the pulse), concentrated at the point where its
argument is zero. The unit impulse is characterized by the following
relationships:
\int_{-\infty}^{\infty} \delta(t)\,dt = 1, \qquad \delta(t) = 0 \ \text{for} \ t \neq 0 (10)

\int_{-\infty}^{\infty} x(t)\,\delta(t - t_0)\,dt = x(t_0) (11)
Equation (11) is known as the shifting (or sifting) property of the unit
impulse function; the unit impulse multiplier selects a sample of the
function x(t) evaluated at t = t0.
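A numerical illustration of the sifting property (a Python sketch approximating δ(t − t0) by a narrow unit-area rectangular pulse; the test waveform and t0 are arbitrary):

```python
import numpy as np

dt = 1e-5
t = np.arange(-1, 1, dt)
t0, width = 0.3, 1e-3
delta = (np.abs(t - t0) < width / 2) / width  # height 1/width, area ~ 1

x = np.sin(2 * np.pi * t)
sample = np.sum(x * delta) * dt               # approximates x(t0)
print(sample, np.sin(2 * np.pi * t0))         # both ~ 0.951
```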
Fourier Series
Fourier Series Expansion
A periodic signal f(t) with period T_0 = 1/f_0 can be expanded in
complex exponentials as
f(t) = \sum_{n=-\infty}^{\infty} F_n e^{j 2\pi n f_0 t}, \qquad F_n = \frac{1}{T_0}\int_{T_0} f(t)\, e^{-j 2\pi n f_0 t}\,dt (12)
Example
Evaluate Fn in the figure above.
Solution
Fourier Transform Pair
F(j\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\,dt, \qquad f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(j\omega)\, e^{j\omega t}\,d\omega
Example
Find F(jω) if f (t) = A cos(2πf0 t).
Solution
Using the transform pair, F(j\omega) = A\pi\,[\delta(\omega - 2\pi f_0) + \delta(\omega + 2\pi f_0)].
Spectral Density
The spectral density of a signal characterizes the distribution of the signal’s
energy or power in the frequency domain. This concept is particularly
important when considering filtering in communication systems. The energy
spectral density(ESD) or the power spectral density(PSD) is used to
evaluate the signal and noise at the filter output.
Energy Spectral Density
The total energy of a real-valued energy signal x(t), defined over
the interval (−∞, ∞), is described by Equation (13). Using Parseval's
theorem, we can relate the energy of such a signal expressed in the
time domain to the energy expressed in the frequency domain, as
E_x = \int_{-\infty}^{\infty} x^2(t)\,dt = \int_{-\infty}^{\infty} |X(f)|^2\,df (13)

where X (f ) is the Fourier transform of the non-periodic signal x(t).


Let ψx(f) denote the squared magnitude spectrum, defined as
\psi_x(f) = |X(f)|^2
The quantity ψx (f ) is the waveform energy spectral density(ESD)
of the signal x(t). Therefore from Equation(13), we can express the
total energy of x(t) by integrating the spectral density with respect to
frequency:
E_x = \int_{-\infty}^{\infty} \psi_x(f)\,df (14)

This equation states that the energy of the signal is equal to the area
under the ψx(f) versus frequency curve. Energy spectral density describes
the signal energy per unit bandwidth, measured in joules/hertz. The
energy spectral density is symmetrical in frequency about the origin, and
thus the total energy of the signal x(t) can be expressed as:
E_x = 2\int_{0}^{\infty} \psi_x(f)\,df (15)
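A numerical check of this relation (a Python sketch with an arbitrary decaying pulse, using the FFT grid to stand in for X(f)):

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.exp(-10 * t)                  # an energy signal: a decaying pulse

E_time = np.sum(x ** 2) / fs         # energy computed in the time domain

X = np.fft.fft(x) / fs               # samples approximating X(f)
psi = np.abs(X) ** 2                 # ESD samples: psi_x(f) = |X(f)|^2
E_freq = np.sum(psi) * fs / len(x)   # integrate ESD over frequency (df = fs/N)

print(E_time, E_freq)                # both ~ 0.05: the two domains agree
```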
Power Spectral Density
The average power Px of a real-valued power signal x(t) is defined in
Equation(16). If x(t) is a periodic signal with period T0 , it is classified as a
power signal. The expression for the average power takes the form of
Equation(17), where the time average is taken over the signal period T0 .

Parseval's theorem for a real-valued power signal is given by

P_x = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} x^2(t)\,dt (16)
P_x = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} x^2(t)\,dt = \sum_{n=-\infty}^{\infty} |C_n|^2 (17)

where the Cn terms are the complex Fourier series coefficients of the
periodic signal.
The power spectral density (PSD) function Gx(f) of the periodic signal
x(t) is a real, even and non-negative function of frequency that gives
the distribution of the power of x(t) in the frequency domain, defined as:

G_x(f) = \sum_{n=-\infty}^{\infty} |C_n|^2\, \delta(f - n f_0) (18)

The PSD of a periodic signal is therefore a discrete function of frequency.
Using the PSD function in Equation (18), the average normalized power
of a real-valued signal is
P_x = \int_{-\infty}^{\infty} G_x(f)\,df = 2\int_{0}^{\infty} G_x(f)\,df (19)

Equation (18) describes the PSD of a periodic (power) signal only. If
x(t) is a non-periodic signal it cannot be expressed by a Fourier series,
and if it is a non-periodic power signal (having infinite energy) it may
not have a Fourier transform.
However, we may still express the power spectral density of such signals
in the limiting sense. If we form a truncated version xT(t) of the
non-periodic power signal x(t) by observing it only in the interval
(−T/2, T/2), then xT(t) has finite energy and has a proper Fourier
transform XT(f). It can be shown that the power spectral density of
the non-periodic x(t) can then be defined in the limit as

G_x(f) = \lim_{T \to \infty} \frac{1}{T}\,|X_T(f)|^2 (20)
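Equation (20) is what a periodogram estimates. A Python sketch (the sample rate, window length and test cosine are illustrative assumptions) recovers the average power of a cosine from its truncated spectrum:

```python
import numpy as np

fs, T = 1000, 10.0                  # sample rate (Hz) and observation window (s)
t = np.arange(0, T, 1 / fs)
x = 2 * np.cos(2 * np.pi * 50 * t)  # power signal with P_x = A^2 / 2 = 2 W

X_T = np.fft.fft(x) / fs            # Fourier transform of the truncated x(t)
G = np.abs(X_T) ** 2 / T            # G_x(f) ~ (1/T) |X_T(f)|^2
P = np.sum(G) * fs / len(x)         # integrate the PSD over frequency
print(P)                            # ~ 2.0 W
```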
Example: Average Normalized Power

Find the average normalized power in the waveform
x(t) = A cos(2πf0 t), using time averaging.
Solution
Using Equation (16) and averaging over one period T0 = 1/f0, we have

P_x = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} A^2 \cos^2(2\pi f_0 t)\,dt = \frac{A^2}{2} (21)
Example
Using time averaging, find the average normalized power in the
waveform x(t) = 10 cos 10t + 20 cos 20t.
Solution
The two sinusoids are at different frequencies, so their powers add:
P_x = \frac{10^2}{2} + \frac{20^2}{2} = 50 + 200 = 250 W
Example

• Determine the average power Px in f(t) = 1 + sin(2πf0 t) using
time averaging.
Solution
• Time averaging over one period: the DC term and the sinusoid
are orthogonal, so P_x = 1^2 + \tfrac{1}{2}(1)^2 = 1.5 W.
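A quick numerical confirmation by time averaging (a Python sketch; the choice of f0 is arbitrary):

```python
import numpy as np

f0 = 5.0
t = np.arange(0, 1 / f0, 1e-6)   # one full period T0 = 1/f0
x = 1 + np.sin(2 * np.pi * f0 * t)

P = np.mean(x ** 2)              # time average of x^2 over one period
print(P)                         # ~ 1.5 W: DC power 1 plus sinusoid power 1/2
```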