FTIC Chapter 3 - Part A

Chapter III discusses channel coding in discrete channels, focusing on the mathematical modeling of noisy channels using statistical models and conditional probabilities. It introduces concepts such as transition matrices, average mutual information, and throughput, emphasizing the impact of noise on information transmission. The chapter also highlights the differences between ideal and real channels, and the importance of understanding channel characteristics for effective communication systems.

Chapter III: Channel Coding

I. Discrete Channels:
a) Basics:
➢ In Chapter 1, we saw that in an ITS (Information Transmission System), the channel is generally the support over which the digital or discrete information is transmitted. Physically, the channel can be a wire or a cable of various kinds, including optical-fiber cable.
➢ Nowadays, ITS are most frequently wireless transmission systems, thanks to the tremendous development of new radio technologies for wireless networks, known as New Radio NGN (Next Generation Networks): 2G (GPRS, EDGE), 3G (UMTS, WIMAX), 4G, 4.5G, 4.9G (4G LTE), 5G (NSA, SA), and the near-future 6G. All of these systems are wireless systems called Radio Mobile Networks.
➢ In wireless systems the channel is free space: after modulation, the information is sent out from the transmitter via aerials or antennas specific to each wireless ITS. The transmission then exploits the properties of electromagnetic waves to carry the information by wave propagation through the free-space channel.
➢ As we saw in Chapter 1, the channel can be perturbed by noise of several kinds, internal or external to the channel, resulting in large losses of useful information; this loss can be permanent.
➢ In this course, to simplify channel modelling, we will only use the mathematical model of the channel, which helps us find the formulas that best characterize a noisy channel (real channel). In this case, the transmission is random.
➢ Since transmission over a real channel is randomized by the many noise sources that can blur the information, degrade it, or smear it out (make it vanish, which is called complete fading of the information), the input of the channel is linked to its output by a probability-based model, as for the source. This mathematical model governed by probability laws is called the "Statistical Model".
b) Statistical Model of a Channel:

➢ The mathematical Statistical Model of a Channel is described by the Conditional Probability: P[Y/X].
➢ That is to say: "the probability of receiving Y at the output of the channel when X is already present at the input of the channel" (most important principle).
➢ Notation:
$$P[Y/X] = P[y_j/x_i] = P[j/i], \quad i = 1, \dots, M, \;\; j = 1, \dots, N$$
for all discrete, memoryless, and stationary channels.

c) Definitions:
➢ If both the input X and the output Y of a channel are discrete, the channel is said to be discrete.
➢ Equivalently, a channel is discrete if the symbols that go through it are themselves discrete.
➢ The input alphabet X is then given by $X = \{x_i\},\ i = 1, \dots, M$; and let $Y = \{y_j\},\ j = 1, \dots, N$ be the output alphabet.
➢ A channel is said to be memoryless if and only if an output symbol depends only on the corresponding input symbol taken at the same time.
➢ The output of the channel is then linked to its input by the previously defined conditional probability. These conditional probabilities form a chessboard-like scheme; that is, they are organized in an (N×M) matrix, a matrix of N rows and M columns. Each element of this important matrix is obtained by computing the transition probability between one element of the X alphabet and its corresponding element of the Y alphabet.
This conditional (N×M) matrix, also called Transition Matrix or Noise Matrix, can be written as:

$$P[Y/X] = P[y_j/x_i] = P[j/i] = T = \begin{pmatrix} P[y_1/x_1] & \cdots & P[y_1/x_M] \\ \vdots & & \vdots \\ P[y_N/x_1] & \cdots & P[y_N/x_M] \end{pmatrix}, \quad \text{with: } \sum_{j} P[y_j/x_i] = 1$$

➢ The previous matrix is called Transition Matrix of the Channel or the Noise Matrix or the Average Errors Matrix of
the Channel.
➢ A channel is said to be a "Symmetric Channel" if all of its rows are permutations of one another and all of its columns are permutations of one another.
➢ Because of noise of multiple sources or natures, internal or external to the channel, the received symbols yj are generally different from those transmitted (the input of the channel). The channel is then said to be perturbed by noise, i.e. a Noisy Channel (many errors can occur during transmission through it). Such a channel is called a "Real Channel", whereas an "Ideal Channel" is a perfect channel that is not perturbed by noise.
➢ From Chapters 1 and 2, we recall the following entropy relations:
$$I(X,Y) = H(X) - H(X/Y) = H(Y) - H(Y/X) = H(X) + H(Y) - H(X \cap Y)$$
$$\text{with: } 0 \le I(X,Y) \le H(X)$$
➢ $I(X,Y)$ is called the Average Mutual Information of the channel.
➢ $H(X)$ and $H(Y)$ are respectively the entropies of the input and output alphabets of the channel.
➢ Obviously, a good transmission in the channel makes the term $H(X/Y)$ negligible compared with $H(X)$. So generally $H(X/Y) \cong \varepsilon$, and hence $I(X,Y) \cong H(X)$. The conditional entropy $H(X/Y)$ is called the Ambiguity or the equivocal information, expressing the uncertainty that remains on X knowing Y.
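➢ The relations above can be checked numerically. Below is a minimal sketch (assuming NumPy, with a hypothetical 2-input/2-output noise matrix) that computes H(X), H(X/Y) and I(X,Y) and verifies the bound 0 ≤ I(X,Y) ≤ H(X):

```python
# Minimal sketch of the Section I quantities for a hypothetical noisy channel.
import numpy as np

p = np.array([0.6, 0.4])                # input law P[X] (assumed values)
T = np.array([[0.9, 0.2],               # noise matrix T = P[y_j / x_i]:
              [0.1, 0.8]])              # N rows (outputs) x M columns (inputs)
assert np.allclose(T.sum(axis=0), 1.0)  # each column sums to 1

joint = T * p                           # P[i ∩ j] = P[i] . P[j/i]
q = joint.sum(axis=1)                   # output law P[Y]

def H(dist):
    """Entropy in bits, ignoring zero-probability terms."""
    d = np.asarray(dist).flatten()
    d = d[d > 0]
    return -np.sum(d * np.log2(d))

H_X = H(p)
H_X_given_Y = H(joint) - H(q)           # H(X/Y) = H(X ∩ Y) - H(Y)
I = H_X - H_X_given_Y                   # average mutual information I(X,Y)
print(f"H(X)={H_X:.4f}  H(X/Y)={H_X_given_Y:.4f}  I(X,Y)={I:.4f} bits")
assert 0 <= I <= H_X + 1e-12            # 0 <= I(X,Y) <= H(X)
```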

II. Average flowrate (Throughput) of a discrete channel


➢ We also call it the average transmission speed over a discrete channel.
➢ In Chapter 2, we learnt that the average flowrate of a source, in bits per second (bps), was defined by $D_S = H(X) \cdot r_s$; this flowrate of the source is now taken as that of the channel's input and becomes:
➢ $D_{X_c} = D_S = H(X) \cdot r_s$, in bits per second (where $r_s$ = rhythm of the source, or source frequency, in symbols per second)
➢ We also define in the channel the transmission flowrate, which is given by:
$$D_t = \big(H(X) - H(X/Y)\big) \cdot r_s = \big(H(Y) - H(Y/X)\big) \cdot r_s \quad \text{(in bits per second)}$$
➢ Where: $D_t$ characterizes the channel transmission and $H(X/Y)$ expresses the error that occurs between the output and the input of the channel.
➢ Important observations:
➢ The previous equality takes into consideration that the channel is noisy.
➢ If the level of noise is very high, the channel is highly perturbed and its output Y becomes statistically independent of its input X; we can then easily show that:
➢ $H(X/Y) = H(X) \Rightarrow D_t = 0$ b.p.s.: no information is transmitted to the receiver, i.e. all of the information is lost in the channel. We then speak of information SCRAMBLING (BROUILLAGE de l'information). A malevolent hacker can use a scrambler to stop the transmission of an ITS (see the sketch below).
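➢ As an illustration, the sketch below (a hypothetical binary symmetric channel with crossover probability eps, and an assumed source rhythm r_s) computes D_t for several noise levels; at eps = 0.5 the output is statistically independent of the input and D_t drops to 0 b.p.s. (scrambling):

```python
# Minimal sketch: transmission flowrate D_t of a hypothetical binary
# symmetric channel; at crossover 0.5 the flowrate collapses to zero.
import numpy as np

def H(dist):
    d = np.asarray(dist, dtype=float).flatten()
    d = d[d > 0]
    return -np.sum(d * np.log2(d))

def D_t(p, T, r_s):
    """(H(Y) - H(Y/X)) * r_s, in bits per second."""
    joint = T * p                        # P[i ∩ j]
    q = joint.sum(axis=1)                # output law P[Y]
    H_Y_given_X = H(joint) - H(p)        # chain rule: H(Y/X) = H(X ∩ Y) - H(X)
    return (H(q) - H_Y_given_X) * r_s

p = np.array([0.5, 0.5])                 # input law (assumed)
r_s = 1000.0                             # source rhythm in symbols/s (assumed)
for eps in (0.01, 0.1, 0.5):
    T = np.array([[1 - eps, eps],
                  [eps, 1 - eps]])
    print(f"eps = {eps}: D_t = {D_t(p, T, r_s):.1f} b.p.s.")
# eps = 0.5 prints 0.0: all of the information is lost in the channel.
```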
III. Matrix Model of a Discrete Channel. Probabilities and Entropies :
❑ A matrix model of a discrete channel is given as follows:
❑ Let $X = \{x_i\}$ be the input alphabet of the channel for $i = 1, \dots, m$; $P[X]$ is then the transmission probability matrix, and we can write:
$$P[X] = \mathrm{Diag}(p_i) = \begin{pmatrix} p_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & p_m \end{pmatrix}; \quad \text{with: } \sum_{i=1}^{m} p_i = 1$$

❑ And the matrix model of the input X, with the corresponding probability of each symbol, is:
$$X : \begin{pmatrix} x_i \\ p_i \end{pmatrix}, \quad i = 1, \dots, m$$

❑ The probability of receiving $y_j$ is denoted $q_j$, so that $q_j = P[y_j]$; $P[Y]$ is the reception probability matrix, which, like $P[X]$, can be written as a diagonal matrix.
❑ The transition matrix, called the noise matrix, has already been written $P[Y/X] = [q_{j/i}]$.
❑ $q_{j/i}$ is the general term of the noise matrix; it expresses the probability of receiving $y_j$ knowing that $x_i$ has already been transmitted.
❑ The matrix P[Y/X] represents the statistical pattern of the channel and is obtained experimentally, so the transition can be drawn as follows:
$$X \Longrightarrow \text{Transition} = \text{Channel} \Longrightarrow Y$$

$$P[X] = \text{Matrix} \Longrightarrow P[Y/X] = \text{Matrix} \Longrightarrow P[Y] = \text{Matrix}$$

Matrix Model of the discrete channel

➢ General matrix-model scheme of the channel: each input symbol $x_i$ ($i = 1, \dots, m$) is connected to each output symbol $y_j$ ($j = 1, \dots, n$) by a branch labelled with the transition probability $q_{j/i}$, with $P[Y/X] = [q_{j/i}] = P[j/i]$.
Input Characterization and Matrix:

$$X : \begin{pmatrix} x_i \\ p_i \end{pmatrix} \quad \text{with: } P[X] = \begin{pmatrix} p_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & p_m \end{pmatrix} = \mathrm{Diag}(p_i) = \text{Input Probability Matrix of the Channel}$$

Transition Matrix T, or Noise Matrix: $T = P[Y/X] = [q_{j/i}] = P[j/i]$

$$T = \begin{pmatrix} P[y_1/x_1] & P[y_1/x_2] & \cdots & P[y_1/x_m] \\ \vdots & \vdots & & \vdots \\ P[y_n/x_1] & P[y_n/x_2] & \cdots & P[y_n/x_m] \end{pmatrix} = \begin{pmatrix} q_{1/1} & q_{1/2} & \cdots & q_{1/m} \\ \vdots & \vdots & & \vdots \\ q_{n/1} & q_{n/2} & \cdots & q_{n/m} \end{pmatrix} = [q_{j/i}]$$

With: $i = 1, \dots, m$ and $j = 1, \dots, n$.
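Given P[X] and T, the scheme P[X] ⟹ P[Y/X] ⟹ P[Y] reduces to a matrix-vector product. A minimal sketch, assuming NumPy and hypothetical values:

```python
# Minimal sketch: output law q = T p, with T in its (n x m) form.
import numpy as np

p = np.array([0.7, 0.3])            # input law P[X] (assumed)
T = np.array([[0.8, 0.1],           # T = [q_{j/i}]: n = 3 outputs, m = 2 inputs
              [0.1, 0.2],
              [0.1, 0.7]])
q = T @ p                           # q_j = sum_i q_{j/i} . p_i
print(q, q.sum())                   # output law P[Y]; it sums to 1
```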

Observation: this matrix is obtained experimentally by carrying out the experiment. In practice, the matrix is either given or simply derived from the experiment.
The joint matrix $P[X,Y] = P[X \cap Y] = [P_{i,j}]$, with rows indexed by $x_i$ and columns by $y_j$, gathers the joint probabilities $P[i \cap j] = P[i,j]$:

$$P[X,Y] = P[X \cap Y] = \begin{pmatrix} P_{1,1} & \cdots & P_{1,n} \\ \vdots & P_{i,j} & \vdots \\ P_{m,1} & \cdots & P_{m,n} \end{pmatrix}$$

Summing each row gives the marginal law relative to X: $\sum_{j=1}^{n} P_{i,j} = p_i$; summing each column gives the marginal law relative to Y: $\sum_{i=1}^{m} P_{i,j} = q_j$; and $\sum_{i=1}^{m} p_i = \sum_{j=1}^{n} q_j = 1$.
Consequently:

➢ Knowing experimentally that:
$$P[Y/X] = P[y/x] = P[j/i] = \frac{P[X \cap Y]}{P[X]} = \frac{P[x \cap y]}{P[x]} = \frac{P[i \cap j]}{P[i]}$$
➢ We can derive that $P[x \cap y] = P[x] \cdot P[y/x]$, known then as a deduced experimental result.

➢ Whereas, we can then calculate:
$$P[X/Y] = \frac{P[X] \cdot P[Y/X]}{P[Y]}, \quad \text{which we can also write:} \quad P[x/y] = \frac{P[x] \cdot P[y/x]}{P[y]} = P[i/j] = \frac{P[i] \cdot P[j/i]}{P[j]}$$
so by calculation we can obtain the matrix $P[X/Y]$:

$$P[X/Y] = \begin{pmatrix} P[x_1/y_1] & P[x_1/y_2] & \cdots & P[x_1/y_n] \\ \vdots & \vdots & & \vdots \\ P[x_m/y_1] & P[x_m/y_2] & \cdots & P[x_m/y_n] \end{pmatrix}$$
➢ which is always calculated on the basis of prior knowledge of the matrix P[Y/X], which is itself given experimentally.
Channel’s Joint Probability Matrix Notation :
➢ The joint Probability Matrix is defined by: (all of the following notations)
➢ $P[X \cap Y] = P[Y \cap X] = P[X,Y] = P[Y,X] = P[i \cap j] = P[j \cap i] = P[i,j] = P[j,i] = P_{ij} = P_{ji}$
➢ We can also simply write:
$$P_{ij} = P_{ji} = P[i] \cdot P[j/i] = P[j] \cdot P[i/j] = P[x] \cdot P[y/x] = P[y] \cdot P[x/y]$$
Consequently, in the channel, the experimental data given is $P[x] \cdot P[y/x] = P_{ij} = P_{ji}$, and the corresponding matrix notation will be:

$$\begin{pmatrix} p_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & p_m \end{pmatrix} \begin{pmatrix} q_{1/1} & \cdots & q_{n/1} \\ \vdots & \ddots & \vdots \\ q_{1/m} & \cdots & q_{n/m} \end{pmatrix} = \begin{pmatrix} p_{11} & \cdots & p_{1n} \\ \vdots & \ddots & \vdots \\ p_{m1} & \cdots & p_{mn} \end{pmatrix}$$
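The matrix product above is straightforward to compute. The sketch below (assuming NumPy, reusing the hypothetical 2-input/3-output values of the earlier sketch) forms the joint matrix $\mathrm{Diag}(p_i) \cdot T^{\mathsf{T}}$, recovers the marginal law $q_j$ by column sums, and derives P[X/Y] through $P[i/j] = P_{ij}/q_j$:

```python
# Minimal sketch: joint matrix Diag(p) . T^T and the matrix P[X/Y].
import numpy as np

p = np.array([0.7, 0.3])                     # P[X] as a vector (assumed)
T = np.array([[0.8, 0.1],                    # noise matrix [q_{j/i}],
              [0.1, 0.2],                    # n = 3 outputs, m = 2 inputs
              [0.1, 0.7]])

joint = np.diag(p) @ T.T                     # (m x n): P_ij = p_i . q_{j/i}
q = joint.sum(axis=0)                        # marginal law P[Y] (column sums)
assert np.isclose(joint.sum(), 1.0)          # all P_ij sum to 1

posterior = joint / q                        # P[i/j] = P_ij / q_j
assert np.allclose(posterior.sum(axis=0), 1) # each column of P[X/Y] sums to 1
print("P[X,Y] =\n", joint)
print("P[X/Y] =\n", posterior)
```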
❑ Channel’s Entropies :
➢ Input Entropy: The Channel Probability Input Matrix leads to a channel’s Input Entropy H[X],
that will be defined as:
➢ $P[X] \Rightarrow H_c(X) = H(X) = -\sum_{i=1}^{m} p_i \log_2 p_i$
➢ Output Entropy: the channel output probability matrix P[Y], calculated experimentally, leads to the channel's output entropy H[Y], defined as:
➢ $P[Y] \Rightarrow H_c(Y) = H(Y) = -\sum_{j=1}^{n} q_j \log_2 q_j$
➢ Channel Conditional Entropies
➢ Given the Transition Matrix T that Characterizes the Channel regarding its quality and its level of noise, the
Channel Conditional Entropies can be derived as follows :
▪ Conditional Entropy ( Output/Input= Receiver knowing Transmitter): In this case, Cond. Entropy is
noted H[Y/X] and is then defined by:
$$H(Y/X) = H(j/i) = -\sum_{i=1}^{m} \sum_{j=1}^{n} P[i \cap j] \log_2 P[j/i] \quad (\text{see Chapters 1 and 2})$$
It should be calculated on the basis of T = matrix P[Y/X], known by experimentation.
▪ Conditional Entropy ( Input/Output= Transmitter knowing Receiver):
▪ $H(X/Y) = H(i/j) = -\sum_{j=1}^{n} \sum_{i=1}^{m} P[i \cap j] \log_2 P[i/j] \quad (\text{see Chapters 1 and 2})$
➢ It should be calculated on the basis of the matrix P[X/Y], which is itself computed from prior knowledge of the matrix T = P[Y/X] (noise matrix). H[X/Y] is the Equivocation Entropy.
➢ Bivariate Joint Entropy : This Entropy is defined as:
➢ $H(X \cap Y) = -\sum_{i=1}^{m} \sum_{j=1}^{n} P[i \cap j] \log_2 P[i \cap j]$
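All of these entropies can be computed directly from the joint matrix, using the chain-rule identities $H(Y/X) = H(X \cap Y) - H(X)$ and $H(X/Y) = H(X \cap Y) - H(Y)$, which are equivalent to the double-sum definitions above. A minimal sketch, reusing the hypothetical joint matrix of the previous sketch:

```python
# Minimal sketch: the five channel entropies from a hypothetical joint matrix.
import numpy as np

joint = np.array([[0.56, 0.07, 0.07],    # P[i ∩ j]; rows x_i, columns y_j
                  [0.03, 0.06, 0.21]])

def H(dist):
    d = np.asarray(dist).flatten()
    d = d[d > 0]
    return -np.sum(d * np.log2(d))

p = joint.sum(axis=1)                    # marginal law P[X]
q = joint.sum(axis=0)                    # marginal law P[Y]
H_X, H_Y, H_XY = H(p), H(q), H(joint)    # input, output and joint entropies
H_Y_given_X = H_XY - H_X                 # H(Y/X)
H_X_given_Y = H_XY - H_Y                 # H(X/Y) (equivocation)
I = H_X - H_X_given_Y                    # average mutual information
assert np.isclose(I, H_Y - H_Y_given_X)  # both forms of I(X,Y) agree
print(f"H(X)={H_X:.3f}  H(Y)={H_Y:.3f}  H(X∩Y)={H_XY:.3f}  "
      f"H(X/Y)={H_X_given_Y:.3f}  H(Y/X)={H_Y_given_X:.3f}  I={I:.3f}")
```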
➢ Particular cases: 1- If the channel is noiseless (sans bruit) = ideal or perfect channel:
Then the transition matrix is given by:
$$P[Y/X] = \begin{pmatrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix} = I_{m=n} \quad (\text{Identity Matrix})$$
Consequences: we get $H(X/Y) = H(Y/X) = 0$; no information is lost in the channel. The source is perfectly adapted to the channel, and all of the transmitted information passes through it: the channel is said to be of excellent quality.
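A quick numerical check of the noiseless case (hypothetical input law, T = identity):

```python
# Minimal sketch: with T = identity, H(X/Y) = H(Y/X) = 0 (nothing is lost).
import numpy as np

p = np.array([0.7, 0.3])                 # input law (assumed)
T = np.eye(2)                            # P[Y/X] = identity matrix: no noise
joint = np.diag(p) @ T.T                 # joint matrix = Diag(p) itself

def H(dist):
    d = np.asarray(dist).flatten()
    d = d[d > 0]
    return -np.sum(d * np.log2(d))

H_XY = H(joint)
print(H_XY - H(p), H_XY - H(joint.sum(axis=0)))  # H(Y/X) = H(X/Y) = 0
```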
2- In the case of a very noisy channel:
a)- The output of the channel becomes statistically independent of its input; we then say that the channel presents no correlation between its input and its output, hence:
$$P[i/j] = \frac{P[i \cap j]}{P[j]} = \frac{P[i] \cdot P[j]}{P[j]} = P[i] \quad \Longrightarrow \quad H(X/Y) = H(X)$$
$$P[j/i] = \frac{P[i \cap j]}{P[i]} = \frac{P[i] \cdot P[j]}{P[i]} = P[j] \quad \Longrightarrow \quad H(Y/X) = H(Y)$$
b) In the case of a real Channel, noise always exists and we do have:
$$0 \le H(X/Y) \le H(X), \qquad 0 \le H(Y/X) \le H(Y)$$
3- Channel Capacity: the concept of channel capacity was introduced by Shannon and Hartley, who stated that "the capacity of a noisy discrete memoryless channel, in bits per second, is defined by calculating the maximum of its average transmission flowrate" (previously defined as $D_t = (H(X) - H(X/Y)) \cdot r_s = (H(Y) - H(Y/X)) \cdot r_s$, in bits per second). So we get:

$$C_{bits/s} = \max D_t = \max_{P[X]} \big(H(X) - H(X/Y)\big) \cdot r_s = \max_{P[X]} I(X,Y) \cdot r_s \quad \text{(in bits per second)}$$

In the case $r_s = 1$ symbol per second (source rhythm), then: $C_{bits/s} = \max_{P[X]} I(X,Y)$.
4- Properties :
If $r_s = 1$ symbol per second (or one digit per second), we can show that:
$$C \ge 0, \; \text{because } I(X,Y) \ge 0$$
$$C \le \log_2 M, \; \text{because } C \le \max H(X) = \log_2 M$$
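As a concrete (hypothetical) instance, the capacity of a binary symmetric channel with crossover probability eps can be found by scanning input laws; it matches the well-known closed form $C = 1 - H_2(\text{eps})$, and both properties above hold, since $\log_2 M = 1$ bit for M = 2 input symbols:

```python
# Minimal sketch: capacity of a binary symmetric channel (assumed eps)
# by brute-force maximization of I(X,Y) over the input distribution.
import numpy as np

def H2(x):
    """Binary entropy function, in bits."""
    return 0.0 if x in (0.0, 1.0) else -x*np.log2(x) - (1 - x)*np.log2(1 - x)

def I(p0, eps):
    """Mutual information of a BSC for the input law (p0, 1 - p0)."""
    q0 = p0*(1 - eps) + (1 - p0)*eps     # P[y = 0]
    return H2(q0) - H2(eps)              # I = H(Y) - H(Y/X), with H(Y/X) = H2(eps)

eps = 0.1
C = max(I(p0, eps) for p0 in np.linspace(0.0, 1.0, 10001))
print(f"C ≈ {C:.4f} bits/symbol, closed form = {1 - H2(eps):.4f}")
# 0 <= C <= log2(2) = 1, as expected; the maximum is reached at p0 = 0.5.
```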
5- Channel Redundancy: once we have calculated the channel capacity in bits per second, we can derive the channel redundancy, which is defined as:
$$R_c = C - I(X,Y) \quad \text{(same unit)}$$
and if we divide this equation by C, we obtain the redundancy factor, in percentages:
$$\rho_c = 1 - \frac{I(X,Y)}{C}$$
6- Channel Efficiency: defined as the ratio
$$\eta_c = \frac{I(X,Y)}{C}, \quad \text{also in percentages}$$
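A small worked example of redundancy and efficiency, with assumed (hypothetical) values of C and I(X,Y):

```python
# Minimal sketch: channel redundancy and efficiency for assumed values.
C = 0.531                  # capacity in bits/symbol (e.g. the BSC above, eps = 0.1)
I_XY = 0.40                # achieved average mutual information (hypothetical)

R_c = C - I_XY             # channel redundancy, same unit as C
rho_c = 1 - I_XY / C       # redundancy factor
eta_c = I_XY / C           # channel efficiency
print(f"R_c = {R_c:.3f} bits/symbol, rho_c = {rho_c:.1%}, eta_c = {eta_c:.1%}")
```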
7- Hartley-Shannon Second formula for the Channel Capacity:
Hartley and Shannon derived a second formula for assessing the capacity of a channel, taking into consideration the important physical parameters that essentially characterize the channel's behavior, and thereby finding the limits of the channel capacity as these physical parameters vary. These important parameters are namely:
- The bandwidth of the channel, noted B, in hertz
- The power of the global signal that goes through the channel, noted Ps
- The average noise power per unit of bandwidth (noise power spectral density), noted N0
- The total average noise power over the bandwidth, noted P0 = N0 · B
This formula has been defined as:

$$C_{bits/s} = B \cdot \log_2\!\left(1 + \frac{P_S}{N_0 B}\right) = B \cdot \log_2\!\left(1 + \frac{P_S}{P_0}\right)$$

where we can easily recognize the previously defined physical parameters.


As Ps and P0 are expressly powers, their ratio can be physically considered as a "Signal-to-Noise Ratio" of the transmission in the channel, so we can write:

$$SNR = \frac{P_S}{P_0}, \quad \text{or in dB:} \quad SNR_{dB} = 10 \log_{10}\!\left(\frac{P_S}{P_0}\right)$$

(the factor is 10, not 20, because Ps and P0 are powers rather than amplitudes). This leads to the final expression:

$$C_{bits/s} = B \cdot \log_2(1 + SNR) \quad \text{(final expression)}$$

where SNR is the linear power ratio Ps/P0; an SNR given in dB must first be converted back to a linear ratio.
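A minimal numeric sketch of this formula, with hypothetical telephone-like values (B = 3100 Hz, SNR = 30 dB), including the dB-to-linear conversion:

```python
# Minimal sketch of the Hartley-Shannon formula (hypothetical values).
import math

B = 3100.0                      # channel bandwidth in Hz (assumed)
snr_db = 30.0                   # SNR in dB (assumed)
snr = 10 ** (snr_db / 10)       # convert back to the linear power ratio Ps/P0
C = B * math.log2(1 + snr)      # the capacity formula uses the LINEAR ratio
print(f"C = {C:.0f} bits/s")    # ≈ 30,898 bits/s
```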

----------------------------------------- End of Part A of Chapter 3---------------------------------------------------------------------
