
Lecture 3-1: Decision Theory

Content
▪ Signal space representation
▪ AWGN channel
▪ Receiver roles
▪ Orthonormal basis formulation
▪ Gram-Schmidt algorithm
▪ Signal space based on orthonormal basis (vector representation)
▪ ML and MAP criteria
▪ Received signals and noise at the receiver side
▪ Decision based on vector formulation
▪ Detection with MAP criterion
▪ Detection with ML criterion
▪ Voronoi region
Signal space representation
AWGN channel (1)

AWGN Channel (2)

▪Linear and time-invariant


▪Ideal frequency response H(f)=1
▪Add white Gaussian noise n(t)

AWGN Channel (3)

▪White Gaussian noise n(t)


▪ Ergodic random process
▪ Each random variable is a Gaussian random variable with zero mean
▪Constant spectral density Gn(f)=N0/2
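The channel model above can be sketched in discrete time. This is a minimal sketch, not the lecture's code; `fs` and `N0` are illustrative parameter names. Sampling white noise of (two-sided) PSD N0/2 at rate fs yields i.i.d. Gaussian samples with variance N0·fs/2.

```python
import numpy as np

# Minimal discrete-time sketch of the AWGN channel (fs and N0 are
# illustrative parameters, not values from the lecture).
rng = np.random.default_rng(0)

def awgn_channel(s, N0, fs):
    """Return r = s + n, where n is white Gaussian noise of PSD N0/2.

    Sampling noise of two-sided PSD N0/2 at rate fs gives i.i.d.
    zero-mean Gaussian samples with variance N0 * fs / 2.
    """
    sigma = np.sqrt(N0 * fs / 2)
    n = rng.normal(0.0, sigma, size=len(s))
    return s + n

fs = 1000.0                      # assumed sampling rate [samples/s]
s = np.ones(100_000)             # a long test waveform, s(t) = 1
r = awgn_channel(s, N0=2.0, fs=fs)
noise_var = np.var(r - s)        # expected close to N0 * fs / 2 = 1000
```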
Receiver (1)

▪ Transmission

uT → s(t) → r(t) = s(t) + n(t)

▪ PROBLEM: Given r(t) → Recover uT

▪ Divided into two steps:
▪ Given r(t), recover s(t) (difficult problem)
▪ Given s(t), recover uT (easy problem: the labeling is a one-to-one mapping)

Receiver (2)

uT → s(t) → r(t) = s(t) + n(t)

▪ PROBLEM: Given r(t) → Recover s(t)

Instead of working with real waveforms, the problem is easier to solve by working with VECTORS.

Receiver (3)

1. Given M, build an orthonormal basis B


2. Work in the signal space S generated by B
3. Each signal of S can be expressed as a linear
combination of the basis elements → each signal of
S corresponds to a real vector (= the coefficients of the
linear combination)

Basis (1)
▪Given the signal constellation M = { s1(t) , … , si(t), …, sm(t) }
▪ Build a basis B = { b1(t) , … , bj(t), …, bd(t) }    (d ≤ m)
▪ B = set of signals such that:

1. Orthogonal:  ∫_0^T bj(t) bi(t) dt = 0  when j ≠ i

2. Unitary energy:  ∫_0^T bj²(t) dt = 1

3. The number d is minimal and sufficient for writing each
signal of M as a linear combination:

si(t) = Σ_{j=1}^{d} sij bj(t),   sij ∈ R
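The three conditions above can be verified numerically for a candidate basis. A minimal sketch, assuming as an example the pair b1(t) = √(2/T) cos(2π f0 t), b2(t) = √(2/T) sin(2π f0 t) on [0, T], with f0 chosen so that [0, T] holds an integer number of cycles (the integrals are approximated by Riemann sums):

```python
import numpy as np

# Numerical check of orthogonality and unit energy for an assumed
# example basis on [0, T): b1 ~ cos, b2 ~ sin, each scaled by sqrt(2/T).
T, f0, N = 1.0, 5.0, 100_000      # f0*T = 5 whole cycles in the interval
dt = T / N
t = np.linspace(0.0, T, N, endpoint=False)

b1 = np.sqrt(2 / T) * np.cos(2 * np.pi * f0 * t)
b2 = np.sqrt(2 / T) * np.sin(2 * np.pi * f0 * t)

cross = np.sum(b1 * b2) * dt      # condition 1: should be ~0
e1 = np.sum(b1**2) * dt           # condition 2: should be ~1
e2 = np.sum(b2**2) * dt           # condition 2: should be ~1
```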
Basis (2)

1. Given M, how to build B ?


2. For simple constellations, it is not difficult to write a
basis B
3. In any case, remember that there exists an algorithm
which always provides a basis:

Gram-Schmidt algorithm

Gram-Schmidt algorithm (1)

M = { s1(t) , … , si(t), …, sm(t) }


Given s1(t) → compute the first vector

1. Define

b1*(t) = s1(t)

2. Compute

b1(t) = b1*(t) / √E(b1*)    ( if b1*(t) = 0 → b1(t) = 0 )
Gram-Schmidt algorithm (2)
▪ Given s2(t), look for the second versor    STEP 2

▪ Compute the projection on the first versor:

s21 = ∫_0^T s2(t) b1(t) dt

▪ Define

b2*(t) = s2(t) − s21 b1(t)

▪ Compute

b2(t) = b2*(t) / √E(b2*)    ( if b2*(t) = 0 → b2(t) = 0 )
Gram-Schmidt algorithm (3)
s21 = ∫_0^T s2(t) b1(t) dt      b2*(t) = s2(t) − s21 b1(t)

Note:

• if b2*(t) = 0 (s2(t) is proportional to b1(t))
→ b2(t) = 0 and no new versor is found

• if b2*(t) ≠ 0 (s2(t) is not proportional to b1(t))
→ b2(t) ≠ 0 and a new versor is found
Gram-Schmidt algorithm (4)

Given si(t), 3 ≤ i ≤ m    STEP i

Compute the projections on the previous versors:

sij = ∫_0^T si(t) bj(t) dt,   1 ≤ j ≤ i−1

Define

bi*(t) = si(t) − Σ_{j=1}^{i−1} sij bj(t)

Compute

bi(t) = bi*(t) / √E(bi*)    ( if bi*(t) = 0 → bi(t) = 0 )
Gram-Schmidt algorithm (5)
sij = ∫_0^T si(t) bj(t) dt      bi*(t) = si(t) − Σ_{j=1}^{i−1} sij bj(t)

Note:

• if bi*(t) = 0 (si(t) is a linear combination of the current versors)
→ bi(t) = 0 and no new versor is found

• if bi*(t) ≠ 0 (si(t) is not a linear combination)
→ bi(t) ≠ 0 and a new versor is found
Gram-Schmidt algorithm (6)

FINAL STEP

• Delete all bi(t) = 0

• Renumber the surviving non-zero bi(t)

• We have obtained the basis

B = { b1(t) , … , bj(t), …, bd(t) }    (d ≤ m)

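The steps above can be sketched on sampled waveforms. This is an illustrative implementation, not the lecture's code: signals are arrays sampled with step dt, so the inner products approximate the integrals over [0, T] by Riemann sums, and a versor is discarded when the residual energy falls below a tolerance.

```python
import numpy as np

# Sketch of the Gram-Schmidt algorithm on sampled waveforms.
def gram_schmidt(signals, dt, tol=1e-9):
    basis = []
    for s in signals:
        b_star = s.astype(float)
        for b in basis:                       # subtract projections sij * bj
            b_star = b_star - (np.sum(s * b) * dt) * b
        energy = np.sum(b_star**2) * dt       # E(b*)
        if energy > tol:                      # keep only non-zero versors
            basis.append(b_star / np.sqrt(energy))
    return basis                              # renumbered non-zero versors

# Antipodal example M = {+PT(t), -PT(t)}: the basis has d = 1 element.
T, N = 1.0, 1000
dt = T / N
PT = np.ones(N)                               # rectangular pulse on [0, T)
basis = gram_schmidt([+PT, -PT], dt)
d = len(basis)                                # s2 is proportional to s1 -> d = 1
```

As expected, the second signal is proportional to the first, so its residual b2*(t) is zero and no second versor is produced.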
Exercise

Given the signal constellation

M = { s1(t) = +PT(t),  s2(t) = −PT(t) }

Build an orthonormal basis B.

Base construction

Remember that, for simple constellations, it is often possible to find a basis B without applying Gram-Schmidt.

It is sufficient to look for d signals satisfying the definition of an orthonormal basis:

1. orthogonal

2. with unitary energy

3. their number d is minimal and sufficient for writing each signal of M as a linear
combination

Exercise

Given the signal constellation

M = { s1(t) = 0,  s2(t) = +PT(t) }

Build an orthonormal basis B.

Exercise

Given the signal constellation

M = {s1 (t ) = + PT (t ) cos(2 f 0t ), s2 (t ) = − PT (t ) cos(2 f 0t )}

Build an orthonormal basis B.

Signal space

Given the basis B


B = { b1(t) , … , bj(t), …, bd(t) }

The signal space S generated by B is

S = { a(t) = Σ_{j=1}^{d} aj bj(t) :  aj ∈ R }

(the set of all signals which can be expressed as a linear combination
of the basis signals)

Exercise

Given the basis B

B = { b1(t) = +√(1/T) PT(t) }

What is the signal space S ?

Exercise

Given the basis B

B = { b1(t) = +√(2/T) PT(t) cos(2π f0 t) }

What is the signal space S ?

Vector representation (1)

Having fixed B, for each signal a(t) ∈ S we have

a(t) = Σ_{j=1}^{d} aj bj(t)

The signal a(t) corresponds to a real vector with d components
(the coefficients aj of the linear combination), and vice versa:

a(t) ↔ a = (a1 ,..., aj ,..., ad)

Vector representation (2)

1. From vector a to signal a(t):

a = (a1 ,..., aj ,..., ad)   →   a(t) = Σ_{j=1}^{d} aj bj(t)

2. From signal a(t) to vector a: projection on bj(t)

aj = ∫_0^T a(t) bj(t) dt   →   a = (a1 ,..., aj ,..., ad)

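The two mappings above can be sketched numerically. A minimal example with an assumed 1-D basis b1(t) = √(1/T) PT(t) on [0, T): the projection integral is approximated by a Riemann sum, and going vector → signal → vector recovers the same waveform.

```python
import numpy as np

# Sketch of the signal <-> vector correspondence for a sampled basis.
T, N = 1.0, 10_000
dt = T / N
b1 = np.sqrt(1 / T) * np.ones(N)             # assumed 1-D basis on [0, T)

def to_vector(a_t, basis, dt):
    """a_j = integral of a(t) b_j(t) dt, one projection per versor."""
    return np.array([np.sum(a_t * b) * dt for b in basis])

def to_signal(a_vec, basis):
    """a(t) = sum_j a_j b_j(t)."""
    return sum(aj * b for aj, b in zip(a_vec, basis))

a_t = 3.0 * b1                               # a signal in the space S
a_vec = to_vector(a_t, [b1], dt)             # projection gives (3.0,)
a_rec = to_signal(a_vec, [b1])               # back to the waveform
err = np.max(np.abs(a_rec - a_t))            # reconstruction error ~0
```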
Constellation vector representation (1)
We certainly have M ⊂ S.

Each signal si(t) ∈ S corresponds to a real vector with d
components, and vice versa:

si(t) ↔ si = (si1 ,..., sij ,..., sid)

Constellation M as a signal set:  M = { s1(t) , … , si(t), …, sm(t) }

Constellation M as a vector set:  M = { s1 , … , si , …, sm }

Constellation vector representation (2)

1. From vector si to signal si(t):

si = (si1 ,..., sij ,..., sid)   →   si(t) = Σ_{j=1}^{d} sij bj(t)

2. From signal si(t) to vector si: projection on the versor bj(t)

sij = ∫_0^T si(t) bj(t) dt   →   si = (si1 ,..., sij ,..., sid)

Constellation vector representation (3)

Note that, as an alternative, the vector components can be
computed without computing the projections.
We write

si(t) = si1 b1(t) + … + sij bj(t) + … + sid bd(t)

The basis signals bj(t) are known.

We look for the set of coefficients sij that satisfies the equation.
The solution is unique.

Constellation vector representation (4)

The signal space S is isomorphic to the Euclidean space R^d
(the set of all vectors with d real components)

We can draw it as a Cartesian space

If d=1, S ↔ R and can be drawn as a 1-D line
If d=2, S ↔ R² and can be drawn as the 2-D plane
If d=3, S ↔ R³ and can be drawn as the 3-D space

We will write M ⊂ R^d

(a constellation is a set of m points in the Euclidean space R^d)


Example

Example of 1-D constellation

Example

Example of 2-D constellations

Signal energy (1)

Given a signal a(t) ∈ S, its energy is given by

E(a) = ∫_0^T a²(t) dt

Given its vector representation

a(t) ↔ a = (a1 ,..., aj ,..., ad)

it is easy to show that

E(a) = Σ_{j=1}^{d} aj²

Signal energy (2)

In fact, since a(t) = Σ_{j=1}^{d} aj bj(t),

E(a) = ∫_0^T a²(t) dt = ∫_0^T [ Σ_{j=1}^{d} aj bj(t) ]² dt = Σ_{j=1}^{d} aj² ∫_0^T bj²(t) dt = Σ_{j=1}^{d} aj²

where we have used the orthogonality property

∫_0^T bj(t) bi(t) dt = 0  when i ≠ j
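The identity E(a) = Σ aj² can be checked numerically. A sketch with an assumed 2-D cosine/sine basis (an integer number of cycles in [0, T], so the pair is orthonormal): the waveform energy of a(t) = a1 b1(t) + a2 b2(t) should match a1² + a2².

```python
import numpy as np

# Numerical check that waveform energy equals the sum of squared
# vector components (assumed basis: sqrt(2/T) cos / sin on [0, T)).
T, f0, N = 1.0, 5.0, 100_000
dt = T / N
t = np.linspace(0.0, T, N, endpoint=False)
b1 = np.sqrt(2 / T) * np.cos(2 * np.pi * f0 * t)
b2 = np.sqrt(2 / T) * np.sin(2 * np.pi * f0 * t)

a1, a2 = 3.0, -4.0                    # coefficients of a(t) in S
a_t = a1 * b1 + a2 * b2

E_waveform = np.sum(a_t**2) * dt      # integral of a^2(t) dt
E_vector = a1**2 + a2**2              # sum of squared components, 25.0
```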
Constellation energy

Given a constellation M = { s1 ,..., si ,..., sm } ⊂ R^d

with si = (si1 ,..., sij ,..., sid)

we have:  E(si) = Σ_{j=1}^{d} sij²

The (average) constellation energy is equal to:

Es = Σ_{i=1}^{m} P(si) E(si)

where P(si) is the probability of transmitting si

Constellation energy

▪ Binary information sequences: ideally random
▪ The binary vectors v ∈ H^k are equiprobable
▪ The labeling is a one-to-one mapping e : H^k → M
▪ The constellation signals si ∈ M are equiprobable:

P(si) = 1/m

The constellation energy is then simply:

Es = (1/m) Σ_{i=1}^{m} E(si)
Energy per information bit

Average energy necessary to transmit an information bit via M

Eb = Es / k

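Putting the last two slides together: a minimal sketch computing Es and Eb for an assumed equiprobable 2-D vector constellation (m = 4 points on the unit circle, so each signal carries k = log2(m) = 2 bits).

```python
import numpy as np

# Es and Eb for an assumed equiprobable constellation: rows of M are
# the vectors s_i = (s_i1, ..., s_id), here m = 4 points, d = 2.
M = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])

m, d = M.shape
E_si = np.sum(M**2, axis=1)     # E(s_i) = sum_j s_ij^2 for each signal
Es = np.mean(E_si)              # equiprobable: Es = (1/m) sum_i E(s_i)
k = int(np.log2(m))             # information bits carried by each signal
Eb = Es / k                     # energy per information bit
```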
Exercise

Given the constellation

M = { s1(t) = +PT(t),  s2(t) = −PT(t) }

• Build an orthonormal basis.


• Write the constellation as a vector set.
• Draw it.
• What is the signal space S ?
• Compute Es and Eb.

Exercise

Given the constellation


M = { s1(t) = 0,  s2(t) = +PT(t) }

• Build an orthonormal basis.


• Write the constellation as a vector set.
• Draw it.
• What is the signal space S ?
• Compute Es and Eb.

Exercise

Given the constellation

M = {s1 (t ) = + PT (t ) cos(2 f 0t ), s2 (t ) = − PT (t ) cos(2 f 0t )}

• Build an orthonormal basis.


• Write the constellation as a vector set.
• Draw it.
• What is the signal space S ?
• Compute Es and Eb.

Exercise

Given the constellation

M = {s1 (t ) = + PT (t ) cos(2 f 0t ), s2 (t ) = + PT (t )sin(2 f 0t ),


s3 (t ) = − PT (t ) cos(2 f 0t ), s4 (t ) = − PT (t )sin(2 f 0t )}

• Build an orthonormal basis.


• Write the constellation as a vector set.
• Draw it.
• What is the signal space S ?
• Compute Es and Eb.

Hint:  A cos(2π f0 t − φ) = (A cos φ) cos(2π f0 t) + (A sin φ) sin(2π f0 t)

