
Information Theory

Mohamed Hamada
Software Engineering Lab
The University of Aizu

Email: hamada@u-aizu.ac.jp
URL: http://www.u-aizu.ac.jp/~hamada
Today’s Topics

• Channel Coding/Decoding
• Hamming Method:
- Hamming Distance
- Hamming Weight
• Hamming (7, 4)

Digital Communication Systems

Source coding methods:
1. Huffman Code.
2. Two-pass Huffman Code.
3. Lempel-Ziv Code.
4. Fano Code.
5. Shannon Code.
6. Arithmetic Code.

[Block diagram] Information Source → Source Encoder → Channel Encoder → Modulator → Channel → De-Modulator → Channel Decoder → Source Decoder → User of Information
Digital Communication Systems

Information source types:
1. Memoryless
2. Stochastic
3. Markov
4. Ergodic

[Block diagram] Information Source → Source Encoder → Channel Encoder → Modulator → Channel → De-Modulator → Channel Decoder → Source Decoder → User of Information
INFORMATION TRANSFER ACROSS CHANNELS

[Block diagram] Sent messages (symbols) → Information source → Source coding → Channel coding → Channel → Channel decoding → Source decoding → Receiver (received messages)

Source coding/decoding: compression and decompression, governed by the source entropy.
Channel coding/decoding: error correction, governed by the channel capacity.
Trade-off: capacity vs efficiency.
Channel Coding/Decoding
Hamming Method

Channel Coding/Decoding

The purpose of channel coding/decoding is to detect and correct errors in noisy channels.
Channel Coding/Decoding

Error detection and correction

In a noisy channel, errors may occur during the transmission of data from the information source to the destination, so we need a method to detect these errors and then correct them.
Channel Coding/Decoding
Hamming Method

- It was the first complete error-detecting and error-correcting procedure.

- It represents one of the simplest and most common methods for the transmission of information in the presence of noise.

- It assumes that the source transmits binary messages (i.e. the information source alphabet is { 0, 1 }).

- It uses the parity-check method to detect an error (see the sketch below).

- It assumes that the channel is a binary symmetric channel (BSC).
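
To make the parity-check idea concrete, here is a minimal sketch (my own illustration, not from the slides; a single even-parity bit detects, but cannot locate, one flipped bit):

```python
# Even parity: append a bit so that the total number of 1s is even.
def add_parity(bits):
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    # A single flipped bit makes the total number of 1s odd.
    return sum(bits_with_parity) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
assert parity_ok(word)
word[2] ^= 1                      # flip one bit in the channel
assert not parity_ok(word)        # the error is detected (but not located)
```
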
Hamming Codes
• The (7, 4) Hamming Code detects all one- and two-bit errors
• Corrects all 1-bit errors
• Magic: any two different codewords differ in at least 3 places (checked by the sketch below the table)!
0000000 0001011 0010111 0011100
0100110 0101101 0110001 0111010
1000101 1001110 1010010 1011001
1100011 1101000 1110100 1111111

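The "differ in at least 3 places" claim can be verified directly from the table above; a small sketch (my own code, using the codewords exactly as listed):

```python
from itertools import combinations

codewords = """0000000 0001011 0010111 0011100
0100110 0101101 0110001 0111010
1000101 1001110 1010010 1011001
1100011 1101000 1110100 1111111""".split()

def hamming_distance(x, y):
    # Number of positions where the two strings differ
    return sum(a != b for a, b in zip(x, y))

# Every pair of distinct codewords differs in at least 3 positions.
print(min(hamming_distance(x, y) for x, y in combinations(codewords, 2)))  # 3
```
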
Hamming Distance
• Number of places in which two bit strings differ

• Example:  1 0 0 0 1 0 1
            1 0 0 1 1 1 0   → Hamming distance 3

• Acts like a distance: if a string is at distance a from a second string and the second is at distance b from a third, then the distance d between the first and third satisfies d ≤ a + b (triangle inequality).
Definitions

• The Hamming distance between x and y is
  dH := d(x, y), the number of positions i where xi ≠ yi

• The minimum distance of a code C is
  dmin = min { d(x, y) | x ∈ C, y ∈ C, x ≠ y }

• The Hamming weight of a vector x is
  w(x) := d(x, 0), the number of positions i where xi ≠ 0
Example

• Hamming distance d( 1001, 0111) = 3

• Minimum distance of the code {101, 011, 110} = 2

• Hamming weight w(0110101) = 4

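All three quantities are easy to compute; a minimal sketch reproducing the numbers above (the function names are my own):

```python
from itertools import combinations

def d(x, y):
    # Hamming distance: number of positions where x and y differ
    return sum(a != b for a, b in zip(x, y))

def w(x):
    # Hamming weight: distance to the all-zero word
    return d(x, "0" * len(x))

def d_min(code):
    # Minimum distance over all pairs of distinct codewords
    return min(d(x, y) for x, y in combinations(code, 2))

print(d("1001", "0111"))              # 3
print(d_min(["101", "011", "110"]))   # 2
print(w("0110101"))                   # 4
```
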
Performance

A code with minimum distance dmin is capable of correcting t errors if

  dmin ≥ 2t + 1.

Proof: If at most t errors occur, then since dmin ≥ 2t + 1, every incorrect codeword differs from the received word in at least t + 1 positions, while the transmitted codeword differs in at most t positions; so the transmitted codeword is the unique closest one.
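
The bound can be turned around to give the guaranteed correcting capability; a worked restatement (my own rearrangement, not on the slide):

```latex
\[
  d_{\min} \ge 2t + 1
  \quad\Longleftrightarrow\quad
  t \le \left\lfloor \frac{d_{\min} - 1}{2} \right\rfloor ,
  \qquad
  d_{\min} = 3 \;\Rightarrow\; t = 1 .
\]
```
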
Hamming codes

We assume that the sequence of symbols generated by the information source is divided up into blocks of m symbols.

• Minimum distance: 3
• Construction: G = [ Im | P ], where Im is the m × m identity matrix, the rows of P are all k-tuples of Hamming weight > 1, and m = 2^k − k − 1 (k being the number of parity bits).
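
A sketch of this construction (my own code, assuming the rows of P are taken in the order that `itertools.product` generates them, which for k = 3 happens to match the matrix shown two slides below):

```python
from itertools import product

def hamming_generator(k):
    """Generator matrix G = [I_m | P], where P's rows are all k-tuples of weight > 1."""
    p_rows = [row for row in product([0, 1], repeat=k) if sum(row) > 1]
    m = len(p_rows)                                   # m = 2**k - k - 1
    identity = [[int(i == j) for j in range(m)] for i in range(m)]
    return [identity[i] + list(p_rows[i]) for i in range(m)]

for row in hamming_generator(3):    # k = 3 gives the (7, 4) code: m = 4, n = 7
    print(row)
```
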
Example: Hamming (7, 4) codes

We assume that the sequence of symbols generated by the information source is divided up into blocks of 4 symbols.

• Generating matrix: G = [ I4 | P ]

  P is a 4x3 matrix determined by:
    C1 = u2 + u3 + u4
    C2 = u1 + u3 + u4
    C3 = u1 + u2 + u4

  where + is addition modulo 2 (0+0 = 1+1 = 0 and 1+0 = 0+1 = 1) and the ui are the elements of the I4 (message) part.
Example: Hamming (7, 4) codes

We assume that the sequence of symbols generated by the information source is divided up into blocks of 4 symbols. Codewords have length 7:

  u1 u2 u3 u4 C1 C2 C3

• Generating matrix G = [ I4 | P ]:

          I4        P
      [ 1 0 0 0 | 0 1 1 ]
      [ 0 1 0 0 | 1 0 1 ]
      [ 0 0 1 0 | 1 1 0 ]
      [ 0 0 0 1 | 1 1 1 ]

  where + is modulo 2 (0+0 = 1+1 = 0 and 1+0 = 0+1 = 1),
    C1 = u2 + u3 + u4
    C2 = u1 + u3 + u4
    C3 = u1 + u2 + u4
  and the ui are the elements of the I4 (message) part.
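
A minimal encoding sketch for this G (my own code; `encode` multiplies the message row vector by G modulo 2):

```python
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(u):
    # Codeword = u . G (mod 2): the first 4 bits are u itself,
    # the last 3 are the parity bits C1, C2, C3 defined above.
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

print(encode([1, 0, 1, 1]))   # -> [1, 0, 1, 1, 0, 1, 0]
print(encode([1, 1, 1, 1]))   # -> [1, 1, 1, 1, 1, 1, 1]
```
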
Hamming (7, 4) Syndrome decoding

H is the parity check matrix; HT is the transpose of H and PT is the transpose of P.

Let G = [ Ik | P ]. For the Hamming (7, 4) code: n = 7 and k = 4.

Step 1. Construct H = [ PT | In−k ]

Step 2. Arrange the columns of H in order of increasing binary value

Step 3. Determine the syndrome S = y · HT (y is the received message)

Step 4. If S = 0 then no error occurred during transmission of the information

Step 5. If S ≠ 0 then S gives the binary representation of the error position
        (we assume only one error occurred)
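
A sketch of Steps 1–5 (my own code; the column reordering of Step 2 makes the syndrome, read as a binary number, point directly at the error position):

```python
# Step 1: H = [P^T | I3] for the (7, 4) code; Step 2: sort its columns by their
# binary value, so column i of the sorted H is i written in binary.
P = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]   # the P block of G = [I4 | P]
H = [[P[j][i] for j in range(4)] + [int(i == r) for r in range(3)] for i in range(3)]
cols = sorted(zip(*H), key=lambda c: int("".join(map(str, c)), 2))
H = [list(row) for row in zip(*cols)]              # back to 3 rows of 7 entries

def decode(y):
    """Correct (at most) one error in the received 7-bit list y."""
    s = [sum(y[j] * H[i][j] for j in range(7)) % 2 for i in range(3)]  # Step 3: S = y . H^T
    pos = int("".join(map(str, s)), 2)             # Step 5: read S as a binary number
    if pos:                                        # Step 4: S = 0 means no error
        y = y.copy()
        y[pos - 1] ^= 1                            # flip the bit at the indicated position
    return y
```
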
Example: Suppose that y = (1111011) is received. For the Hamming (7, 4) code, n = 7 and k = 4.

Step 1. Construct H = [ PT | In−k ] from G = [ Ik | P ]:

          Ik        P                   PT        In−k
      [ 1 0 0 0 | 0 1 1 ]          [ 0 1 1 1 | 1 0 0 ]
  G = [ 0 1 0 0 | 1 0 1 ]      H = [ 1 0 1 1 | 0 1 0 ]
      [ 0 0 1 0 | 1 1 0 ]          [ 1 1 0 1 | 0 0 1 ]
      [ 0 0 0 1 | 1 1 1 ]

Step 2. Arrange the columns of H in order of increasing binary value:

      [ 0 0 0 1 1 1 1 ]
  H = [ 0 1 1 0 0 1 1 ]
      [ 1 0 1 0 1 0 1 ]

Step 3. Compute the syndrome: S = y · HT = (101) = (5)10

Steps 4–5. S ≠ 0, so an error occurred, at position 5 of the received message y = (1111011).

The correct sent message is then (1111111).
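
Running the decoding sketch above on y = [1, 1, 1, 1, 0, 1, 1] reproduces this result: the syndrome works out to (101) = 5, bit 5 is flipped, and the corrected codeword (1111111) carries the decoded message 1111 in its first four bits.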
