Intro To ICT 11

This chapter discusses source coding and data compression. It introduces key concepts such as lossless compression, lossy compression, entropy, effective probabilities, and effective file entropy. Lossless compression preserves all original data information, while lossy compression allows small losses for greater compression. Entropy quantifies the average information contained in each symbol from a data source. Effective probabilities are estimated from symbol frequencies in a data file. Effective file entropy calculates the minimum number of bits needed to encode a file based on its entropy.


Introduction to Information and Communication
The Digital Information Age: An Introduction To Electrical Engineering, 2nd Edition

❖ Professor: SeungGol Lee


❖ E-mail: sglee@inha.ac.kr
11. Source Coding
11.1 Introduction

❖ Questions related to “Source Coding”


“The total number of symbols generated by a source does not in itself imply the amount of
information generated.” in Chapter 10
[Note] The source referred to here is the source of information generation!

➢ “How much information do the symbols generated by the source contain?”


➢ "How can the amount of information contained in symbols be measured?"
➢ “How much can the generated data be compressed without loss of information?”

❖ Topics covered in this chapter


⚫ Data compression
⚫ Entropy
⚫ Encryption

3/31
11.2 Data Compression Textbook Sec. 9.2

“Since channel capacity or memory size is finite, data compression techniques are very useful.”

❖ Two compression methods


⚫ Lossless compression
: a method to keep all the information that exists in the original data
▪ Used to store or transmit numerical data, computer codes, and
documents such as financial information.
Ex) Huffman coding (similar concept to Morse code)
⚫ Lossy compression
: A method of dramatically compressing data by allowing slight information loss
▪ Commonly used for audio and video compression.
▪ The compression rate depends on the quality level required by the user (consumer).
Ex) MPEG-2: Olympic Games video (6 Mbps) and video conferencing (64 Kbps)

4/31
11.2 Data Compression Textbook Sec. 9.2

11.2.1 Modeling Information


“Data does not just mean information itself!”
“Information must be something we do not know, something we cannot predict (expect).”
“How much information is in data?” → the concept of Entropy

❖ Source model
1) Source has a vocabulary consisting of m unique symbols (Xi)
𝑋𝑖 for 1 ≤ 𝑖 ≤ 𝑚
2) Source creates a data file containing a total of nT symbols
▪ Each symbol comes from the vocabulary.

❖ Symbol probability, P[Xi]


: Probability that the source will generate a symbol
P[X_i] for 1 ≤ i ≤ m,   with   \sum_{i=1}^{m} P[X_i] = 1
▪ If all symbols appear (be generated) equally, 𝑃 𝑋1 = 𝑃 𝑋2 = 𝑃 𝑋3 = ⋯ = 𝑃 𝑋𝑚 = 1/𝑚

5/31
11.2 Data Compression Textbook Sec. 9.2

11.2.1 Modeling Information


◆ Example 9.1 Binary source
Binary source models a device that transmits binary data.
➔ Data file : Sequence of 0’s and 1’s generated by binary source!
⚫ Vocabulary of the binary source: composed of two kinds of symbols (m = 2)
𝑋1 = 0 and 𝑋2 = 1
⚫ In general, binary data is evenly distributed. Symbol probabilities?
P[X_1] = P[X_2] = 1/2,   \sum_{i=1}^{2} P[X_i] = 1

◆ Example 9.2 Digital camera source


The camera’s vocabulary consists of symbols representing 8-bit pixel intensity, m = 2^8 = 256
𝑋𝑖 = 𝑖 − 1 for 1 ≤ 𝑖 ≤ 256
⚫ When taking a picture, the intensity of one pixel takes a random value from 0 to 255 levels!
Symbol probabilities?
P[X_1] = P[X_2] = P[X_3] = ⋯ = P[X_256] = 1/256,   \sum_{i=1}^{256} P[X_i] = 1

6/31
11.2 Data Compression Textbook Sec. 9.2

11.2.2 Source Entropy


❖ Definition of Source entropy, HS
In the case where a source creates a data file using m unique symbols (Xi) with a probability of P[Xi]
⚫ Source entropy, HS
H_S = -\sum_{i=1}^{m} P[X_i] \log_2 P[X_i]   bits/symbol

→ The average number of bits required to convert one of the symbols that make up a data file into binary!
→ Average amount of information per symbol included in the data file generated by the source
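
A minimal Python sketch of this definition (the function name source_entropy and the sample probabilities are illustrative, not from the textbook):

import math

def source_entropy(probabilities):
    # H_S = -sum of P[Xi] * log2(P[Xi]) over all symbols, in bits/symbol
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(source_entropy([0.5, 0.5]))    # 1.0 bit/symbol (equal binary probabilities)
print(source_entropy([0.25, 0.75]))  # about 0.81 bits/symbol (unequal probabilities)

The two calls anticipate Examples 9.3 and 9.4 below.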

◆ Example 9.3 Entropy of a binary source with equal probabilities


Two unique symbols (X1 = 0, X2= 1), m = 2
Symbol probability of each symbol: P[X_1] = P[X_2] = 1/m = 1/2
⚫ Source entropy
H_S = -\sum_{i=1}^{2} P[X_i] \log_2 P[X_i] = -P[X_1] \log_2 P[X_1] - P[X_2] \log_2 P[X_2]
    = -0.5 \log_2 0.5 - 0.5 \log_2 0.5 = 0.5 \log_2 2 + 0.5 \log_2 2 = 1 bit/symbol

7/31
11.2 Data Compression Textbook Sec. 9.2

11.2.2 Source Entropy


❖ For a source that utilizes m unique symbols with equal probability
⚫ Symbol probability of each symbol
P[X_i] = 1/m   for 1 ≤ i ≤ m
⚫ Source entropy
H_S = -\sum_{i=1}^{m} P[X_i] \log_2 P[X_i] = -\sum_{i=1}^{m} (1/m) \log_2 (1/m) = \log_2 m   bits/symbol

◆ Example 9.4 Entropy of a binary source with unequal probabilities


Two unique symbols (X1 = 0, X2= 1), m = 2
Assume two symbol probabilities, 𝑃 𝑋1 = 0.25 and 𝑃 𝑋2 = 0.75
⚫ Source entropy
H_S = -\sum_{i=1}^{2} P[X_i] \log_2 P[X_i] = -P[X_1] \log_2 P[X_1] - P[X_2] \log_2 P[X_2]
    = -0.25 \log_2 0.25 - 0.75 \log_2 0.75 = 0.81 bits/symbol

H_S|_unequal < H_S|_equal  ← Why was this result obtained? (With unequal probabilities the source output is more predictable, so each symbol carries less information on average.)


8/31
11.2 Data Compression Textbook Sec. 9.2

11.2.3 Effective Probabilities


❖ Inferring Symbol probability from data file
Since symbol probabilities are usually not known a priori, they are inferred from symbols contained in
data files.

⚫ Creation (generation) of a data file by a source


▪ The source has m unique symbols, but the generated data file may contain only mx (≤ m) unique symbols.
▪ Data file size, nT (≫ m)
(Figure: an 88-key piano keyboard; not all unique symbols are utilized.)

⚫ Examples of data files


▪ Database file containing phone numbers or credit card numbers
• Symbols are the decimal digits 0 to 9 (mx = m = 10); the file size nT is extremely large.
▪ Text file such as a textbook
• Symbols are the 128 7-bit ASCII characters (digits, letters, punctuation), but not all are used
(mx < m = 128). Total number of symbols in the file (not kinds of symbols): nT ≈ 10^6
▪ Manual book with images (expressed as byte values)
• Kinds of unique symbols: mx = m = 256
X1=00000000, X2=00000001, X3=00000010,…, X256=11111111
• Total no. of symbols: nT > 10^9
9/31
11.2 Data Compression Textbook Sec. 9.2

11.2.3 Effective Probabilities


◆ Example 9.6 Alphanumeric characters in a book
600-page book, 200 words per page (~5 characters per word)
Symbols include numbers, letters, punctuation marks, etc.

⚫ File size: nT = 600 × 200 × 5 = 6 × 10^5  ← total no. of symbols in the book

❖ Effective probability of a symbol, Pe[Xi]


⚫ If symbol Xi appears ni times in the data file (size nT),
effective probability   P_e[X_i] = n_i / n_T   for 1 ≤ i ≤ m_x

11.2.4 Effective Source Entropy


⚫ Effective source entropy
Replacing probabilities with effective probabilities in the definition of entropy.
\hat{H}_S = -\sum_{i=1}^{m_x} P_e[X_i] \log_2 P_e[X_i]   bits/symbol
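
A short Python sketch of this estimate, assuming the data file is given as a string or list of symbols (effective_entropy is an illustrative name):

from collections import Counter
import math

def effective_entropy(data):
    # Estimate Pe[Xi] = ni / nT from symbol counts, then apply the entropy formula
    n_T = len(data)
    counts = Counter(data)   # ni for each unique symbol actually present (mx symbols)
    return -sum((n / n_T) * math.log2(n / n_T) for n in counts.values())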

10/31
11.2 Data Compression Textbook Sec. 9.2

11.2.5 Effective File Entropy


⚫ Definition of Effective file entropy
\hat{H}_f = n_T [symbols] × \hat{H}_S [bits/symbol]

: the minimum no. of bits required to encode (convert to a binary file) a data file

◆ Example 9.9 Effective file entropy


Data file containing 10 symbols: A B A D A E C D B A
mx = 5 (symbols : A, B, C, D, E)
⚫ Effective probability,
P_e[A] = 4/10 = 0.4,  P_e[B] = 2/10 = 0.2,  P_e[C] = 1/10 = 0.1,  P_e[D] = 2/10 = 0.2,  P_e[E] = 1/10 = 0.1
⚫ Effective source entropy
\hat{H}_S = -\sum_{i=1}^{5} P_e[X_i] \log_2 P_e[X_i] = -(0.4 \log_2 0.4 + 2 × 0.2 \log_2 0.2 + 2 × 0.1 \log_2 0.1)
          = 2.12 bits/symbol
⚫ Effective file entropy
\hat{H}_f = 10 × 2.12 = 21.2 bits
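
Reusing the effective_entropy sketch above on this 10-symbol file reproduces the numbers in the example:

data = "ABADAECDBA"                  # the data file above, nT = 10
H_s_hat = effective_entropy(data)    # about 2.12 bits/symbol
H_f_hat = len(data) * H_s_hat        # about 21.2 bits
print(round(H_s_hat, 2), round(H_f_hat, 1))   # 2.12 21.2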

11/31
11.2 Data Compression Textbook Sec. 9.2

11.2.5 Effective File Entropy


◆ Example 9.9 Effective file entropy
⚫ Possibility of data compression?
“There are 5 unique symbols in the data file: A, B, C, D, E”
▪ How many bits are needed to represent each of the 5 different symbols in binary,
if all symbols are represented as binary numbers of the same bit length?
→ 3 bits (fixed-length code words)
Ex) binary representation of symbols: A : 000, B : 001, C : 010, D : 011, E : 100

▪ The data file consists of a total of 10 symbols. What is the size of the binary file?
→ 3 bits/symbol × 10 symbols = 30 bits
⚫ However, the effective file entropy calculated earlier was 21.2 bits.

What if we change the way symbols are converted to code words?

12/31
11.2 Data Compression Textbook Sec. 9.2

11.2.6 Huffman Code


: A method of lossless data compression in which each symbol is expressed as a binary number with a different bit length!  ← variable-length code words

⚫ Key idea!
▪ Represent frequently appearing symbols as binary number of short bit-length.
▪ Represent occasionally appearing symbols as binary number of long bit-length.
→ The average no. of bits in a code word becomes similar to source entropy or effective entropy.

❖ Huffman code
⚫ Implementing variable-length code words using code trees
▪ A leaf (○) in a code tree is the code word for a specific symbol.
• A : 1
• B : 01
• C : 001
• D : 000
← variable-length code words
(Figure: code tree)
13/31
11.2 Data Compression Textbook Sec. 9.2

11.2.6 Huffman Code


◆ Example 9.10 Decoding the Huffman code
Let's convert the following binary sequence to symbols
using the code tree on the right!
0010111000

Code words: A : 1, B : 01, C : 001, D : 000

⚫ Find the complete code word starting from the leftmost bit of the given binary sequence!
001 | 0111000  →  C
⚫ Find the complete code word in the rest of the sequence.
001 | 01 | 11000  →  C B
⚫ Continue the above process until the final decoding result is obtained.
001 | 01 | 1 | 1 | 000  →  C B A A D
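
A small Python sketch of this decoding procedure; because this is a prefix code, scanning left to right and emitting a symbol whenever the accumulated bits match a code word is enough (the dictionary below is just the code table above):

code_words = {"1": "A", "01": "B", "001": "C", "000": "D"}

def decode(bits, code_words):
    symbols, current = [], ""
    for bit in bits:
        current += bit                     # grow the candidate code word bit by bit
        if current in code_words:          # complete code word found
            symbols.append(code_words[current])
            current = ""
    return " ".join(symbols)

print(decode("0010111000", code_words))    # C B A A D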

14/31
11.2 Data Compression Textbook Sec. 9.2

11.2.6 Huffman Code


❖ Huffman coding Procedure
Step 1. Calculate the probabilities of all symbols, and create a symbol list by sorting in descending
order from the symbol with the highest probability.  ← first ordering

Step 2. Assign ‘0’ to the symbol at the bottom of the symbol list and ‘1’ to the symbol directly above
it. (The assigned 0 or 1 is called a code bit, and becomes the lowest bit in the code word
expression of the symbol.)  ← first assignment

Step 3. Define a composite symbol by combining the two symbols to which code bits have been assigned,
and set the sum of the two symbols’ probabilities as the probability of the new symbol.
(Ex) Assume that code bits are assigned to two symbols, X2 and X6.
Probability of the composite symbol: P[X2-X6] = P[X2] + P[X6]

Step 4. Rearrange all symbols, including the composite symbol, as in Step 1.  ← second ordering

Step 5. Repeat Step 2 to Step 4 until only two symbols (or composite symbols) remain in the symbol list.
← final ordering
Assign code bits to the last two remaining symbols.  ← final assignment
(A code sketch of this procedure follows below.)
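
The following Python sketch builds a Huffman code by repeatedly merging the two least-probable (composite) symbols, mirroring Steps 1-5. It is illustrative only: the 0/1 labels it assigns may be flipped relative to the slides when probabilities tie, although the code-word lengths come out the same.

import heapq
from itertools import count

def huffman_code(probabilities):
    tiebreak = count()                       # avoids comparing dicts when probabilities tie
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, low = heapq.heappop(heap)     # lowest probability  -> code bit '0'
        p1, _, high = heapq.heappop(heap)    # next lowest         -> code bit '1'
        merged = {s: "0" + c for s, c in low.items()}        # prepend the code bit
        merged.update({s: "1" + c for s, c in high.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))   # composite symbol
    return heap[0][2]

# Example 9.11 probabilities: one 1-bit and two 2-bit code words, as on the next slides
print(huffman_code({"X1": 0.25, "X2": 0.5, "X3": 0.25}))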

15/31
11.2 Data Compression Textbook Sec. 9.2

11.2.6 Huffman Code


❖ How to draw Huffman code tree
Draw the code tree from root (the starting point of the tree) after final ordering
① [Tree branching] Two branches diverge from the root
② [Hanging leaves on branches] Place a symbol (or composite symbol) with a higher probability at
the end of the left branch and assign “1”. Place the remaining symbol at the end of the right
branch and assign “0”.
③ [Continuing to branch] If a (simple) symbol hangs at the end of a branch, there is no need to
branch it any further. (leaf node)
However, if a composite symbol hangs at the end of a branch, the branching from the composite
symbol must be repeated.
④ [Continuing leafing] …..

❖ Decoding Huffman code (converting to symbols)


⚫ You need to know the code table or code tree that shows the relationship between symbols and
code words.

16/31
11.2 Data Compression Textbook Sec. 9.2

11.2.6 Huffman Code


◆ Example 9.11 Huffman coding the sum of two coins
Experiment of tossing two coins at the same time → 4 types of outcomes!
00 (sum = 0), 01 (sum = 1), 10 (sum = 1), 11 (sum = 2)    (‘1’ for heads and ‘0’ for tails)
Three symbols based on the summation result → 3 types of outcomes
: X1 = 0 (sum=0), X2 = 1 (sum=1), X3 = 2 (sum=2)
▪ Symbol probabilities : P[X1] = 1/4, P[X2] = 1/2, P[X3] = 1/4,
▪ Source entropy:
H_S = -\sum_{i=1}^{3} P[X_i] \log_2 P[X_i] = -0.25 \log_2 0.25 - 0.5 \log_2 0.5 - 0.25 \log_2 0.25
    = 1.5 bits/symbol
Let's encode a data sequence composed of sum values into code words!
2112102110 (This result was obtained by repeating the above experiment 10 times)

⚫ Using Fixed-length code word


▪ Since there are 3 types of symbols, all symbols can be represented by 2-bit binary numbers.
X1 : 00, X2 : 01, X3 : 10
▪ Encoded binary sequence: 10 01 01 10 01 00 10 01 01 00  → total 20 bits

17/31
11.2 Data Compression Textbook Sec. 9.2

11.2.6 Huffman Code


◆ Example 9.11 Huffman coding the sum of two coins
2112102110
P[X1] = 1/4, P[X2] = 1/2, P[X3] = 1/4
⚫ Huffman code
first ordering:              second ordering:
X2    1/2                    X2      1/2  [1]
X3    1/4  [1]               X3-X1   1/2  [0]
X1    1/4  [0]

▪ Code words
• sum=0: X1 → 00
• sum=1: X2 → 1
• sum=2: X3 → 01
▪ Encoded binary sequence: 01 1 1 01 1 00 01 1 1 00  → total 15 bits
→ The result is consistent with the file entropy (10 symbols × 1.5 bits/symbol = 15 bits).

18/31
Prob. 9.11 Data Compression

Consider the following data file.


A A A C A A A B A A A C A A A D A A A E
Implement a Huffman code to compress this file. What is the average number of bits per symbol?
[Solution]
Kinds of unique symbols? A B C D E Total

No. of occurrences of each symbol? 15 1 2 1 1 20

Effective probabilities? 15/20 1/20 2/20 1/20 1/20 1

1st ordering:          A 0.75,  C 0.1,  B 0.05,  D 0.05 [1],  E 0.05 [0]   → composite symbol D-E (0.1)
2nd ordering:          A 0.75,  C 0.1,  D-E 0.1 [1],  B 0.05 [0]           → composite symbol D-E-B (0.15)
3rd ordering:          A 0.75,  D-E-B 0.15 [1],  C 0.1 [0]                 → composite symbol D-E-B-C (0.25)
4th (final) ordering:  A 0.75 [1],  D-E-B-C 0.25 [0]

19/31
Prob. 9.11 Data Compression

Consider the following data file.


A A A C A A A B A A A C A A A D A A A E
Implement a Huffman code to compress this file. What is the average number of bits per symbol?
[Solution] Draw code tree!
(Orderings repeated from the previous slide; final ordering: A 0.75 [1], D-E-B-C 0.25 [0].)
(Figure: Huffman code tree. The root branches to A [1] and D-E-B-C [0]; D-E-B-C branches to
D-E-B [1] and C [0]; D-E-B branches to D-E [1] and B [0]; D-E branches to D [1] and E [0].)

Code words?
A : 1
B : 010
C : 00
D : 0111
E : 0110
20/31
Prob. 9.11 Data Compression

Consider the following data file.

A A A C A A A B A A A C A A A D A A A E

Implement a Huffman code to compress this file. What is the average number of bits per symbol?

Code words: A : 1, B : 010, C : 00, D : 0111, E : 0110

[Solution] Now let’s convert the data file into a binary file!

A A A C A A A B A A A C A A A D A A A E
1 1 1 00 1 1 1 010 1 1 1 00 1 1 1 0111 1 1 1 0110

20 symbols (nT = 20), total number of bits: 30 bits

Average number of bits per symbol


30 / 20 = 1.5 bits/symbol
Effective entropy?
\hat{H}_S = -\sum_{i=1}^{5} P_e[X_i] \log_2 P_e[X_i] = -(0.75 \log_2 0.75 + 0.1 \log_2 0.1 + 3 × 0.05 \log_2 0.05)
          = 1.29 bits/symbol
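
A quick Python check of these counts, using the code table derived above (variable names are illustrative):

code = {"A": "1", "B": "010", "C": "00", "D": "0111", "E": "0110"}
data = "AAACAAABAAACAAADAAAE"        # the 20-symbol file above

encoded = "".join(code[s] for s in data)
print(len(encoded))                  # 30 bits
print(len(encoded) / len(data))      # 1.5 bits/symbol, somewhat above the 1.29-bit effective entropy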
21/31
11.3 Encryption Textbook Sec. 9.3

“As wireless (Wi-Fi) access and remote data storage become common, data security
becomes very important!” -- Commercial cryptography is based on complex mathematical
algorithms, and we will only explore the basic concepts here.

❖ Encryption scheme


By Exclusive-OR (ExOR) operation on binary data and random binary sequence (RBS)
RBS is generated by a pseudo-random number generator (PRNG)
⚫ (At source) Data is encrypted by ExOR operation with RBS.
(At destination) The original data is extracted by decrypting the encrypted data with the same RBS.

❖ Encryption & decryption process


Data sequence Di, random binary sequence RBSi, for 1 ≤ i ≤ nx
➔ Encrypted data Ei (by bitwise ExOR operation)
𝐸𝑖 = 𝐷𝑖 ⨁RBS𝑖
Decryption using the same RBS
𝐷𝑖 = 𝐸𝑖 ⨁RBS𝑖
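
A minimal Python sketch of this encrypt/decrypt round trip on bit lists (the sample data and RBS bits are arbitrary illustrations, not from the textbook):

def xor_bits(bits, rbs):
    # Bitwise ExOR of a bit sequence with a random binary sequence of the same length
    return [b ^ r for b, r in zip(bits, rbs)]

data = [0, 0, 0, 0, 1, 0, 1, 1]
rbs  = [1, 1, 1, 0, 0, 1, 0, 1]
encrypted = xor_bits(data, rbs)       # Ei = Di XOR RBSi
decrypted = xor_bits(encrypted, rbs)  # Di = Ei XOR RBSi (the same RBS recovers the data)
print(decrypted == data)              # True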

22/31
11.3 Encryption Textbook Sec. 9.3

◆ Example 9.15 Encrypting data


Data composed of 16 hexadecimal numbers (0, 1, …, 9, A, B, …, F)
Each number can be represented as a 4-bit binary number
Data size: 4 × 16 = 64 bits

0000
⊕ 1110
-------
1110
23/31
11.3 Encryption Textbook Sec. 9.3

◆ Example 9.18 Encrypting images


Encryption of a color image of a cat
500  500 = 250,000 pixels, 256 levels for red, green, blue
Data sequence Di 는 3  8  (2.5105)= 6  106 bits

(Figures: original image, encrypted image, decrypted image)

24/31
11.3 Encryption Textbook Sec. 9.3

11.3.1 Simulating Randomness by Computer


“Computers generate random numbers using an algorithm called a ‘pseudo-random number
generator (PRNG)’.”

❖ Good PRNG
1. Even if a deterministic formula is used to generate random numbers, the generation of random
numbers should not be predictable from previous random values.
2. Generated random numbers must be evenly distributed over the interval [0, nmax).
3. Random number generation must be repeatable. → So that you can generate the same sequence of
random numbers regardless of time and space, if desired.

❖ Generating Random Integers


⚫ A sequence of n random integers: X1, X2, X3, ….., Xn
⚫ Formula for generating random integers
X_i = mod(α·X_{i−1} + β, h)
▪ α, β : factors (If β is selected incorrectly, 0 may continue to be generated.)
▪ h : divisor in the modulus operation
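
A one-line Python sketch of this recurrence (next_random is an illustrative name; the parameters used below are those of Example 9.19 on a later slide):

def next_random(x_prev, alpha, beta, h):
    # X_i = mod(alpha * X_{i-1} + beta, h), an integer in [0, h-1]
    return (alpha * x_prev + beta) % h

x = 6789                                   # seed X0
for _ in range(3):
    x = next_random(x, 97531, 54321, 10000)
    print(x)                               # 2280, 5001, 6852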

25/31
11.3 Encryption Textbook Sec. 9.3

11.3.1 Simulating Randomness by Computer


❖ Random number generation using PRNG
⚫ Requires starting value X0 to generate X1 → X0 : seed or encryption key
▪ Depending on the seed, different random number sequences are generated.
▪ If you use the same seed, you can generate the same random number sequence.
→ The seed is very important because the same random sequence must be used in the source
and destination for encryption.

➔ Range of generated random numbers: 0 ≤ Xi ≤ h−1

[Note] If the generated random number Xi is equal to the previously generated random
number, then the PRNG repeats the same sequence of random numbers.

How can random numbers generated from a deterministic formula be random?


→ Since the modulo-h operation is used, random numbers with adequate randomness can be generated.

26/31
11.3 Encryption Textbook Sec. 9.3

11.3.1 Simulating Randomness by Computer


◆ Example 9.19 A pseudo-random sequence of 4-digit integers
Let's randomly generate a 4-digit decimal number.
Set h = 10,000, α = 97,531, β = 54,321, X0 = 6,789 (seed)
Random number range: [0, 9999]

X_1 = mod(97,531 × 6,789 + 54,321, 10,000) = 2,280
X_2 = mod(97,531 × 2,280 + 54,321, 10,000) = 5,001
X_3 = mod(97,531 × 5,001 + 54,321, 10,000) = 6,852
⋮

(Figure: result of generating 101 random numbers)

27/31
11.3 Encryption Textbook Sec. 9.3

11.3.1 Simulating Randomness by Computer


❖ Generating Random Bits (RBS)
⚫ After generating random integers in the interval [0,h) using the PRNG formula,
⚫ Random binary sequence (RBS) is generated based on the following criteria.
If 0 ≤ 𝑋𝑖 < ℎ/2 → RBS𝑖 = 0
If ℎ/2 ≤ 𝑋𝑖 < ℎ → RBS𝑖 = 1

◆ Example 9.20 A pseudo-random sequence of random bits


Let's generate random bits using the result of Example 9.19
⚫ Generated results
X_1 = mod(97,531 × 6,789 + 54,321, 10,000) = 2,280  →  RBS_1 = 0
X_2 = mod(97,531 × 2,280 + 54,321, 10,000) = 5,001  →  RBS_2 = 1
X_3 = mod(97,531 × 5,001 + 54,321, 10,000) = 6,852  →  RBS_3 = 1
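
The same mapping in Python, applied to the three integers above (h = 10,000):

h = 10000
for x in (2280, 5001, 6852):         # X1, X2, X3 from Example 9.19
    print(1 if x >= h / 2 else 0)    # RBS bits: 0, 1, 1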

28/31
11.3 Encryption Textbook Sec. 9.3

11.3.2 Transmitting the Key (Almost) Securely


“For encrypted communication, the source encrypts the original data with an RBS and the receiver
decodes the encrypted data with the same RBS.”

❖ Same RBS required


⚫ PRNG requires a seed (or key) as an initial value to generate RBS.
→ Therefore, the key must be safely transmitted from the source to the destination!!

❖ Process of transmitting the key over an unsecure channel

(Figure: T sends X to R, and R sends Y to T, over the unsecure channel.)

29/31
11.3 Encryption Textbook Sec. 9.3

11.3.2 Transmitting the Key (Almost) Securely


❖ Process of transmitting the key over an unsecure channel
It exploits an interesting property of modular arithmetic.
⚫ Both sides (transmitter T and receiver R)  source and destination
▪ Both sides share the values of a and N in advance (large integers, 128-bit or 256-bit numbers)
▪ Third parties may also know them.

⚫ T secretly chooses an integer x and computes X with the formula below,


R also secretly chooses an integer y and calculates Y with the same formula.
X = a^x mod(N),   Y = a^y mod(N)

⚫ T transfers the calculation result X to R, and at the same time R transfers Y to T.


T calculates KT from Y value with the formula below, and R calculates KR from X value.
K_T = Y^x mod(N),   K_R = X^y mod(N)

The two results agree with each other according to the interesting property of modular arithmetic
(both equal a^(xy) mod(N)):
K_T = K_R

Use this value as a key to generate RBS!
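
A Python sketch of this exchange with small illustrative numbers (a, N, x, y below are arbitrary choices, not the textbook's; pow(base, exp, mod) computes base^exp mod N):

a, N = 5, 23          # public values shared in advance
x, y = 6, 15          # secrets chosen by T and R, never transmitted

X = pow(a, x, N)      # T computes X = a^x mod N and sends it to R
Y = pow(a, y, N)      # R computes Y = a^y mod N and sends it to T

K_T = pow(Y, x, N)    # T: K_T = Y^x mod N
K_R = pow(X, y, N)    # R: K_R = X^y mod N
print(K_T == K_R)     # True: both equal a^(xy) mod N and can seed the PRNG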


30/31
11.3 Encryption Textbook Sec. 9.3

11.3.2 Transmitting the Key (Almost) Securely


❖ Is there any way for a third party to obtain the key?
⚫ The third party only knows the values of a, N, X, and Y, but not x and y.
Therefore, he needs to find an (x,y) pair that satisfies the following equation.
16^x mod(23) = 9^y mod(23)
▪ The third party will try to find the (x,y) pair that satisfies the above equation, substituting
each possible (x,y) value into the equation.
▪ If x and y are integers in the interval [1,100], there are a total of 10^4 possible (x,y) pairs.
Therefore, on average, 5 × 10^3 operations are required to find the key!
⚫ In general, to find a key, a third party needs to find a pair of (x,y) that satisfies the
following equation.
X^y mod(N) = Y^x mod(N)

⚫ If x and y are 128-bit numbers, how long does it take to find a solution?
▪ Number range: 0 ~ 2^128 ≈ 0 ~ 2.56 × 10^38
▪ Total number of possible (x,y) pairs: (2.56 × 10^38)^2 ≈ 6.55 × 10^76
▪ If one operation (checking) takes 1 μs, the total time required is
3.3 × 10^76 × 10^-6 s = 3.3 × 10^70 s ≈ 10^63 years
→ Attempts to find the encryption key are futile because it takes too long and costs too much money.

31/31
