(EIE529) Assignment 2 Solution
(5 marks)
2) Note that 𝑓(𝑥, 𝑦) has a size of 𝑀 × 𝑁 (𝑥 = 0, ⋯ , 𝑀 − 1, 𝑦 = 0, ⋯ , 𝑁 − 1) where 𝑀, 𝑁 are
even numbers, and 𝑔(𝑥, 𝑦) = 𝑓(𝑥, 𝑦) + 2𝑓(𝑥 − 1, 𝑦) + 𝑓(𝑥 − 2, 𝑦) + 𝜂(𝑥, 𝑦):
a) We first take the $M \times N$ 2D-DFT on both sides of $g(x,y) = f(x,y) + 2f(x-1,y) + f(x-2,y) + \eta(x,y)$, which gives

$$G(u,v) = F(u,v) + 2e^{-j\frac{2\pi}{M}u}F(u,v) + e^{-j\frac{2\pi}{M}2u}F(u,v) + N(u,v) = F(u,v)\left(1 + 2e^{-j\frac{2\pi}{M}u} + e^{-j\frac{2\pi}{M}2u}\right) + N(u,v).$$

Since $1 + 2e^{-j\theta} + e^{-j2\theta} = \left(1 + e^{-j\theta}\right)^2 = 2e^{-j\theta}(1 + \cos\theta)$ with $\theta = \frac{2\pi u}{M}$, the degradation function is $H(u,v) = 2e^{-j\frac{2\pi}{M}u}\left(1 + \cos\left(\frac{2\pi u}{M}\right)\right)$, and the inverse-filter estimate is

$$\hat{F}_1(u,v) = G(u,v)R_1(u,v) = \frac{G(u,v)}{2e^{-j\frac{2\pi}{M}u}\left(1 + \cos\left(\frac{2\pi u}{M}\right)\right)}.$$
(6 marks)
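For reference, the derivation in a) can be checked numerically. The following Python/NumPy sketch builds the degraded image, confirms the closed form of H(u, v), and applies the inverse filter R1(u, v); the sizes M = N = 64, the random test image, and the omission of the noise term are assumptions made only for this illustration:

import numpy as np

# Illustrative even sizes; the derivation holds for any even M, N.
M, N = 64, 64
rng = np.random.default_rng(0)
f = rng.random((M, N))                                   # test image f(x, y)

# Degradation g(x, y) = f(x, y) + 2 f(x-1, y) + f(x-2, y) (circular shifts, noise omitted)
g = f + 2 * np.roll(f, 1, axis=0) + np.roll(f, 2, axis=0)

# Frequency response H(u, v) and its closed form 2 exp(-j 2 pi u / M)(1 + cos(2 pi u / M))
u = np.arange(M).reshape(-1, 1)
H = 1 + 2 * np.exp(-2j * np.pi * u / M) + np.exp(-2j * np.pi * 2 * u / M)
H_cf = 2 * np.exp(-2j * np.pi * u / M) * (1 + np.cos(2 * np.pi * u / M))
print(np.allclose(H, H_cf))                              # True: the two forms agree

# Inverse filtering F1(u, v) = G(u, v) / H(u, v), skipping the frequencies where H = 0
G = np.fft.fft2(g)
Hf = np.broadcast_to(H, (M, N))
ok = np.abs(Hf) > 1e-8
F1 = np.where(ok, G / np.where(ok, Hf, 1), 0)
f1 = np.real(np.fft.ifft2(F1))
# f1 recovers f except for the u = M/2 frequency row, which H = 0 wipes out (see part b)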
b) The frequencies for which the restored image cannot be estimated using the inverse filtering technique are $u = \frac{M}{2}$ and $v = 0, 1, \cdots, N-1$.
(4 marks)
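A short numerical check of b), again with the illustrative size M = N = 64: the magnitude of H(u, v) vanishes only at u = M/2 and does not depend on v.

import numpy as np

M, N = 64, 64                                              # illustrative even sizes
u = np.arange(M)
H_row = 2 * np.exp(-2j * np.pi * u / M) * (1 + np.cos(2 * np.pi * u / M))
print(np.flatnonzero(np.isclose(np.abs(H_row), 0)))        # [32] -> u = M/2 only
# H(u, v) is independent of v, so the whole row u = M/2 (v = 0, ..., N-1) cannot be restored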
c) Since $p(x,y) * f(x,y) = -f(x,y-1) + 2f(x,y) - f(x,y+1)$ (see the lecture notes on Image Enhancement), the frequency response of $p(x,y)$ can be derived in the same way as in a), which gives $P(u,v) = 2 - e^{-j\frac{2\pi}{N}v} - e^{j\frac{2\pi}{N}v} = 2 - 2\cos\left(\frac{2\pi v}{N}\right)$. As a result, the image restoration filter based on the regularization approach is given by

$$R_2(u,v) = \frac{H^*(u,v)}{|H(u,v)|^2 + \lambda|P(u,v)|^2} = \frac{e^{j\frac{2\pi}{M}u}\left(1 + \cos\left(\frac{2\pi u}{M}\right)\right)}{2\left(1 + \cos\left(\frac{2\pi u}{M}\right)\right)^2 + 2\lambda\left(1 - \cos\left(\frac{2\pi v}{N}\right)\right)^2}.$$
(6 marks)
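A sketch of c) under the same illustrative assumptions (M = N = 64, and an arbitrary regularization weight lambda = 0.1 chosen only for this check), comparing the filter assembled from H and P with the closed form above:

import numpy as np

M, N, lam = 64, 64, 0.1
u = np.arange(M).reshape(-1, 1)              # frequency index u along rows
v = np.arange(N).reshape(1, -1)              # frequency index v along columns

H = 2 * np.exp(-2j * np.pi * u / M) * (1 + np.cos(2 * np.pi * u / M))   # degradation H(u, v)
P = 2 - 2 * np.cos(2 * np.pi * v / N)                                    # constraint P(u, v)

den = np.abs(H) ** 2 + lam * np.abs(P) ** 2           # broadcasts to an M x N array
mask = den > 1e-12                                     # zero only at (u, v) = (M/2, 0); see d)
R2 = np.where(mask, np.conj(H) / np.where(mask, den, 1), 0)

# Closed form derived above; it agrees with R2 wherever the denominator is nonzero
with np.errstate(divide='ignore', invalid='ignore'):
    R2_cf = (np.exp(2j * np.pi * u / M) * (1 + np.cos(2 * np.pi * u / M))) / (
        2 * (1 + np.cos(2 * np.pi * u / M)) ** 2
        + 2 * lam * (1 - np.cos(2 * np.pi * v / N)) ** 2)
print(np.allclose(R2[mask], R2_cf[mask]))              # True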
d) The frequency/frequencies for which the restored image cannot be estimated using the regularization approach are the ones with $\cos\left(\frac{2\pi u}{M}\right)$ equal to $-1$ and $\cos\left(\frac{2\pi v}{N}\right)$ equal to $1$ (so that the denominator is zero), i.e., $u = \frac{M}{2}$ and $v = 0$.
(4 marks)
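A quick check of d) with the same illustrative M, N and lambda: the denominator of R2(u, v) vanishes only at (u, v) = (M/2, 0).

import numpy as np

M, N, lam = 64, 64, 0.1                      # same illustrative values as in the sketch for c)
u = np.arange(M).reshape(-1, 1)
v = np.arange(N).reshape(1, -1)
den = 2 * (1 + np.cos(2 * np.pi * u / M)) ** 2 + 2 * lam * (1 - np.cos(2 * np.pi * v / N)) ** 2
print(np.argwhere(np.isclose(den, 0)))       # [[32  0]] -> only (u, v) = (M/2, 0)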
b) To minimize $D$ by optimizing the variables $\{r_k\}$ and $\{d_k\}$, we take the derivative of $D$ with respect to these variables. In particular, to show the first expression, we set $\frac{\partial D}{\partial d_k}$ to zero, i.e.:

$$\frac{\partial D}{\partial d_k} = (d_k - r_k)^2 p(d_k) - (d_k - r_{k+1})^2 p(d_k) = 0 \;\Rightarrow\; -2 d_k r_k + 2 d_k r_{k+1} + r_k^2 - r_{k+1}^2 = 0$$
$$\Rightarrow\; 2 d_k (r_{k+1} - r_k) - (r_k + r_{k+1})(r_{k+1} - r_k) = 0 \;\Rightarrow\; d_k = \frac{r_k + r_{k+1}}{2}, \quad k = 1, \cdots, L-1.$$
To obtain the second expression, we set $\frac{\partial D}{\partial r_k}$ to zero, i.e.:

$$\frac{\partial D}{\partial r_k} = \int_{d_{k-1}}^{d_k} -2(x - r_k)\,p(x)\,dx = 0 \;\Rightarrow\; \int_{d_{k-1}}^{d_k} x\,p(x)\,dx = r_k \int_{d_{k-1}}^{d_k} p(x)\,dx \;\Rightarrow\; r_k = \frac{\int_{d_{k-1}}^{d_k} x\,p(x)\,dx}{\int_{d_{k-1}}^{d_k} p(x)\,dx}, \quad k = 1, \cdots, L.$$
(8 marks)
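The two conditions derived above can be alternated numerically (the Lloyd-Max iteration). The sketch below uses an illustrative choice of L = 4 levels and a zero-mean, unit-variance Gaussian pdf; neither choice comes from the question and both are assumptions for this example only.

import numpy as np

L = 4                                          # number of quantizer levels (illustrative)
x = np.linspace(-6.0, 6.0, 200001)             # dense grid standing in for the real line
px = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)  # assumed pdf p(x): standard Gaussian

r = np.linspace(-2.0, 2.0, L)                  # initial reconstruction levels r_1..r_L
for _ in range(200):
    # First condition: decision levels are midpoints of adjacent reconstruction levels
    d = np.concatenate(([x[0]], (r[:-1] + r[1:]) / 2, [x[-1]]))
    # Second condition: each reconstruction level is the centroid of p(x) over its interval
    for k in range(L):
        m = (x >= d[k]) & (x < d[k + 1])
        r[k] = np.sum(x[m] * px[m]) / np.sum(px[m])

print(np.round(r, 3))       # converges to the known 4-level Gaussian levels, about +/-0.453, +/-1.510
print(np.round(d[1:-1], 3)) # interior decision levels, about -0.982, 0, 0.982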
2) Consider a coding system with an information source that generates 5 symbols $\{a_1, a_2, a_3, a_4, a_5\}$ with probabilities $\{0.1, 0.15, 0.25, 0.20, 0.30\}$, respectively.
a) The entropy of the information source can be obtained via the entropy formula:

$$H = -\sum_{j=1}^{5} P(a_j)\log_2 P(a_j) = -(0.1\log_2 0.1 + 0.15\log_2 0.15 + 0.25\log_2 0.25 + 0.2\log_2 0.2 + 0.3\log_2 0.3) = 2.2282 \text{ (bits)}.$$
(3 marks)
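A one-line check of the entropy value:

import numpy as np

# Direct evaluation of H for the given source probabilities
p = np.array([0.1, 0.15, 0.25, 0.20, 0.30])
print(round(-np.sum(p * np.log2(p)), 4))       # 2.2282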
b) The Huffman coding process is shown below. The codeword the coding system would generate for the input sequence $\{a_1, a_2, a_4, a_5, a_3\}$ is 101100110001. The average number of bits for coding this input sequence (i.e., the average number of bits for coding a symbol) is $\frac{3+3+2+2+2}{5} = 2.4$ bits per symbol.
(6 marks)
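A short sketch that rebuilds a Huffman code for these probabilities. Huffman trees are not unique, so the exact bit pattern may differ from the one quoted above (it depends on how branches are labelled 0/1), but the codeword lengths and the 2.4 bits/symbol average are the same.

import heapq

probs = {'a1': 0.10, 'a2': 0.15, 'a3': 0.25, 'a4': 0.20, 'a5': 0.30}

# Each heap entry is (probability, tie-breaker, {symbol: partial codeword})
heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
i = len(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)           # two least probable nodes
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: '0' + c for s, c in c1.items()}
    merged.update({s: '1' + c for s, c in c2.items()})
    heapq.heappush(heap, (p1 + p2, i, merged))
    i += 1

code = heap[0][2]
seq = ['a1', 'a2', 'a4', 'a5', 'a3']
bits = ''.join(code[s] for s in seq)
print(code)                                   # codeword lengths: a1:3, a2:3, a3:2, a4:2, a5:2
print(bits, len(bits) / len(seq))             # 12 bits in total -> 2.4 bits per symbol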
c) The arithmetic coding process is shown below, and the final message symbol in the input sequence narrows the range to $[0.019825, 0.02005)$, i.e., any number within this subinterval (e.g., 0.02000) can be used to represent the message.
(6 marks)
For your information, the value with the shortest binary form in the range $[0.019825, 0.02005)$ is “.00000101001” in binary, which is approximately 0.02 (exactly 0.02001953125) in decimal.
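A sketch of the interval-narrowing steps, assuming the symbol sub-intervals are stacked over [0, 1) in the order a1, ..., a5 (an ordering assumed here because it reproduces the range quoted above); the last line evaluates the 11-bit binary value.

probs = {'a1': 0.10, 'a2': 0.15, 'a3': 0.25, 'a4': 0.20, 'a5': 0.30}

# Cumulative sub-interval [lo, hi) assigned to each symbol over [0, 1)
cum, c = {}, 0.0
for s, p in probs.items():
    cum[s] = (c, c + p)
    c += p

low, high = 0.0, 1.0
for s in ['a1', 'a2', 'a4', 'a5', 'a3']:      # encode the message symbol by symbol
    width = high - low
    lo_s, hi_s = cum[s]
    low, high = low + width * lo_s, low + width * hi_s

print(low, high)                              # ~0.019825, ~0.02005 (up to floating-point rounding)
print(0b101001 / 2 ** 11)                     # .00000101001 binary = 41/2048 = 0.02001953125, inside the range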