
Machine Learning

Exercises: HMM
Laura Kallmeyer

Summer 2016, Heinrich-Heine-Universität Düsseldorf

Exercise 1 Consider the following HMM for POS tagging:

The HMM has states V, N and A, plus the special states start and end.

Transition probabilities (all transitions not listed have probability 0):

    P(start→V) = 0.4    P(start→N) = 0.2    P(start→A) = 0.4
    P(V→N) = 0.4        P(V→A) = 0.3        P(V→end) = 0.3
    P(N→V) = 0.3        P(N→N) = 0.5        P(N→end) = 0.2
    P(A→N) = 0.5        P(A→A) = 0.3        P(A→end) = 0.2

Emission probabilities:

    P(chief|V) = 0      P(talks|V) = 0.4    ...
    P(chief|N) = 0.2    P(talks|N) = 0.3    ...
    P(chief|A) = 0.2    P(talks|A) = 0      ...

1. Given this HMM, calculate the forward and backward probabilities α and β for the observation
sequence “chief talks”.
2. What is the probability of this sequence? How can this probability be obtained from the α and β
tables?

Solution:

1.  α:      t = 1      t = 2            β:      t = 1        t = 2
    V       0          4.8 · 10⁻³       V       2.4 · 10⁻²   0.3
    N       4 · 10⁻²   1.8 · 10⁻²       N       6.6 · 10⁻²   0.2
    A       8 · 10⁻²   0                A       3 · 10⁻²     0.2
2.  The sequence probability is obtained by summing α · β over all states at any
    fixed time step, e.g. at t = 1:

    P(chief talks) = 0 · 2.4 · 10⁻² + 4 · 10⁻² · 6.6 · 10⁻² + 8 · 10⁻² · 3 · 10⁻² = 5.04 · 10⁻³

    or, equivalently, at t = 2:

    P(chief talks) = 4.8 · 10⁻³ · 0.3 + 1.8 · 10⁻² · 0.2 + 0 · 0.2 = 5.04 · 10⁻³
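These values can be checked mechanically. Below is a minimal Python sketch of the two recursions, under an assumed dictionary encoding of the model; the state and word names follow the exercise, while the data layout and the helper tp are my own scaffolding, not part of the original material.

    # Forward-backward for the POS HMM of Exercise 1 (sketch).
    states = ["V", "N", "A"]

    # Transition probabilities, including the special start/end states;
    # missing pairs have probability 0.
    trans = {
        ("start", "V"): 0.4, ("start", "N"): 0.2, ("start", "A"): 0.4,
        ("V", "N"): 0.4, ("V", "A"): 0.3, ("V", "end"): 0.3,
        ("N", "V"): 0.3, ("N", "N"): 0.5, ("N", "end"): 0.2,
        ("A", "N"): 0.5, ("A", "A"): 0.3, ("A", "end"): 0.2,
    }

    # Emission probabilities for the two observed words.
    emit = {
        ("V", "chief"): 0.0, ("V", "talks"): 0.4,
        ("N", "chief"): 0.2, ("N", "talks"): 0.3,
        ("A", "chief"): 0.2, ("A", "talks"): 0.0,
    }

    obs = ["chief", "talks"]
    T = len(obs)

    def tp(i, j):
        """Transition probability, 0 for edges absent from the diagram."""
        return trans.get((i, j), 0.0)

    # Forward pass: alpha[t][q] = P(o_1 .. o_{t+1}, state q at step t+1).
    alpha = [{q: tp("start", q) * emit[(q, obs[0])] for q in states}]
    for t in range(1, T):
        alpha.append({q: sum(alpha[t - 1][r] * tp(r, q) for r in states)
                         * emit[(q, obs[t])] for q in states})

    # Backward pass: beta[t][q] = P(o_{t+2} .. o_T, end | state q at step t+1).
    beta = [None] * T
    beta[T - 1] = {q: tp(q, "end") for q in states}
    for t in range(T - 2, -1, -1):
        beta[t] = {q: sum(tp(q, r) * emit[(r, obs[t + 1])] * beta[t + 1][r]
                          for r in states) for q in states}

    # The sequence probability can be read off at either time step.
    for t in range(T):
        print(sum(alpha[t][q] * beta[t][q] for q in states))  # 0.00504 twice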

Exercise 2 Now consider again the ice cream example from the course slides:
The HMM has states H (hot) and C (cold), plus the special states start and end.

Transition probabilities (all transitions not listed have probability 0):

    P(start→H) = 0.4    P(start→C) = 0.6
    P(H→H) = 0.3        P(H→C) = 0.3        P(H→end) = 0.4
    P(C→H) = 0.4        P(C→C) = 0.4        P(C→end) = 0.2

Emission probabilities:

    P(1|H) = 0.5        P(1|C) = 0.5
    P(3|H) = 0.5        P(3|C) = 0.5

Assume that the observed sequence is 31.


The forward and backward matrices for this input are:
    α:      t = 1   t = 2           β:      t = 1      t = 2
    H       0.2     9 · 10⁻²        H       9 · 10⁻²   0.4
    C       0.3     9 · 10⁻²        C       0.12       0.2

    P(31) = 5.4 · 10⁻²
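These matrices come out of the same two recursions; re-parameterizing the Exercise 1 sketch for the ice cream model (the encoding is again my own choice) is enough:

    # Ice cream model in the same encoding as the Exercise 1 sketch.
    states = ["H", "C"]
    trans = {
        ("start", "H"): 0.4, ("start", "C"): 0.6,
        ("H", "H"): 0.3, ("H", "C"): 0.3, ("H", "end"): 0.4,
        ("C", "H"): 0.4, ("C", "C"): 0.4, ("C", "end"): 0.2,
    }
    emit = {("H", "1"): 0.5, ("H", "3"): 0.5,
            ("C", "1"): 0.5, ("C", "3"): 0.5}
    obs = ["3", "1"]
    # Running the alpha/beta recursions above on these inputs yields
    # alpha = [{H: 0.2, C: 0.3}, {H: 0.09, C: 0.09}],
    # beta  = [{H: 0.09, C: 0.12}, {H: 0.4, C: 0.2}],
    # and P(31) = sum_q alpha[1][q] * beta[1][q] = 0.054.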

Calculate one iteration of the forward-backward EM algorithm in order to estimate new probabilities.
Solution:
E-step:

    γ:   t    H      C              ξ₁:           j = H    j = C
         1    0.33   0.67                i = H    0.22     0.11
         2    0.67   0.33                i = C    0.44     0.22
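In code, the E-step is just a normalization of products of entries from the α and β tables. The sketch below continues the ice cream snippet above and assumes its alpha, beta, trans, emit, obs and states are still in scope.

    # Sequence probability under the current model.
    P = sum(alpha[1][q] * beta[1][q] for q in states)  # 0.054

    # gamma[t][q]: posterior probability of being in state q at step t+1.
    gamma = [{q: alpha[t][q] * beta[t][q] / P for q in states}
             for t in range(2)]

    # xi[0][(i, j)]: posterior probability of taking the transition i -> j
    # between the first and the second observation.
    xi = [{(i, j): alpha[0][i] * trans.get((i, j), 0.0)
                   * emit[(j, obs[1])] * beta[1][j] / P
           for i in states for j in states}]

    print(gamma)  # [{'H': 0.33.., 'C': 0.67..}, {'H': 0.67.., 'C': 0.33..}]
    print(xi[0])  # {('H','H'): 0.22.., ('H','C'): 0.11..,
                  #  ('C','H'): 0.44.., ('C','C'): 0.22..}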

M-step:

    P(start→H) = 0.33   P(start→C) = 0.67
    P(H→H) = 0.22       P(H→C) = 0.11       P(H→end) = 0.67
    P(C→H) = 0.44       P(C→C) = 0.22       P(C→end) = 0.33
    P(1|H) = 0.67       P(1|C) = 0.33
    P(3|H) = 0.33       P(3|C) = 0.67

    Under the re-estimated model: P(31) = 0.11
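The re-estimation can be written down directly from γ and ξ. Continuing the sketch above: with an explicit end state, the expected number of transitions out of a state q is γ₁(q) + γ₂(q), since the visit at t = 2 ends with a transition to end; here that sum is 1 for both states, which is why the new parameters are simply the γ and ξ entries read off the tables.

    # M-step: normalize expected counts into new probabilities (sketch).
    new_trans, new_emit = {}, {}
    for q in states:
        visits = gamma[0][q] + gamma[1][q]     # expected transitions out of q
        new_trans[("start", q)] = gamma[0][q]  # expected starts in q
        for r in states:
            new_trans[(q, r)] = xi[0][(q, r)] / visits
        new_trans[(q, "end")] = gamma[1][q] / visits
        # Emissions: expected occurrences of o in q / expected visits to q.
        for o in set(obs):
            new_emit[(q, o)] = sum(gamma[t][q] for t in range(2)
                                   if obs[t] == o) / visits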

Further steps (not asked in the exercise):

E-step:

    γ:   t    H      C              ξ₁:           j = H          j = C
         1    0.11   0.89                i = H    9.86 · 10⁻²    1.22 · 10⁻²
         2    0.89   0.11                i = C    0.79           9.86 · 10⁻²

M-step:

    P(start→H) = 0.11       P(start→C) = 0.89
    P(H→H) = 9.86 · 10⁻²    P(H→C) = 1.22 · 10⁻²    P(H→end) = 0.89
    P(C→H) = 0.79           P(C→C) = 9.86 · 10⁻²    P(C→end) = 0.11
    P(1|H) = 0.89           P(1|C) = 0.11
    P(3|H) = 0.11           P(3|C) = 0.89

    Under the re-estimated model: P(31) = 0.5
E-step:

    γ:   t    H            C              ξ₁:           j = H         j = C
         1    1.9 · 10⁻³   1                   i = H    1.9 · 10⁻³    0
         2    1            1.9 · 10⁻³          i = C    1             1.9 · 10⁻³

M-step:

    P(start→H) = 1.9 · 10⁻³    P(start→C) = 1
    P(H→H) = 1.9 · 10⁻³        P(H→C) = 0             P(H→end) = 1
    P(C→H) = 1                 P(C→C) = 1.9 · 10⁻³    P(C→end) = 1.9 · 10⁻³
    P(1|H) = 1                 P(1|C) = 1.95 · 10⁻³
    P(3|H) = 1.9 · 10⁻³        P(3|C) = 1

    Under the re-estimated model: P(31) = 0.99
E-step:

    γ:   t    H    C              ξ₁:           j = H    j = C
         1    0    1                   i = H    0        0
         2    1    0                   i = C    1        0

M-step:

    P(start→H) = 0    P(start→C) = 1
    P(H→H) = 0        P(H→C) = 0    P(H→end) = 1
    P(C→H) = 1        P(C→C) = 0    P(C→end) = 0
    P(1|H) = 1        P(1|C) = 3 · 10⁻⁵
    P(3|H) = 0        P(3|C) = 1

    Under the re-estimated model: P(31) = 1
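For completeness, the iterations above can be reproduced with a compact loop that chains the E- and M-step sketches; em_step is my own helper under the same assumed encoding, not something from the course slides.

    def em_step(trans, emit, obs, states):
        """One EM iteration; returns the new model and P(obs) under the old one."""
        T = len(obs)
        tp = lambda i, j: trans.get((i, j), 0.0)
        # E-step: forward, backward, then gamma and xi.
        alpha = [{q: tp("start", q) * emit[(q, obs[0])] for q in states}]
        for t in range(1, T):
            alpha.append({q: sum(alpha[t - 1][r] * tp(r, q) for r in states)
                             * emit[(q, obs[t])] for q in states})
        beta = [None] * T
        beta[T - 1] = {q: tp(q, "end") for q in states}
        for t in range(T - 2, -1, -1):
            beta[t] = {q: sum(tp(q, r) * emit[(r, obs[t + 1])] * beta[t + 1][r]
                              for r in states) for q in states}
        P = sum(alpha[0][q] * beta[0][q] for q in states)
        gamma = [{q: alpha[t][q] * beta[t][q] / P for q in states}
                 for t in range(T)]
        xi = [{(i, j): alpha[t][i] * tp(i, j) * emit[(j, obs[t + 1])]
                       * beta[t + 1][j] / P
               for i in states for j in states} for t in range(T - 1)]
        # M-step: normalize expected counts.
        new_trans, new_emit = {}, {}
        for q in states:
            visits = sum(g[q] for g in gamma)
            new_trans[("start", q)] = gamma[0][q]
            for r in states:
                new_trans[(q, r)] = sum(x[(q, r)] for x in xi) / visits
            new_trans[(q, "end")] = gamma[T - 1][q] / visits
            for o in set(obs):
                new_emit[(q, o)] = sum(gamma[t][q] for t in range(T)
                                       if obs[t] == o) / visits
        return new_trans, new_emit, P

    # Starting from the original ice cream model defined earlier:
    for i in range(5):
        trans, emit, P = em_step(trans, emit, obs, states)
        print(i + 1, round(P, 3))  # climbs toward 1: ~0.054, 0.11, 0.5, 0.99, 1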
