Go to the `src/` directory and run the script `python main.py`.
Appending the command line parameter `gpu` additionally executes best path decoding on the GPU.
Expected results:
```
BEST PATH GPU : "the fak friend of the fomly hae tC"
```
* Prefix Search Decoding: best-first search through tree of labelings. File: `PrefixSearch.py`\[1\]
* Beam Search Decoding: iteratively searches for best labeling in a tree of labelings, optionally uses a character-level LM. File: `BeamSearch.py`\[2\]\[5\]
* Token Passing: searches for most probable word sequence. The words are constrained to those contained in a dictionary. Can be extended to use a word-level LM. File: `TokenPassing.py`\[1\]
* Lexicon Search: computes approximation with best path decoding to find similar words in dictionary. Returns the one with highest score. File: `LexiconSearch.py`\[3\]
* Loss: calculates probability and loss of a given text in the RNN output. File: `Loss.py`\[1\]\[6\]
* Word Beam Search: for a TensorFlow implementation, see the repository [CTCWordBeamSearch](https://github.com/githubharald/CTCWordBeamSearch)
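As a rough illustration of the simplest of these algorithms, best path decoding can be sketched in a few lines of Python. This is a minimal sketch, not the repository's `BestPath.py`; the matrix layout (time-steps × labels) and the blank index are assumptions:

```python
import numpy as np

def best_path_decode(mat, labels, blank_idx):
    """Greedy CTC decoding: pick the most probable label per time-step,
    then collapse repeated labels and drop blanks."""
    best = [int(t.argmax()) for t in mat]  # most probable label index per time-step
    out, prev = [], None
    for i in best:
        if i != prev and i != blank_idx:
            out.append(labels[i])
        prev = i
    return ''.join(out)

# Toy matrix: 4 time-steps, labels "a" and blank "-".
mat = np.array([[0.9, 0.1],
                [0.9, 0.1],
                [0.2, 0.8],
                [0.9, 0.1]])
print(best_path_decode(mat, ['a', '-'], blank_idx=1))  # → "aa"
```

The collapse step is what makes CTC work: repeated labels without a separating blank merge into one, so the blank at the third time-step is what allows "aa" to be recognized rather than "a".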
## Choosing the right algorithm
[This paper](./doc/comparison.pdf) compares beam search decoding and token passing.
It gives suggestions on when to use best path decoding, beam search decoding and token passing.
## Testcases
The RNN output matrix of the **Mini example** testcase contains 2 time-steps (t0 and t1) and 3 labels (a, b and - representing the CTC-blank).
Best path decoding (see left figure) takes the most probable label per time-step which gives the path "--" and therefore the recognized text "" with probability 0.6\*0.6=0.36.
Beam search, prefix search and token passing calculate the probability of labelings.
For the labeling "a" these algorithms sum over the paths "-a", "a-" and "aa" (see right figure) with probability 0.6\*0.4+0.4\*0.6+0.4\*0.4=0.64.
The only path which gives "" still has probability 0.36, therefore "a" is the result returned by beam search, prefix search and token passing.
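These numbers can be checked directly. Assuming the mini example matrix assigns P(blank)=0.6 and P(a)=0.4 at both time-steps (values inferred from the probabilities stated above):

```python
import numpy as np

# Mini example output matrix: rows = time-steps t0, t1; columns = labels a, b, blank.
mat = np.array([[0.4, 0.0, 0.6],
                [0.4, 0.0, 0.6]])

# Best path decoding: the most probable label is blank at both time-steps,
# so the recognized text is "" with probability 0.6 * 0.6.
p_best_path = float(np.prod(mat.max(axis=1)))

# Labeling "a": sum over all paths that collapse to "a", i.e. "-a", "a-", "aa".
p_a = mat[0, 2] * mat[1, 0] + mat[0, 0] * mat[1, 2] + mat[0, 0] * mat[1, 0]

print(p_best_path, p_a)  # ≈ 0.36 and 0.64
```

Since 0.64 > 0.36, the algorithms that sum over paths prefer "a" over the empty text.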

The **Word example** testcase contains a single word from the IAM Handwriting Database \[4\].
It is used to test lexicon search \[3\].
RNN output was generated with the [SimpleHTR](https://github.com/githubharald/SimpleHTR) model.
Lexicon search first computes an approximation with best path decoding, then searches for similar words in a dictionary, and finally scores these candidates by computing their loss, returning the most probable dictionary word.
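This pipeline can be sketched as follows. It is a simplification of `LexiconSearch.py`: `difflib.get_close_matches` stands in for the repository's similar-word search, and the scoring function is passed in as a parameter rather than computed from the CTC loss:

```python
import difflib

def lexicon_search(best_path_approx, dictionary, score_fn):
    """Sketch of lexicon search: find dictionary words similar to the
    best path approximation, then return the highest-scoring candidate."""
    candidates = difflib.get_close_matches(best_path_approx, dictionary,
                                           n=5, cutoff=0.5)
    if not candidates:  # nothing similar found: fall back to the approximation
        return best_path_approx
    return max(candidates, key=score_fn)

# Toy usage: in the real algorithm the scores come from the CTC loss.
scores = {'aircraft': 0.9, 'airplane': 0.2, 'aircrew': 0.1}
word = lexicon_search('aircrapt', ['aircraft', 'airplane', 'aircrew'],
                      lambda w: scores.get(w, 0.0))
print(word)  # → "aircraft"
```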
Best path decoding outputs "aircrapt". Lexicon search finds similar words such as "aircraft", "airplane", ... in the dictionary, calculates a score for each of them and finally returns "aircraft", which is the correct result.