|
32 | 32 | "## CONTENTS\n",
|
33 | 33 | "\n",
|
34 | 34 | "* Overview\n",
|
35 |
| - "* HITS" |
| 35 | + "* HITS\n", |
| 36 | + "* Question Answering" |
36 | 37 | ]
|
37 | 38 | },
|
38 | 39 | {
|
|
52 | 53 | "\n",
|
53 | 54 | "### Overview\n",
|
54 | 55 | "\n",
|
55 |
| - "**Hyperlink-Induced Topic Search** (or HITS for short) is an algorithm for information retrieval and page ranking. You can read more on information retrieval in the [text](https://github.com/aimacode/aima-python/blob/master/text.ipynb) notebook. Essentially, given a collection of documents and a user's query, such systems return to the user the documents most relevant to what the user needs. The HITS algorithm differs from a lot of other similar ranking algorithms (like Google's *Pagerank*) as the page ratings in this algorithm are dependent on the given query. This means that for each new query the result pages must be computed anew. This cost might be prohibitive for many modern search engines, so a lot steer away from this approach.\n", |
| 56 | + "**Hyperlink-Induced Topic Search** (or HITS for short) is an algorithm for information retrieval and page ranking. You can read more on information retrieval in the [text notebook](https://github.com/aimacode/aima-python/blob/master/text.ipynb). Essentially, given a collection of documents and a user's query, such systems return to the user the documents most relevant to what the user needs. The HITS algorithm differs from a lot of other similar ranking algorithms (like Google's *Pagerank*) as the page ratings in this algorithm are dependent on the given query. This means that for each new query the result pages must be computed anew. This cost might be prohibitive for many modern search engines, so a lot steer away from this approach.\n", |
56 | 57 | "\n",
|
57 | 58 | "HITS first finds a list of relevant pages to the query and then adds pages that link to or are linked from these pages. Once the set is built, we define two values for each page. **Authority** on the query, the degree of pages from the relevant set linking to it and **hub** of the query, the degree that it points to authoritative pages in the set. Since we do not want to simply count the number of links from a page to other pages, but we also want to take into account the quality of the linked pages, we update the hub and authority values of a page in the following manner, until convergence:\n",
|
58 | 59 | "\n",
|
|
211 | 212 | "source": [
|
212 | 213 | "The top score is 0.82 by \"C\". This is the most relevant page according to the algorithm. You can see that the pages it links to, \"A\" and \"D\", have the two highest authority scores (therefore \"C\" has a high hub score) and the pages it is linked from, \"B\" and \"E\", have the highest hub scores (so \"C\" has a high authority score). By combining these two facts, we get that \"C\" is the most relevant page. It is worth noting that it does not matter if the given page contains the query words, just that it links and is linked from high-quality pages."
|
213 | 214 | ]
|
| 215 | + }, |
| 216 | + { |
| 217 | + "cell_type": "markdown", |
| 218 | + "metadata": {}, |
| 219 | + "source": [ |
| 220 | + "## QUESTION ANSWERING\n", |
| 221 | + "\n", |
| 222 | + "**Question Answering** is a type of Information Retrieval system, where we have a question instead of a query and instead of relevant documents we want the computer to return a short sentence, phrase or word that answers our question. To better understand the concept of question answering systems, you can first read the \"Text Models\" and \"Information Retrieval\" section from the [text notebook](https://github.com/aimacode/aima-python/blob/master/text.ipynb).\n", |
| 223 | + "\n", |
| 224 | + "A typical example of such a system is `AskMSR` (Banko *et al.*, 2002), a system for question answering that performed admirably against more sophisticated algorithms. The basic idea behind it is that a lot of questions have already been answered in the web numerous times. The system doesn't know a lot about verbs, or concepts or even what a noun is. It knows about 15 different types of questions and how they can be written as queries. It can rewrite [Who was George Washington's second in command?] as the query [\\* was George Washington's second in command] or [George Washington's second in command was \\*].\n", |
| 225 | + "\n", |
| 226 | + "After rewriting the questions, it issues these queries and retrieves the short text around the query terms. It then breaks the result into 1, 2 or 3-grams. Filters are also applied to increase the chances of a correct answer. If the query starts with \"who\", we filter for names, if it starts with \"how many\" we filter for numbers and so on. We can also filter out the words appearing in the query. For the above query, the answer \"George Washington\" is wrong, even though it is quite possible the 2-gram would appear a lot around the query terms.\n", |
| 227 | + "\n", |
| 228 | + "Finally, the different results are weighted by the generality of the queries. The result from the general boolean query [George Washington OR second in command] weighs less that the more specific query [George Washington's second in command was \\*]. As an answer we return the most highly-ranked n-gram." |
| 229 | + ] |
214 | 230 | }
|
215 | 231 | ],
|
216 | 232 | "metadata": {
|
|