|
20 | 20 | "outputs": [],
|
21 | 21 | "source": [
|
22 | 22 | "import nlp\n",
|
23 |
| - "from nlp import Page, HITS, Lexicon, Rules, Grammar" |
| 23 | + "from nlp import Page, HITS\n", |
| 24 | + "from nlp import Lexicon, Rules, Grammar, ProbLexicon, ProbRules, ProbGrammar" |
24 | 25 | ]
|
25 | 26 | },
|
26 | 27 | {
|
|
151 | 152 | "source": [
|
152 | 153 | "### Implementation\n",
|
153 | 154 | "\n",
|
154 |
| - "In the module we have implemented a `Lexicon` and a `Rules` function, which we can combine to create a `Grammar` object.\n", |
| 155 | + "In the module we have implementation both for probabilistic and non-probabilistic grammars. Both these implementation follow the same format. There are functions for the lexicon and the rules which can be combined to create a grammar object.\n", |
| 156 | + "\n", |
| 157 | + "#### Non-Probabilistic\n", |
155 | 158 | "\n",
|
156 | 159 | "Execute the cells below to view the implemenations:"
|
157 | 160 | ]
|
|
205 | 208 | "name": "stdout",
|
206 | 209 | "output_type": "stream",
|
207 | 210 | "text": [
|
208 |
| - "Lexicon {'Article': ['the', 'a', 'an'], 'Adverb': ['here', 'lightly', 'now'], 'Digit': ['1', '2', '0'], 'Pronoun': ['me', 'you', 'he'], 'Name': ['john', 'mary', 'peter'], 'Adjective': ['good', 'new', 'sad'], 'Conjuction': ['and', 'or', 'but'], 'Preposition': ['to', 'in', 'at'], 'RelPro': ['that', 'who', 'which'], 'Verb': ['is', 'say', 'are'], 'Noun': ['robot', 'sheep', 'fence']}\n", |
| 211 | + "Lexicon {'Verb': ['is', 'say', 'are'], 'RelPro': ['that', 'who', 'which'], 'Conjuction': ['and', 'or', 'but'], 'Digit': ['1', '2', '0'], 'Noun': ['robot', 'sheep', 'fence'], 'Pronoun': ['me', 'you', 'he'], 'Preposition': ['to', 'in', 'at'], 'Name': ['john', 'mary', 'peter'], 'Article': ['the', 'a', 'an'], 'Adjective': ['good', 'new', 'sad'], 'Adverb': ['here', 'lightly', 'now']}\n", |
209 | 212 | "\n",
|
210 |
| - "Rules: {'Adjs': [['Adjective'], ['Adjective', 'Adjs']], 'PP': [['Preposition', 'NP']], 'RelClause': [['RelPro', 'VP']], 'VP': [['Verb'], ['VP', 'NP'], ['VP', 'Adjective'], ['VP', 'PP'], ['VP', 'Adverb']], 'NP': [['Pronoun'], ['Name'], ['Noun'], ['Article', 'Noun'], ['Article', 'Adjs', 'Noun'], ['Digit'], ['NP', 'PP'], ['NP', 'RelClause']], 'S': [['NP', 'VP'], ['S', 'Conjuction', 'S']]}\n" |
| 213 | + "Rules: {'RelClause': [['RelPro', 'VP']], 'S': [['NP', 'VP'], ['S', 'Conjuction', 'S']], 'PP': [['Preposition', 'NP']], 'VP': [['Verb'], ['VP', 'NP'], ['VP', 'Adjective'], ['VP', 'PP'], ['VP', 'Adverb']], 'NP': [['Pronoun'], ['Name'], ['Noun'], ['Article', 'Noun'], ['Article', 'Adjs', 'Noun'], ['Digit'], ['NP', 'PP'], ['NP', 'RelClause']], 'Adjs': [['Adjective'], ['Adjective', 'Adjs']]}\n" |
211 | 214 | ]
|
212 | 215 | }
|
213 | 216 | ],
|
|
287 | 290 | {
|
288 | 291 | "data": {
|
289 | 292 | "text/plain": [
|
290 |
| - "'a robot is to a robot sad but robot say you 0 in me in a robot at the sheep at 1 good an fence in sheep in me that are in john new lightly lightly here a new good new robot lightly new in sheep lightly'" |
| 293 | + "'the fence are or 1 say in john that is here lightly to peter lightly sad good at you good here me good at john in an fence to fence at robot lightly and a robot who is here sad sheep in fence in fence at he sad here lightly to 0 say and fence is good in a sad sheep in a fence but he say here'" |
291 | 294 | ]
|
292 | 295 | },
|
293 | 296 | "execution_count": 7,
|
|
296 | 299 | }
|
297 | 300 | ],
|
298 | 301 | "source": [
|
299 |
| - "from nlp import generate_random\n", |
| 302 | + "grammar.generate_random('S')" |
| 303 | + ] |
| 304 | + }, |
| 305 | + { |
| 306 | + "cell_type": "markdown", |
| 307 | + "metadata": {}, |
| 308 | + "source": [ |
| 309 | + "#### Probabilistic\n", |
| 310 | + "\n", |
| 311 | + "The probabilistic grammars follow the same approach. They take as input a string, are assembled from a grammar and a lexicon and can generate random sentences (giving the probability of the sentence). The main difference is that in the lexicon we have tuples (terminal, probability) instead of strings and for the rules we have a list of tuples (list of non-terminals, probability) instead of list of lists of non-terminals.\n", |
300 | 312 | "\n",
|
301 |
| - "generate_random(grammar)" |
| 313 | + "Execute the cells to read the code:" |
| 314 | + ] |
| 315 | + }, |
| 316 | + { |
| 317 | + "cell_type": "code", |
| 318 | + "execution_count": 2, |
| 319 | + "metadata": { |
| 320 | + "collapsed": true |
| 321 | + }, |
| 322 | + "outputs": [], |
| 323 | + "source": [ |
| 324 | + "%psource ProbLexicon" |
| 325 | + ] |
| 326 | + }, |
| 327 | + { |
| 328 | + "cell_type": "code", |
| 329 | + "execution_count": 3, |
| 330 | + "metadata": { |
| 331 | + "collapsed": true |
| 332 | + }, |
| 333 | + "outputs": [], |
| 334 | + "source": [ |
| 335 | + "%psource ProbRules" |
| 336 | + ] |
| 337 | + }, |
| 338 | + { |
| 339 | + "cell_type": "code", |
| 340 | + "execution_count": 4, |
| 341 | + "metadata": { |
| 342 | + "collapsed": true |
| 343 | + }, |
| 344 | + "outputs": [], |
| 345 | + "source": [ |
| 346 | + "%psource ProbGrammar" |
| 347 | + ] |
| 348 | + }, |
| 349 | + { |
| 350 | + "cell_type": "markdown", |
| 351 | + "metadata": {}, |
| 352 | + "source": [ |
| 353 | + "Let's build a lexicon and rules for the probabilistic grammar:" |
| 354 | + ] |
| 355 | + }, |
| 356 | + { |
| 357 | + "cell_type": "code", |
| 358 | + "execution_count": 2, |
| 359 | + "metadata": {}, |
| 360 | + "outputs": [ |
| 361 | + { |
| 362 | + "name": "stdout", |
| 363 | + "output_type": "stream", |
| 364 | + "text": [ |
| 365 | + "Lexicon {'Verb': [('is', 0.5), ('say', 0.3), ('are', 0.2)], 'Adjective': [('good', 0.5), ('new', 0.2), ('sad', 0.3)], 'Preposition': [('to', 0.4), ('in', 0.3), ('at', 0.3)], 'Pronoun': [('me', 0.3), ('you', 0.4), ('he', 0.3)], 'Conjuction': [('and', 0.5), ('or', 0.2), ('but', 0.3)], 'Adverb': [('here', 0.6), ('lightly', 0.1), ('now', 0.3)], 'Article': [('the', 0.5), ('a', 0.25), ('an', 0.25)], 'Digit': [('0', 0.35), ('1', 0.35), ('2', 0.3)], 'RelPro': [('that', 0.5), ('who', 0.3), ('which', 0.2)], 'Noun': [('robot', 0.4), ('sheep', 0.4), ('fence', 0.2)], 'Name': [('john', 0.4), ('mary', 0.4), ('peter', 0.2)]}\n", |
| 366 | + "\n", |
| 367 | + "Rules: {'RelClause': [(['RelPro', 'VP'], 1.0)], 'Adjs': [(['Adjective'], 0.5), (['Adjective', 'Adjs'], 0.5)], 'PP': [(['Preposition', 'NP'], 1.0)], 'NP': [(['Pronoun'], 0.2), (['Name'], 0.05), (['Noun'], 0.2), (['Article', 'Noun'], 0.15), (['Article', 'Adjs', 'Noun'], 0.1), (['Digit'], 0.05), (['NP', 'PP'], 0.15), (['NP', 'RelClause'], 0.1)], 'S': [(['NP', 'VP'], 0.6), (['S', 'Conjuction', 'S'], 0.4)], 'VP': [(['Verb'], 0.3), (['VP', 'NP'], 0.2), (['VP', 'Adjective'], 0.25), (['VP', 'PP'], 0.15), (['VP', 'Adverb'], 0.1)]}\n" |
| 368 | + ] |
| 369 | + } |
| 370 | + ], |
| 371 | + "source": [ |
| 372 | + "lexicon = ProbLexicon(\n", |
| 373 | + " Verb=\"is [0.5] | say [0.3] | are [0.2]\",\n", |
| 374 | + " Noun=\"robot [0.4] | sheep [0.4] | fence [0.2]\",\n", |
| 375 | + " Adjective=\"good [0.5] | new [0.2] | sad [0.3]\",\n", |
| 376 | + " Adverb=\"here [0.6] | lightly [0.1] | now [0.3]\",\n", |
| 377 | + " Pronoun=\"me [0.3] | you [0.4] | he [0.3]\",\n", |
| 378 | + " RelPro=\"that [0.5] | who [0.3] | which [0.2]\",\n", |
| 379 | + " Name=\"john [0.4] | mary [0.4] | peter [0.2]\",\n", |
| 380 | + " Article=\"the [0.5] | a [0.25] | an [0.25]\",\n", |
| 381 | + " Preposition=\"to [0.4] | in [0.3] | at [0.3]\",\n", |
| 382 | + " Conjuction=\"and [0.5] | or [0.2] | but [0.3]\",\n", |
| 383 | + " Digit=\"0 [0.35] | 1 [0.35] | 2 [0.3]\"\n", |
| 384 | + ")\n", |
| 385 | + "\n", |
| 386 | + "print(\"Lexicon\", lexicon)\n", |
| 387 | + "\n", |
| 388 | + "rules = ProbRules(\n", |
| 389 | + " S=\"NP VP [0.6] | S Conjuction S [0.4]\",\n", |
| 390 | + " NP=\"Pronoun [0.2] | Name [0.05] | Noun [0.2] | Article Noun [0.15] \\\n", |
| 391 | + " | Article Adjs Noun [0.1] | Digit [0.05] | NP PP [0.15] | NP RelClause [0.1]\",\n", |
| 392 | + " VP=\"Verb [0.3] | VP NP [0.2] | VP Adjective [0.25] | VP PP [0.15] | VP Adverb [0.1]\",\n", |
| 393 | + " Adjs=\"Adjective [0.5] | Adjective Adjs [0.5]\",\n", |
| 394 | + " PP=\"Preposition NP [1]\",\n", |
| 395 | + " RelClause=\"RelPro VP [1]\"\n", |
| 396 | + ")\n", |
| 397 | + "\n", |
| 398 | + "print(\"\\nRules:\", rules)" |
| 399 | + ] |
| 400 | + }, |
| 401 | + { |
| 402 | + "cell_type": "markdown", |
| 403 | + "metadata": {}, |
| 404 | + "source": [ |
| 405 | + "Let's use the above to assemble our probabilistic grammar and run some simple queries:" |
| 406 | + ] |
| 407 | + }, |
| 408 | + { |
| 409 | + "cell_type": "code", |
| 410 | + "execution_count": 3, |
| 411 | + "metadata": {}, |
| 412 | + "outputs": [ |
| 413 | + { |
| 414 | + "name": "stdout", |
| 415 | + "output_type": "stream", |
| 416 | + "text": [ |
| 417 | + "How can we rewrite 'VP'? [(['Verb'], 0.3), (['VP', 'NP'], 0.2), (['VP', 'Adjective'], 0.25), (['VP', 'PP'], 0.15), (['VP', 'Adverb'], 0.1)]\n", |
| 418 | + "Is 'the' an article? True\n", |
| 419 | + "Is 'here' a noun? False\n" |
| 420 | + ] |
| 421 | + } |
| 422 | + ], |
| 423 | + "source": [ |
| 424 | + "grammar = ProbGrammar(\"A Simple Probabilistic Grammar\", rules, lexicon)\n", |
| 425 | + "\n", |
| 426 | + "print(\"How can we rewrite 'VP'?\", grammar.rewrites_for('VP'))\n", |
| 427 | + "print(\"Is 'the' an article?\", grammar.isa('the', 'Article'))\n", |
| 428 | + "print(\"Is 'here' a noun?\", grammar.isa('here', 'Noun'))" |
| 429 | + ] |
| 430 | + }, |
| 431 | + { |
| 432 | + "cell_type": "markdown", |
| 433 | + "metadata": {}, |
| 434 | + "source": [ |
| 435 | + "Lastly, we can generate random sentences from this grammar. The function `prob_generation` returns a tuple (sentence, probability)." |
| 436 | + ] |
| 437 | + }, |
| 438 | + { |
| 439 | + "cell_type": "code", |
| 440 | + "execution_count": 5, |
| 441 | + "metadata": {}, |
| 442 | + "outputs": [ |
| 443 | + { |
| 444 | + "name": "stdout", |
| 445 | + "output_type": "stream", |
| 446 | + "text": [ |
| 447 | + "a sheep say at the sad sad robot the good new sheep but john at fence are to me who is to robot the good new fence to robot who is mary in robot to 1 to an sad sad sad robot in fence lightly now at 1 at a new robot here good at john an robot in a fence in john the sheep here 2 to sheep good and you is but sheep is sad a good robot or the fence is robot good lightly at a good robot at 2 now good new or 1 say but he say or peter are in you who is lightly and fence say to john to an robot and sheep say and me is good or a robot is and sheep that say good he new 2 which are sad to an good fence that say 1 good good new lightly are good at he sad here but an sheep who say say sad now lightly sad an sad sad sheep or mary are but a fence at he in 1 say and 2 are\n", |
| 448 | + "5.453065905143236e-226\n" |
| 449 | + ] |
| 450 | + } |
| 451 | + ], |
| 452 | + "source": [ |
| 453 | + "sentence, prob = grammar.generate_random('S')\n", |
| 454 | + "print(sentence)\n", |
| 455 | + "print(prob)" |
| 456 | + ] |
| 457 | + }, |
| 458 | + { |
| 459 | + "cell_type": "markdown", |
| 460 | + "metadata": {}, |
| 461 | + "source": [ |
| 462 | + "As with the non-probabilistic grammars, this one mostly overgenerates. You can also see that the probability is very, very low, which means there are a ton of generateable sentences (in this case infinite, since we have recursion; notice how `VP` can produce another `VP`, for example)." |
302 | 463 | ]
|
303 | 464 | },
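| + { |
| + "cell_type": "markdown", |
| + "metadata": {}, |
| + "source": [ |
| + "To see where such a tiny probability comes from, here is a minimal hand-computed sketch (using only the rule and word probabilities defined above): we multiply the probabilities of the choices needed to generate the very short sentence 'he is'." |
| + ] |
| + }, |
| + { |
| + "cell_type": "code", |
| + "execution_count": null, |
| + "metadata": {}, |
| + "outputs": [], |
| + "source": [ |
| + "# Hand-computed probability of generating 'he is' from the grammar above:\n", |
| + "# S -> NP VP [0.6], NP -> Pronoun [0.2], Pronoun -> 'he' [0.3],\n", |
| + "# VP -> Verb [0.3], Verb -> 'is' [0.5]\n", |
| + "p = 0.6 * 0.2 * 0.3 * 0.3 * 0.5\n", |
| + "print(p)  # roughly 0.0054; longer, recursive sentences multiply many such factors" |
| + ] |
| + }, |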
|
304 | 465 | {
|
|