diff --git a/knowledge.ipynb b/knowledge.ipynb
index 0155d4f6f..2ffb20362 100644
--- a/knowledge.ipynb
+++ b/knowledge.ipynb
@@ -19,7 +19,9 @@
    },
    "outputs": [],
    "source": [
-    "from knowledge import *"
+    "from knowledge import *\n",
+    "\n",
+    "from notebook import pseudocode, psource"
    ]
   },
   {
@@ -70,7 +72,7 @@
     "collapsed": true
    },
    "source": [
-    "## [CURRENT-BEST LEARNING](https://github.com/aimacode/aima-pseudocode/blob/master/md/Current-Best-Learning.md)\n",
+    "## CURRENT-BEST LEARNING\n",
     "\n",
     "### Overview\n",
     "\n",
@@ -89,46 +91,70 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Implementation\n",
-    "\n",
-    "As mentioned previously, examples are dictionaries (with keys the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the *NOT* operation with an exclamation mark (!).\n",
-    "\n",
-    "We have functions to calculate the list of all specializations/generalizations, to check if an example is consistent/false positive/false negative with a hypothesis. We also have an auxiliary function to add a disjunction (or operation) to a hypothesis, and two other functions to check consistency of all (or just the negative) examples.\n",
-    "\n",
-    "You can read the source by running the cells below:"
+    "### Pseudocode"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": 2,
-   "metadata": {
-    "collapsed": true
-   },
-   "outputs": [],
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/markdown": [
+       "### AIMA3e\n",
+       "__function__ Current-Best-Learning(_examples_, _h_) __returns__ a hypothesis or fail \n",
+       "&nbsp;__if__ _examples_ is empty __then__ \n",
+       "&nbsp;&nbsp;&nbsp;__return__ _h_ \n",
+       "&nbsp;_e_ ← First(_examples_) \n",
+       "&nbsp;__if__ _e_ is consistent with _h_ __then__ \n",
+       "&nbsp;&nbsp;&nbsp;__return__ Current-Best-Learning(Rest(_examples_), _h_) \n",
+       "&nbsp;__else if__ _e_ is a false positive for _h_ __then__ \n",
+       "&nbsp;&nbsp;&nbsp;__for each__ _h'_ __in__ specializations of _h_ consistent with _examples_ seen so far __do__ \n",
+       "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;_h''_ ← Current-Best-Learning(Rest(_examples_), _h'_) \n",
+       "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__if__ _h''_ ≠ _fail_ __then return__ _h''_ \n",
+       "&nbsp;__else if__ _e_ is a false negative for _h_ __then__ \n",
+       "&nbsp;&nbsp;&nbsp;__for each__ _h'_ __in__ generalizations of _h_ consistent with _examples_ seen so far __do__ \n",
+       "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;_h''_ ← Current-Best-Learning(Rest(_examples_), _h'_) \n",
+       "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__if__ _h''_ ≠ _fail_ __then return__ _h''_ \n",
+       "&nbsp;__return__ _fail_ \n",
+       "\n",
+       "---\n",
+       "__Figure ??__ The current-best-hypothesis learning algorithm. It searches for a consistent hypothesis that fits all the examples and backtracks when no consistent specialization/generalization can be found. To start the algorithm, any hypothesis can be passed in; it will be specialized or generalized as needed."
+      ],
+      "text/plain": [
+       "<IPython.core.display.Markdown object>"
+      ]
+     },
+     "execution_count": 2,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
    "source": [
-    "%psource current_best_learning"
+    "pseudocode('Current-Best-Learning')"
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": 3,
-   "metadata": {
-    "collapsed": true
-   },
-   "outputs": [],
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
-    "%psource specializations"
+    "### Implementation\n",
+    "\n",
+    "As mentioned previously, examples are dictionaries (whose keys are the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the *NOT* operation with an exclamation mark (!).\n",
+    "\n",
+    "We have functions to calculate the list of all specializations/generalizations, to check if an example is consistent/false positive/false negative with a hypothesis. We also have an auxiliary function to add a disjunction (or operation) to a hypothesis, and two other functions to check consistency of all (or just the negative) examples.\n",
+    "\n",
+    "You can read the source by running the cell below:"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 4,
-   "metadata": {
-    "collapsed": true
-   },
+   "execution_count": null,
+   "metadata": {},
    "outputs": [],
    "source": [
-    "%psource generalizations"
+    "psource(current_best_learning, specializations, generalizations)"
    ]
   },
   {
@@ -432,7 +458,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## [VERSION-SPACE LEARNING](https://github.com/aimacode/aima-pseudocode/blob/master/md/Version-Space-Learning.md)\n",
+    "## VERSION-SPACE LEARNING\n",
     "\n",
     "### Overview\n",
     "\n",
@@ -443,83 +469,88 @@
   },
   {
    "cell_type": "markdown",
-   "metadata": {
-    "collapsed": true
-   },
-   "source": [
-    "### Implementation\n",
-    "\n",
-    "The set of hypotheses is represented by a list and each hypothesis is represented by a list of dictionaries, each dictionary a disjunction. For each example in the given examples we update the version space with the function `version_space_update`. In the end, we return the version-space.\n",
-    "\n",
-    "Before we can start updating the version space, we need to generate it. We do that with the `all_hypotheses` function, which builds a list of all the possible hypotheses (including hypotheses with disjunctions). The function works like this: first it finds the possible values for each attribute (using `values_table`), then it builds all the attribute combinations (and adds them to the hypotheses set) and finally it builds the combinations of all the disjunctions (which in this case are the hypotheses build by the attribute combinations).\n",
-    "\n",
-    "You can read the code for all the functions by running the cells below:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 2,
-   "metadata": {
-    "collapsed": true
-   },
-   "outputs": [],
+   "metadata": {},
    "source": [
-    "%psource version_space_learning"
+    "### Pseudocode"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": 3,
-   "metadata": {
-    "collapsed": true
-   },
-   "outputs": [],
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/markdown": [
+       "### AIMA3e\n",
+       "__function__ Version-Space-Learning(_examples_) __returns__ a version space \n",
+       "&nbsp;__local variables__: _V_, the version space: the set of all hypotheses \n",
+       "\n",
+       "&nbsp;_V_ ← the set of all hypotheses \n",
+       "&nbsp;__for each__ example _e_ in _examples_ __do__ \n",
+       "&nbsp;&nbsp;&nbsp;__if__ _V_ is not empty __then__ _V_ ← Version-Space-Update(_V_, _e_) \n",
+       "&nbsp;__return__ _V_ \n",
+       "\n",
+       "---\n",
+       "__function__ Version-Space-Update(_V_, _e_) __returns__ an updated version space \n",
+       "&nbsp;_V_ ← \\\\{_h_ ∈ _V_ : _h_ is consistent with _e_\\\\} \n",
+       "\n",
+       "---\n",
+       "__Figure ??__ The version space learning algorithm. It finds a subset of _V_ that is consistent with all the _examples_."
+      ],
+      "text/plain": [
+       "<IPython.core.display.Markdown object>"
+      ]
+     },
+     "execution_count": 3,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
    "source": [
-    "%psource version_space_update"
+    "pseudocode('Version-Space-Learning')"
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": 4,
+   "cell_type": "markdown",
    "metadata": {
     "collapsed": true
    },
-   "outputs": [],
    "source": [
-    "%psource all_hypotheses"
+    "### Implementation\n",
+    "\n",
+    "The set of hypotheses is represented by a list and each hypothesis is represented by a list of dictionaries, each dictionary a disjunction. For each example in the given examples we update the version space with the function `version_space_update`. In the end, we return the version space.\n",
+    "\n",
+    "Before we can start updating the version space, we need to generate it. We do that with the `all_hypotheses` function, which builds a list of all the possible hypotheses (including hypotheses with disjunctions). The function works like this: first it finds the possible values for each attribute (using `values_table`), then it builds all the attribute combinations (and adds them to the hypotheses set) and finally it builds the combinations of all the disjunctions (which in this case are the hypotheses built by the attribute combinations).\n",
+    "\n",
+    "You can read the code for all the functions by running the cells below:"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 5,
-   "metadata": {
-    "collapsed": true
-   },
+   "execution_count": null,
+   "metadata": {},
    "outputs": [],
    "source": [
-    "%psource values_table"
+    "psource(version_space_learning, version_space_update)"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
-   "metadata": {
-    "collapsed": true
-   },
+   "execution_count": null,
+   "metadata": {},
    "outputs": [],
    "source": [
-    "%psource build_attr_combinations"
+    "psource(all_hypotheses, values_table)"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 7,
-   "metadata": {
-    "collapsed": true
-   },
+   "execution_count": null,
+   "metadata": {},
    "outputs": [],
    "source": [
-    "%psource build_h_combinations"
+    "psource(build_attr_combinations, build_h_combinations)"
    ]
   },
   {
diff --git a/notebook.py b/notebook.py
index 2df7b7721..b1e024f60 100644
--- a/notebook.py
+++ b/notebook.py
@@ -1,8 +1,10 @@
+from inspect import getsource
+
 from utils import argmax, argmin
 from games import TicTacToe, alphabeta_player, random_player, Fig52Extended, infinity
 from logic import parse_definite_clause, standardize_variables, unify, subst
 from learning import DataSet
-from IPython.display import HTML, Markdown, display
+from IPython.display import HTML, display
 from collections import Counter
 
 import matplotlib.pyplot as plt
@@ -15,11 +17,32 @@
 
 #______________________________________________________________________________
 
 
+def pseudocode(algorithm):
+    """Print the pseudocode for the given algorithm."""
+    from urllib.request import urlopen
+    from IPython.display import Markdown
+
+    url = "https://raw.githubusercontent.com/aimacode/aima-pseudocode/master/md/{}.md".format(algorithm)
+    f = urlopen(url)
+    md = f.read().decode('utf-8')
+    md = md.split('\n', 1)[-1].strip()
+    md = '#' + md
+    return Markdown(md)
+
+
 def psource(*functions):
     """Print the source code for the given function(s)."""
-    import inspect
+    source_code = '\n\n'.join(getsource(fn) for fn in functions)
+    try:
+        from pygments.formatters import HtmlFormatter
+        from pygments.lexers import PythonLexer
+        from pygments import highlight
+
+        display(HTML(highlight(source_code, PythonLexer(), HtmlFormatter(full=True))))
+
+    except ImportError:
+        print(source_code)
 
-    print('\n\n'.join(inspect.getsource(fn) for fn in functions))
 
 # ______________________________________________________________________________
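Note for reviewers: the CURRENT-BEST-LEARNING pseudocode rendered by the new `pseudocode('Current-Best-Learning')` cell can be sanity-checked with a minimal, self-contained sketch. This is not the dictionary-based `current_best_learning` from knowledge.py; as a simplifying assumption, a hypothesis here is just an integer threshold `t` (predict True iff `x >= t`), so specializing means raising the threshold and generalizing means lowering it:

```python
def current_best_learning(examples, h, seen=()):
    """Backtracking search for a consistent hypothesis, in the spirit of
    CURRENT-BEST-LEARNING. A hypothesis is an integer threshold h
    (predict True iff x >= h) -- a toy space for illustration only."""
    if not examples:
        return h
    e, rest = examples[0], examples[1:]
    if (e["x"] >= h) == e["label"]:
        # e is consistent with h: keep the hypothesis and recurse.
        return current_best_learning(rest, h, seen + (e,))
    # False positive: specialize (raise the threshold);
    # false negative: generalize (lower it).
    step = 1 if e["x"] >= h else -1
    for h2 in (h + step * k for k in range(1, 10)):
        # Only try candidates consistent with every example seen so far.
        if all((s["x"] >= h2) == s["label"] for s in seen + (e,)):
            result = current_best_learning(rest, h2, seen + (e,))
            if result is not None:
                return result
    return None  # fail: no consistent specialization/generalization found


examples = [{"x": 4, "label": True}, {"x": 1, "label": False}]
print(current_best_learning(examples, 0))
```

Starting from any threshold, the search specializes or generalizes until the hypothesis separates the positive example from the negative one, mirroring the backtracking structure of the pseudocode.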
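Likewise, the VERSION-SPACE-LEARNING pseudocode added above reduces to a simple filter over a hypothesis set. The sketch below is illustrative only (it is not the list-of-dictionaries `version_space_learning` from knowledge.py); as an assumption, hypotheses are modeled as boolean predicates over an integer attribute:

```python
def version_space_update(V, e):
    """Keep only the hypotheses that agree with example e."""
    return [h for h in V if h(e["x"]) == e["label"]]


def version_space_learning(examples, V):
    """Filter the initial hypothesis set V against each example in turn."""
    for e in examples:
        if V:
            V = version_space_update(V, e)
    return V


# Toy hypothesis space: threshold classifiers "x >= t" for t = 0..5.
hypotheses = [lambda x, t=t: x >= t for t in range(6)]
examples = [{"x": 2, "label": True}, {"x": 1, "label": False}]
surviving = version_space_learning(examples, hypotheses)
```

The positive example at `x = 2` rules out high thresholds and the negative example at `x = 1` rules out low ones, so the version space collapses toward the single consistent threshold, which is exactly the set-intersection behavior described by Version-Space-Update.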