@@ -1,14 +1,56 @@
 {
  "cells": [
   {
-   "cell_type": "code",
-   "execution_count": null,
+   "cell_type": "markdown",
    "metadata": {
     "collapsed": false
    },
+   "source": [
+    "# Learning\n",
+    "\n",
+    "This notebook serves as supporting material for topics covered in **Chapter 18 - Learning from Examples**, **Chapter 19 - Knowledge in Learning**, **Chapter 20 - Learning Probabilistic Models** from the book *Artificial Intelligence: A Modern Approach*. This notebook uses implementations from [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py). Let's start by importing everything from the learning module."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {
+    "collapsed": true
+   },
    "outputs": [],
    "source": [
-    "import learning"
+    "from learning import *"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Review\n",
+    "\n",
+    "In this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences.\n",
+    "\n",
+    "An agent is **learning** if it improves its performance on future tasks after making observations about the world.\n",
+    "\n",
+    "There are three types of feedback that determine the three main types of learning:\n",
+    "\n",
+    "* **Supervised Learning**:\n",
+    "\n",
+    "In Supervised Learning the agent observes some example input-output pairs and learns a function that maps from input to output.\n",
+    "\n",
+    "**Example**: Let's think of an agent to classify images containing cats or dogs. If we provide an image containing a cat or a dog, this agent should output a string \"cat\" or \"dog\" for that particular image. To teach this agent, we will give a lot of input-output pairs like {cat image-\"cat\"}, {dog image-\"dog\"} to the agent. The agent then learns a function that maps from an input image to one of those strings.\n",
+    "\n",
+    "* **Unsupervised Learning**:\n",
+    "\n",
+    "In Unsupervised Learning the agent learns patterns in the input even though no explicit feedback is supplied. The most common type is **clustering**: detecting potentially useful clusters of input examples.\n",
+    "\n",
+    "**Example**: A taxi agent would develop a concept of *good traffic days* and *bad traffic days* without ever being given labeled examples.\n",
+    "\n",
+    "* **Reinforcement Learning**:\n",
+    "\n",
+    "In Reinforcement Learning the agent learns from a series of reinforcements—rewards or punishments.\n",
+    "\n",
+    "**Example**: Let's talk about an agent to play the popular Atari game—[Pong](http://www.ponggame.org). We will award the agent a point for every correct move and deduct a point for every wrong move. Eventually, the agent will figure out which of its actions prior to the reinforcement were most responsible for it."
    ]
   },
   {
@@ -38,6 +80,10 @@
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "version": "3.5.1"
+  },
+  "widgets": {
+   "state": {},
+   "version": "1.1.1"
   }
  },
 "nbformat": 4,