
Commit 6fd1428

dmeoli authored and antmarakis committed
added binary and multiclass SVM with tests (#1135)
* changed queue to set in AC3 (as in the pseudocode of the original algorithm) to reduce the number of consistency checks due to redundant arcs in the queue; for example, on the harder1 configuration of the Sudoku CSP the number of consistency checks has been reduced from 40464 to 12562
* re-added a test that had been commented out by mistake
* added the mentioned AC4 algorithm for constraint propagation: AC3 has non-optimal worst-case time complexity O(cd^3), while AC4 runs in O(cd^2) worst-case time
* added a doctest in Sudoku for AC4 and the possibility of choosing the constraint propagation algorithm in mac inference
* removed useless doctest for AC4 in Sudoku because AC4's tests are already present in test_csp.py
* added map coloring SAT problems
* fixed typo errors and removed unnecessary brackets
* reformulated the map coloring problem
* Revert "reformulated the map coloring problem" (reverts commit 20ab0e5)
* Revert "fixed typo errors and removed unnecessary brackets" (reverts commit f743146)
* Revert "added map coloring SAT problems" (reverts commit 9e0fa55)
* Revert "removed useless doctest for AC4 in Sudoku because AC4's tests are already present in test_csp.py" (reverts commit b3cd24c)
* Revert "added doctest in Sudoku for AC4 and and the possibility of choosing the constant propagation algorithm in mac inference" (reverts commit 6986247)
* Revert "added the mentioned AC4 algorithm for constraint propagation" (reverts commit 03551fb)
* added map coloring SAT problem
* fixed build error
* Revert "added map coloring SAT problem" (reverts commit 93af259)
* Revert "fixed build error" (reverts commit 6641c2c)
* added map coloring SAT problem
* removed redundant parentheses
* added Viterbi algorithm
* added monkey & bananas planning problem
* simplified condition in search.py
* added tests for monkey & bananas planning problem
* removed monkey & bananas planning problem
* Revert "removed monkey & bananas planning problem" (reverts commit 9d37ae0)
* Revert "added tests for monkey & bananas planning problem" (reverts commit 24041e9)
* Revert "simplified condition in search.py" (reverts commit 6d229ce)
* Revert "added monkey & bananas planning problem" (reverts commit c74933a)
* defined the PlanningProblem as a specialization of a search.Problem & fixed typo errors
* fixed doctest in logic.py
* fixed doctest for cascade_distribution
* added ForwardPlanner and tests
* added __lt__ implementation for Expr
* added more tests
* renamed forward planner
* Revert "renamed forward planner" (reverts commit c4139e5)
* renamed forward planner class & added doc
* added backward planner and tests
* fixed mdp4e.py doctests
* removed ignore_delete_lists_heuristic flag
* fixed heuristic for forward and backward planners
* added SATPlan and tests
* fixed ignore delete lists heuristic in forward and backward planners
* fixed backward planner and added tests
* updated doc
* added n-ary CSP definition and examples
* added CSPlan and tests
* fixed CSPlan
* added book's cryptarithmetic puzzle example
* fixed typo errors in test_csp
* fixed #1111
* added sortedcontainers to yml and doc to CSPlan
* added tests for n-ary CSP
* fixed utils.extend
* updated test_probability.py
* converted static methods to functions
* added AC3b and AC4 with heuristic and tests
* added conflict-driven clause learning SAT solver
* added tests for CDCL and heuristics
* fixed probability.py
* fixed import
* fixed kakuro
* added Martelli and Montanari rule-based unification algorithm
* removed duplicate standardize_variables
* renamed variables shadowing built-in functions
* fixed typos in learning.py
* renamed some files and fixed typos
* fixed typos (x2)
* fixed tests
* removed unify_mm
* removed unnecessary brackets
* fixed tests
* moved utility functions to utils.py
* fixed typos
* moved utils functions to utils.py, separated probability learning classes from learning.py, fixed typos and fixed imports in .ipynb files
* added missing learners
* fixed Travis build
* fixed typos (x4)
* fixed typos in agents files
* fixed imports in agent files
* fixed deep learning .ipynb imports
* fixed typos
* added SVM
* added .ipynb and fixed typos
* adapted code for .ipynb
* fixed typos
* updated .ipynb (x2)
* updated logic.py
* updated .ipynb (x2)
* updated planning.py
* updated inf definition
* fixed typos (x4)
* Revert "fixed typos" (reverts commit 658309d)
* Revert "fixed typos" (reverts commit 08ad660)
* fixed typos (x4)
* fixed typos and utils imports in *4e.py files
* fixed typos (x4)
* fixed import
* fixed typos (x5)
* updated SVM
* added SVM test
* fixed SVM and tests
* fixed some definitions and typos
* fixed SVM and tests
* added SVMs also in learning4e.py
* fixed inf definition
* fixed .travis.yml (x2)
* fixed import
* fixed inf definition
* replaced cvxopt with qpsolvers
* replaced cvxopt with quadprog
* fixed some definitions
* fixed typos and removed unnecessary tests
* replaced quadprog with qpsolvers
* fixed extend in utils
* specified error type in try-catch block
* fixed extend in utils
* fixed typos
* fixed learning.py
* fixed doctest errors
* added comments
* removed unnecessary if condition
* updated learning.py
* fixed imports
* removed unnecessary imports
* fixed keras imports
* fixed typos
* fixed learning_curve
* added comments
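The first entry above credits a large drop in consistency checks to swapping AC3's FIFO queue for a set: a set silently ignores re-insertion of an arc that is already pending. A minimal, library-free sketch of that idea (the CSP representation here — `domains` as dict of sets, `neighbors`, a symmetric binary `constraint` — is simplified for illustration and is not the aima-python API):

```python
def AC3(domains, neighbors, constraint):
    """Enforce arc consistency with a set-based worklist.

    Returns (consistent, checks): False if some domain is wiped out,
    plus the number of constraint checks performed.
    """
    # seed with every arc; a set holds each arc at most once
    worklist = {(xi, xj) for xi in domains for xj in neighbors[xi]}
    checks = 0
    while worklist:
        xi, xj = worklist.pop()
        revised = False
        for x in set(domains[xi]):          # copy: we may discard while iterating
            checks += len(domains[xj])
            if not any(constraint(x, y) for y in domains[xj]):
                domains[xi].discard(x)
                revised = True
        if revised:
            if not domains[xi]:
                return False, checks
            # re-adding an arc already in the set is a no-op, which is
            # exactly what cuts the redundant consistency checks
            worklist |= {(xk, xi) for xk in neighbors[xi] if xk != xj}
    return True, checks


# tiny demo: A is fixed to 1, so "not equal" prunes 1 from B
doms = {'A': {1}, 'B': {1, 2}}
ok, checks = AC3(doms, {'A': ['B'], 'B': ['A']}, lambda x, y: x != y)
```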
1 parent 04fa465 commit 6fd1428


41 files changed: +1081 additions, -1039 deletions

.travis.yml

Lines changed: 14 additions & 11 deletions
@@ -1,28 +1,31 @@
-language:
-- python
+language: python
 
 python:
-- "3.4"
+- 3.4
+- 3.5
+- 3.6
+- 3.7
 
 before_install:
 - git submodule update --remote
 
 install:
-- pip install six
 - pip install flake8
 - pip install ipython
-- pip install matplotlib
-- pip install networkx
-- pip install ipywidgets
-- pip install Pillow
-- pip install pytest-cov
 - pip install ipythonblocks
+- pip install ipywidgets
 - pip install keras
+- pip install matplotlib
+- pip install networkx
 - pip install numpy
-- pip install tensorflow
 - pip install opencv-python
+- pip install Pillow
+- pip install pytest-cov
+- pip install qpsolvers
+- pip install quadprog
+- pip install six
 - pip install sortedcontainers
-
+- pip install tensorflow
 
 script:
 - py.test --cov=./
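The install list now sorts the pip packages and adds qpsolvers and quadprog, the QP back ends for the new SVM learner. As a library-free illustration of what a linear SVM learner computes, here is a Pegasos-style stochastic sub-gradient sketch of hinge-loss minimization; the commit itself solves the dual QP via qpsolvers/quadprog instead, and all names and data below are illustrative, not the aima-python API:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """X: list of feature lists; y: labels in {-1, +1}. Returns weights w."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):   # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)                     # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # sub-gradient step: always shrink w (regularizer), and
            # push toward the example when its margin is violated
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# toy, linearly separable data (illustrative only)
X = [[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
```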

agents.py

Lines changed: 31 additions & 23 deletions
@@ -1,4 +1,5 @@
-"""Implement Agents and Environments (Chapters 1-2).
+"""
+Implement Agents and Environments. (Chapters 1-2)
 
 The class hierarchies are as follows:
 
@@ -23,16 +24,14 @@
 EnvToolbar ## contains buttons for controlling EnvGUI
 
 EnvCanvas ## Canvas to display the environment of an EnvGUI
-
 """
 
-# TO DO:
+# TODO
 # Implement grabbing correctly.
 # When an object is grabbed, does it still have a location?
 # What if it is released?
 # What if the grabbed or the grabber is deleted?
 # What if the grabber moves?
-#
 # Speed control in GUI does not have any effect -- fix it.
 
 from utils import distance_squared, turn_heading
@@ -90,8 +89,7 @@ def __init__(self, program=None):
         self.holding = []
         self.performance = 0
         if program is None or not isinstance(program, collections.Callable):
-            print("Can't find a valid program for {}, falling back to default.".format(
-                self.__class__.__name__))
+            print("Can't find a valid program for {}, falling back to default.".format(self.__class__.__name__))
 
             def program(percept):
                 return eval(input('Percept={}; action? '.format(percept)))
@@ -122,10 +120,13 @@ def new_program(percept):
 
 
 def TableDrivenAgentProgram(table):
-    """This agent selects an action based on the percept sequence.
+    """
+    [Figure 2.7]
+    This agent selects an action based on the percept sequence.
     It is practical only for tiny domains.
     To customize it, provide as table a dictionary of all
-    {percept_sequence:action} pairs. [Figure 2.7]"""
+    {percept_sequence:action} pairs.
+    """
     percepts = []
 
     def program(percept):
@@ -154,7 +155,10 @@ def RandomAgentProgram(actions):
 
 
 def SimpleReflexAgentProgram(rules, interpret_input):
-    """This agent takes action based solely on the percept. [Figure 2.10]"""
+    """
+    [Figure 2.10]
+    This agent takes action based solely on the percept.
+    """
 
     def program(percept):
         state = interpret_input(percept)
@@ -166,7 +170,10 @@ def program(percept):
 
 
 def ModelBasedReflexAgentProgram(rules, update_state, model):
-    """This agent takes action based on the percept and state. [Figure 2.12]"""
+    """
+    [Figure 2.12]
+    This agent takes action based on the percept and state.
+    """
 
     def program(percept):
         program.state = update_state(program.state, program.action, percept, model)
@@ -219,7 +226,9 @@ def TableDrivenVacuumAgent():
 
 
 def ReflexVacuumAgent():
-    """A reflex agent for the two-state vacuum environment. [Figure 2.8]
+    """
+    [Figure 2.8]
+    A reflex agent for the two-state vacuum environment.
     >>> agent = ReflexVacuumAgent()
     >>> environment = TrivialVacuumEnvironment()
     >>> environment.add_thing(agent)
@@ -436,13 +445,13 @@ def move_forward(self, from_location):
         """
         x, y = from_location
         if self.direction == self.R:
-            return (x + 1, y)
+            return x + 1, y
         elif self.direction == self.L:
-            return (x - 1, y)
+            return x - 1, y
         elif self.direction == self.U:
-            return (x, y - 1)
+            return x, y - 1
         elif self.direction == self.D:
-            return (x, y + 1)
+            return x, y + 1
 
 
 class XYEnvironment(Environment):
@@ -497,7 +506,7 @@ def execute_action(self, agent, action):
             agent.holding.pop()
 
     def default_location(self, thing):
-        return (random.choice(self.width), random.choice(self.height))
+        return random.choice(self.width), random.choice(self.height)
 
     def move_to(self, thing, destination):
         """Move a thing to a new location. Returns True on success or False if there is an Obstacle.
@@ -525,7 +534,7 @@ def add_thing(self, thing, location=(1, 1), exclude_duplicate_class_items=False):
     def is_inbounds(self, location):
         """Checks to make sure that the location is inbounds (within walls if we have walls)"""
         x, y = location
-        return not (x < self.x_start or x >= self.x_end or y < self.y_start or y >= self.y_end)
+        return not (x < self.x_start or x > self.x_end or y < self.y_start or y > self.y_end)
 
     def random_location_inbounds(self, exclude=None):
         """Returns a random location that is inbounds (within walls if we have walls)"""
@@ -723,7 +732,7 @@ def percept(self, agent):
         status = ('Dirty' if self.some_things_at(
             agent.location, Dirt) else 'Clean')
         bump = ('Bump' if agent.bump else 'None')
-        return (status, bump)
+        return status, bump
 
     def execute_action(self, agent, action):
         agent.bump = False
@@ -752,12 +761,11 @@ def __init__(self):
                        loc_B: random.choice(['Clean', 'Dirty'])}
 
     def thing_classes(self):
-        return [Wall, Dirt, ReflexVacuumAgent, RandomVacuumAgent,
-                TableDrivenVacuumAgent, ModelBasedVacuumAgent]
+        return [Wall, Dirt, ReflexVacuumAgent, RandomVacuumAgent, TableDrivenVacuumAgent, ModelBasedVacuumAgent]
 
     def percept(self, agent):
         """Returns the agent's location, and the location status (Dirty/Clean)."""
-        return (agent.location, self.status[agent.location])
+        return agent.location, self.status[agent.location]
 
     def execute_action(self, agent, action):
         """Change agent's location and/or location's status; track performance.
@@ -992,8 +1000,8 @@ def is_done(self):
             else:
                 print("Death by {} [-1000].".format(explorer[0].killed_by))
         else:
-            print("Explorer climbed out {}.".format("with Gold [+1000]!"
-                                                    if Gold() not in self.things else "without Gold [+0]"))
+            print("Explorer climbed out {}."
+                  .format("with Gold [+1000]!" if Gold() not in self.things else "without Gold [+0]"))
         return True
 
     # TODO: Arrow needs to be implemented
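Most hunks above are stylistic, but the is_inbounds change alters behavior: the comparison switches from `>=` to `>`, so locations at x_end/y_end now count as inbounds. A standalone sketch of the new check (GridEnv is an illustrative stand-in, not the repo's XYEnvironment):

```python
class GridEnv:
    """Minimal grid with wall coordinates, mimicking the bounds logic above."""

    def __init__(self, x_start, y_start, x_end, y_end):
        self.x_start, self.y_start = x_start, y_start
        self.x_end, self.y_end = x_end, y_end

    def is_inbounds(self, location):
        """True iff location lies within the walls, end coordinates included
        (the old `>=` version excluded x_end and y_end)."""
        x, y = location
        return not (x < self.x_start or x > self.x_end or
                    y < self.y_start or y > self.y_end)


env = GridEnv(0, 0, 4, 4)
```

With the previous `>=` comparison, `(4, 4)` would have been rejected; under the new check it is a valid location.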

agents4e.py

Lines changed: 30 additions & 24 deletions
@@ -1,4 +1,5 @@
-"""Implement Agents and Environments (Chapters 1-2).
+"""
+Implement Agents and Environments. (Chapters 1-2)
 
 The class hierarchies are as follows:
 
@@ -23,16 +24,14 @@
 EnvToolbar ## contains buttons for controlling EnvGUI
 
 EnvCanvas ## Canvas to display the environment of an EnvGUI
-
 """
 
-# TO DO:
+# TODO
 # Implement grabbing correctly.
 # When an object is grabbed, does it still have a location?
 # What if it is released?
 # What if the grabbed or the grabber is deleted?
 # What if the grabber moves?
-#
 # Speed control in GUI does not have any effect -- fix it.
 
 from utils4e import distance_squared, turn_heading
@@ -90,8 +89,7 @@ def __init__(self, program=None):
         self.holding = []
         self.performance = 0
         if program is None or not isinstance(program, collections.Callable):
-            print("Can't find a valid program for {}, falling back to default.".format(
-                self.__class__.__name__))
+            print("Can't find a valid program for {}, falling back to default.".format(self.__class__.__name__))
 
             def program(percept):
                 return eval(input('Percept={}; action? '.format(percept)))
@@ -122,10 +120,13 @@ def new_program(percept):
 
 
 def TableDrivenAgentProgram(table):
-    """This agent selects an action based on the percept sequence.
+    """
+    [Figure 2.7]
+    This agent selects an action based on the percept sequence.
     It is practical only for tiny domains.
     To customize it, provide as table a dictionary of all
-    {percept_sequence:action} pairs. [Figure 2.7]"""
+    {percept_sequence:action} pairs.
+    """
     percepts = []
 
     def program(percept):
@@ -154,7 +155,10 @@ def RandomAgentProgram(actions):
 
 
 def SimpleReflexAgentProgram(rules, interpret_input):
-    """This agent takes action based solely on the percept. [Figure 2.10]"""
+    """
+    [Figure 2.10]
+    This agent takes action based solely on the percept.
+    """
 
     def program(percept):
         state = interpret_input(percept)
@@ -166,7 +170,10 @@ def program(percept):
 
 
 def ModelBasedReflexAgentProgram(rules, update_state, trainsition_model, sensor_model):
-    """This agent takes action based on the percept and state. [Figure 2.12]"""
+    """
+    [Figure 2.12]
+    This agent takes action based on the percept and state.
+    """
 
     def program(percept):
         program.state = update_state(program.state, program.action, percept, trainsition_model, sensor_model)
@@ -219,7 +226,9 @@ def TableDrivenVacuumAgent():
 
 
 def ReflexVacuumAgent():
-    """A reflex agent for the two-state vacuum environment. [Figure 2.8]
+    """
+    [Figure 2.8]
+    A reflex agent for the two-state vacuum environment.
     >>> agent = ReflexVacuumAgent()
     >>> environment = TrivialVacuumEnvironment()
     >>> environment.add_thing(agent)
@@ -333,8 +342,7 @@ def run(self, steps=1000):
 
     def list_things_at(self, location, tclass=Thing):
         """Return all things exactly at a given location."""
-        return [thing for thing in self.things
-                if thing.location == location and isinstance(thing, tclass)]
+        return [thing for thing in self.things if thing.location == location and isinstance(thing, tclass)]
 
     def some_things_at(self, location, tclass=Thing):
         """Return true if at least one of the things at location
@@ -437,13 +445,13 @@ def move_forward(self, from_location):
         """
         x, y = from_location
         if self.direction == self.R:
-            return (x + 1, y)
+            return x + 1, y
         elif self.direction == self.L:
-            return (x - 1, y)
+            return x - 1, y
         elif self.direction == self.U:
-            return (x, y - 1)
+            return x, y - 1
         elif self.direction == self.D:
-            return (x, y + 1)
+            return x, y + 1
 
 
 class XYEnvironment(Environment):
@@ -498,7 +506,7 @@ def execute_action(self, agent, action):
             agent.holding.pop()
 
     def default_location(self, thing):
-        return (random.choice(self.width), random.choice(self.height))
+        return random.choice(self.width), random.choice(self.height)
 
     def move_to(self, thing, destination):
         """Move a thing to a new location. Returns True on success or False if there is an Obstacle.
@@ -724,7 +732,7 @@ def percept(self, agent):
         status = ('Dirty' if self.some_things_at(
             agent.location, Dirt) else 'Clean')
         bump = ('Bump' if agent.bump else 'None')
-        return (status, bump)
+        return status, bump
 
     def execute_action(self, agent, action):
         agent.bump = False
@@ -753,12 +761,11 @@ def __init__(self):
                        loc_B: random.choice(['Clean', 'Dirty'])}
 
     def thing_classes(self):
-        return [Wall, Dirt, ReflexVacuumAgent, RandomVacuumAgent,
-                TableDrivenVacuumAgent, ModelBasedVacuumAgent]
+        return [Wall, Dirt, ReflexVacuumAgent, RandomVacuumAgent, TableDrivenVacuumAgent, ModelBasedVacuumAgent]
 
     def percept(self, agent):
         """Returns the agent's location, and the location status (Dirty/Clean)."""
-        return (agent.location, self.status[agent.location])
+        return agent.location, self.status[agent.location]
 
     def execute_action(self, agent, action):
         """Change agent's location and/or location's status; track performance.
@@ -994,8 +1001,7 @@ def is_done(self):
                 print("Death by {} [-1000].".format(explorer[0].killed_by))
         else:
             print("Explorer climbed out {}."
-                  .format(
-                      "with Gold [+1000]!" if Gold() not in self.things else "without Gold [+0]"))
+                  .format("with Gold [+1000]!" if Gold() not in self.things else "without Gold [+0]"))
         return True
 
     # TODO: Arrow needs to be implemented
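Both files reflow the TableDrivenAgentProgram docstring ([Figure 2.7]): the agent looks up an action keyed by the entire percept sequence seen so far. A minimal closure of the kind the docstring describes (a sketch; the table contents are illustrative, not the repo's exact code):

```python
def TableDrivenAgentProgram(table):
    """Select an action from table keyed by the full percept sequence.
    table: dict mapping tuples of percepts to actions; practical only
    for tiny domains."""
    percepts = []

    def program(percept):
        percepts.append(percept)
        # the key is the whole history, so the table grows with sequence length
        return table.get(tuple(percepts))

    return program


# illustrative two-step vacuum table
table = {(('A', 'Dirty'),): 'Suck',
         (('A', 'Dirty'), ('A', 'Clean')): 'Right'}
prog = TableDrivenAgentProgram(table)
```

Each call appends to the closed-over `percepts` list, which is why a fresh program must be built per episode.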

0 commit comments