at (0, 0)\n",
+ " from list: []\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Delete the previously added simple reflex agent.\n",
+ "trivial_vacuum_env.delete_thing(simple_reflex_agent)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We need a another function UPDATE-STATE which will be reponsible for creating a new state description."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 140,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "ModelBasedVacuumAgent is located at (0, 0).\n"
+ ]
+ }
+ ],
+ "source": [
+ "# TODO: Implement this function for the two-dimensional environment.\n",
+ "def update_state(state, action, percept, model):\n",
+ " pass\n",
+ "\n",
+ "# Create a model-based reflex agent.\n",
+ "model_based_reflex_agent = ModelBasedVacuumAgent()\n",
+ "\n",
+ "# Add the agent to the environment.\n",
+ "trivial_vacuum_env.add_thing(model_based_reflex_agent)\n",
+ "\n",
+ "print(\"ModelBasedVacuumAgent is located at {}.\".format(model_based_reflex_agent.location))"
+ ]
+ },
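+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a starting point, here is one possible sketch of UPDATE-STATE for the simple two-location world used in this notebook (the TODO above asks for the two-dimensional case). It assumes percepts of the form `(location, status)`, as returned by `TrivialVacuumEnvironment`, and a `model` dictionary keyed by location; it is illustrative, not the only way to write it."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A minimal sketch of UPDATE-STATE for the two-location environment.\n",
+ "# Assumptions: the percept is a (location, status) pair and the model\n",
+ "# maps each location to the last status observed there.\n",
+ "def update_state(state, action, percept, model):\n",
+ "    location, status = percept\n",
+ "    # Record what was just perceived at the current location.\n",
+ "    model[location] = status\n",
+ "    # Use the most recent percept as the new state description.\n",
+ "    return percept"
+ ]
+ },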
+ {
+ "cell_type": "code",
+ "execution_count": 143,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "State of the Environment: {(1, 0): 'Clean', (0, 0): 'Clean'}.\n",
+ "ModelBasedVacuumAgent is located at (1, 0).\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Run the environment.\n",
+ "trivial_vacuum_env.step()\n",
+ "\n",
+ "# Check the current state of the environment.\n",
+ "print(\"State of the Environment: {}.\".format(trivial_vacuum_env.status))\n",
+ "\n",
+ "print(\"ModelBasedVacuumAgent is located at {}.\".format(model_based_reflex_agent.location))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Goal-Based Agent Program \n",
+ "\n",
+ "A goal-based agent needs some sort of goal information that describes situations that are desirable, apart from the current state description. \n",
+ "Figure 2.13 of the book shows a model-based, goal-based agent: \n",
+ "
\n",
+ "\n",
+ "Search (Chapters 3 to 5) and Planning (Chapters 10 to 11) are the subfields of AI devoted to finding action sequences that achieve the agent's goals.\n",
+ "\n",
+ "## Utility-Based Agent Program\n",
+ "\n",
+ "A utility-based agent maximizes its utility using the agent's utility function, which is essentially an internalization of the agent's performance measure. \n",
+ "Figure 2.14 of the book shows a model-based, utility-based agent:\n",
+ "
\n"
+ ]
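+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The cell below gives a minimal, illustrative sketch of a goal-based agent program for the trivial vacuum world: the goal is that both locations are `'Clean'`, and the program picks actions that move its model towards that goal. `GOAL_STATE` and `GoalBasedVacuumProgram` are names made up for this sketch; they are not part of `agents.py`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A minimal sketch of a goal-based agent program for the vacuum world.\n",
+ "# GOAL_STATE and GoalBasedVacuumProgram are illustrative, not agents.py code.\n",
+ "loc_A, loc_B = (0, 0), (1, 0)\n",
+ "GOAL_STATE = {loc_A: 'Clean', loc_B: 'Clean'}\n",
+ "\n",
+ "def GoalBasedVacuumProgram():\n",
+ "    model = {loc_A: None, loc_B: None}  # the agent's model of the world\n",
+ "    def program(percept):\n",
+ "        location, status = percept\n",
+ "        model[location] = status        # update the model from the percept\n",
+ "        if model == GOAL_STATE:         # goal achieved: both squares clean\n",
+ "            return 'NoOp'\n",
+ "        if status == 'Dirty':           # cleaning here moves towards the goal\n",
+ "            return 'Suck'\n",
+ "        # Otherwise head to the other square, which may still be dirty.\n",
+ "        return 'Right' if location == loc_A else 'Left'\n",
+ "    return program\n",
+ "\n",
+ "# Example: wrap the program in an Agent to run it in an environment.\n",
+ "goal_based_agent = Agent(GoalBasedVacuumProgram())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "And a small sketch of utility-based action selection for the same world: the agent predicts the outcome of each action one step ahead and picks the action with the highest net utility. The `utility`, `predict` and `ACTION_COST` definitions are assumptions made for this sketch, not part of `agents.py`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A minimal sketch of utility-based action selection for the vacuum world.\n",
+ "# utility(), predict() and ACTION_COST are illustrative assumptions.\n",
+ "loc_A, loc_B = (0, 0), (1, 0)\n",
+ "ACTION_COST = {'Suck': 0.1, 'Right': 0.05, 'Left': 0.05, 'NoOp': 0.0}\n",
+ "\n",
+ "def utility(state, location):\n",
+ "    # Prefer states with more clean squares; small bonus for standing on dirt.\n",
+ "    clean = sum(status == 'Clean' for status in state.values())\n",
+ "    return clean + (0.5 if state[location] == 'Dirty' else 0.0)\n",
+ "\n",
+ "def predict(state, location, action):\n",
+ "    # One-step prediction of the (state, location) an action would produce.\n",
+ "    state = dict(state)  # copy, so the agent's model is not mutated\n",
+ "    if action == 'Suck':\n",
+ "        state[location] = 'Clean'\n",
+ "    elif action == 'Right':\n",
+ "        location = loc_B\n",
+ "    elif action == 'Left':\n",
+ "        location = loc_A\n",
+ "    return state, location\n",
+ "\n",
+ "def utility_based_choice(model, location):\n",
+ "    # Pick the action whose predicted outcome has the highest net utility.\n",
+ "    def score(action):\n",
+ "        next_state, next_location = predict(model, location, action)\n",
+ "        return utility(next_state, next_location) - ACTION_COST[action]\n",
+ "    return max(ACTION_COST, key=score)\n",
+ "\n",
+ "# With both squares believed dirty and the agent at loc_A, 'Suck' scores highest.\n",
+ "print(utility_based_choice({loc_A: 'Dirty', loc_B: 'Dirty'}, loc_A))"
+ ]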
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.5.2"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}