Agent Environment in AI
What is an Agent?
An agent is anything that perceives its environment through sensors and acts upon that
environment through actuators.
Example:
A robot vacuum cleaner uses sensors to detect walls, obstacles, and dirt. Based on this,
it decides where to move and when to clean. It acts through motors and wheels
(actuators).
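To make this concrete, here is a minimal Python sketch of such a simple reflex agent. The percept fields and action names ("dirty", "wall_ahead", "suck", and so on) are illustrative assumptions, not part of any real robot's API.

```python
def vacuum_agent(percept):
    """Simple reflex agent: map the current percept directly to an action."""
    if percept["dirty"]:         # sensor reports dirt under the robot
        return "suck"            # actuator: run the cleaning motor
    if percept["wall_ahead"]:    # sensor reports a wall or obstacle
        return "turn"            # actuator: rotate via the wheels
    return "forward"             # actuator: drive forward
```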
What is an Environment?
The Environment is everything that lies outside the agent but interacts with it. It's the world the
agent lives in and tries to operate in.
Example:
For the vacuum cleaner agent, the environment is the entire house, including the floors,
carpets, walls, and furniture.
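A toy environment makes the agent–environment interaction explicit: the environment produces percepts, the agent chooses actions, and the environment changes in response. The sketch below reuses the vacuum_agent function from above; the one-dimensional "house" of dirty and clean cells is a deliberately simplified assumption.

```python
import random

class HouseEnvironment:
    """Toy one-dimensional house: a row of cells, each dirty (True) or clean (False)."""

    def __init__(self, size=5):
        self.cells = [random.choice([True, False]) for _ in range(size)]
        self.position = 0

    def percept(self):
        # What the agent's sensors report about the current cell.
        return {"dirty": self.cells[self.position],
                "wall_ahead": self.position == len(self.cells) - 1}

    def step(self, action):
        # How the environment changes in response to the agent's action.
        if action == "suck":
            self.cells[self.position] = False
        elif action == "forward":
            self.position += 1
        elif action == "turn":
            self.position = 0  # crude turn-around: go back to the start

env = HouseEnvironment()
for _ in range(20):                        # the basic percept -> action -> effect loop
    env.step(vacuum_agent(env.percept()))
print("dirty cells left:", sum(env.cells))
```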
Features of an Environment
According to Russell and Norvig, the environment in which an AI agent operates can have several
characteristics. These are essential to consider when designing or choosing the right agent.
1. Fully Observable vs Partially Observable
Fully Observable: The agent has access to the complete state of the environment at
any given time.
o Example: A chess game where all pieces and positions are visible to both players.
The AI knows everything it needs to make the next move.
Partially Observable: The agent can only see part of the environment due to sensor
limitations or hidden information.
o Example: Driving a car in fog – the driver (agent) can’t see everything on the
road.
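One way to see the difference in code: a fully observable environment hands the agent its complete state, while a partially observable one hides part of it behind imperfect sensors. The state fields and the visibility probability below are illustrative assumptions.

```python
import random

FULL_STATE = {"position": (3, 4), "obstacle_ahead": True, "dirt_level": 0.7}

def fully_observable_percept(state):
    # The agent sees the complete, exact state of the environment.
    return dict(state)

def partially_observable_percept(state, visibility=0.5):
    # Sensor limits (think fog) hide part of the state: each field is seen
    # only with some probability and otherwise reported as unknown (None).
    return {key: (value if random.random() < visibility else None)
            for key, value in state.items()}
```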
2. Deterministic vs Stochastic
Deterministic: The next state of the environment is completely predictable from the
current state and the chosen action.
o Example: Using a calculator – pressing "2 + 2 =" will always give you 4.
Stochastic: The outcome includes some randomness or unpredictability.
o Example: Rolling a die – the result can be any number from 1 to 6, regardless of
the agent’s intent.
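In code, the distinction is whether the same inputs always produce the same output. A minimal illustration of the two examples above; the function names are ours, not a standard API.

```python
import random

def calculator_add(a, b):
    # Deterministic: 2 + 2 evaluates to 4 every single time.
    return a + b

def roll_die():
    # Stochastic: the outcome is one of 1..6 and cannot be predicted in advance.
    return random.randint(1, 6)
```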
3. Episodic vs Sequential
Episodic: The agent’s actions are based only on the current percept (input) and do not
depend on previous actions.
o Example: Face recognition software – identifies each image independently.
Sequential: Current decisions depend on previous actions and experiences.
o Example: Playing chess – each move depends on the earlier moves made in the
game.
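The practical difference shows up in whether the agent needs to keep state. In this rough sketch, the threshold and the running-average rule are arbitrary illustrative choices.

```python
def episodic_decision(percept):
    # Episodic: the decision depends only on the current percept.
    return "match" if percept > 0.5 else "no match"

class SequentialAgent:
    """Sequential: earlier percepts influence the current decision."""

    def __init__(self):
        self.history = []

    def act(self, percept):
        self.history.append(percept)
        # Decide on the whole history (here, a running average), not just
        # the latest percept in isolation.
        average = sum(self.history) / len(self.history)
        return "match" if average > 0.5 else "no match"
```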
4. Single-Agent vs Multi-Agent
Single-Agent: Only one agent operates in the environment, so no other agents influence
the outcome of its actions.
o Example: Solving a crossword puzzle alone – there is no other agent to compete with.
Multi-Agent: Two or more agents act in the same environment and may compete or
cooperate.
o Example: Playing chess against an opponent – the other player is an agent whose
moves directly affect the game.
5. Static vs Dynamic
Static: The environment does not change while the agent is deciding on an action.
o Example: A crossword puzzle – the grid doesn’t change while you think.
Dynamic: The environment can change on its own while the agent is deliberating or acting.
o Example: Traffic environment while driving – cars, pedestrians, and lights
change constantly.
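A simple way to model the difference is whether the environment has its own "clock". In this sketch, advance_time is a made-up method name used only for illustration.

```python
class CrosswordEnvironment:
    # Static: nothing changes while the agent is thinking.
    def advance_time(self):
        pass  # the grid stays exactly as it is

class TrafficEnvironment:
    # Dynamic: the world keeps changing even if the agent does nothing.
    def __init__(self):
        self.light = "red"

    def advance_time(self):
        self.light = "green" if self.light == "red" else "red"
```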
6. Discrete vs Continuous
Discrete: The environment has a finite number of clearly defined states or actions.
o Example: Board games like chess or checkers – limited pieces and moves.
Continuous: States or actions take values from a continuous range, giving infinitely many possibilities.
o Example: Controlling a drone – continuous adjustment of direction, speed, and
height.
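In a discrete environment an action can be picked from a finite list; in a continuous one it is a real-valued quantity. The action names and value ranges below are illustrative only.

```python
# Discrete: a finite, clearly defined set of possible actions.
BOARD_GAME_ACTIONS = ["up", "down", "left", "right", "stay"]

# Continuous: actions are real numbers drawn from a range with infinitely
# many possible values, e.g. a drone's thrust and tilt.
def drone_action(thrust: float, tilt_degrees: float) -> dict:
    assert 0.0 <= thrust <= 1.0, "thrust is a fraction of maximum power"
    assert -45.0 <= tilt_degrees <= 45.0, "tilt is limited to a safe range"
    return {"thrust": thrust, "tilt_degrees": tilt_degrees}
```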
7. Known vs Unknown
Known: The agent knows the rules of the environment and the outcomes of its actions.
o Example: Playing a well-defined game like tic-tac-toe – the agent knows what
happens with each move.
Unknown: The agent has to learn the rules by exploring or experimenting.
o Example: Learning to navigate a new city without a map – the agent must
explore and learn from trial and error.
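One way to picture this: in a known environment the outcome of every action is given up front, while in an unknown one the agent has to build that table itself by trying things. Everything in this sketch (the state names, the environment_step callback) is hypothetical.

```python
import random

# Known: the rules are given in advance, so the agent can plan without acting.
KNOWN_OUTCOMES = {
    ("corner_a", "turn_left"): "corner_b",
    ("corner_a", "turn_right"): "corner_c",
}

# Unknown: the agent discovers the rules through trial and error.
def explore_step(environment_step, state, actions, learned):
    action = random.choice(actions)               # try something
    next_state = environment_step(state, action)  # observe what actually happens
    learned[(state, action)] = next_state         # remember the discovered rule
    return next_state
```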
8. Accessible vs Inaccessible
Accessible: The agent can get complete and accurate information about the
environment's state.
o Example: Temperature control system in a room – the thermostat knows the
room temperature exactly.
Inaccessible: The agent can only obtain limited or noisy information about the environment's state.
o Example: Predicting an earthquake – you don’t have full access to all seismic
data or underground movements.
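The accessible/inaccessible distinction can be mimicked by comparing an exact reading with a noisy, incomplete one. The noise level below is an arbitrary illustrative value.

```python
import random

def thermostat_reading(true_temperature):
    # Accessible: the sensor reports the environment's state exactly.
    return true_temperature

def seismic_reading(true_ground_motion):
    # Inaccessible: only limited, noisy information is available.
    return true_ground_motion + random.gauss(0.0, 2.0)
```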