An AI agent is any entity that perceives its environment through sensors and acts upon that environment through actuators to maximise its performance measure. Russell & Norvig define five progressively more capable agent architectures — simple reflex, model-based reflex, goal-based, utility-based, and learning — each suited to different environment types. The PEAS framework (Performance, Environment, Actuators, Sensors) is the standard tool for specifying any agent design. Core GATE DS&AI topic.
## Real-life analogy: Five levels of driving autonomy
The five agent types map to self-driving car levels: Simple reflex = emergency brake (if obstacle, stop — no memory). Model-based reflex = lane-assist (maintains an internal model of the road). Goal-based = GPS navigation (plans a route to a destination). Utility-based = full self-driving (balances speed, safety, fuel — no single goal). Learning agent = Tesla FSD (improves from millions of real-world trips).
| Agent type | Driving analogy | Classic AI example |
|---|---|---|
| Simple reflex | Emergency brake sensor | Thermostat, keyword spam filter |
| Model-based reflex | Lane-assist with road model | Robot vacuum with floor map |
| Goal-based | GPS navigation | A* path planner, chess opening book |
| Utility-based | Full self-driving (trade-offs) | Atari DQN, trading bots |
| Learning agent | Tesla FSD learning fleet | GPT, AlphaGo, recommendation engines |
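The first two rows of the table can be sketched in code. Below is a minimal sketch using AIMA's toy two-square vacuum world (squares "A" and "B"); the percept format, function names, and rule set are illustrative assumptions, not a fixed API.

```python
def simple_reflex_agent(percept):
    """Simple reflex: acts only on the current percept -- no memory."""
    location, dirty = percept
    if dirty:
        return "suck"
    return "right" if location == "A" else "left"

def make_model_based_agent():
    """Model-based reflex: keeps an internal model of square states."""
    model = {"A": "unknown", "B": "unknown"}
    def agent(percept):
        location, dirty = percept
        model[location] = "dirty" if dirty else "clean"  # update the model
        if dirty:
            return "suck"
        other = "B" if location == "A" else "A"
        if model[other] != "clean":                      # go check the other square
            return "right" if location == "A" else "left"
        return "noop"                                    # model says all clean
    return agent
```

Note how the model-based agent can decide to do nothing once its model says both squares are clean, a decision the memoryless reflex agent can never make.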
## PEAS framework — defining any agent task
PEAS (Performance measure, Environment, Actuators, Sensors) is the standard specification framework for AI agents. Every GATE question about agent design can be answered systematically using PEAS.
| PEAS | Taxi driver agent | Email spam filter | Chess AI |
|---|---|---|---|
| Performance | Safe trips, profit, comfort | Spam blocked, no false positives | Win games |
| Environment | Roads, traffic, weather | Inbox: text, headers, senders | Chess board (8x8) |
| Actuators | Steering, brakes, accelerator | Block / allow / folder-route | Choose and play a move |
| Sensors | Camera, GPS, speedometer | Email content, sender reputation | Board state (fully observable) |
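A PEAS specification is really just four named lists, so it translates directly into a small data structure. This is a sketch, not a standard representation; the class and field names are my own choices, populated from the spam-filter column above.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """One PEAS specification: four lists, one per component."""
    performance: list   # what counts as doing well
    environment: list   # what the agent operates in
    actuators: list     # what the agent can do
    sensors: list       # what the agent can perceive

# Spam-filter column of the table, as data
spam_filter = PEAS(
    performance=["spam blocked", "no false positives"],
    environment=["inbox text", "headers", "senders"],
    actuators=["block", "allow", "folder-route"],
    sensors=["email content", "sender reputation"],
)
```

Writing the four lists explicitly is exactly the exercise a GATE PEAS question asks for; if a component is empty or vague, the agent design is under-specified.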
## Environment types — the GATE taxonomy
| Property | Easier | Harder | Example |
|---|---|---|---|
| Observability | Fully observable | Partially observable | Chess (full) vs Poker (partial) |
| Determinism | Deterministic | Stochastic | Rubik's cube vs weather forecasting |
| Episodicity | Episodic | Sequential | Image classifier vs chess |
| Dynamism | Static | Dynamic | Crossword vs taxi driving |
| Discreteness | Discrete | Continuous | Chess vs robot arm control |
| Agent count | Single-agent | Multi-agent | Maze solver vs RTS game |
### GATE answer: hardest real-world environment
The hardest AI environment is: partially observable + stochastic + sequential + dynamic + continuous + multi-agent. Real autonomous driving fits this description. The easiest (fully observable, deterministic, episodic, static, discrete, single-agent) describes simple puzzles like Sudoku.
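The six-property classification can be expressed as a checklist. A minimal sketch, assuming a plain-dict representation whose keys and values mirror the table above (the property names are illustrative, not standard identifiers):

```python
# Value on each axis that makes the environment harder
HARDEST = {
    "observability": "partial",  "determinism": "stochastic",
    "episodicity":   "sequential", "dynamism":  "dynamic",
    "discreteness":  "continuous", "agents":    "multi",
}

def is_hardest(env: dict) -> bool:
    """True if the environment is on the hard side of all six axes."""
    return all(env.get(prop) == value for prop, value in HARDEST.items())

taxi = {"observability": "partial", "determinism": "stochastic",
        "episodicity": "sequential", "dynamism": "dynamic",
        "discreteness": "continuous", "agents": "multi"}

sudoku = {"observability": "full", "determinism": "deterministic",
          "episodicity": "episodic", "dynamism": "static",
          "discreteness": "discrete", "agents": "single"}
```

Running `is_hardest` on the two examples confirms the claim in the text: taxi driving hits the hard end of every axis, Sudoku none of them.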
## Practice questions (GATE-style)
- A thermostat turns heating ON when the temperature drops below 18 °C. What agent type is this? (Answer: Simple reflex agent — it acts only on the current percept, with no internal model or goal representation.)
- What is the key difference between goal-based and utility-based agents? (Answer: Goal-based: binary satisfied/not-satisfied. Utility-based: continuous preference score — needed when multiple goals conflict or require trade-offs.)
- A poker-playing AI. Classify its environment completely. (Answer: Partially observable, stochastic, sequential, static, discrete, multi-agent — static because the game is turn-based, so the state does not change while the agent deliberates.)
- Name the four components of a learning agent. (Answer: Performance element, Learning element, Critic, Problem generator.)
- Can a simple reflex agent be rational? (Answer: Yes — if the environment is fully observable and the condition-action rules correctly implement the optimal action for each percept.)
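The four learning-agent components from the fourth question above can be wired together in a short sketch. The class and method names below are illustrative choices, not a standard API; the rule-dropping "learning" rule is deliberately simplistic.

```python
class LearningAgent:
    def __init__(self, rules):
        self.rules = rules                        # condition -> action table

    def performance_element(self, percept):
        """Selects external actions using the current rules."""
        return self.rules.get(percept, "noop")

    def critic(self, reward):
        """Judges outcomes against a fixed performance standard."""
        return "bad" if reward < 0 else "good"

    def learning_element(self, percept, feedback):
        """Improves the rules based on the critic's feedback."""
        if feedback == "bad":
            self.rules.pop(percept, None)         # drop the failing rule

    def problem_generator(self):
        """Suggests exploratory actions that yield new experience."""
        return "explore"

    def step(self, percept, reward):
        """One cycle wiring the four components together."""
        self.learning_element(percept, self.critic(reward))
        return self.performance_element(percept)
```

The key structural point for the exam: the performance element is the "whole agent" of the earlier architectures, while the critic, learning element, and problem generator sit around it to improve it over time.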
## On LumiChats
LumiChats AI Agent mode is a utility-based learning agent: it perceives your task, selects tools (web search, code execution, file I/O), and maximises task completion. The agent improves its suggestions based on your feedback within a session.
Try it free