Faiml Unit 1
Artificial Intelligence (AI) is commonly defined as the capability of computational systems to perform
tasks typically associated with human intelligence, such as learning, reasoning, problem-solving,
perception, and decision-making 1 . It is an interdisciplinary field of computer science that seeks to
develop methods and software enabling machines to perceive their environment, learn from data, and
take actions to achieve specified goals 1 2 . In practice, AI research aims to build systems that either
think like humans or act like humans (the human-centered approaches), or that think rationally or
act rationally (the rationalist approaches) 3 4 . (For example, Russell and Norvig identify four AI
goals: systems that think like humans, think rationally, act like humans, and act rationally 3 .) In
pursuit of these goals, AI combines ideas from logic, probability, learning theory, linguistics, psychology,
neuroscience and other fields. The underlying assumption is that intelligent behavior can be studied and
recreated by computational means.
Acting Humanly: The Turing Test Approach
In the Turing Test, a human interrogator poses written questions and must decide whether the responses come from a person or a machine. To pass such a test, a computer would need the following capabilities:
• Natural Language Processing (NLP): the ability to understand and generate human language
in writing or speech.
• Knowledge Representation: a way to store and organize what the system “knows” about the
world (facts, concepts, relationships).
• Automated Reasoning: the ability to draw logical inferences or conclusions from the stored
knowledge.
• Machine Learning: the capacity to learn from new information or experiences and improve over
time.
These components allow a system to answer questions, maintain a dialogue, and learn new facts – all in
a human-like manner 6 . Note that Turing’s original test did not involve seeing or touching the
machine; it was purely a conversational test. A so-called Total Turing Test (sometimes proposed) would
also require physical embodiment, adding components like computer vision and robotic control so
the machine can perceive and act in the physical world 7 . In summary, the Turing approach (the
acting humanly approach) evaluates AI by whether its behavior is indistinguishable from a
human's 5 6 .
Thinking Humanly: The Cognitive Modeling Approach
The cognitive modeling approach combines computational models from AI with experimental psychology data to build precise, testable models of
human cognition 8 . For example, cognitive architectures (such as ACT-R or SOAR) simulate human
problem-solving steps; psychological experiments on memory or attention guide the design of these
models. In this approach, intelligence is studied as a phenomenon of the human mind, and AI systems are
judged by how closely their internal reasoning resembles human reasoning. Essentially, the system
must incorporate theories of human memory, perception, learning and decision-making. (Empirical
validation is important: a cognitive model should explain experimental results about human behavior.)
The cognitive approach is concerned with modeling the way the brain solves problems, not just the
external output.
Thinking Rationally: The Laws of Thought Approach
The laws of thought approach, going back to Aristotle's syllogisms, attempts to codify "right thinking": given correct premises, formal logic should yield correct conclusions. However, there are practical obstacles to this pure laws-of-thought approach. As AI researchers have noted, it can be very difficult to take the messy, uncertain knowledge of the real world and encode it in precise logical terms 10 . Furthermore, solving complex problems “in principle” (with logic) may be
computationally intractable or impossible in practice. Real-world knowledge often has exceptions and
uncertainties that logic alone cannot easily handle. Thus, while logical reasoning remains a core AI
technique, relying solely on perfect logical inference has limitations. AI systems usually combine logic
with other methods to handle uncertainty and complexity 9 10 .
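To make this brittleness concrete, here is a minimal sketch of pure logical inference: forward chaining over a toy set of propositional Horn rules. The facts and rules are illustrative and written in plain Python rather than a real logic language such as Prolog.

```python
# A forward-chaining engine over propositional Horn rules.
# Conclusions follow strictly from stored facts and rules; there is
# no way to express exceptions or uncertainty.
rules = [
    ({"penguin"}, "bird"),           # if penguin then bird
    ({"bird", "alive"}, "can_fly"),  # if bird and alive then can_fly
]
facts = {"penguin", "alive"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # ['alive', 'bird', 'can_fly', 'penguin']
```

Because the rule base cannot state the exception "penguins do not fly", the strictly logical system derives a false conclusion; handling such exceptions is precisely where probabilistic and default-reasoning methods come in.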
Acting Rationally: The Rational Agent Approach
Rational agents are more general than the laws-of-thought approach, because “rationality” can be
precisely defined with mathematics (e.g. probability and utility theory) 4 11 . Building an AI under the
rational-agent paradigm often means specifying the agent’s performance measure, environment,
actuators, and sensors (the PEAS description). For example, a chess-playing AI is rational if it plays
to win; an autonomous car is rational if it drives safely and efficiently. Modern AI, especially in areas like
robotics and decision-making, largely follows the rational-agent viewpoint. (Today’s AI systems
concentrate on designing general principles of rational agents rather than trying to achieve perfect
rationality in every situation 12 11 .)
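A PEAS description is just structured data, so it can be written down directly. The sketch below uses the classic automated-taxi example from Russell & Norvig; the dataclass and its field names are illustrative, not a standard API.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS description of an agent's task environment."""
    performance_measure: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

# The classic automated-taxi example.
taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "GPS", "speedometer", "odometer"],
)
print(taxi.performance_measure)
```

Writing the PEAS description down first forces the designer to state what "rational" means for that agent before choosing any algorithm.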
Underlying Assumptions about Intelligence
AI research rests on several key assumptions about intelligence. Broadly, these include:
• Learnability: Machines can learn from experience. AI assumes that, like people, machines can
use data or feedback to improve their behavior over time. That is, algorithms can detect patterns
in data and refine their knowledge or models 15 .
• Rationality: Given enough information and resources, an intelligent agent will perform “correct”
reasoning or decisions to achieve its goals. In other words, it’s assumed that “rational” thinking
(as defined mathematically) is a standard for intelligence 4 13 .
• Knowledge-level equivalence: The human brain serves as a model. Many AI theories implicitly
assume the human brain is the best model of intelligence, so understanding human
cognition can inform AI design 16 .
These assumptions have guided AI: e.g. believing that knowledge and reasoning can be encoded in
rules or neural networks, and that statistical learning can replicate aspects of perception. They imply
that studying logic, probability, neural networks, and psychology can yield intelligent machines 13 14 .
(However, some assumptions are still debated, such as whether every aspect of human thought can
truly be formalized.)
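As a minimal illustration of the learnability assumption, the following sketch fits a line y = w·x to noisy data by gradient descent; the data points and learning rate are made up for illustration, and no libraries are needed.

```python
# Learnability in miniature: a model improves from data via feedback.
# Gradient descent fits y = w*x to noisy observations of y = 2x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

w = 0.0            # initial guess for the slope
lr = 0.01          # learning rate
for epoch in range(200):
    for x, y in data:
        error = w * x - y     # prediction error on this example
        w -= lr * error * x   # gradient step on the squared error
print(round(w, 2))  # ends up close to 2.0: the pattern has been "learned"
```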
Techniques Used in AI
The major families of AI techniques include:
• Search and Optimization: Algorithms that systematically explore possible solutions. Examples
include uninformed search (breadth-first, depth-first), heuristic search (A*, greedy), local search
(hill climbing, simulated annealing), and optimization methods (genetic algorithms, gradient
descent). These are used for planning, path-finding, scheduling, and many other problems (a
breadth-first search sketch appears after this list).
• Logical Reasoning and Knowledge Representation: Methods from formal logic (propositional,
first-order logic, description logics) and rule-based systems allow AI to represent knowledge in
declarative form and make inferences. Languages like Prolog implement such logical reasoning
for tasks like expert systems or configuration.
• Machine Learning (Statistical Methods): Techniques that enable computers to learn from data,
such as decision trees, support vector machines, Bayesian models, clustering, and regression.
These methods use statistics and probability to deal with uncertainty and to generalize from
examples (a nearest-neighbour sketch appears after this list).
• Artificial Neural Networks and Deep Learning: Bio-inspired networks of simple units
(neurons) that can learn complex functions. Modern deep learning (convolutional networks,
recurrent networks, transformers) has achieved breakthroughs in vision, speech, and language
tasks. Neural methods excel at pattern recognition and function approximation.
• Planning and Decision-Making: Algorithms for sequential decision problems, including
classical planning (STRIPS, GraphPlan, planning graphs) and stochastic planning (Markov
decision processes, reinforcement learning). These address how an agent should act over time (a
value iteration sketch appears after this list).
• Natural Language Processing (NLP): Techniques for processing human language, including
syntax parsing, semantic analysis, and dialogue systems. Early methods used grammar-based
rules; modern NLP often uses statistical and neural approaches (n-grams, word embeddings,
transformers) for tasks like translation or question answering (a bigram-counting sketch appears
after this list).
• Perception (Computer Vision, Speech Recognition): Methods for processing sensory data. This
includes image recognition, object detection, and speech-to-text, primarily tackled by statistical
and learning methods (e.g., convolutional neural networks for vision).
• Others: Additional AI techniques include evolutionary computation, fuzzy logic (for handling
vague or imprecise data), constraint satisfaction, and hybrid systems that combine symbolic and
subsymbolic approaches.
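The sketches below give a minimal, illustrative taste of four of these families. First, uninformed search: breadth-first search over a small hand-written graph. BFS explores states level by level and therefore returns a path with the fewest steps; the graph itself is an assumption made up for the example.

```python
from collections import deque

# Breadth-first search: systematically explore states level by level.
graph = {
    "A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
    "D": ["F"], "E": ["F"], "F": [],
}

def bfs(start, goal):
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("A", "F"))  # ['A', 'B', 'D', 'F'] -- a shortest path
```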
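Next, a minimal statistical learning example: 1-nearest-neighbour classification, which generalizes from labelled examples with no explicit rules. The training points and labels are illustrative.

```python
import math

# 1-nearest-neighbour: classify a point by its closest labelled example.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]

def classify(point):
    nearest = min(train, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

print(classify((1.1, 0.9)))  # 'A' -- near the first cluster
print(classify((4.1, 3.9)))  # 'B' -- near the second cluster
```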
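For sequential decision-making, value iteration computes how valuable each state of a Markov decision process is under the best possible behavior. The three-state MDP below is a deliberately tiny, made-up example.

```python
# Value iteration on a tiny, illustrative Markov decision process.
# transitions[state][action] = (next_state, reward); "s2" is terminal.
transitions = {
    "s0": {"right": ("s1", 0.0)},
    "s1": {"right": ("s2", 1.0), "left": ("s0", 0.0)},
    "s2": {},
}
gamma = 0.9                        # discount factor
V = {s: 0.0 for s in transitions}  # state values, initially zero

for _ in range(50):                # sweep until (approximately) converged
    for s, acts in transitions.items():
        if acts:                   # skip the terminal state
            V[s] = max(r + gamma * V[nxt] for nxt, r in acts.values())

print({s: round(v, 2) for s, v in V.items()})  # {'s0': 0.9, 's1': 1.0, 's2': 0.0}
```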
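Finally, one of the simplest statistical NLP techniques: counting bigrams (adjacent word pairs), the basis of early n-gram language models. The sentence is illustrative.

```python
from collections import Counter

# Bigram counts: the building block of simple n-gram language models.
words = "the cat sat on the mat the cat slept".split()
bigrams = Counter(zip(words, words[1:]))
print(bigrams.most_common(1))  # [(('the', 'cat'), 2)]
```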
In short, AI draws on many fields and methods – search, logic, statistics, economics, neuroscience,
etc. – to address different goals 2 17 . No single technique solves all AI problems; instead, intelligent
systems often integrate multiple techniques. For example, a robotics system might use logic for high-
level planning, control theory for low-level motion, and machine learning for perception. As AI matures,
researchers continue to refine these methods and combine them (e.g. neuro-symbolic systems that
merge logic and learning) 2 .
Levels of Modeling
Researchers often distinguish between problems that ordinary computers can solve easily (e.g.,
numerical calculation, simple data retrieval) and those that are difficult without AI techniques (e.g.,
understanding speech, proving theorems). The first class needs no special AI; the second class requires
AI methods 18 . Thus, to model human intelligence in machines, AI typically targets that second class
of “hard” problems and abstracts away from neurobiological details. For example, cognitive
architectures model memory and reasoning steps (knowledge-level modeling), while deep neural
networks abstract away from the specifics of actual brain neurons but capture learning behavior.
Ultimately, the level of detail depends on goals and feasibility. A highly detailed brain simulation (like a
full neural model of the cortex) is usually impractical, so AI chooses higher-level models that capture the
essence of intelligent behavior. This approach is akin to Newell’s “knowledge level”: modeling what the
system knows and does, rather than how the transistors operate. By focusing on the right abstraction
level, AI systems can efficiently simulate aspects of human intelligence without unnecessary complexity
18 2 .
Criteria for Success
For general AI, the Turing Test is one benchmark: we consider a machine intelligent if an interrogator
cannot distinguish it from a human in conversation 5 19 . However, for specific problems, success is
measured by task-specific metrics (for example, the accuracy of diagnoses, the score in a game, or the
time to complete a task).
As one AI review notes, in any scientific or engineering project – including AI – we ask “How will we
know if we have succeeded?” 20 . In practice, we design the AI agent’s performance measure: for a
chess program it is winning games, for a delivery drone it is timely and safe delivery, etc. Once the goals
are set, we use appropriate evaluation (test sets, simulations, competitions) to assess the system. For
instance, the success of vision systems is often measured by recognition rates on benchmark datasets,
and the success of language systems by BLEU scores or human evaluations.
Thus, successfully building an AI system requires (1) a clear statement of the intelligent behavior to
produce, and (2) measurable criteria for that behavior. These may be formal (like the Turing Test for
general intelligence 5 ) or empirical (task performance). By quantifying success, researchers can
iterate on design and training until the system meets or exceeds the desired performance.
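As a small illustration, here is the kind of task-specific metric computation used to quantify success: classification accuracy on a held-out test set. The labels and predictions are made up for the example.

```python
# Task-specific evaluation: accuracy of a classifier on a test set.
y_true = ["cat", "dog", "cat", "cat", "dog", "dog"]  # ground-truth labels
y_pred = ["cat", "dog", "dog", "cat", "dog", "cat"]  # model predictions

correct = sum(t == p for t, p in zip(y_true, y_pred))
print(f"accuracy = {correct / len(y_true):.0%}")  # 67%
```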
History of AI
The field of AI has evolved through several phases since the mid-20th century. Early theoretical
foundations were laid in the 1940s and 1950s. For example, in 1950 Alan Turing published “Computing
Machinery and Intelligence,” introducing the Turing Test as a thought experiment on machine
intelligence 5 . In 1956, John McCarthy coined the term “Artificial Intelligence” at the Dartmouth
Summer Research Project, which is generally considered the founding event of AI as a research
discipline 21 .
In the following decades, early AI programs showed promise: the Logic Theorist (1956) proved
mathematical theorems, and computer chess and problem solvers demonstrated basic reasoning.
These successes led to optimism and funding in the 1960s. However, the limits of early methods
became apparent by the 1970s and 1980s (e.g. difficulty in acquiring commonsense knowledge),
leading to periods called “AI winters” when progress and funding slowed 21 .
Interest revived in the 1980s with expert systems, and again in the late 1990s and 2000s with advances
in probabilistic methods and the advent of practical machine learning. A major resurgence occurred
after 2012: using powerful graphics processors (GPUs), AI researchers achieved breakthroughs in deep
learning that surpassed previous methods in vision and speech 21 . Since the 2010s, rapid progress
(sometimes called the “AI boom”) has continued, driven by massive data, improved algorithms, and
computational power 21 . Today, AI applications are pervasive (from search engines and virtual
assistants to autonomous vehicles) and have reached or exceeded human performance in many
domains (e.g. game playing, image recognition). Throughout its history, AI has oscillated between high
expectations and critical reassessment, but it remains one of the most active areas of computer science.
References:
• Russell & Norvig, Artificial Intelligence: A Modern Approach (AI definitions and approaches) 3 4 .
• GeeksforGeeks, Underlying Assumptions in AI (assumptions about intelligence) 22 .
• Turing, A., Computing Machinery and Intelligence, 1950 (Turing Test) 5 .
• Wikipedia, Artificial Intelligence (techniques, history) 2 1 .
• CS188 lecture slides (Turing Test, Cognitive Modeling, Laws of Thought) 6 9 .
• Nilsson, Physical Symbol System Hypothesis (PSSH) 14 .
• Gopichandrakesan Blog, Rational Agent Approach (rational agent definition) 12 .
• AI survey materials (success criteria, levels of modeling) 20 18 .
12 Gopichandrakesan, “Day 20 – Acting Rationally: The Rational Agent Approach,” Artificial Intelligence and Machine Learning blog.
https://www.gopichandrakesan.com/day-20-acting-rationally-the-rational-agent-approach-artificial-intelligence/
14 Nilsson, N., Physical Symbol System Hypothesis (PSSH), ai.stanford.edu.
https://ai.stanford.edu/~nilsson/OnlinePubs-Nils/PublishedPapers/pssh.pdf