R.K Class 9 Artificial Intelligence
SESSION-1.1 AI REFLECTION
LEARNING OUTCOMES
• Identify and appreciate Artificial Intelligence and describe its applications in daily life.
• Recognize, engage with, and relate to the three domains of AI: Data, Computer Vision, and
Natural Language Processing.
Data + Algorithm = Artificially Intelligent Machine
The three domains of AI (Data, Computer Vision, and Natural Language Processing) reveal their close
interconnections. Data serves as the foundation for both computer vision and NLP. Visual data is essential for
computer vision, while audio data is crucial for NLP. When data is presented in audiovisual formats, both computer
vision and NLP collaborate to process and interpret the information.
Communication can be categorized into visual communication and language communication. We
communicate through speech, gestures, signs, signals, expressions, text, and writing, all of which fall under either
visual or language categories. Similarly, AI systems utilize visuals and language for their operations.
LEARNING OUTCOMES
• Identify the AI Project Cycle framework.
1. Problem Scoping: Identifying and defining the problem that the AI system will address. This involves
understanding the requirements, objectives, and constraints of the project.
2. Data Acquisition: Gathering and collecting relevant data that will be used to train and validate the AI
model. This data could include images, text, videos, or any other form of structured or unstructured data.
LEARNING OUTCOMES
• Learn problem scoping and ways to set goals for an AI project.
• Identify stakeholders involved in the problem scoped. Brainstorm the ethical issues surrounding
the selected problem.
• Understand the iterative nature of problem scoping in the AI project cycle. Foresee the kind of data
required and the kind of analysis to be done.
Sub Topic 1: Identifying the Problem
Sub Topic 2: Setting of Goals
Sub Topic 3: Identifying the Stakeholders
Sub Topic 4: Identifying Existing Measures
Sub Topic 5: Identifying the Ethical Measures
Obesity
Problem: Accurate and early prediction of obesity to enable timely interventions and improve public health
outcomes.
2. Choosing a Topic
Once you have selected a theme, narrow down your focus to a specific topic. This should be an area within
the theme where you see a significant problem or opportunity for AI to make a positive impact.
For example, within healthcare, chronic diseases represent a significant challenge and opportunity for AI
solutions.
3. Identifying Subtopics and Problems
Further refine your focus by selecting a subtopic within the chosen topic. This will help you define the specific
problem you want to address. For example, Obesity is a prevalent and complex chronic disease with far-
reaching health implications.
Where
• Analyze context: Understand the environment in which the problem occurs.
• Identify locations: Determine the geographical distribution of the problem.
Why
• Assess impact: Evaluate the consequences of the problem on individuals and society.
• Define benefits: Articulate the potential value of a solution.
Who
• Who are the stakeholders?
• What do you know about them?
What
• What is the problem?
• How do you know that it is a problem (is there any evidence)?
Where
• What is the context/ situation the stakeholders experience regarding the problem?
• Where is the problem located?
Why
• Why will this solution be of value to the stakeholders?
• How will the solution improve their situation?
After filling the 4Ws Problem canvas, you now need to summarise all the answers into one template known as
the problem statement template. A problem statement template is a structured framework that
summarises the essence of a problem, including its impact, context, and potential solutions. It serves
as a foundation for further analysis and development.
LEARNING OUTCOMES
• Identify data requirements and find reliable sources to obtain relevant data
• Understand the purpose of Data Visualisation
• Use various types of graphs to visualise acquired data
Data acquisition is the second stage of the AI project cycle. It involves collecting, cleaning, and preparing data
essential for model development. Data comprises information, facts, or statistics that are crucial for training and
validating AI models. The quality and reliability of this data are critical to ensure the effectiveness and accuracy of the
AI system.
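The collect-then-clean idea described above can be sketched in plain Python. The records below are a made-up sample (the column names and values are illustrative, not the real obesity dataset); the cleaning rule is simply to drop any record with a missing field and convert the rest to proper types:

```python
import csv
import io

# A hypothetical raw dataset, as it might arrive from a survey export.
# Two of the five records have missing fields and must be dropped.
RAW_CSV = """age,height_m,weight_kg
25,1.70,68
31,,80
19,1.62,55
42,1.80,
28,1.75,90
"""

def load_clean_records(text):
    """Parse CSV text, drop rows with any missing value, convert types."""
    rows = csv.DictReader(io.StringIO(text))
    clean = []
    for row in rows:
        if all(row[field].strip() for field in row):  # keep complete rows only
            clean.append({
                "age": int(row["age"]),
                "height_m": float(row["height_m"]),
                "weight_kg": float(row["weight_kg"]),
            })
    return clean

records = load_clean_records(RAW_CSV)
print(len(records), "usable records out of 5")  # two incomplete rows dropped
```

In a real project the same pass would also check for outliers and inconsistent units, but the principle is the same: unreliable records never reach the model.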
Data visualisation tools help designers create visual representations of data sets easily, which can
then be readily understood. Some of these tools are as follows:
1. Spreadsheet Package. Spreadsheet packages such as MS-Excel, LibreOffice Calc, and Google Sheets provide
a variety of graphical tools to represent data. These tools are suitable for working with simple data only.
2. Tableau. Tableau is a widely used data visualisation tool. It is a simple-to-use yet powerful tool for creating
interactive graphs and charts in the form of dashboards and worksheets to gain business insights. It is
capable of handling enormous, frequently updated data sets.
3. Candela. Candela is an open-source tool for creating rich data visualisations. It provides a library of charts,
graphs, and plots to integrate with your data sets.
4. Google Data Studio. It is a dashboard and reporting tool that allows us to create appealing and
informative reports using the available data sets. It is easy to use and customise, and it even allows you to
share the work done.
5. QlikView. QlikView is used to analyse data for decision making. This tool is known for its customisation
capability and extensive features.
[Bar chart: record count for each NObeyesdad category: Insufficient_Weight, Normal_Weight,
Overweight_Level_I, Overweight_Level_II, Obesity_Type_I, Obesity_Type_II, Obesity_Type_III]
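The chart above is built by counting how many records fall into each obesity category. Before reaching for a spreadsheet or Tableau, the same count-then-draw idea can be sketched in plain Python; the label counts below are made up for illustration:

```python
from collections import Counter

# Hypothetical class labels for a handful of patients; in the real dataset
# the NObeyesdad column holds one such label per record.
labels = (["Normal_Weight"] * 5 + ["Overweight_Level_I"] * 3 +
          ["Obesity_Type_I"] * 4 + ["Insufficient_Weight"] * 2)

counts = Counter(labels)

# A minimal text "bar chart": one row per category, one '#' per record.
for category, n in counts.most_common():
    print(f"{category:<20} {'#' * n} ({n})")
```

A plotting library or any of the tools listed above would turn the same counts into the graphical bars shown in the figure.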
LEARNING OUTCOMES
• Understand modelling (Rule based and Learning-based)
[Diagram: Learning-based Machine Learning branches into Supervised learning (classification
algorithm, regression algorithm) and Unsupervised learning (clustering algorithm)]
AI systems are primarily categorized into two types: rule-based and learning-based systems, each with distinct
characteristics and applications.
A. Traditional Rule-Based Systems operate on a set of predefined rules and logic, similar to following a
detailed instruction manual. When these systems encounter a problem, they consult their established rules
to determine the appropriate decision.
• One of the main strengths of rule-based systems is their transparency and explainability; the decision-
making process is clear and straightforward, making it easy to understand how conclusions are reached.
Additionally, these systems are highly efficient in domains where rules are well-defined and consistent.
• However, rule-based systems also have notable limitations. They are rigid and inflexible, struggling to
adapt to new or unexpected situations. This rigidity makes them less effective when dealing with
complex problems involving multiple variables.
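A rule-based system really is just a fixed decision procedure. As a minimal sketch (the BMI cut-offs below follow common WHO-style thresholds, and the function name is our own), a classifier written entirely from predefined rules looks like this:

```python
def bmi_category(weight_kg, height_m):
    """A tiny rule-based system: fixed BMI cut-offs applied top to bottom,
    like following an instruction manual. No learning is involved."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "Underweight"
    elif bmi < 25:
        return "Normal"
    elif bmi < 30:
        return "Overweight"
    else:
        return "Obese"

print(bmi_category(68, 1.70))  # BMI is about 23.5, so "Normal"
```

The transparency is obvious: anyone can read exactly why a decision was made. The rigidity is equally obvious: if a new situation arises (say, athletes whose high muscle mass inflates BMI), the rules give wrong answers until a human rewrites them.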
B. Learning-Based Systems learn from data and improve their performance over time. They identify
patterns and relationships within data to make predictions or decisions. They can be further divided into two
main types: machine learning and deep learning. Machine learning systems learn from data without
needing explicit programming for each task, while deep learning, a subset of machine learning, utilizes
artificial neural networks with multiple layers to analyze complex patterns in data.
The strengths of learning-based systems include:
• Adaptability: Can handle complex and evolving problems.
• High accuracy in pattern recognition tasks.
• Continuous improvement through learning.
However, they also come with limitations. Learning-based systems often require large amounts of data to
function effectively and can be computationally expensive due to the resources needed for processing and learning.
Furthermore, these systems can lack transparency in their decision-making processes, making it harder to
understand how decisions are reached.
Supervised Learning
Supervised learning is a core concept in machine
learning where a model is trained on labeled data,
which means the data comes with predefined correct
answers. This labeled data is used to teach the model
to make predictions on new, unseen data. Labels
assist in linking input data to the correct output.
Supervised learning models learn to associate these
input data points with their respective output labels.
There are two main types of problems that utilize
supervised learning algorithms: classification and
regression. Regression algorithms predict a
continuous numerical value, while classification
algorithms predict a categorical value. Common supervised learning algorithms include linear regression, K-Nearest
Neighbors (KNN), decision trees, support vector machines (SVM), random forests, and neural networks.
The following are some of the real-world scenarios where classification is used:
• Email Spam Filtering: Determining whether an email is spam or not.
• Medical Diagnosis: Classifying diseases based on patient symptoms and test results.
• Customer Churn Prediction: Predicting whether a customer will discontinue a service.
The following are some of the real-world scenarios where regression is used:
• Housing Price Prediction: Estimating the price of a house based on features like size, location, and
number of bedrooms.
• Sales Forecasting: Predicting future sales based on historical data and market trends.
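The two problem types can be illustrated with stripped-down versions of two algorithms named above: a one-nearest-neighbour classifier (the idea behind KNN) predicting a category, and a least-squares line fit (the idea behind linear regression) predicting a number. This is a minimal sketch in plain Python; the email scores and house prices are made-up illustrative data:

```python
# Classification: predict a category with 1-nearest-neighbour.
def nn_classify(train, query):
    """train: list of ((x, y), label); return the label of the closest point."""
    (_, label) = min(train, key=lambda p: (p[0][0] - query[0]) ** 2 +
                                          (p[0][1] - query[1]) ** 2)
    return label

# Each email described by two made-up features, with its known label.
emails = [((0.9, 0.8), "spam"), ((0.8, 0.9), "spam"),
          ((0.1, 0.2), "not spam"), ((0.2, 0.1), "not spam")]
print(nn_classify(emails, (0.95, 0.8)))  # nearest labelled email is "spam"

# Regression: predict a continuous value with a least-squares line fit.
def fit_line(xs, ys):
    """Return slope and intercept of the best-fit straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

sizes = [50, 80, 100, 120]       # house size in square metres
prices = [100, 160, 200, 240]    # price (illustrative: exactly 2 per sq m)
slope, intercept = fit_line(sizes, prices)
print(slope * 90 + intercept)    # predicted price for a 90 sq m house
```

Notice the output types: the classifier returns a label ("spam"), while the regressor returns a number, which is exactly the classification/regression distinction.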
Unsupervised Learning
In unsupervised learning, the data used to train the machine are neither labelled nor classified. The machine
learning model (algorithm) discovers patterns on its own. The goal is to organize unstructured information based
on similarities, patterns, and differences, without any prior training on labelled examples.
Reinforcement Learning
Reinforcement learning (RL) is a machine learning
approach where an agent learns to make decisions by
taking actions in an environment to maximize
cumulative reward. The agent learns from the
outcomes of its actions through trial and error without
being given examples of correct input-output pairs. For
instance, in a game where a robot collects a diamond
while avoiding obstacles, the robot tries different paths
and learns to choose the path that leads to the diamond
with the fewest obstacles, accumulating points for
correct steps and losing points for incorrect ones.
Common applications of RL include gaming,
autonomous vehicles, and personalized treatment
plans, with popular algorithms such as Deep Q-
networks (DQN).
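The diamond-collecting robot described above can be imitated with tabular Q-learning on a made-up corridor game (the states, rewards, and hyperparameters below are our own illustrative choices, far simpler than a real DQN):

```python
import random

# A tiny corridor: the robot starts at cell 0, the diamond is at cell 4.
# Each step costs 1 point; reaching the diamond earns 10. Through trial
# and error, the agent learns that moving right pays off.
N_STATES, ACTIONS = 5, ["left", "right"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration
random.seed(42)

for _ in range(300):                    # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # Mostly exploit the best known action, sometimes explore randomly.
        a = random.choice(ACTIONS) if random.random() < epsilon else \
            max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = max(0, s - 1) if a == "left" else min(N_STATES - 1, s + 1)
        reward = 10 if s2 == N_STATES - 1 else -1
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy should prefer "right" in every cell
```

No correct input-output pairs were ever shown; the agent discovered the best path purely from the rewards its own actions produced.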
For our obesity prediction model, we have labeled data (patients with their obesity status), so supervised
learning is the most suitable approach. Algorithms such as Logistic Regression or Decision Trees could be used.
LEARNING OUTCOMES
• Understand various evaluation techniques.
• Understand the importance of deploying AI models into real-world applications.
Evaluation is a critical step in the AI project lifecycle that determines the effectiveness of a model. It
involves assessing the model's performance on a separate dataset called the testing dataset. This dataset was not
used during the training phase, ensuring an unbiased evaluation.
Key Evaluation Metrics
To measure model performance, several metrics are used:
• Accuracy: The proportion of correct predictions out of the total predictions.
• Precision: The proportion of positive predictions that were actually correct.
• Recall: The proportion of actual positives that were correctly identified.
• F1-score: A harmonic mean of precision and recall, providing a balanced measure.
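These four metrics can be computed directly from counts of correct and incorrect predictions. Using the school-closure example from the next section (the prediction and reality lists below are invented for illustration):

```python
# School-closure predictions vs what actually happened ("Yes" = closed).
prediction = ["Yes", "No", "Yes", "No", "Yes", "No", "Yes", "No"]
reality    = ["Yes", "No", "No",  "No", "Yes", "Yes", "Yes", "No"]

# Count the four outcome types (True counts as 1 when summed).
TP = sum(p == "Yes" and r == "Yes" for p, r in zip(prediction, reality))
TN = sum(p == "No" and r == "No" for p, r in zip(prediction, reality))
FP = sum(p == "Yes" and r == "No" for p, r in zip(prediction, reality))
FN = sum(p == "No" and r == "Yes" for p, r in zip(prediction, reality))

accuracy  = (TP + TN) / (TP + TN + FP + FN)   # correct out of all
precision = TP / (TP + FP)                    # correct out of predicted Yes
recall    = TP / (TP + FN)                    # correct out of actual Yes
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"TP={TP} TN={TN} FP={FP} FN={FN}")
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```

Working through one small example like this by hand makes the formal definitions above much easier to remember.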
Model Evaluation Terminologies
Model evaluation is an important step in the machine learning process. It involves checking how well a model
works by comparing its predictions to real-world data. The goal is to see how accurate, reliable, and useful the model
is for a specific task. Two key ideas are involved in this process: prediction and reality.
Prediction: This is what the model guesses or estimates when it is given new data it hasn't seen before. The
model makes these predictions based on the patterns it learned during its training.
Reality: This is the actual or correct outcome for the data, also known as the ground truth. It serves as the
standard to which the model's predictions are compared. By comparing the model's predictions to the actual results,
we can measure how well the model is performing and identify areas where it might need improvement. This
comparison is the foundation for various methods used to evaluate the quality of the model.
There are various new terminologies which come into the picture when we work on evaluating our model.
Let us explore them with an example.
• True Positive (TP): Prediction Yes, Reality Yes. The model correctly predicts school closure on a rainy day.
• True Negative (TN): Prediction No, Reality No. The model correctly predicts that the school remains open on a clear day.
• False Positive (FP): Prediction Yes, Reality No (prediction and reality do not match). The model incorrectly predicts school closure on a clear day.
• False Negative (FN): Prediction No, Reality Yes (prediction and reality do not match). The model incorrectly predicts that the school remains open on a rainy day.
Once the models are ready, it's crucial to assess their performance. To
determine which algorithm yields the most accurate predictions, we will
evaluate each model's performance. One key metric for evaluating
classification models is the Receiver Operating Characteristic (ROC) curve.
The ROC curve graphically represents a model's ability to distinguish
between positive and negative classes. By comparing the ROC curves of our
different algorithms, we can identify the model that demonstrates the best
overall performance. The accompanying figure provides a visual
comparison of the accuracy of the three algorithm samples under
consideration.
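An ROC curve is just the set of (false positive rate, true positive rate) points obtained by sweeping a decision threshold over the model's scores. A minimal sketch in plain Python, with made-up risk scores and ground-truth labels, including the area under the curve (AUC) by the trapezoid rule:

```python
# Scores: the model's confidence that a patient is at obesity risk.
# Labels: the ground truth (1 = at risk). Both lists are hypothetical.
scores = [0.95, 0.85, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,    1,    1,   0,   1,   0,   0,   0]

def roc_points(scores, labels):
    """Sweep a threshold over the scores and record (FPR, TPR) pairs."""
    P = sum(labels)                # number of actual positives
    N = len(labels) - P            # number of actual negatives
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(s >= t and y == 1 for s, y in zip(scores, labels))
        fp = sum(s >= t and y == 0 for s, y in zip(scores, labels))
        points.append((fp / N, tp / P))
    return [(0.0, 0.0)] + points

def auc(points):
    """Area under the ROC curve by the trapezoid rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

pts = roc_points(scores, labels)
print(f"AUC = {auc(pts):.3f}")  # 1.0 would be a perfect ranking
```

Comparing the AUC of several models is one concrete way to decide which curve in the accompanying figure represents the best classifier.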
Our obesity prediction model's performance can be measured using metrics like accuracy, precision,
recall, and F1-score. It can then be fine-tuned based on the evaluation results.
Deployment
Deployment is the final stage in an AI project, where the developed model is integrated into a real-world
environment. A deployed AI project can be used in mobile applications, web applications, etc. Some of the key
steps in the deployment process include:
• Testing and Validation
• Integration with existing systems
• Monitoring and Maintenance
Data Acquisition: Collect a dataset containing patient records with attributes like age, gender, height, weight, blood pressure, cholesterol levels, and dietary habits.
Data Exploration: Analyze the dataset to understand correlations between obesity and different factors. Identify potential features for model training. Handle missing values and outliers.
Modelling: Develop and train machine learning models (e.g., logistic regression, decision trees, random forest) to predict obesity risk based on the collected data.
Evaluation: Evaluate model performance using metrics like accuracy, precision, recall, and F1-score. Fine-tune the model based on evaluation results.
Deployment: Integrate the model into a healthcare system or develop a user-friendly application for obesity risk assessment. Provide recommendations for users based on the prediction.
LEARNING OUTCOMES
• Differentiate between morality and ethics.
• Identify ethical concerns related to personal data.
• Understand ethical challenges and principles in AI.
• Analyse ethical implications of AI in real-world scenarios.
Ethics vs Morals
• Morals primarily focus on the individual; ethics are concerned with societal norms.
• Morals are derived from personal beliefs, upbringing, and culture; ethics stem from shared values and community agreements.
• Morals tend to be subjective, reflecting personal perspectives; ethics are considered more objective, based on broader societal standards.
• Morals function as an internal compass guiding individual behaviour; ethics provide an external framework for conduct.
• Examples of morals include honesty, fairness, and compassion; examples of ethics encompass professional codes, laws, and human rights.
Moral Machine
Ethical dilemmas arise when making choices becomes difficult due to conflicting moral principles. In the
world of Artificial Intelligence (AI), these dilemmas become especially complex due to the powerful capabilities and
potential impacts of AI systems on individuals, society, and the environment. Addressing these ethical issues is
critical for responsible development and deployment of AI. The Moral Machine (https://www.moralmachine.net/),
created by researchers at MIT, offers an interactive platform to explore ethical dilemmas in AI.
The Moral Machine presents users with hypothetical scenarios where autonomous vehicles must make split-
second decisions that could impact human lives. These scenarios often involve moral conflicts, such as deciding
whether to prioritize the safety of passengers or pedestrians, elderly individuals versus children, or obeying traffic
laws versus avoiding greater harm.
Principles in AI Ethics
Principles of AI are a set of guidelines that govern the design, development, and
deployment of artificial intelligence systems. These principles aim to ensure that AI is
developed and used ethically, responsibly, and beneficially for society. To create AI
systems that benefit society, we must prioritize ethical considerations. Four key
principles guide the development and deployment of responsible AI:
• Human Rights • Bias • Privacy • Inclusion
RECAP
• Artificial Intelligence (AI) is the branch of computer science that involves creating machines capable of
performing tasks that typically require human intelligence, such as learning, reasoning, problem-solving, and
decision-making.
• AI has become a part of our daily lives. We see a lot of applications of AI in everyday life.
• Based on the input data fed into AI, it can be classified into three domains—
1. Data 2. Computer Vision 3. Natural Language Processing (NLP)
• An AI project undergoes a set of 6 stages namely Problem scoping, data acquisition, data exploration,
modelling, evaluation and deployment.
• Problem scoping refers to the identification of a problem and the vision to solve it.
• The 4W problem canvas is a framework that can help you in identifying the key elements related to a problem.
The 4W here means Who? What? Where? and Why?
7. Rule based approach (g) guiding principles to decide what is good or bad
Ans. 1. (b), 2. (d), 3. (f), 4. (h), 5. ( j), 6. (a), 7. (c), 8. (e), 9. (g), 10. (i).
Case Based Questions
1. Imagine two friends, Ashwat and Anurag, are deciding who will go first in a game they’re about to play. They
want a quick and fair way to make the decision. Ashwat suggests a simple game where they each choose one
of three options: A fist, An open hand, A fist with the index and middle fingers extended. They reveal their
choices simultaneously. They follow these outcome rules to determine the winner:
• A fist crushes an open hand
• An open hand covers a fist with the index and middle fingers extended
• A fist with the index and middle fingers extended cuts a fist
Ans. Rock, Paper, Scissors.
2. After the pandemic, it has been essential for everyone to wear a mask. However, you see many people not
wearing masks when in public places. Which domain of AI can be used to build a system to detect people not
wearing masks? [CBSE]
Ans. Computer Vision.
3. At this stage, you try to interpret some useful information out of the data you have acquired. For this, you
explore the data and try to put it uniformly for a better understanding. Which stage of AI Project cycle is
spoken about?
Ans. Data Exploration stage.
4. Imagine you are building a website for a local bakery. Instead of designing the entire website at once, you
break it down into separate parts: one for the menu, one for online orders, and another for customer
reviews, making it easier to develop and update each part independently. Name the term used for breaking
the process into smaller parts.
Ans. Modularity.
5. A team is trying to understand why their product sales have declined. They brainstorm various factors that
might have contributed to this issue. What diagram would best visualize the cause-and-effect relationships
between the data features?
Ans. A Fishbone (Ishikawa) diagram.
10. Name a few government websites from where you can get open-source data. [CBSE]
Ans. data.gov.in, india.gov.in
11. Why do we need to explore data?
Ans. We need to explore data to
• Get a sense of trends, relationships and patterns present in data
• Decide which model to use in our AI Project cycle
• Make the information easier to comprehend
• Make it easier to communicate the story to others
12. How is Machine Learning related to Artificial Intelligence? [CBSE]
Ans. Machine Learning (ML) is a subset of AI that focuses on enabling machines to learn from data and
improve with experience, without being explicitly programmed for every task.
13. What is Evaluation? [CBSE]
Ans. Evaluation is the process of understanding the reliability of any AI model, based on outputs by feeding test
dataset into the model and comparing with actual answers.
14. What are various Model evaluation techniques? [CBSE]
Ans. TP (True Positive): The model correctly predicted a positive class.
5. Identify a, b and c in the following diagram. (Hint: how are AI, ML and DL related to each other?) [CBSE]
[Diagram: three concentric circles labelled c (outermost), b (middle) and a (innermost)]
• Morals primarily focus on the individual; ethics are concerned with societal norms.
• Morals are derived from personal beliefs, upbringing, and culture; ethics stem from shared values and community agreements.
• Morals function as an internal compass guiding individual behaviour; ethics provide an external framework for conduct.
• Examples of morals include honesty, fairness, and compassion; examples of ethics encompass professional codes, laws, and human rights.
8. A company had been working on a secret AI recruiting tool. The machine-learning specialists uncovered a
big problem: their new recruiting engine did not like women chefs. The system taught itself that male
candidates were preferable and penalised resumes that included the words "women chef". This led to the
failure of the tool.
(a) What aspect of AI ethics is illustrated in the given scenario?
(b) What could be the possible reasons for the ethical concern identified? [CBSE]
Ans. (a) Bias is illustrated in the given scenario.
(b) • The dataset used to train the AI might have contained historical biases, reflecting societal biases
where male chefs were more common or preferred. This bias in the data would have been learned
and replicated by the AI model.
• If the training data lacked sufficient representation of female chefs or women in similar roles, the AI
system would not have learned to value or recognize female candidates appropriately.
Lab Activities
1. Microsoft Copilot is a tool that uses generative AI to serve as a helpful assistant in the field of
education. Give examples of ways in which Microsoft Copilot can be used.
Sol. • Personalized learning: Copilot can support personalized learning by helping you create content,
tailored feedback, and guidance for students based on their individual needs and learning styles.
• Brainstorming: You can use Copilot to brainstorm new ideas for activities, lesson plans, supporting
materials, and assignments.
• Lesson planning: Copilot can help you plan lessons by suggesting or drafting activities, resources, and
assessments that align with learning objectives. You can also use Copilot to start a rubric for the lessons.
• Provide feedback: Copilot can help you draft initial feedback and ideas for students on their work,
which you can edit and personalize for your students.
• Get quick answers: Copilot can help you get quick answers to your questions without having to read
through multiple search results. Also, Copilot provides links to content sources so you can assess the
source or dive deeper into the original content.
4. Write down the steps to play the Computer Vision based application AutoDraw
(https://www.autodraw.com/).
Ans. • Navigate to experiments.withgoogle.com/autodraw using your preferred web browser.
• Select the "AutoDraw" tool and commence free hand drawing. Use either your mouse or touchpad as
your creative instrument, focusing on expressing your artistic vision without concern for technical
perfection.
• As your drawing progresses, AutoDraw actively analyzes the lines and shapes you create. It aims to
recognise what you are drawing and suggests professionally drawn alternatives that you can select to
replace your sketch.
6. Create a system map illustrating the relationship between automation, profits, job loss, stress, and
frustration.
Sol.
(vii) Download pictures (a sample of 5 each) of oranges and lemons in another tab.
(viii) Select the upload option, upload pictures of oranges in Class 1, and rename it "Oranges".
(ix) Next, select the upload option, upload pictures of lemons in Class 2, and rename it "Lemons".
(x) Click on Train Model.
(xi) Once the model is trained, change the Input from Webcam to File and upload a sample picture that is
distinct from what was uploaded under Oranges/Lemons earlier.
(xii) You will find that the model is predicting the given picture under one of the categories.
10. Use the Moral Machine platform to analyze the ethical dilemmas faced by autonomous
vehicles. What factors influence your decision-making when confronted with life-or-death
choices? How do your choices compare to societal norms and preferences as reflected in the
platform's data?
Sol. The Moral Machine presents users with hypothetical scenarios where autonomous vehicles must make split-
second decisions that could impact human lives. These scenarios often involve moral conflicts, such as
deciding whether to prioritize the safety of passengers or pedestrians, elderly individuals versus children, or
obeying traffic laws versus avoiding greater harm.