AI - Book 10 - Part B - Answer Key (New Version)
(Part B)
Class 10
Unit 1: Introduction to AI
A. Short answer type questions.
1. Intelligence can be defined as the ability to solve complex problems and make decisions
and it enables living beings to adapt to different environments for their survival. It gives
humans the abilities to learn from experience, adapt to new situations, understand and
handle abstract concepts and control their environment.
2. People who possess this intelligence are skilled with words. They enjoy reading, express
themselves well in writing and are able to easily recognise and understand the meaning
and sounds of words.
3. Data bias arises when the data used to train an AI system is faulty or contains in-built
biases. For example, if an AI system is trained to recognise faces but the training data
primarily consists of lighter-skinned individuals, it may find it difficult to identify or
categorise persons with darker skin tones.
4. Machine Learning is a field of AI that enables machines to learn on their own and
improve with time through experience. In ML, machines learn from data fed to them
during the training phase and use this knowledge to improve their performance in
making accurate predictions.
2. Every person has different ways of learning and everyone uses different intelligences in
their daily lives. People possess different amounts and types of intelligence. These
intelligences are located in different areas of the brain and can either work
independently or together. For example, some people are good at understanding
rhythms and sound, some are good at physical activity like sports, while others are good
at logical and mathematical thinking. These multiple intelligences include the use of
words, numbers, pictures, music, logical thinking, the importance of social interactions,
introspection, physical movement and being in tune with nature. This difference in
intelligences is reflected in the theory of multiple intelligences. The theory of multiple
intelligences describes the different ways in which people learn and acquire
information.
3. a. Streaming platforms like Netflix, Amazon and SonyLIV use AI-powered recommendation
systems to suggest content based on our viewing history.
b. Navigation apps like Google Maps use AI to provide voice-guided instructions on how
to arrive at a given destination, as well as suggest the best route to avoid traffic.
4. Bias is the tendency to be partial to one person or thing over another. AI bias occurs
when an algorithm produces results that are biased because it is trained on biased data.
AI cannot think on its own and, hence, cannot have biases of its own. Bias can transfer
from the developer to the machine while the algorithm is being developed. The data fed
into an AI algorithm could cause bias for the following three reasons:
• The data does not reflect the main population.
• The data has been unethically manipulated.
• It is based on historic data, which itself is biased.
5. a. AI for kids: Young children today are tech-savvy and well-versed with technology.
Consider the scenario of a young child given an assignment to write an essay. In this
scenario, the child uses the AI powered ChatGPT application to automatically generate
and write an essay. This definitely raises some concerns. Though the child may seem
smart and skilled at using technology, getting the essay written by AI will cause the child
to lose the opportunity to think and learn.
b. Data privacy: We avail of many free services on the internet, leaving behind a trail of
data, but we are often not made aware of it. Companies such as Amazon, Alphabet,
Microsoft, Apple, Meta and others use AI to collect this data to gain, maintain, and direct
our attention. We can even say that these AI algorithms may know us better than we
know ourselves. Our data can be used to manipulate our behaviour by using it for
marketing and earning profits.
6. a. Price comparison websites: Websites like Compare India use Big Data to provide a
comparison of the prices of products from multiple vendors in one place.
b. Search engines: Search engines like Google collect massive amounts of data from
various sources, including search queries, web pages, and user behaviour, and analyse
this data to provide better search results to users.
7. Artificial Intelligence (AI), Machine learning (ML), and Deep Learning (DL) are different
concepts:
AI refers to the field of computer science that aims to create machines that can mimic
human intelligence. AI machines are capable of learning on their own without human
intervention. AI is a broad term that includes both Machine Learning and Deep Learning.
Machine Learning enables machines to learn on their own and improve with time
through experience.
Deep Learning enables machines to learn and perform tasks on a large amount of data or
Big data. Due to the large amount of data, the system learns on its own by using multiple
machine learning algorithms working together to perform a specific task.
Machine Learning is a sub-category of AI, and Deep Learning is a sub-set of Machine
Learning, as it includes multiple machine learning modules. Deep Learning is the most
advanced form of Artificial Intelligence among these three. Next in line is Machine
Learning, which demonstrates intermediate intelligence. Artificial Intelligence includes all the
concepts and algorithms that mimic human intelligence.
8. A machine that is trained with data, can think and make predictions on its own is an AI
machine. Not all devices which are termed as "smart" are AI enabled. Some of these
machines, equipped with IoT technology can connect to the internet and be operated
from remote distances but are not trained to think and take decisions on their own.
For example, an automatic washing machine can run on its own but it requires a human
to do the relevant settings every time before washing. Hence, it cannot be termed as an AI
machine. IoT based machines like remotely operated A/Cs that can be switched on and
off via the internet need humans to operate them, so they cannot be considered as AI
machines.
3. Artificial Neural Networks (ANNs) are computational networks that are at the heart of
deep learning algorithms, a subfield of Artificial Intelligence. They are designed to mimic
the structure of the human brain and are inspired by how the human brain interprets and
processes information.
4. When it comes to large datasets, neural networks perform much better than traditional
machine learning algorithms. Unlike traditional machine-learning algorithms that reach a
saturation point and stop improving, large neural networks show better performance
with large amounts of data.
5. The output layer receives the data from the last hidden layer and gives it as the final
output to the user. Similar to the input layer, the output layer does not process the data.
It serves as a user-interface, presenting the final outcome of the network's computations
to the user.
6. This stage involves the exploration and analysis of the collected data to interpret
patterns, trends, and relationships. The data is in large quantities. In order to easily
understand the patterns, you can use different visual representations such as graphs,
databases, flowcharts, and maps.
8. Hidden layers are the layers where all the processing occurs. Each node in the hidden
layers has its own machine learning algorithm which processes the data received from
the input layer. The last hidden layer passes the final processed data to the output layer.
B. Long answer type questions.
1. Following are the differences between the rule-based approach and the learning-based
approach:
6. The Problem Statement Template aids in summarising all the key points in the 4Ws
problem canvas into a single template, which enables us to quickly revisit the ideas as
needed in the future. For the purpose of further analysis and decision-making, this
template makes it simple to understand and remember the important aspects of the
problem.
8. Neural Networks are computational networks that are at the heart of deep learning
algorithms, a subfield of Artificial Intelligence. Similar to how our brains learn from
experiences, neural networks learn from examples to understand new situations. A
neural network is initially trained on large amounts of input data. The network recognises
the patterns in this data, learns from it using machine learning techniques and can then
make predictions on a new dataset. It is a fast and efficient way to solve problems for
which the dataset is very large, such as in images and videos.
Features of neural networks:
Artificial neural networks are extremely powerful computational algorithms or models.
Neural Network systems are modeled on the structure and function of the human
brain and nervous system.
The most powerful feature of neural networks is that once trained, they can
independently process new data, take decisions and make predictions without human
intervention.
Unit 3: Advanced Python
A. Short answer type questions.
1. The Anaconda distribution is a powerful and widely used open-source distribution of the
Python language for scientific computing, machine learning and data science tasks. It is an
essential tool for data scientists, researchers and developers as it includes essential pre-
installed libraries. It simplifies the process of managing software packages and
dependencies.
2. Once you have launched Jupyter Notebook within your virtual environment, you can
execute commands by creating and running Python code cells within a notebook.
Create a New Notebook or Open an Existing One.
Once you have a notebook open, you'll see an empty code cell where you can
enter Python code. Click on the cell to select it, and then type or paste your
Python code into the cell.
After entering your Python code in a cell, you can execute it by either pressing
"Shift + Enter" or clicking the "Run" button in the toolbar. This will run the code in
the selected cell and display the output directly below the cell.
3. The venv module is a tool that allows users to create virtual environments. These virtual
environments contain their own Python interpreter and package installation directories.
Thus, each project can have its own set of libraries and Python versions to avoid conflicts
between different projects.
4. Membership operators 'in' and 'not in' are used to check if a value exists in a list or
sequence or not.
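A minimal sketch of these operators (the list and values are illustrative):

```python
# Membership operators check whether a value exists in a sequence.
fruits = ["apple", "banana", "mango"]

print("apple" in fruits)      # True, since "apple" is an element of the list
print("grape" not in fruits)  # True, since "grape" is absent
print("an" in "banana")       # True, membership also works on substrings
```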
5. In some cases, a condition in a 'for' or 'while' loop does not ever become false, hence
the statements within the loop keep repeating indefinitely. This is called an infinite
loop.
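A small sketch of how an infinite loop arises and how updating the loop variable prevents it (the variable names are illustrative):

```python
# The commented loop below would never end, because the condition
# count > 0 never becomes False (count is never changed):
#     count = 1
#     while count > 0:
#         print(count)
#
# Updating the loop variable inside the body makes the condition fail eventually:
count = 3
while count > 0:
    print(count)
    count = count - 1   # count reaches 0, so the loop terminates
```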
6. The else statement works with single conditions whereas the elif statement is used to
test multiple conditions.
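A short sketch contrasting the two (the marks and grade cut-offs are illustrative):

```python
marks = 72

# elif lets us test several conditions in order; the first True branch runs.
if marks >= 90:
    grade = "A"
elif marks >= 60:        # tested only if the first condition is False
    grade = "B"
else:                    # runs only when every condition above is False
    grade = "C"

print(grade)  # B
```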
4. The division operator (/) performs division and returns a floating-point result, even if
both operands are integers. The result includes the fractional part.
Example:
result = 7 / 2
print(result) # Output: 3.5
The floor division operator (//) performs division and returns the quotient of the division,
rounded down to the nearest integer. It returns only whole numbers.
Example:
result = 7 // 2
print(result) # Output: 3
5. The input( ) function is used to take input from the user. It accepts input from the console.
Example:
name = input("Enter your name")
age = int(input("Enter your age: "))
The print( ) function prints a message or value. It converts a value into string before
displaying it.
Example:
print("Hello, ", name) #name is a variable in which a string has been accepted
6. The 'for' loop is used when you are sure about the number of times a loop body will be
executed. It is also known as a definite loop. Whereas, the 'while' loop in Python
executes a set of statements based on a condition. If the test expression evaluates to
true, then the body of the loop gets executed. Otherwise, the loop stops iterating and the
control comes out of the body of the loop.
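A minimal sketch of both loop types (the values are illustrative):

```python
# for: the number of iterations is known in advance.
for i in range(3):           # runs exactly 3 times: i = 0, 1, 2
    print("for iteration", i)

# while: iterations depend on a condition checked before each pass.
n = 10
steps = 0
while n > 1:                 # stops as soon as n is no longer greater than 1
    n = n // 2
    steps = steps + 1
print("halved", steps, "times")  # halved 3 times
```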
7. Nested if statements in Python refer to if statements that are placed inside other if
statements. The inner if statement gets executed only if the outer condition is true. An
example to check whether a non-negative number is even or odd:
num = int(input("Enter a number: "))
# Check if the number is non-negative
if num >= 0:
    # Check if the number is even
    if num % 2 == 0:
        print("The number is even.")
    else:
        print("The number is odd.")
9. The program to display numbers divisible by 7 and multiples of 5 between 1200 and 2200
is:
start = 1200
end = 2200
# Iterate through the range and display numbers meeting the criteria
print("Numbers divisible by 7 and multiples of 5 between 1200 and 2200:")
for num in range(start, end + 1):
    if num % 7 == 0 and num % 5 == 0:
        print(num)
10. The program to enter the monthly income of an employee between 40 and 60 years and
calculate the annual income tax is:
1. 200
2. Numbers 0 to 99
3. Numbers 1 to 6
4. FALSE
TRUE
5. 11.0
6. 36
8. 2
9. [1, 2, 3, 5, 7]
10. 3
Unit 4: Data Science
A. Short answer type questions.
1. NumPy stands for Numerical Python and is the fundamental package for mathematical
and logical operations on arrays in Python. NumPy is a commonly used package that
offers a wide range of arithmetic operations that make it easy to work with numbers as
well as arrays.
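A small sketch of such element-wise operations (the arrays are illustrative):

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([10, 20, 30, 40])

# Arithmetic applies element-wise, with no explicit loop needed.
print(a + b)       # [11 22 33 44]
print(a * 2)       # [2 4 6 8]
print(np.mean(b))  # 25.0
```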
2. Data Science applications study the link between DNA and our health and find the
biological connection between genetics, diseases, and response to drugs or medicines.
This enables doctors to offer personalised treatment to people based on the research of
genetics and genomics.
5. Histograms are used to accurately represent continuous data. They are particularly suited
for plotting the variation in a value over a period of time.
6. There was a time when finance companies were facing large amounts of bad debts. Using
data science, the companies analysed the customer profile, past expenditures, and other
essential variables and then analysed the possibilities of risk and default to decide whom
to give loans and how much. Based on this, they were able to reduce losses.
2. Data science can help identify the areas of improvement in order to keep airline
companies profitable. Some of the insights provided by data science are:
Predict flight delays
Analyse which flight routes are in demand
Decide which class of airplanes to buy
Plan the route – decide if it will be more cost effective to directly land at the
destination or take a halt in between
Help design strategies to encourage and manage customer loyalty
3. a. CSV: CSV is a simple file format used to store tabular data. Each line of this file is a data
record and each record consists of one or more fields which are separated by commas.
Hence, the name is CSV, i.e., Comma Separated Values.
b. SQL: SQL or Structured Query Language is a specialised programming language used
for designing, programming and managing data within Database Management Systems
(DBMS). It is especially useful in handling structured data.
4. Using data science, finance companies analyse the customer profile, past expenditures,
and other essential variables and then analyse the possibilities of risk and default to
decide whom to give loans to and how much. Based on this, they are able to reduce
losses, and it also helps them promote their banking products based on customers'
purchasing power. Real-time data analysis also helps detect any fraudulent online
transactions or illegal activity and enables fraud detection and prevention.
4. The features like corners are easy to find as their exact location can be pinpointed in the
image. Thus, corners are always good features to extract from an image, followed by
edges.
5. The word "pixel" stands for "picture element". Every digital photograph is made up of
tiny elements called pixels. A pixel is the smallest unit of information that makes up a
text, image or video on a computer. Even a small image can contain millions of pixels of
different colours. Pixels are usually arranged in a 2-dimensional grid and are often
round or square in shape.
6. The objective of computer vision is to replicate both the way humans see and the way
humans make sense of what they see.
7. Computer vision models are trained on massive amounts of visual data. Once a large
amount of data is fed through the model, the computer will "look" at the data and teach
itself to differentiate one image from another using deep learning algorithms.
8. In image processing, the image can have features like a blob, an edge or a corner. These
features help us to perform certain tasks and analysis. Feature extraction refers to the
process of automatically extracting relevant and meaningful features from raw input
images. The features like corners are easy to find as their exact location can be
pinpointed in the image, whereas the patches that are spread over a line or an edge look
the same all along.
2. The Computer Vision domain of artificial intelligence enables machines to interpret visual
data, process it and analyse it using algorithms and methods to interpret real-world
phenomena. It helps machines derive meaningful information from digital images, videos
and other visual inputs and take actions based on that information.
Applications of Computer Vision are:
Face filters: This is one of the popular applications used in apps like Instagram and
Snapchat. A face filter is a filter applied to photographs, or videos in real time, to make
the face look more attractive. You can also use it to combine a face with animal features
to give it a funny appearance.
Facial recognition: With smart homes becoming more popular, computer vision is being
used for making homes more secure. Computer Vision facial recognition is used to verify
the identity of the visitors and guests and to maintain a log of the visitors. This
technology is also used in social networking applications for detecting faces and tagging
friends.
3. Each pixel in a digital image on a computer has a pixel value which determines its
brightness or colour. The most common pixel format is the byte image, where this value
is stored as an 8-bit integer having a range of possible values from 0 to 255. Typically,
zero is considered as no colour or black and 255 is considered to be full colour or white.
4. The CV tasks for a single object in an image are:
Image Classification: This task involves assigning a label to the entire image based on its
content.
Image Classification plus Localisation: This task involves both identifying what object is
present in the image and determining where in the image that object is located.
5. Humans see an image with the help of their eyes, and then the brain processes and
identifies the image through learning and experience. In computer vision, AI first
perceives the image with a sensing device, and then computer vision and other AI
algorithms identify and classify the elements in the image to recognise it.
6. The face-lock feature on smartphones uses computer vision to analyse and identify facial
features. When a user activates the face-lock feature, the smartphone's CV system
compares the facial features with pre-registered photographs stored on the device. If the
facial characteristics match, the device grants access to the user. This authentication
method offers convenience and security for the user.
7. Image classification involves identifying the main object category in a photo, while image
classification with localisation determines both the object's category and its precise
location within the image, often by drawing a bounding box around it.
For example, in an image showing a cat, the image classification algorithm will identify
and label the image as a cat. Whereas, the image classification with localisation algorithm
will not only identify the cat, but will also draw a box to indicate the location of the cat in
the image.
9. There are two types of pooling which can be performed on an image. They are:
a. Max Pooling: This returns the maximum value from the portion of the image covered
by the Kernel.
b. Average Pooling: This returns the average value from the portion of the image covered
by the Kernel.
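A minimal NumPy sketch of both pooling types on a 4x4 "image" with a 2x2 window (the pixel values and the helper name `pool` are illustrative):

```python
import numpy as np

image = np.array([[1, 3, 2, 4],
                  [5, 6, 1, 2],
                  [7, 2, 9, 1],
                  [3, 4, 6, 8]])

def pool(img, fn, size=2):
    # Slide a size x size window over the image and apply fn to each patch.
    h, w = img.shape
    out = np.zeros((h // size, w // size))
    for i in range(0, h, size):
        for j in range(0, w, size):
            out[i // size, j // size] = fn(img[i:i + size, j:j + size])
    return out

print(pool(image, np.max))   # max pooling:     [[6. 4.] [7. 9.]]
print(pool(image, np.mean))  # average pooling: [[3.75 2.25] [4.   6.  ]]
```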
10. Visual search algorithms in search engines use computer vision technology to help you
search for different objects using real world images. CV compares different features of
the input image to its database of images, analyses the image features and gives us the
search result. Computer vision, combined with machine learning allows the device not
only to see the image, but also to interpret what is in the picture, helping make decisions
based on it.
12. Autonomous driving involves identifying objects, getting navigational routes and
monitoring the surroundings. Automated cars from companies like Tesla can detect the
360-degree movements of pedestrians, vehicles, road signs and traffic lights and create
3D maps. CV helps them detect and analyse objects in real-time and take decisions like
braking, stopping or continuing to drive.
4. Script bots are used for simple functions like answering frequently asked questions,
setting appointments, and giving predefined responses on messaging apps.
5. Example:
"The bat is hanging upside down on the tree."
"Anju bought a new bat for the cricket match finale."
In the first sentence, "bat" refers to a mammal hanging upside down. In the second, it is
cricket equipment used for hitting balls.
6. Stem: studi
Lemma: study
7. The name "bag" symbolises that the algorithm is not concerned with where the words
occur in the corpus, i.e., the sequence of tokens, but aims at getting unique words from
the corpus and the frequency of their occurrence.
2. Sometimes, a sentence can have a correct syntax but it does not mean anything.
For example, "Purple elephants dance gracefully on my ceiling."
This statement is correct grammatically but does not make any sense.
3. Text normalisation is a process that reduces the randomness and complexity of text by
converting the text data into a standard form. The text is normalised to a lower or
simplified level hence improving the efficiency of the model.
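A small sketch of simple normalisation steps (the sample sentence is illustrative):

```python
import string

text = "Hema is learning about AI!  She asked the smart robot."

lowered = text.lower()                          # convert to lower case
no_punct = lowered.translate(
    str.maketrans("", "", string.punctuation))  # remove punctuation
normalised = " ".join(no_punct.split())         # collapse extra whitespace

print(normalised)  # hema is learning about ai she asked the smart robot
```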
hema  is  learning  about  ai  asked  the  smart  robot  kibo  explained  basic  concepts
  1    1      1       1     1    0     0     0      0     0       0        0       0
  1    0      0       1     1    1     1     1      1     1       0        0       0
  0    0      0       0     0    0     1     0      0     1       1        1       1
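The binary rows above can be reproduced with a short sketch; the three sentences are reconstructed from the vocabulary and are assumptions:

```python
docs = [
    "hema is learning about ai",
    "hema asked the smart robot kibo about ai",
    "kibo explained the basic concepts",
]
vocab = ["hema", "is", "learning", "about", "ai", "asked", "the",
         "smart", "robot", "kibo", "explained", "basic", "concepts"]

for doc in docs:
    tokens = doc.split()
    # 1 if the vocabulary word occurs in the document, else 0
    row = [1 if word in tokens else 0 for word in vocab]
    print(row)
```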
7. In text processing we pay special attention to the frequency of words occurring in the
text, since it gives us valuable insights into the content of the document. Based on the
frequency of words that occur in the graph, we can see three categories of words. The
words that have the highest occurrence across all the documents of the corpus are
considered to have negligible value. These words, termed as stop words, do not add
much meaning to the text and are usually removed at the pre-processing stage. The
words that have moderate occurrence in the corpus are called frequent words. These
words are valuable since they relate to the subject or topic of the documents and occur in
sufficient number throughout the documents. The less common words are termed as
rare words. These words appear the least frequently but contribute greatly to the
corpus’ meaning. When processing text, we only take frequent and rare words into
consideration.
Unit 7: Evaluation
A. Short answer type questions.
1. Recall considers True Positive and False Negative cases.
2. Precision calculates the percentage of true positive cases versus all the cases where the
prediction is true.
4. F1 score can be defined as the measure of balance between Precision and Recall. F1 score
combines both Precision and Recall into a single number to give a better overall picture
of how well the model is performing.
5. True Positive in model evaluation is a case where the prediction matches reality and the
prediction is Positive. True Negative in model evaluation is a case where the prediction
matches reality and the prediction is Negative.
6. Recall is defined as the fraction of positive cases that are correctly identified.
2. A confusion matrix is a summarised table used to analyse and assess the performance of
an AI model. The matrix compares the actual target values with those predicted by the
model. This allows us to visualise how well our classification model is performing and
what kinds of errors it is making.
3. Evaluating model behaviour means checking how well the model "fits" the data. A good
fit means the model has identified the patterns and relationships in the training data
correctly and can make accurate predictions when it is tested with new, unseen data,
while a poor fit means it cannot make reliable predictions.
When a model’s output does not match the true function at all, the model is said to be
underfitting and its accuracy is lower.
When a model’s performance matches well with the true function, i.e., the model has
optimum accuracy, the model is called a perfect fit.
When a model’s performance tries to cover all the data samples even if they are
out of alignment to the true function, the model is said to be overfitting and has a lower
accuracy.
4. Automated trade industry has developed an AI model which predicts the selling and
purchasing of automobiles. During testing, the AI model came up with the following
predictions:
The Confusion Matrix    Reality: 1    Reality: 0
Prediction: 1               55            12
Prediction: 0               10            20
a. How many total tests have been performed in the above scenario?
Ans: Total tests performed: (55+10+12+20) = 97
b. Accuracy, Precision, Recall and F1 Score for the above predictions are:
Accuracy: [(TP + TN) / Total tests] * 100 = [75 / 97] * 100 = 77.32%
Precision: [TP / (TP + FP)] * 100 = [55 / 67] * 100 = 0.8209 * 100 OR 82.09%
Recall: [TP / (TP + FN)] * 100 = [55 / 65] * 100 = 0.8462 * 100 OR 84.62%
F1 Score: [2 * (Precision * Recall) / (Precision + Recall)] * 100 = [2 * (0.8209 *
0.8462) / (0.8209 + 0.8462)] * 100 = 83.33%
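The calculations above can be verified with a short sketch using the counts TP = 55, FP = 12, FN = 10 and TN = 20:

```python
TP, FP, FN, TN = 55, 12, 10, 20
total = TP + FP + FN + TN          # 97 tests in all

accuracy = (TP + TN) / total       # 75 / 97
precision = TP / (TP + FP)         # 55 / 67
recall = TP / (TP + FN)            # 55 / 65
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy * 100, 2))    # 77.32
print(round(precision * 100, 2))   # 82.09
print(round(recall * 100, 2))      # 84.62
print(round(f1 * 100, 2))          # 83.33
```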
5. In order to assess if the performance of a model is good, we need two measures: Recall
and Precision. In some cases, you may have a high Precision but low Recall and in others,
low Precision but high Recall. But since both the measures are important, there is a need
of a metric which takes both Precision and Recall into account. The metric that takes into
account both these parameters is F1 Score. F1 score can be defined as the measure of
balance between Precision and Recall. F1 score combines both Precision and Recall into a
single number to give a better overall picture of how well the model is performing.
6. Recently, the country was shaken by a series of earthquakes which caused huge
damage to the people as well as the infrastructure. To address this issue, an AI model has
been created which can predict if there is a chance of earthquake or not. The confusion
matrix for the same is:
a. How many total cases are True Negative in the above scenario?
Ans: 20
b. Precision, recall and F1 score of the above predictions are: