AIML II Test Scheme and Solution 2023
b) Explain Bayesian belief network and conditional independence with an example. 4 L3 CO4
7 a) Explain Radial Basis Function Networks with an example and mention the two-stage process used to train the RBF. 6 L2 CO5
b) Write short notes on CADET System. 4 L2 CO5
OR
8 a) Apply the K nearest neighbor classifier to predict whether the patient is diabetic or not, given the features BMI and Age. Assume K=3. Test example: BMI=43.6, Age=40, Diabetes=?. Training examples are as follows: 5 L3 CO5
BMI Age Diabetes
33.6 50 1
26.6 30 0
23.4 40 0
43.1 67 0
35.3 23 1
35.9 67 1
36.7 45 1
25.7 46 0
23.3 29 0
31 56 1
b) Derive the Locally Weighted Linear Regression local approximation for the target function. 5 L3 CO5
9 a) What is Reinforcement Learning? Explain the aspects which make Reinforcement Learning different from other function approximation tasks. 2+4 L1, L2 CO6
b) Explain Q learning algorithm assuming deterministic rewards and actions. 4 L3 CO6
OR
10 a) Explain the elements of reinforcement learning with a neat diagram. 6 L2 CO6
b) Explain Q Function w.r.t Reinforcement Learning 4 L2 CO6
Bloom’s Cognitive Levels (BCL): L1: Remember, L2: Understand, L3: Apply, L4: Analyze, L5: Evaluate, L6: Create
SCHEME AND SOLUTION
1. a) (1+4 marks) Bayesian methods provide a coherent and principled way to handle uncertainty and make predictions. Bayesian learning is a statistical framework for updating beliefs or probabilities based on new evidence or data. Practical difficulties include:
Computational Complexity: Bayesian inference often involves complex mathematical calculations,
especially when dealing with high-dimensional data or complex models. Computing the exact posterior
distribution can be computationally intensive and, in some cases, analytically intractable.
Data-Driven Priors: Bayesian learning heavily relies on the choice of priors, which represent beliefs
about the parameters before observing data.
Model Specification: The accuracy of Bayesian inference depends on the correctness of the chosen model.
If the model does not capture the underlying data-generating process accurately, the results may be
unreliable.
Interpretability and Communication: Bayesian results are often expressed in terms of probability
distributions, which may be more challenging for non-experts to interpret compared to point estimates
provided by frequentist methods.
b) (5 marks) Maximum Likelihood Estimation (MLE) is a statistical method used to estimate the parameters of a model. In some cases we assume that every hypothesis in H is equally probable a priori (P(hi) = P(hj) for all hi, hj in H). In this case we can simplify further and need only consider the term P(D|h) to find the most probable hypothesis. P(D|h) is often called the likelihood of the data D given h, and any hypothesis that maximizes P(D|h) is called a maximum likelihood (ML) hypothesis, hML.
Least Squares Error (LSE) is a method used for estimating the parameters of a model by minimizing the
sum of squared differences between the observed and predicted values.
Training examples are of the form <xi, di>, where di is a noisy training value, di = f(xi) + ei, and ei is a random variable (noise) drawn independently for each xi according to a Gaussian distribution with mean 0. Then the maximum likelihood hypothesis hML is the one that minimizes the sum of squared errors.
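A compact sketch of the standard argument (assuming the zero-mean Gaussian noise model above with a fixed variance σ²):
hML = argmax over h in H of Π_i p(di | h)
    = argmax over h in H of Π_i (1 / √(2πσ²)) exp(−(di − h(xi))² / (2σ²))
    = argmin over h in H of Σ_i (di − h(xi))²
i.e., under this noise model the maximum likelihood hypothesis is exactly the least-squared-error hypothesis.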
2. a) (6 marks)
b) (4 marks) A Bayesian Belief Network (BBN), also known as a Bayesian Network or Probabilistic Graphical Model, is a graphical representation of probabilistic relationships among a set of variables.
X is conditionally independent of Y given Z if the probability distribution governing X is independent of
the value of Y given the value of Z; that is, if
(∀ xi, yj, zk) P(X = xi | Y = yj, Z = zk) = P(X = xi | Z = zk), which is written more compactly as P(X | Y, Z) = P(X | Z).
Example: Thunder is conditionally independent of Rain, given Lightning
P(Thunder|Rain, Lightning) = P(Thunder|Lightning)
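A small numeric check of this example is sketched below; the network structure (Lightning is the parent of both Thunder and Rain) follows the example above, but the probability values are made-up illustrative assumptions.

# Minimal numeric check of conditional independence in a tiny belief network.
# Structure: Lightning -> Thunder, Lightning -> Rain; probability values are assumed for illustration.
from itertools import product

P_L = {True: 0.1, False: 0.9}            # P(Lightning)
P_T_given_L = {True: 0.8, False: 0.05}   # P(Thunder=true | Lightning)
P_R_given_L = {True: 0.7, False: 0.2}    # P(Rain=true | Lightning)

def joint(l, t, r):
    # P(L=l, T=t, R=r) under the network factorisation P(L) P(T|L) P(R|L).
    pt = P_T_given_L[l] if t else 1 - P_T_given_L[l]
    pr = P_R_given_L[l] if r else 1 - P_R_given_L[l]
    return P_L[l] * pt * pr

# P(Thunder | Rain, Lightning)
lhs = joint(True, True, True) / sum(joint(True, t, True) for t in (True, False))
# P(Thunder | Lightning)
rhs = (sum(joint(True, True, r) for r in (True, False))
       / sum(joint(True, t, r) for t, r in product((True, False), repeat=2)))
print(lhs, rhs)   # both 0.8: Thunder is conditionally independent of Rain given Lightning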
3. a) (2+4=6 marks) In Bayesian learning, the MDL principle is applied to model selection, where one aims to choose the most appropriate model from a set of candidate models. The MDL principle is closely related to Occam's razor, which suggests that among competing hypotheses that explain the observed data equally well, one should prefer the simpler one.
b) (4 marks) Brute-force MAP learning: for each hypothesis h in H, calculate the posterior probability P(h|D), then output the hypothesis hMAP with the highest posterior probability.
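The quantities involved can be written compactly (standard relations, shown here as a sketch of the reasoning):
P(h | D) = P(D | h) P(h) / P(D)
hMAP = argmax over h in H of P(D | h) P(h)
     = argmin over h in H of [ −log2 P(D | h) − log2 P(h) ]
The last form is a sum of two description lengths (for the hypothesis, and for the data given the hypothesis), which is the connection to the MDL principle in part (a).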
4. a) (5 marks)
b)
5. a) (5 marks)
1. The sample space is initially partitioned into K clusters and the observations are randomly assigned to the clusters.
2. For each sample:
Calculate the distance from the observation to the centroid of each cluster.
IF the sample is closest to its own cluster THEN leave it ELSE move it to the closest cluster.
3. Repeat steps 1 and 2 until no observations are moved from one cluster to another (a code sketch is given below).
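A minimal code sketch of this procedure (Euclidean distance and numeric feature vectors are assumed; all names are illustrative):

# Minimal K-means sketch: random initial assignment, then repeat
# "recompute centroids, reassign each sample to its closest centroid" until nothing moves.
import random

def kmeans(points, k, iters=100):
    assign = [random.randrange(k) for _ in points]          # step 1: random partition
    for _ in range(iters):
        centroids = []
        for c in range(k):                                   # centroid of each cluster
            members = [p for p, a in zip(points, assign) if a == c]
            centroids.append([sum(d) / len(members) for d in zip(*members)]
                             if members else random.choice(points))
        new_assign = [min(range(k),                          # step 2: closest centroid
                          key=lambda c: sum((pi - ci) ** 2
                                            for pi, ci in zip(p, centroids[c])))
                      for p in points]
        if new_assign == assign:                             # step 3: stop when stable
            break
        assign = new_assign
    return assign

print(kmeans([(1, 1), (1.2, 0.8), (8, 8), (8.5, 7.9)], k=2))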
b) (2+3=5 marks) EM Algorithm; essence of the EM approach.
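A hedged sketch of the classic illustration of the EM idea (estimating the means of a mixture of two Gaussians; the variances are assumed known and equal, and all names are illustrative):

# E-step: compute the expected cluster membership of each point given the current means.
# M-step: re-estimate the means as membership-weighted averages. Repeat until convergence.
import math

def em_two_means(x, iters=50):
    mu1, mu2 = min(x), max(x)                 # rough initial guesses
    for _ in range(iters):
        e = []
        for xi in x:                          # E-step (unit variance assumed)
            p1 = math.exp(-0.5 * (xi - mu1) ** 2)
            p2 = math.exp(-0.5 * (xi - mu2) ** 2)
            e.append(p1 / (p1 + p2))
        mu1 = sum(ei * xi for ei, xi in zip(e, x)) / sum(e)                       # M-step
        mu2 = sum((1 - ei) * xi for ei, xi in zip(e, x)) / sum(1 - ei for ei in e)
    return mu1, mu2

print(em_two_means([1.0, 1.2, 0.8, 5.0, 5.3, 4.9]))   # approaches the two cluster means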
6. a) (5 marks)
b) (5 marks)
7. a) (2+4 marks)
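As a hedged illustration of the two-stage process (a minimal sketch, not the full expected answer): stage 1 fixes the centres and widths of the Gaussian basis functions (here simply taken from the training inputs, although clustering the inputs is common), and stage 2 fits the output-layer weights by linear least squares.

# Two-stage RBF training sketch: Gaussian basis functions and 1-D inputs are assumed.
import numpy as np

def design_matrix(x, centres, width):
    Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))
    return np.hstack([np.ones((len(x), 1)), Phi])             # bias column + kernel activations

def train_rbf(x, y, centres, width=1.0):
    Phi = design_matrix(x, centres, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                # stage 2: linear output weights
    return w

x = np.linspace(0, 2 * np.pi, 40)
y = np.sin(x)
centres = x[::8]                                               # stage 1: choose the centres
w = train_rbf(x, y, centres)
print(design_matrix(np.array([np.pi / 2]), centres, 1.0) @ w)  # close to sin(pi/2) = 1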
b) (4 marks)
• The CADET system employs case-based reasoning to assist in the conceptual design of simple mechanical devices such as water faucets. It uses a library containing approximately 75 previous designs and design fragments to suggest conceptual designs to meet the specifications of new design problems. Each instance stored in memory (e.g., a water pipe) is represented by describing both its structure and its qualitative function. New design problems are then presented by specifying the desired function and requesting the corresponding structure.
8. a) (5 marks)
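A minimal sketch of the expected computation, using plain Euclidean distance on the raw BMI and Age values from the question: the three nearest neighbours of the test point (43.6, 40) carry the labels 1, 1 and 0, so the majority-vote prediction is Diabetes = 1 (diabetic).

# 3-NN prediction for the question's data; unscaled Euclidean distance is assumed.
import math

train = [  # (BMI, Age, Diabetes)
    (33.6, 50, 1), (26.6, 30, 0), (23.4, 40, 0), (43.1, 67, 0), (35.3, 23, 1),
    (35.9, 67, 1), (36.7, 45, 1), (25.7, 46, 0), (23.3, 29, 0), (31.0, 56, 1),
]
test = (43.6, 40)      # BMI, Age of the test example
k = 3

nearest = sorted(train, key=lambda row: math.dist(test, row[:2]))[:k]
labels = [row[2] for row in nearest]
prediction = max(set(labels), key=labels.count)    # majority vote among the k labels
print(nearest, prediction)                         # labels 1, 1, 0 -> predicted class 1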
b) (5 marks)
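A hedged code sketch of the idea behind the derivation: for each query point xq a separate linear model is fit by minimising a distance-weighted sum of squared errors, Σ_i K(d(xq, xi)) (yi − w·xi)², here with a Gaussian kernel and the closed-form weighted least-squares solution assumed.

# Locally weighted linear regression sketch; the kernel width tau and all names are illustrative.
import numpy as np

def lwlr_predict(xq, X, y, tau=1.0):
    Xb = np.hstack([np.ones((len(X), 1)), X])                # add a bias term
    xqb = np.hstack([[1.0], xq])
    d2 = np.sum((X - xq) ** 2, axis=1)                       # squared distances to the query
    K = np.diag(np.exp(-d2 / (2 * tau ** 2)))                # per-example kernel weights
    w = np.linalg.solve(Xb.T @ K @ Xb, Xb.T @ K @ y)         # solve (X'KX) w = X'Ky
    return xqb @ w

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.1, 0.9, 2.1, 2.9])
print(lwlr_predict(np.array([1.5]), X, y))                   # close to 1.5 on this near-linear data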
9. a) (5 marks)
Reinforcement learning addresses the question of how an autonomous agent that senses and acts in its
environment can learn to choose optimal actions to achieve its goals. The reinforcement learning problem
differs from other function approximation tasks in several important respects.
• Delayed reward: the agent faces a temporal credit-assignment problem, i.e., determining which of the actions in its sequence are to be credited with producing the eventual rewards.
• Exploration: the learner faces a tradeoff in choosing whether to favor exploration of unknown states and actions (to gather new information) or exploitation of states and actions that it has already learned will yield high reward (to maximize its cumulative reward).
• Partially observable states: in many practical situations the agent's sensors provide only partial information about the current state of the environment.
• Life-long learning: the agent often has to learn several related tasks within the same environment, using the same sensors and prior experience.
b) (5 marks)
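A hedged outline of the deterministic-case algorithm: initialise a table entry Q(s, a) to zero for every state-action pair; then repeatedly observe the current state s, choose and execute an action a, receive the immediate reward r, observe the new state s', and update
Q(s, a) <- r + γ max over a' of Q(s', a')
where γ (0 ≤ γ < 1) is the discount factor and the rewards and state transitions are assumed deterministic. Under these assumptions the table converges to the true Q function provided every state-action pair is visited infinitely often.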
10. a) (2+3=5 marks)
• An agent interacting with its environment. The agent exists in an environment described by some set of
possible states S.
• It can perform any of a set of possible actions A. Each time it performs an action a_t in some state s_t, the agent receives a real-valued reward r_t that indicates the immediate value of this state-action transition. This produces a sequence of states s_i, actions a_i, and immediate rewards r_i, as shown in the figure.
• The agent's task is to learn a control policy, π : S -> A, that maximizes the expected sum of these rewards, with future rewards discounted exponentially by their delay.
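Written out (a standard formulation consistent with the description above), the discounted cumulative reward achieved by following policy π from state s_t is
V^π(s_t) = r_t + γ r_{t+1} + γ² r_{t+2} + ... = Σ_{i=0..∞} γ^i r_{t+i},  with 0 ≤ γ < 1,
and the agent's goal is to learn a policy π* that maximizes this value for all states.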
b) (5 marks)
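As a sketch of the key definition (standard in this setting): the Q function gives the maximum discounted cumulative reward obtainable by taking action a in state s and behaving optimally thereafter,
Q(s, a) ≡ r(s, a) + γ V*(δ(s, a)),
where δ(s, a) is the state reached from s by action a and V* is the value of the optimal policy. The optimal policy can then be written π*(s) = argmax over a of Q(s, a), so an agent that learns Q can select optimal actions without knowing the reward function r or the transition function δ in advance.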
------------------------------------------------********************-----------------------------------------------------
AIML QUIZ-II