Uncertainty

ECS 302 | ARTIFICIAL INTELLIGENCE

Dr. Srinivasa L.Chakravarthy


ASSISTANT PROFESSOR
Dept. of CSE, GIT
GITAM (Deemed to be University)
Uncertain Knowledge and Uncertainty:
⮚ Acting under uncertainty
⮚ Basic probability notation
⮚ The axioms of probability
⮚ Inference using full joint distributions
⮚ Independence
⮚ Bayes' rule and its use
⮚ The wumpus world revisited



13.1 Acting under Uncertainty

Basic Probability Notation

1. Propositions:
▪ Degrees of belief are always applied to propositions.
▪ Probability theory typically uses a language that is slightly more expressive than propositional logic.
(This section describes that language.)

Continued...
2. Atomic events

• An atomic event is a complete specification of the state of the world about which the agent is uncertain.
• It can be thought of as an assignment of particular values to all the variables of which the world is composed.
Ex: If my world consists of only the Boolean variables Cavity and Toothache,
then there are just four distinct atomic events;
the proposition Cavity = false Ʌ Toothache = true is one such event.

Properties of Atomic Events:

(1) They are mutually exclusive: at most one can actually be the case.
Ex: cavity Ʌ toothache and cavity Ʌ ¬toothache cannot both be the case.

(2) The set of all possible atomic events is exhaustive: at least one must be the case.
-- That is, the disjunction of all atomic events is logically equivalent to true (see the sketch below).
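
As a quick illustration (a minimal Python sketch, not part of the original slides; the variable names simply mirror the example above), the atomic events of this two-variable world can be enumerated directly. Exactly one of the printed assignments holds in any complete state, which is the mutual-exclusivity and exhaustiveness property just stated.

# Enumerate the atomic events of a world with Boolean variables Cavity and Toothache.
from itertools import product

variables = ["Cavity", "Toothache"]

# Each atomic event assigns a truth value to every variable of the world.
atomic_events = [dict(zip(variables, values))
                 for values in product([True, False], repeat=len(variables))]

for event in atomic_events:
    print(event)
# Four events are printed; they are mutually exclusive and exhaustive:
# any complete state of this small world matches exactly one of them.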

Continued...
Uncertainty and Rational Decisions:

❑ Utility theory is used to represent and reason with preferences.


❑ The term utility is used here in the sense of "the quality of being useful", not in the sense of the electric company
or water works.
❑ Utility theory says that every state has a degree of usefulness, or utility, to an agent and that the agent will prefer
states with higher utility.

❑ Preferences, as expressed by utilities, are combined with probabilities in the general theory of rational decisions
called decision theory:
Decision theory = Probability theory + Utility theory

The fundamental idea of decision theory is that an agent is rational if and only if it chooses the action that yields the
highest expected utility, averaged over all the possible outcomes of the action.

This is called the principle of Maximum Expected Utility (MEU).



Continued...
Design for a decision-theoretic agent:

Figure 13.1 sketches the structure of an agent that uses decision theory to select actions.
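
Since Figure 13.1 itself is not reproduced here, the following is a minimal Python sketch of the MEU idea behind such an agent. The actions, outcome probabilities, and utilities are made-up placeholders for illustration, not the textbook's model.

# Choose the action with the maximum expected utility (MEU).
def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome | action) * U(outcome) over all outcomes of the action."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs[action].items())

# Hypothetical model: P(outcome | action) for each available action.
outcome_probs = {
    "go_to_dentist": {"tooth_fixed": 0.9, "toothache_persists": 0.1},
    "wait":          {"tooth_fixed": 0.2, "toothache_persists": 0.8},
}
utility = {"tooth_fixed": 10, "toothache_persists": -5}

best_action = max(outcome_probs, key=lambda a: expected_utility(a, outcome_probs, utility))
print(best_action)  # the rational (MEU) choice under this made-up model

The agent of Figure 13.1 repeats this kind of choice at every step, after updating its beliefs about the world from its percepts.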



Continued...

Joint Probability Distribution:


⮚ An expression such as P(Weather, Cavity) denotes the probabilities of all combinations of the values of a set
of random variables.
So, P(Weather, Cavity) can be represented by a 4 x 2 table of probabilities.
(Weather has four values; Cavity is true or false.)
This is called the joint probability distribution of Weather and Cavity.

Full Joint Probability Distribution:

• A joint probability distribution that covers the complete set of random variables is called the full joint probability
distribution.
Ex: if the world consists of just the variables Cavity, Toothache, and Weather,
then the full joint distribution is given by P (Cavity, Toothache, Weather).
⮚ This joint distribution can be represented as a 2 x 2 x 4 table with 16 entries.

⮚ So, any probabilistic query can be answered from the full joint distribution.



Continued...

Probability Density Functions:

• For continuous variables, it is not possible to write out the entire distribution as a table, because there are
infinitely many values.

• Instead, one usually defines the probability that a random variable takes on some value x as a parameterized
function of x.
Ex: Let the random variable X denote tomorrow's maximum temperature in Berkeley.

Then the sentence P(X = x) = U [18, 26] (x)

expresses the belief that X is distributed uniformly between 18 and 26 degrees Celsius.

• Probability distributions for continuous variables are called probability density functions.



Continued...

❑ We can also use the P notation for conditional distributions.


❑ P(X | Y) gives the values of P(X = xi | Y = yj) for each possible pair i, j.

❑ As an example, consider applying the product rule to each case where the propositions a and b assert particular
values of X and Y respectively.
❑ We obtain the following equations:

P(X = x1 Ʌ Y = y1) = P(X = x1 | Y = y1) P(Y = y1)
P(X = x1 Ʌ Y = y2) = P(X = x1 | Y = y2) P(Y = y2)
. . . (and so on, for every pair of values)
❑ We can combine all these into the single equation:


P(X, Y) = P(X | Y) P(Y)



Continued...

Using the axioms of Probability:


❖ We can derive a variety of useful facts from the basic axioms.
❖ For example, the familiar rule for negation follows by substituting ¬a for b in axiom 3, giving us:
P(a V ¬a) = P(a) + P(¬a) – P(a Ʌ ¬a) by axiom 3 with b = ¬a
P(true) = P(a) + P(¬a) – P(false) by logical equivalence
1 = P(a) + P(¬a) by axiom 2
P(¬a) = 1 – P(a) by algebra

❖ The third line of this derivation is itself a useful fact and can be extended from the Boolean case to the general
discrete case.
Let the discrete variable D have the domain <d1, …, dn>.

• Then it is easy to show that

P(D = d1) + P(D = d2) + … + P(D = dn) = 1
• That is, any probability distribution on a single variable must sum to 1.



• It is also true that any joint probability distribution on any set of variables must sum to 1.
• The probability of a proposition is equal to the sum of the probabilities of the atomic events in which it holds.

13.4 Inference using Full Joint Distributions

A simple method for probabilistic inference:

-- that is, the computation of posterior probabilities for query propositions, given observed evidence.

We will use the full joint distribution as the "knowledge base" from which answers to all questions may be derived.



Continued...

Ex: A domain consisting of just the three Boolean variables


Toothache, Cavity, and Catch (the dentist's nasty steel probe catches in my tooth).
The full joint distribution is a 2 x 2 x 2 table as shown in Figure 13.3.

To compute the probability of any proposition, identify the atomic events in which it is true and add up their probabilities.
Ex: There are six atomic events in which cavity V toothache holds: (i.e., cavity V toothache is True)
P(cavity V toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28
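
A small Python sketch of this calculation (the eight table entries are the Figure 13.3 values used throughout these slides; the dict layout and variable names are my own):

# Full joint distribution P(Cavity, Toothache, Catch) as a lookup table.
# Keys are (cavity, toothache, catch); values are the Figure 13.3 probabilities.
full_joint = {
    (True, True, True): 0.108,   (True, True, False): 0.012,
    (True, False, True): 0.072,  (True, False, False): 0.008,
    (False, True, True): 0.016,  (False, True, False): 0.064,
    (False, False, True): 0.144, (False, False, False): 0.576,
}

# P(cavity V toothache): add the probabilities of every atomic event in which it holds.
p = sum(prob for (cavity, toothache, catch), prob in full_joint.items()
        if cavity or toothache)
print(round(p, 3))  # 0.28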



Continued...

One common task is to extract the distribution over some subset of variables or a single variable.

Ex: Adding the entries in the first row gives the unconditional or marginal probability of cavity:

P(cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2

This process is called marginalization, or summing out, because the variables other than Cavity are summed out.

The general marginalization rule for any sets of variables Y and Z:

P(Y) = Σz P(Y, z)
• That is, a distribution over Y can be obtained by summing out all the other variables
from any joint distribution containing Y.
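
A hedged Python sketch of marginalization over the same Figure 13.3 table (the helper name marginal and the tuple layout are my own, not textbook code):

# Marginalization: P(Cavity) is obtained by summing out Toothache and Catch.
full_joint = {
    (True, True, True): 0.108,   (True, True, False): 0.012,
    (True, False, True): 0.072,  (True, False, False): 0.008,
    (False, True, True): 0.016,  (False, True, False): 0.064,
    (False, False, True): 0.144, (False, False, False): 0.576,
}

def marginal(joint, index):
    """Sum out every variable except the one at the given tuple position."""
    dist = {}
    for assignment, prob in joint.items():
        value = assignment[index]
        dist[value] = dist.get(value, 0.0) + prob
    return dist

print({value: round(p, 3) for value, p in marginal(full_joint, 0).items()})
# {True: 0.2, False: 0.8}  ->  P(cavity) = 0.2, as computed above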



Continued...

A variant of this rule involves conditional probabilities instead of joint probabilities, using the product rule:

P(Y) = Σz P(Y | z) P(z)

This rule is called conditioning.


Conditional Probabilities of variables:
Ex: Computing the probability of a cavity, given evidence of a toothache:

P(cavity | toothache) = P(cavity Ʌ toothache) / P(toothache)
                      = (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064) = 0.12 / 0.2 = 0.6


Continued...

Ex: Computing the probability that there is no cavity, given a toothache:

P(¬cavity | toothache) = P(¬cavity Ʌ toothache) / P(toothache) = (0.016 + 0.064) / 0.2 = 0.4

In these two calculations, the term 1/P(toothache) remains constant,


no matter which value of Cavity we calculate.

In fact, it can be viewed as a normalization constant for the distribution P( Cavity | toothache),
ensuring that it adds up to 1.

We will use α to denote such constants.



Continued...

We can write the two preceding equations in one:

P(Cavity | toothache) = α P(Cavity, toothache)
                      = α [ P(Cavity, toothache, catch) + P(Cavity, toothache, ¬catch) ]
                      = α [ <0.108, 0.016> + <0.012, 0.064> ] = α <0.12, 0.08> = <0.6, 0.4>

A general inference procedure:


Let X be the query variable (Cavity in the example).
E be the set of evidence variables (just Toothache in the example).
e be the observed values for them, and let
Y be the remaining unobserved variables (just Catch in the example).
The query is P(X | e) and can be evaluated as

P(X | e) = α P(X, e) = α Σy P(X, e, y)

where the summation is over all possible y's


(i.e., all possible combinations of values of the unobserved variables Y)
Continued...

An algorithm for Probabilistic Inference:

It loops over the values of X and the values of Y to enumerate all possible atomic events with e fixed,
adds up their probabilities from the joint table, and normalizes the results.
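
A minimal Python sketch of this enumerate-and-normalize loop over the Figure 13.3 table (the function and variable names are my own, not the textbook's ENUMERATE-JOINT-ASK code):

# Compute P(Cavity | toothache) by enumerating atomic events consistent with the
# evidence, summing their probabilities, and normalizing.
full_joint = {
    (True, True, True): 0.108,   (True, True, False): 0.012,
    (True, False, True): 0.072,  (True, False, False): 0.008,
    (False, True, True): 0.016,  (False, True, False): 0.064,
    (False, False, True): 0.144, (False, False, False): 0.576,
}

def ask_cavity_given(toothache):
    # Unnormalized P(Cavity = x, toothache): the hidden variable Catch is summed out.
    unnormalized = {}
    for x in (True, False):
        unnormalized[x] = sum(prob for (cavity, tooth, catch), prob in full_joint.items()
                              if cavity == x and tooth == toothache)
    alpha = 1.0 / sum(unnormalized.values())   # normalization constant
    return {x: alpha * p for x, p in unnormalized.items()}

print(ask_cavity_given(True))  # approximately {True: 0.6, False: 0.4}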



13.5 Independence

❑ Let us expand the full joint distribution in Figure 13.3 by adding a fourth variable, Weather.

❑ The full joint distribution then becomes P(Toothache, Catch, Cavity, Weather),
which has 32 entries (because Weather has four values).

❑ It contains four "editions" of the table shown in Figure 13.3, one for each kind of weather.

❑ It seems natural to ask what relationship these editions have to each other and to the original three-variable table.
For example, how are
P(toothache, catch, cavity, Weather = cloudy) and P(toothache, catch, cavity) related?

One way to answer this question is to use the product rule: P (a Ʌ b) = P (a | b) P (b)



Continued...

❖ One should not imagine that one's dental problems influence the weather.

⮚ Therefore, the following assertion seems reasonable:


P( Weather = cloudy | toothache, catch, cavity) = P ( Weather = cloudy) --- (1)

❖ From this, we can deduce


P(toothache, catch, cavity, Weather = cloudy) = P( Weather = cloudy) P(toothache, catch, cavity).

⮚ A similar equation exists for every entry in P(Toothache, Catch, Cavity, Weather).
❖ In fact, we can write the general equation
P(Toothache, Catch, Cavity, Weather) = P(Toothache, Catch, Cavity) P( Weather) .

⮚ Thus, the 32-element table for four variables can be constructed from one 8-element table and
one four-element table.
❖ This decomposition is illustrated schematically in Figure 13.5(a).



Continued...

The property we used in writing Equation (1) is called independence (also marginal independence and absolute
independence).



Continued...

⮚ In particular, the weather is independent of one's dental problems.

⮚ Independence between propositions a and b can be written as:


P(a | b) = P(a) or P(b | a) = P(b) or P(a Ʌ b) = P(a) P(b).
All these forms are equivalent.

⮚ Independence between variables X and Y can be written as follows (again, these are all equivalent):
P(X | Y) = P(X) or P(Y | X) = P(Y) or P(X, Y) = P(X) P(Y).
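
These equivalences can be checked numerically on any joint table. The sketch below is my own illustration (the two-coin table is a made-up example of two independent variables, not data from the slides):

# Check whether two Boolean variables X and Y are independent:
# P(x, y) == P(x) * P(y) must hold for every pair of values.
def independent(joint_xy, tol=1e-9):
    px = {x: sum(p for (xx, _), p in joint_xy.items() if xx == x) for x in (True, False)}
    py = {y: sum(p for (_, yy), p in joint_xy.items() if yy == y) for y in (True, False)}
    return all(abs(joint_xy[(x, y)] - px[x] * py[y]) <= tol
               for x in (True, False) for y in (True, False))

# Two fair coin flips: independent by construction.
two_coins = {(x, y): 0.25 for x in (True, False) for y in (True, False)}
print(independent(two_coins))  # True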



13.6 Bayes’ Rule and its Use

We defined the product rule and pointed out that it can be written in two forms because of the commutativity of
conjunction:

P(a Ʌ b) = P(a | b) P(b)   and   P(a Ʌ b) = P(b | a) P(a)

Equating the two right-hand sides and dividing by P(a), we get

P(b | a) = P(a | b) P(b) / P(a)
This equation is known as Bayes' rule (also Bayes' law or Bayes' theorem).



Continued...

⮚ The more general case of multivalued variables can be written in the P notation as

P(Y | X) = P(X | Y) P(Y) / P(X)

where again this is to be taken as representing a set of equations, each dealing with specific values of the
variables.
⮚ A more general version, conditionalized on some background evidence e:

P(Y | X, e) = P(X | Y, e) P(Y | e) / P(X | e)
Applying Bayes’ rule: The simple case


⮚ On the surface, Bayes' rule does not seem very useful.
⮚ It requires three terms:
a conditional probability and
two unconditional probabilities -- just to compute one conditional probability.



Continued...

▪ Bayes' rule is useful in practice because there are many cases where we do have
good probability estimates for these three numbers and need to compute the fourth.
▪ In a task such as medical diagnosis,
we often have conditional probabilities on causal relationships and want to derive a diagnosis.
▪ A doctor knows that the disease meningitis causes the patient to have a stiff neck, say, 50% of the time.
▪ The doctor also knows some unconditional facts:
the prior probability that a patient has meningitis is 1/50,000, and
the prior probability that any patient has a stiff neck is 1/20.
▪ Letting s be the proposition that the patient has a stiff neck and
m be the proposition that the patient has meningitis, we have:

P(m | s) = P(s | m) P(m) / P(s) = (0.5 x 1/50000) / (1/20) = 0.0002


Continued...

⮚ That is, we expect only 1 in 5000 patients with a stiff neck to have meningitis.
⮚ Notice that, even though a stiff neck is quite strongly indicated by meningitis (with probability 0.5),
the probability of meningitis in the patient remains small.
⮚ This is because the prior probability on stiff necks is much higher than that on meningitis.
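
The arithmetic behind this conclusion, as a small Python sketch (the numbers are exactly the ones quoted on the previous slide; the variable names are my own):

# Bayes' rule for the meningitis example: P(m | s) = P(s | m) P(m) / P(s).
p_s_given_m = 0.5          # probability of a stiff neck given meningitis
p_m = 1 / 50000            # prior probability of meningitis
p_s = 1 / 20               # prior probability of a stiff neck

p_m_given_s = p_s_given_m * p_m / p_s
print(round(p_m_given_s, 6))  # 0.0002, i.e. 1 in 5000 patients with a stiff neck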

▪ The same process can be applied when using Bayes' rule.


We have:

P(M | s) = α < P(s | m) P(m), P(s | ¬m) P(¬m) >
▪ Thus, in order to use this approach we need to estimate P(s | ¬m) instead of P(s).

❖ The general form of Bayes' rule with normalization is

P(Y | X) = α P(X | Y) P(Y)

where α is the normalization constant needed to make the entries in P(Y | X) sum to 1.



Continued...

Using Bayes’ rule : Combining evidence


⮚ What happens when we have two or more pieces of evidence?
For example, what can a dentist conclude if her nasty steel probe catches in the aching tooth of a patient?

If we know the full joint distribution (Figure 13.3), we can read off the answer:

P(Cavity | toothache Ʌ catch) = α <0.108, 0.016> ≈ <0.871, 0.129>
⮚ We know, however, that such an approach will not scale up to larger numbers of variables.
⮚ We can try using Bayes' rule to reformulate the problem:

P(Cavity | toothache Ʌ catch) = α P(toothache Ʌ catch | Cavity) P(Cavity)      ------ (2)

(This follows from the general form of Bayes' rule with normalization: P(Y | X) = α P(X | Y) P(Y).)



Continued...

▪ For this reformulation to work, we need to know


the conditional probabilities of the conjunction toothache Ʌ catch for each value of Cavity.
(from RHS of equation-(2))
▪ The notion of independence provides a clue, but needs refining.

⮚ It would be nice if Toothache and Catch were independent,


but they are not: if the probe catches in the tooth, it probably has a cavity and that probably causes a toothache.

⮚ These variables are independent, however, given the presence or the absence of a cavity.

⮚ Each is directly caused by the cavity, but neither has a direct effect on the other:
toothache depends on the state of the nerves in the tooth, whereas
the probe's accuracy depends on the dentist's skill, to which the toothache is irrelevant.



Continued...

Mathematically, this property is written as:

P(toothache Ʌ catch | Cavity) = P(toothache | Cavity) P(catch | Cavity)
This equation expresses the conditional independence of toothache and catch given Cavity.

Substituting this into equation (2), the probability of a cavity is:

P(Cavity | toothache Ʌ catch) = α P(toothache | Cavity) P(catch | Cavity) P(Cavity)
The general definition of conditional independence of two variables X and Y , given a third variable Z is
P(X, Y|Z) = P(X|Z) P(Y|Z)

In the dentist domain,


P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity) --- (3)
Notice that equation (3) is somewhat stronger than the single-value statement above, which asserts independence only
for the specific values toothache and catch.



Continued...

⮚ Conditional independence can also be written in the equivalent forms:

P(X | Y, Z) = P(X | Z) and P(Y | X, Z) = P(Y | Z)

⮚ Just as absolute independence assertions allow the full joint distribution to be decomposed,
the same is true for conditional independence assertions.

For example, given the assertion in Equation—(3), we can derive decomposition as follows:

P(Toothache, Catch, Cavity)


= P(Toothache, Catch | Cavity) P(Cavity) -- product rule
= P(Toothache | Cavity) P(Catch | Cavity) P(Cavity) -- using equation (3)

⮚ In this way, the original large table is decomposed into three smaller tables.
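
A hedged Python sketch of this decomposition on the Figure 13.3 table: it builds the three smaller tables and checks that their product reproduces the full joint (the helper names and dict layout are my own):

# Decompose P(Toothache, Catch, Cavity) into P(Toothache | Cavity) P(Catch | Cavity) P(Cavity).
full_joint = {  # (cavity, toothache, catch) -> probability, Figure 13.3 values
    (True, True, True): 0.108,   (True, True, False): 0.012,
    (True, False, True): 0.072,  (True, False, False): 0.008,
    (False, True, True): 0.016,  (False, True, False): 0.064,
    (False, False, True): 0.144, (False, False, False): 0.576,
}

p_cavity = {c: sum(p for (cc, _, _), p in full_joint.items() if cc == c)
            for c in (True, False)}
p_tooth_given_cavity = {(t, c): sum(p for (cc, tt, _), p in full_joint.items()
                                    if cc == c and tt == t) / p_cavity[c]
                        for t in (True, False) for c in (True, False)}
p_catch_given_cavity = {(k, c): sum(p for (cc, _, kk), p in full_joint.items()
                                    if cc == c and kk == k) / p_cavity[c]
                        for k in (True, False) for c in (True, False)}

# Rebuild every entry from the three smaller tables and compare with the original.
for (c, t, k), p in full_joint.items():
    rebuilt = p_tooth_given_cavity[(t, c)] * p_catch_given_cavity[(k, c)] * p_cavity[c]
    print((c, t, k), round(p, 3), round(rebuilt, 3))  # the two columns agree for this table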

18.3 Learning Decision Trees

Continued…

Expressiveness of decision trees:


Logically speaking, any particular decision tree hypothesis for the WillWait goal predicate can be seen as an
assertion of the form:

∀s WillWait(s) ⇔ (P1(s) V P2(s) V … V Pn(s))

where each condition Pi(s) is a conjunction of tests corresponding to a path from the root of the tree to a leaf with
a positive outcome.



Continued…
• Although this looks like a first-order sentence, it is, in a sense, propositional,
because it contains just one variable and all the predicates are unary.

• The decision tree is really describing a relationship between WillWait and some logical combination of attribute
values.

• Decision trees can express any function of the input attributes. For Boolean functions, each row of the truth table
corresponds to a path to a leaf.

• If the function is the parity function, which returns 1 if and only if an even number of inputs are 1, then an
exponentially large decision tree will be needed. It is also difficult to use a decision tree to represent a majority
function, which returns 1 if more than half of its inputs are 1.

• The truth table has 2^n rows, because each input case is described by n attributes.
We can consider the "answer" column of the table as a 2^n-bit number that defines the function.



Continued…
Inducing decision trees from examples :
Ex: An example for a Boolean decision tree consists of a vector of input attributes, X, and
a single Boolean output value y.
A set of examples (X1, y1), …, (X12, y12) is shown in Figure 18.3.



Continued…
• The positive examples are the ones in which the goal WillWait is true (X1, X3, …).
• The negative examples are the ones in which it is false (X2, X5, …).
• The complete set of examples is called the training set.

A trivial solution for the problem of finding a decision tree that agrees with the training set:

• Construct a decision tree that has one path to a leaf for each example,
where the path tests each attribute in turn and follows the value for the example and the leaf has the
classification of the example.

• When given the same example again, the decision tree will come up with the right classification.

• Unfortunately, it will not have much to say about any other cases!



Continued…
Figure 18.4 shows how the algorithm gets started.



Continued…
• We are given 12 training examples, which we classify into positive and negative sets.

• We then decide which attribute to use as the first test in the tree.

• Figure 18.4(a) shows that Type is a poor attribute, because it leaves us with four possible outcomes,
each of which has the same number of positive and negative examples.

• On the other hand, in Figure 18.4(b) we see that Patrons is a fairly important attribute, because if the value is
None or Some, then we are left with example sets for which we can answer definitively (No and Yes,
respectively).

• If the value is Full, we are left with a mixed set of examples.

• In general, after the first attribute test splits up the examples, each outcome is a new decision tree learning
problem in itself, with fewer examples and one fewer attribute.



Continued…
There are four cases to consider for these recursive problems:

1. If there are some positive and some negative examples, then choose the best attribute to split them.
(Figure 18.4(b) shows Hungry being used to split the remaining examples.)

2. If all the remaining examples are positive (or all negative), then we are done: we can answer Yes or No.
(Figure 18.4(b) shows examples of this in the None and Some cases.)

3. If there are no examples left, it means that no such example has been observed, and we return a default value
calculated from the majority classification at the node's parent.

4. If there are no attributes left, but both positive and negative examples, we have a problem.
⮚ It means that these examples have exactly the same description, but different classifications.
⮚ This happens when some of the data are incorrect; we say there is noise in the data.
⮚ It also happens either when the attributes do not give enough information to describe the situation fully, or when
the domain is truly nondeterministic. One simple way out of the problem is to use a majority vote.



Continued…
Decision Tree Learning Algorithm:
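
Since the pseudocode figure (Figure 18.5) is not reproduced here, the following is a minimal Python sketch of the recursive scheme described in the four cases above. It is my own illustration, not the textbook's DECISION-TREE-LEARNING code; the training data at the bottom is a tiny made-up set, not the restaurant examples.

import math

def entropy(pos, neg):
    """Information content I(p/(p+n), n/(p+n)) in bits."""
    total = pos + neg
    bits = 0.0
    for count in (pos, neg):
        if count:
            q = count / total
            bits -= q * math.log2(q)
    return bits

def majority(examples):
    positives = sum(1 for e in examples if e["class"])
    return positives >= len(examples) - positives

def gain(attribute, examples):
    pos = sum(1 for e in examples if e["class"])
    neg = len(examples) - pos
    remainder = 0.0
    for value in set(e[attribute] for e in examples):
        subset = [e for e in examples if e[attribute] == value]
        p_i = sum(1 for e in subset if e["class"])
        remainder += len(subset) / len(examples) * entropy(p_i, len(subset) - p_i)
    return entropy(pos, neg) - remainder

def learn_tree(examples, attributes, default):
    if not examples:                                  # case 3: no examples left
        return default
    classes = {e["class"] for e in examples}
    if len(classes) == 1:                             # case 2: all positive or all negative
        return classes.pop()
    if not attributes:                                # case 4: noise or missing information
        return majority(examples)
    best = max(attributes, key=lambda a: gain(a, examples))  # case 1: split on best attribute
    tree = {"attribute": best, "branches": {}}
    for value in set(e[best] for e in examples):
        subset = [e for e in examples if e[best] == value]
        rest = [a for a in attributes if a != best]
        tree["branches"][value] = learn_tree(subset, rest, majority(examples))
    return tree

# Tiny made-up training set (not the restaurant data): wait iff Patrons == "Some".
data = [{"Patrons": "Some", "Hungry": True,  "class": True},
        {"Patrons": "None", "Hungry": False, "class": False},
        {"Patrons": "Full", "Hungry": True,  "class": False},
        {"Patrons": "Some", "Hungry": False, "class": True}]
print(learn_tree(data, ["Patrons", "Hungry"], default=False))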



Continued…
The Decision Tree induced:

Continued…
• We need a formal measure of "fairly good" and "really useless", so that we can implement the CHOOSE-ATTRIBUTE
function of Figure 18.5.

• The measure should have its maximum value when the attribute is perfect and its minimum value when the
attribute is of no use at all.

• One suitable measure is the expected amount of information provided by the attribute.
Ex: Whether a coin will come up heads.
-- The amount of information contained in the answer depends on one's prior knowledge.
-- The less you know, the more information is provided.

• Information theory measures information content in bits.

• One bit of information is enough to answer a yes/no question about which one has no idea, such as the flip of a
fair coin.



Continued…
• In general, if the possible answers vi have probabilities P(vi),

then the information content I of the actual answer is given by

I(P(v1), …, P(vn)) = Σi -P(vi) log2 P(vi)

To check this equation, for the tossing of a fair coin, we get

I(1/2, 1/2) = -(1/2) log2 (1/2) - (1/2) log2 (1/2) = 1 bit
• If the coin is loaded to give 99% heads, we get I (1/100,99/100) = 0.08 bits, and
as the probability of heads goes to 1, the information of the actual answer goes to 0.
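
A small Python sketch of this measure (the function name info_content is my own):

import math

def info_content(probabilities):
    """I(P(v1), ..., P(vn)) = sum of -P(vi) * log2 P(vi), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(info_content([0.5, 0.5]))              # 1.0 bit for a fair coin
print(round(info_content([0.01, 0.99]), 2))  # about 0.08 bits for a 99% heads coin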



Continued…
• A correct decision tree can answer the question “what is the correct classification?”

• An estimate of the probabilities of the possible answers before any of the attributes have been tested is given by
the proportions of positive and negative examples in the training set.

• Suppose the training set contains p positive examples and n negative examples.
• Then an estimate of the information contained in a correct answer is:

I( p/(p+n), n/(p+n) )
• The restaurant training set in Figure 18.3 has p = n = 6, so we need 1 bit of information.

• Now a test on a single attribute A will not usually tell us this much information, but it will give us some of it.

• We can measure exactly how much by looking at how much information we still need after the attribute test.



Continued…
• Any attribute A divides the training set E into subsets E1,…..,Ev according to their values for A,
where A can have v distinct values.
• Each subset Ei has pi positive examples and ni negative examples,
so if we go along that branch, we will need an additional

I( pi/(pi+ni), ni/(pi+ni) )

bits of information to answer the question.

• A randomly chosen example from the training set has the ith value for the attribute
with probability (pi + ni)/(p+n).
⮚ So on average, after testing attribute A, we will need

Remainder(A) = Σi [ (pi + ni) / (p + n) ] x I( pi/(pi+ni), ni/(pi+ni) )

bits of information to classify the example.


Continued…
• The information gain from the attribute test is
the difference between the original information requirement and the new requirement:

Gain(A) = I( p/(p+n), n/(p+n) ) - Remainder(A)

• The heuristic used in the CHOOSE-ATTRIBUTE function is just to choose the attribute with the largest gain.
• Returning to the attributes considered in Figure 18.4, we have

Gain(Patrons) ≈ 0.541 bits and Gain(Type) = 0 bits,
confirming our intuition that Patrons is a better attribute to split on.


⮚ In fact, Patrons has the highest gain of any of the attributes and would be chosen by the decision-tree learning
algorithm as the root.
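
A hedged Python sketch of this comparison. The per-branch counts below are read off the Figure 18.4 splits as described above (None: 0+/2-, Some: 4+/0-, Full: 2+/4- for Patrons; each Type branch equally split), so treat them as assumptions if your copy of the figure differs; the function names are my own.

import math

def info(pos, neg):
    """Information content of a (positive, negative) split, in bits."""
    total = pos + neg
    bits = 0.0
    for count in (pos, neg):
        if count:
            q = count / total
            bits -= q * math.log2(q)
    return bits

def gain(branches):
    """branches: list of (positive, negative) counts, one per attribute value."""
    p = sum(b[0] for b in branches)
    n = sum(b[1] for b in branches)
    remainder = sum((pi + ni) / (p + n) * info(pi, ni) for pi, ni in branches)
    return info(p, n) - remainder

patrons = [(0, 2), (4, 0), (2, 4)]            # None, Some, Full
type_attr = [(1, 1), (1, 1), (2, 2), (2, 2)]  # French, Italian, Thai, Burger

print(round(gain(patrons), 3))    # about 0.541 bits
print(round(gain(type_attr), 3))  # 0.0 bits
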
***



13.7 The Wumpus world revisited

❖ Uncertainty arises in the wumpus world because the agent's sensors give only partial, local information about the
world.



Continued...

▪ Figure 13.6 shows a situation in which each of the three reachable squares ([1,3], [2,2], and [3,1]) might contain a
pit.
▪ Pure logical inference can conclude nothing about which square is most likely to be safe,
so a logical agent might be forced to choose randomly.
▪ A probabilistic agent can do much better than the logical agent.



Continued...
Aim: To calculate the probability that each of the three squares contains a pit.
(For the purposes of this example, we will ignore the wumpus and the gold.)

The relevant properties of the wumpus world are that


(1) a pit causes breezes in all neighboring squares, and
(2) each square other than [1,1] contains a pit with probability 0.2.

The first step is to identify the set of random variables we need:

• Pi,j is true if and only if square [i, j] actually contains a pit.

• Bi,j is true if and only if square [i, j] is breezy.

• The Bi,j variables are included only for the observed squares: [1,1], [1,2], and [2,1].



Continued...
Now specify the full joint distribution: P(P1,1, …, P4,4, B1,1, B1,2, B2,1)

Applying the product rule [ P(a Ʌ b) = P(a | b) P(b) ], we have:

P(P1,1, …, P4,4, B1,1, B1,2, B2,1) = P(B1,1, B1,2, B2,1 | P1,1, …, P4,4) P(P1,1, …, P4,4)
• The first term (on the RHS) : Conditional probability of a breeze configuration, given a pit configuration
-- this is 1 if the breezes are adjacent to the pits and 0 otherwise.
• The second term : Prior probability of a pit configuration.
-- Each square contains a pit with probability 0.2, independently of the other squares.
Hence,

P(P1,1, …, P4,4) = Πi,j P(Pi,j)

For a configuration with n pits, this is just 0.2^n x 0.8^(16-n).



Continued...
In the situation in Figure 13.6(a), the evidence consists of --
the observed breeze (or its absence) in each square that is visited, combined with the fact that
each such square contains no pit.

We'll abbreviate these facts as :


b = ¬b1,1 Ʌ b1,2 Ʌ b2,1 and known = ¬p1,1 Ʌ ¬p1,2 Ʌ ¬p2,1

We are interested in answering queries such as : P(P1,3 | known, b).


(how likely is it that [1, 3] contains a pit, given the observation so far?)

To answer this query, we can follow the standard approach suggested by the equation

P(X | e) = α Σy P(X, e, y)      --- (4)

namely summing over entries from the full joint distribution.



Continued...
• Let Unknown be a composite variable consisting of the Pi,j variables for squares other than the Known squares and
the query square [1,3].
• Then by equation (4) we have

P(P1,3 | known, b) = α Σunknown P(P1,3, unknown, known, b)
Continued...
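
As a concrete check on the numbers behind Figure 13.6, here is a brute-force Python sketch of the summation above. It is my own illustration, not the slides' derivation: it enumerates pit configurations for the three frontier squares, keeps the ones consistent with the observed breezes at [1,2] and [2,1], and normalizes.

from itertools import product

PIT_PRIOR = 0.2
frontier = ["p13", "p22", "p31"]   # the three unvisited squares bordering the visited ones

def prior(config):
    """Prior probability of a pit configuration (independent pits, probability 0.2 each)."""
    p = 1.0
    for has_pit in config.values():
        p *= PIT_PRIOR if has_pit else 1 - PIT_PRIOR
    return p

def consistent(config):
    """Observed evidence b: [1,2] and [2,1] are breezy, and the visited squares are pit-free."""
    breeze_12 = config["p13"] or config["p22"]
    breeze_21 = config["p22"] or config["p31"]
    return breeze_12 and breeze_21

configs = [dict(zip(frontier, values)) for values in product([True, False], repeat=3)]
weights = {tuple(c.values()): prior(c) for c in configs if consistent(c)}
total = sum(weights.values())

for i, square in enumerate(frontier):
    p_pit = sum(w for values, w in weights.items() if values[i]) / total
    print(square, round(p_pit, 2))   # roughly 0.31 for p13 and p31, 0.86 for p22

The sketch yields roughly 0.31 for [1,3] and [3,1] and about 0.86 for [2,2], consistent with the textbook's conclusion that [2,2] is the square the agent should avoid.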
