7 Statistical Reasoning
Probabilistic reasoning in Artificial intelligence
• In the real world there are many scenarios where the certainty of something cannot be confirmed, such as "It will rain today," "the behavior of someone in a given situation," or "the outcome of a match between two teams or two players." These are probable statements: we can assume they may happen, but we cannot be sure, so we use probabilistic reasoning.
Need for probabilistic reasoning in AI:
o When there are unpredictable outcomes.
o When the specifications or possibilities of predicates become too large to handle.
o When an unknown error occurs during an experiment.
In probabilistic reasoning, there are two ways to solve problems with uncertain
knowledge:
o Bayes' rule
o Bayesian Statistics
As probabilistic reasoning uses probability and related terms, so before
understanding probabilistic reasoning, let's understand some common terms:
Probability: Probability can be defined as the chance that an uncertain event will occur. It is the numerical measure of the likelihood that an event will occur. The value of a probability always lies between 0 and 1, which represent the two extremes of uncertainty and certainty:
•P(A) = 0 indicates total uncertainty about event A (the event will not occur).
•P(A) = 1 indicates total certainty about event A (the event will certainly occur).
We can express the probability of an uncertain event using the following properties (illustrated in the short Python sketch after this list):
•0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
•P(¬A) = probability of event A not happening.
•P(¬A) + P(A) = 1.
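As a minimal Python sketch of these properties (the function name and the 0.3 rain probability are illustrative assumptions, not values from the text):

def complement(p_a):
    # Return P(not A) given P(A), using P(A) + P(not A) = 1.
    if not 0.0 <= p_a <= 1.0:
        raise ValueError("A probability must lie between 0 and 1.")
    return 1.0 - p_a

p_rain = 0.3                      # assumed probability that it will rain today
p_no_rain = complement(p_rain)    # P(no rain) = 1 - 0.3 = 0.7
print(p_rain + p_no_rain)         # 1.0, since P(A) + P(not A) = 1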
Random variables: Random variables are used to represent the events and
objects in the real world.
Prior probability: The prior probability of an event is the probability computed before any new information is observed.
Posterior probability: The probability that is calculated after all evidence or information has been taken into account. It is a combination of the prior probability and the new information.
Conditional probability:
Conditional probability is the probability of an event occurring given that another event has already happened. It is given as:
P(A|B) = P(A⋀B) / P(B)
Where P(A⋀B) = joint probability of A and B
P(B) = marginal probability of B.
If the probability of A is given and we need to find the probability of B given A, then it will be given as:
P(B|A) = P(A⋀B) / P(A)
Example: Suppose that in a class, 70% of the students like English and 40% of the students like both English and Mathematics. What percentage of the students who like English also like Mathematics?
Let A be the event that a student likes English and B be the event that a student likes Mathematics. Then
P(B|A) = P(A⋀B) / P(A) = 0.4 / 0.7 = 0.57
Hence, 57% of the students who like English also like Mathematics.
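A minimal Python sketch of this conditional-probability calculation, assuming the joint and marginal values from the example above (the function name is illustrative):

def conditional_probability(p_joint, p_marginal):
    # P(B|A) = P(A and B) / P(A); the conditioning event must have non-zero probability.
    if p_marginal == 0:
        raise ValueError("Cannot condition on an event with zero probability.")
    return p_joint / p_marginal

p_english_and_maths = 0.4   # P(A and B): students who like both subjects
p_english = 0.7             # P(A): students who like English
p_maths_given_english = conditional_probability(p_english_and_maths, p_english)
print(round(p_maths_given_english, 2))   # 0.57, i.e. about 57%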
Bayes' theorem:
• Bayes' theorem is also known as Bayes' rule, Bayes' law, or Bayesian reasoning; it determines the probability of an event with uncertain knowledge.
• In probability theory, it relates the conditional probability and marginal
probabilities of two random events.
• Bayes' theorem was named after the British mathematician Thomas Bayes.
The Bayesian inference is an application of Bayes' theorem, which is
fundamental to Bayesian statistics.
• It is a way to calculate the value of P(B|A) with the knowledge of P(A|B).
• Bayes' theorem allows us to update the predicted probability of an event as new information from the real world is observed.
Example:
• If the probability of cancer depends on a person's age, then by using Bayes' theorem we can determine the probability of cancer more accurately given knowledge of the person's age.
• Bayes' theorem can be derived using the product rule and the conditional probability of event A with known event B (a small Python sketch of the rule is given after this derivation):
From the product rule, P(A⋀B) = P(A|B) P(B), and equivalently P(A⋀B) = P(B|A) P(A).
Equating the two expressions and dividing by P(B) gives Bayes' theorem:
P(A|B) = P(B|A) P(A) / P(B)
Here P(A) is the prior probability, P(B|A) is the likelihood of the evidence given A, P(B) is the marginal probability of the evidence, and P(A|B) is the posterior probability.
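As a minimal sketch (function and variable names are illustrative, not from the text), the rule translates directly into Python:

def bayes(p_b_given_a, p_a, p_b):
    # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
    if p_b == 0:
        raise ValueError("P(B) must be non-zero.")
    return p_b_given_a * p_a / p_b

# Toy check with assumed numbers: P(B|A) = 0.5, P(A) = 0.4, P(B) = 0.25
print(bayes(0.5, 0.4, 0.25))   # 0.5 * 0.4 / 0.25 = 0.8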
Example-1:
Question: What is the probability that a patient has meningitis, given that the patient has a stiff neck?
Given Data:
A doctor is aware that the disease meningitis causes a patient to have a stiff neck 80% of the time.
He is also aware of some more facts, which are given as follows:
o The known probability that a patient has meningitis is 1/30,000.
o The known probability that a patient has a stiff neck is 2%.
Let a be the proposition that the patient has a stiff neck and b be the proposition that the patient has meningitis, so we can calculate the following:
P(a|b) = 0.8
P(b) = 1/30000
P(a) = 0.02
Applying Bayes' theorem:
P(b|a) = P(a|b) P(b) / P(a) = (0.8 × 1/30000) / 0.02 ≈ 0.0013 ≈ 1/750
Hence, we can assume that about 1 patient out of 750 patients with a stiff neck has meningitis.
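The same calculation can be checked with a short Python sketch (variable names are illustrative; the numbers are those given in the example):

p_stiff_neck_given_meningitis = 0.8      # P(a|b)
p_meningitis = 1 / 30000                 # P(b)
p_stiff_neck = 0.02                      # P(a)

# Bayes' theorem: P(b|a) = P(a|b) * P(b) / P(a)
p_meningitis_given_stiff_neck = (p_stiff_neck_given_meningitis * p_meningitis) / p_stiff_neck
print(p_meningitis_given_stiff_neck)      # ~0.00133
print(1 / p_meningitis_given_stiff_neck)  # ~750, i.e. about 1 patient in 750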
Application of Bayes' theorem in Artificial intelligence:
Example: Let A represent the proposition "Moore is attractive". Then the axioms of probability insist that P(A) + P(¬A) = 1.
Now suppose that Andrew does not even know who "Moore" is. We cannot say that Andrew believes the proposition if he has no idea what it means, and it is equally unfair to say that he disbelieves it. It would therefore be meaningful to denote Andrew's belief B in both the proposition and its negation as 0, even though the axioms of probability insist that P(A) + P(¬A) = 1.