Probability I

The document provides an overview of basic concepts in probability, including definitions of random experiments, outcomes, sample spaces, events, and types of events such as simple and compound events. It also discusses various definitions of probability, including the classical, statistical, and axiomatic approaches, along with their limitations. Additionally, it covers independent and dependent events, conditional probability, and Bayes' theorem.

Unit-I Probability-I

Random Experiment: An experiment is said to be random if, when conducted under identical
conditions, the outcome is not unique but may be any one of the possible outcomes. In
other words, an experiment is called a random experiment if it satisfies the following two
conditions:

I) It has more than one possible outcome.

II) It is not possible to predict the outcome in advance.

Examples of random experiments include throwing a die, tossing a coin, and drawing a card
from a pack of cards.

Outcome: The result of a random experiment is known as an outcome.

Trial: Any particular performance of a random experiment is called a trial. For example, a
single toss of a coin constitutes a trial.

Sample Space: The set of all possible outcomes of a random experiment is known as the sample
space. It is denoted by the letter S. For example, in tossing a coin twice, the sample space is
given by: S = {HH, HT, TH, TT}. In throwing a die, the sample space is given by: S = {1, 2, 3, 4,
5, 6}.
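As a quick illustration (a Python sketch, not part of the original notes), the sample space for tossing a coin twice can be enumerated as the set of all ordered pairs of H and T:

```python
from itertools import product

# Sample space for tossing a coin twice: all ordered pairs of H/T
S = [''.join(p) for p in product('HT', repeat=2)]
print(S)  # ['HH', 'HT', 'TH', 'TT']
```

Changing `repeat=2` to `repeat=3` enumerates the sample space for three tosses in the same way.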

Sample Point: Each element of sample space is called sample point. In other words, each
outcome of the random experiment is also called sample point.

Event: An outcome or a combination of outcomes of a random experiment is known as an event. In
set terminology, an event is any subset of the sample space. An event may also be defined as
follows:
“Of all the possible outcomes in the sample space of a random experiment, some
outcomes satisfy a specified description, which we call an event.”

Simple Event: An event is said to be simple if it corresponds to a single possible outcome of a
random experiment. In other words, an event is simple if it contains only one sample
point of the sample space. It is also known as an elementary event. In a sample space containing ‘n’
distinct elements, there are exactly ‘n’ simple events. For example, in the experiment of
tossing two coins, there are four simple events, viz. {HH}, {HT}, {TH} and {TT}.

Compound Event: If an event has more than one sample point of the sample space, it is called a
compound or composite event. For example, in the experiment of ‘tossing a coin thrice’, the
events:

E: ‘Exactly one head appeared’, F: ‘At least one head appeared’, G: ‘At most one head appeared’

are all compound events. The subsets of S associated with these events are:

E = {HTT, THT, TTH}, F = {HTT, THT, TTH, HHT, HTH, THH, HHH}, G = {TTT, THT, HTT, TTH}

Each of the above subsets contains more than one sample point; hence they are all compound
events.

Impossible and Sure Events: The empty set ∅ and the sample space S also describe events. In fact,
∅ is called the impossible event, and S, i.e. the whole sample space, is called the sure event.

Exhaustive Events: The total number of possible outcomes of a random experiment is known
as the exhaustive events or cases. For example, in throwing of a die, there are 6 exhaustive
cases since any one of the 6 faces 1, 2, 3, 4, 5, 6 may come. In tossing of a coin, there are two
exhaustive cases viz., head and tail.

Favourable Events or Cases: The number of cases favourable to the happening of an event in
a random experiment is known as the number of favourable cases. For example, in drawing a card from a
pack of cards, the number of cases favourable to drawing a king is 4; for drawing a diamond
it is 13, and for drawing a red card it is 26.

Equally Likely Events: Events are said to be equally likely if there is no reason to expect one in
preference to another. For example, in tossing a coin, the events head and tail are equally
likely; in throwing a die, all six faces are equally likely.

Mathematical or Classical or A Priori Probability: If a random experiment or trial results in ‘n’
exhaustive, mutually exclusive and equally likely outcomes (or cases), out of which ‘m’ are
favourable to the happening of an event E, then the probability of happening of event E,
usually denoted by P(E), is given by:

P(E) = (Number of favourable cases) / (Total number of exhaustive cases) = m/n
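The classical formula m/n can be checked with a small counting sketch (Python, illustrative only): enumerate the 52 equally likely cases of a standard pack and count those favourable to drawing a king.

```python
from fractions import Fraction

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['spades', 'hearts', 'diamonds', 'clubs']
deck = [(r, s) for r in ranks for s in suits]  # n = 52 exhaustive, equally likely cases

# P(E) = m/n, where m counts the cases favourable to drawing a king
m = sum(1 for r, s in deck if r == 'K')
p_king = Fraction(m, len(deck))
print(p_king)  # 1/13
```

Using `Fraction` keeps the answer exact (4/52 reduces to 1/13) instead of a rounded decimal.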

Limitations of the Classical Definition of Probability: The classical definition of probability fails in
the following cases:

i) If the various outcomes of the random experiment are not equally likely. For example,
the probability that a ceiling fan in a room will fall is not ½, since the events of the
fan ‘falling’ and ‘not falling’, though mutually exclusive and exhaustive, are not
equally likely. In fact, the probability of the fan falling will be almost zero.
ii) If the exhaustive number of outcomes of the random experiment is infinite or
unknown.

Statistical Definition of Probability (Relative frequency approach to probability; Richard


Von Mises): If an experiment is performed repeatedly under essentially homogeneous and
identical conditions, then the limiting value of the ratio of the number of times the event
occurs to the number of trials, as the number of trials becomes indefinitely large, is called
probability of happening of the event, it being assumed that the limit is finite and unique.

Symbolically, if in N trials an event E happens M times, then the probability of the
happening of E, denoted by P(E), is given by:

P(E) = lim (N→∞) M/N

Limitations of the Statistical Definition of Probability: I) If an experiment is repeated a large
number of times, the experimental conditions may not remain identical and homogeneous. II)
The limit may not attain a unique value, however large N may be.

Axiomatic Definition of Probability: Given a sample space S of a random experiment, the
probability of the occurrence of any event A is defined as a real-valued function P(A) whose
domain is the power set of S and whose range is the interval [0, 1], satisfying the following axioms:

Axiom 1. For any event E, P(E) ≥ 0 (axiom of non-negativity)

Axiom 2. P(S) = 1 (axiom of certainty)

Axiom 3. If A₁, A₂, A₃, … is any finite or countably infinite sequence of mutually exclusive events of S, then

P(A₁ ∪ A₂ ∪ A₃ ∪ …) = P(A₁) + P(A₂) + P(A₃) + … (axiom of additivity)

Independent Events: Two or more events are said to be independent if the happening or non-
happening of any one of them does not, in any way, affect the happening of the others. In other
words, two events A and B are said to be independent if:

P(A ∩ B) = P(A)P(B)

For example, in throwing a die repeatedly, the event of getting ‘3’ in the first throw is
independent of getting ‘3’ in the second, third or subsequent throws.
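The multiplication rule for independent events can be verified by direct enumeration (a Python sketch, not part of the original notes) over the 36 equally likely outcomes of two throws of a die:

```python
from fractions import Fraction
from itertools import product

# Sample space for two throws of a fair die: 36 equally likely ordered pairs
S = list(product(range(1, 7), repeat=2))

A = {s for s in S if s[0] == 3}  # '3' on the first throw
B = {s for s in S if s[1] == 3}  # '3' on the second throw

P = lambda E: Fraction(len(E), len(S))

# Independence: P(A ∩ B) equals P(A) * P(B)
print(P(A & B), P(A) * P(B))  # 1/36 1/36
```

Here P(A) = P(B) = 1/6 and P(A ∩ B) = 1/36 = (1/6)(1/6), confirming that the two throws are independent.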

Dependent Events: Two or more events are said to be dependent if the happening or non-
happening of any one of them affects the happening of the others. In other words, two events A
and B are said to be dependent if:

P(A ∩ B) = P(A)P(B/A) = P(B)P(A/B)

Conditional Probability: Two events A and B are said to be dependent if the happening or
non-happening of one event affects the happening of the other. The probability attached to
such an event is called a conditional probability and is denoted by P(A/B). If two events A
and B are dependent, then the conditional probability of B given A is defined by:

P(B/A) = P(A ∩ B) / P(A); provided P(A) > 0

Similarly, the conditional probability of A given B is defined by:

P(A/B) = P(A ∩ B) / P(B); provided P(B) > 0
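As a worked sketch (Python; the particular events are chosen for illustration), consider two throws of a die with A = ‘first throw is 3’ and B = ‘the sum of the throws is 8’, and compute P(B/A) from the defining ratio:

```python
from fractions import Fraction
from itertools import product

S = list(product(range(1, 7), repeat=2))  # two throws of a die: 36 outcomes

A = {s for s in S if s[0] == 3}           # first throw is 3
B = {s for s in S if s[0] + s[1] == 8}    # sum of the two throws is 8

P = lambda E: Fraction(len(E), len(S))

# P(B/A) = P(A ∩ B) / P(A)
p_B_given_A = P(A & B) / P(A)
print(p_B_given_A)  # 1/6
```

Here A ∩ B = {(3, 5)}, so P(A ∩ B) = 1/36 and P(A) = 1/6, giving P(B/A) = 1/6: knowing the first throw changed the probability of B (unconditionally P(B) = 5/36), so A and B are dependent.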

Bayes’ Theorem: If E₁, E₂, E₃, …, Eₙ are n mutually exclusive events with P(Eᵢ) > 0 (i =
1, 2, …, n), then for any arbitrary event A which is a subset of E₁ ∪ E₂ ∪ … ∪ Eₙ such that P(A) >
0, we have:

P(Eᵢ/A) = P(Eᵢ)P(A/Eᵢ) / [P(E₁)P(A/E₁) + P(E₂)P(A/E₂) + … + P(Eₙ)P(A/Eₙ)] = P(Eᵢ)P(A/Eᵢ) / P(A); i = 1, 2, …, n
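A numerical sketch of Bayes’ theorem (Python; the urn composition is a hypothetical example, not from the notes): an urn is chosen at random from two urns and a ball is drawn, and we update the probability that urn 1 was chosen given that the ball is red.

```python
from fractions import Fraction

# Hypothetical setup: urn 1 holds 3 red / 2 white balls; urn 2 holds 1 red / 4 white.
# An urn is chosen at random (P = 1/2 each), then one ball is drawn from it.
P_E = [Fraction(1, 2), Fraction(1, 2)]           # priors P(E_i): which urn was chosen
P_A_given_E = [Fraction(3, 5), Fraction(1, 5)]   # likelihoods P(A/E_i), A = 'red ball drawn'

# Denominator of Bayes' theorem: P(A) = sum of P(E_i) * P(A/E_i)
P_A = sum(p * l for p, l in zip(P_E, P_A_given_E))

# Bayes' theorem: P(E_1/A) = P(E_1)P(A/E_1) / P(A)
posterior_urn1 = P_E[0] * P_A_given_E[0] / P_A
print(posterior_urn1)  # 3/4
```

Drawing a red ball raises the probability of urn 1 from the prior 1/2 to the posterior 3/4, since urn 1 is three times as likely to yield a red ball.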
