Probability I
Examples of random experiments are: throwing of a die, tossing of a coin, drawing of a card
from a pack of cards, etc.
Trial: Any particular performance of a random experiment is called a trial. For example, a
single toss of a coin is a trial of the random experiment of coin tossing.
Sample Space: The set of all possible outcomes of a random experiment is known as the sample
space. It is denoted by the letter S. For example, in tossing of a coin twice, the sample space is
given by: S = {HH, HT, TH, TT}. In throwing of a die, the sample space is given by: S = {1, 2, 3, 4,
5, 6}.
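As an illustrative sketch (the code and variable names below are our own, not part of the text), the two sample spaces just mentioned can be enumerated in Python:

from itertools import product

# Sample space for tossing a coin twice: all ordered sequences of H and T.
S_coins = {''.join(outcome) for outcome in product('HT', repeat=2)}
print(S_coins)        # the four outcomes HH, HT, TH, TT (set order may vary)

# Sample space for throwing a die: the six faces.
S_die = {1, 2, 3, 4, 5, 6}
print(S_die)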
Sample Point: Each element of the sample space is called a sample point. In other words, each
outcome of the random experiment is also called a sample point.
Event: In set terminology, an event is any subset of the sample space. An event may also be defined as
follows:
“Of all the possible outcomes in the sample space of a random experiment, some
outcomes satisfy a specified description, which we call an event”
Compound Event: If an event has more than one sample point of the sample space, it is called a
compound or composite event. For example, in the experiment of ‘tossing of a coin thrice’, the
events:
E: ‘Exactly one head appeared’, F: ‘At least one head appeared’, G: ‘At most one head appeared’
are all compound events. The subsets of S associated with these events are:
E = {HTT, THT, TTH}, F = {HTT, THT, TTH, HHT, HTH, THH, HHH}, G = {TTT, THT, HTT, TTH}
Each of the above subsets contains more than one sample point, hence they are all compound
events.
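The same three events can be generated mechanically. A minimal sketch (our own construction, assuming the eight-outcome sample space above):

from itertools import product

# Sample space for tossing a coin thrice.
S = {''.join(t) for t in product('HT', repeat=3)}

E = {s for s in S if s.count('H') == 1}   # exactly one head appeared
F = {s for s in S if s.count('H') >= 1}   # at least one head appeared
G = {s for s in S if s.count('H') <= 1}   # at most one head appeared

print(sorted(E))      # ['HTT', 'THT', 'TTH']
print(sorted(F))      # the seven outcomes with at least one H
print(sorted(G))      # ['HTT', 'THT', 'TTH', 'TTT']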
Impossible and Sure Events: The empty set ∅ and the sample space S also describe events. In fact,
∅ is called the impossible event and S, i.e. the whole sample space, is called the sure event.
Exhaustive Events: The total number of possible outcomes of a random experiment is known
as the exhaustive events or cases. For example, in throwing of a die, there are 6 exhaustive
cases since any one of the 6 faces 1, 2, 3, 4, 5, 6 may come. In tossing of a coin, there are two
exhaustive cases viz., head and tail.
Favourable Events or Cases: The number of cases favourable to the happening of an event in
a random experiment is known as the number of favourable cases. For example, in drawing a card from a
pack of cards, the number of cases favourable to drawing a King is 4, for drawing a diamond
is 13 and for drawing a red card is 26.
Equally likely Events: Events are said to be equally likely if there is no reason to expect one in
preference to the other. For example, in tossing of a coin, the events head and tail are equally
likely; in throwing of a die, all the six faces are equally likely.
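Combining favourable and exhaustive cases under the classical (equally likely) rule P(E) = (number of favourable cases)/(number of exhaustive cases), the card-drawing probabilities quoted above work out as follows (a sketch; the Fraction class is used only to keep the answers exact):

from fractions import Fraction

exhaustive = 52                          # total cards in the pack
print(Fraction(4, exhaustive))           # P(King)     = 1/13
print(Fraction(13, exhaustive))          # P(Diamond)  = 1/4
print(Fraction(26, exhaustive))          # P(Red card) = 1/2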
The classical definition of probability fails in the following cases:
i) If the various outcomes of the random experiment are not equally likely. For example,
the probability that a ceiling fan in a room will fall is not ½, since the events of the
fan ‘falling’ and ‘not falling’, though mutually exclusive and exhaustive, are not
equally likely. In fact, the probability of the fan falling will be almost zero.
ii) If the exhaustive number of outcomes of the random experiment is infinite or
unknown.
Statistical (or Empirical) Probability: If an event E occurs M times in N repetitions of a random
experiment, then the probability of E is defined as the limiting value of the relative frequency
M/N as N becomes indefinitely large:
P(E) = lim_{N→∞} M/N
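A short simulation (our own sketch, not part of the text) illustrates this limiting behaviour: the relative frequency M/N of heads in N tosses of a fair coin settles near 1/2 as N grows.

import random

random.seed(1)                       # fixed seed so the run is repeatable
M = 0                                # number of times the event 'head' has occurred
for N in range(1, 100_001):
    if random.random() < 0.5:        # one toss of a fair coin
        M += 1
    if N in (10, 100, 1_000, 10_000, 100_000):
        print(N, M / N)              # relative frequency M/N approaches P(E) = 0.5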
Independent Events: Two or more events are said to be independent if the happening or non-
happening of any one of them does not, in any way, affect the happening of the others. In other
words, two events A and B are said to be independent if:
𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵)
For example, in tossing of a die repeatedly, the event of getting ‘3’ in the first throw is
independent of getting ‘3’ in the second, third or subsequent throws.
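The product rule can be checked by brute force on the 36 equally likely outcomes of two throws of a die (a sketch with event names of our own choosing):

from itertools import product
from fractions import Fraction

S = list(product(range(1, 7), repeat=2))     # 36 ordered pairs (first throw, second throw)

def P(event):
    return Fraction(len(event), len(S))      # classical probability on S

A = {s for s in S if s[0] == 3}              # '3' on the first throw
B = {s for s in S if s[1] == 3}              # '3' on the second throw

print(P(A & B))                              # 1/36
print(P(A) * P(B))                           # 1/36, so P(A ∩ B) = P(A)P(B): independent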
Dependent Events: Two or more events are said to be dependent if the happening or non-
happening of any one of them affects the happening of the others. In other words, two events A
and B are said to be dependent if:
P(A ∩ B) ≠ P(A)P(B)
Conditional Probability: Two events A and B are said to be dependent if the happening or
non-happening of one event affects the happening of the other event. The probability attached to
such an event is called the conditional probability and is denoted by P(A/B), read as the
probability of A given that B has occurred. If two events A
and B are dependent, then the conditional probability of B given A is defined by:
P(B/A) = P(A ∩ B) / P(A), provided P(A) > 0
P(A/B) = P(A ∩ B) / P(B), provided P(B) > 0
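A worked sketch of the formula (the events are our own choice): with two throws of a die, let A = ‘the first throw shows 3’ and B = ‘the two throws sum to 8’; then P(B/A) = P(A ∩ B)/P(A) = (1/36)/(1/6) = 1/6.

from itertools import product
from fractions import Fraction

S = list(product(range(1, 7), repeat=2))     # 36 equally likely ordered pairs

def P(event):
    return Fraction(len(event), len(S))

A = {s for s in S if s[0] == 3}              # first throw shows 3
B = {s for s in S if sum(s) == 8}            # the two throws sum to 8

print(P(A & B) / P(A))                       # P(B/A) = (1/36)/(1/6) = 1/6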
Bayes’ Theorem: If E1, E2, E3, …, En are n mutually exclusive events with P(Ei) > 0
(i = 1, 2, …, n), then for any arbitrary event A which is a subset of E1 ∪ E2 ∪ … ∪ En such that P(A) >
0, we have:
P(Ei/A) = P(Ei) P(A/Ei) / [P(E1) P(A/E1) + P(E2) P(A/E2) + … + P(En) P(A/En)], for i = 1, 2, …, n.
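A numerical sketch with hypothetical figures (not from the text): suppose three machines E1, E2, E3 produce 50%, 30% and 20% of the items, with defect rates 2%, 3% and 4% respectively, and A is the event that an item drawn at random is defective. Bayes’ theorem then gives the probability that the defective item came from each machine.

from fractions import Fraction

prior = {'E1': Fraction(50, 100), 'E2': Fraction(30, 100), 'E3': Fraction(20, 100)}     # P(Ei)
likelihood = {'E1': Fraction(2, 100), 'E2': Fraction(3, 100), 'E3': Fraction(4, 100)}   # P(A/Ei)

# Denominator of Bayes' theorem: P(A) = sum over i of P(Ei) * P(A/Ei)
P_A = sum(prior[e] * likelihood[e] for e in prior)

for e in prior:
    posterior = prior[e] * likelihood[e] / P_A      # P(Ei/A)
    print(e, posterior)                             # e.g. P(E1/A) = 10/27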