
Lecture Sheet F

Chapter 6
1. Introduction to Probability
2. Addition Laws of Probability
3. Conditional probability, Multiplication rule of probability, Bayes’ Theorem

1. Introduction to Probability

A. Some basic concepts:


(1) Experiment:
An experiment is an act that can be repeated under given conditions. A single performance of an experiment is known as a trial; an experiment may consist of one trial or of two or more trials.
Outcomes:
The results of an experiment are known as outcomes.

Example:
(i) Tossing a coin is a trial, and getting a head or a tail is an outcome.
(ii) Throwing a die is a trial, and obtaining 1 or 2 or 3 or 4 or 5 or 6 is an outcome.
(iii) Drawing three balls from a bag containing 3 red and 4 black balls is a trial, and getting one red and two black balls is an outcome.

(2) Random Experiment:


Experiments are called random experiments if the outcomes depend on chance and cannot be predicted with certainty.

Example:
(i) Tossing of a fair coin,
(ii) Throwing of an unbiased die, etc.,
are the examples of random experiments.

(3) Exhaustive cases:


The total number of possible outcomes of a random experiment is known as the exhaustive cases.

Example:
(i) In tossing a fair coin, the exhaustive number of cases is n = 2,
(ii) In throwing an unbiased die, the exhaustive number of cases is n = 6,
(iii) In throwing two unbiased dice, the exhaustive number of cases is n = 6² = 36.

(4) Sample space:


The set of points representing all possible outcomes of a random experiment is
called the sample space and it is denoted by S.
A particular outcome, i.e., an element of S, is called a sample point.

Example:
The sample space S for the experiment of tossing two fair coins is given by
S = {HH, HT, TH, TT}
Here, the sample points are HH, HT, TH and TT.
The number of sample points in a sample space is generally denoted by n(S).

(5) Event:
Any statement regarding one or more sample point(s) of a sample space recorded from a random experiment is known as an event.
In other words, an event is a subset of the sample space.
The empty set ∅, being a subset of the sample space S, is also an event, known as the impossible event.
The sample space S, being a subset of itself, is also an event, known as the sure event.

There are two types of events,


(i) Simple event and
(ii) Compound event

(6) Simple event and compound event


An event is called a simple event if it contains only one sample point.
An event is called a compound event if it contains more than one sample point, i.e., it is the union of simple events.

Example:
Suppose a fair coin is tossed twice. Let H and T denote the head and the tail of the coin respectively. Then the sample space of the experiment is
S = {HH, HT, TH, TT} = {w1, w2, w3, w4}
In this example, there are four simple events, which are w1 = {HH}, w2 = {HT}, w3 = {TH} and w4 = {TT}.
Let A be the event of getting a head on the first coin; then A = {HH, HT} = {w1, w2}. Here A is a compound event, since it contains two sample points.

(7) Favourable cases:

The number of sample points in favour of a statement or an event constitutes the favourable cases.

Example:
(i) Suppose two fair coins are tossed. Then the sample space is
S = {HH, HT, TH, TT}

Here the exhaustive cases are 4.

Let A be the event of getting a head on the first coin; then A = {HH, HT}.
Here the favourable number of cases to the event A is 2.
(8) Equally likely cases:
If all the exhaustive cases of a random experiment have equal chance to occur,
then the cases are called equally likely cases.

Example:
(i) In tossing a fair coin, the outcomes ‘head’ and ‘tail’ are equally likely.
(ii) In throwing an unbiased die, all the six faces are equally likely.

(9) Mutually exclusive cases:


Outcomes or cases are said to be mutually exclusive if no two or more of them can happen simultaneously.

Example:
(i) In tossing a fair coin, the outcomes ‘head’ and ‘tail’ are mutually exclusive because if ‘head’ comes we cannot get ‘tail’, and if ‘tail’ comes we cannot get ‘head’.
(ii) In throwing a balanced die, the six faces numbered 1, 2, 3, 4, 5 and 6 are mutually exclusive.

Definition of probability:
There are two definitions of probability. These are
(i) Mathematical or Classical or a Priori Probability.
(ii) Statistical or Empirical Probability.

(i) Mathematical or a Priori Definition: If in any random experiment the n outcomes are exhaustive, mutually exclusive and equally likely, and m of these are favourable to an event A, then the probability of A is defined by

P(A) = m/n

This definition was given by James Bernoulli.
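To see the classical definition in action, the short Python sketch below (an illustration of ours, not part of the lecture; the function name classical_probability is an assumption) counts favourable and exhaustive cases for the event ‘head on the first coin’ when two fair coins are tossed, giving m/n = 2/4.

from itertools import product

def classical_probability(sample_space, event):
    # P(A) = m/n: favourable cases divided by exhaustive cases
    favourable = [w for w in sample_space if event(w)]
    return len(favourable) / len(sample_space)

# sample space of tossing two fair coins: HH, HT, TH, TT
S = [''.join(w) for w in product('HT', repeat=2)]

# event A: head on the first coin, i.e. {HH, HT}
print(classical_probability(S, lambda w: w[0] == 'H'))   # 0.5, i.e. m/n = 2/4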

Remarks:
(a) 0 ≤ P(A) ≤ 1
The exhaustive number of cases n and the favourable number of cases m are non-negative, and hence P(A) ≥ 0.
Again, m ≤ n (favourable cases cannot exceed the exhaustive number of cases).
Therefore, P(A) = m/n ≤ 1
∴ 0 ≤ P(A) ≤ 1

Impossible event: An event A is said to be an impossible event if P(A) = 0.

Sure event: An event A is said to be a sure event if P(A) = 1.

(b) Drawbacks or limitations of the classical definition of probability:
Mainly, there are three drawbacks of the classical definition of probability:
(i) This definition fails when the total number of possible outcomes is infinite.
(ii) This definition leaves us completely helpless when the possible outcomes are not equally likely.
(iii) It is not always possible to enumerate all the equally likely cases.

Problems/ Assignment:
Problem 1:
Consider an experiment in which two dice are tossed. Write down the sample space and find the probability that
(i) the sum of the spots on the dice is greater than 12,
(ii) the sum of the spots on the dice is divisible by 3,
(iii) the sum is greater than or equal to 2 and is less than or equal to 12.

Solution:
The sample space S for the experiment is given by
S = {1, 2, 3, 4, 5, 6} × {1, 2, 3, 4, 5, 6}
Here
n(S) = 6 × 6 = 36

Writing each sample point compactly as ‘1st die, 2nd die’ (so ‘35’ means the 1st die shows 3 and the 2nd die shows 5), the sample space is

S = { 11  12  13  14  15  16
      21  22  23  24  25  26
      31  32  33  34  35  36
      41  42  43  44  45  46
      51  52  53  54  55  56
      61  62  63  64  65  66 }

(i) Let us define

A1 : the event that ‘the sum of the spots on the dice is greater than 12’.
∴ A1 = ∅
Favourable number of cases to the event A1 = m = 0
Total number of cases = n = 36
∴ P(A1) = m/n = 0/36 = 0
Here A1 is an ‘impossible event’.

(ii) Let us define

A2 : the event that ‘the sum of the spots on the dice is divisible by 3’.
∴ A2 = {12, 15, 21, 24, 33, 36, 42, 45, 51, 54, 63, 66}
Favourable number of cases to the event A2 = m = 12
Total number of cases = n = 36
∴ P(A2) = m/n = 12/36 = 1/3
(iii) Let us define
A3 : the event that ‘the sum is greater than or equal to 2 and is less than or equal to 12’.
∴ A3 = S
Favourable number of cases to the event A3 = m = 36
Total number of cases = n = 36
∴ P(A3) = m/n = 36/36 = 1
Here A3 is a ‘certain (sure) event’.
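The counting in this solution is easy to mirror in code. The following Python sketch (our own illustration, not part of the lecture) enumerates the 36 outcomes and confirms the three probabilities 0, 1/3 and 1 found above.

from itertools import product
from fractions import Fraction

S = list(product(range(1, 7), repeat=2))   # 36 ordered pairs (1st die, 2nd die)

def prob(event):
    # classical probability: favourable cases / exhaustive cases
    return Fraction(sum(1 for w in S if event(w)), len(S))

print(prob(lambda w: sum(w) > 12))          # 0   (impossible event)
print(prob(lambda w: sum(w) % 3 == 0))      # 1/3
print(prob(lambda w: 2 <= sum(w) <= 12))    # 1   (sure event)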

Problem 2: The six faces of one die are numbered 1, 3, 3, 5, 6, 6 and the six faces of another die are numbered 2, 2, 4, 4, 5, 6. The two dice are thrown simultaneously. Write down the sample space and find the probability that the sum of the obtained numbers is 7.

Problem 3: Suppose three coins are tossed at a time. Write down the sample space and find the probability of the following events:
(i) the number of heads exceeds the number of tails,
(ii) the event of getting two heads,
(iii) the event of getting a head in the first trial.

Solution:
The sample space S for the experiment of tossing three coins at a time is given by
S = {H, T} × {H, T} × {H, T}
⇒ S = {H, T} × {HH, HT, TH, TT}
⇒ S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
and n(S) = 2 × 2 × 2 = 8

(i) Let us define

A1 : the event that ‘the number of heads exceeds the number of tails’.
∴ A1 = {HHH, HHT, HTH, THH}
Favourable number of cases to the event A1 = m = 4
Total number of cases = n = 8
∴ P(A1) = m/n = 4/8 = 1/2

(ii) Let us define
A2 : the event of getting two heads.
∴ A2 = {HHT, HTH, THH}
Favourable number of cases to the event A2 = m = 3
Total number of cases = n = 8
∴ P(A2) = m/n = 3/8

(iii) Let us define
A3 : the event of getting a head in the first trial.
∴ A3 = {HHH, HHT, HTH, HTT}
Favourable number of cases to the event A3 = m = 4
Total number of cases = n = 8
∴ P(A3) = m/n = 4/8 = 1/2
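A quick enumeration check of this solution (again an illustrative sketch, not from the lecture) uses the same idea over the 8 three-coin outcomes.

from itertools import product
from fractions import Fraction

S = [''.join(w) for w in product('HT', repeat=3)]   # HHH, HHT, ..., TTT

def prob(event):
    return Fraction(sum(1 for w in S if event(w)), len(S))

print(prob(lambda w: w.count('H') > w.count('T')))   # 1/2
print(prob(lambda w: w.count('H') == 2))             # 3/8
print(prob(lambda w: w[0] == 'H'))                   # 1/2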

2. Addition Laws of Probability

Some basic concepts


(1) Mutually exclusive events:
If two or more events cannot occur together, they are called mutually exclusive events. There is no common sample point between any two mutually exclusive events. Thus, two events A and B are said to be mutually exclusive if
A ∩ B = ∅ ⇒ P(A ∩ B) = P(∅) = 0

(2) Not mutually exclusive events:
Two events A and B are said to be not mutually exclusive if they can occur together, i.e., A ∩ B ≠ ∅.

(3) Independent events:


Events are said to be independent of each other if the happening of any one of them is not affected by, and does not affect, the happening of any of the others.
In other words, two events A and B are said to be independent if and only if
P(A ∩ B) = P(A)·P(B)
For three independent events A, B and C,
P(A ∩ B ∩ C) = P(A)·P(B)·P(C)

(4) Not independent events:


If the happening of an event A is affected by the happening of another event B, then the events are not independent. In this case, P(A ∩ B) ≠ P(A)·P(B).

(5) Addition law of probability:


Theorem 1: If A and B are two mutually exclusive events, then
P(A ∪ B) = P(A) + P(B)
Proof: Let S be the sample space of a random experiment, and let A and B be two mutually exclusive events defined on S. The Venn diagram for A and B is shown below:

[Venn diagram: two non-overlapping regions A and B inside the rectangle S]
Suppose,
the total number of elements in S = n(S),
the favourable number of cases to the event A = n(A),
the favourable number of cases to the event B = n(B),
the favourable number of cases to the event A ∪ B = n(A ∪ B).
Now, by the classical definition of probability, we have
P(A) = n(A)/n(S)
P(B) = n(B)/n(S)
P(A ∪ B) = n(A ∪ B)/n(S)
From the Venn diagram,
n(A ∪ B) = n(A) + n(B)   (∵ A and B are mutually exclusive events)
⇒ n(A ∪ B)/n(S) = [n(A) + n(B)]/n(S)
⇒ n(A ∪ B)/n(S) = n(A)/n(S) + n(B)/n(S)
⇒ P(A ∪ B) = P(A) + P(B)

Theorem 2: If A and B are any two events and are not mutually exclusive, then
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Proof:
Let S be the sample space of a random experiment, and let A and B be two events defined on S that are not mutually exclusive. The Venn diagram for A and B is shown below:

[Venn diagram: two overlapping regions A and B inside the rectangle S]

Note: Aᶜ and Bᶜ denote the complements of A and B.
Suppose,
the total number of elements in S = n(S),
the favourable number of cases to the event A = n(A),
the favourable number of cases to the event B = n(B),
the favourable number of cases to the event A ∪ B = n(A ∪ B),
the favourable number of cases to the event A ∩ B = n(A ∩ B),
the favourable number of cases to the event A ∩ Bᶜ = n(A ∩ Bᶜ),
the favourable number of cases to the event Aᶜ ∩ B = n(Aᶜ ∩ B).

Now, by the classical definition of probability, we have

P(A) = n(A)/n(S)
P(B) = n(B)/n(S)
P(A ∪ B) = n(A ∪ B)/n(S)
P(A ∩ B) = n(A ∩ B)/n(S)
From the Venn diagram,
n(A ∩ Bᶜ) = n(A) − n(A ∩ B)
n(Aᶜ ∩ B) = n(B) − n(A ∩ B)

Since the events (A ∩ Bᶜ), (A ∩ B) and (Aᶜ ∩ B) are disjoint, we have

n(A ∪ B) = n(A ∩ Bᶜ) + n(A ∩ B) + n(Aᶜ ∩ B)
         = n(A) − n(A ∩ B) + n(A ∩ B) + n(B) − n(A ∩ B)
         = n(A) + n(B) − n(A ∩ B)
⇒ n(A ∪ B) = n(A) + n(B) − n(A ∩ B)

Dividing both sides by n(S), we have

n(A ∪ B)/n(S) = [n(A) + n(B) − n(A ∩ B)]/n(S)
⇒ n(A ∪ B)/n(S) = n(A)/n(S) + n(B)/n(S) − n(A ∩ B)/n(S)
⇒ P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

Theorem 3: If A, B and C are any three events defined on a sample space S, then
P(A ∪ B ∪ C) = P(A) + P(B) + P(C), if A, B and C are mutually exclusive.

Theorem 4: If A, B and C are any three events defined on a sample space S, then
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C),
if A, B and C are not mutually exclusive.
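These addition laws translate directly into code. The sketch below (our own illustration; the helper P is an assumption, not lecture notation) represents events as sets of sample points, so P(A ∪ B) computed from the union automatically agrees with the inclusion-exclusion formula of Theorem 2; as a check it uses the two events of Problem 1(i) below.

from itertools import product
from fractions import Fraction

S = set(product(range(1, 7), repeat=2))    # two-dice sample space, n(S) = 36

def P(event):
    # classical probability of an event given as a set of sample points
    return Fraction(len(event), len(S))

A = {w for w in S if w[0] == 5}            # first die shows 5
B = {w for w in S if sum(w) >= 8}          # sum of the faces is at least 8

print(P(A | B))                            # 17/36, directly from the union
print(P(A) + P(B) - P(A & B))              # 17/36, by Theorem 2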

Problems/Assignment:
Problem 1: Two unbiased dice are thrown. Find the probability that
(i) the first die shows 5 or the sum of the upper faces is more than or equal to 8,
(ii) the second die shows 4 or the sum of the upper faces is more than 10.

Solution:
The sample space S for the experiment of throwing two unbiased dice, written with each outcome compactly as ‘1st die, 2nd die’, is given by

S = { 11  12  13  14  15  16
      21  22  23  24  25  26
      31  32  33  34  35  36
      41  42  43  44  45  46
      51  52  53  54  55  56
      61  62  63  64  65  66 }

The total number of equally likely and exhaustive cases is n(S) = 6² = 36.

(i) Let A be the event that the first die shows 5, and
B be the event that the sum of the upper faces of the dice is more than or equal to 8.
We need P(A ∪ B).
Favourable cases to A are
A : 51, 52, 53, 54, 55, 56
∴ n(A) = 6 and P(A) = n(A)/n(S) = 6/36
Favourable cases to B are
B : 26, 35, 36, 44, 45, 46, 53, 54, 55, 56, 62, 63, 64, 65, 66
∴ n(B) = 15 and P(B) = n(B)/n(S) = 15/36
Favourable cases to A ∩ B are
A ∩ B : 53, 54, 55, 56
∴ n(A ∩ B) = 4 and P(A ∩ B) = n(A ∩ B)/n(S) = 4/36
We have
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
         = 6/36 + 15/36 − 4/36
         = 17/36
(ii) Let A be the event that the second die shows 4, and
B be the event that the sum of the upper faces is more than 10.
We need P(A ∪ B).
Favourable cases to A are
A : 14, 24, 34, 44, 54, 64
∴ n(A) = 6 and P(A) = n(A)/n(S) = 6/36
Favourable cases to B are B : 56, 65, 66
∴ n(B) = 3 and P(B) = n(B)/n(S) = 3/36
Favourable cases to A ∩ B : there are none, i.e., A ∩ B = ∅
∴ A and B are mutually exclusive events.
∴ n(A ∩ B) = 0 and P(A ∩ B) = n(A ∩ B)/n(S) = 0/36
We have
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 6/36 + 3/36 − 0/36 = 1/4

Problem 2: In a survey of 100 readers, it is observed that 30 read ‘The Daily Star’, 40 read ‘The Independent’ and 15 read both. Find the probability that a reader reads at least one of the papers.

Solution: Let A be the event that a reader reads ‘The Daily Star’ and
B be the event that a reader reads ‘The Independent’.
Given P(A) = 0.30, P(B) = 0.40 and P(A ∩ B) = 0.15.
We need P(A ∪ B).
∴ P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.30 + 0.40 − 0.15 = 0.55

Problem 3: In an examination, 20% of the students have failed in the statistics course, 30% have failed in the mathematics course and 10% have failed in both courses. A student is randomly selected. Find the probability that he has failed in at least one course.

Solution: Let A be the event that the student has failed in the statistics course, and
B be the event that the student has failed in the mathematics course.
Given P(A) = 0.20, P(B) = 0.30, and P(A ∩ B) = 0.10.
We need P(A ∪ B).
∴ P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.20 + 0.30 − 0.10 = 0.40

Problem 4: Three programmers A, B and C have started to write a computer program. Their chances of success are 1/2, 1/3 and 1/4 respectively. Each of them is writing the program independently. Find the probability that the program will be written correctly.

Solution: The program will be developed if it is properly written by at least one of them. Therefore, we need P(A ∪ B ∪ C). Since all of them are writing independently,
P(A ∩ B) = P(A)·P(B) = 1/2 × 1/3 = 1/6,   P(A ∩ C) = P(A)·P(C) = 1/2 × 1/4 = 1/8,
P(B ∩ C) = P(B)·P(C) = 1/3 × 1/4 = 1/12,   P(A ∩ B ∩ C) = P(A)·P(B)·P(C) = 1/2 × 1/3 × 1/4 = 1/24.
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C)
             = 1/2 + 1/3 + 1/4 − 1/6 − 1/8 − 1/12 + 1/24
             = 3/4
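A quick cross-check of this answer (an illustrative sketch, not part of the lecture) uses the complement: for independent events the program fails only if all three programmers fail, so P(A ∪ B ∪ C) = 1 − P(Aᶜ)·P(Bᶜ)·P(Cᶜ).

from fractions import Fraction

pA, pB, pC = Fraction(1, 2), Fraction(1, 3), Fraction(1, 4)

# inclusion-exclusion, as in the solution above
incl_excl = pA + pB + pC - pA*pB - pA*pC - pB*pC + pA*pB*pC

# complement rule: 1 minus the probability that all three fail
complement = 1 - (1 - pA)*(1 - pB)*(1 - pC)

print(incl_excl, complement)   # 3/4 3/4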

3. Conditional probability, Multiplication rule of probability, Bayes’ Theorem

Conditional probability:
Let A and B be two events defined on a sample space S. Then the conditional probability of A, given that B has already occurred, is defined by

P(A/B) = P(A ∩ B)/P(B),  given P(B) > 0      (i)

Here P(A/B) is known as a conditional probability. Similarly, the conditional probability of B, given that A has already occurred, is given by

P(B/A) = P(B ∩ A)/P(A),  given P(A) > 0      (ii)

Since A ∩ B = B ∩ A,

P(A ∩ B) = P(B ∩ A)

Hence (i) and (ii) can be written respectively as

P(A ∩ B) = P(B)·P(A/B)

P(A ∩ B) = P(A)·P(B/A)

The above rule of probability is known as the multiplication rule of probability. The rule can be generalized for n events.

Independent events: Two events A and B are said to be independent if and only if P(A ∩ B) = P(A)·P(B).

For independent events, P(A/B) = P(A) and P(B/A) = P(B).

Theorem 1: If A and B are two independent events, then (i) A and Bᶜ, and (ii) Aᶜ and Bᶜ are also independent events.

Theorem 2: If A, B and C are three mutually independent events, then A ∪ B and C are also independent.

Problem 1: In a university there are 2000 students, of whom 1500 obtained a first division in the HSC examination and 500 are female. Among those who obtained a first division, 300 are female. A student is selected at random and is found to be female. Find the probability that she obtained a first division in the HSC examination.

Problem 2.
Consider an experiment in which two dice are tossed. Write down the sample
space. Let A be the event that the 1st die shows 4 and B be the event that the 2nd die shows an odd number. Find P(A), P(B), P(A ∩ B), P(A/B) and P(B/A). Show that the events A and B are independent.

Solution:
The sample space S for the experiment is given by
S = {1, 2, 3, 4, 5, 6} × {1, 2, 3, 4, 5, 6}

Writing each outcome compactly as ‘1st die, 2nd die’, the sample space is

S = { 11  12  13  14  15  16
      21  22  23  24  25  26
      31  32  33  34  35  36
      41  42  43  44  45  46
      51  52  53  54  55  56
      61  62  63  64  65  66 }

Total number of cases = n(S) = N = 6 × 6 = 36

A : the event that the 1st die shows 4

∴ A = {(4,1), (4,2), (4,3), (4,4), (4,5), (4,6)}
Favourable number of cases to the event A = n(A) = 6

B : the event that the 2nd die shows an odd number

∴ B = {(1,1), (1,3), (1,5),
       (2,1), (2,3), (2,5),
       (3,1), (3,3), (3,5),
       (4,1), (4,3), (4,5),
       (5,1), (5,3), (5,5),
       (6,1), (6,3), (6,5)}
Favourable number of cases to the event B = n(B) = 18

A ∩ B = {(4,1), (4,3), (4,5)}
Favourable number of cases to the event A ∩ B = n(A ∩ B) = 3

A/B (the cases of A within the reduced sample space B) = {(4,1), (4,3), (4,5)}
Favourable number of cases to the event A/B = n(A/B) = 3

B/A (the cases of B within the reduced sample space A) = {(4,1), (4,3), (4,5)}
Favourable number of cases to the event B/A = n(B/A) = 3

Now,
P(A) = n(A)/N = 6/36 = 1/6

P(B) = n(B)/N = 18/36 = 1/2

P(A ∩ B) = n(A ∩ B)/N = 3/36 = 1/12

P(A/B) = n(A ∩ B)/n(B) = 3/18 = 1/6

P(B/A) = n(A ∩ B)/n(A) = 3/6 = 1/2

We have P(A)·P(B) = (1/6)·(1/2) = 1/12 and
P(A ∩ B) = 1/12

∴ P(A ∩ B) = P(A)·P(B)

Also, P(A/B) = 1/6 = P(A)
P(B/A) = 1/2 = P(B)

So the events A and B are independent.
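The same verification can be scripted. The sketch below (our own illustration; the helper P with a 'given' argument is an assumption) enumerates the 36 outcomes, computes the probabilities as exact fractions, and checks the independence condition P(A ∩ B) = P(A)·P(B).

from itertools import product
from fractions import Fraction

S = set(product(range(1, 7), repeat=2))
A = {w for w in S if w[0] == 4}          # 1st die shows 4
B = {w for w in S if w[1] % 2 == 1}      # 2nd die shows an odd number

def P(event, given=None):
    # conditional probability P(event/given); unconditional if given is None
    space = S if given is None else given
    return Fraction(len(event & space), len(space))

print(P(A), P(B), P(A & B))              # 1/6 1/2 1/12
print(P(A, given=B), P(B, given=A))      # 1/6 1/2
print(P(A & B) == P(A) * P(B))           # True, so A and B are independent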

Problem 3. A bag contains 6 white and 9 black balls. Four balls are drawn at a time. Find the probability that the first draw gives 4 white balls and the second draw gives 4 black balls in each of the following cases:
(i) The balls are replaced before the second draw.
(ii) The balls are not replaced before the second draw.

Solution: Let
A be the event that the first draw gives 4 white balls, and
B be the event that the second draw gives 4 black balls.
With replacement:
P(A ∩ B) = P(A)·P(B/A)
         = P(A)·P(B), as B is independent of A
         = (6C4/15C4) × (9C4/15C4) = (15/1365) × (126/1365) = 6/5915
Without replacement:
P(A ∩ B) = P(A)·P(B/A)
         = (6C4/15C4) × (9C4/11C4) = (15/1365) × (126/330) = 3/715
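The binomial coefficients in this solution are easy to check with Python's math.comb (an illustrative sketch; the variable names are our own).

from fractions import Fraction
from math import comb

# first draw: 4 white balls out of the 6 white among 15 balls
p_A = Fraction(comb(6, 4), comb(15, 4))                 # 15/1365 = 1/91

# with replacement: the bag is restored, so the second draw is independent
p_with = p_A * Fraction(comb(9, 4), comb(15, 4))        # 6/5915

# without replacement: 11 balls remain (2 white, 9 black)
p_without = p_A * Fraction(comb(9, 4), comb(11, 4))     # 3/715

print(p_with, p_without)   # 6/5915 3/715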

Concept of Marginal Probability:
When the sample points are classified by two characteristics, as in the table of Problem 4 below, the probability of a category of one characteristic irrespective of the other (a row or column total divided by the grand total) is called a marginal probability, while the probability of a particular cell is a joint probability.

Problem 4. One hundred students appeared at an examination. Some of them belong to semester-1, some belong to semester-2 and some belong to semester-3. The students appeared at the examinations of mathematics and statistics. The number of students in each subject and each semester is shown below.

Number of students classified by semester and subject

Subject         Semester-1   Semester-2   Semester-3   Total
Mathematics         28           18           19         65
Statistics          12           17            6         35
Total               40           35           25        100

A student is randomly selected from the group of students.

Find the probability that


(i) the selected one is a student of mathematics given that he is a student of
semester-3.
(ii) the selected one is a second semester student and appeared at the statistics
examination.
(iii) the selected one is a student of semester-2.
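A small counting sketch (our own illustration, not part of the lecture; the helper names are assumptions) shows how joint, marginal and conditional probabilities come straight out of the table: a cell count over the grand total gives a joint probability, a row or column total over the grand total gives a marginal probability, and their ratio gives a conditional probability.

from fractions import Fraction

# counts from the table: {subject: {semester: count}}
counts = {
    'Mathematics': {1: 28, 2: 18, 3: 19},
    'Statistics':  {1: 12, 2: 17, 3: 6},
}
N = 100   # grand total

def joint(subject, semester):
    # joint probability P(subject and semester) = cell count / grand total
    return Fraction(counts[subject][semester], N)

def marginal_semester(semester):
    # marginal probability P(semester) = column total / grand total
    return Fraction(sum(row[semester] for row in counts.values()), N)

print(joint('Mathematics', 3) / marginal_semester(3))   # (i)   19/25
print(joint('Statistics', 2))                           # (ii)  17/100
print(marginal_semester(2))                             # (iii) 7/20, i.e. 35/100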

Theorem 3: Bayes’ Theorem
Statement: Let H1, H2, H3, ..., Hn be n mutually exclusive events associated with a sample space S, with P(Hi) > 0 (i = 1, 2, ..., n), and suppose they form a partition of S. Consider an arbitrary event E which is a subset of H1 ∪ H2 ∪ ... ∪ Hn such that P(E) > 0. Then

P(Hi/E) = P(Hi)·P(E/Hi) / [P(H1)·P(E/H1) + P(H2)·P(E/H2) + ... + P(Hn)·P(E/Hn)] = P(Hi ∩ E)/P(E)

Proof: Let us suppose that the probabilities P(H1), P(H2), P(H3), ..., P(Hn) and the conditional probabilities P(E/Hi) are known.
Using the multiplication rule of probability, we have
P(E ∩ Hi) = P(E)·P(Hi/E) = P(Hi)·P(E/Hi)
⇒ P(Hi/E) = P(Hi)·P(E/Hi)/P(E)      (1)

Since E ⊆ H1 ∪ H2 ∪ H3 ∪ ... ∪ Hn, the event E can happen in any of the mutually exclusive cases H1 ∩ E, H2 ∩ E, H3 ∩ E, ..., Hn ∩ E. We have

E = (H1 ∩ E) ∪ (H2 ∩ E) ∪ (H3 ∩ E) ∪ ... ∪ (Hn ∩ E)      (Hi ∩ E ⊆ Hi, i = 1, 2, ..., n)

P(E) = P[(H1 ∩ E) ∪ (H2 ∩ E) ∪ (H3 ∩ E) ∪ ... ∪ (Hn ∩ E)]

Using the addition law of probability for mutually exclusive events,

P(E) = P(H1 ∩ E) + P(H2 ∩ E) + P(H3 ∩ E) + ... + P(Hn ∩ E)
     = P(H1)·P(E/H1) + P(H2)·P(E/H2) + P(H3)·P(E/H3) + ... + P(Hn)·P(E/Hn)      (2)

Therefore, from (1) and (2), we have

P(Hi/E) = P(Hi)·P(E/Hi) / [P(H1)·P(E/H1) + P(H2)·P(E/H2) + ... + P(Hn)·P(E/Hn)] = P(Hi ∩ E)/P(E)

Hence proved.

Note:
(i) The following terminologies are also used when Bayes’ theorem is applied:

Hypotheses: The events H1, H2, H3, ..., Hn are called the hypotheses.

A priori probability: The probability P(Hi) is considered as the a priori (prior) probability of the hypothesis Hi.

A posteriori probability: The probability P(Hi/E) is considered as the a posteriori (posterior) probability of the hypothesis Hi.

(ii) Bayes’ Theorem Applications


One of the many applications of Bayes’ theorem is Bayesian inference, a particular approach to statistical inference. Bayesian inference has found application in a wide range of activities, including medicine, science, philosophy, engineering, sports and law. For example, Bayes’ theorem can be used to assess the reliability of a medical test result by combining how likely any given person is to have the disease with the test’s overall accuracy. Bayes’ theorem combines prior probability distributions with observed evidence to generate posterior probabilities; in Bayesian statistical inference, the prior probability is the probability of an event before new data are collected.
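As an illustration (our own sketch, not part of the lecture; the function name posterior is an assumption), Bayes’ theorem becomes a short computation once the priors P(Hi) and the likelihoods P(E/Hi) are listed. The numbers below use the two-box setup of Problem 5, which follows.

from fractions import Fraction

def posterior(priors, likelihoods):
    # Bayes' theorem: P(Hi/E) = P(Hi)P(E/Hi) / sum of P(Hj)P(E/Hj) over j
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)                    # P(E), by the total probability rule
    return [j / total for j in joint]

priors = [Fraction(1, 2), Fraction(1, 2)]          # each box equally likely
likelihoods = [Fraction(4, 7), Fraction(3, 10)]    # P(white ball / box)

post = posterior(priors, likelihoods)
print(post[0], post[1])   # 40/61 21/61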

Problem 5. There are two identical boxes containing 4 white and 3 red balls, and 3 white and 7 red balls respectively. A box is chosen at random and a ball is drawn at random from it. If the ball is white, what is the probability that it came from the first box?

Solution: Let
H1 : the event that the first box is chosen
H2 : the event that the second box is chosen
E : the event of getting a white ball

E/H1 : the event of getting a white ball from the first box

E/H2 : the event of getting a white ball from the second box

Now, P(H1) = P(H2) = 1/2   (as the boxes are identical and are chosen at random)
and
P(E/H1) = 4C1/7C1 = 4/7,   P(E/H2) = 3C1/10C1 = 3/10

We want P(H1/E).

P(H1/E) = P(H1 ∩ E)/P(E) = P(H1)·P(E/H1)/P(E)

Now, E = (H1 ∩ E) ∪ (H2 ∩ E)
P(E) = P[(H1 ∩ E) ∪ (H2 ∩ E)]
P(E) = P(H1 ∩ E) + P(H2 ∩ E)
P(E) = P(H1)·P(E/H1) + P(H2)·P(E/H2) = (1/2)·(4/7) + (1/2)·(3/10)

Therefore,

P(H1/E) = P(H1 ∩ E)/P(E) = P(H1)·P(E/H1)/[P(H1)·P(E/H1) + P(H2)·P(E/H2)]
        = [(1/2)·(4/7)] / [(1/2)·(4/7) + (1/2)·(3/10)]
        = 40/61

Problem 6. In one urn there are 4 white and 5 black balls, and in another urn there are 3 white and 4 black balls. Each urn is equally likely to be selected. A ball is selected from the chosen urn and is found to be white. Find the probability that (i) it was selected from urn-1, (ii) it was selected from urn-2.

Problem 7. A computer centre has 100 computers, collected from three companies A, B and C. The numbers of computers collected from these companies are 50, 30 and 20 respectively. The probabilities that a computer from these companies gives trouble on a given day are 0.15, 0.20 and 0.25 respectively. One day during work a computer is found defective. What is the probability that it was collected from company A?

(Prof. Dr. Rahmat Ali)
