
PROBABILITY & STATISTICS (MA20205, LTP 3-0-0, CRD 3)

BUDDHANANDA BANERJEE

Contents
1. Preliminary
2. Probability: Definition & laws
3. Random variable
4. Modeling with Random Variables
4.1. Examples of discrete random variables
4.2. Examples of continuous random variables
5. Joint and conditional distributions
5.1. Laws of expectation
6. Law of Large Numbers
7. Estimation
8. Testing of Hypothesis

Date: Updated on August 21, 2023.


Weekly classes: MON (11:00-11:55), TUE (08:00-08:55), TUE (09:00-09:55).

1. Preliminary
Definition 1. A set Ω is said to be finite if there exists an n ∈ N and
a bijection from Ω onto {1, 2, . . . , n} . An infinite set Ω is said to be
countable if there is a bijection from N onto Ω.
If Ω is an infinite countable set, then using any bijection f : N → Ω,
we can list the elements of Ω as a sequence f (1), f (2), f (3), . . . so that
each element of Ω occurs exactly once in the sequence. Conversely, if
you can write the elements of Ω as such a sequence, this defines a
bijection from N onto Ω (send 1 to the first element of the sequence,
2 to the second element, etc.).
Example 2. The set of integers Z is countable. Define f : N → Z by

        f (n) = n/2,           if n is even,
        f (n) = −(n − 1)/2,    if n is odd.

It is clear that f maps N into Z. Thus, we have found a bijection
from N onto Z, which shows that Z is countable. This function is a
formal way of saying that we can list the elements of Z as

0, +1, −1, +2, −2, +3, −3, ...

Check that f is one-one and onto.
Example 3. The set N × N is countable. Rather than give a formula,
we list the elements of N × N as follows:

(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1), ...
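The listing above walks the pairs in blocks of constant coordinate sum. A minimal Python sketch (illustrative only, not part of the notes) that generates the same order:

```python
# Diagonal enumeration of N x N: pairs (i, j) are listed in blocks of
# constant sum i + j, exactly as in the sequence of Example 3.
def diagonal_pairs(count):
    pairs = []
    s = 2  # current value of i + j
    while len(pairs) < count:
        for i in range(1, s):           # i runs 1..s-1, and j = s - i
            pairs.append((i, s - i))
            if len(pairs) == count:
                break
        s += 1
    return pairs

pairs10 = diagonal_pairs(10)
# first ten pairs: (1,1), (1,2), (2,1), (1,3), (2,2), (3,1), (1,4), (2,3), (3,2), (4,1)
```

Since each pair appears exactly once at position determined by its coordinate sum, the function implicitly defines a bijection from N onto N × N.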
Exercise 4. Define a bijection from N onto N × N, and hence show
that N × N is countable.
Example 5. Z × Z is countable.
Example 6. The set of rational numbers Q is countable.
Theorem 7. The set of real numbers R is not countable.
Proof. The extraordinary proof of this fact is due to Cantor, and the
core idea, called the ‘diagonalization argument’, is one that can be used
in many other contexts.
Consider any function f : N → [0, 1]. We show that it is not onto,
and hence not a bijection. Indeed, use the decimal expansion to write
a number x ∈ [0, 1] as 0.x1 x2 x3 . . . where xi ∈ {0, 1, . . . , 9}. Write
the decimal expansion for each of the numbers f (1), f (2), f (3), . . . as
follows:
f (1) = 0.X1,1 X1,2 X1,3 . . .
f (2) = 0.X2,1 X2,2 X2,3 . . .
f (3) = 0.X3,1 X3,2 X3,3 . . .
······ ··· ···
Let Y1 , Y2 , Y3 , . . . be any numbers in {0, 1, . . . , 9} with the only condi-
tion that Yi ̸= Xi,i . Clearly, it is possible to choose Yi like this. Now,
consider the number y = 0.Y1 Y2 Y3 . . . which is a number in [0, 1]. How-
ever, it does not occur in the above list. Indeed, y disagrees with f (1)
in the first decimal place, disagrees with f (2) in the second decimal
place, etc. Thus, y ̸= f (i) for any i ∈ N which means that f is not
onto [0, 1].
Theorem 8. No function f : N → [0, 1] is onto; hence there is no
bijection from N onto [0, 1], and so [0, 1] is not countable.
Obviously, if there is no onto function onto [0, 1], there cannot be an
onto function onto R. Thus, R is also uncountable.
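To see the mechanics of the diagonal construction concretely, here is a toy Python sketch (my own illustration; the digit rows are made-up inputs). Choosing only the digits 5 and 6 avoids the subtlety of non-unique decimal expansions such as 0.0999... = 0.1:

```python
# Cantor's diagonal construction on a toy list of decimal expansions:
# given digit strings for f(1), f(2), ..., build y with Y_i != X_{i,i}.
def diagonal_miss(rows):
    # digit i of y differs from digit i of row i (5 unless that digit is 5)
    return "".join("5" if row[i] != "5" else "6" for i, row in enumerate(rows))

rows = ["1415926", "7182818", "4142135"]   # hypothetical expansions
y = diagonal_miss(rows)
# 0.y disagrees with 0.rows[k] in decimal place k, so it is not in the list
```

However long the (countable) list, the same construction produces a number missing from it, which is exactly why no list can exhaust [0, 1].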
Example 9. Let A1 , A2 , ... be subsets of a set Ω. Suppose each Ai is
countable (finite is allowed). Then, ∪i Ai is also countable.
2. Probability : Definition & laws


Definition 10. Random experiment: A random experiment is a
physical phenomenon which satisfies the following.
(1) It has more than one possible outcome.
(2) The outcome of a particular trial is not known in advance.
(3) It can be repeated countably many times under identical conditions.
Example 11. (a) Tossing a coin, (b) Rolling a die and (c) Arranging
52 cards etc.
Definition 12. Sample space: The set of all possible outcomes of a
random experiment is known as the sample space of the experiment,
and it is denoted by Ω or S.
Example 13. For the above examples the sample spaces are
(a) {H, T }, (b) {1, 2, 3, 4, 5, 6}, (c) {π|π is any permutation of 52 cards}
respectively.

Definition 14. Classical definition of probability: If the sample
space (Ω) of a random experiment is a finite set and A ⊆ Ω, the
probability of A is defined as

        P (A) = |A| / |Ω|

under the assumption that all outcomes are equally likely. Here | · |
denotes the cardinality of a set.
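A minimal Python sketch of this definition (the die event is my example, not from the notes):

```python
from fractions import Fraction

# Classical probability on a finite sample space with equally likely
# outcomes: P(A) = |A| / |Omega|, illustrated with one roll of a die.
omega = {1, 2, 3, 4, 5, 6}                  # sample space of one die roll
A = {w for w in omega if w % 2 == 0}        # event: "even face"
P_A = Fraction(len(A), len(omega))          # |A| / |Omega|
# P_A equals 1/2
```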

Exercise 15. Consider the equation a1 + a2 + · · · + ar = n, where r < n.
Suppose a computer provides an integral solution of it at random such
that each ai ∈ N ∪ {0} for any solution. Find the probability that each
ai ∈ N only, for a solution.

Definition 16. Frequency definition of probability: If the sample
space (Ω) of a random experiment is a countable set and A ⊆ Ω, the
probability of A is defined as

        P (A) = lim_{n↑∞} |An| / |Ωn|,

where lim_{n↑∞} An = A, lim_{n↑∞} Ωn = Ω, each Ωn is finite, and An ⊆ Ωn.
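The frequency viewpoint can be illustrated informally by simulation: the relative frequency of an event stabilises as the number of trials grows. A Python sketch with a fair coin (my example, not the notes'):

```python
import random

# Relative frequency of "heads" in n simulated fair-coin tosses;
# for large n the frequency settles near 1/2 (an informal illustration
# of the frequency definition, not the formal limit of sets).
random.seed(0)                  # fixed seed so the run is reproducible
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
freq = heads / n
# freq is close to 1/2
```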


Exercise 17. What is the probability that a randomly chosen number
from N will be an even number?
Exercise 18. What is the probability that a randomly chosen number
from N will be a k-digit number? Can you conclude an answer with the
frequency definition of probability?
Definition 19. Algebra: A collection A of the subsets of Ω is called
an algebra if
(1) Ω ∈ A,
(2) A ∈ A implies Ac ∈ A [closed under complementation],
(3) A, B ∈ A implies A ∪ B ∈ A [closed under finite union].
Definition 20. σ-algebra or σ-field: An algebra A of the subsets
of Ω is called a σ-algebra/σ-field if Ai ∈ A for i = 1, 2, . . . implies
∪_{i=1}^∞ Ai ∈ A [closed under countable union].

Example 21. (a) A = {∅, Ω}, (b) A = {∅, Ω, A, Ac }, (c) A = 2^Ω
[power set].

Definition 22. Axiomatic definition of probability (Kolmogorov): If A is a σ-algebra of the subsets of a non-empty set Ω,
then the probability (P ) is defined to be a function P : A → [0, 1]
which satisfies
(1) P (Ω) = 1,
(2) P (A) ≥ 0 for any A ∈ A,
(3) {Ai } ⊆ A implies P (∪_{i=1}^∞ Ai ) = Σ_{i=1}^∞ P (Ai ) if Ai ∩ Aj = ∅ ∀i ≠ j.

Example 23. What is the probability that a randomly chosen number
from Ω = [0, 1] will be
(1) a rational number?
(2) less than 0.4?
Can you conclude an answer with the classical / frequency definition
of probability?
Definition 24. Probability space: (Ω, A, P ) is known as a proba-
bility space.
Definition 25. Event: For a given Probability space (Ω, A, P ) if A ⊆
Ω and A ∈ A then A is called an event.
PROBABILITY & STATISTICS(MA20205, LTP- 3-0-0,CRD- 3) 6

Definition 26. Conditional Probability: For a given probability
space (Ω, A, P ), if A and B are two events such that P (B) > 0, then the
conditional probability of A given B is defined as

        P (A|B) = P (A ∩ B) / P (B).
Definition 27. Independent events: For a given probability space
(Ω, A, P ) the two events A and B are called independent if P (A|B) =
P (A), which implies
P (A ∩ B) = P (A)P (B).
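Independence can be checked exactly on a finite sample space. A Python sketch with two fair dice (events of my choosing, not from the notes):

```python
from fractions import Fraction
from itertools import product

# Two independent rolls of a fair die: 36 equally likely outcomes.
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    # classical probability |A| / |Omega| of the event (a predicate)
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

pA = prob(lambda w: w[0] % 2 == 0)                   # A: first die even
pB = prob(lambda w: w[1] >= 5)                       # B: second die >= 5
pAB = prob(lambda w: w[0] % 2 == 0 and w[1] >= 5)    # A and B together
# pAB == pA * pB == 1/6, so A and B are independent
```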
Remark 28. If the probability function P is changed to some P1 on the
same (Ω, A) then events A and B may not be independent any more.
Definition 29. Pairwise independence: For a given probability
space (Ω, A, P ) consider a sequence of events {Ai }. These events are
called pairwise independent if
        P (Ai ∩ Aj ) = P (Ai )P (Aj ) ∀i ≠ j.
Definition 30. Mutual independence: For a given probability space
(Ω, A, P ) consider a sequence of events {Ai }. These events are called
mutually independent if
        P (Ai1 ∩ Ai2 ∩ · · · ∩ Aik ) = P (Ai1 )P (Ai2 ) · · · P (Aik )
for any distinct indices i1 , i2 , . . . , ik and any k ∈ N.
Exercise 31. Give an example to show that pairwise independence
does not imply mutual independence.
Definition 32. Mutually exclusive events: For a given probability
space (Ω, A, P ) a sequence of events {Ai } is called mutually exclusive
if Ai ∩ Aj = ∅ ∀i ≠ j.
Definition 33. Mutually exhaustive events: For a given probability space (Ω, A, P ) a sequence of events {Ai } is called mutually
exhaustive if ∪_{i=1}^∞ Ai = Ω.

Remark 34. Whether events are mutually exclusive or exhaustive does
not depend on the probability function.
Definition 35. Partition: For a given probability space (Ω, A, P ) a
sequence of events {Ai } is called a partition of Ω if {Ai } are mutually
exclusive and exhaustive.
Exercise 36. Prove the following properties:
(1) P (Ac ) = 1 − P (A)
(2) P (∅) = 0
(3) If A ⊆ B then P (A) ≤ P (B)
(4) 1 − P (∪_{i=1}^∞ Ai ) = P (∩_{i=1}^∞ Ai^c )
(5) P (A ∪ B) = P (A) + P (B) − P (A ∩ B)
(6) P (∪_{i=1}^n Ai ) ≤ Σ_{i=1}^n P (Ai )
(7) P (∪_{i=1}^n Ai ) = Σ_{k=1}^n (−1)^{k−1} Sk , where

        Sk = Σ_{1≤i1<i2<···<ik≤n} P (Ai1 ∩ · · · ∩ Aik )
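Property (7), inclusion-exclusion, can be checked numerically on a small example (the sample space and events below are mine, chosen for illustration):

```python
from fractions import Fraction

# Inclusion-exclusion for three events inside Omega = {0, ..., 9}
# with equally likely outcomes.
omega = set(range(10))
A1, A2, A3 = {0, 1, 2, 3}, {2, 3, 4, 5}, {3, 5, 7, 9}

def P(E):
    return Fraction(len(E), len(omega))

lhs = P(A1 | A2 | A3)                           # P(union)
S1 = P(A1) + P(A2) + P(A3)                      # singles
S2 = P(A1 & A2) + P(A1 & A3) + P(A2 & A3)       # pairs
S3 = P(A1 & A2 & A3)                            # triple
# lhs == S1 - S2 + S3, property (7) with n = 3
```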

Exercise 37. Suppose n letters are put in n envelopes with distinct
addresses. What is the probability that no letter will reach the correct
address? What is the limiting probability as n ↑ ∞?

Theorem 38. Bayes' Theorem: Let A1 , A2 , · · · , Ak be a partition of
Ω, where (Ω, A, P ) is a probability space with P (Ai ) > 0 ∀i, and let
B be an event with P (B) > 0. Then

        P (Ai |B) = P (B|Ai )P (Ai ) / Σ_{j=1}^k P (B|Aj )P (Aj ).
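A short numerical sketch of the theorem (the priors and likelihoods below are hypothetical numbers, not taken from the notes):

```python
from fractions import Fraction

# Bayes' theorem on a made-up two-part partition {A_1, A_2}:
# priors P(A_i) and likelihoods P(B | A_i) are assumed values.
prior = [Fraction(1, 3), Fraction(2, 3)]                 # P(A_1), P(A_2)
like = [Fraction(3, 4), Fraction(1, 4)]                  # P(B|A_1), P(B|A_2)
pB = sum(p * l for p, l in zip(prior, like))             # total probability of B
posterior = [p * l / pB for p, l in zip(prior, like)]    # P(A_i | B)
# the posterior probabilities sum to 1; posterior[0] = 3/5
```

The denominator is exactly the total-probability sum in the theorem, so the posteriors always form a probability distribution over the partition.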

Exercise 39. There are three drawers in a table. The first drawer
contains two gold coins. The second drawer contains a gold and a
silver coin. The third one contains two silver coins. Now a drawer is
chosen at random and a coin is then chosen randomly from it. If it is
found that a gold coin has been selected, what is the probability that
the second drawer was chosen?
Exercise 40. Let X and Y be the face values of a fair die when rolled
twice independently. Find the probability that X + Y + XY is odd.

Exercise 41. Consider the quadratic equation u2 − Y u + X = 0,
where (X, Y ) is a random point chosen uniformly from a unit square.
What is the probability that the equation will have a real root?
Exercise 42. Six distinct balls are placed randomly into three boxes
A, B, C. For each ball the probability of going into a specific box is
1/3. Find the probability that box A will contain at least two balls.
Exercise 43. A circular target of unit radius is divided into four
annular zones with radii 1/6, 1/2, 2/3 and 1 respectively. If 6 shots
are fired and all hit inside the disc of radius 1/2, find the probability
that at least one of these hits is inside the disc of radius 1/6.
Exercise 44. A random point is uniformly chosen from the interior of
a unit disk. Find the maximum probability that the point will lie in an
isosceles triangle which has one vertex at the centre of the disk and the
other two vertices on the circumference of the disk.
Exercise 45. A fair die is rolled once. If the observed face value
is X, then it is rolled independently X more times. Find the
probability that the first roll will have a face value greater than or
equal to the face values observed in the subsequent roll(s).
Exercise 46. Give a randomized algorithm to approximate the value of π.
Exercise 47. Give a randomized algorithm to approximate the value of e.
Exercise 48. Let a and b be randomly chosen from N. Find the value of
P (gcd(a, b) = 1). Find its connection with Euler's prime product and
Riemann's zeta function. [FUN EXERCISE]
# Date: 26 July 2019

## Pi estimation
## Range of X
a1 <- 0
b1 <- 1
## Range of Y
a2 <- 0
b2 <- 1
itrn <- 10000                    # iteration number
x <- runif(itrn, a1, b1)         # generate X
y <- runif(itrn, a2, b2)         # generate Y
pi_true <- rep(pi, itrn)         # true value of pi
pi_hat <- array(0, dim = c(itrn))
count <- array(0, dim = c(itrn))
xx <- seq(a1, b1, by = 0.01)     # sequence on [0,1]

for (i in 1:itrn) {
  count[i] <- (x[i]^2 + y[i]^2 < 1)  # is (X,Y) inside the quarter circle?
  pi_hat[i] <- 4 * (sum(count) / i)  # proportion in circle among i trials
}
# Plot
plot(pi_hat, type = "l")
lines(pi_true, col = 2)

s0 <- which(count == 0)  # points outside the circle
s1 <- which(count == 1)  # points inside the circle
plot(y[s0] ~ x[s0], pch = 20, cex = 0.5, xlab = "X", ylab = "Y")
lines(y[s1] ~ x[s1], col = 2, type = "p", pch = 20, cex = 0.5)
lines(sqrt(1 - xx^2) ~ xx, col = 3, lwd = 4)  # boundary of the unit circle
[Figure: pi_hat converging to the true value of π over 10000 iterations (pi_hat vs Index)]

[Figure: simulated (X, Y) points inside (red) and outside the quarter circle, with the circle boundary]

# e estimation
# Date: 26 July 2019
a1 <- 1
b1 <- 2
a2 <- 0
b2 <- 1
itrn <- 10000
x <- runif(itrn, a1, b1)
y <- runif(itrn, a2, b2)
e_true <- rep(exp(1), itrn)
e_hat <- array(0, dim = c(itrn))
count <- array(0, dim = c(itrn))
xx <- seq(a1, b1, by = 0.01)
for (i in 1:itrn) {
  count[i] <- (x[i] * y[i] < 1)  # is (X,Y) below the curve y = 1/x?
  area <- sum(count) / i         # estimates integral of 1/x on [1,2] = log(2)
  e_hat[i] <- 2^(1 / area)       # since 2^(1/log(2)) = e
}
plot(e_hat, type = "l")
lines(e_true, col = 2)
s0 <- which(count == 0)
s1 <- which(count == 1)
plot(y[s1] ~ x[s1], col = 2, pch = 20, cex = 0.5, xlab = "X", ylab = "Y")
lines(y[s0] ~ x[s0], type = "p", pch = 20, cex = 0.5)
lines((1 / xx) ~ xx, col = 3, lwd = 4)  # the curve y = 1/x
[Figure: e_hat converging to the true value of e over 10000 iterations (e_hat vs Index)]

[Figure: simulated (X, Y) points below (red) and above the curve y = 1/x on [1, 2]]

3. Random variable
• Primary data: Primary data is a type of data that is collected
by researchers directly from main sources through interviews,
surveys, experiments, etc.
• Secondary data: Secondary data is existing data generated/collected
by large government institutions, healthcare facilities, research
groups, etc., as part of organizational record keeping. The data is
then extracted from more varied datafiles.

Categorical Data
Nominal: Categories with names only, without ordering
Ordinal: Categories with names and ordering

Numerical Data
Countable/discrete: Values from the integers (Z) or a function of them
Continuous [uninterrupted]: Values from intervals (a, b), (a, b], [a, b),
[a, b], (−∞, a], [b, ∞), etc.
PROBABILITY & STATISTICS(MA20205, LTP- 3-0-0,CRD- 3) 16

Definition 49. Random variable: Let (Ω, A, P ) be a probability


space. Then a function X : Ω → R is called a random variable if

X −1 ((−∞, x]) ≡ {ω|X(ω) ≤ x} ∈ A ∀x ∈ R

Remark 50. A random variable is a deterministic function which
has nothing random in it; the randomness lies only in its argument ω.
Remark 51. Consider a function g : R → R and a random variable X
on (Ω, A, P ). Then Y = g(X) is also a random variable, provided that
X −1 (g −1 ((−∞, x])) ∈ A for every x ∈ R (for instance, when g is
continuous). In that case Q((−∞, x]) = P (Y ∈ (−∞, x]) = P (X ∈
g −1 ((−∞, x])), which is known as the push-forward of probability.
Remark 52. Vector Valued Random variable: (X1 (ω), X2 (ω), · · · , Xk (ω))
is a vector valued random variable where ω ∈ Ω.
Remark 53. Let X and Y be random variables. Then,
• aX + bY is a random variable for all a, b ∈ R.
• max{X, Y } and min{X, Y } are random variables.
• XY is a random variable.
• Provided Y (ω) ≠ 0 for each ω ∈ Ω, X/Y is a random variable.

Definition 54. Cumulative distribution function (c.d.f.): The
cumulative distribution function of a random variable X is a function
F : R → [0, 1] defined as
F (x) = P (X ≤ x)
     = P (X −1 ((−∞, x]))
     = P ({ω|X(ω) ∈ (−∞, x]}) ∀x ∈ R.
A c.d.f. has the following properties:
(1) F (−∞) = lim_{x↓−∞} F (x) = 0
(2) F (∞) = lim_{x↑∞} F (x) = 1
(3) F (a) ≤ F (b) ∀a ≤ b ∈ R [non-decreasing]
(4) F (a) = lim_{x↓a} F (x) ∀a ∈ R [right-continuous]
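As a concrete illustration (a die example of my own, not from the notes), the c.d.f. of a fair die is a non-decreasing, right-continuous step function:

```python
from fractions import Fraction

def F(x):
    # c.d.f. of one roll of a fair die: jumps of size 1/6 at x = 1, ..., 6
    if x < 1:
        return Fraction(0)
    return Fraction(min(int(x), 6), 6)

# F(x) = 0 for x < 1, F(x) = 1 for x >= 6, and F is non-decreasing
values = [F(x) for x in (0.5, 1, 2.5, 6, 100)]
```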
PROBABILITY & STATISTICS(MA20205, LTP- 3-0-0,CRD- 3) 17

Remark 55. The cumulative distribution function uniquely identifies
the distribution of a random variable.

Definition 56. Discrete valued random variable: For a given


probability space (Ω, A, P ) a random variable X is said to be a discrete
valued random variable if S = {X(ω)|ω ∈ Ω} is a finite or countably
infinite set and X −1 (si ) ∈ A for all si ∈ S.

Remark 57. There can be finitely or countably many jump discontinuities in the c.d.f. of a random variable. For a discrete random
variable, the sum of the magnitudes of the jumps is one, which is the
total probability.

Definition 58. Probability mass function (p.m.f.): If X is a discrete valued random variable on a given probability space (Ω, A, P ),
then the non-negative function f (x) := P (X = x) on R is called the
probability mass function or discrete density function of the random
variable X. A probability mass function has the following properties:
• f (x) ≥ 0 ∀x ∈ R.
• S = {x|f (x) > 0} is finite or a countably infinite set.
• Σ_{s∈S} f (s) = 1

Definition 59. Continuous valued random variable: For a given
probability space (Ω, A, P ) a random variable X is said to be a continuous valued random variable if P (X = x) = 0 ∀x ∈ R.
Definition 60. Probability density function (p.d.f.): If X is
a continuous valued random variable on a given probability space
(Ω, A, P ) with c.d.f. F (·), then a non-negative function f : R → [0, ∞)
is called a probability density function of X if

        P (X ∈ A) = ∫_A f (x) dx.

Remark 61. In particular, for A = (−∞, x] for any x ∈ R,

        f (x) = (d/dx) F (x) = (d/dx) ∫_{−∞}^{x} f (t) dt.

Definition 62. Expectation: The expectation of a random variable
X with c.d.f. FX (·) is defined as E(X) = ∫ x dFX (x), where

        ∫ x dFX (x) = Σ_x x f (x),      if Σ_x |x| f (x) < ∞, for discrete X,
                    = ∫_x x f (x) dx,   if ∫_x |x| f (x) dx < ∞, for continuous X.

Exercise 63. Find the expectation of the random variables with the
following densities:
(a) f (x) = 1/(π(1 + x^2)) when x ∈ R
(b) f (x) = 1/(|x|(1 + |x|)) when x ∈ S = {(−1)^n n | n ∈ N}
Definition 64. Moment generating function: The moment gen-


erating function (m.g.f.) of a random variable X is defined as
MX (t) = E(etX ) if E(etX ) < ∞ ∀t ∈ (−ϵ, ϵ) for some ϵ > 0

• The cumulative distribution function (c.d.f.) uniquely identifies
the probability distribution of a random variable.
• The moment generating function (m.g.f.), if it exists, uniquely
identifies the probability distribution of a random variable.
• A probability density function identifies the probability distribution of a random variable only up to a set of length (or volume)
zero, so it is not unique in general.

Remark 65. A probability mass function can be considered as a discrete
density function with respect to the counting measure. If X is a discrete
valued random variable with P (X ∈ S) = 1, where S is a countable
set, then a non-negative function f is called a probability mass function
or discrete density function of the random variable X if

        P (X ≤ x) = Σ_{s∈S, s≤x} f (s) ∀x ∈ R.

Theorem 66. If X is a non-negative integer-valued random variable
with finite expectation then

        E(X) = Σ_{k=1}^∞ P (X ≥ k).
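The tail-sum formula can be checked exactly on a small example (a fair die, my choice of illustration):

```python
from fractions import Fraction

# Theorem 66 checked on a fair die: E(X) equals the sum of the
# tail probabilities P(X >= k) for k = 1, ..., 6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}
mean = sum(k * p for k, p in pmf.items())
tail_sum = sum(sum(p for j, p in pmf.items() if j >= k) for k in range(1, 7))
# mean == tail_sum == 7/2
```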

Definition 67. Raw moment: Let X be a discrete valued random
variable with p.m.f. f (·) such that Σ_x |x^r| f (x) < ∞. Then the rth
order raw moment of X is defined as

        µ′_r = E(X^r) = Σ_x x^r f (x).
Definition 68. Central moment: Let X be a discrete valued random
variable with p.m.f. f (·) such that Σ_x |(x − µx)^r| f (x) < ∞. Then the
rth order central moment of X is defined as

        µ_r = E(X − µx)^r = Σ_x (x − µx)^r f (x).

• The mean of a random variable X is µ′_1 = E(X) = Σ_x x f (x) = µx.
• The variance of a random variable X is µ_2 = E(X − µx)^2 =
Σ_x (x − µx)^2 f (x) = Var(X).

Exercise 69. Prove that:

        (1/n) Σ_{i=1}^n (xi − x̄)^2 = (1/(2n^2)) Σ_{i=1}^n Σ_{j=1}^n (xi − xj)^2

Exercise 70. Show that g(a) = (1/n) Σ_{i=1}^n (xi − a)^2 is minimum if a = x̄.

Moments from the moment generating function: Let X be a discrete valued random variable with moment generating function

        MX (t) = E(e^{tX}) = Σ_{k=0}^∞ t^k E(X^k) / k!

Then one can obtain the kth order raw moment from the m.g.f. by

        (∂^k/∂t^k) MX (t)|_{t=0} = µ′_k.
Exercise 71. Prove the following inequalities:
• Markov's Inequality: If X is a non-negative valued random
variable then

        P (X > t) ≤ E(X)/t ∀t > 0.

• Chebyshev's Inequality: P (|X − µx| > ϵ) ≤ E(X − µx)^2 / ϵ^2.
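Chebyshev's bound is usually far from tight, which a quick simulation makes visible (the die example and the choice ϵ = 2 are mine):

```python
import random

# Empirical check of Chebyshev's inequality for X uniform on {1,...,6}:
# P(|X - mu| > eps) <= Var(X) / eps^2.
random.seed(1)
n = 100_000
sample = [random.randint(1, 6) for _ in range(n)]    # fair die rolls
mu, var = 3.5, 35 / 12                               # exact mean and variance
eps = 2.0
emp = sum(1 for x in sample if abs(x - mu) > eps) / n  # only faces 1 and 6 qualify
bound = var / eps ** 2                                 # Chebyshev bound ~ 0.729
# emp is near 1/3, well below the bound
```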
[Figure: Bound in variance — the pi_hat trajectory over 10000 iterations (pi_hat vs Index)]

Definition 72. For a discrete probability distribution, a median is by
definition any real number m that satisfies the inequalities

        P (X ≤ m) ≥ 1/2 and P (X ≥ m) ≥ 1/2,

and for a continuous probability distribution,

        P (X ≤ m) = 1/2 = P (X ≥ m).
Exercise 73. Graphically show that g(a) = (1/n) Σ_{i=1}^n |xi − a| is
minimum if a is a median of {x1 , · · · , xn }. Observe that the median
need not be unique.
Table 1. Moment based summary

Measure    Theoretical                                           Sample
Mean       µ′1 = µ = E(X)                                        m′1 = x̄ = (1/n) Σ_{i=1}^n xi
Variance   µ2 = σ^2 = E(X − µ)^2                                 m2 = s^2 = (1/n) Σ_{i=1}^n (xi − x̄)^2
Skewness   γ1 = µ3/σ^3 = E[(X − µ)^3]/(E[(X − µ)^2])^{3/2}       g1 = m3/m2^{3/2}, m3 = (1/n) Σ_{i=1}^n (xi − x̄)^3
Kurtosis   γ2 = µ4/σ^4 − 3 = E[(X − µ)^4]/(E[(X − µ)^2])^2 − 3   g2 = m4/m2^2 − 3, m4 = (1/n) Σ_{i=1}^n (xi − x̄)^4

Figure 3.1. Central Tendency & Quartiles
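The sample column of Table 1 can be computed directly from data. A minimal sketch (the data vector is made up for illustration):

```python
def sample_summary(xs):
    # sample moments with the 1/n convention used in Table 1
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # sample variance
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    g1 = m3 / m2 ** 1.5                         # sample skewness
    g2 = m4 / m2 ** 2 - 3                       # sample (excess) kurtosis
    return mean, m2, g1, g2

mean, var, g1, g2 = sample_summary([1.0, 2.0, 2.0, 3.0, 9.0])
# the long right tail of this data makes g1 positive (right-skewed)
```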

Definition 74. Mode: The mode of a probability distribution is defined as

        mode = argmax_{x∈R} f (x).
Figure 3.2. Skewness

Figure 3.3. Kurtosis
