
MATHEMATICS HANDBOOK

II Year BE
Academic Year 2023 - 2024
Calculus of Complex functions
Complex Number:

Cartesian form: z = x + iy, where i = √(−1)
Polar form: z = r e^(iθ), where e^(iθ) = cos θ + i sin θ
Modulus of z: |z| = r = √(x² + y²)
Amplitude / Argument of z: amp z or arg z = θ = tan⁻¹(y/x)
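The Cartesian-polar conversion above can be checked with Python's `cmath` module; the point z = 3 + 4i below is an arbitrary illustration.

```python
import cmath
import math

# Illustrative point: z = 3 + 4i
z = complex(3, 4)

r = abs(z)              # modulus |z| = sqrt(x^2 + y^2)
theta = cmath.phase(z)  # argument theta = atan2(y, x)

# Rebuild z from its polar form r * e^(i*theta)
z_back = r * cmath.exp(1j * theta)

print(r)  # 5.0
print(math.isclose(z_back.real, 3) and math.isclose(z_back.imag, 4))  # True
```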

Some Important results:

e^(iθ) = cos θ + i sin θ
e^(−iθ) = cos θ − i sin θ
cos θ = (e^(iθ) + e^(−iθ))/2 ;  sin θ = (e^(iθ) − e^(−iθ))/(2i)
cos(iθ) = cosh θ,  sin(iθ) = i sinh θ

Complex Valued Functions:


w = f(z) = f(x + iy) = u(x, y) + i v(x, y)   [Cartesian form]
w = f(z) = f(re^(iθ)) = u(r, θ) + i v(r, θ)   [Polar form]

Analytic (Regular, Holomorphic) function:


f′(z) = dw/dz = lim_(δz→0) [f(z + δz) − f(z)]/δz exists and is unique.

Cauchy-Riemann (C-R) equations:


In Cartesian form: ∂u/∂x = ∂v/∂y and ∂v/∂x = −∂u/∂y
In Polar form: ∂u/∂r = (1/r) ∂v/∂θ and ∂v/∂r = −(1/r) ∂u/∂θ

The derivative of analytic function 𝑓(𝑧) is given by


f′(z) = uₓ + i vₓ   [Cartesian form]
f′(z) = e^(−iθ)(uᵣ + i vᵣ)   [Polar form]
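The Cauchy-Riemann equations can be checked numerically; the sketch below does so for f(z) = z², whose real and imaginary parts are u = x² − y² and v = 2xy, at an arbitrarily chosen test point.

```python
import math

# f(z) = z^2  =>  u = x^2 - y^2, v = 2xy
def u(x, y): return x * x - y * y
def v(x, y): return 2 * x * y

h = 1e-6
x0, y0 = 1.3, -0.7  # arbitrary test point

# central-difference partial derivatives
u_x = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
u_y = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
v_x = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
v_y = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)

print(math.isclose(u_x, v_y, abs_tol=1e-4))   # True: u_x = v_y
print(math.isclose(u_y, -v_x, abs_tol=1e-4))  # True: u_y = -v_x
```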
Harmonic Function:
∇²φ = 0 implies φ is harmonic.
Cartesian form: ∇²φ = ∂²φ/∂x² + ∂²φ/∂y² = 0
Polar form: ∇²φ = ∂²φ/∂r² + (1/r) ∂φ/∂r + (1/r²) ∂²φ/∂θ² = 0
Bilinear Transformation: w = (az + b)/(cz + d); for three given pairs (z₀, w₀), (z₁, w₁), (z₂, w₂):
[(w − w₀)(w₁ − w₂)] / [(w − w₂)(w₁ − w₀)] = [(z − z₀)(z₁ − z₂)] / [(z − z₂)(z₁ − z₀)]
Invariant points: If a point z maps onto itself, that is 𝑤 = 𝑧 under the bilinear
transformation, then the point is called an invariant point.
Complex Integration:
1) ∫_C f(z) dz = ∫_C (u + iv)(dx + i dy) = ∫_C (u dx − v dy) + i ∫_C (v dx + u dy)
2) If −C is the curve C traversed in the opposite direction, then
∫_(−C) f(z) dz = − ∫_C f(z) dz
3) If C is split into a number of parts C₁, C₂, C₃, …, then
∫_C f(z) dz = ∫_(C₁) f(z) dz + ∫_(C₂) f(z) dz + ∫_(C₃) f(z) dz + ⋯

4) Equation of the straight line joining (a, b) and (c, d): (y − b)/(x − a) = (d − b)/(c − a)
Equation of the circle: |z − a| = r represents the circle z = a + re^(iθ)
Cauchy's Theorem: If f(z) is analytic at all points inside and on a simple closed curve C,
then ∫_C f(z) dz = 0.
Note: If C₁ and C₂ are two simple closed curves such that C₂ lies entirely within C₁, and if
f(z) is analytic on C₁, C₂ and in the region bounded by C₁ and C₂, then
∫_(C₁) f(z) dz = ∫_(C₂) f(z) dz.
Cauchy's Integral Formula: If f(z) is analytic inside and on a simple closed curve C and if
a is any point within C, then f(a) = (1/2πi) ∫_C f(z)/(z − a) dz.
Generalized Cauchy Integral Formula: If f(z) is analytic inside and on a simple closed
curve C and if a is any point within C, then f⁽ⁿ⁾(a) = (n!/2πi) ∫_C f(z)/(z − a)ⁿ⁺¹ dz
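The integral formula can be verified numerically; the sketch below takes f(z) = e^z, a = 0, and C the unit circle, and checks that the discretized contour integral recovers f(0) = 1.

```python
import cmath

# Midpoint-rule discretization of (1/2*pi*i) * contour integral of f(z)/(z-a)
# over the unit circle, for f(z) = e^z and a = 0 (illustrative check).
N = 2000
a = 0
total = 0
for k in range(N):
    t0 = 2 * cmath.pi * k / N
    t1 = 2 * cmath.pi * (k + 1) / N
    z0, z1 = cmath.exp(1j * t0), cmath.exp(1j * t1)
    zm = (z0 + z1) / 2  # midpoint of the chord
    total += cmath.exp(zm) / (zm - a) * (z1 - z0)

f_a = total / (2j * cmath.pi)
print(abs(f_a - 1) < 1e-6)  # True: formula recovers f(0) = e^0 = 1
```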

Curve Fitting, Probability and Sampling Theory


Curve Fitting:
• Correlation coefficient r = Σ(x − x̄)(y − ȳ) / (n σₓ σᵧ), where
  σₓ² = Σ(x − x̄)²/n,  σᵧ² = Σ(y − ȳ)²/n,  x̄ = Σx/n,  ȳ = Σy/n

• r = Σ XY / (√(Σ X²) √(Σ Y²)), where X = x − x̄, Y = y − ȳ

• r = (σₓ² + σᵧ² − σ_z²) / (2 σₓ σᵧ), where z = x − y

• The regression line of x on y is given by
  x − x̄ = r (σₓ/σᵧ)(y − ȳ)   OR   X = (Σ XY / Σ Y²) Y

• The regression line of y on x is given by
  y − ȳ = r (σᵧ/σₓ)(x − x̄)   OR   Y = (Σ XY / Σ X²) X

• r = ±√((coeff. of x)(coeff. of y))
• Rank correlation for non-repeated ranks
  ρ = 1 − 6 Σ(x − y)² / [n(n² − 1)]   or   ρ = 1 − 6 Σ d² / [n(n² − 1)]
• Rank correlation for repeated ranks
  ρ = 1 − 6[Σ d² + m(m² − 1)/12 + ⋯] / [n(n² − 1)]
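The correlation and regression formulas above can be exercised on a small data set; the numbers below are made up for illustration, and the check confirms r = ±√(b_yx · b_xy).

```python
import math

# Made-up sample data
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
X = [xi - xbar for xi in x]   # deviations from the mean
Y = [yi - ybar for yi in y]

sxy = sum(a * b for a, b in zip(X, Y))
r = sxy / (math.sqrt(sum(a * a for a in X)) * math.sqrt(sum(b * b for b in Y)))

b_yx = sxy / sum(a * a for a in X)  # slope of regression line of y on x
b_xy = sxy / sum(b * b for b in Y)  # slope of regression line of x on y

# |r| = sqrt(b_yx * b_xy)
print(math.isclose(abs(r), math.sqrt(b_yx * b_xy)))  # True
print(round(r, 4))  # 0.7746
```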
Laws on Set Operations:
1. 𝐴 ∪ 𝐵 = 𝐵 ∪ 𝐴; 𝐴 ∩ 𝐵 = 𝐵 ∩ 𝐴 (Commutative law)
2. 𝐴 ∪ (𝐵 ∪ 𝐶) = (𝐴 ∪ 𝐵) ∪ 𝐶; 𝐴 ∩ (𝐵 ∩ 𝐶) = (𝐴 ∩ 𝐵) ∩ 𝐶 (Associative laws)
3. 𝐴 ∪ (𝐵 ∩ 𝐶) = (𝐴 ∪ 𝐵) ∩ (𝐴 ∪ 𝐶); 𝐴 ∩ (𝐵 ∪ 𝐶) = (𝐴 ∩ 𝐵) ∪ (𝐴 ∩ 𝐶) (Distributive laws)
4. (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ ; (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ (De Morgan's laws)
5. A − B = A ∩ Bᶜ ; (Aᶜ)ᶜ = A

Probability:
1. P(E) = n(E)/n(S)
2. 𝑃(𝑆) = 1
3. 𝑃(𝜙) = 0
4. 𝑃(𝐸̅ ) = 1 − 𝑃(𝐸)
5. 0 ≤ 𝑃(𝐸) ≤ 1

Addition Theorem:
1. 𝑃(𝐴 ∪ 𝐵) = 𝑃(𝐴) + 𝑃(𝐵) − 𝑃(𝐴 ∩ 𝐵)
2. If A and B are mutually exclusive then 𝑃(𝐴 ∩ 𝐵) = 0

Conditional Probability:
1. P(A/B) = P(A ∩ B)/P(B)
2. P(B/A) = P(A ∩ B)/P(A)

Multiplication rule:
1. 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴) ⋅ 𝑃(𝐵⁄𝐴)
2. 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴) ⋅ 𝑃(𝐵) ⇔ A and B are independent.

Bayes' Theorem:
P(Aᵢ/A) = P(Aᵢ) P(A/Aᵢ) / Σᵢ₌₁ⁿ P(Aᵢ) P(A/Aᵢ)
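The theorem can be applied to a small made-up scenario: three machines produce 50%, 30%, and 20% of the items with defect rates 2%, 3%, and 4%, and we ask for the probability that a defective item came from machine 1.

```python
prior = [0.50, 0.30, 0.20]       # P(A_i): share of production per machine
likelihood = [0.02, 0.03, 0.04]  # P(A | A_i): defect rate per machine

# Denominator of Bayes' theorem: total probability of a defect
total = sum(p * l for p, l in zip(prior, likelihood))
posterior = [p * l / total for p, l in zip(prior, likelihood)]

print(round(total, 4))         # 0.027  -> overall defect probability
print(round(posterior[0], 4))  # 0.3704 -> P(machine 1 | defective)
```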

Discrete Probability Distribution:


1. If p(xᵢ) ≥ 0 and Σᵢ p(xᵢ) = 1, then p(x) is a p.d.f.
2. F(x) = P(X ≤ x) = Σ_(xᵢ ≤ x) p(xᵢ) is a c.d.f.
3. Mean (𝜇) = ∑𝑖 𝑥𝑖 ∙ 𝑝(𝑥𝑖 )
4. Variance (𝑉) = ∑𝑖(𝑥𝑖 − 𝜇)2 ∙ 𝑝(𝑥𝑖 ) or (𝑉) = ∑ 𝑥𝑖2 ∙ 𝑝(𝑥𝑖 ) − 𝜇 2
5. Standard Deviation (𝜎) = √𝑉

Binomial Distribution:
1. P(x) = nCₓ pˣ q^(n−x)
2. Mean (𝜇) = 𝑛𝑝
3. Variance (𝑉) = 𝑛𝑝𝑞
4. S.D (𝜎) = √𝑛𝑝𝑞

Poisson Distribution:
1. P(x) = e^(−m) mˣ / x!
2. Mean (𝜇) = 𝑚
3. Variance (𝑉) = 𝑚
4. S.D (𝜎) = √𝑚
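The binomial mean and variance formulas can be checked directly from the pmf; the parameters n = 10, p = 0.3 below are illustrative.

```python
from math import comb, isclose

n, p = 10, 0.3
q = 1 - p

# Binomial pmf P(x) = C(n, x) p^x q^(n-x) for x = 0..n
pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]

mean = sum(x * pmf[x] for x in range(n + 1))
var = sum(x * x * pmf[x] for x in range(n + 1)) - mean**2

print(isclose(sum(pmf), 1))     # True: pmf sums to 1
print(isclose(mean, n * p))     # True: mean = np = 3.0
print(isclose(var, n * p * q))  # True: variance = npq = 2.1
```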

Continuous Probability Distribution:



1. If f(x) ≥ 0 and ∫_(−∞)^(∞) f(x) dx = 1, then f(x) is a p.d.f.
2. F(x) = P(X ≤ x) = ∫_(−∞)^(x) f(x) dx is a c.d.f.
3. P(a ≤ x ≤ b) = ∫_a^b f(x) dx
4. P(x ≥ r) = ∫_r^(∞) f(x) dx
5. P(x < r) = ∫_(−∞)^(r) f(x) dx
6. Mean (μ) = ∫_(−∞)^(∞) x · f(x) dx
7. Variance (V) = ∫_(−∞)^(∞) (x − μ)² · f(x) dx  or  V = ∫_(−∞)^(∞) x² · f(x) dx − μ²
8. Standard Deviation (σ) = √V

Exponential Distribution:
1. f(x) = α e^(−αx) for x > 0, and f(x) = 0 otherwise, where α > 0
2. Mean (μ) = 1/α
3. Variance (V) = 1/α²
4. S.D (σ) = 1/α

Normal Distribution:
1. f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²))

2. Mean (𝜇) = 𝜇
3. Variance (𝑉) = 𝜎 2
4. S.D (𝜎) = 𝜎

Standard Normal Variate (SNV):


z = (x − μ)/σ
Joint Probability Distribution:
1. Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ Jᵢⱼ = 1
2. If f(xᵢ) g(yⱼ) = Jᵢⱼ for all i, j, then X and Y are independent.
3. μ_X = E(X) = Σᵢ xᵢ · f(xᵢ)
4. μ_Y = E(Y) = Σⱼ yⱼ · g(yⱼ)
5. μ_XY = E(XY) = Σᵢ,ⱼ xᵢ · yⱼ · Jᵢⱼ
6. COV(X, Y) = E(XY) − E(X)E(Y)
7. V(X) = Σᵢ (xᵢ − μ_X)² · f(xᵢ)
8. V(Y) = Σⱼ (yⱼ − μ_Y)² · g(yⱼ)
9. ρ(X, Y) = COV(X, Y)/(σ_X σ_Y)
10. If 𝑋 and 𝑌 are independent then 𝐸(𝑋𝑌) = 𝐸(𝑋)𝐸(𝑌); 𝐶𝑂𝑉(𝑋, 𝑌) = 0; 𝜌(𝑋, 𝑌) = 0
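These joint-distribution formulas can be run on a small table; the joint probabilities J below are made up for illustration.

```python
import math

x_vals = [0, 1]
y_vals = [0, 1, 2]
J = [[0.1, 0.2, 0.2],   # P(X=0, Y=j)
     [0.2, 0.2, 0.1]]   # P(X=1, Y=j)

f = [sum(row) for row in J]                             # marginal of X
g = [sum(J[i][j] for i in range(2)) for j in range(3)]  # marginal of Y

EX = sum(x_vals[i] * f[i] for i in range(2))
EY = sum(y_vals[j] * g[j] for j in range(3))
EXY = sum(x_vals[i] * y_vals[j] * J[i][j] for i in range(2) for j in range(3))

cov = EXY - EX * EY
VX = sum(x_vals[i] ** 2 * f[i] for i in range(2)) - EX ** 2
VY = sum(y_vals[j] ** 2 * g[j] for j in range(3)) - EY ** 2
rho = cov / math.sqrt(VX * VY)

print(round(EX, 4), round(EY, 4), round(cov, 4))  # 0.5 1.0 -0.1
```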

Continuous Joint Probability Distribution:

If X & Y are Continuous random variables, then

P(a ≤ x ≤ b, c ≤ y ≤ d) = ∫_c^d ∫_a^b f(x, y) dx dy

Cumulative Distribution Function: F(x, y) = ∫_(−∞)^(x) ∫_(−∞)^(y) f(x, y) dx dy

E[X] = ∫_(−∞)^(∞) ∫_(−∞)^(∞) x f(x, y) dx dy = ∫_(−∞)^(∞) x f(x) dx,

E[Y] = ∫_(−∞)^(∞) ∫_(−∞)^(∞) y f(x, y) dx dy = ∫_(−∞)^(∞) y g(y) dy,

E[XY] = ∫_(−∞)^(∞) ∫_(−∞)^(∞) x y f(x, y) dx dy

𝑉𝑎𝑟(𝑋) = 𝐸[𝑋 2 ] − (𝐸[𝑋])2

𝐶𝑜𝑣(𝑋, 𝑌) = 𝐸[𝑋𝑌] − 𝐸[𝑋]𝐸[𝑌]

E[αX + βY] = α E[X] + β E[Y]

Var(αX + βY) = α² Var(X) + β² Var(Y) + 2αβ Cov(X, Y)

Analysis of Variance:

Correction Factor CF = T²/n

Total Sum of Squares SST = Σ xᵢ² − CF

Sum of squares between columns: SSC = Σ (Σ x_c)²/n_c − CF

Sum of squares between rows: SSR = Σ (Σ x_r)²/n_r − CF

F-ratio = (Mean sum of squares between the samples) / (Mean sum of squares within the samples)
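The formulas above can be walked through for a one-way layout; the three samples below are made-up observations.

```python
# One-way ANOVA sketch with three made-up samples (columns)
samples = [[8, 10, 12], [9, 11, 13], [14, 16, 18]]

all_x = [x for s in samples for x in s]
T = sum(all_x)
n = len(all_x)
CF = T * T / n                        # correction factor T^2/n

SST = sum(x * x for x in all_x) - CF  # total sum of squares
SSC = sum(sum(s) ** 2 / len(s) for s in samples) - CF  # between samples
SSE = SST - SSC                       # within samples

k = len(samples)
MSC = SSC / (k - 1)  # mean square between, d.f. = k - 1
MSE = SSE / (n - k)  # mean square within, d.f. = n - k
F = MSC / MSE

print(round(F, 3))  # 7.75
```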
Sampling Theory
Standard normal variate (s.n.v): z = (x̄ − μ)/(σ/√n)

95% confidence interval for μ: x̄ ± 1.96 (σ/√n)

99% confidence interval for μ: x̄ ± 2.58 (σ/√n)

Student's t-distribution:
For the sample mean, t = (x̄ − μ)√n / s, where s² = (1/(n−1)) Σ(xᵢ − x̄)²

t-test for two sample means:
t = (x̄ − ȳ) / (s √(1/n₁ + 1/n₂))
where s² = (1/(n₁ + n₂ − 2)) { Σᵢ₌₁^(n₁) (xᵢ − x̄)² + Σᵢ₌₁^(n₂) (yᵢ − ȳ)² }
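The pooled two-sample t statistic can be computed directly from the formula; the two samples below are made up for illustration.

```python
import math

x = [12, 14, 10, 16, 13]
y = [10, 9, 12, 11]

n1, n2 = len(x), len(y)
xbar = sum(x) / n1
ybar = sum(y) / n2

# pooled variance with n1 + n2 - 2 degrees of freedom
s2 = (sum((xi - xbar) ** 2 for xi in x)
      + sum((yi - ybar) ** 2 for yi in y)) / (n1 + n2 - 2)

t = (xbar - ybar) / (math.sqrt(s2) * math.sqrt(1 / n1 + 1 / n2))

print(round(t, 3))  # t statistic on 7 degrees of freedom
```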

Enumeration and Generating Functions


Principle of inclusion-exclusion: Let S be a finite set and A₁, A₂, …, Aₙ be subsets of S.
Then the principle of inclusion-exclusion for A₁, A₂, …, Aₙ states that
|A₁ ∪ A₂ ∪ … ∪ Aₙ| = Σ|Aᵢ| − Σ|Aᵢ ∩ Aⱼ| + Σ|Aᵢ ∩ Aⱼ ∩ Aₖ| − ⋯ + (−1)ⁿ⁻¹ |A₁ ∩ A₂ ∩ … ∩ Aₙ|

1. |Ā₁ ∩ Ā₂ ∩ Ā₃ ∩ … ∩ Āₙ| = |S| − Σ|Aᵢ| + Σ|Aᵢ ∩ Aⱼ| − Σ|Aᵢ ∩ Aⱼ ∩ Aₖ| + ⋯ + (−1)ⁿ |A₁ ∩ A₂ ∩ … ∩ Aₙ|
2. The number of elements in S that satisfy exactly m of the n conditions (0 ≤ m ≤ n) is
   Eₘ = Sₘ − C(m+1, 1) Sₘ₊₁ + C(m+2, 2) Sₘ₊₂ − ⋯ + (−1)ⁿ⁻ᵐ C(n, n−m) Sₙ
3. The number of elements in S that satisfy at least m of the n conditions (1 ≤ m ≤ n) is
   Lₘ = Sₘ − C(m, m−1) Sₘ₊₁ + C(m+1, m−1) Sₘ₊₂ − ⋯ + (−1)ⁿ⁻ᵐ C(n−1, m−1) Sₙ
4. Rook polynomial: r(C, x) = 1 + r₁x + r₂x² + ⋯ + rₙxⁿ
5. Expansion formula: r(C, x) = x · r(D, x) + r(E, x)
6. For a board C made of disjoint sub-boards C₁ and C₂: r(C, x) = r(C₁, x) × r(C₂, x)
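Inclusion-exclusion can be illustrated by counting the integers in 1..100 divisible by 2, 3, or 5, and comparing against a brute-force set union.

```python
S = range(1, 101)
# A[0], A[1], A[2]: multiples of 2, 3, 5 within S
A = [set(x for x in S if x % d == 0) for d in (2, 3, 5)]

# |A1 u A2 u A3| = sum|Ai| - sum|Ai n Aj| + |A1 n A2 n A3|
by_pie = (sum(len(a) for a in A)
          - len(A[0] & A[1]) - len(A[0] & A[2]) - len(A[1] & A[2])
          + len(A[0] & A[1] & A[2]))

by_brute = len(A[0] | A[1] | A[2])

print(by_pie, by_brute)  # 74 74
```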
Fourier Series
Fourier Series of period 2𝜋 and Euler’s formulae for the Fourier coefficients
𝑎0 , 𝑎𝑛 , 𝑏𝑛
f(x) = a₀/2 + Σₙ₌₁^∞ aₙ cos nx + Σₙ₌₁^∞ bₙ sin nx

a₀ = (1/π) ∫_c^(c+2π) f(x) dx

aₙ = (1/π) ∫_c^(c+2π) f(x) cos nx dx

bₙ = (1/π) ∫_c^(c+2π) f(x) sin nx dx
Fourier Series of arbitrary period 2𝑙 and related Euler’s formulae
f(x) = a₀/2 + Σₙ₌₁^∞ aₙ cos(nπx/l) + Σₙ₌₁^∞ bₙ sin(nπx/l)

a₀ = (1/l) ∫_c^(c+2l) f(x) dx

aₙ = (1/l) ∫_c^(c+2l) f(x) cos(nπx/l) dx

bₙ = (1/l) ∫_c^(c+2l) f(x) sin(nπx/l) dx
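The Euler formulas can be checked numerically; the sketch below computes bₙ for f(x) = x on (−π, π), whose sine coefficients are known to be bₙ = 2(−1)^(n+1)/n.

```python
import math

def b_n(n, steps=20000):
    # midpoint-rule approximation of (1/pi) * integral of x*sin(nx) over (-pi, pi)
    h = 2 * math.pi / steps
    total = 0.0
    for k in range(steps):
        t = -math.pi + (k + 0.5) * h
        total += t * math.sin(n * t)
    return total * h / math.pi

for n in (1, 2, 3):
    exact = 2 * (-1) ** (n + 1) / n
    print(abs(b_n(n) - exact) < 1e-4)  # True for each n
```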

Fourier coefficients in the case of even and odd nature of f(x)

Interval (−π, π) or (0, 2π):
  f(x) even: a₀ = (2/π) ∫₀^π f(x) dx, aₙ = (2/π) ∫₀^π f(x) cos nx dx, bₙ = 0
  f(x) odd:  a₀ = 0, aₙ = 0, bₙ = (2/π) ∫₀^π f(x) sin nx dx

Interval (−l, l) or (0, 2l):
  f(x) even: a₀ = (2/l) ∫₀^l f(x) dx, aₙ = (2/l) ∫₀^l f(x) cos(nπx/l) dx, bₙ = 0
  f(x) odd:  a₀ = 0, aₙ = 0, bₙ = (2/l) ∫₀^l f(x) sin(nπx/l) dx
Half range Fourier series (cosine/sine) and the related formulae:

f(x) in (0, π), cosine series: f(x) = a₀/2 + Σₙ₌₁^∞ aₙ cos nx, where
  a₀ = (2/π) ∫₀^π f(x) dx,  aₙ = (2/π) ∫₀^π f(x) cos nx dx

f(x) in (0, π), sine series: f(x) = Σₙ₌₁^∞ bₙ sin nx, where
  bₙ = (2/π) ∫₀^π f(x) sin nx dx

f(x) in (0, l), cosine series: f(x) = a₀/2 + Σₙ₌₁^∞ aₙ cos(nπx/l), where
  a₀ = (2/l) ∫₀^l f(x) dx,  aₙ = (2/l) ∫₀^l f(x) cos(nπx/l) dx

f(x) in (0, l), sine series: f(x) = Σₙ₌₁^∞ bₙ sin(nπx/l), where
  bₙ = (2/l) ∫₀^l f(x) sin(nπx/l) dx

Harmonic Analysis Formulae:


Case (i) (period 2π):
  a₀ = (2/N) Σ y,  aₙ = (2/N) Σ y cos nx,  bₙ = (2/N) Σ y sin nx
Case (ii) (period 2l):
  a₀ = (2/N) Σ y,  aₙ = (2/N) Σ y cos nθ,  bₙ = (2/N) Σ y sin nθ, where θ = πx/l
Transforms
Fourier Transforms:
Fourier transform:
  Transform: F(u) = ∫_(−∞)^(∞) f(x) e^(iux) dx
  Inverse: f(x) = (1/2π) ∫_(−∞)^(∞) F(u) e^(−iux) du

Fourier cosine transform:
  Transform: F_c(u) = ∫₀^∞ f(x) cos ux dx
  Inverse: f(x) = (2/π) ∫₀^∞ F_c(u) cos ux du

Fourier sine transform:
  Transform: F_s(u) = ∫₀^∞ f(x) sin ux dx
  Inverse: f(x) = (2/π) ∫₀^∞ F_s(u) sin ux du

Note: Definitions in the alternative / equivalent form

Fourier transform:
  Transform: F(u) = (1/√(2π)) ∫_(−∞)^(∞) f(x) e^(iux) dx
  Inverse: f(x) = (1/√(2π)) ∫_(−∞)^(∞) F(u) e^(−iux) du

Fourier cosine transform:
  Transform: F_c(u) = √(2/π) ∫₀^∞ f(x) cos ux dx
  Inverse: f(x) = √(2/π) ∫₀^∞ F_c(u) cos ux du

Fourier sine transform:
  Transform: F_s(u) = √(2/π) ∫₀^∞ f(x) sin ux dx
  Inverse: f(x) = √(2/π) ∫₀^∞ F_s(u) sin ux du

Discrete Fourier Transforms (DFT):


DFT{x(n)} = X(K) = Σₙ₌₀^(N−1) x(n) e^(−i 2πKn/N),  0 ≤ K ≤ N − 1

IDFT{X(K)} = x(n) = (1/N) Σ_(K=0)^(N−1) X(K) e^(i 2πKn/N),  0 ≤ n ≤ N − 1
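The DFT/IDFT pair can be implemented directly from these sums; the round-trip below uses a small made-up sequence.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * K * n / N) for n in range(N))
            for K in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[K] * cmath.exp(2j * cmath.pi * K * n / N) for K in range(N)) / N
            for n in range(N)]

x = [1, 2, 3, 4]
X = dft(x)
x_back = idft(X)

print(all(abs(a - b) < 1e-9 for a, b in zip(x, x_back)))  # True: IDFT inverts DFT
print(abs(X[0] - 10) < 1e-9)                              # True: X(0) = sum of x(n)
```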

Z Transforms:
ZT(uₙ) = Σₙ₌₀^∞ uₙ z⁻ⁿ = ū(z)
ZT(kⁿ uₙ) = ū(z/k)  and  ZT(nᵏ) = −z (d/dz) ZT(n^(k−1))

List of standard Z-transforms:


ZT(1) = z/(z − 1)                          ZT(kⁿ) = z/(z − k)
ZT(n) = z/(z − 1)²                         ZT(kⁿ n) = kz/(z − k)²
ZT(n²) = (z² + z)/(z − 1)³                 ZT(kⁿ n²) = (kz² + k²z)/(z − k)³
ZT(n³) = (z³ + 4z² + z)/(z − 1)⁴           ZT(kⁿ n³) = (kz³ + 4k²z² + k³z)/(z − k)⁴
ZT(sin(nπ/2)) = z/(z² + 1)                 ZT(cos(nπ/2)) = z²/(z² + 1)
Initial value theorem:
If ZT(uₙ) = ū(z), then lim_(z→∞) ū(z) = u₀

Final value theorem:


If ZT(uₙ) = ū(z), then lim_(z→1) (z − 1)ū(z) = lim_(n→∞) uₙ

List of standard inverse Z-transforms:


ZT⁻¹[z/(z − 1)] = 1                        ZT⁻¹[z/(z − k)] = kⁿ
ZT⁻¹[z/(z − 1)²] = n                       ZT⁻¹[kz/(z − k)²] = kⁿ n
ZT⁻¹[(z² + z)/(z − 1)³] = n²               ZT⁻¹[(kz² + k²z)/(z − k)³] = kⁿ n²
ZT⁻¹[(z³ + 4z² + z)/(z − 1)⁴] = n³         ZT⁻¹[(kz³ + 4k²z² + k³z)/(z − k)⁴] = kⁿ n³
ZT⁻¹[z/(z² + 1)] = sin(nπ/2)               ZT⁻¹[z²/(z² + 1)] = cos(nπ/2)

Expressions for solving difference equation using Z-transforms.


𝑍𝑇 (𝑢𝑛+1 ) = 𝑧[𝑢̅(𝑧) − 𝑢0 ]
𝑍𝑇 (𝑢𝑛+2 ) = 𝑧 2 [𝑢̅(𝑧) − 𝑢0 − 𝑢1 𝑧 −1 ]
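As a small sketch of solving a difference equation: u_(n+2) − 3u_(n+1) + 2uₙ = 0 with u₀ = 0, u₁ = 1 has the closed-form solution uₙ = 2ⁿ − 1 (obtainable via the Z-transform expressions above); the check below compares it against direct recursion.

```python
# Recurrence: u_{n+2} = 3*u_{n+1} - 2*u_n, with u_0 = 0, u_1 = 1
u = [0, 1]
for _ in range(10):
    u.append(3 * u[-1] - 2 * u[-2])

# Closed form from the characteristic roots z = 1, 2: u_n = 2^n - 1
print(all(u[n] == 2 ** n - 1 for n in range(len(u))))  # True
```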

Fundamentals of Logic
Truth Table of Logical Connectives:
p q | p→q  p↔q  p∨q  p∧q  p↑q  p↓q
0 0 |  1    1    0    0    1    1
0 1 |  1    0    1    0    1    0
1 0 |  0    0    1    0    1    0
1 1 |  1    1    1    1    0    0
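The table can be regenerated programmatically, which is a handy way to check any row:

```python
from itertools import product

rows = []
for p, q in product((0, 1), repeat=2):
    implies = int(not p or q)  # p -> q
    iff = int(p == q)          # p <-> q
    nand = int(not (p and q))  # p NAND q (Sheffer stroke)
    nor = int(not (p or q))    # p NOR q
    rows.append((p, q, implies, iff, p | q, p & q, nand, nor))
    print(p, q, implies, iff, p | q, p & q, nand, nor)

# spot-check the first row: 0 0 -> 1 1 0 0 1 1
print(rows[0] == (0, 0, 1, 1, 0, 0, 1, 1))  # True
```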

Laws of Logic:
Let a, b, c be any three propositions, T₀ a tautology and F₀ a contradiction. Then
1. Law of double negation: ~(~a) ⇔ a
2. Idempotent laws: a ∧ a ⇔ a and a ∨ a ⇔ a
3. Identity laws: a ∨ F₀ ⇔ a and a ∧ T₀ ⇔ a
4. Inverse laws: a ∧ ~a ⇔ F₀ and a ∨ ~a ⇔ T₀
5. Domination laws: a ∨ T₀ ⇔ T₀ and a ∧ F₀ ⇔ F₀
6. Commutative laws: (a ∧ b) ⇔ (b ∧ a) and (a ∨ b) ⇔ (b ∨ a)
7. Absorption laws: [a ∨ (a ∧ b)] ⇔ a and [a ∧ (a ∨ b)] ⇔ a
8. De Morgan's laws: ~(a ∨ b) ⇔ ~a ∧ ~b and ~(a ∧ b) ⇔ ~a ∨ ~b
9. Associative laws: a ∧ (b ∧ c) ⇔ (a ∧ b) ∧ c and a ∨ (b ∨ c) ⇔ (a ∨ b) ∨ c
10. Distributive laws: a ∧ (b ∨ c) ⇔ (a ∧ b) ∨ (a ∧ c) and a ∨ (b ∧ c) ⇔ (a ∨ b) ∧ (a ∨ c)

Let a, b, c be any three propositions and 𝑎 → 𝑏 be a conditional. Then


1. 𝑏 → 𝑎 is converse of 𝑎 → 𝑏 ,
2. ~𝑎 → ~𝑏 is inverse of 𝑎 → 𝑏
3. ~𝑏 → ~𝑎 is contrapositive of 𝑎 → 𝑏

Rule of inference:
Let a, b, c be any three propositions. Then
1. Rule of Conjunctive Simplification: (a ∧ b) ⇒ a and (a ∧ b) ⇒ b
2. Rule of Disjunctive Amplification: a ⇒ (a ∨ b) and b ⇒ (a ∨ b)
3. Rule of Syllogism: (a → b) ∧ (b → c) ⇒ (a → c)
4. Rule of Modus Ponens: p ∧ (p → q) ⇒ q
5. Rule of Modus Tollens: (p → q) ∧ ~q ⇒ ~p
6. Rule of Disjunctive Syllogism: [(p ∨ q) ∧ ~p] ⇒ q
7. Rule of Contradiction: (~p → F₀) ⇒ p

Truth Value of Quantified Statement:


Let p(x) be an open statement on a universe S.
1. The statement ∀x ∈ S, p(x) is true only when p(x) is true for all x in S.
2. The statement ∃x ∈ S, p(x) is false only when p(x) is false for all x in S.
3. Rule of Universal Specification: If the open statement p(x) is known to be true for all x in
S and a is an element of S, then p(a) is true.
4. Rule of Universal Generalization: If the open statement p(x) is proved true for an
arbitrary x chosen from S, then ∀x ∈ S, p(x) is true.
Graph Theory
1. Hand shaking property: ∑𝑖 deg(𝑣𝑖 ) = 2|𝐸|
2. Euler's Theorem: A connected planar graph with 𝑛 vertices and 𝑚 edges has exactly
𝑚 − 𝑛 + 2 regions in all of its diagrams
3. ∑𝑛𝑖=1 𝑑+ (𝑣𝑖 ) = ∑𝑛𝑖=1 𝑑 − (𝑣𝑖 ) = m
4. 𝐾𝑟,𝑠 has 𝑟 + 𝑠 vertices and 𝑟. 𝑠 edges.
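The handshaking property can be verified on any edge list; the small graph below is made up for illustration.

```python
# Made-up undirected graph as an edge list
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

# sum of degrees equals twice the number of edges
print(sum(deg.values()) == 2 * len(edges))  # True: 8 = 2 * 4
```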
Chromatic Polynomials:
1) P(Nₙ, λ) = λⁿ, where Nₙ is the null graph on n vertices.
2) P(Kₙ, λ) = 0, if λ < n.
3) P(Kₙ, λ) = n!, if λ = n.
4) P(Kₙ, λ) = λ(λ − 1)⋯(λ − n + 1), if λ > n.
5) P(Lₙ, λ) = λ(λ − 1)ⁿ⁻¹, if λ ≥ 2, where Lₙ is the path on n vertices.
Product Rule:
If G consists of two disjoint components G₁ and G₂, then P(G, λ) = P(G₁, λ) · P(G₂, λ).
Multiplication Theorem:
If G = G₁ ∪ G₂ with G₁ ∩ G₂ = Kₙ, then
P(G, λ) = P(G₁, λ) · P(G₂, λ) / λ⁽ⁿ⁾, where λ⁽ⁿ⁾ = λ(λ − 1)⋯(λ − n + 1).

Ordinary Differential Equations of Higher Order


A linear differential equation of the form
dⁿy/dxⁿ + a₁ d^(n−1)y/dx^(n−1) + ⋯ + a_(n−1) dy/dx + aₙ y = φ(x)
Method of finding Complementary function:
Case 1: If the roots are real and distinct, say m₁, m₂, …, mₙ, then
y = y_c = c₁e^(m₁x) + c₂e^(m₂x) + c₃e^(m₃x) + ⋯ + cₙe^(mₙx)
Case 2: If the roots are real and equal, say m₁ = m₂ = ⋯ = mₙ = m, then
y = y_c = (c₁ + c₂x + c₃x² + ⋯ + cₙx^(n−1)) e^(mx)
Case 3: If m₁, m₂ are a complex pair of roots, say p ± iq, then
y = y_c = e^(px)(c₁ cos qx + c₂ sin qx)
Method of finding Particular Integral:
Case 1: If the PI is of the form e^(ax)/f(D), then
• Replace D by a: e^(ax)/f(D) = e^(ax)/f(a), where f(a) ≠ 0
• e^(ax)/f(D) = x e^(ax)/f′(a), where f(a) = 0 and f′(a) ≠ 0, and the result can be
  extended further
Case 2: If the PI is of the form (cos ax or sin ax)/f(D), then
• Replace D² by −a²: (cos ax or sin ax)/f(D² → −a²), where f(D² → −a²) ≠ 0
• (cos ax or sin ax)/f(D) = x (cos ax or sin ax)/f′(D), where
  f(D² → −a²) = 0 and f′(D² → −a²) ≠ 0, and the result can be extended further.

Linear Algebra
Inner product: If u = [u₁, u₂, …, uₙ]ᵗ and v = [v₁, v₂, …, vₙ]ᵗ are any two vectors, then the
inner product between them is u · v = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ.
Length of a vector: The length or norm of u = [u₁, u₂, …, uₙ]ᵗ is denoted by ‖u‖ and is
defined by ‖u‖ = √(u₁² + u₂² + ⋯ + uₙ²).

Orthogonal projection onto a line: The orthogonal projection of y = [y₁, y₂, …, yₙ]ᵗ onto
the line spanned by u = [u₁, u₂, …, uₙ]ᵗ is denoted by ŷ and defined by ŷ = (y · u / u · u) u.
Orthogonal projection onto a subspace: The orthogonal projection of y = [y₁, y₂, …, yₙ]ᵗ
onto the subspace W is
ŷ = (y · u₁ / u₁ · u₁) u₁ + (y · u₂ / u₂ · u₂) u₂ + ⋯ + (y · u_p / u_p · u_p) u_p

where {u₁, u₂, …, u_p} is an orthogonal basis of W.


The Gram-Schmidt Process: Given a basis {x₁, x₂, …, xₘ} for a subspace W of Rⁿ, an
orthogonal basis is given by
v₁ = x₁
v₂ = x₂ − (x₂ · v₁ / v₁ · v₁) v₁
v₃ = x₃ − (x₃ · v₁ / v₁ · v₁) v₁ − (x₃ · v₂ / v₂ · v₂) v₂
…
vₘ = xₘ − (xₘ · v₁ / v₁ · v₁) v₁ − (xₘ · v₂ / v₂ · v₂) v₂ − ⋯ − (xₘ · vₘ₋₁ / vₘ₋₁ · vₘ₋₁) vₘ₋₁.

Then {v₁, v₂, …, vₘ} is an orthogonal basis.
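The process translates directly into code; this minimal sketch uses plain Python lists (no NumPy assumed) and checks that the output vectors are pairwise orthogonal.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(basis):
    ortho = []
    for x in basis:
        v = list(x)
        for u in ortho:
            c = dot(x, u) / dot(u, u)  # projection coefficient x.u / u.u
            v = [vi - c * ui for vi, ui in zip(v, u)]
        ortho.append(v)
    return ortho

vs = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])

# every pair of output vectors is orthogonal
print(all(abs(dot(vs[i], vs[j])) < 1e-12
          for i in range(3) for j in range(i + 1, 3)))  # True
```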

Linear Programming Problems (LPP)


1. The equation x/a + y/b = 1 represents a straight line which passes through the points
(a, 0) and (0, b).
2. If the constraints of a general LPP are Σⱼ₌₁ⁿ aᵢⱼxⱼ ≤ bᵢ (i = 1, 2, …), then the non-
negative variables Sᵢ, which are introduced to convert the inequalities to equalities
of the form Σⱼ₌₁ⁿ aᵢⱼxⱼ + Sᵢ = bᵢ (i = 1, 2, …), are called slack variables.
Normal Probability Table:
Student’s t distribution table:
Chi-square distribution:
χ² = Σᵢ₌₁ⁿ (Oᵢ − Eᵢ)²/Eᵢ
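The statistic is a one-liner over observed and expected frequencies; the counts below are made up for illustration.

```python
# Made-up observed and expected frequencies (both sum to 100)
O = [18, 22, 30, 30]
E = [25, 25, 25, 25]

chi2 = sum((o - e) ** 2 / e for o, e in zip(O, E))
print(round(chi2, 2))  # 4.32
```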
