
Mathematics for Machine Learning: Linear Algebra

Formula Sheet

Vector operations

r + s = s + r
2r = r + r
\|r\|^2 = \sum_i r_i^2

- dot or inner product:
r \cdot s = \sum_i r_i s_i
commutative: r \cdot s = s \cdot r
distributive: r \cdot (s + t) = r \cdot s + r \cdot t
associative: r \cdot (as) = a(r \cdot s)
r \cdot r = \|r\|^2
r \cdot s = \|r\| \|s\| \cos\theta

- scalar and vector projection:
scalar projection: \frac{r \cdot s}{\|r\|}
vector projection: \frac{r \cdot s}{r \cdot r} r
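A minimal NumPy sketch of the dot product and the two projection formulas above (illustration only, with made-up vectors; not part of the original sheet):

    import numpy as np

    r = np.array([3.0, 0.0])
    s = np.array([2.0, 2.0])

    dot = np.dot(r, s)                      # r . s = sum_i r_i s_i = 6.0
    scalar_proj = dot / np.linalg.norm(r)   # length of s along r = 2.0
    vector_proj = (dot / np.dot(r, r)) * r  # component of s along r = [2., 0.]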
Basis

A basis is a set of n vectors that:
(i) are not linear combinations of each other
(ii) span the space
The space is then n-dimensional.

Matrices

Ar = r'

\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e \\ f \end{pmatrix} = \begin{pmatrix} ae + bf \\ ce + df \end{pmatrix}

A(nr) = n(Ar) = nr'
A(r + s) = Ar + As

Identity: I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}

clockwise rotation by \theta: \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}

determinant of a 2x2 matrix: \det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc

inverse of a 2x2 matrix: \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}

- summation convention for multiplying matrices a and b:
(ab)_{ik} = \sum_j a_{ij} b_{jk}
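As a sketch of the 2x2 formulas and the summation convention above (an illustrative example with made-up matrices, not from the sheet), assuming NumPy:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]        # ad - bc = -2
    A_inv = np.array([[ A[1, 1], -A[0, 1]],
                      [-A[1, 0],  A[0, 0]]]) / det     # matches np.linalg.inv(A)

    # (AB)_ik = sum_j A_ij B_jk, written out explicitly
    AB = np.array([[sum(A[i, j] * B[j, k] for j in range(2)) for k in range(2)]
                   for i in range(2)])                 # matches A @ B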
Change of basis

Change from an original basis to a new, primed basis. The columns of the transformation matrix B are the new basis vectors written in the original coordinate system. So

B r' = r

where r' is the vector in the B-basis and r is the vector in the original basis. Equivalently,

r' = B^{-1} r

If a matrix A is orthonormal (all the columns are of unit size and orthogonal to each other), then

A^T = A^{-1}
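A small worked example of the change of basis (the basis B here is my own choice, not from the sheet), using NumPy:

    import numpy as np

    B = np.array([[1.0, 1.0],
                  [0.0, 1.0]])           # columns are the new basis vectors
    r = np.array([3.0, 2.0])             # vector in the original basis

    r_prime = np.linalg.solve(B, r)      # r' = B^-1 r, here [1., 2.]
    assert np.allclose(B @ r_prime, r)   # B r' recovers r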
 
Gram-Schmidt process for constructing an orthonormal basis

Start with n linearly independent basis vectors v = \{v_1, v_2, ..., v_n\}. Then

e_1 = \frac{v_1}{\|v_1\|}

u_2 = v_2 - (v_2 \cdot e_1) e_1, so e_2 = \frac{u_2}{\|u_2\|}

... and so on, with u_3 being the remnant part of v_3 not composed of the preceding e-vectors, etc.
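A sketch of this procedure as a small NumPy helper (my own rendering of the steps above, not code from the sheet): subtract from each v_i its components along the e-vectors found so far, then normalize the remnant.

    import numpy as np

    def gram_schmidt(vectors):
        es = []
        for v in vectors:
            u = v.astype(float)
            for e in es:
                u = u - np.dot(v, e) * e     # remove the part already spanned
            es.append(u / np.linalg.norm(u))
        return es

    vs = [np.array([1.0, 1.0, 0.0]),
          np.array([1.0, 0.0, 1.0]),
          np.array([0.0, 1.0, 1.0])]
    e1, e2, e3 = gram_schmidt(vs)            # orthonormal: e_i . e_j = 0 for i != j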
Transformation in a plane or other object

First transform into the basis referred to the reflection plane (or whatever the object is): E^{-1}.
Then do the reflection or other transformation in the plane of the object: T_E.
Then transform back into the original basis: E.
So our transformed vector is r' = E T_E E^{-1} r.
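For instance (an assumed example, not from the sheet), a reflection in the line through (1, 1) built as E T_E E^{-1}, using NumPy:

    import numpy as np

    e1 = np.array([1.0, 1.0]) / np.sqrt(2)   # unit vector along the mirror line
    e2 = np.array([-1.0, 1.0]) / np.sqrt(2)  # unit vector normal to it
    E = np.column_stack([e1, e2])            # basis aligned with the mirror
    T_E = np.diag([1.0, -1.0])               # reflection is trivial in that basis

    R = E @ T_E @ np.linalg.inv(E)           # r' = E T_E E^-1 r
    # R maps (1, 0) to (0, 1), as a reflection in the line y = x should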
Eigenstuff

To investigate the characteristics of the n by n matrix A, you are looking for solutions to the equation

A x = \lambda x

where \lambda is a scalar eigenvalue. Eigenvalues satisfy the condition

(A - \lambda I) x = 0

where I is the n by n identity matrix.

- PageRank
To find the dominant eigenvector of the link matrix L, the Power Method can be applied iteratively, starting from a uniform initial vector r:

r_{i+1} = L r_i

A damping factor d can be implemented to stabilize this method as follows:

r_{i+1} = d L r_i + \frac{1 - d}{n}
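A sketch of the damped power iteration above (the 3-page link matrix L is a toy example of my own; each column sums to 1), using NumPy:

    import numpy as np

    L = np.array([[0.0, 1.0, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.0, 0.0]])    # column j: where page j links to
    n = L.shape[0]
    d = 0.85                           # damping factor
    r = np.ones(n) / n                 # uniform initial vector

    for _ in range(100):
        r = d * (L @ r) + (1 - d) / n  # r_{i+1} = d L r_i + (1 - d)/n

    # r now approximates the dominant eigenvector of L (the PageRank vector)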
