Lecture 7 Linear Vector-Space

The document provides an overview of vector spaces, detailing their historical development, definitions, and properties. It explains the structure of vector spaces through operations like vector addition and scalar multiplication, along with axioms that define them. Additionally, it presents examples of vector spaces, including n-tuple spaces, matrix spaces, polynomial spaces, and continuous function spaces.

Uploaded by Mohsin Rashid

Vector Spaces

The idea of vectors dates back to the early 1800s, but the
generality of the concept waited until Peano's work in 1888. It
took many years to understand the importance and extent of
the ideas involved.
The underlying idea can be used to describe the forces and
accelerations in Newtonian mechanics, the potential functions
of electromagnetism, the states of systems in quantum
mechanics, the least-squares fitting of experimental data, and
much more.

In the study of vector analysis, a triple of numbers (a1, a2, a3) is
used to represent a point in space:
(a1, a2, a3, a4) = a point in four-dimensional space
(a1, a2, a3, a4, a5) = a point in five-dimensional space

1 Vectors in R^n
 An ordered n-tuple:
a sequence of n real numbers (x1, x2, ..., xn)
 R^n-space:
the set of all ordered n-tuples

n = 1: R^1-space = set of all real numbers
(R^1-space can be represented geometrically by the x-axis)
n = 2: R^2-space = set of all ordered pairs of real numbers (x1, x2)
(R^2-space can be represented geometrically by the xy-plane)
n = 3: R^3-space = set of all ordered triples of real numbers (x1, x2, x3)
(R^3-space can be represented geometrically by xyz-space)
n = 4: R^4-space = set of all ordered quadruples of real numbers (x1, x2, x3, x4)

Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) be two vectors in R^n.

 Equality:
u = v if and only if u1 = v1, u2 = v2, ..., un = vn

 Vector addition (the sum of u and v):
u + v = (u1 + v1, u2 + v2, ..., un + vn)

 Scalar multiplication (the scalar multiple of u by c):
cu = (cu1, cu2, ..., cun)

 Difference of u and v:
u − v = u + (−1)v = (u1 − v1, u2 − v2, ..., un − vn)
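These componentwise operations can be sketched in a few lines of Python (a minimal illustration; the vectors u and v are arbitrary examples in R^3):

```python
# Componentwise vector operations in R^n, here n = 3.
u = (1.0, 2.0, 3.0)
v = (4.0, 5.0, 6.0)
c = 2.0

add = tuple(ui + vi for ui, vi in zip(u, v))    # u + v
scale = tuple(c * ui for ui in u)               # c * u
diff = tuple(ui - vi for ui, vi in zip(u, v))   # u - v = u + (-1)v

print(add)    # (5.0, 7.0, 9.0)
print(scale)  # (2.0, 4.0, 6.0)
print(diff)   # (-3.0, -3.0, -3.0)
```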

Theorem 1: Properties of vector addition and scalar multiplication

Let u, v, and w be vectors in R^n, and let c and d be scalars.

(1) u + v is a vector in R^n (closure under vector addition)
(2) u + v = v + u (commutative property of vector addition)
(3) (u + v) + w = u + (v + w) (associative property of vector addition)
(4) u + 0 = u (additive identity property)
(5) u + (−u) = 0 (additive inverse property)
(6) cu is a vector in R^n (closure under scalar multiplication)
(7) c(u + v) = cu + cv (distributive property of scalar multiplication over vector addition)
(8) (c + d)u = cu + du (distributive property of scalar multiplication over real-number addition)
(9) c(du) = (cd)u (associative property of multiplication)
(10) 1(u) = u (multiplicative identity property)

General Linear Vector Spaces
 Notes:
A vector u = (u1, u2, ..., un) in R^n can be viewed as:
a 1×n row matrix (row vector): u = [u1 u2 ... un]
or an n×1 column matrix (column vector): u = [u1 u2 ... un]^T

Vector addition:
u + v = (u1, u2, ..., un) + (v1, v2, ..., vn) = (u1 + v1, u2 + v2, ..., un + vn)
Scalar multiplication:
cu = c(u1, u2, ..., un) = (cu1, cu2, ..., cun)

Treated as 1×n row matrices:
u + v = [u1 u2 ... un] + [v1 v2 ... vn] = [u1+v1 u2+v2 ... un+vn]
cu = c[u1 u2 ... un] = [cu1 cu2 ... cun]

Treated as n×1 column matrices:
u + v = [u1 u2 ... un]^T + [v1 v2 ... vn]^T = [u1+v1 u2+v2 ... un+vn]^T
cu = c[u1 u2 ... un]^T = [cu1 cu2 ... cun]^T
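As a quick numpy sketch (with arbitrary example entries), the same vector behaves identically whether stored as a row or a column matrix:

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

row_sum = u.reshape(1, 3) + v.reshape(1, 3)   # 1x3 row-matrix addition
col_scaled = 2 * u.reshape(3, 1)              # scalar multiple of a 3x1 column

print(row_sum)     # entries 5, 7, 9 in a single row
print(col_scaled)  # entries 2, 4, 6 stacked as a column
```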
2 Vector Spaces
 Vector spaces:
Let V be a set on which two operations (vector addition and
scalar multiplication) are defined. If the following ten axioms
are satisfied for all u, v, and w in V and all scalars (real
numbers) c and d, then V is called a vector space.

Addition:
(1) u+v is in V
(2) u+v=v+u
(3) u+(v+w)=(u+v)+w
(4) V has a zero vector 0 such that for every u in V, u+0=u
(5) For every u in V, there is a vector in V denoted by –u
such that u+(–u)=0
Scalar multiplication:
(6) cu is in V
(7) c(u  v )  cu  cv
(8) (c  d )u  cu  du
(9) c(du)  (cd )u
(10) 1(u)  u

※ Any set V that satisfies these ten properties (or axioms) is called a vector
space, and the objects in the set are called vectors
※ Thus, we can conclude that Rn is of course a vector space

 Four examples of vector spaces are introduced as follows. (It is
straightforward to show that these vector spaces satisfy the ten axioms above.)

(1) n-tuple space: R^n
(u1, u2, ..., un) + (v1, v2, ..., vn) = (u1 + v1, u2 + v2, ..., un + vn) (standard vector addition)
k(u1, u2, ..., un) = (ku1, ku2, ..., kun) (standard scalar multiplication for vectors)

(2) Matrix space: V = M_{m×n}
(the set of all m×n matrices with real-number entries)
Ex (m = n = 2):
[u11 u12; u21 u22] + [v11 v12; v21 v22] = [u11+v11 u12+v12; u21+v21 u22+v22] (standard matrix addition)
k[u11 u12; u21 u22] = [ku11 ku12; ku21 ku22] (standard scalar multiplication for matrices)
(3) Polynomial space (degree n or less): V = P_n
(the set of all real polynomials of degree n or less)
p(x) + q(x) = (a0 + b0) + (a1 + b1)x + ... + (an + bn)x^n
kp(x) = ka0 + ka1 x + ... + kan x^n
※ Because the set of real numbers is closed under addition and
multiplication, it is straightforward to show that P_n satisfies the ten
axioms and thus is a vector space.
(4) Continuous function space: V = C(−∞, ∞)
(the set of all real-valued continuous functions defined on the
entire real line)
(f + g)(x) = f(x) + g(x)
(kf)(x) = kf(x)
※ Because the sum of two continuous functions is continuous,
and the product of a scalar and a continuous function is still a
continuous function, C(−∞, ∞) is a vector space.
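For instance, elements of P_n can be represented as coefficient lists [a0, a1, ..., an]; this sketch (the helper names are illustrative, not from the lecture) mirrors the definitions of p(x) + q(x) and kp(x):

```python
# P_2 elements as coefficient lists [a0, a1, a2].
def poly_add(p, q):
    return [a + b for a, b in zip(p, q)]

def poly_scale(k, p):
    return [k * a for a in p]

p = [1, 0, 2]    # 1 + 2x^2
q = [3, 1, -2]   # 3 + x - 2x^2

print(poly_add(p, q))    # [4, 1, 0], i.e. 4 + x
print(poly_scale(3, p))  # [3, 0, 6], i.e. 3 + 6x^2
```

Closure is visible here: adding two degree-2 coefficient lists always yields another list of the same length, i.e. another element of P_2.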
 Summary of important vector spaces:
R = set of all real numbers
R^2 = set of all ordered pairs
R^3 = set of all ordered triples
R^n = set of all n-tuples
C(−∞, ∞) = set of all continuous functions defined on the real number line
C[a, b] = set of all continuous functions defined on a closed interval [a, b]
P = set of all polynomials
P_n = set of all polynomials of degree ≤ n
M_{m,n} = set of m×n matrices
M_{n,n} = set of n×n square matrices
※ Each element in a vector space is called a vector, so a vector can be a real
number, an n-tuple, a matrix, a polynomial, a continuous function, etc.
 Theorem 2: Properties of scalar multiplication
Let v be any element of a vector space V, and let c be any
scalar. Then the following properties are true:
(1) 0v = 0
(2) c0 = 0
(3) If cv = 0, then either c = 0 or v = 0
(4) (−1)v = −v (the additive inverse of v equals (−1)v)

 Notes: To show that a set is not a vector space, you need
only find one axiom that is not satisfied.
 Ex 1: The set V of all integers is not a vector space.
Pf: 1 ∈ V, and 1/2 is a real-number scalar, but
(1/2)(1) = 1/2 ∉ V, a noninteger
(the set is not closed under scalar multiplication)

 Ex 2: The set V of all (exactly) second-degree polynomial functions is
not a vector space.
Pf: Let p(x) = x^2 and q(x) = −x^2 + x + 1.
Then p(x) + q(x) = x + 1 ∉ V
(the set is not closed under vector addition)
 Ex 3: The set of singular matrices is not a subspace of M_{2×2}.
Let W be the set of singular (noninvertible) matrices of
order 2. Show that W is not a subspace of M_{2×2} with the
standard matrix operations.

Sol:
A = [1 0; 0 0] ∈ W,  B = [0 0; 0 1] ∈ W
A + B = [1 0; 0 1] ∉ W (W is not closed under vector addition)
Thus W is not a subspace of M_{2×2}.
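The counterexample can be checked numerically (a small sketch using numpy determinants):

```python
import numpy as np

# A and B are singular, but their sum is the identity, which is invertible.
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])

det_A = np.linalg.det(A)        # 0.0: A is singular
det_B = np.linalg.det(B)        # 0.0: B is singular
det_sum = np.linalg.det(A + B)  # nonzero: A + B is invertible, so not in W
print(det_A, det_B, det_sum)
```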

(3) The standard basis for the matrix space M_{2×2}:
{ [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }

(4) The standard basis for P_n(x):
{1, x, x^2, ..., x^n}
Ex: for P_3(x): {1, x, x^2, x^3}
3.4 Spanning Sets and Linear Independence
 Linear combination:
A vector v in a vector space V is called a linear combination of
the vectors u1, u2, ..., uk in V if v can be written in the form
v = c1 u1 + c2 u2 + ... + ck uk,  where c1, c2, ..., ck are scalars.

Ex: Given v = (−1, −2, −2), u1 = (0, 1, 4), u2 = (−1, 1, 2), and
u3 = (3, 1, 2) in R^3, find a, b, and c such that v = a u1 + b u2 + c u3.

Sol: Equating components gives the system
−b + 3c = −1
a + b + c = −2
4a + 2b + 2c = −2
⇒ a = 1, b = −2, c = −1
Thus v = u1 − 2u2 − u3.
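The solution can be verified numerically (a quick numpy check of a = 1, b = −2, c = −1):

```python
import numpy as np

u1 = np.array([0, 1, 4])
u2 = np.array([-1, 1, 2])
u3 = np.array([3, 1, 2])
v = np.array([-1, -2, -2])

# a = 1, b = -2, c = -1
result = 1 * u1 - 2 * u2 - 1 * u3
print(result)  # [-1 -2 -2], equal to v
```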

Ex (finding a linear combination):
v1 = (1, 2, 3), v2 = (0, 1, 2), v3 = (−1, 0, 1)
Show that w = (1, 1, 1) is a linear combination of v1, v2, v3.

Sol: w = c1 v1 + c2 v2 + c3 v3
(1, 1, 1) = c1(1, 2, 3) + c2(0, 1, 2) + c3(−1, 0, 1)
          = (c1 − c3, 2c1 + c2, 3c1 + 2c2 + c3)
⇒ c1 − c3 = 1
  2c1 + c2 = 1
  3c1 + 2c2 + c3 = 1

 1 0 1 1  1 0 1 1 

  2 1 0 1  Gauss  Jordan
   0 1 2 1
 
 3 2 1 1  0 0 0 0 

 c1  1  t , c2  1  2t , c3  t

(this system has infinitely many solutions)


t 1
 w  2 v1  3v 2  v 3
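A numpy sketch confirms both the rank deficiency (the source of the free parameter t) and the particular solution at t = 1:

```python
import numpy as np

v1 = np.array([1, 2, 3])
v2 = np.array([0, 1, 2])
v3 = np.array([-1, 0, 1])
w = np.array([1, 1, 1])

A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 2 < 3, so the system is underdetermined

# t = 1 gives c1 = 2, c2 = -3, c3 = 1
print(2 * v1 - 3 * v2 + v3)      # [1 1 1], equal to w
```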

Basis
 A basis of V is a linearly independent set of vectors
in V which spans V.
 Example: for F^n, the standard basis {e1, e2, ..., en}.

 V is finite-dimensional if there is a finite basis.

The dimension of V is the number of elements of a basis.
(Independent of the choice of basis.)
AGC DSP: Linear Transformations
 A transformation from a vector space X to a vector
space Y over the same scalar field, denoted by
L : X → Y
is linear when
L(ax) = aL(x)
L(x1 + x2) = L(x1) + L(x2)
 where x, x1, x2 ∈ X and a is a scalar.
 We can think of the transformation as an operator.
Linear Transformations …
 Example: a linear mapping from R^n to R^m
can be expressed as an m×n matrix.
 Thus the transformation
L(x1, x2, x3) = (x1 + 2x2, 3x2 + 4x3)
can be written as
[y1; y2] = [1 2 0; 0 3 4] [x1; x2; x3]
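The matrix form of this transformation can be applied directly (a minimal numpy sketch; the input vector is an arbitrary example):

```python
import numpy as np

# L(x1, x2, x3) = (x1 + 2*x2, 3*x2 + 4*x3) as a 2x3 matrix.
L = np.array([[1, 2, 0],
              [0, 3, 4]])
x = np.array([1, 1, 1])

y = L @ x
print(y)  # [3 7]
```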
Linear Transformations
 Example: Let x = [x1 x2 x3 ... xm]^T

and let the transformation be an n×m matrix A with columns p1, p2, ..., pm.

Then

Ax = x1 p1 + x2 p2 + x3 p3 + ... + xm pm

Thus, the range of the linear transformation (or
column space of the matrix A) is the span of the
columns p1, ..., pm.
The null space is the set of vectors x for which Ax = 0.
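That Ax equals the combination x1 p1 + ... + xm pm of the columns can be checked directly (a sketch with an arbitrary 3×3 example matrix):

```python
import numpy as np

A = np.array([[1, 0, -1],
              [2, 1, 0],
              [3, 2, 1]])
x = np.array([2, -3, 1])

# Sum of the columns of A weighted by the entries of x ...
combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))
# ... equals the matrix-vector product A @ x.
print(combo, A @ x)
```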
A Problem
 Given a signal vector x in the vector space S, we
want to find the point v in the subspace V of S
nearest to x.
[Figure: the vector x in S, its nearest point v0 in the subspace V
(with other candidates v1, v2 shown), and the error vector w0 = x − v0.]
A Problem …
 Let us agree that "nearest to" in the figure is taken in
the Euclidean-distance sense.
 The projection v0 of x orthogonal to the set V gives the
desired solution.

 Moreover, the error of the representation is

w0 = x − v0
 This vector is clearly orthogonal to the set V (more
on this later).

Orthogonality Principle
 Let {p1, p2, p3, ..., pm} be a set of
independent vectors in a vector space S.
 We wish to express any vector x in S as
x = x1 p1 + x2 p2 + x3 p3 + ... + xm pm
 If x is in the span of the independent vectors, then
the representation will be exact.
 If, on the other hand, it is not, then there will be an
error.
Orthogonality Principle …
 Thus, for
p = [<x, p1>  <x, p2>  ...  <x, pm>]^T
χ = [x1  x2  ...  xm]^T
R = [ <p1, p1>  <p1, p2>  ..  <p1, pm>
      <p2, p1>  <p2, p2>  ..  <p2, pm>
      ..        ..        ..  ..
      <pm, p1>  <pm, p2>  ..  <pm, pm> ]

Hence p = Rχ, or χ = R^{-1} p
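A small numpy sketch of these normal equations (the vectors p1, p2 and x are arbitrary examples): build the Gram matrix R, solve χ = R^{-1}p, and check that the residual is orthogonal to each p_i:

```python
import numpy as np

P = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])        # columns are p1, p2 (independent)
x = np.array([1.0, 2.0, 3.0])

R = P.T @ P                        # Gram matrix of inner products <p_i, p_j>
p = P.T @ x                        # inner products <x, p_i>
chi = np.linalg.solve(R, p)        # coefficients chi = R^{-1} p
x_hat = P @ chi                    # nearest point in span{p1, p2}

residual = P.T @ (x - x_hat)
print(np.round(residual, 10))      # [0. 0.]: the error is orthogonal to p1, p2
```

Solving Rχ = p with `np.linalg.solve` is numerically preferable to forming R^{-1} explicitly, though both express the same relation.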

Orthogonalisation
 A signal may be projected onto any linear space.
 The computation of its coefficients in the various
vectors of the selected space is easier when the
vectors in the space are orthogonal, in that they are
then non-interacting, i.e. the evaluation of one such
coefficient will not influence the others.
 The error norm is also easier to compute.
 Thus it makes sense to use an orthogonal set of
vectors in the space onto which we are to project a
signal.
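The non-interaction claim can be sketched as follows: with orthogonal vectors q1, q2 (hypothetical examples), each coefficient comes from a single inner product, computed independently of the other:

```python
import numpy as np

q1 = np.array([1.0, 1.0, 0.0])
q2 = np.array([1.0, -1.0, 0.0])    # orthogonal to q1
x = np.array([3.0, 1.0, 0.0])      # x lies in span{q1, q2}

# Each coefficient is computed on its own; no joint system to solve.
c1 = np.dot(x, q1) / np.dot(q1, q1)
c2 = np.dot(x, q2) / np.dot(q2, q2)
print(c1, c2)                      # 2.0 1.0
print(c1 * q1 + c2 * q2)           # reconstructs x exactly
```

Contrast this with the non-orthogonal case above, where the Gram matrix R couples all the coefficients together.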

