
Linear Algebra and Matrix Theory

Part 2 - Vector Spaces

1. References

(1) S. Friedberg, A. Insel and L. Spence, Linear Algebra, Prentice-Hall.

(2) M. Golubitsky and M. Dellnitz, Linear Algebra and Differential Equa-

tions Using Matlab, Brooks-Cole.

(3) K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall.

(4) P. Lancaster and M. Tismenetsky, The Theory of Matrices, Aca-

demic Press.

2. Definition and Examples

Briefly, a vector space consists of a set of objects called vectors along

with a set of objects called scalars. The vector space axioms concern the

algebraic relationships among the vectors and scalars. They may be found

in any of the above references. Informally, they are as follows. Vectors can

be added to form new vectors. Vectors can also be multiplied by scalars to

form new vectors. There is one vector, called the zero vector, that acts as

an identity element with respect to addition of vectors. Each vector has a

negative associated with it. The sum of a vector and its negative is the zero

vector. Addition of vectors is associative and commutative. In summary,

the set of vectors with the operation of vector addition is an abelian group.

The set of scalars is an algebraic field, such as the field of real numbers or

the field of complex numbers. Multiplication of scalars by vectors distributes

over addition of vectors and also over addition of scalars. The product of

the unit element 1 of the field with any vector is that vector. Finally,

multiplication of vectors by scalars is associative.

Generically, we denote the set of vectors by V and the field of scalars by F.

It is common to say that V is a vector space over F. Individual vectors will

be denoted by lower case Latin letters and individual scalars by lower case

Greek letters. We use the same symbol + to denote both addition of scalars

and addition of vectors. Multiplication of scalars by scalars or scalars by

vectors is indicated simply by conjoining the symbols for the multiplicands.

Example 2.1. Let F be either R or C and let V = Fm×n . Two vectors (ma-

trices) are added by adding corresponding entries and a matrix is multiplied

by a scalar by multiplying each entry by that scalar. The zero matrix is the

matrix all of whose entries are 0. The negative of a matrix is obtained by

multiplying it by -1. If m = 1 the vectors of this space are called row vectors

and if n = 1 they are called column vectors.
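The operations in this example are easy to check numerically. The sketch below uses NumPy (an assumed tool; the example itself is software-independent) with F = R and m = n = 2:

```python
import numpy as np

# Two "vectors" in R^{2x2}; addition and scalar multiplication are entrywise.
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])

S = A + B             # vector addition
C = 3.0 * A           # scalar multiplication
Z = np.zeros((2, 2))  # the zero matrix (additive identity)
N = -1.0 * A          # the negative of A, obtained by multiplying by -1

assert np.array_equal(A + Z, A)      # zero matrix acts as the identity
assert np.array_equal(A + N, Z)      # A plus its negative is the zero matrix
assert np.array_equal(A + B, B + A)  # addition is commutative
```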

Example 2.2. Two directed line segments in the Euclidean plane are equiv-

alent if they have the same length and the same direction. Let V be the set

of equivalence classes. If u and v are vectors (i.e., elements of V), choose

representative line segments such that the representative of v begins where

the representative of u ends. Then u + v is represented by the line segment



from the initial point of u to the terminal point of v. To multiply a vector u

by a real number α, let αu be represented by a line segment whose length is

|α| times the length of u. Let αu have the same direction as u if α > 0 and

the opposite direction if α < 0. These are the familiar geometric vectors of

elementary mathematics. We leave it to you to explain how the zero vector

and negatives of vectors are defined and to verify the other properties of a

vector space.

Example 2.3. Let F = R and let V = C k (0, 1), the set of all functions u :

(0, 1) → R with at least k continuous derivatives on (0, 1). Vector addition

and scalar multiplication are defined pointwise: (u + v)(t) = u(t) + v(t) and

(αu)(t) = αu(t) for each t ∈ (0, 1).
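The pointwise operations of Example 2.3 translate directly into code. A minimal sketch in plain Python, with vectors represented as callables (the specific functions are illustrative):

```python
import math

# Vectors are functions u : (0, 1) -> R, represented here as callables;
# addition and scalar multiplication are defined pointwise, as in the text.
def add(u, v):
    return lambda t: u(t) + v(t)

def scale(alpha, u):
    return lambda t: alpha * u(t)

u = math.sin              # smooth on (0, 1), so u lies in C^k(0, 1) for every k
v = lambda t: t ** 2

w = add(u, v)
s = scale(3.0, u)

t = 0.5
assert w(t) == u(t) + v(t)     # (u + v)(t) = u(t) + v(t)
assert s(t) == 3.0 * u(t)      # (alpha u)(t) = alpha u(t)
```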

3. Subspaces

Let V be a vector space over F and let W be a nonempty subset of V. We

say that W is a subspace of V if it is a vector space over F with the operations

of vector addition and scalar multiplication inherited from V. This means

that W is closed under vector addition and scalar multiplication and that it

contains the zero vector of V. There is a simple test for when a subset of V

is a subspace.

Theorem 3.1. W is a subspace of V if and only if αu + βv ∈ W for all

u, v ∈ W and all α, β ∈ F.

Example 3.1. In Example 2.3, let W = C r (0, 1), where r > k. Then W is a subspace of V, since a linear combination of C r functions is again C r .



Definition 3.1. Let V be a vector space over F and let S be a nonempty

subset of V. The span of S is the subspace

sp(S) = {α1 u1 + · · · + αk uk |k ∈ N; α1 , · · · , αk ∈ F; u1 , · · · , uk ∈ S}

We also say that sp(S) is the subspace spanned by the elements of S.

Definition 3.2. Let A ∈ Fm×n . The row space of A is the subspace of F1×n

spanned by the rows of A. The column space of A is the subspace of Fm×1

spanned by the columns of A. The null space of A is the set of all solutions

x ∈ Fn×1 of the homogeneous system Ax = 0.

There are short ways of denoting these subspaces. The row space is

{yA|y ∈ F1×m }, the column space is {Ax|x ∈ Fn×1 }, and the null space is

{x ∈ Fn×1 |Ax = 0}.

Theorem 3.2. Row equivalent matrices have the same row space and null

space.

Subspaces can be combined to form new subspaces. One way is by taking

their intersection. Another is by forming their sum, defined as follows.

Definition 3.3. Let U and W be subspaces of a vector space V. Their sum

is

U + W = {u + w|u ∈ U, w ∈ W}.

Theorem 3.3. U ∩ W and U + W are subspaces of V.



Solved Problems:

1. Tell whether the following sets S are subspaces of V or not. If the answer

is no, explain.

(1) V = R1×2 , S = {(x1 , x2 )|x1 = 0 or x2 = 0}.

(2) V = Fn×1 , S = {x|Ax = b}, where A ∈ Fm×n and b ∈ Fm×1 are

given, b ≠ 0.

(3) V = C 2 (0, 1), S = all real solutions y on (0, 1) of the homogeneous

differential equation y′′ − 2y′ + y = 0.

(4) V = C4×4 , F = C, S = {H ∈ V|H ∗ = H}, where H ∗ is the conjugate

of H t .

(5) Same V and S as in the preceding problem, but F = R.

Solution:

(1) Not a subspace because it is not closed under vector addition. For

example, (1, 0) and (0, 1) are both in S but (1, 1) is not.

(2) Not a subspace: it is not closed under vector addition or scalar

multiplication, and it does not contain the zero vector. Any one of

these failures suffices.

(3) S is a subspace of V.

(4) Not a subspace because it is not closed under scalar multiplication

by elements of C.

(5) S is a subspace. Incidentally, a complex matrix satisfying H ∗ = H

is called hermitian.

2. Describe in the simplest possible terms the row space of the matrix

    [ 1 2 0 1 ]
    [ 0 1 1 0 ]
    [ 1 2 0 1 ]

Solution: The reduced row echelon form has the same row space as the

given matrix. It is

    [ 1 0 −2 1 ]
    [ 0 1  1 0 ]
    [ 0 0  0 0 ]

The row space is the span of the rows of this matrix, i.e., the set of row

vectors of the form

(α1 , α2 , −2α1 + α2 , α1 )

where α1 and α2 are arbitrary elements of F.
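The hand reduction above can be reproduced with SymPy (an assumed tool; the matrix is the one from the problem):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 2, 0, 1]])

R, pivots = A.rref()
assert R == sp.Matrix([[1, 0, -2, 1],
                       [0, 1,  1, 0],
                       [0, 0,  0, 0]])

# A generic element of the row space: a1*(first row) + a2*(second row).
a1, a2 = sp.symbols('a1 a2')
row = a1 * R.row(0) + a2 * R.row(1)
assert row == sp.Matrix([[a1, a2, -2*a1 + a2, a1]])
```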

3. Describe in simplest possible terms the null space of the same matrix.

Solution: The reduced row echelon form has the same null space. A vector

x = (x1 , x2 , x3 , x4 )t is in the null space if and only if

x1 − 2x3 + x4 = 0

x2 + x3 = 0

You may think of two of the variables, say x1 and x2 , as having arbitrary

values and the other two variables as being determined by them according

to the last set of equations. After substituting, the null space is the set of

all vectors of the form

(x1 , x2 , −x2 , −x1 − 2x2 )

where x1 and x2 are arbitrary elements of F.
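The parametric description of the null space can be verified symbolically (again, SymPy is an assumed dependency):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 2, 0, 1]])

# Every vector of the form (x1, x2, -x2, -x1 - 2*x2)^t should satisfy Ax = 0.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2, -x2, -x1 - 2*x2])
assert (A * x).expand() == sp.zeros(3, 1)

# The null space is 2-dimensional, so this 2-parameter family is all of it.
assert len(A.nullspace()) == 2
```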

4. Describe in simplest terms the column space of the same matrix.

Solution: A vector y is in the column space of A if and only if there is a

solution of the linear system Ax = y. In the present case the augmented

matrix is

    [ 1 2 0 1 | y1 ]
    [ 0 1 1 0 | y2 ]
    [ 1 2 0 1 | y3 ]

Now row-reduce the augmented matrix to get the coefficient part in row

echelon form, with symbolic calculations in the last column. The result is

    [ 1 0 −2 1 | y1 − 2y2 ]
    [ 0 1  1 0 | y2       ]
    [ 0 0  0 0 | y3 − y1  ]

Obviously, this system has a solution if and only if y3 − y1 = 0. The column

space is the set of all vectors y = (y1 , y2 , y3 )t with y3 = y1 .
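The solvability criterion can be rephrased as a rank test — Ax = y is solvable exactly when appending y to A does not raise the rank — and checked mechanically (SymPy assumed):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 2, 0, 1]])

# y is in the column space iff Ax = y is solvable,
# iff appending y as a column does not raise the rank.
def in_column_space(y):
    return A.col_insert(4, sp.Matrix(y)).rank() == A.rank()

assert in_column_space([1, 0, 1])      # y3 == y1: solvable
assert in_column_space([2, 5, 2])
assert not in_column_space([1, 0, 0])  # y3 != y1: no solution
```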

Unsolved Problems:

1. Describe the row space, the null space and the column space of the

following matrix.

    [ 1 2 1 ]
    [ 1 0 1 ]
    [ 1 1 1 ]

2. Show that the intersection of two subspaces of a vector space is a sub-

space.

3. Let S1 and S2 be nonempty subsets of a vector space V. Show that

sp(S1 ) + sp(S2 ) = sp(S1 ∪ S2 ).

4. Let A ∈ Fm×n . Show that if y ∈ F1×n is in the row space of A and

x ∈ Fn×1 is in the null space of A, then yx = 0.

4. Linear Independence, Bases and Coordinates

Definition 4.1. Vectors v1 , · · · , vm in a vector space V over F are linearly

dependent if there are scalars α1 , · · · , αm , not all zero, such that α1 v1 +

· · · + αm vm = 0. Otherwise, v1 , · · · , vm are linearly independent.

An informal way of expressing linear dependence is to say that there is

a non-trivial linear combination of the given vectors which is equal to the

zero vector.

Definition 4.2. A vector space V is finite-dimensional if there is a finite

linearly independent set of vectors in V which spans V. Such a set of vectors

is called a basis for V.



Example 4.1. In F1×n , let ei denote the vector whose entries are all 0

except the ith , which is 1. Then {e1 , · · · , en } is a basis for F1×n called the

standard basis. In Fm×n , let Ei,j denote the matrix all of whose entries are

0 except the i, j th , which is 1. Then {Ei,j |1 ≤ i ≤ m; 1 ≤ j ≤ n} is a basis

for Fm×n .

There is never just one basis for a finite-dimensional vector space. If

v1 , · · · , vm is a basis then so, for example, is 2v1 , · · · , 2vm . However, we

have the following theorem.

Theorem 4.1. Any two bases for a finite-dimensional vector space have

the same number of elements. The set {v1 , · · · , vm } is a basis for V if and

only if for each vector u ∈ V there are unique scalars α1 , · · · , αm such that

u = α1 v1 + · · · + αm vm .

Definition 4.3. The dimension of a finite-dimensional vector space is the

number of elements in a basis. The dimension of V is denoted by dim(V).

If dim(V) = m and v1 , · · · , vk are linearly independent vectors in V, then

k ≤ m and if k < m we may extend this set of vectors to form a basis

v1 , · · · , vk , vk+1 , · · · , vm . Similarly, if u1 , · · · , ur spans V then r ≥ m and we

may select a subset of m of the given vectors which forms a basis for V.

Definition 4.4. Let B = {v1 , · · · , vm } be an ordered basis for V and let


u ∈ V. The unique m-tuple of scalars (α1 , · · · , αm ) such that α1 v1 + · · · + αm vm = u

is called the coordinate vector of u relative to the basis B. It is usually

represented as a column vector in Fm×1 :

           [ α1 ]
    [u]B = [  ⋮ ]
           [ αm ]

Solved Problems:

1. Show that any set of more than m vectors in Fm×1 is linearly dependent.

Solution: Suppose n > m vectors are given. Let A ∈ Fm×n be a matrix

whose columns are the given vectors. These vectors are linearly dependent

if and only if the homogeneous linear system Ax = 0 has a solution other

than x = 0. Since row equivalent matrices have the same null space, we

may assume that A is in reduced row echelon form. There are at least

n − m columns of A that do not contain leading 1’s. The variables corre-

sponding to these columns may be assigned arbitrary values. The variables

corresponding to the columns with leading 1’s are determined once these are

specified.
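The existence argument can be illustrated numerically. The sketch below (NumPy assumed, with illustrative data) produces an explicit nontrivial dependence relation among four vectors in R3×1, using the SVD in place of the row reduction described above:

```python
import numpy as np

# Four vectors in R^{3x1}, taken as the columns of A (illustrative data).
A = np.array([[1., 0., 1., 2.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])

# rank(A) <= 3 < 4, so Ax = 0 has a nontrivial solution.  Using the SVD,
# the last right-singular vector spans the null space here.
_, s, Vt = np.linalg.svd(A)
x = Vt[-1]

assert np.allclose(A @ x, 0.0)             # a dependence relation among the columns
assert np.isclose(np.linalg.norm(x), 1.0)  # x is a unit vector, hence nonzero
```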

2. Show that any set of fewer than m vectors in Fm×1 does not span Fm×1 .

Solution: Again, let A be an m × n matrix whose columns are the given

vectors and n < m. These vectors fail to span Fm×1 if and only if there is

a vector y ∈ Fm×1 such that the system Ax = y has no solution. For such

a vector y and for any invertible matrix Q, the system QAx = Qy has no

solution. Thus, we may suppose that A is in reduced row echelon form to



begin with. Since A has more rows than columns, its last row must be all

zeros. Therefore, if we choose ym ≠ 0 the system has no solution.

3. In F3×1 , let v1 = (1, 0, 0)t , v2 = (1, −1, 0)t , and v3 = (1, 0, −1)t . Show

that B = {v1 , v2 , v3 } is a basis and find the coordinate vector of u = (0, 1, 0)t

relative to this basis.

Solution: Let A be a matrix with columns v1 , v2 , and v3 .

        [ 1  1  1 ]
    A = [ 0 −1  0 ]
        [ 0  0 −1 ]

B is a basis if and only if it is a linearly independent spanning set of vectors,

which is equivalent to invertibility of A. Clearly, A is invertible because its

determinant is 1.

To find the coordinates of u, we must find scalars α1 , α2 , and α3 such

that u = α1 v1 + α2 v2 + α3 v3 . Then [u]B = (α1 , α2 , α3 )t is the solution of

A[u]B = u. The augmented matrix is

    [ 1  1  1 | 0 ]
    [ 0 −1  0 | 1 ]
    [ 0  0 −1 | 0 ]

The solution is

           [  1 ]
    [u]B = [ −1 ]
           [  0 ]
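The same computation in floating point (NumPy is an assumed tool):

```python
import numpy as np

# Columns of A are the basis vectors v1, v2, v3; solve A [u]_B = u.
v1, v2, v3 = [1., 0., 0.], [1., -1., 0.], [1., 0., -1.]
A = np.column_stack([v1, v2, v3])
u = np.array([0., 1., 0.])

coords = np.linalg.solve(A, u)

assert np.allclose(coords, [1., -1., 0.])  # matches the hand computation
assert np.allclose(A @ coords, u)          # u = 1*v1 + (-1)*v2 + 0*v3
```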
Unsolved Problems:

1. Let V be the set of all polynomials of degree ≤ 3 with coefficients in

F. Constant functions are included as polynomials of degree 0. With the

obvious definitions of addition and multiplication by elements of F, V is a

vector space over F of dimension 4. Find a basis for this vector space. With

respect to this basis, find the coordinates of the vector v(z) = −1 + 2z 2 .

2. Let V be the real vector space of all 3×3 hermitian matrices with complex

entries. Find a basis for V.

5. Change of Coordinates Under Change of Basis

Let B1 = {v1 , · · · , vn } be a basis for V. Let Q = (qi,j ) ∈ Fn×n and define

a new set of vectors B2 = {u1 , · · · , un } by the equations


    uj = q1,j v1 + q2,j v2 + · · · + qn,j vn ,    j = 1, · · · , n.

Theorem 5.1. B2 is a basis if and only if the matrix Q is nonsingular. If

so, then the coordinate vectors [w]1 and [w]2 with respect to the bases B1 and

B2 of a vector w are related by

[w]1 = Q[w]2

The matrix Q is called the transition matrix from one basis to the other. If

two bases are given, then the transition matrix is completely determined by

expressing the elements of one basis as linear combinations of the elements

of the other basis.
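A small numerical illustration of Theorem 5.1 (NumPy assumed; the matrix Q and the coordinates are illustrative):

```python
import numpy as np

# Columns of Q express the new basis vectors u_j in terms of the old basis.
Q = np.array([[1.,  1.,  1.],
              [0., -1.,  0.],
              [0.,  0., -1.]])
assert abs(np.linalg.det(Q)) > 1e-12  # Q nonsingular, so B2 is a basis

w_2 = np.array([2., 1., 3.])          # coordinates of w relative to B2
w_1 = Q @ w_2                         # Theorem 5.1: [w]_1 = Q [w]_2

# Going the other way requires Q^{-1}: [w]_2 = Q^{-1} [w]_1.
assert np.allclose(np.linalg.solve(Q, w_1), w_2)
```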
