Definition. A vector space (or linear space) consists of the following:
- a field F of scalars;
- a set V of objects, called vectors;
- a rule (or operation), called vector addition, which associates with each pair of vectors α,β in V a vector α+β in V, called the sum of α and β, in such a way that
(a) addition is commutative, α+β=β+α;
(b) addition is associative, α+(β+γ)=(α+β)+γ;
(c) there is a unique vector 0 in V, called the zero vector, such that α+0=α for all α in V;
(d) for each vector α in V there is a unique vector −α in V such that α+(−α)=0;
- a rule (or operation), called scalar multiplication, which associates with each scalar c in F and vector α in V a vector cα in V, called the product of c and α, in such a way that
(a) 1α=α for every α in V;
(b) (c1c2)α=c1(c2α);
(c) c(α+β)=cα+cβ;
(d) (c1+c2)α=c1α+c2α.
Definition. A vector β in V is said to be a linear combination of the vectors α1,…,αn in V provided there exist scalars c1,…,cn in F such that
β = c1α1 + ⋯ + cnαn = ∑_{i=1}^{n} ciαi
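Example. For instance, in R³ over R, the vector β=(3,1,2) is a linear combination of α1=(1,0,0) and α2=(0,1,2), since β=3α1+α2.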
Definition. Let V be a vector space over the field F. A subspace of V is a subset W of V which is itself a vector space over F with the operations of vector addition and scalar multiplication on V.
Theorem 1. A non-empty subset W of V is a subspace of V if and only if for each pair of vectors α,β in W and each scalar c in F the vector cα+β is in W.
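Example. As an illustration of the criterion, in R³ the set W of all (x1,x2,x3) with x1+x2+x3=0 is a subspace: W contains 0, and if α,β are in W and c is in R, the coordinates of cα+β sum to c·0+0=0, so cα+β is in W.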
Lemma. If A is an m×n matrix over F and B,C are n×p matrices over F then
A(dB+C) = d(AB) + AC for every scalar d in F.
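In particular, the lemma shows that the solution set of a homogeneous system of linear equations is a subspace: if X and Y are n×1 column matrices with AX=AY=0, then A(cX+Y)=c(AX)+AY=0 for every c in F, so by Theorem 1 the solutions of AX=0 form a subspace of the space of n×1 matrices over F.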
Theorem 2. Let V be a vector space over the field F. The intersection of any collection of subspaces of V is a subspace of V.
Definition. Let S be a set of vectors in a vector space V. The subspace spanned by S is defined to be the intersection W of all subspaces of V which contain S. When S is a finite set of vectors, S={α1,α2,…,αn}, we shall simply call W the subspace spanned by the vectors α1,α2,…,αn.
Theorem 3. The subspace spanned by a non-empty subset S of a vector space V is the set of all linear combinations of vectors in S.
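Example. In R³ the subspace spanned by S={(1,0,0),(0,1,0)} is the set of all linear combinations c1(1,0,0)+c2(0,1,0)=(c1,c2,0), that is, the plane x3=0.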
Definition. If S1,S2,…,Sk are subsets of a vector space V, the set of all sums α1+α2+⋯+αk of vectors αi in Si is called the sum of the subsets S1,S2,…,Sk and is denoted by S1+S2+⋯+Sk or by ∑i=1kSi.
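If W1,W2,…,Wk are subspaces of V, then the sum W1+W2+⋯+Wk is itself a subspace of V; in fact, it is the subspace spanned by the union W1∪W2∪⋯∪Wk.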
Definition. Let V be a vector space over F. A subset S of V is said to be linearly dependent (or simply, dependent) if there exist distinct vectors α1,α2,…,αn in S and scalars c1,c2,…,cn in F, not all of which are 0, such that
c1α1+⋯+cnαn=0
A set which is not linearly dependent is called linearly independent. If the set S contains only finitely many vectors α1,α2,…,αn, we sometimes say that α1,α2,…,αn are dependent (or independent) instead of saying S is dependent (or independent).
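Example. In R², the set {(1,2),(2,4)} is linearly dependent, since 2(1,2)−(2,4)=0 with the scalars 2,−1 not both 0; the set {(1,0),(0,1)} is linearly independent, since c1(1,0)+c2(0,1)=(c1,c2)=0 forces c1=c2=0.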
Definition. Let V be a vector space. A basis for V is a linearly independent set of vectors in V which spans the space V. The space V is finite-dimensional if it has a finite basis.
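Example. The standard basis of F^n is the set {ε1,…,εn}, where εi has 1 in the i-th coordinate and 0 elsewhere: every (x1,…,xn) equals x1ε1+⋯+xnεn, and such a combination is 0 only when every xi=0. Thus F^n is finite-dimensional, with a basis of n elements.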
Theorem 4. Let V be a vector space which is spanned by a finite set of vectors β1,β2,…,βm. Then any independent set of vectors in V is finite and contains no more than m elements.
Corollary 1. If V is a finite-dimensional vector space, then any two bases of V have the same (finite) number of elements. This number is called the dimension of V and is denoted dim V.
Corollary 2. Let V be a finite-dimensional vector space and let n = dim V. Then
(a) any subset of V which contains more than n vectors is linearly dependent;
(b) no subset of V which contains fewer than n vectors can span V.
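Example. In R² (so n=2), the three vectors (1,0),(0,1),(1,1) are linearly dependent, since (1,0)+(0,1)−(1,1)=0, and no single vector can span R².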
Theorem 5. If W is a subspace of a finite-dimensional vector space V, every linearly independent subset of W is finite and is part of a (finite) basis for W.
Corollary 1. If W is a proper subspace of a finite-dimensional vector space V, then W is finite-dimensional and dim W < dim V.
Corollary 2. In a finite-dimensional vector space V every non-empty linearly independent set of vectors is part of a basis.
Corollary 3. Let A be an n×n matrix over a field F, and suppose the row vectors of A form a linearly independent set of vectors in F^n. Then A is invertible.
Theorem 6. If W1 and W2 are finite-dimensional subspaces of a vector space V, then W1+W2 is finite-dimensional and
dim W1 + dim W2 = dim(W1∩W2) + dim(W1+W2)
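Example. Let W1 and W2 be the subspaces of R³ spanned by {(1,0,0),(0,1,0)} and {(0,1,0),(0,0,1)}, respectively. Then W1∩W2 is the line spanned by (0,1,0) and W1+W2=R³, so dim W1 + dim W2 = 2+2 = 4 = 1+3 = dim(W1∩W2) + dim(W1+W2).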
Definition. If V is a finite-dimensional vector space, an ordered basis for V is a finite sequence of vectors which is linearly independent and spans V.
Theorem 7. Let V be an n-dimensional vector space over the field F, and let B and B′ be two ordered bases of V. Then there is a unique, necessarily invertible, n×n matrix P with entries in F such that
[α]B = P[α]B′,  [α]B′ = P⁻¹[α]B
for every vector α in V. The columns of P are given by
Pj = [α′j]B,  j = 1,…,n,
where B′ = {α′1,…,α′n}.
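Example. In R², let B be the standard ordered basis and B′ = {α′1,α′2} with α′1=(1,1) and α′2=(1,−1). Then P is the 2×2 matrix with columns P1=[α′1]B=(1,1) and P2=[α′2]B=(1,−1). For α=(3,1) we get [α]B′ = P⁻¹[α]B = (2,1), and indeed 2α′1+α′2 = (3,1).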
Theorem 8. Suppose P is an n×n invertible matrix over F. Let V be an n-dimensional vector space over F, and let B be an ordered basis of V. Then there is a unique ordered basis B′ of V such that
[α]B = P[α]B′,  [α]B′ = P⁻¹[α]B
for every vector α in V.
Theorem 9. Row-equivalent matrices have the same row space.
Theorem 10. Let R be a non-zero row-reduced echelon matrix. Then the non-zero row vectors of R form a basis for the row space of R.
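Example. The row-reduced echelon matrix R with rows (1,0,2) and (0,1,−1) has row space spanned by those two rows, and c1(1,0,2)+c2(0,1,−1)=(c1,c2,2c1−c2) is 0 only when c1=c2=0; so the non-zero rows form a basis, and the row space of R has dimension 2.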
Theorem 11. Let m and n be positive integers and let F be a field. Suppose W is a subspace of F^n and dim W ≤ m. Then there is precisely one m×n row-reduced echelon matrix over F which has W as its row space.
Corollary. Each m×n matrix A is row-equivalent to one and only one row-reduced echelon matrix.
Corollary. Let A and B be m×n matrices over the field F. Then A and B are row-equivalent if and only if they have the same row space.