Friday, September 27, 2013

7.5: span and nullspace and bases, oh my!

Section 7.5 is entitled “Bases of a Subspace.”

Let’s start with a definition: “If x1, x2, … , xk are vectors in Rn, we define the span of x1, … , xk to be the set of all linear combinations of x1, … , xk, and we will denote this set by span(x1, … , xk)” (308).

Allow me to remind you of the example I made up and did to show you what the span of a set of vectors is.
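In that same spirit, here's a small sketch using Python and SymPy, with vectors I'm just picking for illustration (these aren't the ones from the original example): any vector of the form a1x1 + a2x2 is, by definition, in span(x1, x2).

```python
# A made-up illustration: every vector of the form a1*x1 + a2*x2 is, by
# definition, a member of span(x1, x2).
from sympy import Matrix

x1 = Matrix([1, 0, 2])
x2 = Matrix([0, 1, 1])

for a1, a2 in [(1, 0), (0, 1), (2, 3), (-1, 4)]:
    combo = a1 * x1 + a2 * x2      # a linear combination of x1 and x2
    print(f"{a1}*x1 + {a2}*x2 =", list(combo))
```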
Similar to our properties of the nullspace, we have some handy properties about span as well. Assume x1, … , xk are vectors in Rn and let V = span(x1, … , xk). Then:
1. If x and y are vectors in V, then so is their sum x + y.
2. If x is in V and a is any number, then the product ax is also in V.

So these properties are basically the exact same for span and nullspace. There's actually a special name for sets of vectors that have these properties: a subspace. To qualify as a subspace, a set must be a nonempty subset of Rn with both of the properties above. This is actually really awesome, because if we have numbers a and b and vectors x and y in a subspace V, then ax and by are in V, and so is ax + by. This means that any linear combination of vectors in a subspace V will also be in V.

The zero vector is an element of every subspace. Also, the set containing only the zero vector is itself a subspace, and it will be denoted with a boldface zero, 0. Along with this really obvious subspace there is the total space, Rn, which is also a subspace. These two subspaces are known as the trivial subspaces.

Assume V is a subspace of Rn and V ≠ 0. Then there are vectors x1, … , xk such that V = span(x1, … , xk).

If V is indeed equal to this span, then we can say V is spanned by {x1, … , xk}. We can also say the set of vectors {x1, … , xk} spans V, or that {x1, … , xk} is a spanning set for V. It doesn't matter how we say it; V is the set of all linear combinations x = a1x1 + a2x2 + … + akxk, in which the coefficients a1, … , ak can be any numbers. If we think of the coefficients as parameters, then our linear combination equation is a parametric equation for the subspace V = span(x1, x2, … , xk).

So how do we know if a vector x is in the span of a set of other vectors? We have a method to follow for this question:

If you want to find out whether x is in span(x1, x2, … , xk), then you start by forming the matrix X = [x1, x2, … , xk], whose columns are those vectors. Then you solve the system Xa = x. There are two ways this can turn out (there's a little sketch of this right after the list):
1. If there is no solution, then x is not in the span.
2. If a = (a1, a2, … , ak)T is a solution, then x is in the span, and the entries of a are the coefficients of the linear combination.
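Here's a rough sketch of that test in Python with SymPy; the vectors are made up just for illustration, and linsolve does the work of solving Xa = x.

```python
# Membership test: x is in span(x1, x2) exactly when Xa = x has a solution.
from sympy import Matrix, S, linsolve, symbols

x1 = Matrix([1, 0, 2])
x2 = Matrix([0, 1, 1])
x  = Matrix([2, 3, 7])                   # is x in span(x1, x2)?

X = Matrix.hstack(x1, x2)                # X = [x1 x2], the vectors as columns
a1, a2 = symbols('a1 a2')
solutions = linsolve((X, x), [a1, a2])   # solve the system Xa = x

if solutions == S.EmptySet:
    print("No solution, so x is not in span(x1, x2).")
else:
    print("x is in span(x1, x2); coefficients:", next(iter(solutions)))
```

In this made-up case 2x1 + 3x2 equals x, so the script reports that x is in the span with coefficients (2, 3).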

Spanning sets for a subspace do not have to be unique. They don't even need to have the same number of vectors. Just as an example, span(x1, x2) can equal span(x1, x2, x3). However, we want a way to eliminate unneeded vectors from a spanning set. This want of ours leads to what is called linear dependence or independence. A set of vectors is linearly dependent if one or more of the vectors is unneeded to express the span. Here's a formal definition for you concerning independence:

“The vectors x1, x2, … , xk are linearly independent if the only linear combination of them that is equal to the zero vector is the trivial one where all of the coefficients are equal to 0. In symbols…” (312).
c1x1 + c2x2 + … + ckxk = 0 only when c1 = c2 = … = ck = 0.

So if you want to determine whether vectors (let's call them x1, x2, … , xk) in Rn are linearly dependent or independent, form a matrix (let's call it X) whose columns are your vectors x1, x2, … , xk. You'll find null(X), and there are two possible outcomes:
1. If null(X) = 0, then your vectors are linearly independent.
2. If you get a nonzero vector c = (c1, c2, … , ck)T, then you will have c1x1 + c2x2 + … + ckxk = 0, which means your vectors are linearly dependent.

Remember, you find null(X) by reducing X to row echelon form and reading off the solutions in terms of the free variables.
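And here's a rough sketch of that whole check, again with SymPy and made-up vectors; nullspace() returns a basis for null(X), so an empty list means null(X) = 0.

```python
# Independence check: the vectors are independent exactly when null(X) = 0.
from sympy import Matrix

x1 = Matrix([1, 0, 1])
x2 = Matrix([0, 1, 1])
x3 = Matrix([1, 1, 2])                   # x3 = x1 + x2, so we expect dependence

X = Matrix.hstack(x1, x2, x3)            # columns are the vectors being tested
null_basis = X.nullspace()               # a basis for null(X) (a list of vectors)

if not null_basis:
    print("null(X) = 0, so the vectors are linearly independent.")
else:
    c = null_basis[0]                    # a nonzero c with Xc = 0
    print("Dependent: c =", list(c), "gives c1*x1 + c2*x2 + c3*x3 = 0.")
```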

Moving on to a basis, which is (in a few words) a minimal spanning set. If you have a set of vectors {x1, x2, … , xk} in a subspace V, they are a basis of V if and only if they have the following two properties:
1. The vectors span V
2. The vectors are linearly independent

If V is a subspace of Rn, then V will have a basis. Furthermore, any two bases for V will have the exact same number of elements. These two facts bring us to a position where we can define the dimension of the subspace V: the dimension is the number of elements in any basis for V, and because all of V's bases have the same number of elements, the dimension is well defined. Hooray!

Something neat about Rn and its dimension can be shown, but since this is a summary and I'm getting progressively lazy as this blog goes on, I will just give you the answer here and now (on the basis that you believe the things I am writing): dim Rn = n. Also, something else to note about bases is that they are not unique. (The book stressed this note by putting it in italics, and so will I.)

If you want a good way of describing a subspace of Rn, then you can simply provide a basis. Also, the dimension of the nullspace of a matrix is the number of free variables in its row echelon form. Neat, right?

I guess it's time for an example since I've been spitting math words at you for a while and we should neatly tie everything together. I'm going to use a matrix from the book (it's also in reduced row echelon form, which is helpful in the context of time and space).


If we label each column by the respective variables x1, x2, x3, x4, and x5, then the free variables are x2, x4, and x5. We'll set x2 = s, x4 = t, and x5 = u, and then solve for our pivot variables (those being x1 and x3). In this case, null(C) is parameterized by

x = sv1 + tv2 + uv3,

where v1, v2, and v3 are the vectors we get by setting (s, t, u) equal to (1, 0, 0), (0, 1, 0), and (0, 0, 1), respectively.
Now we have to show that v1, v2, and v3 are linearly independent; this is what we need in order to say they form a basis of null(C). To do this, we consider a linear combination of our vectors:

c1v1 + c2v2 + c3v3.
So if we want this linear combination to be equal to the zero vector, then we need every entry of the resulting vector to equal zero. If we focus on the entries that come from our free variables (rows 2, 4, and 5, which are just c1, c2, and c3), then we see that c1, c2, and c3 must all equal zero. Thus our linear combination is trivial, and we can conclude that v1, v2, and v3 are linearly independent. Our conclusion would then be that null(C) is a subspace of R5 with basis v1, v2, and v3. Finally, because there are three vectors in the basis, the dimension of null(C) is equal to 3.
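If you want the computer to do this for you, here's a sketch. Since the book's matrix C isn't reproduced in this post, I'm using a stand-in matrix in reduced row echelon form with the same pattern (pivots in columns 1 and 3, free variables x2, x4, and x5); SymPy's nullspace() hands back a basis, and its length is the dimension.

```python
# Stand-in for the book's matrix C (same pivot/free-variable pattern,
# not the book's actual numbers), already in reduced row echelon form.
from sympy import Matrix

C = Matrix([[1, 2, 0, 3, 1],
            [0, 0, 1, 4, 2],
            [0, 0, 0, 0, 0]])

basis = C.nullspace()                    # a basis for null(C)
for i, v in enumerate(basis, start=1):
    print(f"v{i} =", list(v))

print("dim null(C) =", len(basis))       # 3, one basis vector per free variable
```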

One final thing to leave you with:

If we are given a spanning set for a subspace, we can find a basis by eliminating unneeded vectors from the spanning set.
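One standard way to do that trimming (this is my own sketch, not necessarily how the book presents it): put the vectors in as the columns of a matrix X and keep the columns that end up as pivot columns; those form a basis for the span. SymPy's rref() reports the pivot columns, and the vectors here are made up.

```python
# Trim a spanning set down to a basis: keep the pivot columns of X = [x1 x2 x3].
from sympy import Matrix

x1 = Matrix([1, 0, 1])
x2 = Matrix([2, 0, 2])                   # redundant: x2 = 2*x1
x3 = Matrix([0, 1, 1])

X = Matrix.hstack(x1, x2, x3)
_, pivot_cols = X.rref()                 # rref() returns (RREF matrix, pivot columns)

basis = [X.col(j) for j in pivot_cols]   # here that's columns 0 and 2, i.e. x1 and x3
print("pivot columns:", pivot_cols)
print("basis for the span:", [list(v) for v in basis])
```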

Okay, one more thing before I leave 7.5: If you’re slightly confused about what span is, I’ll provide some websites to possibly clear up any confusion. You could always read the book or go to class, though.



I’ll see you when I see you.
