Linear Algebra - Basis of a Vector Space


1 - About

A basis for a vector space V is a linearly independent set of generators for V.

Thus a set S of vectors of V is a basis for V if S satisfies two properties:

  • Property B1 (Spanning) Span S = V, and
  • Property B2 (Independent) S is linearly independent.

This is the most important definition in linear algebra.

A set of vectors S is a basis for the span of another set of vectors T if:

  • the span of S equals the span of T, and
  • S is a linearly independent set.

In particular, if a set of vectors S is a basis for the span of a set of vectors T, then Span S = Span T.


3 - Computation

# coldict2mat and solve are assumed to come from the accompanying matrix
# library (matutil and solver in the Coding the Matrix resources).
from matutil import coldict2mat
from solver import solve

def isABasis(L, i):
    '''
    Input:
        - L: list of vectors as instances of the Vec class
        - i: integer in range(len(L))
    Output:
        False if the span of the vectors of L is the same as the span
        of the vectors of L excluding L[i] (i.e. L[i] is superfluous).
        True otherwise.
    '''
    if len(L) == 1:
        return True
    littleL = [v for (j, v) in enumerate(L) if j != i]
    A = coldict2mat(littleL)
    b = L[i]
    # The procedure solve(A, b), given a Mat A and a Vec b,
    # returns a vector u such that A*u = b if such a vector exists.
    u = solve(A, b)
    # Check that u is in fact a solution to the equation Ax = b:
    # the solution returned by solve is approximate due to round-off error,
    # so we compute the residual
    residual = b - A*u
    # and test whether its dot product with itself is close to zero.
    if residual * residual < 10**(-14):
        return False
    else:
        return True
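
For readers without the Vec/Mat helpers above, here is a minimal equivalent sketch of the same span-membership test using NumPy (NumPy, the function name is_needed, and the tolerance are assumptions of this sketch, not part of the original procedure):

import numpy as np

def is_needed(L, i, tol=1e-10):
    '''True if removing L[i] (vectors given as lists of numbers) shrinks the span of L.'''
    if len(L) == 1:
        return True
    A = np.column_stack([v for j, v in enumerate(L) if j != i]).astype(float)  # columns = the other vectors
    b = np.asarray(L[i], dtype=float)
    # Least-squares solve; if the residual is ~0, b lies in the span of the columns of A.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = b - A @ x
    return residual @ residual > tol

# Example: [1, 1, 0] is the sum of the first two vectors, so it is superfluous,
# while [0, 0, 1] is not in the span of the others.
L = [[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
print(is_needed(L, 2))  # False: the span is unchanged without [1, 1, 0]
print(is_needed(L, 3))  # True:  removing [0, 0, 1] shrinks the span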

3.1 - Grow

Initialize S to the empty set.
Repeat while possible: select a vector v in V that is not in Span S, and put it in S.

The Grow algorithm finds a basis for V if it terminates.

At each point in the execution of the algorithm, the set S is linearly independent.
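
The Grow algorithm can be sketched concretely for the common case where V is given as the span of a finite list of candidate vectors. The sketch below uses NumPy and a rank test to decide whether a vector lies outside Span S (both are assumptions of the sketch; the original works with the course's Vec/Mat types):

import numpy as np

def grow_basis(candidates, tol=1e-10):
    '''Greedily build a basis for the span of the candidate vectors (given as rows).'''
    S = []                                   # the growing, linearly independent set
    for v in candidates:
        v = np.asarray(v, dtype=float)
        # v lies outside Span S exactly when adding it increases the rank.
        if np.linalg.matrix_rank(np.array(S + [v]), tol=tol) > len(S):
            S.append(v)
    return S

print(grow_basis([[1, 0, 2, 0], [0, -1, 0, -2], [2, 2, 4, 4]]))
# Keeps the first two vectors; [2, 2, 4, 4] is already in their span.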

4 - Size

All bases for a vector space have the same size.

Morphing Lemma: Let V be a vector space. Suppose S is a set of generators for V, and B is a basis for V. Then |B| ≤ |S| (the cardinality of B is at most the cardinality of S).

Theorem:

  • Any basis for V is a smallest generating set for V.
  • All bases for a vector space V have the same size.
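
A small numerical illustration of the theorem (NumPy assumed): whichever basis you choose for the span of a set of vectors, its size equals the rank of the matrix built from those vectors.

import numpy as np

# Four generators of a 2-dimensional subspace of R^4.
T = np.array([[1, 0, 2, 0], [0, -1, 0, -2], [2, 2, 4, 4], [1, 1, 2, 2]], dtype=float)
# Every basis for Span T has exactly this many elements:
print(np.linalg.matrix_rank(T))   # 2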

5 - Lemma

5.1 - Unique-Representation

Let <math>a_1, \dots , a_n</math> be a basis for V. For any vector <math>v \in V</math>, there is exactly one representation of v in terms of the basis vectors.

Proof: Let v be any vector in V. The vectors <math>a_1, \dots , a_n</math> span V, so there is at least one representation of v in terms of the basis vectors.

Suppose there are two such representations:

<MATH>v = \alpha_1 a_1 + \dots + \alpha_n a_n = \beta_1 a_1 + \dots + \beta_n a_n</MATH>

We get the zero vector by subtracting one from the other:

<MATH> \begin{array}{rcl} 0 & = & \alpha_1 a_1 + \dots + \alpha_n a_n - (\beta_1 a_1 + \dots + \beta_n a_n)\\ & = & (\alpha_1 - \beta_1) a_1 + \dots + (\alpha_n - \beta_n) a_n \end{array} </MATH>

Since the vectors <math>a_1, \dots , a_n</math> are linearly independent, the coefficients <math>\alpha_1 - \beta_1, \dots, \alpha_n - \beta_n</math> must all be zero, so the two representations are really the same.


5.2 - Subset

Every finite set T of vectors contains a subset S that is a basis for Span T.
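
One way to compute such a subset is to repeatedly discard a superfluous vector, in the spirit of the isABasis test above (a Shrink-style sketch; NumPy and the function name shrink_to_basis are assumptions, not part of the original):

import numpy as np

def shrink_to_basis(T, tol=1e-10):
    '''Discard superfluous vectors from T (rows) until the rest is a basis for Span T.'''
    S = [np.asarray(v, dtype=float) for v in T]
    changed = True
    while changed:
        changed = False
        for i in range(len(S)):
            rest = S[:i] + S[i+1:]
            # S[i] is superfluous if dropping it leaves the dimension of the span unchanged.
            if rest and np.linalg.matrix_rank(np.array(rest), tol=tol) == np.linalg.matrix_rank(np.array(S), tol=tol):
                del S[i]
                changed = True
                break
    return S

print(shrink_to_basis([[1, 0, 2, 0], [0, -1, 0, -2], [2, 2, 4, 4]]))
# Two of the three vectors remain; they form a basis for the span of all three.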

5.3 - Superset-Basis

Let V be a vector space. Let C be a linearly independent set of vectors belonging to V. Then V has a basis S containing all vectors in C.

Computation (Grow algorithm):

Initialize S to the empty set.
Repeat while possible: select a vector v in V (preferably in C) that is not in Span S, and put it in S.
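
A sketch of this variant, starting from the independent set C and drawing the extra vectors from a finite candidate list that spans V (NumPy, the candidate list, and the name extend_to_basis are assumptions of the sketch):

import numpy as np

def extend_to_basis(C, candidates, tol=1e-10):
    '''Extend the linearly independent set C (rows) to a basis, taking C's own vectors first.'''
    S = [np.asarray(v, dtype=float) for v in C]
    for v in candidates:
        v = np.asarray(v, dtype=float)
        if np.linalg.matrix_rank(np.array(S + [v]), tol=tol) > len(S):
            S.append(v)                      # v is not in Span S, so keep it
    return S

# Extend C = {[1, 1, 0]} to a basis of R^3, using the standard basis vectors as candidates.
print(extend_to_basis([[1, 1, 0]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]]))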

6 - Theorem

  • For finite D, every subspace of <math>\mathbb{F}^D</math> contains a basis.

7 - Change of basis

7.1 - From a vector to its coordinate representation

Suppose we have a basis <math>a_1, \dots , a_n</math> for some vector space V. How do we go

  • from a vector b in V
  • to the coordinate representation u of b in terms of <math>a_1, \dots , a_n</math>?

By linear-combinations definition of matrix-vector multiplication,

<MATH> \begin{bmatrix} \begin{array}{r|r|r} a_1 & \dots & a_n \end{array} \end{bmatrix} * \begin{bmatrix} \begin{array}{rrr} & u & \end{array} \end{bmatrix} = \begin{bmatrix} \begin{array}{rrr} & b & \end{array} \end{bmatrix} </MATH>

By Unique-Representation Lemma, u is the only solution to the equation

<MATH> \begin{bmatrix} \begin{array}{r|r|r} a_1 & \dots & a_n \end{array} \end{bmatrix} * \begin{bmatrix} \begin{array}{rrr} & x & \end{array} \end{bmatrix} = \begin{bmatrix} \begin{array}{rrr} & b & \end{array} \end{bmatrix} </MATH>

so we can obtain u by using a matrix-vector equation solver.
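
As a concrete sketch (using NumPy's solver rather than the course's solve procedure, which is an assumption here; the particular basis is the one from section 8.3 and serves only as an illustration):

import numpy as np

# Basis vectors a_1 = [1,1,1], a_2 = [1,1,0], a_3 = [0,1,1] as the columns of A.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([3.0, 4.0, 3.0])

# u is the coordinate representation of b with respect to the columns of A.
u = np.linalg.solve(A, b)
print(u)       # [2. 1. 1.]  because b = 2*[1,1,1] + 1*[1,1,0] + 1*[0,1,1]
print(A @ u)   # reconstructs b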


7.2 - From a basis to another

Now suppose <math>a_1, \dots , a_n</math> is one basis for V and <math>c_1, \dots , c_k</math> is another. Define

<MATH> \begin{array}{lcr} f(x) = \begin{bmatrix} \begin{array}{r|r|r} a_1 & \dots & a_n \end{array} \end{bmatrix} * \begin{bmatrix} \begin{array}{rrr} & x & \end{array} \end{bmatrix} & \text{ and } & g(y) = \begin{bmatrix} \begin{array}{r|r|r} c_1 & \dots & c_k \end{array} \end{bmatrix} * \begin{bmatrix} \begin{array}{rrr} & y & \end{array} \end{bmatrix} \end{array} </MATH>

Then both f and g are invertible functions. The function <math>f^{-1} \circ g</math> maps:

  • from coordinate representation of a vector in terms of <math>c_1, \dots , c_k</math>
  • to coordinate representation of a vector in terms of <math>a_1, \dots , a_n</math>

In particular, if <math>V = \mathbb{F}^m</math> for some m, then:

  • f invertible implies that <math> \begin{bmatrix} \begin{array}{r|r|r} a_1 & \dots & a_n \end{array} \end{bmatrix} </math> is an invertible matrix, and
  • g invertible implies that <math> \begin{bmatrix} \begin{array}{r|r|r} c_1 & \dots & c_k \end{array} \end{bmatrix} </math> is an invertible matrix.

Thus the function <math>f^{-1} \circ g</math> has the property:

<MATH> (f^{-1} \circ g)(x) = \begin{bmatrix} \begin{array}{r|r|r} a_1 & \dots & a_n \end{array} \end{bmatrix}^{-1} \begin{bmatrix} \begin{array}{r|r|r} c_1 & \dots & c_k \end{array} \end{bmatrix} * \begin{bmatrix} \begin{array}{rrr} & x & \end{array} \end{bmatrix} </MATH>

Proposition: If <math>a_1, \dots , a_n</math> and <math>c_1, \dots , c_k</math> are bases for <math>\mathbb{F}^m</math>, then multiplication by the matrix:

<MATH> B = \begin{bmatrix} \begin{array}{r|r|r} a_1 & \dots & a_n \end{array} \end{bmatrix}^{-1} \begin{bmatrix} \begin{array}{r|r|r} c_1 & \dots & c_k \end{array} \end{bmatrix} </MATH>

maps:

  • from the representation of a vector with respect to <math>c_1, \dots , c_k</math>
  • to the representation of that vector with respect to <math>a_1, \dots , a_n</math>

Conclusion: Given two bases of <math>\mathbb{F}^m</math>, there is a matrix B such that multiplication by B converts from one coordinate representation to the other.

Converting between vector itself and its coordinate representation is a special case: Think of the vector itself as coordinate representation with respect to standard basis.

Example: To map from the coordinate representation with respect to <math>[1, 2, 3], [2, 1, 0], [0, 1, 4]</math> to the coordinate representation with respect to <math>[2, 0, 1], [0, 1,-1], [1, 2, 0]</math>, multiply by the matrix:

<MATH> \begin{bmatrix} \begin{array}{r|r|r} 2 & 0 & 1 \\ 0 & 1 & 2 \\ 1 & -1 & 0 \end{array} \end{bmatrix}^{-1} \begin{bmatrix} \begin{array}{r|r|r} 1 & 2 & 0 \\ 2 & 1 & 1 \\ 3 & 0 & 4 \end{array} \end{bmatrix} </MATH>

which is:

<MATH> \begin{bmatrix} \begin{array}{r|r|r} \frac{2}{3} & -\frac{1}{3} & -\frac{1}{3} \\ \frac{2}{3} & -\frac{1}{3} & -\frac{4}{3} \\ -\frac{1}{3} & \frac{2}{3} & \frac{2}{3} \end{array} \end{bmatrix} \begin{bmatrix} \begin{array}{r|r|r} 1 & 2 & 0 \\ 2 & 1 & 1 \\ 3 & 0 & 4 \end{array} \end{bmatrix} </MATH>

which is:

<MATH> \begin{bmatrix} \begin{array}{r|r|r} -1 & 1 & -\frac{5}{3} \\ -4 & 1 & -\frac{17}{3} \\ 3 & 0 & \frac{10}{3} \end{array} \end{bmatrix} </MATH>
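
The arithmetic above can be checked in a few lines of NumPy (an assumption of this sketch; the page itself works with exact fractions):

import numpy as np

# "Old" basis c_1, c_2, c_3 and "new" basis a_1, a_2, a_3 as matrix columns.
C = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 1.0],
              [3.0, 0.0, 4.0]])   # columns: [1,2,3], [2,1,0], [0,1,4]
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [1.0, -1.0, 0.0]])  # columns: [2,0,1], [0,1,-1], [1,2,0]

B = np.linalg.inv(A) @ C          # change-of-basis matrix A^{-1} C
print(B)                          # approximately [[-1, 1, -5/3], [-4, 1, -17/3], [3, 0, 10/3]]

# Sanity check: coordinates x in the c-basis become coordinates B @ x in the a-basis.
x = np.array([1.0, 0.0, 0.0])     # represents the vector [1, 2, 3] in the c-basis
print(C @ x, A @ (B @ x))         # both print the same vector [1. 2. 3.]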

8 - Example

8.1 - Not a basis

  • Let <math>V = Span \{[1, 0, 2, 0], [0,-1, 0,-2], [2, 2, 4, 4]\}</math>. Is <math>\{[1, 0, 2, 0], [0,-1, 0,-2], [2, 2, 4, 4]\}</math> a basis for V?

The set is spanning but is not independent:

<MATH>1 [1, 0, 2, 0] -1 [0,-1, 0,-2] -\frac{1}{2} [2, 2, 4, 4] = 0</MATH>

It is therefore not a basis.

However, <math>\{[1, 0, 2, 0], [0,-1, 0,-2]\}</math> is a basis:

  • These vectors are clearly independent, because each has a nonzero entry where the other has a zero.
  • To show
<MATH>Span \{[1, 0, 2, 0], [0,-1, 0,-2]\} = Span \{[1, 0, 2, 0], [0,-1, 0,-2], [2, 2, 4, 4]\}</MATH>

we can use the Superfluous-Vector Lemma:

<MATH>[2, 2, 4, 4] = 2 [1, 0, 2, 0]-2 [0,-1,0,-2] </MATH>
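
The same conclusion can be checked mechanically with a rank computation (NumPy assumed):

import numpy as np

T = np.array([[1, 0, 2, 0], [0, -1, 0, -2], [2, 2, 4, 4]], dtype=float)
print(np.linalg.matrix_rank(T))       # 2: the three vectors are dependent, so not a basis
print(np.linalg.matrix_rank(T[:2]))   # 2: the first two already span the same 2-dimensional space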

8.2 - A simple basis over <math>\mathbb{R}^3</math>: the standard generators

<MATH>e1 = [1, 0, 0], e2 = [0, 1, 0], e3 = [0, 0, 1]</MATH>
  • Spanning: For any vector <math>[x, y, z] \in \mathbb{R}^3</math>,
<MATH>[x, y, z] = x [1, 0, 0] + y [0, 1, 0] + z [0, 0, 1]</MATH>
  • Independent: Suppose
<MATH>0 = \alpha_1 [1, 0, 0] + \alpha_2 [0, 1, 0] + \alpha_3 [0, 0, 1] = [\alpha_1, \alpha_2, \alpha_3]</MATH>

Then <math>\alpha_1 = \alpha_2 = \alpha_3 = 0</math>.

Instead of “standard generators”, we usually call these vectors standard basis vectors, and we refer to <math>\{[1, 0, 0], [0, 1, 0], [0, 0, 1]\}</math> as the standard basis for <math>\mathbb{R}^3</math>.

8.3 - Another basis for <math>\mathbb{R}^3</math>: [1, 1, 1], [1, 1, 0], [0, 1, 1]

  • Spanning: Can write standard generators in terms of these vectors:
<MATH> \begin{array}{rcl} [1, 0, 0] & = & [1, 1, 1] - [0, 1, 1] \\ {[0, 1, 0]} & = & [1, 1, 0] + [0, 1, 1] - [1, 1, 1] \\ {[0, 0, 1]} & = & [1, 1, 1] - [1, 1, 0] \end{array} </MATH>

Since e1, e2, e3 can be written in terms of these new vectors, every vector in Span {e1, e2, e3} is in the span of the new vectors. Thus <math>\mathbb{R}^3</math> equals the span of the new vectors.

  • Linearly independent: Write the zero vector as a linear combination:
<MATH>0 = x [1, 1, 1] + y [1, 1, 0] + z [0, 1, 1] = [x + y, x + y + z, x + z]</MATH>

Looking at each entry, we get

<MATH> \begin{array}{l} 0 = x + y \\ 0 = x + y + z \\ 0 = x + z \end{array} </MATH>
  • Plug x + y = 0 into the second equation to get z = 0.
  • Plug z = 0 into the third equation to get x = 0.
  • Plug x = 0 into the first equation to get y = 0.

Thus the linear combination is trivial.
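
A quick numerical confirmation (NumPy assumed) that these three vectors form a basis of R^3: the matrix with the vectors as columns has full rank, equivalently a nonzero determinant.

import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])      # columns: [1,1,1], [1,1,0], [0,1,1]
print(np.linalg.matrix_rank(A))      # 3: the columns are independent and span R^3
print(np.linalg.det(A))              # 1.0 (up to round-off), hence invertible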
