week 1: 8/25, 8/27, 8/29
week 2: 9/1, 9/3, 9/5
week 3: 9/8, 9/10, 9/12
week 4: 9/15, 9/17, 9/19
week 5: 9/22, 9/24, 9/26
week 6: 9/29, 10/1, 10/3
week 7: 10/6, 10/8, 10/10
week 8: 10/13, 10/15, 10/17 (Exam)
week 9: 10/20, 10/22, 10/24
week 10: 10/27, 10/29, 10/31
week 11: 11/3, 11/5, 11/7
week 12: 11/10, 11/12, 11/14
week 13: 11/17, 11/19, 11/21
week 14 (No Classes): 11/24, 11/26, 11/28
week 15: 12/1, 12/3, 12/5
week 16: 12/8, 12/10, 12/12



The geometry of complex arithmetic:
If z = a+bi = |z|(cos(t) + i sin(t)) and w = c+di = |w|(cos(s) + i sin(s)), then
z+w = (a+c)+(b+d)i, which corresponds geometrically to the "vector" sum of z and w in the plane, and
zw = |z|(cos(t) + i sin(t)) |w|(cos(s) + i sin(s))
= |z| |w| (cos(t) + i sin(t))(cos(s) + i sin(s))
= |z| |w| ((cos(t)cos(s) - sin(t)sin(s)) + (sin(t)cos(s) + sin(s)cos(t)) i)
= |z| |w| (cos(t+s) + i sin(t+s))
So the magnitude of the product is the product of the magnitudes, and the angle of the product is the sum of the angles.
Notation: cos(t) + i sin(t) is sometimes written as cis(t).
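This multiply-the-magnitudes, add-the-angles rule is easy to check numerically; a minimal sketch in Python using the standard cmath module (the particular values of z and w are arbitrary choices for illustration):

```python
import cmath
import math

# Two complex numbers in Cartesian form (arbitrary sample values).
z = 3 + 4j
w = 1 - 2j

# Polar data: magnitude and angle of each factor.
rz, tz = abs(z), cmath.phase(z)
rw, tw = abs(w), cmath.phase(w)

# Rebuild the product from the product of magnitudes and sum of angles.
product_polar = (rz * rw) * (math.cos(tz + tw) + 1j * math.sin(tz + tw))

# It agrees with the Cartesian product zw.
assert abs(product_polar - z * w) < 1e-12
```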
Note: If we consider the series for e^{x} = 1 + x + x^{2}/2! + x^{3}/3! + ...
then e^{ix} = 1 + ix + (ix)^{2}/2! + (ix)^{3}/3! + ... = 1 + ix - x^{2}/2! - ix^{3}/3! + ... = cos(x) + i sin(x).
Thus e^{i pi} = cos(pi) + i sin(pi) = -1, and so ln(-1) = pi i.
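The series rearrangement can be sanity-checked numerically by truncating the series for e^{ix}; a small sketch (the cutoff of 30 terms is an arbitrary choice, ample for small x):

```python
import math

def cis(x, terms=30):
    """Truncated series for e^{ix}: sum of (ix)^k / k! for k < terms."""
    return sum((1j * x) ** k / math.factorial(k) for k in range(terms))

# The truncated series matches cos(x) + i sin(x).
x = 2.0
assert abs(cis(x) - (math.cos(x) + 1j * math.sin(x))) < 1e-12

# Euler's identity: e^{i pi} = -1.
assert abs(cis(math.pi) - (-1)) < 1e-12
```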
Matrices with complex number entries.
If r and s are the complex entries of the diagonal matrix A, then as n gets large:
if |r| < 1 and |s| < 1, the powers of A get close to the zero matrix; if r = s = 1, the powers of A are always A; and otherwise the powers of A diverge.
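Assuming the intended setting is a 2 by 2 diagonal matrix A with r and s on the diagonal (so A^n is diagonal with entries r^n and s^n), the three cases can be illustrated numerically; the sample values of r and s below are arbitrary:

```python
# Sketch, assuming A = [[r, 0], [0, s]], so A^n = [[r^n, 0], [0, s^n]].
def diag_power(r, s, n):
    return (r ** n, s ** n)

# Case 1: |r| < 1 and |s| < 1 -- powers approach the zero matrix.
rn, sn = diag_power(0.5 + 0.2j, -0.3j, 50)
assert abs(rn) < 1e-10 and abs(sn) < 1e-10

# Case 2: r = s = 1 -- every power is A itself (the identity here).
assert diag_power(1, 1, 50) == (1, 1)

# Case 3: an entry of magnitude > 1 -- powers diverge.
rn, _ = diag_power(1.1, 0.5, 200)
assert abs(rn) > 1e6
```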
Polynomials with complex coefficients.
Because multiplication and addition make sense for complex numbers,
we can consider polynomials with coefficients that are complex numbers
and use a complex number for the variable, making a complex polynomial
a function from the complex numbers to the complex numbers.
This can be visualized using one plane for the domain of the polynomial
and a second plane for the codomain, target, or range of the polynomial.
The Fundamental Theorem of Algebra: If f is a nonconstant polynomial with complex number coefficients, then there is at least one complex number z* where f(z*) = 0.


How are these questions related to Motivation Question I?
Do Examples: F[X] = {f in F^{∞} where f(n) = 0 for all but a finite number of n} < F^{∞}.
(Internal) Sums , Intersections, and Direct Sums of Subspaces
Suppose U1, U2, ..., Un are all subspaces of V.
Definition: U1 + U2 + ... + Un = {v in V where v = u1 + u2 + ... + un for uk in Uk, k = 1,2,...,n}, called the internal sum of the subspaces.
Facts: (i) U1+ U2+ ... + Un < V.
(ii) Uk < U1+ U2+ ... + Un for each k, k= 1,2,...,n.
(iii) If W<V and Uk < W for each k, k= 1,2,...,n, then U1+ U2+ ... + Un <W.
So ...
U1 + U2 + ... + Un is the smallest subspace of V that contains Uk for each k, k = 1,2,...,n.
Examples:
U1 = {(x,y,z): x+y+2z=0}, U2 = {(x,y,z): 3x+y-z=0}. U1 + U2 = R^{3}.
Let Uk = {f in P(F): f(x) = a_{k}x^{k} where a_{k} is in F}. Then U0 + U1 + U2 + ... + Un = {f : f(x) = a_{0} + a_{1}x + a_{2}x^{2} + ... + a_{n}x^{n} where a_{0}, a_{1}, a_{2}, ..., a_{n} are in F}.
Definition: U1^U2^ ... ^ Un = {v in V where v is in Uk , for all k = 1,2,...,n} called the intersection of the subspaces.
Facts:(i) U1^ U2^ ... ^ Un < V.
(ii) U1^U2^ ... ^ Un < Uk for each k, k= 1,2,...,n.
(iii) If W<V and W < Uk for each k, k= 1,2,...,n, then W<U1^ U2^ ... ^ Un .
So ...
U1^ U2^ ... ^ Un is the largest subspace of V that is contained in Uk for each k, k= 1,2,...,n.
Examples: U1 = {(x,y,z): x+y+2z=0}, U2 = {(x,y,z): 3x+y-z=0}. U1 ^ U2 = {(x,y,z): x+y+2z=0 and 3x+y-z=0} = ...
Let Uk = {f in P(F): f(x) = a_{k}x^{k} where a_{k} is in F}; then Uj ^ Uk = {0} for j not equal to k. ...
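For the first intersection example, taking the second plane to be 3x + y - z = 0 (the sign is an assumption; the original rendering is ambiguous), subtracting the first equation from the second gives 2x - 3z = 0, so x = (3/2)z and y = -x - 2z = -(7/2)z: the intersection is the line spanned by (3, -7, 2). A quick exact-arithmetic check:

```python
from fractions import Fraction as F

def in_U1(v):  # x + y + 2z = 0
    x, y, z = v
    return x + y + 2 * z == 0

def in_U2(v):  # 3x + y - z = 0 (assumed reading of the equation)
    x, y, z = v
    return 3 * x + y - z == 0

# Direction vector found by hand elimination.
d = (F(3), F(-7), F(2))
assert in_U1(d) and in_U2(d)

# Every scalar multiple stays in the intersection (it is a subspace).
for t in (F(5), F(-1, 3)):
    assert in_U1(tuple(t * c for c in d)) and in_U2(tuple(t * c for c in d))
```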
9/12/03
Direct Sums: Suppose U1, U2, ..., Un are all subspaces of V and U1 + U2 + ... + Un = V. We say V is the direct sum of U1, U2, ..., Un if for any v in V, the expression of v as v = u_{1} + u_{2} + ... + u_{n} for u_{k} in Uk is unique, i.e., if v = u_{1}' + u_{2}' + ... + u_{n}' for u_{k}' in Uk, then u_{1} = u_{1}', u_{2} = u_{2}', ..., u_{n} = u_{n}'. In these notes we will write V = U1 # U2 # ... # Un.
Examples: Uk = {v in F^{n}: v = (0,...,0,a,0,...,0) where a is in F and sits in the kth place on the list}. Then U1 # U2 # ... # Un = F^{n}.
Theorem: (Prop 1.8) V = U1 # U2 # ... # Un if and only if (i) U1 + U2 + ... + Un = V AND (ii) 0 = u_{1} + u_{2} + ... + u_{n} for u_{k} in Uk implies u_{1} = u_{2} = ... = u_{n} = 0.
Theorem: (Prop 1.9) V = U#W if and only if V = U+W and U^W={0}.
Examples using subspaces and direct sums in applications:
Suppose A is a square matrix (n by n) with entries in the field F.
For c in F, let W_{c }= { v in F^{n} where vA = cv}.
9/15/03
Fact: For any A and any c, W_{c}< F^{n }. [Comment: for most c, W_{c}= {0}. ]
Definition: If W_{c} is not the trivial subspace, then c is called an eigenvalue or characteristic value for the matrix A and nonzero elements of W_{c }are called eigen vectors or characteristic vectors for A.
Application 1 : Consider the coke and pepsi matrices:
Questions: For which c is W_{c} nontrivial?
Example A. vA = cv where
A = ( 5/6  1/6
      1/4  3/4 )
Example B. vB = cv where
B = ( 2/3  1/3
      1/4  3/4 )
Is R^{2} = W_{c1} + W_{c2} for these subspaces? Is this sum direct?
To answer this question we need to find (x,y) [not (0,0)] so that:
Example A:
(x,y) ( 5/6  1/6
        1/4  3/4 ) = c(x,y)
Example B:
(x,y) ( 2/3  1/3
        1/4  3/4 ) = c(x,y)
Focusing on Example B, we consider: for which c will the matrix equation have a nontrivial solution (x,y)?
We consider the equations: 2/3 x + 1/4 y = cx and 1/3 x + 3/4 y = cy.
Multiplying by 12 to clear the fractions and bringing the cx and cy to the left side, we find that
(8-12c)x + 3y = 0 and 4x + (9-12c)y = 0.
Multiplying the first equation by 4 and the second by (8-12c), then subtracting the first from the second, we have
((8-12c)(9-12c) - 12)y = 0. For this system to have a nontrivial solution, it must be that
(8-12c)(9-12c) - 12 = 0, or 72 - (108+96)c + 144c^2 - 12 = 0, or 60 - 204c + 144c^2 = 0.
Clearly one root of this equation is 1, so factoring we have (1-c)(60-144c) = 0, and c = 1 and c = 5/12 are the two solutions... so there are exactly two distinct eigenvalues for Example B,
c = 1 and c = 5/12, and two nontrivial eigenspaces W_{1} and W_{5/12}.
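The arithmetic above can be double-checked with exact rational arithmetic; a sketch (the eigenvector (1, -1) for 5/12 comes from solving (8-12c)x + 3y = 0 at c = 5/12, which gives 3x + 3y = 0):

```python
from fractions import Fraction as F

# Characteristic condition derived above: 144c^2 - 204c + 60 = 0.
def char_poly(c):
    return 144 * c * c - 204 * c + 60

assert char_poly(F(1)) == 0
assert char_poly(F(5, 12)) == 0

# B acting on row vectors: (x, y) |-> (2/3 x + 1/4 y, 1/3 x + 3/4 y).
def vB(v):
    x, y = v
    return (F(2, 3) * x + F(1, 4) * y, F(1, 3) * x + F(3, 4) * y)

# Eigenvector checks: vB = c v.
v1 = (F(1), F(4, 3))
assert vB(v1) == v1                                      # eigenvalue 1
v2 = (F(1), F(-1))
assert vB(v2) == (F(5, 12) * v2[0], F(5, 12) * v2[1])    # eigenvalue 5/12
```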
General Claim: If c is different from k, then W_{c} ^ W_{k} = {0}
Proof:?
generalize?
What does this mean for v_{n} when n is large?
Does the distribution of v_{n} when n is large depend on v_{0}?
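These questions can be explored numerically by iterating v_{n+1} = v_n B: the component of v_0 along the eigenvalue-5/12 eigenvector decays like (5/12)^n, so every starting vector whose coordinates sum to 1 approaches the eigenvalue-1 eigenvector scaled to sum 1, which is (3/7, 4/7), independent of v_0. A sketch:

```python
# Example B acting on row vectors.
def vB(v):
    x, y = v
    return (2/3 * x + 1/4 * y, 1/3 * x + 3/4 * y)

def iterate(v0, n):
    v = v0
    for _ in range(n):
        v = vB(v)
    return v

# Two different starting distributions converge to the same limit (3/7, 4/7).
for v0 in [(1.0, 0.0), (0.25, 0.75)]:
    x, y = iterate(v0, 100)
    assert abs(x - 3/7) < 1e-12 and abs(y - 4/7) < 1e-12
```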
9/17/03
Application 2: For c a real number let
W_{c} = {f in C^{∞}(R) where f '(x)=c f(x)} < C^{∞}(R).
What is this subspace explicitly?
Let V={f in C^{∞}(R) where f ''(x)  f(x) = 0} < C^{∞}(R).
Claim: V = W_{1} # W_{-1}.
Begin? We'll come back to this later in the course!
If c is different from k, then W_{c} ^ W_{k} = {0}.
Proof:...
Back to looking at things from the point of view of individual vectors:
Linear combinations:
Def'n. Suppose S is a set of vectors in a vector space V over the field F. We say that a vector v in V is a linear combination of vectors in S if there are vectors u_{1}, u_{2}, ... , u_{n} in S and scalars a_{1}, a_{2}, ..., a_{n} in F where v = a_{1}u_{1}+ a_{2}u_{2}+ ... + a_{n}u_{n} .
Comment: For Axler, S is a finite set.
Def'n. Span (S) = {v in V where v is a linear combination of vectors in S}
If Span(S) = V, we say that S spans V. A vector space spanned by some finite set is called a "finite dimensional" v.s.
Linear Independence.
Def'n. A set of vectors S is linearly dependent if there are vectors u_{1}, u_{2}, ... , u_{n} in S and scalars a_{1}, a_{2}, ..., a_{n} NOT ALL 0 in F where 0 = a_{1}u_{1}+ a_{2}u_{2}+ ... + a_{n}u_{n} .
A set of vectors S is linearly independent if it is not linearly dependent.
Other ways to characterize linearly independent.
A set of vectors S is linearly independent if whenever there are vectors u_{1}, u_{2}, ... , u_{n} in S and scalars a_{1}, a_{2}, ..., a_{n} in F where 0 = a_{1}u_{1}+ a_{2}u_{2}+ ... + a_{n}u_{n} , the scalars are all 0, i.e. a_{1 }= a_{2} = ... =a_{n} = 0 .
9/19/03
Examples: Suppose A is an n by m matrix: the row space of A = Span(row vectors of A); the column space of A = Span(column vectors of A).
Relate to R(A):
Recall R(A) = "the range space of A" = {w in F^{m} where for some v in F^{n}, vA = w} < F^{m}.
w is in R(A) if and only if w is a linear combination of the row vectors, i.e., R(A) = the row space of A.
If you consider Av instead of vA, then R*(A) = the column space of A.
"Infinite dimensional" v.s. examples: P(F), F^{∞}, C^{∞} (R)
P(F) was shown to be infinite dimensional. [If p is in SPAN(p1,...,pn), then the degree of p is no larger than the maximum of the degrees of {p1,...,pn}. So P(F) cannot equal SPAN(p1,...,pn) for any finite set of polynomials, i.e., P(F) is NOT finite dimensional.]
Some Standard examples.
2.4 Linear Dependence Lemma: Suppose S is a finite linearly dependent set indexed by 1,2,...,n and v1 is not 0. Then for some index j, vj is in Span(v1,...,v(j-1)) and Span(S) = Span(S - {vj}).
Proof: See LA p25.
2.6 Theorem: Suppose S is a finite set of vectors with V = Span(S) and T is a linearly independent set of vectors in V. Then T is also finite and n(T) <= n(S).
Proof: See LA pp. 25-26.
Comments:
- (2.4) shows how to construct a basis for a nontrivial finite dimensional v.s. V. Start with a finite set of vectors S that spans V. We can assume S has some nonzero vector in it. Put that element first.
- If S is linearly independent, you are done. If not, apply (2.4) repeatedly until the resulting set of vectors is linearly independent. This must happen since at worst you will be left with v1, which was not 0. Thus we have proven
Theorem 2.10: Every finite spanning list in a vector space can be reduced to a basis.
and the Corollary (2.11). Every finite dimensional vector space has a basis.
Comment:The proof of Theorem 2.6 also shows that given T, a linearly independent subset of V and S, a finite set where SPAN(S) = V, one can step by step replace the elements of S with elements of T at the beginning of the list of vectors, so that eventually you have a new set S' where Span(S') = V and T contained in S'. Now by applying repeatedly the Lemma to S', one will arrive at a set B that is a basis for V with T contained in B. This proves
Theorem 2.12: Every linearly independent subset of a finite dimensional vector space can be extended to a basis of the vector space.
Prop: A subspace of a finite dimensional v.s. is finite dimensional.
Problem 2.12: Suppose p_{0},...,p_{m} are in P_{m}(F) and p_{i}(2) = 0 for all i.
Prove {p_{0},...,p_{m}} is linearly dependent.
Proof: Suppose {p_{0},...,p_{m}} is linearly independent.
Notice that by the assumption for any coefficients
(a_{0}p_{0}+...+a_{m}p_{m})(2) = a_{0}p_{0}(2)+...+a_{m}p_{m}(2) = 0, and since u(x) = 1 has u(2) = 1, u is not in SPAN(p_{0},...,p_{m}).
Thus SPAN(p_{0},...,p_{m}) is not P_{m}(F).
But SPAN ( 1,x, ..., x^{m}) = P_{m}(F) .
By repeatedly applying Lemma 2.4 to these two sets of m+1 polynomials as in Theorem 2.6, we have SPAN (p_{0},...,p_{m})=P_{m}(F), a contradiction. So {p_{0},...,p_{m}} is not linearly independent.
End of proof.
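A concrete instance of Problem 2.12 for m = 2, with hypothetical polynomials chosen for illustration: p_0 = x - 2, p_1 = x^2 - 4, p_2 = x^2 - 2x all vanish at 2, and 2p_0 - p_1 + p_2 = 0 is a nontrivial dependence:

```python
def p0(x): return x - 2
def p1(x): return x * x - 4
def p2(x): return x * x - 2 * x

# Each polynomial vanishes at 2, as in the hypothesis of the problem.
assert p0(2) == p1(2) == p2(2) == 0

# Nontrivial dependence: 2*p0 - p1 + p2 is the zero polynomial.
# (Agreement at 4 points suffices for polynomials of degree <= 2.)
for x in (0, 1, 3, 7):
    assert 2 * p0(x) - p1(x) + p2(x) == 0
```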
Bases def'n.
Definition: A set B is called a basis for the vector space V over F if (i) B is linearly independent and (ii) SPAN( B) = V.
Prop. If V is a finite dimensional v.s. and B and B' are bases for V, then n(B) = n(B').
Proof: fill in ... based on 2.6.
Define dimension of a finite dimensional v.s. over F.
9/22: Filled in much above on Bases and the proofs of theorems about bases.
9/24
Discuss dim({0}).
What is the Span of the empty set? Characterize SPAN(S) as the intersection of all subspaces that contain S. Then Span(empty set) = intersection of all subspaces = {0}. The empty set is linearly independent! ... so the empty set is a basis for {0}, and the dimension of {0} is 0!
2.8: bases and representation of vectors in a f.d.v.s.
Suppose B is a finite basis for V with its elements in a list, (u_{1}, u_{2}, ..., u_{n}). If v is in V, then there are unique scalars a_{1}, a_{2}, ..., a_{n} in F where v = a_{1}u_{1} + a_{2}u_{2} + ... + a_{n}u_{n}. The scalars are called the coordinates of v w.r.t. B, and we will write v = [a_{1}, a_{2}, ..., a_{n}]_{B}.
Examples: In R^{2}, P_{4}(R).
Connect to Coke and Pepsi example: find a basis of eigenvectors using the B example for R^{2}. [Use the online technology]
Example B:
(x,y) ( 2/3  1/3
        1/4  3/4 ) = c(x,y)
We considered the equations: 2/3 x + 1/4 y = cx and 1/3 x + 3/4 y = cy and showed that
there are exactly two distinct eigenvalues for Example B,
c = 1 and c = 5/12, and two nontrivial eigenspaces W_{1} and W_{5/12}.
Now we can use technology to find eigenvectors in each of these subspaces.
The matrix calculator gave as a result that the eigenvalue 1 has an eigenvector (1, 4/3) while 5/12 has an eigenvector (1, -1). These two vectors are a basis for R^{2}.
Dimension Results: Suppose Dim(V) = n and S is a set of vectors with n(S) = n. Then
(1) If S is Linearly independent, then S is a basis.
(2) If Span(S) = V, then S is a basis.
Proof: (1) S is contained in a basis B. If B were larger than S, then B would have more than n elements, which contradicts the fact that any basis for V has exactly n elements. So B = S and S is a basis.
(2) S contains a basis B. If B were smaller than S, then B would have fewer than n elements, which contradicts the fact that any basis for V has exactly n elements. So B = S and S is a basis.
IRMC
9/26
2.18: If U, W < V are finite dimensional, then so is U+W, and
dim(U+W) = Dim(U) + Dim(W) - Dim(U^W).
Proof: (idea) build up bases of U and W from U^W.... then check that the union of these bases is a basis for U+W.
Linear Transformations: V and W vector spaces over F.
Definition: A function T: V -> W is a linear transformation if for any x, y in V and a in F, T(x+y) = T(x) + T(y) and T(ax) = aT(x).
Examples: T(x,y) = (3x+2y, x-3y) is a linear transformation T: R^2 -> R^2.
G(x,y) = (3x+2y, x^2 - 2y) is not a linear transformation:
G(1,1) = (5, -1), G(2,2) = (10, 0)... 2*(1,1) = (2,2) but 2*(5,-1) = (10,-2) is not (10,0)!
Notice that T(x,y) can be thought of as the result of a matrix multiplication:
(x,y) ( 3   1
        2  -3 ) = (3x+2y, x-3y)
So the two key properties are a direct consequence of the properties of matrix multiplication: (v+w)A = vA + wA and (cv)A = c(vA).
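Reading the displayed matrix as having rows (3, 1) and (2, -3) (the reading consistent with T(x,y) = (3x+2y, x-3y) under right multiplication), both defining properties can be checked on sample vectors:

```python
def vA(v):
    """(x, y) times the 2x2 matrix [[3, 1], [2, -3]], giving (3x+2y, x-3y)."""
    x, y = v
    return (3 * x + 2 * y, 1 * x - 3 * y)

v, w, c = (2, 5), (-1, 4), 7

# Additivity: (v + w)A = vA + wA.
s = (v[0] + w[0], v[1] + w[1])
assert vA(s) == tuple(a + b for a, b in zip(vA(v), vA(w)))

# Homogeneity: (cv)A = c (vA).
cv = (c * v[0], c * v[1])
assert vA(cv) == tuple(c * a for a in vA(v))
```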
For A a k by n matrix: T_{A} (acting on the left argument) and _{A}T (on the right) are linear transformations on F^{k} and F^{n}:
T_{A}(x) = xA for x in F^{k}, and _{A}T(y) = A[y]^{tr} for y in F^{n}, where [y]^{tr} indicates the entries of the vector treated as a one-column matrix.
The set of all linear transformations from V to W is denoted L(V,W).
More notes on Chapter 1 and 2
1.9: V = U#W if and only if V = U+W and U^W = {0}.
Proof: => Suppose v is in U^W. Then v is in U and -v is in W, so 0 = v + (-v). But since V = U#W, this representation of 0 must be the trivial one, so v = 0; thus U^W = {0}.
Note: This argument extends to V as the direct sum of any family of subspaces.
<= Suppose u is in U and w is in W and u + w = 0. Then u = -w, so u is also in W, and thus u is in U^W = {0}. So u = 0 and then w = 0. Since V = U+W, we have by 1.8 that V = U#W. EOP
2.19: If V is a f.d.v.s. and U1, ..., Un are subspaces with V = U1 + ... + Un and dim(V) = dim(U1) + ... + dim(Un), then V = U1 # ... # Un.
Proof outline: Choose bases for U1, ..., Un and let B be the union of these sets. Since V = U1 + ... + Un, every vector in V is a linear combination of elements from B. But B has exactly dim(U1) + ... + dim(Un) = dim(V) elements in it, so B is a basis for V. Now suppose 0 = u_{1} + u_{2} + ... + u_{n} for u_{k} in Uk. Then each u_{i} can be expressed as a linear combination of the basis vectors for Ui, and since the entire linear combination is 0 and B is a basis, each coefficient is 0. So u_{1} = ... = u_{n} = 0 and V = U1 # ... # Un. EOP
How do you find a basis for the SPAN(S) in R^{n}?
Outline of use of row operations...
Back to linear transformations:
Consequences of the definition: If T: V -> W is a linear transformation, then for any x and y in V and a in F,
(i) T(0) = 0.
(ii) T(-x) = -T(x).
(iii) T(x+ay) = T(x) + aT(y).
Quick test: If T:V>W is a function and (iii) holds for any x and y in V and a in F, then the function is a linear transformation.
Visualize with Winplot?
Why this is called a "linear" transformation:
The geometry of linear: A line in R^2 is L = {(x,y): Ax + By = C where A and B are not both 0} = {(x,y): (x,y) = (a,b) + t(u,v)}, the line through (a,b) in the direction of (u,v). Suppose T is a linear transformation:
Let T(L) = L' = {(x',y'): (x',y') = T(x,y) for (x,y) in L}.
T(x,y) = T(a,b) + tT(u,v).
If T(u,v) = (0,0), then L' = T(L) = {T(a,b)}.
If not, then L' is also a line, through T(a,b) in the direction of T(u,v).
Coke/Pepsi example B: T(x,y) =(2/3 x +1/4 y, 1/3 x+3/4 y)
T(v_{0}) = v_{1}, T(v_{1}) = v_{2}, ..., T(v_{k}) = v_{k+1}.
T(v*) = v* means a nonzero v* is an eigenvector with eigenvalue 1. T(1, 4/3) = (1, 4/3). Also T(3/7, 4/7) = T[(3/7)(1, 4/3)] = (3/7)T(1, 4/3) = (3/7)(1, 4/3) = (3/7, 4/7).
T(1,-1) = (5/12, -5/12) = (5/12)(1,-1) means that (1,-1) is an eigenvector with eigenvalue 5/12.
D... Differentiation: on polynomials, on ... Example: (D(f))(x) = f'(x), or D(f) = f'.
T(f)(x) = f''(x) - f(x), or T(f) = DD(f) - f = (DD - Id)(f).
Wednesday 10/1
Theorem: T: V -> W linear and B a basis for V give a restriction S(T): B -> W.
Conversely, suppose S: B -> W. Then there is a unique linear transformation T(S): V -> W such that S(T(S)) = S.
Proof: Let T(S)(v) be defined as follows: Suppose v is expressed (uniquely) as a linear combination of elements of B, i.e., v = a_{1}u_{1} + a_{2}u_{2} + ... + a_{n}u_{n}; then let T(S)(v) = a_{1}S(u_{1}) + a_{2}S(u_{2}) + ... + a_{n}S(u_{n}).
This is well defined since the representation of v is unique. It is left to show that T(S) is linear. Clearly... if u is in B then S(T(S))(u) = S(u).
Example: T: P(F) -> P(F)... S(x^{n}) = nx^{n-1}.
Or another example: S(x^{n}) = 1/(n+1) x^{n+1}.
Algebraic structure on L(V,W)
Definition of the sum and scalar multiplication:
T, U in L(V,W), a in F, (T+U)(v) = T(v) + U(v).
Fact:T+U is also linear.
(aT)(v) = a T(v) .
Fact:aT is also Linear.
Check: L(V,W) is a vector space over F.
Composition: T: V -> W and U: W -> Z both linear; then UT: V -> Z, where UT(v) = U(T(v)), is linear.
Note: If T': V -> W and U': W -> Z are also linear, then U(T+T') = UT + UT' and (U+U')T = UT + U'T. If S: Z -> Y is also linear, then S(UT) = (SU)T.
Key focus: L(V,V) , the set of linear "operators" on V.... also called L(V).
If T and U are in L(V), then UT is also in L(V). This is the key example of what is called a "linear algebra": a vector space with an extra internal operation, usually described as the product, that satisfies the distributive and associative properties.
Key Spaces related to T:V>W
Null Space of T= kernel of T = {v in V where T(v) = 0 [ in W] }= N(T) < V
Range of T = Image of T = T(V) = {w in W where w = T(v) for some v in V} <W.
Recall definition of "injective" or "1:1" function.
Theorem: T is 1:1 (injective) if and only if N(T) = {0}.
Proof: => Suppose T is 1:1. We know that T(0) = 0, so if T(v) = 0, then v = 0. Thus 0 is the only element of N(T), i.e., N(T) = {0}.
<= Suppose N(T) = {0}. If T(v) = T(w), then T(v-w) = T(v) - T(w) = 0, so v-w is in N(T)... that must mean v-w = 0, so v = w and T is 1:1.
Friday 10/3
More details to follow on this lecture:
The first part of the lecture discussed the importance of the null space of T, N(T), in understanding what T does in general.
Example 1. D: P(R) -> P(R)... D(f) = f'. Then N(D) = {f: f(x) = C for some constant C} [from Calculus 109!].
Notice: If f'(x) = g'(x), then f(x) = g(x) + C for some C.
Proof: Consider D(f(x) - g(x)) = Df(x) - Dg(x) = 0, so f(x) - g(x) is in N(D).
Example 2: Solving a system of homogeneous linear equations. This was connected to finding the null space of a linear transformation associated to a matrix. Then what about a non-homogeneous system with the same matrix? Result: If z is a solution of the non-homogeneous system of linear equations and z' is another solution, then z' = z + n where n is a solution to the homogeneous system.
General Proposition: T: V -> W. If b is a vector in W and a is in V with T(a) = b, then T^{-1}(b) = {v in V: v = a + n where n is in N(T)} = a + N(T).
Comment: a + N(T) is called the coset of a mod N(T)...these are analogous to lines in R^{2}. More on this later in the course.
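A concrete illustration of the proposition, with a hypothetical T not taken from the notes: T(x,y,z) = (x+y, y+z) from R^3 to R^2 has N(T) spanned by (1,-1,1), and the full solution set of T(v) = b is one particular solution plus N(T):

```python
def T(v):
    x, y, z = v
    return (x + y, y + z)

b = (3, 5)
a = (3, 0, 5)   # one particular solution: T(a) = (3, 5)
n = (1, -1, 1)  # spans N(T): x+y = 0 and y+z = 0 force (x,y,z) = x*(1,-1,1)
assert T(a) == b and T(n) == (0, 0)

# Every element of the coset a + N(T) is also a solution of T(v) = b.
for t in (-2, 0, 1, 10):
    v = tuple(ai + t * ni for ai, ni in zip(a, n))
    assert T(v) == b
```

Note this also illustrates the dimension result below: Dim(N(T)) = 1 and Dim(R(T)) = 2, which sum to Dim(R^3) = 3.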
Major result of the day: Suppose T: V -> W and V is a finite dimensional v.s. over F. Then N(T) and R(T) are also finite dimensional, and Dim(V) = Dim(N(T)) + Dim(R(T)).
Proof: Done in class; see text. Outline: Start with a basis C for N(T) and extend this to a basis B for V. Show that T(B - C) is a basis for R(T).
Next: Monday, Oct. 6. Matrices and linear transformations (with Dr. B).
M_{B}^{B}(T) = ( ... )
M_{B}^{B}(T^{n}) = [M_{B}^{B}(T)]^{n} = ( ... )^{n}
M_{E}^{E}(T) = ( ... )
M_{B}^{E}(Id) = ( ... )
M_{B}^{B}(T) = ( ... )
The Division Algorithm, [proof?]