Date: What we discussed/How we spent our time

Jan 13
Syllabus. Text.
We discussed the meaning of "linear" (relating to lines)
and "algebra" (= "al-jabr" = restoring, from a 9th century book
on solving equations by al-Khwarizmi).

Jan 15
We discussed Sections 1.1 and 1.2: Linear systems, Gaussian elimination (GE),
back substitution, coefficient matrix, augmented matrix, pivot position,
pivot element, pivot column, elementary row operations,
row reduction, row echelon form. Solution sets.
We made the distinction between
consistent and inconsistent systems, and explained how to identify
which systems have these properties.
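A minimal Python sketch of GE followed by back substitution (a supplement of mine, not from lecture; it assumes numpy and does no pivoting, so it is for illustration only):

    import numpy as np

    def solve_by_ge(A, b):
        """Solve Ax = b (A square, invertible) by Gaussian elimination
        and back substitution. No pivoting -- illustration only."""
        M = np.hstack([A, b.reshape(-1, 1)]).astype(float)  # augmented matrix [A|b]
        n = len(b)
        for j in range(n):                    # forward elimination
            for i in range(j + 1, n):
                M[i] -= (M[i, j] / M[j, j]) * M[j]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):        # back substitution
            x[i] = (M[i, n] - M[i, i+1:n] @ x[i+1:]) / M[i, i]
        return x

    print(solve_by_ge(np.array([[2., 1.], [1., 3.]]), np.array([3., 5.])))  # [0.8 1.4]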

Jan 17
Read 1.3, 1.5, 2.1-2.3.
We named free variables and pivot (= basic) variables.
We discussed Gauss-Jordan elimination (GJE). We discussed
roundoff error (and the use of partial pivoting
to minimize it).
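A classic two-equation illustration of the roundoff problem (my own example, not necessarily the one used in class): a tiny pivot produces a huge multiplier that swamps the arithmetic, and partial pivoting avoids it.

    import numpy as np

    eps = 1e-20                            # tiny pivot
    A = np.array([[eps, 1.], [1., 1.]])    # true solution is very nearly (1, 1)
    b = np.array([1., 2.])

    # Eliminate without pivoting: the multiplier 1/eps = 1e20 swamps everything.
    m = A[1, 0] / A[0, 0]
    x2 = (b[1] - m * b[0]) / (A[1, 1] - m * A[0, 1])
    x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
    print(x1, x2)                          # 0.0 1.0 -- x1 is completely wrong

    print(np.linalg.solve(A, b))           # [1. 1.] -- LAPACK pivots for us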

Jan 22
Read 3.1-3.6.
I distributed a practice quiz.
We discussed matrix algebra (addition, negation,
zero, scalar multiplication, transpose and matrix multiplication).
We noted that matrix multiplication is usually not commutative,
and that $(AB)^T=B^TA^T$. We saw that a system of linear
equations can be written in matrix form as $A{\mathbf x}={\mathbf b}$.
We began a discussion of elementary matrices.
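A quick numerical check of those two facts (mine; assumes numpy):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])
    print(np.array_equal(A @ B, B @ A))          # False: AB != BA in general
    print(np.array_equal((A @ B).T, B.T @ A.T))  # True:  (AB)^T = B^T A^T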

Jan 24
Read 3.7, 3.9.
We discussed the identity matrix, inverses of square matrices,
and the word(s) "(non)singular". We showed how to set up a linear
system to solve for the inverse of a square matrix.
We continued our discussion of elementary matrices.
We showed that elementary matrices are invertible, and
we defined "row equivalence" and "column equivalence".

Jan 27
Read 2.4, 2.5.
We gave the formula for the inverse of a $2\times 2$
matrix, and showed that some matrices have left inverses
but no right inverse.
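For reference, that formula:
$$
\left[\begin{array}{cc} a & b \\ c & d \end{array}\right]^{-1}
= \frac{1}{ad-bc}
\left[\begin{array}{cc} d & -b \\ -c & a \end{array}\right],
\qquad ad-bc \neq 0.
$$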
We explained why the system $A{\bf x}={\bf b}$ has a unique
solution for every ${\bf b}$ when $A$ is invertible.
We discussed homogeneous and nonhomogeneous linear systems,
in particular how the general solution to
$A{\bf x}={\bf b}$ is related to the general solution to
$A{\bf x}={\bf 0}$.
Quiz 1.

Jan 29
We reviewed the relationship between
homogeneous and nonhomogeneous linear systems.
Then we proved the following:
Thm. Let $A$ be an $m\times n$ matrix. TFAE (the following are equivalent):
(1) $A$ has a left inverse.
(2) The reduced row echelon form of $A$ has a pivot in
every column.
(3) $A{\bf x}={\bf 0}$ has a unique solution.
(4) $A{\bf x}={\bf b}$ has at most one solution for every
${\bf b}\in {\mathbb R}^m$.
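A concrete instance of (1)-(4), with my own example matrix: when $A$ has a pivot in every column, $A^TA$ is invertible and $(A^TA)^{-1}A^T$ is one explicit left inverse (not necessarily the one constructed in class).

    import numpy as np

    # A 3x2 matrix with full column rank; L = (A^T A)^{-1} A^T is a left inverse.
    A = np.array([[1., 0.], [0., 1.], [1., 1.]])
    L = np.linalg.inv(A.T @ A) @ A.T
    print(np.allclose(L @ A, np.eye(2)))   # True: LA = I_2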

Jan 31
We proved the following:
Thm. Let $A$ be an $m\times n$ matrix. TFAE.
(1) $A$ has a right inverse.
(2) The reduced row echelon form of $A$ has a pivot in
every row.
(3) $A{\bf x}={\bf b}$ has at least one solution for every
${\bf b}\in {\mathbb R}^m$.
We noted that if a matrix has a left inverse and a right inverse,
then they must be the same and the matrix must be square.
We extracted two things from the proof of the theorem,
(i) the fact that $(AB)^{-1}=B^{-1}A^{-1}$ and (ii) an algorithm
for computing the inverse of an invertible matrix:
$\left[A\,|\,I\right] \xrightarrow{\text{GJE}} \left[I\,\big|\,A^{-1}\right]$.
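The same algorithm carried out by machine (a sketch using sympy's rref; our in-class computations were by hand):

    import sympy as sp

    A = sp.Matrix([[2, 1], [5, 3]])
    R, _ = A.row_join(sp.eye(2)).rref()    # Gauss-Jordan on [A | I]
    print(R[:, 2:])                        # Matrix([[3, -1], [-5, 2]]) = A^{-1}
    print(A.inv() == R[:, 2:])             # True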

Feb 3
We proved the following:
Thm. An $n\times n$ matrix is invertible iff it is a product
of $n\times n$ elementary matrices.
We used this in the proof of
Thm. Let $A$ and $B$ be $m\times n$ matrices. TFAE.
(1) $A$ is row equivalent to $B$.
(2) For any ${\bf x}$,
$A{\bf x}={\bf 0}$ iff
$B{\bf x}={\bf 0}$.
(3) $A$ and $B$ have the same reduced row echelon form.
In this latter theorem we proved (3)$\to$(1)$\to$(2) and half
of (2)$\to$(3). We postponed the remaining half in order to take
Quiz 2.
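The theorem on a small example of my own, with sympy doing the row reduction:

    import sympy as sp

    A = sp.Matrix([[1, 2, 3], [4, 5, 6]])
    B = sp.Matrix([[8, 10, 12], [1, 2, 3]])   # A with rows swapped, then a row doubled
    print(A.rref()[0] == B.rref()[0])          # True: row equivalent, same RREF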

Feb 5
We finished the proof from the previous lecture
and said a few words about geometry.

Feb 7
Read 4.1.
We defined "real vector space" and "linear transformation".
Among our examples of vector spaces were
$\mathbb R^n$, $\mathbb P_n(t)$, $M_{m\times n}(\mathbb R)$
and $C^k([0,1])$. Among our examples of linear transformations were
$T_A\colon \mathbb R^n\to \mathbb R^m\colon {\bf x}\mapsto A{\bf x}$,
$T\colon \mathbb P_n(t)\to \mathbb P_n(t)\colon p(t)\mapsto p'(t)$, and
$T\colon M_{m\times n}(\mathbb R)\to M_{n\times m}(\mathbb R)
\colon A\mapsto A^T$.

Feb 10
In response to a question, we reviewed the algorithm
for finding right and left inverses.
We defined "linear combination", "span" and "subspace",
and gave examples. Quiz 3.
(This week's HW due date was pushed back to Friday.)

Feb 12
Read 4.3, 4.4.
We discussed dependence relations,
linear independence, and linear dependence. We proved
the equivalence of two definitions of "linearly dependent set".
(Defn 1: $X$ is linearly dependent if some vector in $X$
is a linear combination of the others; Defn 2: $X$
is linearly dependent if it satisfies
a nontrivial dependence relation.)
We defined
"basis" and "dimension". We proved that
$\{{\bf e}_1,\ldots,{\bf e}_n\}$ is a basis for $\mathbb R^n$, hence
$\dim(\mathbb R^n)=n$. We proved that
$\{1,t,t^2,\ldots,t^n\}$ is a basis for $\mathbb P_n(t)$, hence
$\dim(\mathbb P_n(t))=n+1$. While examining examples
we learned that
no basis contains the zero vector, and that if $X$ spans $V$ and
${\bf x}\notin X$, then $X\cup\{{\bf x}\}$ cannot be independent.
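A rank test for (in)dependence (a supplement of mine, using numpy): vectors are independent iff the matrix having them as columns has full column rank.

    import numpy as np

    X = np.array([[1., 0., 1.],
                  [0., 1., 1.],
                  [0., 0., 0.]])        # columns are e1, e2, e1+e2
    print(np.linalg.matrix_rank(X))     # 2 < 3 columns, so they are dependent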

Feb 14
We worked on practice problems.

Feb 17
We defined the four fundamental subspaces (row space, column space,
nullspace, and left nullspace)
and gave algorithms to find bases for $N(A)$ and $R(A)$.
We proved the rank+nullity theorem.
Quiz 4.

Feb 19
We discussed the nullspace algorithm, the column space algorithm, and
how to find bases for subspaces of $\mathbb R^n$.
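Both algorithms on one example (a sketch of mine; sympy happens to use the same names):

    import sympy as sp

    A = sp.Matrix([[1, 2, 1],
                   [2, 4, 0]])
    print(A.nullspace())      # basis for N(A): [(-2, 1, 0)^T], so nullity 1
    print(A.columnspace())    # basis for R(A): pivot columns 1 and 3, so rank 2
    # rank + nullity = 2 + 1 = 3 = number of columns, as the theorem predicts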

Feb 21
With a guest lecturer, we worked on this handout.

Feb 24
A guest lecturer discussed extending a basis of a subspace to a basis of
a larger subspace, finding a basis for a sum of subspaces, and finding
a basis for an intersection of subspaces.
Lecture notes.
Quiz 5.

Feb 26
We reviewed for the midterm.
Review sheet.

Feb 28
Midterm. Solutions.

Mar 3
Read Section 4.7.
We worked out midterm Problem 3.
Then we embarked on a goal of proving that
every finitely generated real vector space is
isomorphic to $\mathbb R^n$ for some $n$.
In this lecture we defined "finitely generated", "isomorphism"
(+ "endomorphism", "automorphism"), and "coordinates relative
to a basis". We proved that any isomorphism between vector
spaces preserves and reflects the formation of linear combinations,
hence preserves and reflects independent sets, spanning sets,
bases and dimension.

Mar 5
Read Section 4.8.
We stated two theorems, with proofs to follow in later lectures.
Thm. Every finitely generated real vector space is
isomorphic to $\mathbb R^n$ for some $n$.
Thm. Every linear transformation between
finitely generated real vector spaces is
representable by a matrix.

Mar 7
We proved the first theorem listed from March 5
and worked on this handout.

Mar 10
We proved the second theorem listed from March 5,
discussed why
${}_{\mathcal B}[S\circ T]_{\mathcal D} =
{}_{\mathcal B}[S]_{\mathcal C}\cdot {}_{\mathcal C}[T]_{\mathcal D}$,
and discussed why
${}_{\mathcal C}[T^{-1}]_{\mathcal B}=
{}_{\mathcal B}[T]_{\mathcal C}^{-1}$.
Quiz 6.

Mar 12
We talked about change of basis matrices.

Mar 14
We worked on this sheet of practice
problems. In particular, we discussed how to use Gaussian elimination
to change basis or to find a change of basis matrix:
If $\mathcal B$ and $\mathcal C$ are bases written in the $\mathcal E$-basis
and ${\bf u}$ is a vector written in the $\mathcal E$-basis,
then (i) to find $[{\bf u}]_{\mathcal B}$ apply
GE to $[{\mathcal B}|{\bf u}]$ to obtain
$[I|{\mathcal B}^{-1}{\bf u}]=[I| [{\bf u}]_{\mathcal B}]$, while (ii) to
find ${}_{\mathcal C}[I]_{\mathcal B}$ apply GE to
$[{\mathcal C}|{\mathcal B}]$ to obtain
$[I|{\mathcal C}^{-1}{\mathcal B}]=[I|\;{}_{\mathcal C}[I]_{\mathcal B}\;]$.
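Item (i) numerically (a sketch of mine; numpy's solve plays the role of GE):

    import numpy as np

    B = np.array([[1., 1.],
                  [0., 1.]])          # basis vectors b1, b2 as columns, in E-coordinates
    u = np.array([3., 2.])
    print(np.linalg.solve(B, u))      # [1. 2.], i.e. u = 1*b1 + 2*b2
    # For (ii), np.linalg.solve(C, B) similarly computes C^{-1}B, the
    # change of basis matrix from B-coordinates to C-coordinates.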

Mar 17
We discussed the use of linear algebra to solve network flow problems.
Quiz 7.

Mar 19
We discussed the use of linear algebra to balance chemical
equations.
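A standard example (the in-class ones may have differed): balancing $a\,\mathrm{CH_4} + b\,\mathrm{O_2} \to c\,\mathrm{CO_2} + d\,\mathrm{H_2O}$ amounts to finding the nullspace of the element-count matrix.

    import sympy as sp

    # Rows count C, H, O atoms; columns are the unknowns a, b, c, d
    # (products enter with minus signs).
    M = sp.Matrix([[1, 0, -1,  0],
                   [4, 0,  0, -2],
                   [0, 2, -2, -1]])
    v = M.nullspace()[0]          # (1/2, 1, 1/2, 1)^T
    print((2 * v).T)              # [1, 2, 1, 2]: CH4 + 2 O2 -> CO2 + 2 H2O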

Mar 21
We discussed the use of linear algebra to model
the movement of goods in a simple economy. Our discussion
introduced stochastic matrices, and we observed
how such matrices emerge from the study of Markov processes.
We also observed that $I-C$ is singular if $C$ is a
stochastic matrix (if each column of $C$ sums to $1$, then the rows of
$I-C$ sum to the zero row, so they are dependent).
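A numerical spot check (my example):

    import numpy as np

    C = np.array([[0.5, 0.3],
                  [0.5, 0.7]])             # columns sum to 1
    print(np.linalg.det(np.eye(2) - C))    # 0.0 (up to roundoff): I - C is singular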

Mar 31
We discussed length, distance and angle in real
vector spaces.
A handout.
Quiz 8.

Apr 2
We discussed bilinear forms in general and the dot product in particular.
We proved the formula
$\cos(\theta)=({\bf u}\bullet{\bf v})/(\|{\bf u}\|\cdot \|{\bf v}\|)$.
We explained how to find the unit vector in a given direction.
We concluded by noting that length in complex vector
spaces must be computed a little differently than length
in real vector spaces.
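The formula in action (my numbers):

    import numpy as np

    u = np.array([1., 0.])
    v = np.array([1., 1.])
    cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    print(np.degrees(np.arccos(cos_theta)))   # 45.0 (up to roundoff)
    print(v / np.linalg.norm(v))              # the unit vector in the direction of v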

Apr 4
We defined "inner product" and "norm".
After proving the Cauchy-Bunyakovsky-Schwarz inequality
we showed that any inner product on $V$
induces a norm on $V$.
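The crux of that argument (recorded here for reference) is the triangle inequality for $\|{\bf v}\| := \sqrt{\langle {\bf v},{\bf v}\rangle}$, which in the real case follows from CBS in one line:
$$
\|{\bf u}+{\bf v}\|^2
= \|{\bf u}\|^2 + 2\langle {\bf u},{\bf v}\rangle + \|{\bf v}\|^2
\le \|{\bf u}\|^2 + 2\|{\bf u}\|\,\|{\bf v}\| + \|{\bf v}\|^2
= (\|{\bf u}\|+\|{\bf v}\|)^2.
$$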

Apr 7
Linear algebra over other scalar fields, part 1.
We defined "field", and gave examples of fields:
$\mathbb R$, $\mathbb Q$, $\mathbb Q[\sqrt{2}]$,
$\mathbb C = \mathbb R[i]$, and $\mathbb F_2$ (the field with
$2$ elements). Turning our attention exclusively to $\mathbb C$,
we examined arithmetic in $\mathbb C$, defined $\mathrm{Re}(p+qi)=p$
and $\mathrm{Im}(p+qi)=q$, and described how to represent complex numbers
as real vectors in $\mathbb R^2$.
Quiz 9.

Apr 9
Linear algebra over other scalar fields, part 2.
We defined $\overline{\alpha}$,
$|\alpha|$ and $\arg(\alpha)$ for a complex
number $\alpha$. We described the geometric interpretation
of the arithmetical operations of $\mathbb C$.
We defined "antilinear function",
"sesquilinear form", and "complex inner product".
We explained how to compute length, distance
and angle in $\mathbb C^n$.
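Length in $\mathbb C^n$ by machine (a one-line check of mine; numpy's vdot conjugates its first argument, matching a sesquilinear inner product):

    import numpy as np

    v = np.array([1 + 1j, 2j])
    print(np.sqrt(np.vdot(v, v).real))   # sqrt(6), since |1+i|^2 + |2i|^2 = 2 + 4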

Apr 11
Read Sections 4.6, 5.5, 5.13.
We discussed the method of least squares for finding
approximate solutions to inconsistent systems
of the form $A{\bf x}={\bf b}$. We showed how to
use it to fit curves to data.
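Fitting a line $y=c_0+c_1x$ to (made-up) data via the normal equations $A^TA{\bf c}=A^T{\bf b}$, as a sketch:

    import numpy as np

    x = np.array([0., 1., 2., 3.])
    b = np.array([1., 3., 4., 4.])
    A = np.column_stack([np.ones_like(x), x])    # design matrix [1 | x]
    c = np.linalg.solve(A.T @ A, A.T @ b)        # normal equations
    print(c)                                     # [1.5 1. ]
    print(np.linalg.lstsq(A, b, rcond=None)[0])  # same least-squares answer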

Apr 14
Read Section 5.4.
We discussed the problem of projecting a vector ${\bf v}$ onto
a subspace $W\leq V$. We developed
and checked the Fourier expansion formula
$$
\textrm{proj}_W({\bf v}) = \sum_{i=1}^n \langle {\bf u}_i, {\bf v}\rangle
{\bf u}_i
$$
where $({\bf u}_1,\ldots,{\bf u}_n)$ is an orthonormal basis
for $W$. If $\langle {\bf u},{\bf v}\rangle := {\bf u}^H{\bf v}$
and $U:=[{\bf u}_1 \cdots {\bf u}_n]$, then this can be written
$\textrm{proj}_{R(U)}({\bf v})=UU^H{\bf v}$. If working over
$\mathbb R$ instead, then $\textrm{proj}_{R(U)}({\bf v})=UU^T{\bf v}$.
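Checking the real-case formula on an example (my own orthonormal pair):

    import numpy as np

    u1 = np.array([1., 1., 0.]) / np.sqrt(2)
    u2 = np.array([0., 0., 1.])
    U = np.column_stack([u1, u2])        # orthonormal columns
    v = np.array([3., 4., 5.])
    print(U @ U.T @ v)                   # [3.5 3.5 5. ] = proj of v onto R(U)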
Quiz 10.

Apr 16
Read Section 5.5.
We discussed Gram-Schmidt orthonormalization.
(An image from Wikipedia illustrating the process was shown here.)
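In code, the process looks like this (a compact sketch of mine of the classical variant, assuming numpy):

    import numpy as np

    def gram_schmidt(A):
        """Orthonormalize the columns of A (assumed independent)."""
        Q = []
        for a in A.astype(float).T:
            for q in Q:
                a = a - (q @ a) * q             # subtract the projection onto q
            Q.append(a / np.linalg.norm(a))
        return np.column_stack(Q)

    A = np.array([[1., 1.], [0., 1.], [0., 1.]])
    Q = gram_schmidt(A)
    print(np.allclose(Q.T @ Q, np.eye(2)))      # True: orthonormal columns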

Apr 18
Read Section 5.6.
We practiced Gram-Schmidt orthonormalization,
then discussed orthogonal and unitary matrices.

Apr 21
Read Sections 6.1, 7.1.
We described rotations in 3-space, and argued that
a nonzero vector ${\bf v}$ is an axis for the rotation given by matrix
$A$ iff $A{\bf v}={\bf v}$ iff ${\bf v}$ lies in the nullspace
of $(A-I)$.
Quiz 11.
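Finding an axis by the nullspace criterion above (my example: rotation by $90^\circ$ about the $z$-axis):

    import sympy as sp

    A = sp.Matrix([[0, -1, 0],
                   [1,  0, 0],
                   [0,  0, 1]])              # rotation by 90 degrees about the z-axis
    print((A - sp.eye(3)).nullspace())       # [(0, 0, 1)^T]: the axis, as expected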

Apr 23
We discussed the eigenvalue equation $A{\bf v}=\lambda{\bf v}$.
We explained why the e-values of $A$ are the roots of
the characteristic polynomial, $\textrm{det}(\lambda I-A)$.
We showed how a complete set of e-vectors can be used
to diagonalize $A$: if $A$ is $n\times n$ and
has e-vectors $({\bf v}_1,\ldots,{\bf v}_n)$ which form
a basis for (complex) $n$-space, then for $S = [{\bf v}_1 \ldots {\bf v}_n]$
we have $S^{-1}AS = \mathrm{diag}(\lambda_1,\ldots,\lambda_n)$.
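The recipe by machine (numpy; my matrix):

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])
    lam, S = np.linalg.eig(A)       # e-values and e-vectors (as the columns of S)
    print(lam)                      # the e-values, here 3 and 1 (order not guaranteed)
    D = np.linalg.inv(S) @ A @ S
    print(np.round(D, 12))          # diagonal, with the e-values on the diagonal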

Apr 25
The determinant is the unique alternating, $n$-linear
form on $\mathbb R^n$ that takes the value $1$
on the standard basis. We discussed the Laplace expansion,
why $\det(A^T)=\det(A)$ and how to compute the
determinant of block diagonal matrices.
We worked on a
handout.

Apr 28
We explained why the determinant of a matrix is zero if
the columns (or rows) are dependent.
We explained why the determinant of an upper triangular matrix
is the product of the diagonal entries.
We discussed the adjugate matrix and the equation
$A\cdot \textrm{adj}(A) = \det(A)\cdot I$. We also mentioned
how to compute $\det(A)$ through Gaussian elimination.
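That elimination method as code (a sketch of mine): reduce to triangular form, track row swaps, multiply the pivots.

    import numpy as np

    def det_by_ge(A):
        """det(A) via Gaussian elimination with partial pivoting
        (assumes A is invertible)."""
        M = A.astype(float).copy()
        n, sign = len(M), 1.0
        for j in range(n):
            p = j + np.argmax(np.abs(M[j:, j]))   # partial pivoting
            if p != j:
                M[[j, p]] = M[[p, j]]             # a row swap flips the sign
                sign = -sign
            for i in range(j + 1, n):
                M[i] -= (M[i, j] / M[j, j]) * M[j]
        return sign * np.prod(np.diag(M))         # product of the pivots

    A = np.array([[0., 2.], [3., 1.]])
    print(det_by_ge(A), np.linalg.det(A))         # -6.0 -6.0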
No quiz.

Apr 30
We discussed diagonalization of matrices.
A square matrix is diagonalizable iff the geometric
multiplicity of each e-value equals the
algebraic multiplicity of the e-value.
Diagonalization can be used to
raise matrices to powers easily. We explained why
$
\left[
\begin{array}{cc}
1&1\\1&0
\end{array}\right]^n\cdot
\left[
\begin{array}{c}
1\\0
\end{array}\right]
=
\left[
\begin{array}{c}
F_{n+1}\\F_n
\end{array}\right]
$ where $F_n$ is the $n$th Fibonacci number,
so our work on powers of matrices
produces a formula for the
$n$th Fibonacci number.
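Checking the identity, and the closed form it leads to (Binet's formula), numerically (my own check):

    import numpy as np

    F = np.array([[1, 1],
                  [1, 0]])
    print(np.linalg.matrix_power(F, 10) @ np.array([1, 0]))   # [89 55] = [F_11 F_10]

    phi = (1 + np.sqrt(5)) / 2             # the e-values of F are phi and 1 - phi
    print(round((phi**10 - (1 - phi)**10) / np.sqrt(5)))      # 55 = F_10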

May 2
We reviewed for the final. One question
with a lengthy answer was:
why does every rotation in 3-space have an axis?
The key parts of the answer were: a rotation matrix is orthogonal
with determinant $1$ (orthogonal matrices have determinant $\pm 1$),
$\det(A)$ = product of e-values of $A$,
all e-values of an orthogonal matrix satisfy $|\lambda|=1$, and
non-real e-values of a real matrix come in conjugate pairs; together
these force $1$ to be an e-value, and its e-vectors give the axis.