Math 2135-002: Intro to Linear Algebra, Spring 2021


Lecture Topics


Date
What we discussed/How we spent our time
Jan 15
Syllabus. Policies. Text.

The main topics of this course will be:

(1) Systems of linear equations.
(2) Matrix arithmetic.
(3) Vectors and vector spaces.
(4) Linear transformations.
(5) Determinants.
(6) Orthogonality and least squares.

Jan 20
We discussed the problem of solving systems of not-necessarily-linear equations.

The main topics covered were:

(1) An equation $F(x,y)=0$ in two variables represents a constraint on the pair $(x,y)$. Solving a *system* of equations $\{F(x,y)=0, G(x,y)=0\}$ involves finding all pairs that satisfy all constraints. Geometrically, this corresponds to finding the intersection points among the solution sets of the individual equations.

(2) What is a linear combination?

(3) What is a system of linear equations?

(4) Elementary row operations = the allowable steps in Gaussian elimination:
  (a) swapping two equations,
  (b) scaling an equation by a nonzero value,
  (c) adding a multiple of one equation to another.

(5) Pivots.

(6) Row echelon form.

(7) Linear systems with real coefficients can have 0 solutions, 1 solution, or infinitely many solutions.

We covered the first 10 slides from the author's first set of slides.
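Not something we did in class, but for those who want to experiment: here is a minimal Python/NumPy sketch (my own illustration, with a made-up system) that carries out the row operations from item (4) on an augmented matrix.

    import numpy as np

    # Augmented matrix for the system  x + 2y = 5,  3x + 4y = 6.
    M = np.array([[1., 2., 5.],
                  [3., 4., 6.]])

    M[1] = M[1] - 3 * M[0]   # add a multiple of one row to another
    M[1] = M[1] / M[1, 1]    # scale a row by a nonzero value
    M[0] = M[0] - 2 * M[1]   # clear the entry above the second pivot

    print(M)  # [[1, 0, -4], [0, 1, 4.5]], i.e. x = -4, y = 4.5

Swapping two rows, the remaining operation, would be M[[0, 1]] = M[[1, 0]].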

Jan 22
We reviewed Gaussian elimination, discussed terminology, and introduced the matrix form of a linear system.

The main topics covered were:

(1) Coefficient matrix of a linear system.

(2) Augmented matrix of a linear system.

(3) $k$th row pivot.

(4) Pivot column, pivot variable, free variable.

(5) (Reduced) row echelon form.

(6) Parametric form of solution set.
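As a supplement (mine, not from class), SymPy's rref method computes the reduced row echelon form and reports the pivot columns, from which the pivot variables, free variables, and parametric form of the solution set can be read off.

    from sympy import Matrix

    # Augmented matrix of a made-up system in x1, x2, x3 with one free variable.
    M = Matrix([[1, 2, 1, 4],
                [2, 4, 3, 9]])

    R, pivots = M.rref()
    print(R)       # Matrix([[1, 2, 0, 3], [0, 0, 1, 1]])
    print(pivots)  # (0, 2): x1 and x3 are pivot variables, x2 is free

    # Parametric form of the solution set: x1 = 3 - 2t, x2 = t, x3 = 1.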

Jan 25
We reviewed Gaussian elimination again, including two forms of expressing solutions: parametric form and vector form. Then we began discussing matrices and vectors and their arithmetic. In the last 10 minutes we took Quiz 0.
Jan 27
We started a discussion of matrices, vectors, and their arithmetic.

This included the following topics:

(1) Matrix notation.

(2) Column vectors and row vectors.

(3) The additive arithmetic of matrices and vectors (addition, negation, scaling, and zero).

(4) We began a discussion of matrix multiplication, noting in particular that if $A$ is $m\times n$, $B$ is $p\times q$, then $AB$ is defined if and only if $n=p$. If $n=p$, then $AB$ is an $m\times q$ matrix.

(5) We proved that matrix addition is commutative and sketched the proofs that matrix addition and multiplication are associative.

(6) We gave an example to show that matrix multiplication need not be commutative.
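The non-commutativity in item (6) is easy to reproduce; here is a short NumPy check (an illustration of mine, using made-up matrices).

    import numpy as np

    A = np.array([[1, 1],
                  [0, 1]])
    B = np.array([[1, 0],
                  [1, 1]])

    print(A @ B)  # [[2 1], [1 1]]
    print(B @ A)  # [[1 1], [1 2]]  -- so AB != BA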

Jan 29
We started class by working on this handout.

Then we discussed the following topics:

(1) Matrix multiplication.

(2) Identity matrices.

(3) $2$-sided and $1$-sided inverses.

(4) The axioms of rings.

(5) The fact that $M_n(\mathbb R)$ is a ring.

(6) Each linear system can be written as a matrix equation of the form $A{\bf x}={\bf b}$.

Feb 1
We discussed how each step of Gaussian Elimination corresponds to left multiplication by an elementary matrix. We introduced the notation for elementary matrices: $P_{ij}, E_{ii}(r), E_{ij}(r)$.
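A quick sanity check of this correspondence in NumPy (my own sketch; I am assuming $E_{ij}(r)$ denotes the elementary matrix that adds $r$ times row $j$ to row $i$):

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])

    P12 = np.array([[0., 1.],    # P_12: swaps rows 1 and 2
                    [1., 0.]])
    E21 = np.array([[1.,  0.],   # E_21(-3): adds -3 times row 1 to row 2
                    [-3., 1.]])

    print(P12 @ A)  # [[3, 4], [1, 2]]: the rows of A, swapped
    print(E21 @ A)  # [[1, 2], [0, -2]]: one elimination step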

We took Quiz 1.

Feb 3
We started discussing linear geometry. This included:

(1) Geometric interpretation of vectors in space.

(2) Dot product of two vectors in $\mathbb R^n$.

(3) How to compute the Pythagorean length of a vector in $\mathbb R^n$: $\|{\bf u}\|^2 = {\bf u}\boldsymbol{\cdot} {\bf u}$.

(4) How to compute the angle between vectors in $\mathbb R^n$: $\cos(\theta)=\frac{{\bf u}\boldsymbol{\cdot}{\bf v}}{\|{\bf u}\|\,\|{\bf v}\|}$.

(5) Geometric interpretation of addition, negation, scaling, and ${\bf 0}$. (Parallelogram rule for addition.)

(6) Linear combinations of vectors.

(7) Definition of ``linear transformation''.

(8) Thm. A function from $\mathbb R^n$ to $\mathbb R^m$ is a linear transformation if and only if it has the form $T({\bf x})=A\cdot {\bf x}$ for some $m\times n$ matrix $A$.

(9) The problem of finding the solutions to $A\cdot {\bf x} = {\bf b}$ is the problem of finding the preimage of ${\bf b}$ under the linear transformation $T({\bf x})=A\cdot {\bf x}$.
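Items (2)-(4) above are one-liners in NumPy; a small sketch of mine with made-up vectors:

    import numpy as np

    u = np.array([1., 0., 1.])
    v = np.array([1., 1., 0.])

    print(np.sqrt(u @ u))   # ||u|| = sqrt(u . u) = sqrt(2)
    cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    print(np.degrees(np.arccos(cos_theta)))  # 60 degrees, since cos(theta) = 1/2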

Feb 5
We worked on these practice problems. Then we discussed:

(1) Definition of ``span of a set of vectors''.

(2) A solution to $A{\bf x}={\bf b}$ is a vector of coefficients witnessing that ${\bf b}$ belongs to the span of the columns of $A$.

(3) Definition of ``independent set of vectors''.

Feb 8
We continued to discuss Linear Geometry (Chapter 1, Section 2). We discussed:

(1) The intuition behind dimension.

(2) A computational way to determine whether a set $V\subseteq \mathbb R^m$ spans $\mathbb R^m$: create a matrix $A$ whose columns are the vectors in $V$ and perform Gaussian elimination on $A$. The span of $V$ is $\mathbb R^m$ if and only if the resulting echelon form has no zero row; equivalently, every row of $A$ contains a pivot. (See the sketch at the end of this entry.)

(3) An algebraic characterization of the property in (2): $\mathbb R^m$ is the span of the columns of $A$ if and only if $A$ has a right inverse.

We took Quiz 2.
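Here is the spanning test from item (2) in code (my own NumPy sketch; matrix_rank counts pivots numerically rather than by literal Gaussian elimination):

    import numpy as np

    # Columns of A are the vectors of V; V spans R^3 iff every row has a pivot,
    # i.e. iff A has rank 3.
    A = np.column_stack([[1, 0, 1],
                         [0, 1, 1],
                         [1, 1, 0]])

    print(np.linalg.matrix_rank(A) == 3)  # True: V spans R^3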

Feb 10
We continued to discuss Linear Geometry (Chapter 1, Section 2). We compared the following two sets of statements about a set $V\subseteq \mathbb R^m$ and the matrix $A$ whose columns are the vectors in $V$:

First set of statements (equivalent to each other):

(1) $V$ spans $\mathbb R^m$.
(2) The RRE form of $A$ has no zero row.
(3) The RRE form of $A$ has a pivot in every row.
(4) $A{\bf x}={\bf b}$ has at least one solution for every ${\bf b}\in \mathbb R^m$.
(5) $A$ has a right inverse.

Second set of statements (equivalent to each other, but not equivalent to the above five statements):

(1) $V$ is independent.
(2) The RRE form of $A$ has no free variables.
(3) The RRE form of $A$ has a pivot in every column.
(4) $A{\bf x}={\bf b}$ has at most one solution for every ${\bf b}\in \mathbb R^m$.
(5) $A$ has a left inverse.

We defined ``basis'' (= independent spanning set) and ``dimension'' (= size of a basis).

Feb 12
We showed that the standard basis for $\mathbb R^m$ is linearly independent and spans $\mathbb R^m$. We then showed that every linearly independent spanning set for $\mathbb R^m$ has size $m$, thereby justifying our definition of dimension.

We described algorithms for finding left or right inverses for matrices that have them.
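One concrete recipe over $\mathbb R$ (an illustration of mine, not necessarily the algorithm from class): if the columns of $A$ are independent then $(A^tA)^{-1}A^t$ is a left inverse, and if the columns of $B$ span then $B^t(BB^t)^{-1}$ is a right inverse.

    import numpy as np

    A = np.array([[1., 0.],
                  [0., 1.],
                  [1., 1.]])              # 3x2 with independent columns

    L = np.linalg.inv(A.T @ A) @ A.T      # left inverse: (A^t A)^{-1} A^t
    print(np.allclose(L @ A, np.eye(2)))  # True: LA = I_2

    B = A.T                               # 2x3 whose columns span R^2
    R = B.T @ np.linalg.inv(B @ B.T)      # right inverse: B^t (B B^t)^{-1}
    print(np.allclose(B @ R, np.eye(2)))  # True: BR = I_2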

We began a discussion of fields (read pages 153-154) and vector spaces (read pages 83-92).

Feb 15
We continued our discussion of fields and vector spaces. One principal focus was our discussion of the concept of ``subspace'' of a vector space.

We took Quiz 3.

Feb 17
Wellness Day!
Feb 19
Midterm Review Sheet!

We worked through examples to compute the dimension of some spaces and some of their subspaces.

Some of the examples considered were:
(1) If $A$ is an $m\times n$ matrix, then the solution set of the homogeneous equation $A{\bf x}={\bf 0}$ is a subspace of $\mathbb R^n$. The dimension of this subspace equals the number of free variables of the system.
(2) $M_{m\times n}(\mathbb R)$ has dimension $mn$. The collection of symmetric $2\times 2$ matrices ($M^t=M$) over $\mathbb R$ is a subspace of $M_{2\times 2}(\mathbb R)$ of dimension $3$. The collection of antisymmetric $2\times 2$ matrices ($M^t=-M$) over $\mathbb R$ is a subspace of $M_{2\times 2}(\mathbb R)$ of dimension $1$.
(3) The polynomial functions $\{1,x,x^2,x^3,\ldots\}$ form an infinite independent subset of the real vector space $C(\mathbb R)$ of continuous functions $f:\mathbb R\to \mathbb R$. The existence of an infinite independent subset is enough to show that $C(\mathbb R)$ cannot be finite dimensional. (So it is infinite dimensional.)
(4) Let $S$ be the subspace of $C(\mathbb R)$ spanned by $\{\cos(x+r)\;|\;r\in \mathbb R\}$. We used the trig identity $\cos(x+r)=\cos(r)\cos(x)-\sin(r)\sin(x)$ to argue that $\{\cos(x), \sin(x)\}$ is a basis for $S$. Hence $S$ is $2$-dimensional.
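Example (1) is easy to check in SymPy (my own sketch with a made-up matrix):

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1],
                [0, 0, 1, 1]])  # pivots in columns 1 and 3; columns 2 and 4 free

    basis = A.nullspace()       # basis for the solution set of Ax = 0
    print(len(basis))           # 2 = the number of free variables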

Feb 22
Midterm Review Sheet!

Let $\mathbb V$ be a vector space. A subset $G\subseteq \mathbb V$ generates $\mathbb V$ if $\mathbb V = \textrm{span}(G)$. $\mathbb V$ is finitely generated if it has a finite spanning set.

We explained why, if $\mathbb V$ is a finitely generated vector space,

(1) Any spanning subset contains a basis.

(2) Any independent subset can be enlarged to a basis.

(3) All bases have the same cardinality, which we call $\dim(\mathbb V)$.

The same is true even if $\mathbb V$ is not finitely generated, but the proof in the non-finitely generated case requires some background from set theory.

We took Quiz 4.

Feb 24
Midterm Review Sheet!

We reviewed for the midterm exam.

(No new HW assigned for the week of Feb 24-March 3!)

Feb 26
Midterm!

Answers.

Mar 1
No quiz today!

We discussed how to write a vector in coordinates relative to a basis. We defined linear transformation (= homomorphism of vector spaces), and isomorphism. We explained why the transpose map is an isomorphism from $M_{2\times 2}(\mathbb R)$ to itself.

Mar 3
No HW due today!

We discussed isomorphisms of social networks (as an example of isomorphism between nonalgebraic structures), and then isomorphisms between vector spaces (as an example of isomorphism between algebraic structures). In general, if $\mathbb A$ and $\mathbb B$ are structures of the same type, then an isomorphism from $\mathbb A$ to $\mathbb B$ is
(i) a homomorphism (= structure preserving map) $T\colon \mathbb A\to \mathbb B$, for which there is a function $S\colon \mathbb B\to \mathbb A$ such that
(ii) $S$ is the inverse of $T$: $S(T(a))=a$ and $T(S(b))=b$ for every $a\in \mathbb A$ and $b\in \mathbb B$, and
(iii) $S$ is also a homomorphism.

Using social networks, we gave examples of $T$ and $S$ satisfying (i) and (ii) but not (iii). But we stated that if $\mathbb A$ and $\mathbb B$ are algebraic structures of the same type, and $T$ and $S$ exist satisfying (i) and (ii), then (iii) must also be satisfied. Thus, an isomorphism of vector spaces is an invertible linear transformation. (That is, it is a 1-1, onto linear transformation.)

Mar 5
We reviewed coordinates relative to a basis. Then we discussed matrices for linear transformations.
Mar 8
We started by finding a matrix for the trace map $\textrm{tr}\colon M_{2\times 2}(\mathbb R)\to \mathbb R^1$.

We then spoke about existence and uniqueness of matrix representations for linear transformations. We discussed the Universal Mapping Property (any set function defined on a basis, mapping into a space, uniquely extends to a linear transformation).

We took Quiz 5.

Mar 10
Handout on the classification of vector spaces and on the fact that every linear transformation is determined by its behavior on a basis. This handout includes the proof of the Universal Mapping Property.

We explained why matrix multiplication is linear, and why (given appropriate bases) every linear map can be represented by a matrix. We started discussing how to find a change of basis matrix ${}_{\mathcal C}[\textrm{id}]_{\mathcal B}$ from basis $\mathcal B$ to basis $\mathcal C$. The algorithm was to perform Gaussian Elimination on the matrix $[{\mathcal C}|{\mathcal B}]$ to obtain $[I|X]$, and then take ${}_{\mathcal C}[\textrm{id}]_{\mathcal B}$ to be $X=[{\mathcal C}]^{-1}\cdot [{\mathcal B}]$.
Here, when I write an ordered basis ${\mathcal B}$ with square brackets $[{\mathcal B}]$, I mean the matrix whose columns are the vectors in ${\mathcal B}$ written in the correct order.
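Numerically, $X=[{\mathcal C}]^{-1}\cdot[{\mathcal B}]$ can be computed by a linear solve, which plays the role of the Gaussian elimination step above. A NumPy sketch of mine, with made-up bases:

    import numpy as np

    B = np.array([[1., 1.],     # columns: the ordered basis B of R^2
                  [0., 1.]])
    C = np.array([[1., 0.],     # columns: the ordered basis C of R^2
                  [1., 1.]])

    X = np.linalg.solve(C, B)   # the change of basis matrix [C]^{-1} [B]

    v_B = np.array([2., 3.])    # coordinates of a vector relative to B
    print(B @ v_B)              # the vector in standard coordinates: [5, 3]
    print(C @ (X @ v_B))        # the same vector, via its C-coordinates: [5, 3]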

Mar 12
We verified the correctness of the procedure to compute a change of basis matrix.

We discussed the canonical factorization of an arbitrary function $f\colon A\to B$ ($f=\iota\circ \overline{f}\circ \nu$), and discussed related terminology (domain, codomain, image, coimage, natural map, inclusion map, induced map, preimage). We then translated this terminology into the setting of linear transformations. We proved that the image of a linear transformation is a subspace of the codomain of the transformation.

Mar 15
Snow day! (University is closed.)
Mar 17
We discussed how to compute ordered bases and dimension for the spaces $\textrm{im}(T)$ and $\ker(T)$ when $T\colon \mathbb V\to \mathbb W$ is a linear transformation, and ordered bases $\mathcal B$ and $\mathcal C$ for $\mathbb V$ and $\mathbb W$ are given. We proved the Rank+Nullity theorem.
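A quick SymPy check of Rank+Nullity on a made-up matrix (my own illustration):

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [1, 0, 1]])

    rank = A.rank()                  # dim im(T)
    nullity = len(A.nullspace())     # dim ker(T)
    print(rank, nullity)             # 2 1
    print(rank + nullity == A.cols)  # True: rank + nullity = dim of the domain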

We took Quiz 6.

Mar 19
We worked through these slides, and justified our algorithms for computing bases and dimension for image and kernel. We also explained an algorithm for extending a partial basis to a basis.
Mar 22
We explained why the inclusion order on subspaces of $\mathbb V$ has the following properties.

(1) It is a lattice, $\textrm{Sub}(\mathbb V)$. (The greatest lower bound of $S, T\leq \mathbb V$ is $S\cap T$, and the least upper bound is $S+T=\textrm{span}(S\cup T)$.)

(2) We examined $\textrm{Sub}(\mathbb V)$ when $\mathbb V$ is an $\mathbb F_2$-space of dimension $2$ or $3$.

(3) We explained why $\textrm{Sub}(\mathbb V)$ is graded by dimension.

(4) We explained why $\textrm{Sub}(\mathbb V)$ is complemented. (Given a basis $\mathcal B$ for $\mathbb V$ and a basis $\mathcal C$ for $S$, the column space algorithm applied to $[{\mathcal C}|{\mathcal B}]$ yields a basis for $\mathbb V$ that includes all vectors from $\mathcal C$. Delete the vectors from $\mathcal C$, and what remains will be a basis for a complement $S^{\prime}$ of $S$.)

(5) We stated but did not prove the following dimension formula, which is analogous to the principle of inclusion and exclusion: $\dim(S+T)=\dim(S)+\dim(T)-\dim(S\cap T)$.

Mar 24
(1) We reviewed why any subspace $\mathbb S\leq \mathbb V$ has a (not-necessarily-unique) complement $\mathbb S^{\prime}$, and reviewed the algorithm for how to compute a basis for $\mathbb S^{\prime}$ from a basis for $\mathbb V$ and a basis for $\mathbb S$.

(2) We explained why, if we restrict a linear transformation $T\colon \mathbb V\to \mathbb W$ to a subspace $\mathbb S\leq \mathbb V$, then $T|_{\mathbb S}$ is 1-1 iff $\mathbb S\cap \ker(T)=\{0\}$, $T|_{\mathbb S}$ maps $\mathbb S$ onto $\textrm{im}(T)$ iff $\mathbb S + \ker(T)=\mathbb V$, and $T|_{\mathbb S}$ is an isomorphism from $\mathbb S$ to $\textrm{im}(T)$ iff $\mathbb S$ is a complement of $\ker(T)$.

(3) We sketched the proof that $\dim(S+T)=\dim(S)+\dim(T)-\dim(S\cap T)$.

Mar 26
Read pages 326-363.

We discussed signed volume and the right hand rule. We explained why signed volume in $\mathbb R^m$ is

(1) a scalar-valued function $f\colon (\mathbb R^m)\times \cdots \times (\mathbb R^m)\to \mathbb R$,

(2) multilinear,

(3) alternating, and

(4) normalized to $1$ on the unit hypercube $({\bf e}_1,\ldots,{\bf e}_m)$.

I claimed that there is a unique function satisfying (1)-(4), but did not prove this yet. We did prove that a multilinear function over a scalar field where $2\neq 0$ is alternating if and only if it is antisymmetric. Thus, over $\mathbb R$, item (3) is equivalent to the slightly weaker property (3)', which says that $f$ is antisymmetric.

Mar 29
We outlined the argument that explains why an $n$-linear, alternating function $D$ defined on $n$-dimensional space is determined by the value of $D({\bf e}_1,\ldots,{\bf e}_n)$.

We took Quiz 7. Solutions.

Mar 31
We discussed permutations, permutation matrices, and the permutation expansion of the determinant, which is $\det [a_{ij}] = \sum_{\pi} \textrm{sign}(\pi)a_{\pi(1)1}\cdots a_{\pi(n)n}$. Here the sign of $\pi$ is $(-1)^k$ where $k$ is any number with the property that $\pi$ can be represented as a composition of $k$ transpositions. One legitimate value for $k$ is the Cauchy number of $\pi$.
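The permutation expansion translates directly into code. This sketch of mine computes the sign from the inversion count and is hopelessly slow for large $n$ (there are $n!$ terms), but it is a faithful rendering of the formula:

    from itertools import permutations
    from math import prod

    def sign(p):
        # (-1)^k, where k is the number of inversions of p
        return (-1) ** sum(1 for i in range(len(p))
                             for j in range(i + 1, len(p)) if p[i] > p[j])

    def det(A):
        # sum over all permutations pi of sign(pi) * a_{pi(1)1} ... a_{pi(n)n}
        n = len(A)
        return sum(sign(p) * prod(A[p[j]][j] for j in range(n))
                   for p in permutations(range(n)))

    print(det([[1, 2],
               [3, 4]]))   # 1*4 - 3*2 = -2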
Apr 2
Based on the permutation expansion of the determinant, we argued that $\det(A)=\det(A^t)$. Then we defined the $ij$-submatrix $A(i|j)$ of a matrix $A$, the $ij$-minor, the $ij$-cofactor, and the Laplace expansion of the determinant.
Apr 5
We worked out $\det(A)$ in different ways using the Laplace expansion. We introduced the adjugate matrix and explained why $A\cdot \textrm{adj}(A)=\det(A)\cdot I$. We explained why $\det(A)\neq 0$ if $A$ is invertible.

We took Quiz 8.

Apr 7
We explained how to compute the determinant of $A$ via Gaussian elimination. Then we summarized the most important facts about determinants:

(1) $\det\colon M_{n\times n}(\mathbb F)\to \mathbb F$ is the unique $n$-ary function (of the columns) that is multilinear, alternating, and normalized to $1$ on the standard basis/identity matrix.

(2) If $f\colon M_{n\times n}(\mathbb F)\to \mathbb F$ is any $n$-ary function (of the columns) that is multilinear and alternating, then $$f({\bf v}_1,\ldots,{\bf v}_n)=\det([{\bf v}_1,\ldots,{\bf v}_n])\cdot f({\bf e}_1,\ldots,{\bf e}_n).$$

(3) Any alternating multilinear function is antisymmetric. The converse is true if the characteristic of $\mathbb F$ is not $2$.

(4) The determinant may be computed by the permutation expansion, the Laplace expansion, or Gaussian elimination.

(5) $\det(AB)=\det(A)\cdot \det(B)$.

(6) $\det(A)=\det(A^t)$.

(7) $\det(A)\neq 0$ iff the columns of $A$ are independent iff the rows of $A$ are independent.

(8) If $U$ is upper triangular, then $\det(U)$ is the product of the diagonal entries of $U$. (Same type of remark for lower triangular matrices.)

Apr 9
We began to discuss eigenvectors. (See Chapter 5, Section II, Subsection 3.)

In all of today's discussion, we considered linear transformations $T\colon \mathbb V\to \mathbb V$ from a space to itself. Usually we assumed that $\mathbb V = \mathbb R^n$ and that $T(x) = Ax$ where $A$ is some $n\times n$ matrix. The most important points were:

(1) $v$ is an eigenvector for $A$ if it is a nonzero vector for which there is a scalar $\lambda$ (lambda) with $Av=\lambda v$. ($v$ is a ``preserved direction'', or ``axis'', for $T(x)=Ax$.) $\lambda$ is called the eigenvalue for $v$, and $v$ is also called a ``$\lambda$-eigenvector''.

(2) If $A$ is diagonal, then $\mathbb V$ has an ordered basis consisting of eigenvectors for $A$.

(3) Conversely, if $\mathbb V$ has an ordered basis consisting of eigenvectors for $A$, then with respect to that basis the matrix for $A$ will be diagonal.

(4) A nonzero vector $v$ is a $\lambda$-eigenvector for $A$ iff $Av=\lambda v$ iff $(\lambda I - A)v = 0$ iff $v\in \textrm{null}(\lambda I-A)$. That is, $v$ is a nonzero vector in $V_{\lambda}=\textrm{null}(\lambda I-A)$. $V_{\lambda}$ is called the $\lambda$-eigenspace of $A$.

(5) A scalar $\lambda$ is an e-value for $A$ iff $\lambda I-A$ has a nontrivial null space iff $\lambda I-A$ is not of full rank iff $\lambda I-A$ is singular iff $\det(\lambda I-A) = 0$.

(6) The characteristic polynomial of $A$ is defined to be $\chi_A(\lambda) = \det(\lambda I-A)$. (Here $\chi$ is the Greek letter chi, for characteristic, and $\lambda$ is considered to be a variable.) If $A$ is $n\times n$, then the characteristic polynomial of $A$ has degree $n$ and leading coefficient $1$. The e-values of $A$ are the roots of $\chi_A(\lambda)$.

(7) If $A=[a]$, then $\chi_A(\lambda) = \lambda-a$.

(8) If $A$ is $2\times 2$, then $\chi_A(\lambda) = \lambda^2-\textrm{tr}(A)\lambda+\det(A)$.
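Items (1), (6), and (8) in NumPy, on a made-up symmetric matrix (a sketch of mine):

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])

    # chi_A(lambda) = lambda^2 - tr(A) lambda + det(A) = lambda^2 - 4 lambda + 3,
    # with roots 3 and 1.
    evals, evecs = np.linalg.eig(A)
    print(evals)                             # the e-values 3 and 1 (order may vary)

    v = evecs[:, 0]                          # an e-vector (columns pair with evals)
    print(np.allclose(A @ v, evals[0] * v))  # True: Av = lambda v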

Apr 12
We discussed these slides about eigenvectors, eigenvalues, the characteristic equation, and diagonalization. Some of this was a review of the previous lecture, but we went further to discuss how these quantities are calculated. The main new items were:

(1) Eigenspaces for distinct eigenvalues of $A$ are ``disjoint'' (or ``independent'').
(2) If $T(x)=Ax$ has a basis ${\mathcal B}$ of e-vectors, and $C=[{\mathcal B}]$, then $C^{-1}AC = \Lambda$ is the matrix for $A$ with respect to the basis ${\mathcal B}$, and it is a diagonal matrix with diagonal values equal to the e-values for $A$.

We took Quiz 9.

Apr 14
We discussed diagonalization of square matrices, continuing to follow these slides. The main points included:

(1) If $\mathcal B$ is a basis for which $D={}_{\mathcal B}[A]_{\mathcal B}$ is diagonal, then $D$ and $A$ have the same e-values, and they appear as the diagonal entries of $D$.
(2) One may diagonalize $A$ by conjugating it ($C^{-1}AC$) by a matrix $C=[{\mathcal B}]$ where ${\mathcal B}$ is a basis consisting of e-vectors for $A$.
(3) If $D=C^{-1}AC$ is a diagonal form for $A$, then $D^k=C^{-1}A^kC$ and $A^k=CD^kC^{-1}$.
(4) $A$ will fail to be diagonalizable over the scalar field $\mathbb F$ if $\mathbb F$ does not contain all of the e-values of $A$. But this problem may be overcome by extending $\mathbb F$ to its algebraic closure, $\overline{\mathbb F}$.
(5) The other reason that $A$ may fail to be diagonalizable is that the algebraic multiplicity of some e-value of $A$ is strictly larger than the geometric multiplicity of the e-value.
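Points (2) and (3), checked in NumPy on a diagonalizable made-up matrix (my own sketch):

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])

    evals, C = np.linalg.eig(A)            # columns of C: a basis of e-vectors
    D = np.linalg.inv(C) @ A @ C           # C^{-1} A C
    print(np.allclose(D, np.diag(evals)))  # True: D is diagonal

    A5 = C @ np.diag(evals ** 5) @ np.linalg.inv(C)       # A^5 = C D^5 C^{-1}
    print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True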

Apr 16
We completed these slides. The main new points were:

(1) The Cayley-Hamilton Theorem.
(2) Definition of the minimal polynomial of a matrix over a given scalar field ($\textsf{minpoly}_{A,\mathbb F}(\lambda)$).
(3) $A$ is diagonalizable over $\mathbb F$ iff the minimal polynomial of $A$ over $\mathbb F$ factors over $\mathbb F$ into distinct linear factors.
(4) The use of matrices to solve linear recurrences.
(5) The Binet formula for the Fibonacci numbers.
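Items (4) and (5) in miniature (a sketch of mine): the Fibonacci recurrence in matrix form, and the Binet formula obtained from the e-values $\varphi=(1+\sqrt 5)/2$ and $\psi=(1-\sqrt 5)/2$ of that matrix.

    import numpy as np
    from math import sqrt

    # F_{n+1} = F_n + F_{n-1}  becomes  A^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]]
    # for A = [[1, 1], [1, 0]].
    A = np.array([[1, 1],
                  [1, 0]])
    print(np.linalg.matrix_power(A, 10)[0, 1])   # F_10 = 55

    # Binet formula: F_n = (phi^n - psi^n) / sqrt(5)
    phi, psi = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2
    print(round((phi**10 - psi**10) / sqrt(5)))  # 55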

Apr 19
We discussed more applications of e-vectors in these new slides.

We took Quiz 10.

Apr 21
We discussed

(1) The structure of an endomorphism of a finite-dimensional vector space.
(2) Similarity.
(3) Algebraically closed fields. The Fundamental Theorem of Algebra.
(4) Similar matrices have the same characteristic polynomials and therefore the same e-values.
(5) The algebraic multiplicity of an e-value is at least as large as the geometric multiplicity.
(6) Distinct e-spaces are independent. Any concatenation of bases for the e-spaces is an independent set, and it is a basis for the space iff the algebraic multiplicity equals the geometric multiplicity for each e-value.

Apr 23
Final review sheet.

We discussed generalized e-vectors, generalized e-spaces, and Jordan form.

Apr 26
Final review sheet.

We finished the discussion of the Jordan form.