Summer 2024
CMSE/MTH 314 Matrix Algebra with Computational Applications
- Course website - D2L
Lecture:
- Instructor: Son Tu – tuson@msu.edu
- Wells Hall A332 (TR 9:10 am - 12:00 pm).
- Syllabus PDF
- Schedule PDF
- Office hours:
- Wednesday 9-11 am on Zoom or in person C330 Wells Hall
- By appointment (via email)
- https://msu.zoom.us/j/96240880040
- Meeting ID: 962 4088 0040
- Passcode: 632835
Materials
- All materials are hosted on D2L.
- Some lecture notes are posted here for convenience.
- Lecture 11 · 2024-06-18 · Diagonalization & Singular value decomposition
- Lecture 10 · 2024-06-13 · Eigenvalue & Markov's problem
- Lecture 09 · 2024-06-11 · Gram-Schmidt process
- Lecture 08 · 2024-06-06 · Change of basis
- Lecture 07 · 2024-06-04 · Vector space 2
- Lecture 06 · 2024-05-30 · Vector space I
- Lecture 05 · 2024-05-28 · Linear transformation
- Lecture 04 · 2024-05-23 · Determinant
- Lecture 03 · 2024-05-21 · Inverse of Matrix
Lecture 11 · 2024-06-18 · Diagonalization & Singular value decomposition
- Diagonalization
- Singular value decomposition
- Theorem: If $A$ is an $n\times n$ symmetric matrix, then $A$ can be written as $A = PDP^T$, where $P$ is orthogonal and $D$ is diagonal.
- Note: when computing $U, V$ in the singular value decomposition, we need to normalize every eigenvector (column) to norm $1$.

Facts for an orthogonal matrix $A$
- Column vectors form an orthonormal set
- Row vectors form an orthonormal set
- $A^{-1}$ is orthogonal
- $A^T$ is orthogonal (since $A^{-1} = A^T$)
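Both factorizations are easy to check numerically. Here is a minimal numpy sketch (the matrix is made up for illustration); `np.linalg.eigh` is numpy's eigensolver for symmetric matrices and returns orthonormal eigenvectors as the columns of $P$:

```python
import numpy as np

# a small symmetric matrix, chosen only for illustration
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# diagonalization A = P D P^T with P orthogonal, D diagonal
eigvals, P = np.linalg.eigh(A)
D = np.diag(eigvals)
print(np.allclose(A, P @ D @ P.T))          # True

# singular value decomposition A = U Sigma V^T
U, s, Vt = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(s) @ Vt))  # True
```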
Lecture 10 · 2024-06-13 · Eigenvalue & Markov's problem
Lecture 09 · 2024-06-11 · Gram-Schmidt process
Lecture 08 · 2024-06-06 · Change of basis
Key takeaways:
- Let $\mathcal{E} = (e_1,\ldots, e_n)$ be a (standard) basis for $V$ (think of $\R^n$)
- Let $\mathcal{B} = (b_1,\ldots, b_n)$ be a basis for $V$.
- Let $\mathcal{C} = (c_1,\ldots, c_n)$ be a basis for $V$.
For $x\in V$,
$$ [x]_\mathcal{B} = \begin{bmatrix} x_1\\ \vdots\\ x_n \end{bmatrix} \qquad\text{means} \qquad x =[x]_\mathcal{E} = x_1b_1 + \ldots + x_nb_n. $$
In other words, this is the key equation:
$$ [ x ] _ { \mathcal{E} } = [ b_1, \ldots, b_n ] \cdot [ x ] _ {\mathcal{B}} = [c_1, \ldots, c_n] \cdot [ x ] _ {\mathcal{C}}. $$
Then we can convert between bases as
$$ \fbox{$ \displaystyle [ x ] _ { \mathcal{C} } = [ c_1,\ldots, c_n ] ^ {-1} \cdot [b_1,\ldots, b_n] \cdot [ x ] _ { \mathcal{B} } . $} $$
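A minimal numpy sketch of the boxed formula (the bases here are made up for illustration; `np.linalg.solve` is used instead of forming the inverse explicitly):

```python
import numpy as np

# two bases of R^2, stored as columns (made up for illustration)
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # columns b1, b2
C = np.array([[2.0, 0.0],
              [0.0, 1.0]])          # columns c1, c2

x_B = np.array([3.0, 2.0])          # coordinates [x]_B

x_E = B @ x_B                       # [x]_E = [b1, b2] [x]_B
x_C = np.linalg.solve(C, x_E)       # [x]_C = [c1, c2]^{-1} [b1, b2] [x]_B

print(x_E)                          # [5. 2.]
print(x_C)                          # [2.5 2. ]
print(np.allclose(C @ x_C, x_E))    # True: same vector in both bases
```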
Lecture 07 · 2024-06-04 · Vector space 2
Key takeaways: Given a matrix $A$ of size $m\times n$
- Nullspace: the set of all solutions of the equation $Ax=0$.
  - Other notation: $\mathrm{null}(A)$ or $\mathrm{ker}(T)$, where $T(x)=Ax$
- Column space: the set of all linear combinations of the columns of $A$.
  - $\mathrm{col}(A)$ is a subset of $\R^m$
  - $\mathrm{col}(A)$ is also the range of $T$, where $Tx=Ax$
- Row space: the set of all linear combinations of the rows of $A$ (each row is a vector in $\R^n$).
  - $\mathrm{row}(A) = \mathrm{col}(A^T)$
Finding bases for the row space, column space, and null space, and the dimension of a space
- Row space (via row-equivalent transformations):
  - Start from $A$ and compute its RREF; the nonzero rows of the result form a basis for $\mathrm{row}(A)$
  - Note: row operations taking $A$ to $B$ preserve the row space, so $\mathrm{row}(A)=\mathrm{row}(B)$ (as vector spaces)
- Column space:
  - Start from $A$, do RREF to get $B$
  - See which columns of $B$ are dependent on the others (the non-pivot columns) and cross them out
  - The remaining columns of the original $A$ (the pivot columns) form a basis for $\mathrm{col}(A)$
- Rank:
  - $\mathrm{rank}(A) = \mathrm{dim}(\mathrm{col}(A)) = \mathrm{dim}(\mathrm{row}(A))$: the number of leading entries (pivots) in the RREF
- Rank-Nullity theorem:
  - $\mathrm{rank}(A) + \mathrm{dim}(\mathrm{null}(A)) = n$
  - $\mathrm{rank}(A) + \mathrm{dim}(\mathrm{null}(A^T)) = m$
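All of these computations can be done in sympy. A short sketch on a made-up $3\times 4$ matrix (`rref`, `nullspace`, and `rank` are standard sympy `Matrix` methods):

```python
import sympy as sym

# a 3x4 example matrix (made up for illustration); here row 3 = row 1 + row 2
A = sym.Matrix([[1, 2, 0, 1],
                [2, 4, 1, 1],
                [3, 6, 1, 2]])
m, n = A.shape

R, pivots = A.rref()                    # RREF and the pivot column indices

# nonzero rows of R form a basis for row(A)
row_basis = [R.row(i) for i in range(R.rank())]
# the pivot columns of the original A form a basis for col(A)
col_basis = [A.col(j) for j in pivots]
# basis for null(A)
null_basis = A.nullspace()

# rank-nullity: rank(A) + dim(null(A)) = n
print(A.rank() + len(null_basis) == n)  # True
```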
Lecture 06 · 2024-05-30 · Vector space I
Lecture 05 · 2024-05-28 · Linear transformation
Key takeaways:
- Linear transformations (they are matrix transformations if $T:\R^n\to\R^m$)
- Given a linear transformation $T:\R^n\to\R^m$, find the matrix representation $Tx=Ax$.
- Elementary transformations in 2D:
  - Shears (vertical, horizontal)
  - Projections (onto the $x_1$ axis, onto the $x_2$ axis)
  - Contractions and expansions (vertical, horizontal)
  - Reflections (5 types)
  - Rotations
- A special nonlinear transformation: translation.
- How to combine transformations (the order matters); see the sketch below.
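A quick numpy check that the order of composition matters, using a $90^\circ$ rotation and a horizontal shear (both made up for illustration):

```python
import numpy as np

theta = np.pi / 2                                # rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.array([[1.0, 1.0],                        # horizontal shear
              [0.0, 1.0]])

x = np.array([1.0, 0.0])

# composing in different orders gives different results
print(S @ (R @ x))   # rotate, then shear: approximately [1, 1]
print(R @ (S @ x))   # shear, then rotate: approximately [0, 1]
```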
Lecture 04 · 2024-05-23 · Determinant
Key takeaways:
- Algorithm to compute determinant: cofactor expansion
- Relation between the determinants of matrices before and after row operations. For example, the matrix below has determinant $0$, since row 2 is $-2$ times row 1:
$$ A = \begin{bmatrix} 1 & -4 & 2 \\ -2 & 8 & -4 \\ -1 & 7 & 0 \end{bmatrix} $$
```python
import numpy as np

A = np.array([[ 1, -4,  2],
              [-2,  8, -4],
              [-1,  7,  0]])

# det(A) = 0 exactly; np.linalg.det may return a tiny
# floating-point value instead of exactly 0
det = np.linalg.det(A)
```
Lecture 03 · 2024-05-21 · Inverse of Matrix
Key takeaways:
- Finding the inverse of a matrix.
- Finding a series of elementary matrices in the row operations.
Python code to form the augmented matrix and perform the row operations:
Let
$$ A = \begin{bmatrix} 0 & 1 & 2 \\ 1 & 0 & 3 \\ 4 &-3 & 8 \end{bmatrix}. $$
To find the inverse of $A$, we form the augmented matrix
$$ [A \,|\, I] = \left[\begin{array}{ccc|ccc} 0 & 1 & 2 & 1 & 0 & 0\\ 1 & 0 & 3 & 0 & 1 & 0\\ 4 & -3 & 8 & 0 & 0 & 1 \end{array}\right]. $$
Then we perform row operations to find the reduced row echelon form (RREF) of the augmented matrix; if $\mathrm{rref}(A) = I_3$, the matrix obtained on the right-hand side is $A^{-1}$. Otherwise, $A$ is not invertible.
```python
import numpy as np
import sympy as sym

A = sym.Matrix([[0,  1, 2],
                [1,  0, 3],
                [4, -3, 8]])

# stack A | I to form the augmented matrix
AB = np.column_stack((A, np.eye(3)))

# convert AB back to a sympy matrix
AB = sym.Matrix(AB)

# perform the row operations to find the reduced row echelon form
AB_rref = AB.rref()[0]

# select the matrix on the right-hand side
Ainv = AB_rref[:, 3:6]
Ainv
```
We obtain
$$ A ^ { -1 } = \begin{bmatrix} -4.5 & 7 & -1.5 \\ -2 & 4 & -1 \\ 1.5 & -2 & 0.5 \end{bmatrix} $$
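As a quick sanity check (continuing the session above), multiplying $A$ by the computed inverse should give the $3\times 3$ identity (the entries appear as floats since `np.eye` was used):

```python
# continuing the session above: A * Ainv should print the 3x3 identity
A * Ainv
```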
© 2016-2024 Son N. T. Tu. Powered by Ark. Last updated: October 02, 2024.