$Ax = b$: is the vector b in the column space of A?

$Ax = \lambda x$: an eigenvector x gives a direction in which Ax keeps the direction of x. Then $A^2x = \lambda^2 x$.

$Av = \sigma u$: SVD. Which parts of the data matrix A are important?

Ax is a linear combination of the columns of A. All combinations Ax fill the column space of A.
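
A one-line numpy check of this fact (the matrix A and vector x here are arbitrary examples):

```python
import numpy as np

# Ax is the combination x1*(column 1) + x2*(column 2) + x3*(column 3)
A = np.array([[1., 2., 0.],
              [3., 1., 4.],
              [0., 5., 2.]])
x = np.array([2., -1., 3.])

combination = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(np.allclose(A @ x, combination))   # True
```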

3 independent columns in R^3 produce an invertible matrix.

A basis for a subspace is a full set of independent vectors. The rank of a matrix is the dimension of its column space.

A = CR: C holds the independent columns of A, and R = rref(A) without its zero rows. The number of independent columns equals the number of independent rows.
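
A small sympy sketch of A = CR: take the pivot columns of A as C and the nonzero rows of rref(A) as R. The example matrix is made up (its third column is a combination of the first two, so rank = 2):

```python
import sympy as sp

A = sp.Matrix([[1, 3, 8],
               [1, 2, 6],
               [0, 1, 2]])              # column 3 = 2*(column 1) + 2*(column 2)

R_full, pivots = A.rref()               # rref and the pivot column indices
C = sp.Matrix.hstack(*[A.col(j) for j in pivots])   # independent columns of A
R = R_full[:len(pivots), :]             # rref without its zero rows

assert C * R == A                       # A = CR
print(pivots)                           # (0, 1): 2 independent columns = 2 independent rows
```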

One column u times one row $v^T$: every nonzero matrix $uv^T$ has rank 1.

The outer product approach is important in data science because we are looking for the important parts of A.
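
A tiny numpy check of the rank-one statement (u and v are arbitrary nonzero vectors):

```python
import numpy as np

u = np.array([1., 2., 3.])
v = np.array([4., 0., -1.])
A = np.outer(u, v)                  # the rank-1 matrix u v^T
print(np.linalg.matrix_rank(A))     # 1
```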

A = LU: elimination. L is lower triangular and U is upper triangular.

A = QR: orthogonalizing. $Q^TQ=I$ and R is upper triangular

$S=Q\Lambda Q^T$: S is symmetric. Eigenvalues on the diagonal of $\Lambda$. Orthonormal eigenvectors in the columns of Q.

$A=X\Lambda X^{-1}$: diagonalization when A is n by n with n independent eigenvectors. Eigenvalues of A on the diagonal of $\Lambda$. Eigenvectors of A in the columns of X.

$A=U\Sigma V^T$: SVD. Orthonormal singular vectors in U and V. Singular values in $\Sigma$.
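
A quick numpy check of the QR, symmetric eigenvalue, diagonalization, and SVD factorizations above, on a small random matrix chosen only for illustration (A = LU with pivoting appears in the elimination section below):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# A = QR: Q^T Q = I and R upper triangular
Q, R = np.linalg.qr(A)
print(np.allclose(A, Q @ R), np.allclose(Q.T @ Q, np.eye(4)))

# S = Q Lambda Q^T for a symmetric S (eigh returns orthonormal eigenvectors)
S = A + A.T
lam, Qs = np.linalg.eigh(S)
print(np.allclose(S, Qs @ np.diag(lam) @ Qs.T))

# A = X Lambda X^{-1}: diagonalization with n independent eigenvectors
evals, X = np.linalg.eig(A)
print(np.allclose(A, X @ np.diag(evals) @ np.linalg.inv(X)))

# A = U Sigma V^T: the SVD
U, sigma, Vt = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(sigma) @ Vt))
```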

Orthogonal matrix: $Q^T=Q^{-1}$. $q_i^Tq_j = 1$ if $i = j$, otherwise 0.

$Sq = \lambda q$: eigenvector q and eigenvalue $\lambda$. Each eigenvalue-eigenvector pair contributes a rank-one piece $\lambda qq^T$ to S.

The columns of $Q\Lambda$ are $\lambda_1 q_1$ to $\lambda_n q_n$, because $\Lambda$ is a diagonal matrix.
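
A numpy sketch of these two statements, using a small symmetric matrix made up for illustration:

```python
import numpy as np

S = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])            # symmetric

lam, Q = np.linalg.eigh(S)              # orthonormal eigenvectors in the columns of Q

# S is the sum of the rank-one pieces lambda_i * q_i q_i^T
pieces = sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(3))
print(np.allclose(S, pieces))           # True

# column i of Q @ diag(lam) is lambda_i * q_i
print(np.allclose((Q @ np.diag(lam))[:, 0], lam[0] * Q[:, 0]))   # True
```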

nullspace: N(A) contains all solutions x to Ax = 0

left nullspace: $N(A^T)$ contains all solutions y to $A^Ty=0$

If Ax = 0 has r independent equations (rank r), it has n - r independent solutions.
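
A numpy/scipy check of the n - r count, on a made-up matrix with n = 4 columns and rank 2:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],
              [1., 0., 1., 0.]])        # rank 2, n = 4 columns

r = np.linalg.matrix_rank(A)
N = null_space(A)                       # orthonormal basis for N(A)
print(r, N.shape[1])                    # 2 and 4 - 2 = 2 independent solutions
print(np.allclose(A @ N, 0))            # every basis vector solves Ax = 0
```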

Incidence matrix

A has +1 and -1 in every row, to show the end node and the start node of each edge.

The r edges of a tree give r independent rows of A: rank = r = n - 1.

There are m - r = m - n + 1 independent small loops in the graph (m = number of edges).
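
A small numpy sketch: build the incidence matrix of a made-up graph with n = 4 nodes and m = 5 edges, then check rank = n - 1 and the loop count m - n + 1:

```python
import numpy as np

# each row of the incidence matrix: -1 at the start node, +1 at the end node
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n, m = 4, len(edges)
A = np.zeros((m, n))
for row, (start, end) in enumerate(edges):
    A[row, start] = -1
    A[row, end] = +1

r = np.linalg.matrix_rank(A)
print(r)           # n - 1 = 3  (a spanning tree has 3 edges)
print(m - r)       # m - n + 1 = 2 independent loops
```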

Rank

Rank of AB <= rank of A, because every column of AB is a combination of the columns of A

Rank of A + B <= (rank of A) + (rank of B)

Rank of $A^TA$ = rank of $AA^T$ = rank of A = rank of $A^T$, because $A^TA$ and A have the same nullspace.

If A is m by r and B is r by n, both with rank r, then AB also has rank r, because r = rank of $A^TABB^T$ <= rank of AB <= rank of A = r. However, this does not hold for BA.
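
A numpy check of these rank facts, using small random matrices built to have the stated ranks (almost surely):

```python
import numpy as np

rng = np.random.default_rng(1)
rank = np.linalg.matrix_rank

# rank(AB) <= rank(A)  and  rank(A + B) <= rank(A) + rank(B)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))   # 5 by 5, rank 3
B = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))   # 5 by 5, rank 2
print(rank(A @ B) <= rank(A), rank(A + B) <= rank(A) + rank(B))  # True True

# rank(A^T A) = rank(A) = rank(A A^T)
print(rank(A.T @ A) == rank(A) == rank(A @ A.T))                 # True

# A m by r and B r by n, both of rank r  =>  AB has rank r
C = rng.standard_normal((6, 3))          # 6 by 3, rank 3
D = rng.standard_normal((3, 4))          # 3 by 4, rank 3
print(rank(C @ D))                       # 3
```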

Elimination factors A into a lower triangular L times an upper triangular U: A = LU.

Zero cannot be the first pivot. If there is a nonzero number lower down in column 1, its row can be the pivot row. Good codes will choose the largest number to be the pivot.

Every invertible n by n matrix A leads to PA = LU: P = permutation. The inverse of every permutation matrix P is its transpose $P^T$.
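
A scipy sketch of PA = LU on a made-up matrix whose first pivot is zero, so a row exchange is required. Note scipy's convention returns P with A = P @ L @ U, so the permutation in PA = LU is its transpose:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0., 2., 1.],     # zero in the first pivot position forces a row exchange
              [1., 1., 0.],
              [3., 4., 2.]])

P, L, U = lu(A)                 # scipy: A = P @ L @ U, hence P.T @ A = L @ U
print(np.allclose(P.T @ A, L @ U))          # PA = LU with permutation P.T
print(np.allclose(np.linalg.inv(P), P.T))   # inverse of a permutation = its transpose
```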

Orthogonal Matrices and Subspaces

Orthogonal vectors x and y: $x^Ty = x_1y_1 + \cdots + x_ny_n = 0$

Orthogonal subspaces R and N. Every vector in the space R is orthogonal to every vector in N.

Orthonormal basis: orthogonal basis of unit vectors. Every $v_i^Tv_i = 1$ and $v_i^Tv_j = 0$ for $i \ne j$.

Tall thin matrices Q with orthonormal columns: $Q^TQ = I$. If this Q multiplies any vector x, the length of the vector does not change: $\|Qx\| = \|x\|$.

$(Qx)^T(Qx) = x^TQ^TQx = x^Tx$
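
A numpy check of both facts: a tall thin Q with orthonormal columns (obtained here from a QR factorization of a random matrix, chosen only for illustration) satisfies $Q^TQ = I$ and preserves length:

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))   # tall thin Q (5 by 3), orthonormal columns

x = rng.standard_normal(3)
print(np.allclose(Q.T @ Q, np.eye(3)))                         # Q^T Q = I
print(np.allclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))   # ||Qx|| = ||x||
```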

“Orthogonal matrices” are square with orthonormal columns: $Q^T=Q^{-1}$

Pythagoras Law for right triangles: when $x^Ty = 0$, $\|x-y\|^2 = (x-y)^T(x-y) = x^Tx - y^Tx - x^Ty + y^Ty = \|x\|^2 + \|y\|^2$

The row space of A is orthogonal to the nullspace of A. The column space of A is orthogonal to the nullspace of $A^T$

If $P^2 = P = P^T$, then Pb is the orthogonal projection of b onto the column space of P.
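
A numpy sketch with a projection matrix built as $P = A(A^TA)^{-1}A^T$ for a made-up A with independent columns; it satisfies $P^2 = P = P^T$, and the error b - Pb is orthogonal to the column space:

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])                      # independent columns
P = A @ np.linalg.inv(A.T @ A) @ A.T          # projection onto the column space of A

print(np.allclose(P @ P, P), np.allclose(P, P.T))   # P^2 = P = P^T

b = np.array([1., 3., 2.])
p = P @ b                                     # Pb lies in the column space
e = b - p                                     # error is orthogonal to every column of A
print(np.allclose(A.T @ e, 0))                # True
```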

Multiplying orthogonal matrices produces an orthogonal matrix. $(Q_1Q_2)^T(Q_1Q_2) = I$. Rotation times rotation = rotation. Reflection times reflection = rotation. Rotation times reflection = reflection

Suppose the n by n orthogonal matrix Q has columns $q_1, \dots, q_n$. Every vector v can be written as a combination of these basis vectors. The coefficients in an orthonormal basis are just $c_i = q_i^Tv$, i.e., $v = Qc$.

$Q^Tv=Q^TQc=c$
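
A short numpy check of this: take a random orthogonal Q (from a QR factorization, for illustration), compute the coefficients $c = Q^Tv$, and confirm that $v = Qc$:

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # square orthogonal Q

v = rng.standard_normal(4)
c = Q.T @ v                         # coefficients c_i = q_i^T v
print(np.allclose(v, Q @ c))        # v = c_1 q_1 + ... + c_n q_n = Qc
```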