# Recap: Linear Algebra

# Chapter 4 Determinants

## 1. Uses of Determinants

• Invertibility

If $\det A = 0$, then $A$ is singular. If $\det A \ne 0$, then $A$ is invertible.

• Volume

The determinant of $A$ equals the volume of a box in $n$-dimensional space. The edges of the box come from the rows of $A$. The columns of $A$ would give an entirely different box with the same volume.

• Pivots

The determinant gives a formula for each pivot. From the formula $\det A = \pm(\text{product of the pivots})$, it follows that regardless of the order of elimination, the product of the pivots remains the same apart from the sign.

• Dependence

The determinant measures the dependence of $A^{-1}b$ on each element of $b$.
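The invertibility and pivot claims are easy to check numerically. Here is a minimal NumPy sketch; the matrix $A$ and its elimination steps are chosen purely for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Nonzero determinant => A is invertible.
det_A = np.linalg.det(A)
assert abs(det_A) > 1e-12

# determinant = ± (product of the pivots): eliminate by hand.
# Pivot 1 is 2; subtracting (1/2)*row1 from row2 leaves pivot 2 = 3 - 1/2.
pivots = [2.0, 3.0 - 0.5 * 1.0]
assert np.isclose(det_A, pivots[0] * pivots[1])
```

Here no row exchanges occur, so the sign is $+$ and the determinant equals the pivot product exactly.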

## 2. Properties of Determinants

• The determinant of the identity matrix is 1.
• The determinant changes sign when two rows are exchanged.

The determinant of every permutation matrix is $\det P = \pm 1$.

• The determinant depends linearly on the first row.
$$\begin{vmatrix} a+a' & b+b' \\ c & d \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} a' & b' \\ c & d \end{vmatrix}$$
$$\begin{vmatrix} ta & tb \\ c & d \end{vmatrix} = t\begin{vmatrix} a & b \\ c & d \end{vmatrix}$$

• If two rows of $A$ are equal, then $\det A = 0$.
• Subtracting a multiple of one row from another row leaves the same determinant.
$$\begin{vmatrix} a-lc & b-ld \\ c & d \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix}$$

The usual elimination steps do not affect the determinant.

• If $A$ has a row of zeros, then $\det A = 0$.
• If $A$ is triangular, then $\det A$ is the product of the diagonal entries.
• The determinant of $AB$ is the product of $\det A$ times $\det B$.
$$\det(AB) = \det A \det B$$
$$\det A^{-1} = \frac{1}{\det A}$$
• The transpose of $A$ has the same determinant as $A$ itself.
$$\det A = \det A^{T}$$
• $LDU$ factorization.
$$PA = LDU$$
$$\det P \det A = \det L \det D \det U \quad\Longrightarrow\quad \det A = \pm \det D,$$

since $\det P = \pm 1$ and the triangular factors $L$ and $U$ have unit diagonals, so $\det L = \det U = 1$.
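The product, inverse, and transpose rules can be spot-checked with NumPy; the random $4\times 4$ matrices below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# det(A^T) = det(A)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# det(A^{-1}) = 1 / det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
```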

## 3. Applications of Determinants

• Computation of $A^{-1}$.
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{\det A}\begin{bmatrix} C_{11} & C_{21} \\ C_{12} & C_{22} \end{bmatrix} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$

Note that the cofactor matrix is transposed: entry $(i,j)$ of $A^{-1}$ is $C_{ji}/\det A$.
• Cramer's Rule.
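Both applications can be sketched in NumPy; the matrix $A$ and right-hand side $b$ below are illustrative:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
b = np.array([5.0, 6.0])

# 2x2 inverse from the cofactor formula: A^{-1} = (1/det A) [[d, -b], [-c, a]]
a_, b_, c_, d_ = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
A_inv = np.array([[d_, -b_], [-c_, a_]]) / (a_ * d_ - b_ * c_)
assert np.allclose(A_inv, np.linalg.inv(A))

# Cramer's rule: x_j = det(A with column j replaced by b) / det(A)
x = np.empty(2)
for j in range(2):
    Aj = A.copy()
    Aj[:, j] = b          # replace column j by b
    x[j] = np.linalg.det(Aj) / np.linalg.det(A)
assert np.allclose(A @ x, b)
```

Cramer's rule is elegant but expensive ($n+1$ determinants); elimination remains the practical way to solve $Ax=b$.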

# Chapter 5 Eigenvalues and Eigenvectors

## 1. Introduction

The eigenvalues are the most important feature of practically any dynamic system. Until now we have focused on the problem $Ax=b$; now we consider the new problem $Ax=\lambda x$. This will still be solved by simplifying a matrix, but the basic step is no longer to subtract a multiple of one row from another: elimination changes the eigenvalues.

### (1) Solution of $Ax=\lambda x$

This is a nonlinear equation, since both $x$ and $\lambda$ are unknown. We first discover $\lambda$:

$$(A - \lambda I)x = 0.$$

The vector $x$ is in the nullspace of $A-\lambda I$.

The number $\lambda$ is chosen so that $A-\lambda I$ has a nullspace.

Of course, every matrix has a nullspace, but we want a nonzero eigenvector $x$. The vector $x=0$ satisfies $Ax=\lambda x$, but it is useless in solving differential equations. We are interested only in those particular values $\lambda$ for which there is a nonzero eigenvector $x$. That is, the nullspace of $A-\lambda I$ must contain vectors other than 0. In short, $A-\lambda I$ must be singular.

$$\det(A - \lambda I) = 0$$

Each $\lambda$ is associated with eigenvectors $x$.

### (2) Checks on Eigenvalues

The sum of eigenvalues equals the sum of the diagonal entries:

$$\lambda_1 + \cdots + \lambda_n = a_{11} + \cdots + a_{nn} = \operatorname{trace}(A)$$

Furthermore, the product of the eigenvalues equals the determinant of $A$.

As the examples of eigenvalues show, the diagonal entries and the eigenvalues are the same only for triangular matrices. Normally, they are completely different.
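Both checks can be verified numerically, for instance with NumPy (the matrix is illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = np.linalg.eigvals(A)

# Sum of eigenvalues = trace(A); product of eigenvalues = det(A).
assert np.isclose(lam.sum(), np.trace(A))
assert np.isclose(lam.prod(), np.linalg.det(A))
```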

### (3) Examples

• Everything is clear when $A$ is a diagonal matrix.

$A$ acts like a multiple of the identity on each eigenvector.

The action of $A$ is determined by its eigenvectors and eigenvalues.

• The eigenvalues of a projection matrix are 1 or 0.

A zero eigenvalue signals that $A$ is singular (not invertible); its determinant is zero. Invertible matrices have all $\lambda \ne 0$.

• The eigenvalues are on the main diagonal when $A$ is triangular:
$$A = \begin{bmatrix} 1 & 4 & 5 \\ 0 & \frac{3}{4} & 6 \\ 0 & 0 & \frac{1}{2} \end{bmatrix}$$
$$\det(A - \lambda I) = \begin{vmatrix} 1-\lambda & 4 & 5 \\ 0 & \frac{3}{4}-\lambda & 6 \\ 0 & 0 & \frac{1}{2}-\lambda \end{vmatrix} = (1-\lambda)\left(\tfrac{3}{4}-\lambda\right)\left(\tfrac{1}{2}-\lambda\right)$$

This follows from the triangular-matrix property of determinants listed above.

The eigenvalues are $\lambda=1$, $\lambda=\frac{3}{4}$, and $\lambda=\frac{1}{2}$, which are the diagonal entries of $A$.
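A quick NumPy check of the same triangular matrix:

```python
import numpy as np

A = np.array([[1.0, 4.0, 5.0],
              [0.0, 0.75, 6.0],
              [0.0, 0.0, 0.5]])

# For a triangular matrix, the eigenvalues are the diagonal entries.
lam = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(lam, [0.5, 0.75, 1.0])
```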

## 2. Diagonalization of a Matrix

### (1) Eigenvectors Diagonalize a Matrix

Suppose the $n\times n$ matrix $A$ has $n$ linearly independent eigenvectors. If these vectors are the columns of a matrix $S$, then $S^{-1}AS$ is a diagonal matrix $\Lambda$. The eigenvalues of $A$ are on the diagonal of $\Lambda$:

$$S^{-1}AS = \Lambda = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}$$

We call $S$ the "eigenvector matrix" and $\Lambda$ the "eigenvalue matrix".

*Proof.*

$$AS = A\begin{bmatrix} | & | & & | \\ x_1 & x_2 & \cdots & x_n \\ | & | & & | \end{bmatrix} = \begin{bmatrix} | & | & & | \\ \lambda_1 x_1 & \lambda_2 x_2 & \cdots & \lambda_n x_n \\ | & | & & | \end{bmatrix}$$

We split the matrix $AS$ into the product $S\Lambda$:

$$\begin{bmatrix} | & | & & | \\ \lambda_1 x_1 & \lambda_2 x_2 & \cdots & \lambda_n x_n \\ | & | & & | \end{bmatrix} = \begin{bmatrix} | & | & & | \\ x_1 & x_2 & \cdots & x_n \\ | & | & & | \end{bmatrix} \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}$$

Therefore $AS = S\Lambda$, and multiplying on the left by $S^{-1}$ gives $S^{-1}AS = \Lambda$. $S$ is invertible because its columns are assumed to be independent.
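The factorization can be verified with NumPy, whose `numpy.linalg.eig` returns the eigenvector matrix by columns (the matrix below is illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, S = np.linalg.eig(A)   # columns of S are the eigenvectors x_1, x_2

# S^{-1} A S = Lambda, the diagonal matrix of eigenvalues
Lam = np.linalg.inv(S) @ A @ S
assert np.allclose(Lam, np.diag(lam))

# Equivalently, A S = S Lambda
assert np.allclose(A @ S, S @ np.diag(lam))
```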

### (2) Remarks

• If a matrix has no repeated eigenvalues, then its eigenvectors are automatically independent.

Any matrix with distinct eigenvalues can be diagonalized.

• The diagonalizing matrix $S$ is not unique.
• The order of the eigenvectors in $S$ and the eigenvalues in $\Lambda$ is automatically the same.
• Not all matrices possess $n$ linearly independent eigenvectors.

Not all matrices are diagonalizable.

Diagonalizability of $A$ depends on having enough eigenvectors ($n$ independent eigenvectors).

Invertibility of $A$ depends on having nonzero eigenvalues (no zero eigenvalues).

• There is no connection between diagonalizability and invertibility.
• Diagonalization can fail only if there are repeated eigenvalues.

But it does not always fail: $A=I$ has repeated eigenvalues $1, 1, \ldots, 1$, but it is already diagonal.
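A standard example where diagonalization does fail is the shear matrix below; its repeated eigenvalue comes with only one independent eigenvector (the numerical check of dependence via `det` is illustrative):

```python
import numpy as np

# A = [[1, 1], [0, 1]] has the repeated eigenvalue 1 but only one
# independent eigenvector, so it cannot be diagonalized.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam, S = np.linalg.eig(A)
assert np.allclose(lam, [1.0, 1.0])

# The two computed eigenvector columns are (numerically) dependent,
# so S is singular and S^{-1}AS does not exist.
assert abs(np.linalg.det(S)) < 1e-8
```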

## 3. Powers and Products

• $A^k$ has the same eigenvectors as $A$, with eigenvalues $\lambda_1^k, \lambda_2^k, \ldots, \lambda_n^k$.
• If $A$ is invertible, the eigenvalues of $A^{-1}$ are $1/\lambda_i$.
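Both facts follow from applying $A$ repeatedly (or its inverse) to an eigenvector, and can be spot-checked with NumPy on an illustrative matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = np.linalg.eigvals(A)

# A^2 has eigenvalues lambda_i^2 ...
lam_sq = np.linalg.eigvals(A @ A)
assert np.allclose(np.sort(lam_sq), np.sort(lam**2))

# ... and A^{-1} has eigenvalues 1/lambda_i.
lam_inv = np.linalg.eigvals(np.linalg.inv(A))
assert np.allclose(np.sort(lam_inv), np.sort(1.0 / lam))
```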