Similarity
Jim Hefferon · GFDL + CC BY-SA 2.5 · Linear Algebra
An eigenvector of a linear map is a nonzero vector whose direction is unchanged by the map: Av = lambda v. The scalar lambda is the eigenvalue. Finding eigenvalues reduces to solving det(A - lambda I) = 0. When a matrix has enough independent eigenvectors, it can be diagonalized: the map becomes just scaling along each eigenvector axis.
Eigenvalues and eigenvectors
A nonzero vector v is an eigenvector of A with eigenvalue lambda if Av = lambda v. The vector gets scaled but not rotated. Eigenvalues are the roots of the characteristic polynomial det(A - lambda I) = 0. Each eigenvalue has an eigenspace: the set of all eigenvectors for that eigenvalue, plus zero.
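The eigenvalue equation can be checked numerically. A minimal sketch with NumPy, using an illustrative upper-triangular matrix (not taken from the text):

```python
import numpy as np

# Example matrix (assumed for illustration); triangular, so its
# eigenvalues are just the diagonal entries 3 and 2.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# np.linalg.eig returns the eigenvalues and the unit eigenvectors
# (as the columns of the second array).
eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(len(eigenvalues)):
    v = eigenvectors[:, i]
    lam = eigenvalues[i]
    # A v and lambda v should agree up to floating-point error.
    assert np.allclose(A @ v, lam * v)
```

Each column of the eigenvector matrix satisfies Av = lambda v for its paired eigenvalue, which is exactly the defining equation above.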
Characteristic polynomial
The characteristic polynomial of an n-by-n matrix A is det(A - lambda I), a polynomial of degree n in lambda. Its roots are the eigenvalues. For a 2-by-2 matrix, this is lambda^2 - trace(A)*lambda + det(A) = 0. The trace equals the sum of eigenvalues, and the determinant equals their product.
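The 2-by-2 identities above (trace = sum of eigenvalues, determinant = product) are easy to verify numerically. A sketch with an assumed example matrix whose characteristic polynomial is lambda^2 - 7*lambda + 10:

```python
import numpy as np

# Example matrix (assumed); eigenvalues are 5 and 2.
A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

eigenvalues = np.linalg.eigvals(A)

# trace = sum of eigenvalues, det = product of eigenvalues
assert np.isclose(np.trace(A), eigenvalues.sum())
assert np.isclose(np.linalg.det(A), eigenvalues.prod())

# Each eigenvalue is a root of lambda^2 - trace*lambda + det.
tr, det = np.trace(A), np.linalg.det(A)
for lam in eigenvalues:
    assert np.isclose(lam**2 - tr * lam + det, 0.0)
```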
Diagonalization
If an n-by-n matrix has n linearly independent eigenvectors, it is diagonalizable: A = P D P^(-1), where D is the diagonal matrix of eigenvalues and P is the matrix whose columns are eigenvectors. In the eigenvector basis, the map is just scaling along each axis. Not all matrices are diagonalizable: diagonalization fails exactly when some eigenvalue's geometric multiplicity (the dimension of its eigenspace) is less than its algebraic multiplicity (its multiplicity as a root of the characteristic polynomial). A matrix with n distinct eigenvalues always diagonalizes; repeated eigenvalues are where failure can occur.
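The decomposition A = P D P^(-1) can be built directly from `np.linalg.eig`. A sketch, using an assumed example matrix with two distinct eigenvalues (so it is guaranteed diagonalizable):

```python
import numpy as np

# Example matrix (assumed); trace 7, det 10, so eigenvalues are 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(eigenvalues)

# Reconstruct A from its eigendecomposition: A = P D P^(-1).
assert np.allclose(P @ D @ np.linalg.inv(P), A)

# In the eigenvector basis the map is pure scaling, so powers are
# trivial: A^k = P D^k P^(-1), where D^k just raises the diagonal.
A_cubed = P @ np.diag(eigenvalues**3) @ np.linalg.inv(P)
assert np.allclose(A_cubed, np.linalg.matrix_power(A, 3))
```

The matrix-power identity is the practical payoff of diagonalization: repeated application of the map reduces to raising scalars to a power.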
When diagonalization fails
A matrix with a repeated eigenvalue may not have enough independent eigenvectors to diagonalize. The matrix ((2 1)(0 2)) has eigenvalue 2 with algebraic multiplicity 2 but geometric multiplicity 1 (only one independent eigenvector). Hefferon covers Jordan normal form as the next-best thing: block-diagonal with 1s on the superdiagonal of deficient blocks.
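The gap between the two multiplicities can be computed directly: the geometric multiplicity of an eigenvalue lambda is the dimension of the null space of A - lambda*I, which is n minus the rank of that matrix. A sketch using the defective matrix from the text:

```python
import numpy as np

# The matrix ((2 1)(0 2)) from the text: eigenvalue 2 repeated twice.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
assert np.allclose(eigenvalues, [2.0, 2.0])  # algebraic multiplicity 2

# Geometric multiplicity = dim null(A - 2I) = n - rank(A - 2I).
rank = np.linalg.matrix_rank(A - 2.0 * np.eye(2))
geometric_multiplicity = 2 - rank
assert geometric_multiplicity == 1  # only one independent eigenvector
```

Since one independent eigenvector cannot span the plane, no basis of eigenvectors exists and A is not diagonalizable. For the symbolic Jordan form, SymPy's `Matrix.jordan_form()` returns the change-of-basis matrix and the Jordan blocks.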
Notation reference
| Symbol | Scheme | Python | Meaning |
|---|---|---|---|
| Av = lambda v | (mat-vec A v) | A @ v == lam * v | Eigenvalue equation |
| det(A - lambda I) = 0 | (char-poly ...) | np.linalg.eigvals | Characteristic polynomial |
| A = PDP^(-1) | change of basis | P @ D @ inv(P) | Diagonalization |
| tr(A) | sum of diagonal | np.trace(A) | Sum of eigenvalues |
| Jordan form | block diagonal | sympy Matrix.jordan_form() | Best form when not diagonalizable |
Neighbors
Adjacent chapters
- Ch.4 Determinants — eigenvalues come from det(A - lambda I) = 0
- ML Ch.6 — PCA is eigendecomposition of the covariance matrix
- Control Ch.7 — eigenvalues determine stability in state space
- Geometry Ch.3 — linear transformations as geometric operations
Cross-references
- Sato 2023 — enriched categories use eigenvalue-like structure in their hom-objects
- Milewski Ch.1 — similar matrices represent the same morphism in different bases
Translation notes
Hefferon's final chapter also covers complex eigenvalues, the Cayley-Hamilton theorem, and Jordan normal form in detail. This page covers the core arc from eigenvalues through diagonalization. For the full treatment of similarity and canonical forms, see the original text.