Optimization
Active Calculus · CC BY-SA · activecalculus.org
Find the best point. Critical points are where the gradient vanishes. The second derivative test classifies them as maxima, minima, or saddle points. Lagrange multipliers handle optimization with constraints: find the best point on a surface, not just in open space.
Critical points in several variables
A critical point of f(x, y) is a point where both partial derivatives are zero, grad(f) = (0, 0), or where a partial derivative fails to exist. These are the only candidates for local maxima and minima in the interior of the domain.
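A minimal sketch of finding critical points symbolically, assuming sympy is available; the function f = x^3 − 3x + y^2 is a hypothetical example chosen so the system solves cleanly.

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**3 - 3*x + y**2  # hypothetical example function

# Critical point condition: both partials vanish simultaneously.
fx, fy = sp.diff(f, x), sp.diff(f, y)
critical_points = sp.solve([fx, fy], [x, y], dict=True)
print(critical_points)  # two candidates: (-1, 0) and (1, 0)
```

Here f_x = 3x^2 − 3 and f_y = 2y, so the only candidates are (±1, 0); the test in the next section sorts out which kind each one is.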
Second derivative test
At a critical point, compute the Hessian determinant D = f_xx*f_yy − (f_xy)^2. If D > 0 and f_xx > 0, it is a local minimum. If D > 0 and f_xx < 0, a local maximum. If D < 0, a saddle point. If D = 0, the test is inconclusive.
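The test can be sketched as a small classifier, again assuming sympy and reusing the hypothetical f = x^3 − 3x + y^2 from above.

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**3 - 3*x + y**2  # same hypothetical example function

# Second-order partials and the Hessian determinant D.
fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(f, x, y)
D = sp.simplify(fxx*fyy - fxy**2)

def classify(px, py):
    """Apply the second derivative test at the critical point (px, py)."""
    d = D.subs({x: px, y: py})
    if d > 0:
        return "min" if fxx.subs({x: px, y: py}) > 0 else "max"
    if d < 0:
        return "saddle"
    return "inconclusive"

print(classify(1, 0))   # D = 12 > 0, f_xx = 6 > 0: local minimum
print(classify(-1, 0))  # D = -12 < 0: saddle point
```

With f_xx = 6x, f_yy = 2, f_xy = 0 we get D = 12x, so the sign of x alone decides: (1, 0) is a local minimum and (−1, 0) is a saddle.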
Lagrange multipliers
To optimize f(x, y) subject to a constraint g(x, y) = 0, find points where grad(f) = lambda * grad(g). At the optimum, the gradient of the objective is parallel to the gradient of the constraint. Lambda is the Lagrange multiplier: it gives the rate at which the optimal value changes as the constraint is relaxed, i.e. the constraint's shadow price.
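The Lagrange condition plus the constraint form a solvable system. A minimal sketch, assuming sympy, with a hypothetical objective f = xy and constraint x + y = 10:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lambda")
f = x*y          # hypothetical objective
g = x + y - 10   # constraint g(x, y) = 0

# Lagrange condition grad(f) = lambda * grad(g), together with g = 0.
eqs = [sp.diff(f, x) - lam*sp.diff(g, x),
       sp.diff(f, y) - lam*sp.diff(g, y),
       g]
sols = sp.solve(eqs, [x, y, lam], dict=True)
print(sols)  # x = y = 5, lambda = 5, so f = 25 at the optimum
```

The multiplier lambda = 5 illustrates the shadow-price reading: relaxing the constraint to x + y = 10 + eps raises the maximum of f by about 5*eps.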
Notation reference
| Symbol | Meaning |
|---|---|
| ∇f = 0 | Critical point condition |
| D = f_xx f_yy − f_xy² | Hessian determinant |
| ∇f = λ∇g | Lagrange condition |
| λ | Lagrange multiplier (shadow price of constraint) |
Neighbors
Calculus sequence
- Hedges 2018 — selection functions are the categorical structure behind optimization
- 📐 Hefferon Ch.4 Determinants — the Jacobian determinant measures how gradients scale under change of variables
- 🤖 ML Ch.3 — optimization via gradient descent, the ML workhorse
- 📐 Linear Algebra Ch.5 — eigenvalues determine whether a critical point is a minimum
- 💰 Economics Ch.6 — marginal utility maximization uses the same Lagrangian tools
Foundations (Wikipedia)