Optimal Control
Wikipedia · Linear-quadratic regulator · CC BY-SA 4.0
Optimal control finds the input signal that minimizes a cost function. The LQR (linear-quadratic regulator) minimizes a weighted sum of state deviation and control effort. The solution comes from the Riccati equation. The tradeoff: penalizing state error more gives a faster response; penalizing effort more allows smaller, cheaper actuators.
The cost function
Define a cost J that sums two things over time: how far the state is from zero (weighted by matrix Q) and how much control effort you use (weighted by matrix R). J = integral of (x'Qx + u'Ru) dt, where Q is positive semidefinite and R is positive definite so the cost is well defined. Larger Q means "care more about tracking." Larger R means "care more about saving energy." The optimal control law is a linear gain: u = -Kx, where K is computed from the Riccati equation.
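To make the cost concrete, here is a minimal sketch of the stage cost x'Qx + u'Ru for a hypothetical 2-state, 1-input system; the particular Q and R values are illustrative assumptions, not from the source.

```python
import numpy as np

# Hypothetical weights: penalize the first state (e.g. position error)
# 10x more than the second (e.g. velocity); control effort is cheap.
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

def stage_cost(x, u):
    """Instantaneous cost x'Qx + u'Ru; J integrates this over time."""
    return float(x @ Q @ x + u @ R @ u)

x = np.array([1.0, 0.0])  # unit error in the first state
u = np.array([2.0])       # applied input
print(stage_cost(x, u))   # 10*1^2 + 0.1*2^2 = 10.4
```

Scaling Q up relative to R makes trajectories with lingering state error expensive, pushing the optimal controller toward aggressive correction; scaling R up does the reverse.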
LQR: the linear case
For a linear system x' = Ax + Bu with quadratic cost, the optimal gain is K = R^(-1) B' P, where P solves the algebraic Riccati equation: A'P + PA - PBR^(-1)B'P + Q = 0. The result is a constant-gain state feedback law. No online optimization is needed: compute K once offline, then apply u = -Kx forever.
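The "compute K once" recipe can be sketched in a few lines using SciPy's continuous-time Riccati solver. The double-integrator A, B and the weights below are assumed example values, not from the source.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant: a double integrator (x1 = position, x2 = velocity).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])  # state-deviation weight
R = np.array([[0.1]])     # control-effort weight

# Solve the algebraic Riccati equation A'P + PA - P B R^-1 B' P + Q = 0.
P = solve_continuous_are(A, B, Q, R)

# Constant feedback gain K = R^-1 B' P; the control law is u = -K x.
K = np.linalg.solve(R, B.T @ P)

# The closed loop x' = (A - BK) x should be stable:
# all eigenvalues of A - BK lie in the left half plane.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(K)
print(closed_loop_eigs.real)
```

Once K is computed, the controller at runtime is a single matrix-vector multiply per step, which is why LQR is practical even on small embedded hardware.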
Neighbors
- ∫ Calculus Ch. 12 — Optimization — LQR is optimization over function space