
Optimal Control

Wikipedia · Linear-quadratic regulator · CC BY-SA 4.0

Optimal control finds the input that minimizes a cost function. The LQR (linear quadratic regulator) minimizes a weighted sum of state deviation and control effort. The solution comes from the Riccati equation. The tradeoff: penalize state error more for faster response, penalize effort more for smaller actuators.

The cost function

Define a cost J that sums two things over time: how far the state is from zero (weighted by matrix Q) and how much control effort you use (weighted by matrix R). J = integral of (x'Qx + u'Ru) dt. Larger Q means "care more about tracking." Larger R means "care more about saving energy." The optimal control law is a linear gain: u = -Kx, where K is computed from the Riccati equation.
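The weighting tradeoff can be seen numerically. Below is a minimal sketch for a scalar system x' = a·x + b·u under a fixed feedback u = -k·x, accumulating J by forward-Euler simulation. All values (a, b, q, r, k) are illustrative, not from the source.

```python
# Sketch: evaluate J = integral (q*x^2 + r*u^2) dt for a scalar system
# x' = a*x + b*u under fixed feedback u = -k*x, via forward Euler.

def cost(a, b, q, r, k, x0=1.0, dt=1e-3, T=20.0):
    x, J = x0, 0.0
    for _ in range(int(T / dt)):
        u = -k * x
        J += (q * x * x + r * u * u) * dt   # accumulate x'Qx + u'Ru
        x += (a * x + b * u) * dt           # Euler step of x' = a*x + b*u
    return J

# With q = r = 1, a moderate gain near the LQR optimum beats an
# over-aggressive one: extra control effort outweighs faster decay.
J_moderate = cost(a=-1.0, b=1.0, q=1.0, r=1.0, k=0.5)
J_aggressive = cost(a=-1.0, b=1.0, q=1.0, r=1.0, k=2.0)
```

Raising q shifts the optimum toward higher gains (faster response); raising r shifts it toward lower gains (less actuator effort).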

[Figure: state-space trajectories (x1 vs x2) from an initial state under optimal, aggressive (high Q), and gentle (high R) control, with contours of constant J]

LQR: the linear case

For a linear system x' = Ax + Bu with quadratic cost, the optimal gain is K = R^(-1) B' P, where P solves the algebraic Riccati equation: A'P + PA - PBR^(-1)B'P + Q = 0. The result is a constant-gain state feedback law. No online optimization needed: compute K once, apply u = -Kx forever.
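In the scalar case the Riccati equation reduces to a quadratic that can be solved by hand, which makes the recipe easy to check. A minimal sketch with illustrative values (the helper name `lqr_scalar` and the numbers are assumptions, not from the source):

```python
# Sketch: scalar LQR. For x' = a*x + b*u, the algebraic Riccati equation
#   A'P + PA - P B R^-1 B' P + Q = 0
# becomes  2*a*p - (b*b/r)*p**2 + q = 0.  Take the positive root p,
# then the optimal gain is k = b*p/r and the law is u = -k*x.
import math

def lqr_scalar(a, b, q, r):
    c = b * b / r
    # Positive root of c*p^2 - 2*a*p - q = 0 (same equation, rearranged).
    p = (a + math.sqrt(a * a + c * q)) / c
    return b * p / r  # optimal feedback gain k

k = lqr_scalar(a=-1.0, b=1.0, q=1.0, r=1.0)
# Apply u = -k*x as constant-gain state feedback; no online optimization.
```

For matrix-valued A, B, Q, R one would instead solve the Riccati equation numerically (e.g. with a standard continuous-time ARE solver), but the structure of the answer, a constant gain K, is the same.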
