
Dynamic Programming

Nievergelt & Hinrichs · Ch. 9 · Algorithms and Data Structures

Dynamic programming trades space for time. When a problem has optimal substructure (the best solution contains best sub-solutions) and overlapping subproblems (the same sub-solutions recur), store them in a table instead of recomputing.

Edit-distance table for CAR → CAT (distance = 1: replace R with T). Rows are prefixes of CAR, columns prefixes of CAT; diagonal = match/replace, horizontal = insert, vertical = delete.

       ε  C  A  T
    ε  0  1  2  3
    C  1  0  1  2
    A  2  1  0  1
    R  3  2  1  1

Fibonacci: recursion vs DP

Naive recursion computes fib(n) in exponential time, O(2^n), because it re-solves the same subproblems over and over. Memoization stores each value once: O(n) time, O(n) space. Bottom-up iteration keeps only the last two values: O(n) time, O(1) space.

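A minimal sketch of the three approaches in Python (function names are illustrative):

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential time: recomputes the same subproblems repeatedly."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized: each fib(k) is computed once -> O(n) time, O(n) space."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_bottom_up(n):
    """Iterative: keep only the last two values -> O(n) time, O(1) space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

All three agree on small inputs; only the last two remain practical as n grows.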

Edit distance

Given two strings of lengths m and n, find the minimum number of insertions, deletions, and replacements to transform one into the other. The DP table has size (m+1) × (n+1), with an extra row and column for the empty prefix. Each cell depends on three neighbors: the cell above (delete), the cell to the left (insert), and the diagonal (match or replace).

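A straightforward table-filling sketch in Python; on the worked example above, CAR → CAT gives 1:

```python
def edit_distance(a, b):
    """Minimum insertions, deletions, and replacements turning a into b."""
    m, n = len(a), len(b)
    # (m+1) x (n+1) table; row 0 and column 0 are distances from the empty prefix.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                               # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                               # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # vertical neighbor: delete
                          d[i][j - 1] + 1,        # horizontal neighbor: insert
                          d[i - 1][j - 1] + cost) # diagonal neighbor: match/replace
    return d[m][n]
```

Filling the table row by row means each cell's three neighbors are already computed when it is needed.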

Cross-references

  • ⚙ Ch.2 — recursion: DP is recursion plus a table
  • ✎ Proofs Ch.4 — induction proves the correctness of DP recurrences
  • 🤖 ML Ch.3 — the Bellman equation in reinforcement learning is dynamic programming
  • 🔢 Discrete Math Ch.2 — recurrence relations are the mathematical structure DP solves