Ito calculus extends the chain rule to functions of Brownian motion. The extra term — the ½σ²f″ correction — exists because Brownian motion has nonzero quadratic variation. This is the engine behind every option pricing formula.
Brownian motion properties: quadratic variation
For ordinary smooth functions, the sum of squared increments goes to zero as the partition gets finer. Brownian motion is different: its quadratic variation over [0,T] converges to T, not zero. This is why (dB)² = dt instead of 0, and why Ito calculus needs an extra term.
Scheme
; Quadratic variation of Brownian motion
; Sum of (B(t_{i+1}) - B(t_i))^2 -> T as n -> infinity
; Simulate B(t) increments: each ~ N(0, dt) ≈ sqrt(dt) * {+1,-1}
(define (quad-var n seed)
  (define dt (/ 1.0 n))
  (define step (sqrt dt))
  ; Linear congruential generator drives the +1/-1 coin flips
  (define (qv k s acc)
    (if (= k 0) acc
        (let* ((new-s (modulo (+ (* 1103515245 s) 12345) (expt 2 31)))
               (incr (* step (if (= (modulo new-s 2) 0) 1 -1))))
          (qv (- k 1) new-s (+ acc (* incr incr))))))
  (qv n seed 0))
; Each (sqrt(dt))^2 = dt, and n * dt = 1
(display "n=10: QV = ") (display (quad-var 10 42)) (newline)
(display "n=100: QV = ") (display (quad-var 100 42)) (newline)
(display "n=1000: QV = ") (display (quad-var 1000 42)) (newline)
(display "Theory: QV = T = 1.0")
Python
import random
import math

def quadratic_variation(n, T=1.0):
    random.seed(42)
    dt = T / n
    qv = 0.0
    for _ in range(n):
        dB = math.sqrt(dt) * random.choice([1, -1])
        qv += dB ** 2
    return qv

for n in [10, 100, 1000, 10000]:
    print(f"n={n:>5}: QV = {quadratic_variation(n):.6f}")
print("Theory: QV = T = 1.000000")
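With ±1 increments, each squared step is exactly dt, so the sum above equals T identically. A minimal sketch with true Gaussian increments (function name is illustrative) shows the convergence is genuinely probabilistic: the sum fluctuates around T and tightens as n grows.

```python
import random
import math

def quad_var_gauss(n, T=1.0, seed=0):
    """Quadratic variation with genuine N(0, dt) increments."""
    rng = random.Random(seed)
    dt = T / n
    # Each increment is sqrt(dt) * Z with Z ~ N(0, 1)
    return sum((math.sqrt(dt) * rng.gauss(0, 1)) ** 2 for _ in range(n))

for n in [10, 100, 10000]:
    print(f"n={n:>5}: QV = {quad_var_gauss(n):.6f}")
```

The variance of the sum is 2T²/n, so the estimates tighten like 1/√n; smooth functions, by contrast, have squared-increment sums of order 1/n that vanish entirely.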
Ito's lemma — the chain rule for stochastic calculus
If X(t) satisfies dX = μ dt + σ dB, then for any twice-differentiable f(x) with no explicit time dependence, Ito's lemma gives df(X) = f′(X) dX + ½ f″(X) σ² dt. (If f also depends on t, add a ∂f/∂t dt term.) The extra ½σ²f″ dt term is the Ito correction. It exists because (dB)² = dt, not zero.
Scheme
; Ito's lemma example: f(x) = x^2
; If dX = sigma * dB, then:
;   df = 2X dX + (1/2)(2)(sigma^2) dt
;      = 2X sigma dB + sigma^2 dt
; Compare: ordinary calculus gives df = 2x dx (no correction)
; Verify numerically: E[B(t)^2] = t (not 0),
; because the Ito correction sigma^2 * dt accumulates
(define sigma 1.0)
(define n 1000)
(define dt (/ 1.0 n))
; Simulate B(T)^2 and compare to T
(define (simulate-B-squared seed)
  (define (run k s x)
    (if (= k 0) (* x x)
        (let* ((new-s (modulo (+ (* 1103515245 s) 12345) (expt 2 31)))
               (dB (* (sqrt dt) (if (= (modulo new-s 2) 0) 1.0 -1.0))))
          (run (- k 1) new-s (+ x dB)))))
  (run n seed 0.0))
; Average of B(1)^2 over several paths
(define trials (map (lambda (s) (simulate-B-squared (* s 31337)))
                    (list 1 2 3 4 5 6 7 8 9 10)))
(define avg (/ (apply + trials) (length trials)))
(display "E[B(1)^2] ≈ ") (display avg) (newline)
(display "Theory: 1.0 (the Ito correction)")
Python
import random
import math

def simulate_B_squared(n=1000, T=1.0):
    dt = T / n
    x = 0.0
    for _ in range(n):
        dB = math.sqrt(dt) * random.choice([1.0, -1.0])
        x += dB
    return x ** 2

random.seed(42)
trials = [simulate_B_squared() for _ in range(10000)]
avg = sum(trials) / len(trials)
print(f"E[B(1)^2] ≈ {avg:.4f}")
print("Theory: 1.0 (the Ito correction)")
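The correction can also be checked pathwise rather than in expectation. Integrating d(B²) = 2B dB + dt along a single path, the Ito integral Σ 2B·ΔB should fall short of B(T)² by almost exactly T. A sketch with Gaussian increments (function name is illustrative):

```python
import random
import math

def ito_check(n=100000, T=1.0, seed=7):
    """Compare B(T)^2 against the Ito integral of 2B dB along one path."""
    rng = random.Random(seed)
    dt = T / n
    b, integral = 0.0, 0.0
    for _ in range(n):
        dB = math.sqrt(dt) * rng.gauss(0, 1)
        integral += 2 * b * dB   # left-endpoint (Ito) evaluation
        b += dB
    # The gap is exactly the accumulated (dB)^2, i.e. the quadratic variation
    return b**2 - integral

print(f"B(T)^2 - \u222b2B dB = {ito_check():.4f}  (theory: T = 1.0)")
```

The gap telescopes to Σ(ΔB)², so this is the quadratic-variation fact from the previous section wearing its Ito's-lemma hat.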
Geometric Brownian motion
Geometric Brownian motion (GBM) models stock prices: dS = μS dt + σS dB. Applying Ito's lemma to ln(S) gives d(ln S) = (μ - ½σ²) dt + σ dB. So log-returns are normally distributed, and S(t) = S(0) exp((μ - ½σ²)t + σB(t)). The ½σ² drag is the Ito correction in action.
import math

S0, mu, sigma, T = 100, 0.08, 0.2, 1.0

def gbm_exact(B_T):
    return S0 * math.exp((mu - 0.5*sigma**2)*T + sigma*B_T)

for b in [-0.5, 0.0, 0.5, 1.0]:
    print(f"B(1) = {b:+.1f}: S = {gbm_exact(b):.2f}")
print(f"E[S(1)] = {S0 * math.exp(mu * T):.2f}")
print(f"Ito drag: mu - 0.5*sigma^2 = {mu - 0.5*sigma**2:.4f}")
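The exact solution makes a Monte Carlo check easy: draw B(1) ~ N(0, T), push it through the formula, and the sample mean should land near S0·e^(μT) even though the exponent carries the −½σ² drag. A sketch (function name is illustrative):

```python
import random
import math

S0, mu, sigma, T = 100, 0.08, 0.2, 1.0

def sample_S(rng):
    """One draw of S(T) from the exact GBM solution."""
    B_T = rng.gauss(0, math.sqrt(T))
    return S0 * math.exp((mu - 0.5 * sigma**2) * T + sigma * B_T)

rng = random.Random(42)
n = 100_000
avg = sum(sample_S(rng) for _ in range(n)) / n
print(f"Monte Carlo E[S(1)]: {avg:.2f}")
print(f"Theory S0*e^(mu*T): {S0 * math.exp(mu * T):.2f}")
# The -0.5*sigma^2 drag is exactly offset in expectation by
# E[exp(sigma*B_T)] = exp(0.5*sigma^2*T), so the two means agree.
```

This is the lognormal mean identity in action: the median path grows at μ − ½σ², but the mean still grows at μ.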
Stochastic differential equations
An SDE dX = a(X,t) dt + b(X,t) dB specifies how a process evolves under both deterministic drift a and random diffusion b. The Euler-Maruyama method discretizes: X(t+dt) ≈ X(t) + a·dt + b·√dt·Z where Z ~ N(0,1). This is the stochastic analog of Euler's method for ODEs.
Scheme
; Euler-Maruyama for the Ornstein-Uhlenbeck process
; dX = theta*(mu - X) dt + sigma dB
; Mean-reverting: pulls toward mu
(define theta 2.0) ; speed of reversion
(define mu-ou 1.0) ; long-run mean
(define sigma-ou 0.3)
(define dt 0.01)
(define steps 100)
(define (euler-maruyama seed)
  (define (step k s x)
    (if (= k 0) x
        (let* ((new-s (modulo (+ (* 1103515245 s) 12345) (expt 2 31)))
               (z (if (= (modulo new-s 2) 0) 1.0 -1.0))
               (drift (* theta (- mu-ou x) dt))
               (diffusion (* sigma-ou (sqrt dt) z))
               (new-x (+ x drift diffusion)))
          (step (- k 1) new-s new-x))))
  (step steps seed 0.0)) ; start at x=0
; Should converge toward mu = 1.0
(display "Path 1 final: ") (display (euler-maruyama 42)) (newline)
(display "Path 2 final: ") (display (euler-maruyama 137)) (newline)
(display "Path 3 final: ") (display (euler-maruyama 999)) (newline)
(display "Long-run mean: ") (display mu-ou)
Python
import random
import math

# Ornstein-Uhlenbeck: dX = theta*(mu - X)dt + sigma*dB
theta, mu_ou, sigma_ou = 2.0, 1.0, 0.3
dt, steps = 0.01, 200

def euler_maruyama(seed, x0=0.0):
    random.seed(seed)
    x = x0
    for _ in range(steps):
        z = random.gauss(0, 1)
        x += theta * (mu_ou - x) * dt + sigma_ou * math.sqrt(dt) * z
    return x

finals = [euler_maruyama(i) for i in range(1000)]
avg = sum(finals) / len(finals)
print(f"Average final value: {avg:.4f}")
print(f"Long-run mean: {mu_ou}")
print(f"Std of finals: {(sum((x-avg)**2 for x in finals)/len(finals))**0.5:.4f}")
print(f"Theory std: {sigma_ou / math.sqrt(2*theta):.4f}")
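The OU process also has a closed-form transient mean, E[X(t)] = μ + (x₀ − μ)e^(−θt), which gives a sharper check of Euler-Maruyama than the long-run statistics alone. A sketch comparing the simulated mean to that formula at a few intermediate times (function name is illustrative):

```python
import random
import math

theta, mu_ou, sigma_ou = 2.0, 1.0, 0.3
dt = 0.01

def ou_path_mean(t, n_paths=2000, x0=0.0):
    """Average X(t) over many Euler-Maruyama paths."""
    n = int(t / dt)
    total = 0.0
    for i in range(n_paths):
        rng = random.Random(i)
        x = x0
        for _ in range(n):
            x += (theta * (mu_ou - x) * dt
                  + sigma_ou * math.sqrt(dt) * rng.gauss(0, 1))
        total += x
    return total / n_paths

for t in [0.25, 0.5, 1.0]:
    # Exact transient mean: mu + (x0 - mu) * exp(-theta * t)
    exact = mu_ou + (0.0 - mu_ou) * math.exp(-theta * t)
    print(f"t={t}: simulated mean = {ou_path_mean(t):.4f}, exact = {exact:.4f}")
```

The simulated means track the exponential relaxation toward μ, up to Monte Carlo noise and the O(dt) discretization bias of the scheme.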
Neighbors
∫ Calculus Ch.1 — ordinary chain rule is the starting point; Ito adds a correction
📉 Finance II Ch.1 — stochastic processes that Ito calculus operates on