A recursive procedure can generate an iterative process. The shape of the process — stack growth vs constant space — matters more than whether the procedure calls itself.
Linear recursion — factorial
The naive factorial builds a chain of deferred multiplications. Each call adds a frame to the stack. Space grows linearly with n. This is a linear recursive process.
Scheme
; Linear recursive process.
; Each call defers a multiplication: n * (factorial (- n 1)).
; Stack grows to depth n before contracting.
(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))
(display "6! = ") (display (factorial 6)) (newline)
(display "10! = ") (display (factorial 10)) (newline)
(display "20! = ") (display (factorial 20))
Python
# Linear recursive factorial
def factorial(n):
    if n == 1:
        return 1
    return n * factorial(n - 1)
print(f"6! = {factorial(6)}")
print(f"10! = {factorial(10)}")
print(f"20! = {factorial(20)}")
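The linear-space claim can be checked directly. This is an instrumentation sketch, not part of the algorithm: the `max_depth` counter and the extra `depth` parameter are additions of mine that record the deepest simultaneous call, which should equal n.

```python
# Instrumented linear-recursive factorial (depth tracking is an addition,
# not part of the original procedure).
max_depth = 0

def factorial(n, depth=1):
    global max_depth
    max_depth = max(max_depth, depth)  # record the deepest active frame
    if n == 1:
        return 1
    return n * factorial(n - 1, depth + 1)

print(factorial(10))   # 3628800
print(max_depth)       # 10 -- one stack frame per value of n
```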
Iterative process — factorial with accumulator
The procedure is still recursive, but the process it generates is iterative. The accumulator carries the running product; each call passes all state forward — no deferred operations, constant space. Scheme guarantees tail-call optimization, so this runs in O(1) space.
Scheme
; Iterative process via tail recursion.
; State is carried in the accumulator — no deferred operations.
(define (factorial n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product) (+ counter 1))))
  (iter 1 1))
(display "6! = ") (display (factorial 6)) (newline)
(display "10! = ") (display (factorial 10)) (newline)
(display "20! = ") (display (factorial 20))
; Same results, but constant stack space.
Python
# Iterative factorial — a loop is the natural Python form
def factorial(n):
    product = 1
    for counter in range(1, n + 1):
        product *= counter
    return product
print(f"6! = {factorial(6)}")
print(f"10! = {factorial(10)}")
print(f"20! = {factorial(20)}")
# Python doesn't optimize tail calls, so loops are preferred.
Tree recursion — Fibonacci
Fibonacci via the naive definition branches at each step. The call tree has exponential nodes — the same subproblems are recomputed many times. It's beautiful and wasteful. The iterative version below runs in linear time and constant space.
Scheme
; Tree recursion: two branches per call.
; fib(n) recomputes the same subproblems fib(n-1), fib(n-2), ... redundantly.
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1)) (fib (- n 2))))))
(display "fib(10) = ") (display (fib 10)) (newline)
; Iterative Fibonacci: O(n) time, O(1) space.
(define (fib-iter n)
  (define (iter a b count)
    (if (= count 0)
        b
        (iter (+ a b) a (- count 1))))
  (iter 1 0 n))
(display "fib-iter(10) = ") (display (fib-iter 10)) (newline)
(display "fib-iter(30) = ") (display (fib-iter 30))
Python
# Tree recursion vs iterative Fibonacci
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)
print(f"fib(10) = {fib(10)}")
# Iterative version
def fib_iter(n):
    a, b = 1, 0
    for _ in range(n):
        a, b = a + b, a
    return b
print(f"fib_iter(10) = {fib_iter(10)}")
print(f"fib_iter(30) = {fib_iter(30)}")
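The redundant recomputation can be made concrete by counting invocations. In this sketch the `calls` counter is instrumentation of mine, not part of the algorithm: the naive fib(10) triggers 177 calls to compute a two-digit answer.

```python
# Count every invocation of the naive tree-recursive fib
# (the counter is instrumentation, not part of the algorithm).
calls = 0

def fib(n):
    global calls
    calls += 1
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55
print(calls)     # 177 calls -- the count grows like fib itself
```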
Orders of growth
The key question: how do time and space scale with input size? Linear recursive factorial: Θ(n) time, Θ(n) space. Iterative factorial: Θ(n) time, Θ(1) space. Tree Fibonacci: Θ(φⁿ) time, Θ(n) space. The shape of the process determines the resource profile.
Scheme
; Exponentiation — two approaches.
; Recursive: O(n) time, O(n) space.
(define (expt-rec b n)
(if (= n 0) 1 (* b (expt-rec b (- n 1)))))
; Iterative: O(n) time, O(1) space
(define (expt-iter b n)
  (define (iter product counter)
    (if (= counter 0)
        product
        (iter (* b product) (- counter 1))))
  (iter 1 n))
; Fast exponentiation: O(log n) time via successive squaring
(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2))))
        (else (* b (fast-expt b (- n 1))))))
(define (square x) (* x x))
(display "2^10 (recursive) = ") (display (expt-rec 2 10)) (newline)
(display "2^10 (iterative) = ") (display (expt-iter 2 10)) (newline)
(display "2^10 (fast) = ") (display (fast-expt 2 10)) (newline)
(display "2^20 (fast) = ") (display (fast-expt 2 20))
Python
# Three approaches to exponentiation
def expt_rec(b, n):
    if n == 0:
        return 1
    return b * expt_rec(b, n - 1)
def expt_iter(b, n):
    product = 1
    for _ in range(n):
        product *= b
    return product
def fast_expt(b, n):
    if n == 0:
        return 1
    if n % 2 == 0:
        return fast_expt(b, n // 2) ** 2
    return b * fast_expt(b, n - 1)
print(f"2^10 (recursive) = {expt_rec(2, 10)}")
print(f"2^10 (iterative) = {expt_iter(2, 10)}")
print(f"2^10 (fast) = {fast_expt(2, 10)}")
print(f"2^20 (fast) = {fast_expt(2, 20)}")
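Successive squaring also has a fully iterative form (SICP poses this as exercise 1.16): O(log n) time and O(1) space. This is a sketch, and the name `fast_expt_iter` is mine; the trick is maintaining the invariant result · baseᵉˣᵖ = bⁿ while halving the exponent.

```python
# Iterative successive squaring: O(log n) time AND O(1) space.
# Invariant: result * base**exp == b**n at every step.
def fast_expt_iter(b, n):
    result, base, exp = 1, b, n
    while exp > 0:
        if exp % 2 == 0:
            base, exp = base * base, exp // 2      # square the base
        else:
            result, exp = result * base, exp - 1   # peel off one factor
    return result

print(fast_expt_iter(2, 10))  # 1024
print(fast_expt_iter(2, 20))  # 1048576
```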
Scheme guarantees tail-call optimization: a tail-recursive procedure runs in constant space. Python does not — deep recursion hits the stack limit. The Python translations use explicit loops where SICP uses tail recursion. The even? predicate becomes n % 2 == 0. SICP's Θ notation (tight bound) is more precise than big-O (upper bound), but they coincide for these examples.
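The stack-limit point is easy to demonstrate. A small sketch (the helper names are mine): the recursive factorial fails once n exceeds CPython's default recursion limit, while the loop version computes the same value without trouble.

```python
import sys

# Deep recursion exceeds CPython's stack limit; the loop form does not.
def factorial_rec(n):
    return 1 if n == 1 else n * factorial_rec(n - 1)

def factorial_loop(n):
    product = 1
    for counter in range(1, n + 1):
        product *= counter
    return product

n = sys.getrecursionlimit() + 100   # deeper than the interpreter allows
try:
    factorial_rec(n)
    print("recursive version succeeded")
except RecursionError:
    print("recursive version blew the stack")

print(factorial_loop(n) > 0)  # True: the loop version is unaffected
```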