An algorithm's cost is not a single number. It is a growth rate: how the cost scales as the input grows. Big-O, big-Omega, and big-Theta classify these rates. They let you compare algorithms without benchmarking.
Big-O: upper bounds
f(n) is O(g(n)) if there exist positive constants c and n_0 such that f(n) ≤ c · g(n) for all n ≥ n_0. It says: f grows no faster than g.
# Big-O: f(n) = 3n + 5 is O(n), witnessed by c = 4, n_0 = 5
def f(n): return 3*n + 5
def g(n): return 4*n

print("n     f(n)    4n      f<=4n?")
for n in [1, 5, 10, 100, 1000]:
    print(f"{n:<5} {f(n):<7} {g(n):<7} {f(n) <= g(n)}")
Big-Omega and Big-Theta
Big-Omega is the mirror: f(n) is Ω(g(n)) if there exist positive constants c and n_0 such that f(n) ≥ c · g(n) for all n ≥ n_0 — f grows at least as fast as g. Big-Theta is the sandwich: f(n) is Θ(g(n)) when both O and Ω hold. It means f and g grow at the same rate, up to constant factors.
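The sandwich can be checked numerically. A minimal sketch: for f(n) = 3n + 5, the hand-picked constants c₁ = 3, c₂ = 4, n₀ = 5 (illustrative choices, not the only ones) pin f between two multiples of n, so f is Θ(n):

```python
# Theta sandwich: 3n <= 3n + 5 <= 4n for all n >= 5,
# so f(n) = 3n + 5 is both Omega(n) and O(n), hence Theta(n).
def f(n): return 3*n + 5

c_lower, c_upper, n0 = 3, 4, 5
sandwiched = all(c_lower * n <= f(n) <= c_upper * n
                 for n in range(n0, 10_000))
print(sandwiched)  # True for every n checked
```

The lower bound holds for all n (3n ≤ 3n + 5 trivially); only the upper bound needs the threshold n₀ = 5, where 3n + 5 ≤ 4n first becomes true.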
The hierarchy: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ). An algorithm in a lower class will always win for large enough n, regardless of constant factors.
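That claim about constant factors can be tested directly. As a sketch, suppose an O(n log n) algorithm carries a heavy constant of 50 steps per comparison while an O(n²) algorithm has constant 1 (both constants are made up for illustration); the lower growth class still wins once n is large enough:

```python
import math

# A slow-constant O(n log n) cost model vs. a fast-constant O(n^2) one.
def nlogn_cost(n): return 50 * n * math.log2(n)
def quad_cost(n):  return n * n

# First n where the lower growth class beats the quadratic one
# despite its 50x constant factor.
crossover = next(n for n in range(2, 10_000)
                 if nlogn_cost(n) < quad_cost(n))
print(crossover)
```

Before the crossover the quadratic algorithm is genuinely faster — asymptotic superiority only pays off for large enough inputs.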
The hierarchy in Scheme
; Growth rate comparison at n = 1000
(define n 1000)
(display "O(1) = 1") (newline)
(display "O(log n) = ")
(display (inexact->exact (round (/ (log n) (log 2))))) (newline)
(display "O(n) = ")
(display n) (newline)
(display "O(n log n) = ")
(display (inexact->exact (round (* n (/ (log n) (log 2)))))) (newline)
(display "O(n^2) = ")
(display (* n n)) (newline)
(display "O(2^n) = way too big to print") (newline)