Memory Management
Wikipedia · Memory management · CC BY-SA 4.0
The OS gives each process the illusion of a private, contiguous address space. Behind the scenes, paging maps virtual addresses to scattered physical frames. The page table is the translation dictionary. The TLB is the cache that makes it fast.
Address spaces
Each process sees virtual addresses starting from 0. The hardware (MMU) translates these to physical addresses at runtime. No process can see another's memory unless the OS explicitly shares a region.
Base and limit registers
The simplest scheme: the base register holds the starting physical address of the process's memory, and the limit register holds its size. Every access is checked: if the virtual address is at or beyond the limit, the hardware traps. Simple, but it allows no sharing between processes and suffers external fragmentation.
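The base-and-limit check can be sketched in a few lines. This is an illustrative model, not real MMU hardware; the register values below are made-up examples.

```python
def translate(vaddr: int, base: int, limit: int) -> int:
    """Base-and-limit translation: trap if the virtual address
    is at or beyond the limit, else relocate by the base."""
    if vaddr >= limit:
        raise MemoryError(f"trap: address {vaddr:#x} >= limit {limit:#x}")
    return base + vaddr

# A process loaded at physical 0x4000 with a 0x1000-byte region:
print(hex(translate(0x20, base=0x4000, limit=0x1000)))  # 0x4020
```

Note that every virtual address in the process maps to one contiguous physical region, which is exactly why external fragmentation arises: freed regions of different sizes leave unusable holes.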
Segmentation
Divide memory into logical segments (code, stack, heap). Each segment has its own base and limit. The address is (segment, offset). Problem: variable-size segments cause external fragmentation.
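A (segment, offset) address resolves against a per-segment base and limit. A minimal sketch, assuming a toy segment table (the names and addresses are invented for illustration):

```python
# Toy segment table: name -> (physical base, limit in bytes).
SEGMENTS = {
    "code":  (0x1000, 0x0800),
    "heap":  (0x4000, 0x2000),
    "stack": (0x8000, 0x1000),
}

def translate(segment: str, offset: int) -> int:
    """Resolve a (segment, offset) pair; trap if the offset
    exceeds that segment's limit."""
    base, limit = SEGMENTS[segment]
    if offset >= limit:
        raise MemoryError(f"trap: offset {offset:#x} exceeds {segment} limit")
    return base + offset
```

Because each segment grows or shrinks independently, freed segments of odd sizes leave gaps between them: the external fragmentation named above.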
Paging
Divide virtual memory into fixed-size pages and physical memory into same-size frames. The page table maps page numbers to frame numbers. No external fragmentation. Internal fragmentation is limited to the unused tail of the last page of each region (half a page on average).
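Translation splits a virtual address into a page number and an offset, looks the page number up, and reattaches the offset. A sketch, assuming 4 KiB pages and a toy single-level page table:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

# Toy page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(vaddr: int) -> int:
    """Split the address, map page -> frame, reattach the offset."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[vpn]  # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

# Virtual address 4100 = page 1, offset 4 -> frame 2, offset 4 = 8196
print(translate(4100))  # 8196
```

Because the offset is never checked against a per-region size, every page is fully usable wherever its frame lands in physical memory, which is why external fragmentation disappears.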
Translation Lookaside Buffer (TLB)
Every memory access requires a page table lookup, which is itself a memory access. The TLB is a small, fast hardware cache of recent page-to-frame mappings. TLB hit: one memory access. TLB miss: two (or more, with multi-level page tables).
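The hit/miss behavior can be modeled with a small fully associative cache. A sketch, assuming LRU replacement and a tiny capacity (real TLBs vary in associativity and replacement policy):

```python
from collections import OrderedDict

class TLB:
    """Toy fully associative TLB with LRU eviction."""
    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.entries: OrderedDict[int, int] = OrderedDict()  # vpn -> frame
        self.hits = self.misses = 0

    def lookup(self, vpn: int, page_table: dict) -> int:
        if vpn in self.entries:
            self.hits += 1
            self.entries.move_to_end(vpn)        # refresh LRU position
            return self.entries[vpn]
        self.misses += 1                         # fall back to a table walk
        frame = page_table[vpn]
        self.entries[vpn] = frame
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # evict least recently used
        return frame
```

On a hit the translation is answered from the cache; on a miss the page table must be walked, which is the extra memory access (or several, with multi-level tables) the text describes.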
Neighbors
- ⚙ Algorithms Ch.5 — hash tables and address translation: memory management uses page tables as hash maps from virtual to physical addresses
- 🔢 Discrete Math Ch.1 — set theory: memory allocation partitions the address space into disjoint segments
- 🪄 SICP Ch.6 — environment frames and memory: the OS heap and stack are the runtime realization of lexical environments