
I/O Systems

Wikipedia · Input/output · CC BY-SA 4.0

The OS mediates all communication between CPU and devices. Three strategies, in order of sophistication: polling (CPU waits), interrupts (device signals CPU), and DMA (device transfers data directly to memory). The device driver translates generic OS calls into device-specific commands.

Polling vs interrupts

Polling: the CPU repeatedly checks a status register. Simple, but burns CPU cycles. Good only for fast devices where data arrives immediately. Interrupts: the device sends a signal when ready. The CPU does other work until then. Better for slow devices, but interrupt handling has overhead.
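The tradeoff can be made concrete with a toy model. This is a sketch, not real hardware: the "status register" is simulated, and the cycle counts are made-up numbers chosen to contrast a fast device with a slow one.

```python
def polled_read(ready_at_cycle, max_cycles=1000):
    """CPU spins on a status register until the device is ready.
    Returns the number of cycles burned busy-waiting."""
    cycle = 0
    while cycle < max_cycles:
        device_ready = cycle >= ready_at_cycle  # simulated status-register check
        if device_ready:
            return cycle  # every cycle up to here was wasted spinning
        cycle += 1
    raise TimeoutError("device never became ready")

# Fast device: ready on cycle 2 -> polling wastes almost nothing.
print(polled_read(2))    # 2 wasted cycles

# Slow device: ready on cycle 800 -> 800 cycles burned spinning.
# With interrupts, the CPU would run other work until the signal,
# at the cost of a fixed interrupt-handling overhead per event.
print(polled_read(800))  # 800 wasted cycles
```

The crossover point is where the wasted polling cycles exceed the interrupt overhead, which is why very fast devices (and high-rate NICs in busy-poll mode) still use polling.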

Direct Memory Access (DMA)

DMA lets a device transfer data to/from memory without CPU involvement. The CPU sets up the transfer (source, destination, count), then the DMA controller handles it. When done, the controller interrupts the CPU. Bulk transfers (disk, network) would be impractical without DMA.
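The setup-then-interrupt sequence can be sketched as follows. This is a simplified model for illustration: the class, its "registers" (source, destination, count), and the completion callback are invented stand-ins, not a real DMA controller's programming interface.

```python
class DMAController:
    """Toy DMA controller: CPU programs it, it copies, then signals completion."""

    def program(self, src, dst, dst_off, count, on_complete):
        # CPU-side setup: conceptually just a few register writes.
        self.src, self.dst, self.dst_off, self.count = src, dst, dst_off, count
        self.on_complete = on_complete

    def run(self):
        # The copy itself proceeds without a CPU instruction per byte.
        for i in range(self.count):
            self.dst[self.dst_off + i] = self.src[i]
        self.on_complete()  # stand-in for the completion interrupt

disk_buffer = bytearray(b"hello from disk!")  # pretend device-side data
memory = bytearray(64)                        # pretend main memory
done = []

dma = DMAController()
dma.program(disk_buffer, memory, 0, len(disk_buffer),
            on_complete=lambda: done.append(True))
dma.run()
print(bytes(memory[:16]))  # b'hello from disk!'
```

The point of the model: the CPU's involvement is only `program()` and the completion handler; the per-byte work in `run()` belongs to the controller. Copying a multi-megabyte disk block one `inb`/`outb` at a time would swamp the CPU.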

Device drivers

A device driver is the OS module that knows the specifics of a particular hardware device. The OS provides a uniform interface (open, read, write, close). The driver translates these into the actual hardware commands (port I/O, register writes, command queues).
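The split between the uniform interface and device-specific translation can be sketched like this. The class names and the fake "TX register" are illustrative inventions; a real driver would touch memory-mapped registers or I/O ports.

```python
class CharDeviceDriver:
    """Uniform interface the OS sees for every character device."""
    def open(self): raise NotImplementedError
    def read(self, n): raise NotImplementedError
    def write(self, data): raise NotImplementedError
    def close(self): raise NotImplementedError

class FakeUartDriver(CharDeviceDriver):
    """Translates the generic calls into (fake) hardware register writes."""
    def __init__(self):
        self.tx_register = []   # stand-in for a hardware transmit register
        self.opened = False

    def open(self):
        self.opened = True      # real driver: power up device, set baud rate

    def write(self, data):
        assert self.opened
        for byte in data:       # real driver: wait for TX-ready, then write port
            self.tx_register.append(byte)
        return len(data)

    def close(self):
        self.opened = False

uart = FakeUartDriver()
uart.open()
n = uart.write(b"ok")           # OS-level call; driver does the translation
uart.close()
```

Because every driver implements the same `open`/`read`/`write`/`close` contract, the rest of the OS (and user programs) never need to know whether the bytes end up on a serial port, a disk, or a pipe.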

Disk scheduling

When multiple I/O requests target different locations on a spinning disk, the order of service matters. The disk arm must physically move to each track. Minimizing total arm movement reduces latency.

FCFS: serve in arrival order. Simple, but the arm may zigzag. SCAN (elevator): the arm sweeps one direction, servicing requests, then reverses. C-SCAN: like SCAN, but only services requests in one direction, then jumps back to the start.

Figure: SCAN (elevator) scheduling. Head starts at track 50 moving right; pending requests at tracks 20, 40, 80, 110, 160 (tracks 0–199).
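The three policies can be compared on the workload from the illustration above (head at track 50, requests at 20, 40, 80, 110, 160, tracks 0 to 199) by summing head movement. These are simplified sketches: the SCAN version is the LOOK variant that reverses at the last request rather than the disk edge, and C-SCAN here counts the return seek.

```python
def fcfs(start, requests):
    """Serve in arrival order; the arm may zigzag across the disk."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def scan(start, requests):
    """Elevator: sweep right through pending requests, then reverse.
    (LOOK variant: turns around at the last request, not the disk edge.)"""
    right = sorted(r for r in requests if r >= start)
    left = sorted((r for r in requests if r < start), reverse=True)
    total, pos = 0, start
    for r in right + left:
        total += abs(r - pos)
        pos = r
    return total

def c_scan(start, requests, max_track=199):
    """Service rightward only; after the disk edge, seek back to track 0."""
    right = sorted(r for r in requests if r >= start)
    left = sorted(r for r in requests if r < start)
    total, pos = 0, start
    for r in right:
        total += abs(r - pos)
        pos = r
    total += (max_track - pos) + max_track  # sweep to the edge, seek home
    pos = 0
    for r in left:
        total += abs(r - pos)
        pos = r
    return total

reqs = [20, 40, 80, 110, 160]
print(fcfs(50, reqs))    # 170
print(scan(50, reqs))    # 250
print(c_scan(50, reqs))  # 388
```

FCFS happens to win on this tiny queue, but its zigzagging degrades badly as the queue grows; SCAN and C-SCAN bound the worst-case wait, and C-SCAN's one-direction service gives more uniform latency across tracks.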
Neighbors
  • 🎛 Control Ch.6 — frequency response: interrupt-driven I/O uses polling rate vs. latency tradeoffs analogous to control system bandwidth
  • ⚙ Algorithms Ch.7 — priority queues: I/O schedulers use priority queues to merge and reorder disk requests for throughput
  • 🎰 Probability Ch.11 — queuing theory: I/O scheduling is applied queuing theory with service times and arrival rates
