
Bayesian Models of Cognition

Lovelace textbook · CC BY-SA 4.0 · computationalcognitivescience.github.io/lovelace/home

Concept learning is hypothesis testing. Given a few examples, learners infer which concept generated them by computing a posterior over a structured hypothesis space. The size principle favors smaller, tighter hypotheses: a concept that could generate fewer examples gets more credit for generating the ones you saw. Abstract knowledge helps rather than hurts. This is the blessing of abstraction.

Concept learning as hypothesis testing

You see the numbers 2, 4, 8. What is the rule? "Powers of two" is a tighter hypothesis than "even numbers," which is tighter than "all numbers." Bayesian inference naturally favors the tightest hypothesis consistent with the data, because the likelihood of generating exactly those examples is higher under a smaller set.

Figure: nested hypothesis spaces (all numbers ⊃ even numbers ⊃ powers of 2) containing the examples 2, 4, 8. Tighter hypotheses get more likelihood credit (size principle).
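The number game can be sketched in a few lines. This is a minimal Python illustration (the hypothesis extensions, the range 1–100, and the uniform prior are assumptions for the sketch, not the textbook's exact hypothesis space): each hypothesis consistent with the data gets likelihood (1/|H|)^n, so the tightest consistent hypothesis dominates the posterior.

```python
# Size-principle sketch of the number game. Hypothesis extensions and the
# uniform prior are illustrative assumptions, not the textbook's exact space.
from fractions import Fraction

hypotheses = {
    "powers of 2":  {2, 4, 8, 16, 32, 64},
    "even numbers": set(range(2, 101, 2)),
    "all numbers":  set(range(1, 101)),
}

def posterior(data, hypotheses):
    """Posterior over hypotheses with a uniform prior and the size
    principle: P(data|H) = (1/|H|)^n for each H consistent with the data."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in data):
            scores[name] = Fraction(1, len(extension)) ** len(data)
        else:
            scores[name] = Fraction(0)  # inconsistent hypotheses are ruled out
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

post = posterior([2, 4, 8], hypotheses)
for name, p in sorted(post.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {float(p):.4f}")
```

With the examples 2, 4, 8, "powers of 2" takes nearly all the posterior mass even though "even numbers" and "all numbers" are also consistent, because the broader sets pay a steep likelihood penalty for each example.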

Causal reasoning

Bayesian models extend to causal reasoning. Given a causal graph (A causes B, B causes C), you can infer causes from effects by inverting the generative model with Bayes' theorem. Observing wet grass, you infer rain is more likely. Observing that the sprinkler is on reduces the evidence for rain. This "explaining away" falls out naturally from the posterior computation.

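Explaining away can be checked by brute-force enumeration of a toy sprinkler network. The conditional probability values below are illustrative assumptions, not from the textbook; the point is only the qualitative pattern: wet grass raises the probability of rain, and learning the sprinkler was on pushes it back down.

```python
# Explaining away in a toy rain/sprinkler/wet-grass network.
# All probability values are assumed for illustration.
from itertools import product

P_rain = 0.2
P_sprinkler = 0.3

def p_wet(rain, sprinkler):
    """P(grass is wet | rain, sprinkler), noisy-OR style."""
    if rain and sprinkler:
        return 0.99
    if rain:
        return 0.90
    if sprinkler:
        return 0.85
    return 0.01

def p_rain_given(wet=True, sprinkler=None):
    """P(rain | wet grass, optionally the sprinkler state), by summing
    the joint distribution over all unobserved variables."""
    num = den = 0.0
    for r, s in product([True, False], repeat=2):
        if sprinkler is not None and s != sprinkler:
            continue  # condition on the observed sprinkler state
        joint = ((P_rain if r else 1 - P_rain)
                 * (P_sprinkler if s else 1 - P_sprinkler)
                 * (p_wet(r, s) if wet else 1 - p_wet(r, s)))
        den += joint
        if r:
            num += joint
    return num / den

base = p_rain_given(wet=True)                  # belief in rain after seeing wet grass
away = p_rain_given(wet=True, sprinkler=True)  # sprinkler explains the wetness away
print(f"P(rain | wet) = {base:.3f}")
print(f"P(rain | wet, sprinkler on) = {away:.3f}")
```

No special "explaining away" rule is coded anywhere: conditioning on the sprinkler lowers the posterior on rain purely because the generative model is inverted by Bayes' theorem.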

The blessing of abstraction

Hierarchical Bayesian models learn at multiple levels simultaneously. Abstract knowledge (e.g., "animals in this ecosystem tend to be small") constrains lower-level inference (e.g., "this new species is probably small too"). More abstract hypotheses are learnable from fewer examples because they constrain many lower-level hypotheses at once. Abstraction does not cost you data efficiency. It buys you data efficiency.

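A minimal hierarchical sketch of the ecosystem example, using an assumed Beta-Bernoulli-style model on a grid (not the textbook's model): an abstract parameter theta is the ecosystem-wide tendency for species to be small, each observed species is small with probability theta, and the learned posterior over theta then constrains the prediction for a brand-new species.

```python
# Hierarchical inference sketch: the abstract parameter theta ("animals in
# this ecosystem tend to be small") is learned from observed species and
# then constrains predictions about an unseen species. The grid of candidate
# theta values and the observation counts are illustrative assumptions.

thetas = [0.1, 0.3, 0.5, 0.7, 0.9]            # candidate ecosystem tendencies
prior = {t: 1 / len(thetas) for t in thetas}  # uniform at the abstract level

def update(prior, small, total):
    """Posterior over theta after observing `small` small species of `total`."""
    post = {t: p * t**small * (1 - t)**(total - small) for t, p in prior.items()}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

def predict_small(dist):
    """P(next species is small) = E[theta] under the given distribution."""
    return sum(t * p for t, p in dist.items())

post = update(prior, small=5, total=5)  # five species seen, all small
print(f"P(new species is small), before data: {predict_small(prior):.3f}")
print(f"P(new species is small), after data:  {predict_small(post):.3f}")
```

Five observations at the species level are enough to sharply constrain the ecosystem-level parameter, and that abstract knowledge immediately transfers to a species never seen before: the blessing of abstraction in miniature.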

Notation reference

Term · Meaning
Size principle · P(data|H) = (1/|H|)^n for n examples; smaller hypotheses get more credit
Explaining away · Observing one cause reduces the posterior of competing causes
Hierarchical Bayes · Priors at one level are learned from data at another
Blessing of abstraction · Abstract knowledge helps rather than hurts data efficiency

Translation notes

The Lovelace textbook walks through Tenenbaum's number game in detail and includes interactive sliders for hypothesis spaces. This page extracts the core principles: the size principle as the source of Bayesian Occam's razor, causal reasoning as posterior inference over generative models, and the blessing of abstraction as a scaling argument for hierarchical models. The textbook also covers iterated learning and cultural transmission, which connect to Chapter 7.

Read the original: Lovelace, Chapter 3.