
🧠 Computational Cognitive Science

Based on the Lovelace textbook, licensed CC BY-SA 4.0.

How minds compute: Bayesian inference, neural networks, reinforcement learning, and the architectures that tie them together. For a framework that decomposes these into six functional roles, see The Natural Framework.

[Figure: A Bayesian network with nodes Rain, Sprinkler, and Wet Grass, annotated with P(R), P(S|R), and P(W|R). Causes point to effects, probabilities propagate.]
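The rain–sprinkler–wet-grass network above can be sketched in a few lines: the joint distribution factorises along the network's edges, and "propagating probabilities" backwards from evidence is just enumeration. All probability values here are illustrative assumptions, not taken from the figure.

```python
# Minimal sketch of the rain/sprinkler/wet-grass network (illustrative numbers).
P_R = {True: 0.2, False: 0.8}           # P(Rain)
P_S_given_R = {True: 0.01, False: 0.4}  # P(Sprinkler=on | Rain)
P_W_given_RS = {                        # P(Grass wet | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.9, (False, False): 0.0,
}

def joint(r, s, w):
    """P(R=r, S=s, W=w): the joint factorises along the network's edges."""
    p_s = P_S_given_R[r] if s else 1 - P_S_given_R[r]
    p_w = P_W_given_RS[(r, s)] if w else 1 - P_W_given_RS[(r, s)]
    return P_R[r] * p_s * p_w

# Inference against the causal arrows: P(Rain | grass wet) by enumeration.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(f"P(Rain | grass wet) = {num / den:.3f}")
```

With these made-up numbers, observing wet grass raises the probability of rain from the 0.2 prior to roughly 0.385; chapters 2 and 3 develop this kind of inference in full.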
Chapters
1. What is Computational Cognitive Science? – Models as precise theories of cognition, tested at Marr's computational, algorithmic, and implementational levels 🧠
2. Probability and Bayes – Bayes' theorem inverts conditional probabilities, turning evidence into updated beliefs 🧠
3. Bayesian Models of Cognition – Concept learning and causal reasoning as probabilistic inference over structured hypotheses 🧠
4. Neural Networks – Perceptrons, backpropagation, and how distributed representations learn features from data 🧠
5. Reinforcement Learning – Agents learn policies by maximizing reward, balancing exploration against exploitation 🧠
6. Decision Making – Utility theory, prospect theory, and why bounded rationality beats unbounded optimization 🧠
7. Language and Communication – Probabilistic models of language production, comprehension, and pragmatic inference 🧠
8. Cognitive Architecture – Production systems like ACT-R and Soar that unify memory, learning, and action selection 🧠
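Chapter 5's trade-off between exploration and exploitation can be sketched with an epsilon-greedy agent on a two-armed bandit. The reward probabilities and the epsilon value are illustrative assumptions, not from the text.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Two-armed bandit: arm 1 pays off more often (illustrative numbers).
true_p = [0.3, 0.7]   # true payoff probability of each arm
q = [0.0, 0.0]        # agent's estimated value of each arm
n = [0, 0]            # pull counts per arm
epsilon = 0.1         # fraction of the time the agent explores

for _ in range(5000):
    if random.random() < epsilon:
        a = random.randrange(2)          # explore: pick an arm at random
    else:
        a = 0 if q[0] >= q[1] else 1     # exploit: pick the best-looking arm
    r = 1.0 if random.random() < true_p[a] else 0.0
    n[a] += 1
    q[a] += (r - q[a]) / n[a]            # incremental mean of observed rewards

print(f"estimated values: arm 0 = {q[0]:.2f}, arm 1 = {q[1]:.2f}")
```

After enough pulls the estimates approach the true payoff rates and the agent spends most of its time on the better arm; the occasional random pull is what keeps the estimate of the neglected arm from going stale.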

📺 Video lectures: Yale PSYC 110: Introduction to Psychology (Paul Bloom)
