Hierarchical Bayesian models (HBMs) extend standard Bayesian inference to multi-level data structures where individual subjects are assumed to be drawn from a population distribution. Parameters at the individual level are constrained by group-level (hyper)parameters, producing "shrinkage" that regularizes noisy individual estimates toward the group mean. This approach has transformed cognitive modeling by enabling principled inference about individual differences.
The Hierarchical Structure
Individual level: θᵢ ~ Normal(μ, σ) for each subject i
Data level: yᵢⱼ ~ f(θᵢ) for each observation j from subject i
Posterior: P(θ₁,...,θₙ, μ, σ | data) ∝ Πᵢ Πⱼ P(yᵢⱼ|θᵢ) · Πᵢ P(θᵢ|μ,σ) · P(μ,σ)
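The generative structure above can be sketched as a forward simulation. This is a minimal illustration, not any particular cognitive model: the hyperparameter values, the choice of a normal likelihood for f, and the names (obs_noise, n_subjects, n_obs) are all hypothetical, chosen only to make the two levels concrete.

```python
import random

random.seed(0)

# Hypothetical hyperparameters, assumed for illustration only
mu, sigma = 0.8, 0.3      # population mean and spread of theta
obs_noise = 0.5           # within-subject observation noise (the "f" here is Normal)
n_subjects, n_obs = 5, 20

# Individual level: theta_i ~ Normal(mu, sigma) for each subject i
thetas = [random.gauss(mu, sigma) for _ in range(n_subjects)]

# Data level: y_ij ~ Normal(theta_i, obs_noise) for each observation j
data = [[random.gauss(t, obs_noise) for _ in range(n_obs)] for t in thetas]

for i, (t, ys) in enumerate(zip(thetas, data)):
    print(f"subject {i}: true theta = {t:.2f}, sample mean = {sum(ys)/len(ys):.2f}")
```

Inference then runs in the opposite direction: given only the simulated data, the posterior in the equation above jointly recovers the θᵢ, μ, and σ.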
Advantages for Cognitive Modeling
When fitting cognitive models (DDM, reinforcement learning, etc.) to individual subjects, HBMs provide several advantages: (1) regularization — individual estimates are "shrunk" toward the group mean, improving estimates for subjects with limited data; (2) borrowing strength — information from well-measured subjects helps constrain poorly-measured subjects; (3) natural handling of individual differences — the population distribution directly describes between-subject variability in model parameters.
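The shrinkage in point (1) can be made concrete with a conjugate normal-normal sketch. Here the hyperparameters and observation noise are assumed known (in a full HBM they are inferred jointly), and the values are hypothetical; the point is only that the posterior mean is a precision-weighted compromise between a subject's own sample mean and the group mean, with less data pulling the estimate further toward the group.

```python
import random

random.seed(1)

# Assumed known hyperparameters and noise for this illustration;
# a real HBM would infer mu and sigma from the data.
mu, sigma, obs_sd = 0.0, 1.0, 2.0

def shrunk_estimate(ys):
    """Posterior mean of theta_i given data ys: a precision-weighted
    average of the subject's sample mean and the group mean mu."""
    n = len(ys)
    ybar = sum(ys) / n
    prec_data = n / obs_sd**2      # precision contributed by the data
    prec_prior = 1 / sigma**2      # precision of the group-level prior
    w = prec_data / (prec_data + prec_prior)
    return w * ybar + (1 - w) * mu

theta_true = 1.5
few = [random.gauss(theta_true, obs_sd) for _ in range(3)]     # 3 trials
many = [random.gauss(theta_true, obs_sd) for _ in range(300)]  # 300 trials

# With little data the estimate sits close to mu; with plenty of data
# it tracks the subject's own sample mean instead.
print("few trials:", shrunk_estimate(few))
print("many trials:", shrunk_estimate(many))
```

The same weight w explains point (2): well-measured subjects sharpen the estimates of μ and σ, which in turn anchor the prior that poorly-measured subjects are shrunk toward.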
HBMs have become the standard approach for fitting computational models to behavioral data, implemented through MCMC sampling (JAGS, Stan) or variational inference methods.
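For intuition about what the MCMC route involves, here is a bare-bones random-walk Metropolis sampler for a hierarchical normal model. It is a toy sketch under assumed settings (flat priors on μ and log σ, known observation noise, hand-tuned proposal scales), not a substitute for Stan or JAGS, which use far more efficient samplers.

```python
import math
import random

random.seed(2)

# Simulated data from an assumed hierarchical normal model
true_mu, true_sigma, obs_sd = 1.0, 0.5, 1.0
true_thetas = [random.gauss(true_mu, true_sigma) for _ in range(8)]
data = [[random.gauss(t, obs_sd) for _ in range(30)] for t in true_thetas]

def log_post(mu, log_sigma, th):
    """Unnormalized log posterior; flat priors on mu and log_sigma."""
    sigma = math.exp(log_sigma)
    lp = 0.0
    for t, ys in zip(th, data):
        lp += -0.5 * ((t - mu) / sigma) ** 2 - math.log(sigma)  # P(theta_i | mu, sigma)
        lp += sum(-0.5 * ((y - t) / obs_sd) ** 2 for y in ys)   # P(y_ij | theta_i)
    return lp

# Initialize near the data to keep the toy chain's burn-in short
th = [sum(ys) / len(ys) for ys in data]
mu = sum(th) / len(th)
ls = math.log(0.5)
cur = log_post(mu, ls, th)

samples = []
for step in range(5000):
    # Propose a joint random-walk move over all parameters
    prop_mu = mu + random.gauss(0, 0.05)
    prop_ls = ls + random.gauss(0, 0.05)
    prop_th = [t + random.gauss(0, 0.05) for t in th]
    cand = log_post(prop_mu, prop_ls, prop_th)
    if math.log(random.random()) < cand - cur:  # Metropolis accept/reject
        mu, ls, th, cur = prop_mu, prop_ls, prop_th, cand
    if step >= 2500:  # discard the first half as burn-in
        samples.append(mu)

print("posterior mean of mu ~", sum(samples) / len(samples))
```

Stan and JAGS express the same model declaratively and handle the sampling (Hamiltonian Monte Carlo and Gibbs sampling, respectively), which is why they dominate practice.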