A central question in learning theory is whether organisms attend equally to all stimuli or selectively allocate attention to the most informative cues. Attentional models propose that learning involves not just updating stimulus-outcome associations but also adjusting the attention (or "associability") given to different stimuli. This attentional process operates as a meta-learning mechanism that modulates the rate of primary learning.
Two Competing Principles
Mackintosh (1975): attend to what predicts well. αᵢ increases when cue i predicts the outcome better than its competitors (predictiveness).
Pearce-Hall (1980): attend to what is uncertain. αᵢ increases when outcomes are surprising, that is, when |λ − ΣV| is large (uncertainty).
These two principles make opposite predictions in key situations. After a cue becomes a reliable predictor, Mackintosh's model increases its associability (attention stays high), while Pearce-Hall decreases it (surprise is low, so attention drops). Empirical evidence supports both mechanisms, suggesting a hybrid model where different attentional processes operate on different timescales or under different conditions.
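The divergence described above can be seen in a simulation. Below is a minimal sketch, not the original models: the Pearce-Hall update follows the standard form αₜ₊₁ = γ|λ − ΣV| + (1 − γ)αₜ, while the Mackintosh rule is reduced to a single-cue caricature in which α drifts upward as the cue's own prediction error shrinks. The parameter values and the 0.1 drift rate are illustrative assumptions.

```python
def simulate(n_trials=40, lam=1.0, lr=0.3, gamma=0.3):
    """Track associability (alpha) for one cue reliably paired with reward.

    Simplified, single-cue caricatures of the two attentional rules;
    parameter values (lr, gamma, the 0.1 drift rate) are assumptions
    chosen for illustration, not taken from the original papers.
    """
    V_ph = V_mk = 0.0          # associative strengths
    a_ph = a_mk = 0.5          # starting associabilities
    alphas_ph, alphas_mk = [], []
    for _ in range(n_trials):
        # Pearce-Hall: alpha tracks the recent absolute prediction error,
        # so it spikes early (big surprise) and declines with training.
        err = lam - V_ph
        V_ph += lr * a_ph * err
        a_ph = gamma * abs(err) + (1 - gamma) * a_ph
        alphas_ph.append(a_ph)
        # Mackintosh caricature: alpha drifts toward 1 as the cue's own
        # prediction error shrinks, i.e., as the cue predicts well.
        err_mk = lam - V_mk
        V_mk += lr * a_mk * err_mk
        a_mk += 0.1 * (1 - abs(err_mk)) * (1 - a_mk)
        alphas_mk.append(a_mk)
    return alphas_ph, alphas_mk

ph, mk = simulate()
print(f"Pearce-Hall alpha: first {ph[0]:.2f}, last {ph[-1]:.2f}")  # declines with training
print(f"Mackintosh alpha:  first {mk[0]:.2f}, last {mk[-1]:.2f}")  # climbs with training
```

Running this for a reliably reinforced cue reproduces the dissociation in the text: Pearce-Hall associability decays once the outcome stops being surprising, while the Mackintosh-style associability rises toward its ceiling as the cue becomes a good predictor.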
Formal Integration
Le Pelley (2004) and others have proposed hybrid models incorporating both predictiveness-driven and uncertainty-driven attention. The ALCOVE model of category learning (Kruschke, 1992) implements attention learning through gradient descent on classification error, providing a connectionist implementation of selective attention. These models have been influential in understanding attentional biases in anxiety, addiction, and learned helplessness.
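One simple way to express a hybrid is a weighted blend of the two update principles. The sketch below is an illustration of that idea only, not Le Pelley's actual equations: the blend weight w, the drift rate 0.1, and the function name are all assumptions introduced here.

```python
def hybrid_alpha(alpha, pred_error, w=0.5, gamma=0.3):
    """Illustrative hybrid associability update (a sketch, not Le Pelley's
    published formulation): blend an uncertainty-driven (Pearce-Hall) term
    with a predictiveness-driven (Mackintosh-like) term.

    alpha:      current associability of the cue, in [0, 1]
    pred_error: signed prediction error (lambda - V) from the last trial
    w:          blend weight; w=1 is pure Pearce-Hall, w=0 pure Mackintosh
    """
    # Pearce-Hall component: move alpha toward recent surprise.
    uncertainty_term = gamma * abs(pred_error) + (1 - gamma) * alpha
    # Mackintosh-like component: drift alpha upward when the cue predicts well.
    predictiveness_term = alpha + 0.1 * (1 - abs(pred_error)) * (1 - alpha)
    return w * uncertainty_term + (1 - w) * predictiveness_term

# For a well-trained cue (small prediction error), the two extremes pull
# alpha in opposite directions, matching the dissociation discussed above.
print(hybrid_alpha(0.4, 0.05, w=1.0))  # PH-dominant: alpha drops
print(hybrid_alpha(0.4, 0.05, w=0.0))  # Mackintosh-dominant: alpha rises
```

Intermediate values of w interpolate between the two regimes, which is one way such hybrids can accommodate evidence for both mechanisms, for example if w itself varies across timescales or task conditions.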