REM (Retrieving Effectively from Memory), developed by Richard Shiffrin and Mark Steyvers (1997), extends the SAM framework by introducing a Bayesian decision mechanism for recognition memory. Each stored trace is a vector of feature values drawn from a geometric distribution, and recognition decisions are based on the likelihood ratio that a test probe was generated by a stored trace versus being a new item.
Feature Representation
Items are represented as vectors of integer-valued features, where each feature value is drawn from a geometric distribution with parameter g: P(value = v) = (1−g)^(v−1) · g. Low values are common and high values are rare. This ecological assumption is central to the model: a match on a rare feature value provides much stronger evidence that the probe corresponds to a stored trace than a match on a common value.
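As a concrete sketch (the function and parameter names here are my own, not from the original model code), an item's feature vector can be sampled directly with NumPy's geometric sampler, which uses exactly the P(value = v) = (1−g)^(v−1) · g parameterization with support {1, 2, 3, ...}:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_item(n_features=20, g=0.4, rng=rng):
    """Sample one item: a vector of feature values, each drawn from a
    geometric distribution with P(value = v) = (1 - g)**(v - 1) * g.
    Low values are common; high values are rare."""
    return rng.geometric(g, size=n_features)

item = sample_item()
```

With g = 0.4 the mean feature value is 1/g = 2.5, so most features take small, common values and only occasionally a large, diagnostic one.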
Encoding and Storage Errors
When an item is studied, each feature is stored with probability u* (which increases with study time). Stored features may be copied correctly with probability c, or incorrectly with probability 1−c, in which case a random value is drawn from the geometric distribution. Features not stored remain at zero. This noisy encoding process produces traces that partially match both old and new items, creating the core computational problem that the Bayesian mechanism solves.
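The storage process just described can be simulated in a few lines (a minimal sketch; the names u_star, c, and g and the default values are illustrative assumptions, not the published parameter settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def store_trace(item, u_star=0.7, c=0.8, g=0.4, rng=rng):
    """Noisily encode an item into a memory trace.

    Each feature is stored with probability u_star; a stored feature is
    copied correctly with probability c and otherwise replaced by a fresh
    geometric draw. Unstored features remain at 0."""
    stored = rng.random(item.shape) < u_star     # which features get stored
    correct = rng.random(item.shape) < c         # of those, which copy correctly
    noise = rng.geometric(g, size=item.shape)    # replacement values on error
    return np.where(stored, np.where(correct, item, noise), 0)

item = rng.geometric(0.4, size=20)   # a study item
trace = store_trace(item)
```

The resulting trace matches the studied item on most stored features but can also match other items by chance, which is exactly the ambiguity the Bayesian decision mechanism must resolve.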
Bayesian Recognition
At test, the model computes a likelihood ratio λ(i) for each trace i: the probability that the observed pattern of matches and mismatches between probe and trace would arise if the trace stored the probe item, divided by the probability that it would arise if the trace stored a different item. The comparison runs over the trace's nonzero (stored) features, and the per-feature ratios are multiplied:

λ(i) = Πⱼ (1−c)^(1−δⱼ) · [(c + (1−c)·g(j)) / g(j)]^(δⱼ)

where δⱼ = 1 when probe and trace values match on feature j and 0 otherwise, and g(j) = g·(1−g)^(v−1) is the base-rate probability of the trace's stored value v on feature j. A match on a rare value (small g(j)) yields a large ratio, while any mismatch contributes the constant factor 1−c. The odds that the probe is old are then the average of the likelihood ratios over the n stored traces: Φ = (1/n) Σᵢ λ(i).
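The decision computation can be sketched directly from these definitions (a simplified illustration, not the published implementation; the parameter names c and g follow the text):

```python
import numpy as np

def likelihood_ratio(probe, trace, c=0.8, g=0.4):
    """Likelihood ratio that `trace` encodes `probe` rather than another item.

    Only nonzero (stored) trace features carry evidence: a match on stored
    value v contributes (c + (1 - c) * g_v) / g_v, where
    g_v = g * (1 - g)**(v - 1), and a mismatch contributes (1 - c)."""
    lam = 1.0
    for p, t in zip(probe, trace):
        if t == 0:                          # feature never stored: uninformative
            continue
        if p == t:                          # match: rarer values weigh more
            g_v = g * (1 - g) ** (t - 1)
            lam *= (c + (1 - c) * g_v) / g_v
        else:                               # mismatch: constant penalty
            lam *= 1 - c
    return lam

def odds_old(probe, traces, c=0.8, g=0.4):
    """Odds that the probe is old: mean likelihood ratio across traces."""
    return float(np.mean([likelihood_ratio(probe, t, c, g) for t in traces]))
```

With c = 0.8 and g = 0.4, a single match on a rare value such as v = 6 (g_v ≈ 0.031) multiplies λ by about 26, while a match on the commonest value v = 1 (g_v = 0.4) multiplies it by only 2.2, which is the diagnosticity asymmetry the model exploits.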
Predictions and Strengths
REM naturally accounts for the mirror effect in recognition memory: low-frequency words produce both higher hit rates and lower false alarm rates than high-frequency words. This arises because rare features (high feature values) are more diagnostic. The model also explains list-length effects, the null list-strength effect in recognition, and the shape of receiver operating characteristic (ROC) curves.
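A toy simulation illustrates the mirror pattern (my own sketch with assumed parameter values; following REM's treatment of word frequency, low-frequency items are modeled with a smaller g, giving them rarer feature values):

```python
import numpy as np

def likelihood_ratio(probe, trace, c=0.8, g=0.4):
    """Per-trace likelihood ratio over stored (nonzero) features."""
    lam = 1.0
    for p, t in zip(probe, trace):
        if t == 0:
            continue
        if p == t:
            g_v = g * (1 - g) ** (t - 1)
            lam *= (c + (1 - c) * g_v) / g_v
        else:
            lam *= 1 - c
    return lam

def simulate(g, n_items=20, n_feat=20, u_star=0.7, c=0.8,
             n_reps=300, seed=0):
    """Estimate hit and false-alarm rates for items with feature base rate g."""
    rng = np.random.default_rng(seed)
    hits = fas = 0
    for _ in range(n_reps):
        items = rng.geometric(g, size=(n_items, n_feat))    # study list
        stored = rng.random(items.shape) < u_star
        correct = rng.random(items.shape) < c
        noise = rng.geometric(g, size=items.shape)
        traces = np.where(stored, np.where(correct, items, noise), 0)
        old_probe = items[0]                                # a studied item
        new_probe = rng.geometric(g, size=n_feat)           # an unstudied item
        phi_old = np.mean([likelihood_ratio(old_probe, t, c, g) for t in traces])
        phi_new = np.mean([likelihood_ratio(new_probe, t, c, g) for t in traces])
        hits += phi_old > 1.0                               # respond "old" if Φ > 1
        fas += phi_new > 1.0
    return hits / n_reps, fas / n_reps

hit_lf, fa_lf = simulate(g=0.30)   # "low-frequency" items: rarer feature values
hit_hf, fa_hf = simulate(g=0.45)   # "high-frequency" items: commoner values
```

With these assumed parameters the low-frequency condition typically shows both a higher hit rate and a lower false-alarm rate than the high-frequency condition, the mirror pattern described above; exact rates vary with the parameters and the random seed.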
REM was one of the first memory models to propose that recognition decisions are approximately optimal in a Bayesian sense. The observer computes likelihood ratios and responds "old" when the odds exceed a criterion. This rational analysis connects memory models to ideal-observer theory in signal detection.