The Receiver Operating Characteristic (ROC) curve is a graphical tool borrowed from signal processing that has become central to psychological measurement. By plotting hit rate (y-axis) against false alarm rate (x-axis) across multiple criterion settings, the ROC curve traces out the full range of an observer's performance possibilities.
Constructing ROC Curves
In rating-scale experiments, observers provide confidence ratings rather than binary yes/no responses. Each possible confidence threshold yields a different hit rate–false alarm rate pair, generating multiple points on the ROC curve. The resulting curve reveals both overall sensitivity (area under the curve) and the shape of the underlying distributions.
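As a sketch of this construction, the following uses made-up rating counts for a six-point confidence scale (all numbers are illustrative, not from any real experiment): cumulating the counts outward from the strictest criterion yields one ROC point per confidence threshold.

```python
import numpy as np

# Hypothetical response counts at each confidence level, ordered from
# "sure old" (strictest criterion) to "sure new" (most lenient).
old_counts = np.array([61, 15, 10, 7, 4, 3])   # old (signal) trials
new_counts = np.array([5, 8, 12, 15, 25, 35])  # new (noise) trials

# Each cumulative threshold gives a (false-alarm rate, hit rate) pair.
hits = np.cumsum(old_counts) / old_counts.sum()
fas = np.cumsum(new_counts) / new_counts.sum()

# Prepend the origin so the curve runs from (0, 0) to (1, 1).
roc_fa = np.concatenate([[0.0], fas])
roc_hit = np.concatenate([[0.0], hits])

# Trapezoidal area under the empirical ROC (a nonparametric AUC estimate).
auc = np.sum((roc_fa[1:] - roc_fa[:-1]) * (roc_hit[1:] + roc_hit[:-1]) / 2)
```

With these illustrative counts the observer's hit rate exceeds the false alarm rate at every threshold, so the curve bows above the chance diagonal and the trapezoidal area comes out well above 0.5.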
The area under the ROC curve (AUC) summarizes sensitivity independently of any single criterion; the area under the empirical curve is a nonparametric measure, while the area under a fitted Gaussian-model ROC is conventionally denoted Az. An AUC of 0.5 indicates chance performance, and 1.0 indicates perfect discrimination. For the equal-variance Gaussian SDT model, AUC = Φ(d′/√2), where Φ is the standard normal CDF.
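The equal-variance relation AUC = Φ(d′/√2) is easy to verify numerically; this minimal sketch uses only the standard library (the function names are mine, not from any particular package):

```python
from math import sqrt, erf

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def auc_from_dprime(d_prime):
    """Equal-variance Gaussian SDT: AUC = Phi(d' / sqrt(2))."""
    return phi(d_prime / sqrt(2.0))

auc_from_dprime(0.0)  # chance performance -> 0.5
auc_from_dprime(1.0)  # approximately 0.76
```

Note that the mapping is monotone, so AUC and d′ rank observers identically under this model.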
Equal vs. Unequal Variance
Under the standard equal-variance Gaussian SDT model, the ROC is symmetric about the negative diagonal, and the zROC (the curve replotted with hit and false alarm rates transformed to z-scores) is linear with a slope of 1.0. However, empirical zROC curves in recognition memory are typically linear with slopes less than 1.0 (around 0.8), indicating that the old-item distribution has greater variance than the new-item distribution. This unequal-variance pattern has important implications for measuring sensitivity: simple d′ is no longer criterion-invariant and can misstate sensitivity, so the measure da, which incorporates the variance ratio, should be used instead.
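Given a linear zROC fit z(H) = a + s·z(F), where the slope s equals the noise-to-signal standard deviation ratio, da follows directly from the intercept and slope. A minimal sketch, with illustrative values rather than real data:

```python
from math import sqrt

def d_a(intercept, slope):
    """Sensitivity d_a from a linear zROC fit z(H) = a + s * z(F).

    Under the unequal-variance Gaussian model (s = sigma_new / sigma_old),
    d_a = sqrt(2 / (1 + s**2)) * a, expressing the mean separation in
    root-mean-square standard deviation units; it reduces to d' when s = 1.
    """
    return sqrt(2.0 / (1.0 + slope ** 2)) * intercept

# Illustrative values: a typical recognition-memory zROC slope of 0.8.
d_a(1.0, 0.8)  # about 1.10
d_a(1.0, 1.0)  # equal variance: d_a equals d' (here 1.0)
```

With a slope of 0.8, da comes out somewhat larger than the intercept alone, reflecting the extra spread contributed by the more variable old-item distribution.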
ROC analysis has become essential in medical diagnosis, machine learning, and any field where the tradeoff between hit rate and false alarm rate matters for evaluating decision systems.