Threshold theory, the oldest formal model of sensory detection, proposes that a stimulus must exceed a discrete internal threshold to be detected. Below threshold, the observer has no sensory information; above threshold, detection occurs with certainty (in the strict version) or with high probability (in the modified version). This all-or-none conception of detection dominated psychophysics from Fechner through the mid-20th century.
High-Threshold Theory
P(Hit) = p + (1 − p)·g
P(False Alarm) = g
p = probability of exceeding threshold (sensory sensitivity)
g = guessing rate when below threshold
The high-threshold model assumes that signal trials sometimes exceed the threshold (with probability p) and sometimes do not, while noise trials never do. When the threshold is not exceeded, the observer guesses "yes" with probability g. Because P(False Alarm) = g, the hit rate can be rewritten as P(Hit) = p + (1 − p)·P(False Alarm), so the model predicts a straight-line ROC running from (0, p) when g = 0 to (1, 1) when g = 1. This prediction is empirically violated: actual ROC curves are consistently bowed.
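The linear ROC prediction can be checked numerically. A minimal sketch (the value p = 0.6, the guessing rates, and the function name are illustrative, not from the text):

```python
# Hit and false-alarm rates predicted by the high-threshold model,
# to illustrate that sweeping the guessing rate g traces a straight ROC.

def high_threshold_roc(p, guess_rates):
    """Return (false_alarm, hit) pairs for a high-threshold observer.

    p           -- probability the signal exceeds the threshold
    guess_rates -- iterable of guessing rates g in [0, 1]
    """
    points = []
    for g in guess_rates:
        hit = p + (1 - p) * g   # detect, or miss the threshold and guess "yes"
        false_alarm = g         # noise never crosses the threshold, so FA = g
        points.append((false_alarm, hit))
    return points

# Sweeping g traces the predicted ROC: a straight line from (0, p) to (1, 1).
roc = high_threshold_roc(0.6, [0.0, 0.25, 0.5, 0.75, 1.0])
# Every point satisfies hit == p + (1 - p) * fa, i.e. the ROC is linear.
```

Plotting these points on probability coordinates would give the straight line that empirical ROCs consistently fail to match.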
SDT's Challenge
Signal Detection Theory replaced threshold theory by proposing continuous distributions rather than a discrete threshold. SDT naturally produces curved ROC functions and accounts for the full range of hit rate/false alarm rate combinations observed empirically. The failure of threshold theory's linear ROC prediction was one of the strongest arguments for adopting SDT as the standard framework for detection and discrimination.
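The contrast can be made concrete with the equal-variance Gaussian version of SDT. A minimal sketch (the d′ value and the set of criteria are illustrative assumptions, not from the text):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def sdt_roc_point(d_prime, c):
    """Hit and false-alarm rates at criterion c under equal-variance SDT."""
    hit = 1 - phi(c - d_prime)  # signal distribution centered at d'
    fa = 1 - phi(c)             # noise distribution centered at 0
    return fa, hit

# Sweeping the criterion traces a bowed ROC rather than a straight line:
points = [sdt_roc_point(1.5, c) for c in [-1.0, 0.0, 0.75, 1.5, 2.5]]
# Every point lies above the chance diagonal (hit > false alarm), and the
# curve is concave, matching the bow seen in empirical ROC data.
```

Unlike the threshold model, varying the criterion here moves both rates continuously along a curved function, which is why SDT accommodates the full range of observed hit/false-alarm combinations.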
Nevertheless, threshold-like models persist in some domains. Two-high-threshold theory, which allows false alarms to also arise from a discrete threshold process on noise trials, fits empirical ROCs better than the one-threshold model and remains in use in some recognition-memory analyses.
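The two-high-threshold equations can be sketched in the same style (the parameter names and the symmetric detect probabilities are illustrative assumptions, not from the text):

```python
def two_high_threshold(p_old, p_new, g):
    """Hit/false-alarm rates in a two-high-threshold recognition model.

    p_old -- probability an old (signal) item crosses the "detect old" threshold
    p_new -- probability a new (noise) item crosses the "detect new" threshold
    g     -- bias toward responding "old" when neither threshold is crossed
    """
    hit = p_old + (1 - p_old) * g  # detected as old, or undetected and guessed "old"
    fa = (1 - p_new) * g           # noise item escapes detection, guessed "old"
    return fa, hit

# Unlike the one-threshold model, false alarms now arise from a threshold
# process too; sweeping g still yields a straight ROC, but with slope
# (1 - p_old) / (1 - p_new) rather than a slope fixed at (1 - p).
fa, hit = two_high_threshold(0.5, 0.5, 0.4)
```

The extra noise-side threshold gives the model enough flexibility to fit many empirical ROCs, although the predicted ROC remains linear on probability coordinates.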