The sensitivity index d′ (d-prime) is the most widely used measure in Signal Detection Theory. It quantifies the distance between the means of the noise and signal-plus-noise distributions in standard deviation units. A d′ of 0 indicates chance performance (the two distributions completely overlap), while values above 2 indicate excellent discrimination ability.
Computation
Computing d′ requires converting the hit rate and false alarm rate to z-scores using the inverse of the standard normal cumulative distribution function (the probit transform). Because the probit transform is undefined for rates of exactly 0 or 1 (the corresponding z-scores are ±∞), a log-linear correction is commonly applied: add 0.5 to each cell count and 1 to each row total before computing the rates.
HR_corrected = (hits + 0.5) / (signal_trials + 1)
FAR_corrected = (false_alarms + 0.5) / (noise_trials + 1)
d′ = Φ⁻¹(HR) − Φ⁻¹(FAR)
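The computation above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation; the function and argument names are invented for this example, and the probit transform comes from the standard library's `statistics.NormalDist`.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from raw trial counts, with the log-linear correction
    (add 0.5 to each cell, 1 to each row total) applied throughout.
    Names and signature are illustrative, not a standard API."""
    z = NormalDist().inv_cdf  # probit transform, Φ⁻¹
    signal_trials = hits + misses
    noise_trials = false_alarms + correct_rejections
    # Corrected rates can never be exactly 0 or 1, so z() stays finite
    # even for a perfect (or empty) cell.
    hr = (hits + 0.5) / (signal_trials + 1)
    far = (false_alarms + 0.5) / (noise_trials + 1)
    return z(hr) - z(far)

print(d_prime(45, 5, 10, 40))   # a fairly sensitive observer
print(d_prime(50, 0, 0, 50))    # perfect data: finite thanks to the correction
```

Note that without the correction, the second call would require Φ⁻¹(1) and Φ⁻¹(0), which diverge to ±∞.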
Interpretation
Because d′ is measured in standard deviation units of the underlying distributions, it has a clear interpretation: d′ = 1 means the signal-plus-noise distribution is shifted one standard deviation above the noise distribution. Typical values in psychophysical experiments range from about 0.5 (difficult discrimination) to 4.0 (easy discrimination). In recognition memory, d′ values of 1–2 are common.
The power of d′ lies in its invariance to criterion shifts. If an observer becomes more liberal or conservative — shifting their criterion without any change in perceptual sensitivity — d′ remains constant. This property makes it superior to simple percentage correct, which conflates sensitivity and bias.
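The invariance claim can be checked directly under the equal-variance Gaussian model: for a true sensitivity d and criterion c (measured from the midpoint between the two distributions, a convention assumed here for illustration), the theoretical rates are HR = Φ(d/2 − c) and FAR = Φ(−d/2 − c), so the recovered d′ = (d/2 − c) − (−d/2 − c) = d for any c.

```python
from statistics import NormalDist

nd = NormalDist()

def observed_rates(d, c):
    """Theoretical hit and false-alarm rates for an equal-variance
    Gaussian observer with sensitivity d and criterion c
    (c relative to the midpoint; names are illustrative)."""
    hr = 1 - nd.cdf(c - d / 2)   # P(signal-plus-noise sample > criterion)
    far = 1 - nd.cdf(c + d / 2)  # P(noise sample > criterion)
    return hr, far

# Shifting the criterion changes both rates but leaves d' untouched.
for c in (-1.0, 0.0, 1.0):       # liberal, neutral, conservative
    hr, far = observed_rates(2.0, c)
    d_prime = nd.inv_cdf(hr) - nd.inv_cdf(far)
    print(f"c={c:+.1f}  HR={hr:.3f}  FAR={far:.3f}  d'={d_prime:.3f}")
```

All three lines report d′ = 2.000 even though the hit and false alarm rates differ markedly, whereas percentage correct would change with each criterion shift.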