If total reaction time is the sum of durations of successive processing stages, then the observed RT distribution is the convolution of the component stage distributions. Deconvolution — the mathematical inverse of convolution — aims to recover these component distributions from the observed data. This is a powerful but technically challenging approach that can reveal the latent temporal structure of cognitive processing.
The Convolution Framework
Under the assumption that processing stages are serial and independent, the distribution of total RT is the convolution of the distributions of individual stage durations:
f_RT(t) = f₁ * f₂ * ... * fₙ (where * denotes convolution)
In the Fourier domain: F_RT(ω) = F₁(ω) × F₂(ω) × ... × Fₙ(ω)
Deconvolution: Fᵢ(ω) = F_RT(ω) / ∏ⱼ≠ᵢ Fⱼ(ω)
The convolution theorem states that convolution in the time domain corresponds to multiplication in the frequency domain. This means that if we can estimate the Fourier transform (or characteristic function) of the observed RT distribution and of all but one component, we can solve for the remaining component by division in the frequency domain. This is the mathematical foundation of deconvolution.
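A minimal numerical check of this identity, using numpy; the two-stage setup (a Gaussian plus an exponential stage) and all parameter values are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two serial, independent stage durations (parameter values are illustrative).
stage1 = rng.normal(400, 40, 100_000)     # e.g. an early perceptual stage, ms
stage2 = rng.exponential(100, 100_000)    # e.g. a later decision stage, ms
rt = stage1 + stage2                      # total RT under the serial model

# Approximate the three densities on a common grid (1-ms bins, so no dt factor).
edges = np.linspace(0, 1500, 1501)
f1, _ = np.histogram(stage1, bins=edges, density=True)
f2, _ = np.histogram(stage2, bins=edges, density=True)
f_rt, _ = np.histogram(rt, bins=edges, density=True)

# Convolution theorem: time-domain convolution of the component densities...
conv = np.convolve(f1, f2)[:len(f1)]
# ...equals pointwise multiplication of their Fourier transforms.
n = 2 * len(f1)
conv_fft = np.fft.irfft(np.fft.rfft(f1, n) * np.fft.rfft(f2, n), n)[:len(f1)]

print(np.max(np.abs(conv - conv_fft)))  # the two routes agree to round-off
print(np.abs(conv - f_rt).mean())       # and both match the observed RT density
```

The agreement between the convolved component densities and the directly estimated RT density is what licenses working in either domain interchangeably.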
Methods of Deconvolution
Parametric deconvolution: The most common approach assumes a specific distributional form for each component. The ex-Gaussian model, for example, assumes RT = Gaussian + Exponential, which is the convolution of a normal distribution (representing early processing) and an exponential distribution (representing a later decision or retrieval stage). Parameters are estimated via maximum likelihood or method of moments.
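As a sketch of the parametric route, the snippet below simulates ex-Gaussian RTs and recovers (μ, σ, τ) by maximum likelihood. The parameter values and the scipy-based fitting routine are illustrative choices; note that scipy's exponnorm distribution parameterizes the exponential component through the shape K = τ/σ:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import exponnorm

rng = np.random.default_rng(1)

# Simulate ex-Gaussian RTs: Normal(mu, sigma) plus Exponential(tau), in ms.
mu, sigma, tau = 400.0, 40.0, 100.0
rt = rng.normal(mu, sigma, 5_000) + rng.exponential(tau, 5_000)

def nll(params):
    """Negative log-likelihood; scipy's exponnorm uses the shape K = tau/sigma."""
    m, s, t = params
    if s <= 0 or t <= 0:
        return np.inf
    return -exponnorm.logpdf(rt, t / s, loc=m, scale=s).sum()

# Crude but serviceable starting values based on the sample moments.
fit = minimize(nll, x0=[rt.mean(), rt.std() / 2.0, rt.std() / 2.0],
               method="Nelder-Mead", options={"maxiter": 2000, "maxfev": 4000})
print(fit.x)  # estimates of (mu, sigma, tau)
```

With a few thousand trials the estimates land close to the generating values; with realistic trial counts per condition, the sampling variability of τ in particular can be substantial.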
Fourier deconvolution: A nonparametric approach that estimates the characteristic function of the observed distribution, divides by the characteristic function of a known component, and inverse-transforms to recover the unknown component. This method is sensitive to noise, requiring regularization techniques such as kernel smoothing or Tikhonov regularization.
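A noise-free illustration of the mechanics: the ex-Gaussian density is the convolution of a Gaussian and an exponential, so dividing its spectrum by the exponential's spectrum should return the Gaussian. All parameter values are arbitrary choices for the sketch; real data would add the noise problems discussed next:

```python
import numpy as np
from scipy.stats import expon, norm

# Build an "observed" ex-Gaussian density by convolution, then invert it.
t = np.arange(0, 2000.0)                  # 1-ms grid
f_gauss = norm.pdf(t, loc=400, scale=40)  # "unknown" component (ground truth)
f_exp = expon.pdf(t, scale=100)           # known component
f_rt = np.convolve(f_gauss, f_exp)        # observed ex-Gaussian density

# Divide spectra and transform back; pad so the linear convolution is exact.
n = len(f_rt) + 1
f_rec = np.fft.irfft(np.fft.rfft(f_rt, n) / np.fft.rfft(f_exp, n), n)[:len(t)]

print(np.max(np.abs(f_rec - f_gauss)))  # recovery is exact up to round-off
```

The division is well behaved here because the exponential's spectrum decays only slowly; a known component with a rapidly decaying spectrum (e.g. a Gaussian) makes the same division far more fragile.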
Deconvolution is mathematically ill-posed: small errors in the estimated distributions can lead to large errors in the recovered components. This is because division in the frequency domain amplifies noise at high frequencies. Practical deconvolution therefore requires either strong parametric assumptions (which regularize the problem by constraining the solution space) or explicit regularization techniques that smooth the frequency-domain estimates. The choice between parametric and nonparametric methods involves a tradeoff between model assumptions and estimation stability.
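The ill-posedness is easy to see numerically. Below, the observed density is estimated from a finite sample, and a known exponential stage (its parameters assumed known) is deconvolved out: naive spectral division amplifies histogram noise, while a Tikhonov-style filter damps the offending frequencies. The setup, parameters, and by-hand choice of the regularization constant are all illustrative:

```python
import numpy as np
from scipy.stats import expon, norm

rng = np.random.default_rng(3)

# Finite-sample version: estimate the observed RT density from data.
rt = rng.normal(400, 40, 20_000) + rng.exponential(100, 20_000)
t = np.arange(0, 2000.0)                      # 1-ms grid
f_rt, _ = np.histogram(rt, bins=np.append(t, 2000.0), density=True)

n = 2 * len(t)
F_rt = np.fft.rfft(f_rt, n)
F_exp = np.fft.rfft(expon.pdf(t, scale=100), n)   # known component's spectrum

# Naive spectral division: histogram noise is amplified wherever |F_exp| is small.
naive = np.fft.irfft(F_rt / F_exp, n)[:len(t)]

# Tikhonov-style filter conj(F)/( |F|^2 + alpha ): shrinks exactly those
# frequencies. alpha is set by hand here; in practice it would be tuned,
# e.g. by cross-validation.
alpha = 1e-2
reg = np.fft.irfft(np.conj(F_exp) * F_rt / (np.abs(F_exp) ** 2 + alpha),
                   n)[:len(t)]

truth = norm.pdf(t, 400, 40)                  # the component being recovered
print(np.abs(naive - truth).max(), np.abs(reg - truth).max())
```

The naive estimate is dominated by amplified noise, while the regularized one tracks the true component at the cost of a small smoothing bias, which is the tradeoff described above.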
Applications in RT Research
De Jong (1991) used deconvolution to separate the response activation and response selection components in the Eriksen flanker task, showing that the flanker compatibility effect arises primarily in the activation stage. Yantis, Meyer, and Smith (1991) applied Fourier deconvolution to decompose visual search RT distributions into target-detection and response-execution components.
A particularly elegant application involves comparing conditions that share all processing stages except one: if condition B contains the same stages as condition A plus one inserted stage, then f_B = f_A * g, so deconvolving condition A's RT distribution from condition B's isolates g, the duration distribution of the inserted stage. (It is deconvolution across conditions, not simple subtraction of the distributions, that achieves this.) This logic has been used to isolate memory retrieval time distributions from encoding and response execution contributions.
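The additivity that underlies this logic can be checked quickly at the level of moments, since cumulants add under convolution; the hypothetical two-condition setup below is an arbitrary choice for the sketch, and recovering the inserted stage's full distribution would use the deconvolution machinery above:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical two-condition experiment: condition B is condition A plus one
# inserted stage. All distributions and parameters are illustrative.
def shared(n):
    return rng.normal(350, 35, n) + rng.exponential(80, n)

extra = rng.gamma(4.0, 30.0, 20_000)   # inserted stage: mean 120, variance 3600
rt_a = shared(20_000)
rt_b = shared(20_000) + extra

# Cumulants add under convolution, so the inserted stage's mean and variance
# fall out as simple between-condition differences.
print(rt_b.mean() - rt_a.mean())   # close to 120
print(rt_b.var() - rt_a.var())     # close to 3600
```

Mean differences alone are the classic Donders subtraction; the distributional version adds the variance (and higher cumulants), and full deconvolution recovers the entire shape of the inserted stage.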
Modern Developments
Recent work has connected deconvolution methods to Bayesian estimation frameworks. Rouder, Lu, Speckman, Sun, and Jiang (2005) developed hierarchical Bayesian methods for estimating the parameters of the ex-Gaussian and other convolutional models, providing credible intervals for component parameters. These methods handle small sample sizes more gracefully than classical deconvolution and naturally accommodate individual differences through hierarchical structure.