In item response theory, the concept of information replaces and generalizes classical test theory's reliability coefficient. While reliability is a single number summarizing measurement precision across the entire score range, the information function describes precision as a continuous function of the latent trait θ. An item provides maximum information near its difficulty level and diminishing information as the examinee's ability moves away from that point. This trait-dependent precision is one of IRT's most powerful features.
Definition and Derivation
For the 2PL: I_i(θ) = a_i² × P_i(θ) × Q_i(θ)
where P_i(θ) = probability of a correct response to item i
      Q_i(θ) = 1 − P_i(θ)
      a_i    = discrimination parameter of item i
The item information function is derived from the Fisher information for the item's contribution to the likelihood. It equals the expected square of the first derivative of the log-likelihood (the score) with respect to θ, or equivalently, the negative of the expected second derivative. For the 2PL model, this simplifies to a² × P(θ) × Q(θ), which is maximized when P(θ) = Q(θ) = 0.50 — that is, when θ equals the item difficulty b. At this point, the maximum information is a²/4.
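A minimal numerical sketch of the 2PL information function (the parameter values below are hypothetical, chosen only for illustration):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """2PL item information: a^2 * P * Q."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

a, b = 1.5, 0.0
# At theta = b, P = Q = 0.5 and information attains its maximum a^2/4.
print(info_2pl(b, a, b))        # a^2/4 = 0.5625
print(info_2pl(b + 2.0, a, b))  # smaller as theta moves away from b
```

Evaluating the function on a grid of θ values traces the familiar bell-shaped information curve centered at b.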
Test Information and Standard Error
Under the assumption of local independence — that item responses are conditionally independent given θ — the test information function is simply the sum of item information functions across all items:
I(θ) = Σ_i I_i(θ)
SE(θ) = 1 / √I(θ)
Reliability at θ: ρ(θ) ≈ 1 − 1/I(θ)
The standard error of the ability estimate is the reciprocal of the square root of the test information. This provides a direct, ability-specific measure of measurement precision. A test may measure very precisely in the middle of the ability range (where most items provide information) but poorly at the extremes. This insight is invisible in classical test theory, which reports a single reliability coefficient.
The item information function is the engine of computerized adaptive testing. At each step, the CAT algorithm selects the item that provides maximum information at the examinee's current estimated ability level. By concentrating measurement where it matters most, adaptive tests achieve the same precision as conventional tests with far fewer items — typically 50% fewer. The information function also guides test assembly: automated test assembly algorithms select item sets that meet content specifications while maximizing information at critical score points (e.g., near pass/fail cut-scores).
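The selection step can be sketched in a few lines, assuming a small hypothetical 2PL bank (operational CAT engines layer exposure control and content constraints on top of this):

```python
import math

def item_info(theta, a, b):
    """2PL item information: a^2 * P * Q."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Hypothetical bank of (a, b) items; list indices identify items.
bank = [(0.8, -2.0), (1.4, -0.5), (1.1, 0.0), (1.6, 0.4), (1.0, 2.0)]

def select_next_item(theta_hat, administered):
    """Return the index of the unused item with maximum information at theta_hat."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: item_info(theta_hat, *bank[i]))

# With a current estimate near 0.4, the highly discriminating item
# located at b = 0.4 provides the most information and is selected.
print(select_next_item(0.4, administered={0}))
```

After each response, the ability estimate is updated and the step repeats, which is what concentrates measurement near the examinee's ability.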
Information under Different Models
The form of the information function depends on the IRT model. Under the 1PL (Rasch) model, all items have equal discrimination, so all item information functions have the same height (a²/4) and differ only in location. Under the 3PL model, the guessing parameter reduces information, especially at lower ability levels, because correct responses from low-ability examinees are ambiguous — they may reflect ability or guessing.
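One way to see the guessing penalty is the closed-form 3PL information, I(θ) = a² × (Q/P) × [(P − c)/(1 − c)]², which reduces to the 2PL expression when c = 0. A sketch with hypothetical parameters:

```python
import math

def info_3pl(theta, a, b, c):
    """3PL item information: a^2 * (Q/P) * ((P - c)/(1 - c))^2."""
    logistic = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    p = c + (1.0 - c) * logistic
    q = 1.0 - p
    return a * a * (q / p) * ((p - c) / (1.0 - c)) ** 2

a, b = 1.2, 0.0
for theta in (-2.0, 0.0, 2.0):
    no_guess = info_3pl(theta, a, b, 0.0)   # c = 0 recovers the 2PL
    guess = info_3pl(theta, a, b, 0.25)     # 25% guessing floor
    print(f"theta = {theta:+.1f}: 2PL = {no_guess:.3f}, 3PL = {guess:.3f}")
```

The printed values show the 3PL curve falling well below the 2PL curve at low θ, where correct answers are most likely to be guesses, while the two nearly coincide at high θ.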
For polytomous items (scored in more than two categories), information functions are computed from the category response functions using the same Fisher information principle. The graded response model, partial credit model, and generalized partial credit model each yield distinct information function shapes. Polytomous items generally provide more information per item than dichotomous items because each response carries more statistical information about the latent trait. The item information function thus provides a unified, model-based metric for evaluating and comparing items regardless of their scoring format.
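A minimal sketch of this principle for the graded response model, using the general categorical Fisher information I(θ) = Σ_k (P_k′)²/P_k with finite-difference derivatives (the item parameters below are hypothetical):

```python
import math

def grm_category_probs(theta, a, bs):
    """GRM: difference adjacent cumulative logistic curves into category probabilities.
    bs must be ordered thresholds (ascending)."""
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in bs] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(bs) + 1)]

def grm_info(theta, a, bs, h=1e-5):
    """Item information sum_k (P_k')^2 / P_k, via central differences."""
    p = grm_category_probs(theta, a, bs)
    p_hi = grm_category_probs(theta + h, a, bs)
    p_lo = grm_category_probs(theta - h, a, bs)
    return sum(((hi - lo) / (2 * h)) ** 2 / pk
               for pk, hi, lo in zip(p, p_hi, p_lo))

# Hypothetical four-category item with ordered thresholds.
a, bs = 1.3, [-1.0, 0.0, 1.0]
print(round(sum(grm_category_probs(0.0, a, bs)), 6))  # category probs sum to 1
print(grm_info(0.0, a, bs))
```

Because the information sums contributions from every category boundary, a well-targeted polytomous item typically yields a broader, taller information curve than a single dichotomous item with the same discrimination.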