The Causal Effect of Answer Changing on Multiple-Choice Items
The causal effect of changing initial answers on final scores is a long-standing puzzle in the educational and psychological measurement literature. This paper formalizes the question using the standard framework for causal inference, the potential outcomes framework. Clear definitions of the treatment and the corresponding counterfactuals, expressed as potential outcomes, allow us to estimate the causal effect of answer changing without special study designs or models of examinees' answer-changing behavior. We separately define the average treatment effect and the average treatment effect on the treated, and show that each effect can be computed directly from the proportions of examinees' answer-changing patterns. Our findings show that the traditional method in the literature of comparing the proportions of "wrong to right" and "right to wrong" patterns, a method that has recently been criticized, does correctly estimate the sign of the average answer-changing effect, but only for those examinees who actually changed their initial responses; it does not account for those who retained their responses. We illustrate our procedures by reanalyzing van der Linden, Jeon, and Ferrara's (2011) data. The results show that the answer-changing effect is heterogeneous: it is positive for examinees who changed their initial responses but negative for those who did not. We discuss theoretical and practical implications of our findings.
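As a minimal sketch of the traditional comparison the abstract refers to, the code below computes the difference between the proportions of "wrong to right" and "right to wrong" changes among changed responses. The pattern counts and the function name are illustrative assumptions, not the paper's notation; the paper's actual estimators for the average treatment effects are not reproduced here.

```python
def wr_rw_difference(n_wr, n_rw, n_ww):
    """Traditional sign comparison among changed responses (illustrative).

    n_wr: count of wrong-to-right changes
    n_rw: count of right-to-wrong changes
    n_ww: count of wrong-to-wrong changes (changed to another wrong option)

    Returns p_WR - p_RW, where proportions are taken over all changed
    responses. A positive value suggests changing helped, on average,
    among those who changed (the "treated" group in the abstract's terms).
    """
    n_changed = n_wr + n_rw + n_ww
    return (n_wr - n_rw) / n_changed

# Invented counts for illustration only.
diff = wr_rw_difference(n_wr=120, n_rw=60, n_ww=30)
print(diff > 0)  # a positive sign points to a positive effect among changers
```

Note that, as the abstract emphasizes, this quantity says nothing about examinees who retained their initial responses, so it informs the sign of the effect on the treated rather than the overall average treatment effect.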