Judging the Judges: Evaluating the Performance of International Gymnastics Judges
Judging a gymnastics routine is a noisy process, and the performance of judges varies widely. In collaboration with the Fédération Internationale de Gymnastique (FIG) and Longines, we are designing and implementing an improved statistical engine to analyze the performance of gymnastics judges during and after major competitions such as the Olympic Games and the World Championships. The engine, called the Judge Evaluation Program (JEP), has three objectives: (1) provide constructive feedback to judges, executive committees and national federations; (2) assign the best judges to the most important competitions; (3) detect bias and outright cheating. Using data from international gymnastics competitions held during the 2013--2016 Olympic cycle, we first develop a marking score evaluating the accuracy of the marks given by gymnastics judges. We model this random judging process accurately using heteroscedastic random variables: the marking score normalizes the difference between the mark of a judge and the true performance of a gymnast by the standard deviation of the judging error, which we estimate from data for each apparatus as a function of performance quality. This dependence between judging variability and performance quality had never been properly studied. We then study ranking scores assessing the extent to which judges rate gymnasts in the correct order, and explain why we ultimately chose not to implement them. We also use outlier detection to pinpoint gymnasts who were poorly evaluated by judges. Finally, we discuss interesting observations and discoveries that led to recommendations and rule changes at the FIG.
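To illustrate the kind of computation the marking score involves (a minimal sketch, not the exact JEP implementation), the snippet below assumes the true performance of each gymnast is approximated by a reference control score and that the per-apparatus standard deviation of the judging error has already been fitted as a function of that score; the names `sigma_of`, `control_scores`, and the linear error model are hypothetical.

```python
import numpy as np

def marking_score(judge_marks, control_scores, sigma_of):
    """Illustrative marking score for one judge on one apparatus.

    judge_marks    : marks the judge gave to each performance
    control_scores : reference ("true") performance values
    sigma_of       : callable mapping a control score to the estimated
                     standard deviation of the judging error for this
                     apparatus (captures the heteroscedasticity)

    Returns the root-mean-square of the judge's errors, each normalized
    by the expected variability at that performance level; values near 1
    indicate typical accuracy, larger values a less accurate judge.
    """
    marks = np.asarray(judge_marks, dtype=float)
    truth = np.asarray(control_scores, dtype=float)
    sigma = np.array([sigma_of(c) for c in truth])
    z = (marks - truth) / sigma          # standardized judging errors
    return float(np.sqrt(np.mean(z ** 2)))

# Hypothetical example: an assumed decreasing error model (better routines
# judged more consistently) and marks from a single judge.
sigma_model = lambda c: 0.3 - 0.02 * (c - 8.0)   # assumed, not from the paper
judge = [8.35, 8.90, 7.60, 9.15]
control = [8.30, 9.00, 7.50, 9.20]
print(marking_score(judge, control, sigma_model))
```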