Systematic Overestimation of Machine Learning Performance in Neuroimaging Studies of Depression

12/13/2019
by Claas Flint, et al.

We currently observe a disconcerting phenomenon in machine learning studies in psychiatry: while we would expect larger samples to yield better results due to the availability of more data, larger machine learning studies consistently show much weaker performance than the numerous small-scale studies. Here, we systematically investigated this effect, focusing on one of the most heavily studied questions in the field, namely the classification of patients suffering from Major Depressive Disorder (MDD) versus healthy controls. Drawing upon a balanced sample of N = 1,868 MDD patients and healthy controls from our recent international Predictive Analytics Competition (PAC), we first trained and tested a classification model on the full dataset, which yielded an accuracy of 61%. Next, we mimicked the process by which researchers would draw samples of various sizes (N = 4 to N = 150) from the population and showed a strong risk of overestimation. Specifically, for small sample sizes (N = 20), we observed accuracies of up to 95%, and for medium sample sizes, accuracies of up to 75% were still found. Importantly, further investigation showed that sufficiently large test sets effectively protect against performance overestimation, whereas larger datasets per se do not. While these results question the validity of a substantial part of the current literature, we outline the relatively low-cost remedy of larger test sets.
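The experiment described above is, at its core, a resampling simulation: repeatedly draw small balanced subsamples from a large population with a weak group difference, estimate classification accuracy on each subsample, and compare the spread of those estimates with the population-level performance. The sketch below is a minimal illustration of that idea, not the authors' code; it uses synthetic data, scikit-learn's LogisticRegression, and cross-validation, and all sample sizes, effect sizes, and feature counts are illustrative assumptions.

# Minimal sketch (synthetic data, not the authors' code): how cross-validated
# accuracy estimates from small samples can drift far above the population-level
# performance, while estimates from large samples concentrate near it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Assumed "population": 2,000 subjects, 50 features, weak group difference
# tuned to roughly 60% separability (loosely mirroring the 61% full-sample result).
n_pop, n_feat = 2000, 50
y_pop = np.repeat([0, 1], n_pop // 2)
X_pop = rng.normal(size=(n_pop, n_feat))
X_pop[y_pop == 1, :8] += 0.2  # small shift on a few features

def estimated_accuracy(n, n_splits=5):
    """Draw a balanced subsample of size n and return its cross-validated accuracy."""
    idx = np.concatenate([
        rng.choice(np.where(y_pop == c)[0], n // 2, replace=False) for c in (0, 1)
    ])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X_pop[idx], y_pop[idx], cv=n_splits).mean()

# Repeat the "study" many times at each sample size and inspect the spread.
for n in (20, 100, 1000):
    accs = [estimated_accuracy(n) for _ in range(100)]
    print(f"N={n:4d}  mean={np.mean(accs):.2f}  max={np.max(accs):.2f}")
# Typical outcome: at N=20, individual estimates can land far above the true
# separability purely by chance; at N=1000, they cluster close to it.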
