Addressing catastrophic forgetting for medical domain expansion

03/27/2021
by Praveer Singh, et al.

Model brittleness is a key concern when deploying deep learning models in real-world medical settings. A model that performs well at one institution may suffer a significant decline in performance when tested at other institutions. While pooling datasets from multiple institutions and re-training may provide a straightforward solution, it is often infeasible and may compromise patient privacy. An alternative approach is to fine-tune the model on subsequent institutions after training it on the original institution. Notably, this approach degrades model performance at the original institution, a phenomenon known as catastrophic forgetting. In this paper, we develop an approach to address catastrophic forgetting based on elastic weight consolidation combined with modulation of batch normalization statistics, under two scenarios: first, expanding the domain from one imaging system's data to another imaging system's, and second, expanding the domain from a large multi-institutional dataset to a single-institution dataset. We show that our approach outperforms several other state-of-the-art approaches and provide theoretical justification for the efficacy of batch normalization modulation. The results of this study are generally applicable to the deployment of any clinical deep learning model that requires domain expansion.
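This page does not include code, but as a rough sketch of the two ingredients the abstract names, the snippet below illustrates a standard elastic weight consolidation (EWC) penalty and a simple interpolation of batch normalization statistics, assuming PyTorch. The function names, the old_params and fisher dictionaries, and the hyperparameters lam and alpha are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's released code), assuming PyTorch.
import torch
import torch.nn as nn

def ewc_penalty(model, old_params, fisher, lam=1.0):
    """Quadratic EWC penalty anchoring parameters to the original-domain
    optimum, weighted by a diagonal Fisher information estimate.

    old_params and fisher are dicts of tensors keyed by parameter name,
    computed on the original institution's data before fine-tuning.
    """
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty

@torch.no_grad()
def blend_bn_stats(original_model, finetuned_model, alpha=0.5):
    """Modulate batch-norm statistics: interpolate each BatchNorm layer's
    running mean/variance between the original- and new-domain estimates.
    alpha = 1 keeps the original statistics; alpha = 0 adopts the new ones.
    """
    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    for m_orig, m_new in zip(original_model.modules(), finetuned_model.modules()):
        if isinstance(m_orig, bn_types):
            # lerp_(end, w) computes self + w * (end - self),
            # i.e. alpha * original + (1 - alpha) * new.
            m_orig.running_mean.lerp_(m_new.running_mean, 1 - alpha)
            m_orig.running_var.lerp_(m_new.running_var, 1 - alpha)
```

Under these assumptions, fine-tuning on the new institution would minimize the task loss plus ewc_penalty(...), while alpha controls how far the deployed model's normalization statistics move toward the new domain.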


Related research

09/25/2019 · Towards continuous learning for glioma segmentation with elastic weight consolidation
When finetuning a convolutional neural network (CNN) on data from a new ...

11/21/2022 · Towards continually learning new languages
Multilingual speech recognition with neural networks is often implemente...

09/10/2017 · Institutionally Distributed Deep Learning Networks
Deep learning has become a promising approach for automated medical diag...

06/16/2023 · Catastrophic Forgetting in the Context of Model Updates
A large obstacle to deploying deep learning models in practice is the pr...

11/11/2021 · Kronecker Factorization for Preventing Catastrophic Forgetting in Large-scale Medical Entity Linking
Multi-task learning is useful in NLP because it is often practically des...

10/17/2022 · Review Learning: Alleviating Catastrophic Forgetting with Generative Replay without Generator
When a deep learning model is sequentially trained on different datasets...

03/19/2020 · Lifelong Learning with Searchable Extension Units
Lifelong learning remains an open problem. One of its main difficulties ...
