A Primer in BERTology: What we know about how BERT works

02/27/2020
by Anna Rogers, et al.

Transformer-based models are now widely used in NLP, but much about their inner workings remains poorly understood. This paper describes what is known to date about the widely used BERT model (Devlin et al., 2019), synthesizing over 40 analysis studies. We also provide an overview of proposed modifications to the model and its training regime, and outline directions for further research.
