BERT & Family Eat Word Salad: Experiments with Text Understanding

01/10/2021 ∙ by Ashim Gupta, et al.

In this paper, we study the response of large models from the BERT family to incoherent inputs that should confuse any model that claims to understand natural language. We define simple heuristics to construct such examples. Our experiments show that state-of-the-art models consistently fail to recognize them as ill-formed, and instead produce high-confidence predictions on them. Finally, we show that if models are explicitly trained to recognize invalid inputs, they can be robust to such attacks without a drop in performance.
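To make the setup concrete, below is a minimal sketch of one plausible corruption heuristic: randomly permuting the words of an input sentence. The function name and the choice of shuffling as the corruption are illustrative assumptions, not necessarily the authors' exact construction.

    import random

    def word_salad(sentence, seed=None):
        """Corrupt a sentence by randomly permuting its words.

        The shuffled sentence keeps the original vocabulary but loses
        its syntax, so a model that understands language should flag it
        as ill-formed rather than classify it with high confidence.
        """
        rng = random.Random(seed)
        words = sentence.split()
        rng.shuffle(words)  # in-place permutation of the word list
        return " ".join(words)

    # Example: turn a well-formed sentence into word salad.
    print(word_salad("The cat sat quietly on the warm windowsill.", seed=0))

One plausible reading of "explicitly trained to recognize invalid inputs" is that perturbed sentences like these are paired with an extra invalid label and mixed into the training data, though the abstract does not spell out the training recipe.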
