Towards Trustworthy Deception Detection: Benchmarking Model Robustness across Domains, Modalities, and Languages

04/23/2021
by Maria Glenski, et al.

Evaluating model robustness is critical when developing trustworthy models, not only to gain a deeper understanding of model behavior, strengths, and weaknesses, but also to develop future models that are generalizable and robust across the environments they may encounter in deployment. In this paper we present a framework for measuring model robustness for an important but difficult text classification task: deceptive news detection. We evaluate model robustness to out-of-domain data, modality-specific features, and languages other than English. Our investigation focuses on three types of models: LSTM models trained on multiple datasets (Cross-Domain), several fusion LSTM models trained with images and text and evaluated with three state-of-the-art embeddings, BERT, ELMo, and GloVe (Cross-Modality), and character-level CNN models trained on multiple languages (Cross-Language). Our analyses reveal a significant drop in performance when testing neural models on out-of-domain data and non-English languages, a drop that may be mitigated by using diverse training data. We find that with additional image content as input, ELMo embeddings yield significantly fewer errors compared to BERT or GloVe. Most importantly, this work not only carefully analyzes deception model robustness but also provides a framework for these analyses that can be applied to new models or extended datasets in the future.
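As a rough illustration of the Cross-Domain protocol described above, the sketch below is not the authors' released code: the `Domain` container, the generic `fit` callback, and the majority-class baseline are hypothetical stand-ins for the paper's LSTM classifiers and datasets. It trains a model on each domain, evaluates it on every domain's test set, and summarizes the out-of-domain performance drop.

```python
# Minimal sketch of a cross-domain robustness harness, assuming a
# generic `fit` callback in place of the paper's LSTM classifiers.
from dataclasses import dataclass
from itertools import product
from typing import Callable, Dict, List, Tuple

Example = Tuple[str, int]      # (text, label) pair
Model = Callable[[str], int]   # text -> predicted label

@dataclass
class Domain:                  # hypothetical container for one dataset
    name: str
    train: List[Example]
    test: List[Example]

def accuracy(model: Model, data: List[Example]) -> float:
    return sum(model(text) == label for text, label in data) / len(data)

def cross_domain_matrix(domains: List[Domain],
                        fit: Callable[[List[Example]], Model]
                        ) -> Dict[Tuple[str, str], float]:
    """Train on each domain (rows) and test on every domain (columns)."""
    matrix = {}
    for src in domains:
        model = fit(src.train)  # e.g. an LSTM deception classifier
        for tgt in domains:
            matrix[(src.name, tgt.name)] = accuracy(model, tgt.test)
    return matrix

def out_of_domain_drop(matrix: Dict[Tuple[str, str], float],
                       domains: List[Domain]) -> float:
    """Mean in-domain accuracy minus mean out-of-domain accuracy."""
    in_dom = [matrix[(d.name, d.name)] for d in domains]
    out_dom = [matrix[(s.name, t.name)]
               for s, t in product(domains, domains) if s.name != t.name]
    return sum(in_dom) / len(in_dom) - sum(out_dom) / len(out_dom)

def fit_majority(train: List[Example]) -> Model:
    """Trivial majority-class baseline standing in for a real model."""
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return lambda text: majority
```

The Cross-Modality and Cross-Language analyses can reuse the same harness: only `fit` changes (a fusion LSTM over image and text embeddings, or a character-level CNN), and the domains become modalities or languages.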

