Investigating Cross-Domain Behaviors of BERT in Review Understanding

06/27/2023
by Albert Lu, et al.

Review score prediction, which requires understanding review text, is a critical real-world application of natural language processing. Because product reviews come from dissimilar text domains, a common practice is to fine-tune BERT models on reviews from individual domains. However, there has not yet been an empirical study of the cross-domain behavior of BERT models across the various tasks of product review understanding. In this project, we investigate text-classification BERT models fine-tuned on single-domain and multi-domain Amazon review data. We find that although single-domain models achieve marginally better performance on their corresponding domain than multi-domain models, multi-domain models outperform single-domain models when evaluated on multi-domain data, on single-domain data that the single-domain model was not fine-tuned on, and on average across all tests. Although slight gains in accuracy can be achieved by fine-tuning single-domain models, computational resources and costs can be reduced by using multi-domain models that perform well across domains.
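The setup described above is standard BERT fine-tuning for sequence classification. The sketch below illustrates that pattern with the Hugging Face transformers library; the dataset name (amazon_polarity as a stand-in for the paper's domain-split Amazon review data), label count, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch of fine-tuning a BERT classifier on Amazon review text.
# A single-domain model would use reviews from one product category; a
# multi-domain model would pool reviews from several categories.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

MODEL_NAME = "bert-base-uncased"

# Placeholder corpus; the paper predicts review scores, so num_labels would
# be 5 for 1-5 star prediction rather than 2 for binary polarity.
dataset = load_dataset("amazon_polarity")

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate/pad review text to a fixed BERT input length.
    return tokenizer(
        batch["content"], truncation=True, padding="max_length", max_length=256
    )

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2
)

args = TrainingArguments(
    output_dir="bert-review-classifier",
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=args,
    # Small subsets keep the sketch quick to run; the study would train on
    # full single-domain or pooled multi-domain splits.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(10_000)),
    eval_dataset=tokenized["test"].select(range(2_000)),
)

trainer.train()
```

Evaluating such a model on held-out reviews from its own domain, from other domains, and from a mixed-domain pool is what yields the cross-domain comparisons reported in the abstract.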

Related research

03/15/2023
Cross-domain Sentiment Classification in Spanish
Sentiment Classification is a fundamental task in the field of Natural L...

08/15/2021
Maps Search Misspelling Detection Leveraging Domain-Augmented Contextual Representations
Building an independent misspelling detector and serve it before correct...

04/18/2022
Ingredient Extraction from Text in the Recipe Domain
In recent years, there has been an increase in the number of devices wit...

07/21/2021
Improved Text Classification via Contrastive Adversarial Training
We propose a simple and general method to regularize the fine-tuning of ...

08/28/2022
Cross-domain Cross-architecture Black-box Attacks on Fine-tuned Models with Transferred Evolutionary Strategies
Fine-tuning can be vulnerable to adversarial attacks. Existing works abo...

04/17/2021
Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models
There is growing evidence that pretrained language models improve task-s...

07/08/2022
ABB-BERT: A BERT model for disambiguating abbreviations and contractions
Abbreviations and contractions are commonly found in text across differe...
