Understanding RoBERTa's Mood: The Role of Contextual-Embeddings as User-Representations for Depression Prediction

12/27/2021
by Matthew Matero, et al.

Many works in natural language processing have shown connections between a person's personal discourse and their personality, demographics, and mental health states. However, many of the machine learning models that predict such human traits have yet to fully exploit pre-trained language models and contextual embeddings. Using a person's degree of depression as a case study, we conduct an empirical analysis of which off-the-shelf language model, individual layers, and combinations of layers are most promising when applied to human-level NLP tasks. Notably, despite the standard practice in past work of using either the second-to-last layer or the last 4 layers, we find that layer 19 (sixth-to-last) performs best on its own, while when combining multiple layers, distributing them across the second half of the network (i.e., layers 12+ of the 24 layers) works best.
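To illustrate the kind of pipeline the analysis evaluates, below is a minimal sketch of extracting a single-layer contextual embedding from RoBERTa-large and averaging it into a user-level representation. It assumes the HuggingFace transformers library; the function names (message_embedding, user_representation) and mean pooling as the aggregation step are illustrative assumptions, not the authors' released code.

import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaModel.from_pretrained("roberta-large", output_hidden_states=True)
model.eval()

LAYER = 19  # sixth-to-last of RoBERTa-large's 24 transformer layers

def message_embedding(text):
    # Tokenize one message; truncate to the model's 512-token limit.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple of 25 tensors: the embedding-layer
    # output at index 0, then one entry per transformer layer,
    # so index 19 is the 19th transformer layer's output.
    layer_states = outputs.hidden_states[LAYER]   # (1, seq_len, 1024)
    return layer_states.mean(dim=1).squeeze(0)    # (1024,)

def user_representation(messages):
    # Average per-message vectors into a single user-level vector.
    return torch.stack([message_embedding(m) for m in messages]).mean(dim=0)

A downstream model, for example a simple regressor over these 1024-dimensional user vectors, would then predict the degree-of-depression score.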

