Language Models have a Moral Dimension

03/08/2021
by Patrick Schramowski, et al.

Artificial writing is permeating our lives due to recent advances in large-scale, transformer-based language models (LMs) such as BERT, its variants, GPT-2/3, and others. Using them as pretrained models and fine-tuning them for specific tasks, researchers have extended the state of the art for many NLP tasks and shown that they not only capture linguistic knowledge but also retain general knowledge implicitly present in the data. These and other successes are exciting. Unfortunately, LMs trained on unfiltered text corpora suffer from degenerate and biased behaviour. While this is well established, we show that recent, improved LMs also store ethical and moral values of society and actually bring a “moral dimension” to the surface: the values are captured geometrically by a direction in the embedding space, reflecting well the agreement of phrases with social norms implicitly expressed in the training texts. This provides a path for attenuating or even preventing toxic degeneration in LMs. Since one can now rate the (non-)normativity of arbitrary phrases without explicitly training the LM for this task, the moral dimension can be used as a “moral compass” guiding (even other) LMs towards producing normative text, as we will show.
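The geometric idea can be illustrated with a minimal sketch: embed a set of short action phrases with a sentence encoder, fit a PCA over the embeddings, and use the signed projection onto the first principal component as a normativity score. The seed phrases, the all-MiniLM-L6-v2 encoder, and the sign convention below are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch (not the paper's exact pipeline): estimate a "moral
# direction" as the first principal component of sentence embeddings of
# clearly normative vs. non-normative phrases, then score new phrases by
# projecting them onto that direction.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

# Assumed encoder choice; any sentence embedding model could be substituted.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical seed phrases expressing socially approved / disapproved actions.
positive = ["help people", "be honest", "thank your friends", "protect animals"]
negative = ["harm people", "lie to others", "steal money", "pollute the environment"]

emb = model.encode(positive + negative)   # shape: (n_phrases, dim)
pca = PCA(n_components=1).fit(emb)
direction = pca.components_[0]            # candidate moral direction
mean = emb.mean(axis=0)

def moral_score(phrase: str) -> float:
    """Signed projection of a phrase embedding onto the moral direction."""
    v = model.encode([phrase])[0] - mean
    return float(np.dot(v, direction))

# Orient the axis so that normative seed phrases receive positive scores.
if np.mean([moral_score(p) for p in positive]) < 0:
    direction = -direction

print(moral_score("greet your neighbours"))   # expected: positive score
print(moral_score("insult your neighbours"))  # expected: negative score
```

Such a score can then act as the “moral compass” described above, e.g. for filtering or re-ranking candidate generations of another LM towards normative text; the exact phrase sets, encoder, and guidance mechanism used in the paper may differ from this sketch.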

Related research

10/22/2020
Language Models are Open Knowledge Graphs
This paper shows how to construct knowledge graphs (KGs) from pre-traine...

04/15/2020
Coreferential Reasoning Learning for Language Representation
Language representation models such as BERT could effectively capture co...

09/03/2019
Language Models as Knowledge Bases?
Recent progress in pretraining language models on large textual corpora ...

02/10/2020
How Much Knowledge Can You Pack Into the Parameters of a Language Model?
It has recently been observed that neural language models trained on uns...

12/11/2019
BERT has a Moral Compass: Improvements of ethical and moral values of machines
Allowing machines to choose whether to kill humans would be devastating ...

04/15/2022
Polling Latent Opinions: A Method for Computational Sociolinguistics Using Transformer Language Models
Text analysis of social media for sentiment, topic analysis, and other a...

07/01/2021
Leveraging Domain Agnostic and Specific Knowledge for Acronym Disambiguation
An obstacle to scientific document understanding is the extensive use of...
