BERTTM: Leveraging Contextualized Word Embeddings from Pre-trained Language Models for Neural Topic Modeling

05/16/2023
by   Zheng Fang, et al.

With the development of neural topic models in recent years, topic modeling is playing an increasingly important role in natural language understanding. However, most existing topic models still rely on bag-of-words (BoW) information, either as training input or as training target. This limits their ability to capture word-order information in documents and leaves them vulnerable to the out-of-vocabulary (OOV) problem, i.e., they cannot handle words in new documents that were unobserved during training. Contextualized word embeddings from pre-trained language models have shown superior ability in word sense disambiguation and have proven effective in handling OOV words. In this work, we develop a novel neural topic model that incorporates contextualized word embeddings from the pre-trained language model BERT. The model can infer the topic distribution of a document without using any BoW information, and it can infer the topic distribution of each word in a document directly from that word's contextualized embedding. Experiments on several datasets show that our model outperforms existing topic models on both document classification and topic coherence metrics, and that it can accommodate unseen words in newly arrived documents. Experiments on a named-entity recognition (NER) dataset further show that our model produces high-quality word-topic representations.
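To make the described pipeline concrete, below is a minimal sketch (not the authors' released code) of how per-word and per-document topic distributions could be read off BERT's contextualized token embeddings. The linear topic head, the topic count of 50, and the masked mean pooling are illustrative assumptions; the abstract does not specify the training objective, so the head here is untrained and would need to be fit before the outputs are meaningful.

import torch
from transformers import AutoModel, AutoTokenizer

NUM_TOPICS = 50  # assumed topic count, for illustration only

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
# Hypothetical topic head: maps each contextualized embedding to a
# distribution over topics. In the actual model this would be trained.
topic_head = torch.nn.Linear(bert.config.hidden_size, NUM_TOPICS)

def topic_distributions(text):
    # Subword tokenization means unseen words are still representable,
    # avoiding the OOV failure of a fixed BoW vocabulary.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state  # (1, seq_len, hidden)
    # Per-token topic distribution from the contextualized embedding.
    word_topics = torch.softmax(topic_head(hidden), dim=-1)  # (1, seq_len, K)
    # Document topic distribution: mask-weighted mean over tokens,
    # computed without any BoW counts.
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    doc_topics = (word_topics * mask).sum(dim=1) / mask.sum(dim=1)
    return word_topics.squeeze(0), doc_topics.squeeze(0)

word_topics, doc_topics = topic_distributions("Neural topic models learn latent themes.")
print(word_topics.shape, doc_topics.shape)  # e.g. torch.Size([11, 50]) torch.Size([50])

Because each token's distribution is computed from its contextualized embedding, the same surface word can receive different topic assignments in different sentences, which is what enables the word-level evaluation on the NER data.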

Related Research

04/08/2020 · Pre-training is a Hot Topic: Contextualized Document Embeddings Improve Topic Coherence
Topic models extract meaningful groups of words from documents, allowing...

01/11/2023 · Topics in Contextualised Attention Embeddings
Contextualised word vectors obtained via pre-trained language models enc...

10/23/2020 · Topic Modeling with Contextualized Word Representation Clusters
Clustering token-level contextualized word representations produces outp...

03/15/2022 · Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost
State-of-the-art NLP systems represent inputs with word embeddings, but ...

10/14/2021 · Neural Attention-Aware Hierarchical Topic Model
Neural topic models (NTMs) apply deep neural networks to topic modelling...

09/19/2017 · MetaLDA: a Topic Model that Efficiently Incorporates Meta information
Besides the text content, documents and their associated words usually c...

02/06/2023 · Efficient and Flexible Topic Modeling using Pretrained Embeddings and Bag of Sentences
Pre-trained language models have led to a new state-of-the-art in many N...
