Clustering Contextualized Representations of Text for Unsupervised Syntax Induction

by Vikram Gupta et al.

We explore clustering of contextualized text representations for two unsupervised syntax induction tasks: part-of-speech induction (POSI) and constituency labelling (CoLab). We propose a deep embedded clustering approach that jointly transforms these representations into a lower-dimensional, cluster-friendly space and clusters them. We further enhance these representations by augmenting them with task-specific representations, and we explore the effectiveness of multilingual representations across tasks and languages. With this work, we establish the first strong baselines for unsupervised syntax induction using contextualized text representations. We report competitive performance on 45-tag POSI, state-of-the-art performance on 12-tag POSI across 10 languages, and competitive results on CoLab.
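The core idea of deep embedded clustering, alternating a soft assignment of points to centroids with a sharpened target distribution, can be sketched as follows. This is a minimal illustration with toy data, not the authors' model: the function names, dimensions, and the simple centroid-only update are assumptions for illustration (the actual approach jointly trains an encoder as well).

```python
# Minimal sketch of DEC-style soft clustering on toy vectors.
# Illustrative only: names, dimensions, and the centroid-only
# update are assumptions, not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

def soft_assign(Z, mu, alpha=1.0):
    """Student's t-kernel soft assignment q_ij of point i to cluster j."""
    d2 = ((Z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(axis=1, keepdims=True)

def target_dist(q):
    """Sharpened target p_ij = q_ij^2 / f_j, renormalized per point."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

# Toy stand-in for contextualized representations: two blobs in 5-D.
Z = np.vstack([rng.normal(0.0, 0.3, (50, 5)),
               rng.normal(3.0, 0.3, (50, 5))])

# Initialize one centroid per region (in practice, k-means init),
# then alternate soft assignment and target-weighted centroid updates.
mu = np.stack([Z[0], Z[-1]])
for _ in range(20):
    q = soft_assign(Z, mu)
    p = target_dist(q)
    mu = (p.T @ Z) / p.sum(axis=0)[:, None]

labels = soft_assign(Z, mu).argmax(axis=1)
```

In the full method, the KL divergence between the target distribution `p` and the assignment `q` is backpropagated through the encoder, so the representation itself becomes more cluster-friendly over training; the sketch above only moves the centroids.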


HHMM at SemEval-2019 Task 2: Unsupervised Frame Induction using Contextualized Word Embeddings

We present our system for semantic frame induction that showed the best ...

Masked Part-Of-Speech Model: Does Modeling Long Context Help Unsupervised POS-tagging?

Previous Part-Of-Speech (POS) induction models usually assume certain in...

Unsupervised Semantic Frame Induction using Triclustering

We use dependency triples automatically extracted from a Web-scale corpu...

Duality Regularization for Unsupervised Bilingual Lexicon Induction

Unsupervised bilingual lexicon induction naturally exhibits duality, whi...

Text Mining Through Label Induction Grouping Algorithm Based Method

The main focus of information retrieval methods is to provide accurate a...

Unsupervised Bilingual Lexicon Induction Across Writing Systems

Recent embedding-based methods in unsupervised bilingual lexicon inducti...

Inducing a Semantically Annotated Lexicon via EM-Based Clustering

We present a technique for automatic induction of slot annotations for s...