Multi^2OIE: Multilingual Open Information Extraction based on Multi-Head Attention with BERT

09/17/2020
by   Youngbin Ro, et al.

In this paper, we propose Multi^2OIE, which performs open information extraction (open IE) by combining BERT with multi-head attention. Our model is a sequence-labeling system with an efficient and effective argument extraction method. Using a query, key, and value setting inspired by the Multimodal Transformer, we replace the previously used bidirectional long short-term memory architecture with multi-head attention. On two benchmark evaluation datasets, Re-OIE2016 and CaRB, Multi^2OIE outperforms existing sequence-labeling systems while remaining computationally efficient. Additionally, we apply the proposed method to multilingual open IE using multilingual BERT. Experimental results on new benchmark datasets introduced for two languages (Spanish and Portuguese) demonstrate that our model outperforms other multilingual systems without any training data for the target languages.
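To make the architectural idea concrete, the sketch below shows one plausible way a multi-head attention head could stand in for a BiLSTM when tagging arguments over BERT hidden states. This is a minimal illustration under stated assumptions, not the paper's released implementation: the class name, the exact query/key/value wiring (token states conditioned on a pooled predicate vector serve as queries, while raw token states serve as keys and values), and the tag set are all assumptions made for demonstration.

```python
import torch
import torch.nn as nn

class ArgExtractionHead(nn.Module):
    """Hypothetical argument-tagging head: multi-head attention in place of
    a BiLSTM over BERT hidden states. Illustrative only; the name, the
    query/key/value wiring, and the label count are assumptions."""

    def __init__(self, hidden_size: int = 768, num_heads: int = 8, num_labels: int = 5):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_labels)  # e.g., BIO-style argument tags

    def forward(self, hidden_states: torch.Tensor, predicate_vec: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden) token representations from BERT
        # predicate_vec: (batch, hidden) pooled representation of the extracted predicate
        # Queries are token states conditioned on the predicate;
        # keys and values are the unmodified token states.
        query = hidden_states + predicate_vec.unsqueeze(1)
        attended, _ = self.attn(query, hidden_states, hidden_states)
        return self.classifier(attended)  # (batch, seq_len, num_labels) tag logits

# Shape check only; random tensors stand in for real BERT outputs.
head = ArgExtractionHead()
tokens = torch.randn(2, 16, 768)   # two sentences, 16 subwords each
predicate = torch.randn(2, 768)    # pooled predicate vectors
logits = head(tokens, predicate)   # -> (2, 16, 5)
```

One appeal of such a design over a recurrent head is that every token attends to the predicate-conditioned context in parallel rather than sequentially, which is consistent with the computational efficiency the abstract reports.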

