Deriving Machine Attention from Human Rationales

08/28/2018
by Yujia Bao, et al.

Attention-based models are successful when trained on large amounts of data. In this paper, we demonstrate that even in the low-resource scenario, attention can be learned effectively. To this end, we start with discrete human-annotated rationales and map them into continuous attention. Our central hypothesis is that this mapping is general across domains, and thus can be transferred from resource-rich domains to low-resource ones. Our model jointly learns a domain-invariant representation and induces the desired mapping between rationales and attention. Our empirical results validate this hypothesis and show that our approach delivers significant gains over state-of-the-art baselines, yielding over 15% average error reduction on benchmark datasets.
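The abstract describes a learned mapping from discrete, token-level rationale annotations to a continuous attention distribution. The sketch below illustrates one plausible form of such a mapping in PyTorch; the BiLSTM encoder, the linear scorer, and all names (e.g., RationaleToAttention) are illustrative assumptions rather than the paper's actual architecture, and the adversarially learned domain-invariant representation the paper couples with this mapping is omitted.

```python
# Minimal sketch (assumed architecture, not the paper's): map a 0/1 rationale
# mask over tokens to a continuous attention distribution.
import torch
import torch.nn as nn

class RationaleToAttention(nn.Module):
    def __init__(self, emb_dim=100, hidden_dim=50):
        super().__init__()
        # The encoder sees each word embedding concatenated with its rationale
        # bit, so the mapping can condition on human-marked tokens.
        self.encoder = nn.LSTM(emb_dim + 1, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.scorer = nn.Linear(2 * hidden_dim, 1)

    def forward(self, embeddings, rationale_mask):
        # embeddings: (batch, seq_len, emb_dim)
        # rationale_mask: (batch, seq_len), entries in {0, 1}
        x = torch.cat([embeddings, rationale_mask.unsqueeze(-1).float()], dim=-1)
        states, _ = self.encoder(x)               # (batch, seq_len, 2*hidden_dim)
        scores = self.scorer(states).squeeze(-1)  # (batch, seq_len)
        return torch.softmax(scores, dim=-1)      # continuous attention weights

# Usage: a toy 2-sentence batch where tokens 2-3 of the first sentence
# were marked as a rationale.
emb = torch.randn(2, 8, 100)
mask = torch.zeros(2, 8)
mask[0, 2:4] = 1.0
attn = RationaleToAttention()(emb, mask)
print(attn.shape, attn.sum(dim=-1))  # (2, 8); each row sums to 1
```

Because the output is a proper distribution over tokens rather than a hard mask, it can supervise or initialize the attention of a downstream classifier in the low-resource target domain.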

Related research

11/04/2017 · Deep Stacking Networks for Low-Resource Chinese Word Segmentation with Transfer Learning
In recent years, neural networks have proven to be effective in Chinese ...

11/14/2022 · High-Resource Methodological Bias in Low-Resource Investigations
The central bottleneck for low-resource NLP is typically regarded to be ...

08/25/2019 · Multi-task Learning for Low-resource Second Language Acquisition Modeling
Second language acquisition (SLA) modeling is to predict whether second ...

03/23/2018 · Leveraging translations for speech transcription in low-resource settings
Recently proposed data collection frameworks for endangered language doc...

08/31/2021 · LightNER: A Lightweight Generative Framework with Prompt-guided Attention for Low-resource NER
Most existing NER methods rely on extensive labeled data for model train...

12/15/2017 · A Novel Approach for Effective Learning in Low Resourced Scenarios
Deep learning based discriminative methods, being the state-of-the-art m...
