What can we learn from Semantic Tagging?

08/29/2018
by Mostafa Abdou et al.

We investigate the effects of multi-task learning using the recently introduced task of semantic tagging. We employ semantic tagging as an auxiliary task for three different NLP tasks: part-of-speech tagging, Universal Dependency parsing, and Natural Language Inference. We compare full neural network sharing, partial neural network sharing, and what we term the "learning what to share" setting, in which negative transfer between tasks is less likely. Our findings show considerable improvements for all three tasks, with the "learning what to share" setting yielding the most consistent gains.
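As a rough illustration of the settings compared above, the sketch below is hypothetical PyTorch code, not the authors' implementation; class names, dimensions, and tag-set sizes are all assumptions. The first module shows full network sharing (one shared encoder feeding a main-task head and an auxiliary semantic-tagging head); the second is a minimal "learning what to share" module in the spirit of sluice networks, where learned coefficients control how much each task draws on the other's features.

```python
import torch
import torch.nn as nn

class SharedMTLTagger(nn.Module):
    """Full network sharing: one encoder, two task-specific heads."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_main_tags, n_sem_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared bidirectional LSTM encoder used by both tasks.
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Main-task head (e.g. POS tags) and auxiliary semantic-tagging head;
        # tag-set sizes are illustrative, not from the paper.
        self.main_head = nn.Linear(2 * hidden_dim, n_main_tags)
        self.aux_head = nn.Linear(2 * hidden_dim, n_sem_tags)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.main_head(states), self.aux_head(states)

class LearnedSharing(nn.Module):
    """'Learning what to share': mix per-task features through learned
    weights, so sharing can shrink where transfer would hurt."""
    def __init__(self, n_tasks=2):
        super().__init__()
        # alpha[i, j] weights task j's features in task i's representation;
        # initialized near identity, i.e. mostly task-specific at the start.
        self.alpha = nn.Parameter(torch.eye(n_tasks) * 0.9 + 0.05)

    def forward(self, features):
        stacked = torch.stack(features)          # (n_tasks, batch, seq, dim)
        mixed = torch.einsum('ij,jbsd->ibsd', self.alpha, stacked)
        return list(mixed.unbind(0))

# Toy usage with made-up sizes: a joint loss over both heads would train
# the shared encoder on the main task and the auxiliary semantic tags.
model = SharedMTLTagger(vocab_size=10000, emb_dim=100, hidden_dim=128,
                        n_main_tags=17, n_sem_tags=66)
tokens = torch.randint(0, 10000, (2, 8))         # batch of 2 sentences, len 8
main_logits, aux_logits = model(tokens)

per_task_feats = [torch.randn(2, 8, 256), torch.randn(2, 8, 256)]
mixed_feats = LearnedSharing(n_tasks=2)(per_task_feats)
```

In this sketch a joint objective would simply sum a cross-entropy loss per head; a partial-sharing variant would share only the lower encoder layers, while the learned alpha coefficients let the model back off from sharing when the auxiliary signal does not help.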


Related research

08/16/2019: Transductive Auxiliary Task Self-Training for Neural Multi-Task Models
Multi-task learning and self-training are two common ways to improve a m...

11/05/2016: A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks
Transfer and multi-task learning have traditionally focused on either a ...

08/13/2018: Multi-Task Learning for Sequence Tagging: An Empirical Study
We study three general multi-task learning (MTL) approaches on 11 sequen...

08/09/2019: Artificially Evolved Chunks for Morphosyntactic Analysis
We introduce a language-agnostic evolutionary technique for automaticall...

07/11/2020: Deep or Simple Models for Semantic Tagging? It Depends on your Data [Experiments]
Semantic tagging, which has extensive applications in text mining, predi...

05/23/2017: Sluice networks: Learning what to share between loosely related tasks
Multi-task learning is partly motivated by the observation that humans b...

02/28/2015: The NLP Engine: A Universal Turing Machine for NLP
It is commonly accepted that machine translation is a more complex task ...
