Transductive Auxiliary Task Self-Training for Neural Multi-Task Models

08/16/2019
by   Johannes Bjerva, et al.

Multi-task learning and self-training are two common ways to improve a machine learning model's performance in settings with limited training data. Drawing heavily on ideas from those two approaches, we suggest transductive auxiliary task self-training: training a multi-task model on (i) a combination of main and auxiliary task training data, and (ii) test instances with auxiliary task labels which a single-task version of the model has previously generated. We perform extensive experiments on 86 combinations of languages and tasks. Our results show that, on average, transductive auxiliary task self-training improves absolute accuracy by up to 9.56% over the pure multi-task model for dependency relation tagging and by up to 13.03% for semantic tagging.
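The sketch below is one plausible reading of the three-step recipe in the abstract, not the paper's actual implementation: train a single-task model on the auxiliary task, use it to label the unlabeled test instances (the transductive step), then train a multi-task model on the union of main-task data, auxiliary-task data, and the self-labeled test instances. All names (MultiTaskTagger, transductive_aux_self_training) and architectural choices (an LSTM encoder with one linear head per task) are illustrative assumptions.

```python
# A minimal sketch of transductive auxiliary task self-training.
# Assumption: data arrive as lists of (tokens, labels) batches, where
# tokens is a (batch, seq_len) LongTensor and labels matches its shape.
import torch
import torch.nn as nn


class MultiTaskTagger(nn.Module):
    """Shared encoder with one classification head per task."""

    def __init__(self, vocab_size, hidden, n_main_labels, n_aux_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.main_head = nn.Linear(hidden, n_main_labels)
        self.aux_head = nn.Linear(hidden, n_aux_labels)

    def forward(self, tokens, task):
        states, _ = self.encoder(self.embed(tokens))
        head = self.main_head if task == "main" else self.aux_head
        return head(states)  # (batch, seq_len, n_labels)


def train(model, batches, task, epochs=1, lr=1e-3):
    """Standard supervised training on one task's batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for tokens, labels in batches:
            opt.zero_grad()
            logits = model(tokens, task)
            loss = loss_fn(logits.flatten(0, 1), labels.flatten())
            loss.backward()
            opt.step()


def transductive_aux_self_training(main_data, aux_data, test_tokens, cfg):
    # Step 1: train a single-task model on the auxiliary task only.
    single = MultiTaskTagger(**cfg)
    train(single, aux_data, task="aux")

    # Step 2: label the *test* instances with the auxiliary-task model;
    # using the test set itself is what makes the procedure transductive.
    single.eval()
    with torch.no_grad():
        aux_self_labeled = [
            (tokens, single(tokens, task="aux").argmax(-1))
            for tokens in test_tokens
        ]

    # Step 3: train a fresh multi-task model on main + auxiliary training
    # data plus the self-labeled test instances. For simplicity the two
    # tasks are trained sequentially here; a faithful multi-task setup
    # would presumably interleave batches from both tasks.
    model = MultiTaskTagger(**cfg)
    train(model, main_data, task="main")
    train(model, aux_data + aux_self_labeled, task="aux")
    return model
```

With, say, cfg = dict(vocab_size=10_000, hidden=128, n_main_labels=37, n_aux_labels=66), the returned model is then evaluated on the main task over the same test instances that were self-labeled in step 2. Training the auxiliary labeler from scratch rather than reusing the multi-task model keeps the self-generated labels independent of the model that later consumes them.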
