Turning transformer attention weights into zero-shot sequence labelers

03/26/2021
by Kamil Bujel, et al.

We demonstrate how transformer-based models can be redesigned to capture inductive biases across tasks at different levels of granularity and to perform inference in a zero-shot manner. Specifically, we show how sentence-level transformers can be modified into effective token-level sequence labelers without any direct token-level supervision. We compare against a range of diverse, previously proposed methods for generating token-level labels, and present a simple yet effective modified attention layer that significantly advances the current state of the art.
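To make the idea concrete, the sketch below shows one common way a sentence-level classifier's attention can be reused for zero-shot token labeling: a soft attention layer pools token representations for the sentence prediction, and the per-token attention weights are read off at inference as token-level scores. This is an illustrative PyTorch sketch under assumed names and dimensions, not the paper's exact modified attention layer.

```python
import torch
import torch.nn as nn


class AttentionPoolingClassifier(nn.Module):
    """Sentence classifier whose attention weights double as token-level scores.

    Illustrative sketch: the scoring function, dimensions, and class names are
    assumptions for exposition, not the authors' implementation.
    """

    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)            # per-token attention score
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_states: torch.Tensor, mask: torch.Tensor):
        # token_states: (batch, seq_len, hidden_dim) from a pretrained encoder
        # mask: (batch, seq_len), 1 for real tokens, 0 for padding
        scores = self.scorer(token_states).squeeze(-1)    # (batch, seq_len)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)              # attention weights over tokens

        # Weighted sum of token states -> sentence representation
        pooled = torch.bmm(attn.unsqueeze(1), token_states).squeeze(1)
        logits = self.classifier(pooled)                  # sentence-level prediction

        # At inference, `attn` can be thresholded or normalized per sentence to
        # produce zero-shot token-level labels, despite training only on
        # sentence-level supervision.
        return logits, attn
```

In this setup, only the sentence-level objective is optimized during training; the token-level labels emerge from how the attention distributes mass over tokens, which is the behavior the paper's modified attention layer is designed to improve.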
