Tackling WinoGrande Schemas

03/18/2020
by   Sheng-Chieh Lin, et al.

We applied the T5 sequence-to-sequence model to the AI2 WinoGrande Challenge by decomposing each example into two candidate input strings, one per answer option, and using the probability assigned to the "entailment" token as the score of each hypothesis. Our first (and only) submission to the official leaderboard yielded 0.7673 AUC on March 13, 2020, the best known result at this time, beating the previous state of the art by over five points.
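The decomposition described above can be sketched as follows. This is a minimal illustration, not the authors' code: the exact input template and the model call are assumptions, and `entailment_score` stands in for a T5 forward pass that reads off the probability of the "entailment" token.

```python
def decompose(sentence, option1, option2):
    """Split a WinoGrande example (a sentence with a '_' blank and two
    candidate fillers) into two candidate strings, one per option.
    The formatting here is illustrative, not the paper's exact template."""
    return [sentence.replace("_", opt) for opt in (option1, option2)]

def predict(sentence, option1, option2, entailment_score):
    """Pick the option whose filled-in string receives the higher
    'entailment' probability. `entailment_score` is a stand-in for
    the T5 model call (an assumption, not the authors' implementation)."""
    hyp1, hyp2 = decompose(sentence, option1, option2)
    return option1 if entailment_score(hyp1) >= entailment_score(hyp2) else option2
```

With a real model, `entailment_score` would run the string through T5 and return the softmax probability of the "entailment" output token; here any callable mapping a string to a score works, e.g. `predict("The trophy didn't fit because the _ was too big.", "trophy", "suitcase", score)`.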


