On the scaling of polynomial features for representation matching

02/20/2018
by Siddhartha Brahma, et al.

In many neural models, new features defined as polynomial functions of existing ones are used to augment representations. Using the natural language inference task as an example, we investigate the use of scaled polynomials of degree 2 and above as matching features. We find that scaling degree 2 features has the highest impact on performance, reducing classification error by almost 5% in the best models.
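To make the idea concrete, here is a minimal sketch of augmenting a pair of encoded sentence vectors with scaled degree-2 matching features. The function name `matching_features` and the constant `scale` are illustrative assumptions, not the paper's exact parameterization; the feature set (difference, element-wise product, and squares, with the degree-2 terms scaled) is one plausible instantiation of the technique the abstract describes.

```python
import numpy as np

def matching_features(u, v, scale=0.5):
    """Hypothetical sketch: augment a pair of sentence vectors (u, v)
    with scaled degree-2 polynomial matching features.
    `scale` is an illustrative constant, not a value from the paper."""
    diff = u - v    # degree-1 comparison feature
    prod = u * v    # degree-2 feature (element-wise product)
    return np.concatenate([
        u, v, diff,                 # unscaled lower-order features
        scale * prod,               # scaled degree-2 interaction
        scale * u**2, scale * v**2  # scaled degree-2 self-terms
    ])

# Usage: two toy 4-dimensional sentence representations
u = np.array([0.2, -0.1, 0.5, 0.3])
v = np.array([0.1, 0.4, -0.2, 0.3])
feats = matching_features(u, v)
print(feats.shape)  # (24,) -- six blocks of 4 features each
```

The resulting vector would then be fed to the downstream classifier in place of the usual unscaled concatenation.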


