Training for Fast Sequential Prediction Using Dynamic Feature Selection

10/30/2014
by Emma Strubell, et al.

We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components. This is accomplished by partitioning the features into a sequence of templates which are ordered such that high confidence can often be reached using only a small fraction of all features. Parameter estimation is arranged to maximize accuracy and early confidence in this sequence. We present experiments in left-to-right part-of-speech tagging on WSJ, demonstrating that we can preserve accuracy above 97% with over a 5x reduction in run-time.
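The inference side of this idea is easy to illustrate. The sketch below, which is not the authors' implementation and uses purely hypothetical names (dynamic_feature_predict, template_features, margins), shows confidence-based early stopping for a linear classifier whose features are grouped into ordered templates: per-class scores are accumulated one template at a time, and prediction stops as soon as the margin between the top two classes clears that template's threshold.

    import numpy as np

    def dynamic_feature_predict(template_features, weights, margins):
        """Predict a class label, adding feature templates until confident.

        template_features: list of integer index arrays, one per template, in a
            fixed order (templates expected to yield confidence early come first).
        weights: (num_features, num_classes) weight matrix of a linear classifier.
        margins: per-template confidence thresholds; once the gap between the best
            and second-best class scores exceeds the current threshold, the
            remaining templates are skipped.
        """
        scores = np.zeros(weights.shape[1])
        for threshold, feats in zip(margins, template_features):
            # Partial dot product: only this template's active features contribute.
            scores += weights[np.asarray(feats, dtype=int)].sum(axis=0)
            second_best, best = np.sort(scores)[-2:]
            if best - second_best >= threshold:
                break  # confident enough; skip the remaining templates
        return int(np.argmax(scores))

    # Toy usage: 3 classes, 6 binary features split into 3 templates.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(6, 3))
    prediction = dynamic_feature_predict(
        template_features=[[0, 1], [2, 3], [4, 5]],
        weights=W,
        margins=[1.0, 0.5, 0.0],
    )
    print(prediction)

In the paper's setting, the template ordering and the confidence thresholds are not given but are chosen during training to maximize accuracy and early confidence; the sketch simply takes them as inputs.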


Related research

03/21/2016
Static and Dynamic Feature Selection in Morphosyntactic Analyzers
We study the use of greedy feature selection methods for morphosyntactic...

09/27/2022
Totally-ordered Sequential Rules for Utility Maximization
High utility sequential pattern mining (HUSPM) is a significant and valu...

03/03/2022
Parallel feature selection based on the trace ratio criterion
The growth of data today poses a challenge in management and inference. ...

06/08/2022
Performance, Transparency and Time. Feature selection to speed up the diagnosis of Parkinson's disease
Accurate and early prediction of a disease allows to plan and improve a ...

04/14/2019
BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer
Modeling users' dynamic and evolving preferences from their historical b...

08/30/2015
Feature Selection via Binary Simultaneous Perturbation Stochastic Approximation
Feature selection (FS) has become an indispensable task in dealing with ...
