Solving Math Word Problems by Scoring Equations with Recursive Neural Networks
Solving math word problems is a cornerstone task in assessing language understanding and reasoning capabilities in NLP systems. Recent works use automatic extraction and ranking of candidate solution equations that provide the answer to math word problems. In this work, we explore novel approaches to score such candidate solution equations using tree-structured recursive neural network (Tree-RNN) configurations. The advantage of this Tree-RNN approach over more established sequential representations is that it can naturally capture the structure of the equations. Our proposed method consists of transforming the mathematical expression of the equation into an expression tree. Further, we encode this tree into a Tree-RNN using different Tree-LSTM architectures. Experimental results show that our proposed method (i) improves overall performance by more than 3 points over the state of the art, and by over 18 points on problems that require more complex reasoning, and (ii) outperforms sequential LSTMs by 4 points on such more complex problems.
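The first step the abstract describes, turning a candidate equation into an expression tree, can be sketched with Python's `ast` module. This is a minimal illustration under assumed conventions (nested tuples as tree nodes); the paper's actual preprocessing pipeline and node representations are not specified here, and a Tree-LSTM encoder would replace the tuples with learned embeddings composed bottom-up at each operator node.

```python
import ast

def expression_tree(expr: str):
    """Parse an arithmetic expression into a nested-tuple expression tree.

    Operators become internal nodes; numbers and variables become leaves.
    """
    ops = {ast.Add: '+', ast.Sub: '-', ast.Mult: '*', ast.Div: '/'}

    def walk(node):
        if isinstance(node, ast.BinOp):
            # Internal node: operator with left/right subtrees.
            return (ops[type(node.op)], walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            # Leaf: numeric quantity extracted from the problem text.
            return node.value
        if isinstance(node, ast.Name):
            # Leaf: unknown variable.
            return node.id
        raise ValueError(f"unsupported node: {ast.dump(node)}")

    return walk(ast.parse(expr, mode='eval').body)

print(expression_tree("(5 + 3) * 2"))  # ('*', ('+', 5, 3), 2)
```

Because the tree mirrors operator precedence and grouping, a recursive encoder over this structure sees `(5 + 3) * 2` and `5 + 3 * 2` as different trees, which is exactly the structural signal a flat sequential LSTM has to infer from token order alone.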