
Solving Math Word Problems by Scoring Equations with Recursive Neural Networks
Solving math word problems is a cornerstone task in assessing language understanding and reasoning capabilities in NLP systems. Recent works use automatic extraction and ranking of candidate solution equations that provide the answer to math word problems. In this work, we explore novel approaches to score such candidate solution equations using tree-structured recursive neural network (Tree-RNN) configurations. The advantage of this Tree-RNN approach over more established sequential representations is that it can naturally capture the structure of the equations. Our proposed method consists of transforming the mathematical expression of the equation into an expression tree. Further, we encode this tree into a Tree-RNN by using different Tree-LSTM architectures. Experimental results show that our proposed method (i) improves overall performance by more than 3% accuracy points compared to the state of the art, and by over 18% on problems that require more complex reasoning, and (ii) outperforms sequential LSTMs by 4% accuracy points on such more complex problems.
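The first step the abstract describes, turning an equation's mathematical expression into an expression tree and then processing it bottom-up, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses Python's `ast` module to build the tree, and a generic bottom-up `tree_reduce` traversal standing in for a Tree-LSTM cell (the function names `to_tree` and `tree_reduce` are hypothetical, introduced here for illustration).

```python
import ast

# Operator tokens for the binary operations we support in this sketch.
OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def to_tree(expr: str):
    """Parse an arithmetic expression into a nested (op, left, right) tuple,
    with numeric constants as leaves."""
    return _convert(ast.parse(expr, mode="eval").body)

def _convert(node):
    if isinstance(node, ast.BinOp):
        return (OPS[type(node.op)], _convert(node.left), _convert(node.right))
    if isinstance(node, ast.Constant):
        return node.value
    raise ValueError(f"unsupported expression node: {node!r}")

def tree_reduce(tree, combine, leaf):
    """Bottom-up traversal: children are processed first, then merged at the
    parent -- the same recursion pattern a Tree-LSTM cell follows, where
    `combine` would be the learned cell and `leaf` the leaf embedding."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return combine(op,
                       tree_reduce(left, combine, leaf),
                       tree_reduce(right, combine, leaf))
    return leaf(tree)

# Example: build the tree for "(5 - 2) * 3" and fold it bottom-up.
# Here `combine` just evaluates arithmetic, as a stand-in for encoding.
tree = to_tree("(5 - 2) * 3")
value = tree_reduce(
    tree,
    combine=lambda op, l, r: {"+": l + r, "-": l - r,
                              "*": l * r, "/": l / r}[op],
    leaf=float,
)
print(tree)   # ('*', ('-', 5, 2), 3)
print(value)  # 9.0
```

In an actual Tree-RNN scorer, `leaf` would map each number or operator to a learned vector and `combine` would be a Tree-LSTM cell merging child hidden states, so the root vector summarizes the whole equation and can be fed to a scoring layer.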