1 Introduction
Solving Math Word Problems (MWPs) is the task of obtaining mathematical solutions from natural language text descriptions. Recent studies leverage sequence-to-sequence (seq2seq) neural networks (NNs) for solving MWPs: the model takes the problem text as input and decodes the corresponding human-annotated reference equation, from which the answer value can be calculated
(Wang et al., 2017). Promising results have been reported for single-unknown problems by designing task-specialized encoder and decoder architectures (Wang et al., 2018, 2019; Xie and Sun, 2019; Liu et al., 2019; Guan et al., 2019; Zhang et al., 2020a,b; Shen and Jin, 2020), using pretrained models (Tan et al., 2021; Liang et al., 2021), and leveraging auxiliary tasks (Liu et al., 2020; Shen et al., 2021; Li et al., 2022). More recently, various studies have addressed a more challenging setting: MWPs with multiple unknowns (Upadhyay and Chang, 2017; Qin et al., 2020; Cao et al., 2021; Qin et al., 2021).
In practice, human students intuitively use diverse reasoning logic to solve MWPs: they can approach a solution from different angles by considering different equivalence relations in the problem. As shown in the upper part of Figure 1, the example problem can be solved with at least two different lines of reasoning. On the left, the equation set follows the first reasoning logic of "considering the equivalence relation between the two sums of cheeseburger and pizza calories given in the question"; on the right, it follows a second reasoning logic of "first considering only the equivalence relation of the cheeseburger's caloric content by offsetting the calories from the pizza". Such diverse reasoning logic leads to diverse equation expressions: the solution equation can be written in various mathematically equivalent forms, such as expression 1 and expression 2 in the example. However, previous studies share a long-standing limitation: they force the solver to decode a single fixed equation expression supervised by human annotation. This fixed supervision ignores diverse mathematical reasoning, which is especially common among human students for multiple-unknown problems and complex single-unknown problems.
Meanwhile, directly introducing diverse equation expressions into the seq2seq framework in a data-augmentation manner could further aggravate the issue of expression bias, i.e., the discrepancy between the annotated equation expression and a correct expression predicted by the model. As shown in the middle of Figure 1, even when the model makes a correct prediction for the problem, the training loss accumulated over diverse expressions can be enormous. Wang et al. (2018) propose an equation normalization that reorders the variables in the equations to be as close as possible to their order in the input text. While their method reduces the expression bias issue, it ignores the inherent diversity of mathematical reasoning and is limited to single-unknown problems.
Enlightened by recent methods in controlled text generation, which use a control code to influence the style and topic of the generated text (Keskar et al., 2019; Shin et al., 2020), we propose a new training paradigm in which a control code guides the decoding process to consider one type of mathematical reasoning logic and decode the corresponding equation expression. As shown at the bottom of Figure 1, the <sol> control code guides the model to consider the direct solution of each individual unknown. Not only does this reduce the expression bias problem, since the control code provides guidance on the reasoning logic, but training on the diverse equation expressions guided by the control codes also leads to a better interpretation of the MWPs through diverse reasoning logic. We design various control codes for both the single-unknown and the multiple-unknown settings to allow the model to understand different reasoning orders. We conduct experiments on a single-unknown benchmark, Math23K, and two multiple-unknown benchmarks, DRAW-1K and HMWP. Experimental results show that our method improves performance in both settings, with a more significant improvement in the challenging multiple-unknown setting.
2 Methodology
For each math word problem with an original equation set, we generate new equation expressions based on five types of diverse mathematical reasoning logic, considering the ordering logic of the given variables and the unknown variables, whose indices denote the order in which they appear in the text. We then assign a corresponding control code to each equation expression. The MWP solving model takes the text and the control code as input and is trained to predict the corresponding equation expression.
2.1 Control Codes
We consider diverse mathematical reasoning logic in two aspects. The first aspect considers diverse reasoning orders of the given variables, which is reflected in the diverse expressions produced by the commutative law and the solution form. For example, an equation such as a*x + b = c can be transformed into the solution form x = (c - b)/a without affecting mathematical equivalence. This aspect applies to both multiple-unknown and single-unknown problems. The second aspect considers diverse reasoning orders of the unknown variables, which is reflected in the diverse expressions of equivalent equation sets. For example, swapping the order of the equations in an equation set does not affect mathematical equivalence. This aspect applies to multiple-unknown problems.
We preprocess the equation annotations with SymPy (Meurer et al., 2017) so that they follow a predefined order similar to Wang et al. (2018). We then generate different types of equation expressions from these preprocessed equations.
For the first aspect, we consider three types of diverse equation expressions.

Commutative Law of Addition <add>: We traverse the equation in prefix order and swap the left and right subtrees of the addition operators. For example, (a + b) + c would be swapped twice: we first swap the two subtrees (a + b) and c of the outer addition operator, giving c + (a + b), and then swap the two subtrees a and b of the inner operator, giving c + (b + a).
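As a minimal sketch of this transformation, assuming equations are encoded as nested (operator, left, right) tuples (an illustrative choice, not necessarily the paper's exact data structure), the prefix-order swap can be written recursively:

```python
def swap_additions(node):
    """Recursively swap the left/right subtrees of every '+' node.

    Trees are nested (op, left, right) tuples; leaves are strings.
    """
    if not isinstance(node, tuple):           # leaf: a number or variable
        return node
    op, left, right = node
    left, right = swap_additions(left), swap_additions(right)
    if op == "+":                             # commutative law of addition
        left, right = right, left
    return (op, left, right)

# (a + b) + c has two addition nodes, so both get swapped:
tree = ("+", ("+", "a", "b"), "c")
print(swap_additions(tree))                   # ('+', 'c', ('+', 'b', 'a'))
```

The same traversal, triggered on the "*" operator instead, yields the <mul> transformation below.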

Commutative Law of Multiplication <mul>: Similarly, we traverse the equation in prefix order and swap the left and right subtrees of the multiplication operators, for example from a * b to b * a.

Solution Form <sol>: We consider a mathematical reasoning method that directly considers the solution of each unknown variable, for example rewriting a*x + b = c as x = (c - b)/a.
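Since the annotations are already preprocessed with SymPy, the <sol> transformation can be sketched with sympy.solve; the concrete equation a*x + b = c below is an illustrative assumption:

```python
import sympy as sp

x = sp.Symbol("x")
a, b, c = sp.symbols("a b c")

# An annotated equation such as a*x + b = c ...
eq = sp.Eq(a * x + b, c)

# ... is rewritten so the unknown stands alone on the left-hand side.
solution_form = sp.Eq(x, sp.solve(eq, x)[0])
print(solution_form)   # x = (c - b)/a
```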
For the second aspect, we consider two types of diverse equation expressions.

Equation Swapping <equ>: We swap the multiple-unknown equations in sequential order: given an ordered list of equations, we reorder them sequentially (for a two-equation set, this simply exchanges the two equations).

Unknown Variable Swapping <var>: Similarly, we swap the multiple unknown variables in sequential order: given the list of unknown variables in the equation set, we change the correspondence between them and the unknowns in the original question, so that each unknown in the new equations takes the role of the next one in the list. For example, the roles of x and y in a two-unknown equation set are exchanged.
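A sketch of <var> on prefix token lists; the token-level encoding and the helper name are illustrative assumptions:

```python
def swap_unknowns(equations, unknowns):
    """Cyclically rename unknowns, e.g. x -> y and y -> x for two unknowns."""
    renaming = {u: unknowns[(i + 1) % len(unknowns)]
                for i, u in enumerate(unknowns)}
    return [[renaming.get(tok, tok) for tok in eq] for eq in equations]

# {x + 2*y = 8, x - y = 2} with the roles of x and y exchanged:
eqs = [["=", "+", "x", "*", "2", "y", "8"],
       ["=", "-", "x", "y", "2"]]
print(swap_unknowns(eqs, ["x", "y"]))
```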
To incorporate the control codes that guide equation expression decoding, we follow studies in controlled text generation (Keskar et al., 2019) and append a special token to the encoder input. We use an independent special token for each expression category as the control code, such as <add>, and use <orig> for the original equation expression. At test time, we use the prediction under the original control code <orig>, since it has the most training examples.
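Constructing training pairs then amounts to attaching the control code token to the problem text. In this sketch the token is added as a prefix; the exact position of the token and the helper name are implementation assumptions:

```python
CONTROL_CODES = {"<orig>", "<add>", "<mul>", "<sol>", "<equ>", "<var>"}

def build_examples(problem_text, expressions_by_code):
    """Return (encoder_input, target_expression) training pairs,
    one per (control code, equation expression)."""
    return [(f"{code} {problem_text}", expr)
            for code, expr in expressions_by_code.items()
            if code in CONTROL_CODES]

pairs = build_examples("A cheeseburger and a pizza have ... calories?",
                       {"<orig>": "x = a + b", "<add>": "x = b + a"})
print(pairs[0][0])   # the <orig>-prefixed encoder input
```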
2.2 MWP Solving Model
Solving multiple-unknown problems usually requires equation sets, which are challenging to generate. To tackle this, we follow the decoding target paradigm of Qin et al. (2020), which introduces a Universal Expression Tree (UET) to represent multiple-unknown equation sets uniformly as a single expression tree by using a dummy node as the head of the equation set. UET also handles single-unknown problems in the same unified manner.
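The UET target can be sketched as a prefix serialization in which a dummy head token joins the equations; the binary ";" dummy node and the token encoding below are assumptions for illustration:

```python
def uet_prefix(equations):
    """Join prefix-serialized equations under dummy ';' head nodes,
    so single- and multiple-unknown problems share one target format."""
    if len(equations) == 1:       # single-unknown: the equation itself
        return equations[0]
    return [";"] + equations[0] + uet_prefix(equations[1:])

# Equation set {x + y = 5, x - y = 1} in prefix notation:
eq1 = ["=", "+", "x", "y", "5"]
eq2 = ["=", "-", "x", "y", "1"]
print(uet_prefix([eq1, eq2]))
# [';', '=', '+', 'x', 'y', '5', '=', '-', 'x', 'y', '1']
```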
For the solver, we experiment with two strong baseline models. The first leverages the seq2seq pretrained language model BART (Lewis et al., 2020; Shen et al., 2021), which has reported promising results on text generation tasks: the encoder takes the textual input and produces high-quality representations of the problem text, and the decoder generates the UET from these representations.
For the second model, we follow Li et al. (2022) and use BERT-GTS as the MWP solving model. We leverage the contextual pretrained language model BERT as the encoder, and a Goal-driven Tree-Structured MWP solver (GTS) (Xie and Sun, 2019), based on Long Short-Term Memory (LSTM) networks, as the decoder.
Table 1: Answer accuracy (%) on the three datasets.

Model                           Math23K  DRAW-1K  HMWP
GTS (Xie and Sun, 2019)            75.6     39.9  44.6
G2T (Zhang et al., 2020)           77.4     41.0  45.1
SAU-Solver (Qin et al., 2020)         -     39.2  44.8
BART (Shen et al., 2021)           80.4     32.1  41.5
BERT-GTS (Li et al., 2022)         82.6     42.2  48.3
Controlled BART (ours)             82.3     45.3  47.9
Controlled BERT-GTS (ours)         84.4     50.2  53.1
Table 2: Ablation study of the control codes with BERT-GTS (answer accuracy, %).

Model        Math23K  DRAW-1K  HMWP
BERT-GTS        82.6     42.2  48.3
+ <add>         83.0     46.8  50.8
+ <mul>         83.3     47.6  51.9
+ <sol>            -     46.3  50.5
+ <equ>            -     48.3  50.1
+ <var>            -     47.4  50.1
All             84.4     50.2  53.1
- code          83.3     49.6  49.6
3 Experiments
3.1 Datasets
We evaluate our proposed method on one single-unknown Chinese dataset, Math23K (Wang et al., 2017), and two multiple-unknown datasets, DRAW-1K (Upadhyay and Chang, 2017) in English and HMWP (Qin et al., 2020) in Chinese. Figure 2 shows the statistics of the overall data size, the numbers of single- and multiple-unknown problems, and the control code usage for each dataset. The five control code methods are enumerated for each example to generate new equation expressions. Although <sol> is applicable to both single-unknown and multiple-unknown problems, the annotation schema of Math23K already uses the solution form, which corresponds to <orig>, so no further equation expressions are generated for <sol> on Math23K. The resulting training sets contain between 1.87 and 6.15 times the original number of examples across the three datasets.
3.2 Results
We show our experimental results on the three datasets in Table 1. We compare our results with three models: GTS uses an LSTM encoder and decoder and considers tree-structure information during decoding; G2T uses a graph neural network that incorporates quantity information as the encoder with a similar tree decoder; SAU-Solver introduces a semantic alignment over the target vocabulary of the equations to improve the GTS decoder. Our method outperforms the baseline for both solver models on all datasets: accuracy improves by 1.8-1.9% on single-unknown problems and by 4.8-13.2% on multiple-unknown problems. The results demonstrate the effectiveness of our method, especially for multiple-unknown problems.
3.3 Ablation Study
We conduct further analysis on the more effective model, BERT-GTS. Table 2 shows the ablation study with different control codes. Each control code used individually improves the model's predictions. <mul> is particularly effective across all datasets, since it contributes an extensive number of examples in each dataset. Using all control codes together further boosts performance by providing diverse mathematical reasoning logic as guidance.
Table 2 also shows the results of removing the control codes and solely using the diverse equation expressions in a data-augmentation manner. Solely introducing diverse mathematical reasoning logic also improves performance over the baseline model. However, the expression bias problem limits the gains, since training loss can accumulate over diverse equation expressions. By incorporating control codes to guide the decoding process, our method can consider diverse reasoning logic while reducing the expression bias problem.
3.4 Study on Variable Size
Figure 3 shows the performance of the BERT-GTS baseline and our controlled equation generation method on Math23K for different numbers of given variables. As the number of variables grows, the problems become more complex, and the performance gap between our method and the baseline becomes more significant. Our method incorporates diverse equation expressions to help the model learn mathematical reasoning logic.
4 Conclusion and Future Work
In this paper, we introduce diverse mathematical reasoning logic into the seq2seq MWP solver framework, using five control codes to guide the solver to predict the corresponding equation expression in a controlled equation generation manner. The approach allows the solver to benefit from diverse reasoning logic beyond the human-annotated fixed solution equation. Meanwhile, the controlled equation generation training paradigm reduces the expression bias problem caused by diverse equation expressions. Experimental results show the effectiveness of our method, which outperforms strong baselines on single-unknown (Math23K) and multiple-unknown (DRAW-1K, HMWP) datasets.
Other controlled equation generation strategies exist, such as adding brackets to merge subtraction terms, or combining the current control codes to form new types of equation expressions, which could yield more than 10 controlled equation generation strategies. In addition, considering the predictions of multiple control codes besides <orig> could further improve performance, for example by applying ensemble methods such as majority voting, or by designing rankers to choose an optimal prediction among the predictions of the multiple control codes. We leave these directions to future work.
References

Cao et al. (2021). A bottom-up DAG structure extraction model for math word problems. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 39-46.
Guan et al. (2019). An improved coarse-to-fine method for solving generation tasks. In Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association, Sydney, Australia, pp. 178-185.
Keskar et al. (2019). CTRL: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
Lewis et al. (2020). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 7871-7880.
Li et al. (2022). Seeking patterns, not just memorizing procedures: Contrastive learning for solving math word problems. In Findings of the Association for Computational Linguistics: ACL 2022, pp. 2486-2496.
Liang et al. (2021). MWP-BERT: A strong baseline for math word problems. arXiv preprint arXiv:2107.13435.
Liu et al. (2020). Reverse operation based data augmentation for solving math word problems. arXiv preprint arXiv:2010.01556.
Liu et al. (2019). Tree-structured decoding for solving math word problems. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 2370-2379.
Meurer et al. (2017). SymPy: Symbolic computing in Python. PeerJ Computer Science 3, e103.
Qin et al. (2021). Neural-symbolic solver for math word problems with auxiliary tasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online, pp. 5870-5881.
Qin et al. (2020). Semantically-aligned universal tree-structured solver for math word problems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, pp. 3780-3789.
Shen et al. (2021). Generate & rank: A multi-task framework for math word problems. In Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 2269-2279.
Shen and Jin (2020). Solving math word problems with multi-encoders and multi-decoders. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), pp. 2924-2934.
Shin et al. (2020). AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In Empirical Methods in Natural Language Processing (EMNLP).
Tan et al. (2021). Investigating math word problems using pretrained multilingual language models. arXiv preprint arXiv:2105.08928.
Upadhyay and Chang (2017). Annotating derivations: A new evaluation strategy and dataset for algebra word problems. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Valencia, Spain, pp. 494-504.
Wang et al. (2018). Translating a math word problem to an expression tree. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 1064-1069.
Wang et al. (2018). MathDQN: Solving arithmetic word problems via deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.
Wang et al. (2019). Template-based math word problem solvers with recursive neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 7144-7151.
Wang et al. (2017). Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, pp. 845-854.
Xie and Sun (2019). A goal-driven tree-structured neural model for math word problems. In IJCAI, pp. 5299-5305.
Zhang et al. (2020). Teacher-student networks with multiple decoders for solving math word problem. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pp. 4011-4017.
Zhang et al. (2020). Graph-to-tree learning for solving math word problems. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 3928-3937.
Appendix A Experimental Details
We evaluate Math23K on the standard train/test split. DRAW-1K and HMWP are evaluated with 5-fold cross-validation.
For Math23K and DRAW-1K, we use the bert-base pretrained encoder. For HMWP, we use the pretrained encoder available at https://huggingface.co/yechen/bert-base-chinese.
For Math23K, the maximum text length is 256, the maximum equation decoding length is 45, the batch size is 16, and the number of epochs is 50. For DRAW-1K, the maximum text length is 256, the maximum equation decoding length is 32, the batch size is 16, and the number of epochs is 50. For HMWP, the maximum text length is 1024, the maximum equation decoding length is 100, the batch size is 8, and the number of epochs is 50. For all datasets, we use AdamW with a learning rate of 5e-5.
Experiments are conducted on NVIDIA RTX 3090 and A100 (80GB) GPUs. The runtime for the experiments is around 6 hours.