Abstract

Previous neural solvers of math word problems (MWPs) are learned with full supervision and fail to generate diverse solutions. In this paper, we address this issue by introducing a weakly-supervised paradigm for learning MWPs. Our method only requires annotations of the final answers and can generate various solutions for a single problem. To boost weakly-supervised learning, we propose a novel learning-by-fixing (LBF) framework, which corrects the misperceptions of the neural network via symbolic reasoning. Specifically, for an incorrect solution tree generated by the neural network, the fixing mechanism propagates the error from the root node to the leaf nodes and infers the most probable fix that can be executed to obtain the desired answer. To generate more diverse solutions, tree regularization is applied to guide the efficient shrinkage and exploration of the solution space, and a memory buffer is designed to track and save the various fixes discovered for each problem. Experimental results on the Math23K dataset show that the proposed LBF framework significantly outperforms reinforcement learning baselines in weakly-supervised learning. Furthermore, it achieves top-1 answer accuracy comparable to fully-supervised methods and much better top-3/5 accuracy, demonstrating its strength in producing diverse solutions.
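
To make the fixing mechanism concrete, below is a minimal Python sketch of the top-down fix search, not the authors' implementation: given a predicted binary expression tree and the gold answer, it either corrects the symbol at the current node or propagates the expected value down to one child and recurses, so that exactly one node is edited. All names (Node, execute, fix) are hypothetical; the actual LBF framework additionally restricts candidate symbols to those valid for the problem and ranks candidate fixes by the network's probability.

# Minimal sketch of the learning-by-fixing (LBF) fix search. Hypothetical
# names; the real system constrains fixes to problem-valid symbols and
# picks the most probable fix under the neural network's distribution.
from dataclasses import dataclass
from typing import Optional, Union

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

@dataclass
class Node:
    symbol: Union[str, float]      # an operator key in OPS, or a number
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def execute(node: Node) -> float:
    """Evaluate the expression tree bottom-up."""
    if node.left is None:          # leaf: a quantity from the problem text
        return float(node.symbol)
    return OPS[node.symbol](execute(node.left), execute(node.right))

def fix(node: Node, target: float, eps: float = 1e-6) -> bool:
    """Edit at most one node, in place, so the tree evaluates to `target`."""
    if abs(execute(node) - target) < eps:
        return True                # subtree already yields the target value
    if node.left is None:
        node.symbol = target       # fix a leaf by replacing its quantity
        return True
    lval, rval = execute(node.left), execute(node.right)
    # Option 1: keep both children and change the operator at this node.
    for op, f in OPS.items():
        if op == "/" and abs(rval) < eps:
            continue               # skip division by zero
        if abs(f(lval, rval) - target) < eps:
            node.symbol = op
            return True
    # Option 2: keep this node and propagate the expected value downward,
    # i.e. solve op(x, rval) = target for x, or op(lval, y) = target for y.
    op = node.symbol
    want_left = {"+": target - rval, "-": target + rval,
                 "*": target / rval if abs(rval) > eps else None,
                 "/": target * rval}[op]
    if want_left is not None and fix(node.left, want_left):
        return True
    want_right = {"+": target - lval, "-": lval - target,
                  "*": target / lval if abs(lval) > eps else None,
                  "/": lval / target if abs(target) > eps else None}[op]
    return want_right is not None and fix(node.right, want_right)

# Example: the network predicts 3 + 4 but the gold answer is 12; the fix
# search corrects the root operator, turning the tree into 3 * 4.
tree = Node("+", Node(3), Node(4))
assert fix(tree, 12.0) and abs(execute(tree) - 12.0) < 1e-6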

Paper
Learning by Fixing: Solving Math Word Problems with Weak Supervision
Yining Hong, Qing Li, Daniel Ciao, Siyuan Huang, Song-Chun Zhu.
Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021.
Paper / Supplementary / Code / Slides

Team

1 Department of Computer Science, UCLA

2 Department of Statistics, UCLA

Bibtex

@inproceedings{hong2021lbf,
  title={Learning by Fixing: Solving Math Word Problems with Weak Supervision},
  author={Hong, Yining and Li, Qing and Ciao, Daniel and Huang, Siyuan and Zhu, Song-Chun},
  booktitle={Thirty-Fifth AAAI Conference on Artificial Intelligence},
  year={2021}
}