Improving Mathematics Tutoring With A Code Scratchpad

Shriyash Upadhyay, Etan Ginsberg, Chris Callison-Burch


Abstract
Large language models can solve reasoning tasks (like math problems) more effectively when they are allowed to generate rationales. However, a good tutoring system should not just generate solutions: it should also generate explanations and be able to correct and guide students. We show that providing a code scratchpad improves performance on each tutoring step on a grade-school mathematics dataset. On these tutoring tasks, GPT-3 models provided with a code scratchpad significantly outperform those given only a language scratchpad (77.7% vs. 48.7% cumulative accuracy).
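To make the idea concrete, here is a minimal sketch of what a "code scratchpad" looks like in practice, as opposed to a free-text language scratchpad. This is an illustrative example, not the authors' implementation: the problem text and the generated scratchpad are hypothetical, and a real system would have the model produce the scratchpad rather than hard-coding it.

```python
# Hypothetical grade-school word problem (GSM8K-style).
problem = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did she sell altogether?"
)

# With a *code* scratchpad, the model emits executable Python instead of
# free-text reasoning; running the code yields the answer, so arithmetic
# errors in the rationale are avoided.
scratchpad = """
april = 48
may = april // 2      # half as many as in April
answer = april + may  # total across both months
"""

namespace = {}
exec(scratchpad, namespace)  # execute the generated scratchpad
print(namespace["answer"])   # -> 72
```

Because the final answer is computed by the interpreter rather than predicted token-by-token, the model only needs to get the problem setup right, not the arithmetic.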
Anthology ID:
2023.bea-1.2
Volume:
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
20–28
URL:
https://aclanthology.org/2023.bea-1.2
DOI:
10.18653/v1/2023.bea-1.2
Cite (ACL):
Shriyash Upadhyay, Etan Ginsberg, and Chris Callison-Burch. 2023. Improving Mathematics Tutoring With A Code Scratchpad. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 20–28, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Improving Mathematics Tutoring With A Code Scratchpad (Upadhyay et al., BEA 2023)
PDF:
https://aclanthology.org/2023.bea-1.2.pdf