Anthony Tomasic


2021

A Task-Oriented Dialogue Architecture via Transformer Neural Language Models and Symbolic Injection
Oscar J. Romero | Antian Wang | John Zimmerman | Aaron Steinfeld | Anthony Tomasic
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue

Recently, transformer language models have been applied to build both task- and non-task-oriented dialogue systems. Although transformers perform well on most NLP tasks, they perform poorly on context retrieval and symbolic reasoning. Our work addresses this limitation by embedding the model in an operational loop that blends natural language generation with symbolic injection. We evaluated our system on the multi-domain DSTC8 data set and report a joint goal accuracy of 75.8% (placing in the top half of reported systems), an intent accuracy of 97.4% (higher than previously reported results), and a 15% improvement in success rate over a baseline without symbolic injection. These promising results suggest that transformer language models can generate not only proper system responses but also symbolic representations that can enhance the overall quality of dialogue management and serve as scaffolding for complex conversational reasoning.
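The operational loop the abstract describes can be sketched in a few lines. The following Python sketch is purely illustrative, not the authors' code: a language model first emits a symbolic belief state, a symbolic step queries a backend, and the query result is injected back into the model's context before the response is generated. `lm_generate`, the `act(slot=value)` belief format, and the toy database are all assumptions made for the example.

```python
# Minimal sketch of a symbolic-injection loop for task-oriented dialogue.
# lm_generate() is a stand-in for any causal LM call (e.g., a fine-tuned
# GPT-2 via model.generate); canned outputs keep the sketch runnable offline.

def lm_generate(prompt: str) -> str:
    """Hypothetical wrapper around a transformer language model."""
    if prompt.endswith("Belief:"):
        return "inform(cuisine=italian, area=center)"
    return "Trattoria Roma serves Italian food in the center. Shall I book it?"

def parse_belief_state(text: str) -> dict:
    """Parse 'act(slot=value, ...)' into a slot/value dict."""
    inside = text[text.index("(") + 1 : text.rindex(")")]
    return dict(pair.split("=") for pair in (p.strip() for p in inside.split(",")))

# Toy in-memory database standing in for the symbolic backend.
RESTAURANTS = [
    {"name": "Trattoria Roma", "cuisine": "italian", "area": "center"},
    {"name": "Sushi Bar", "cuisine": "japanese", "area": "north"},
]

def dialogue_turn(user_utterance: str, history: list) -> str:
    history.append(f"User: {user_utterance}")
    # 1. The LM predicts a symbolic belief state from the dialogue so far.
    state = parse_belief_state(lm_generate("\n".join(history) + "\nBelief:"))
    # 2. Symbolic step: query the database with the predicted state.
    matches = [r for r in RESTAURANTS
               if all(r.get(k) == v for k, v in state.items())]
    # 3. Inject the symbolic result into the context, then let the LM
    #    verbalize the final system response.
    history.append(f"DB: {matches[0]['name'] if matches else 'no match'}")
    return lm_generate("\n".join(history) + "\nSystem:")

print(dialogue_turn("I want Italian food downtown.", []))
```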

2018

Retrieval-Based Neural Code Generation
Shirley Anugrah Hayati | Raphael Olivier | Pravalika Avvaru | Pengcheng Yin | Anthony Tomasic | Graham Neubig
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

In models that generate program source code from natural language, representing the code as a tree structure has been a common approach. However, existing methods often fail to generate complex code correctly because they struggle to memorize large and complex structures. We introduce RECODE, a subtree-retrieval method that lets a neural code generation model explicitly reference existing code examples. First, we retrieve training sentences similar to the input sentence using a dynamic-programming-based sentence similarity score. Next, we extract n-grams from the action sequences that build the associated abstract syntax trees. Finally, we increase the probability of actions that cause the retrieved n-gram action subtrees to appear in the predicted code. We show that our approach improves performance on two code generation tasks by up to +2.6 BLEU.
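As a rough illustration of the retrieve-then-bias idea, the sketch below (hypothetical, not the released RECODE code) retrieves the most similar training sentence, collects action n-grams from its AST-building sequence, and boosts matching next actions at decoding time. The toy action vocabulary, `difflib.SequenceMatcher` as the dynamic-programming similarity, and the `lam` boost weight are all assumptions for the example.

```python
# Illustrative sketch of retrieval-biased code generation in the spirit of
# RECODE. The action encoding, retriever, and boost weight are assumptions.

from difflib import SequenceMatcher

# Toy training set: (NL sentence, action sequence that builds the AST).
TRAIN = [
    ("sort the list x", ["Call", "Name:sorted", "Arg", "Name:x"]),
    ("reverse the list x", ["Call", "Name:reversed", "Arg", "Name:x"]),
]

def retrieve(query: str, k: int = 1):
    """Return the k training examples most similar to the query.
    SequenceMatcher's ratio (itself computed by dynamic programming)
    stands in for the paper's sentence similarity score."""
    return sorted(TRAIN,
                  key=lambda ex: -SequenceMatcher(None, query, ex[0]).ratio())[:k]

def extract_ngrams(actions, n: int = 2):
    """All contiguous action n-grams from one action sequence."""
    return {tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)}

def rerank(base_probs: dict, prefix: list, ngrams: set, lam: float = 2.0):
    """Boost actions that would complete a retrieved action bigram,
    then renormalize the distribution."""
    boosted = {}
    for action, p in base_probs.items():
        bigram = (prefix[-1], action) if prefix else None
        boosted[action] = p * (1 + lam) if bigram in ngrams else p
    z = sum(boosted.values())
    return {a: p / z for a, p in boosted.items()}

# Usage: the retrieved example flips the decoder toward 'sorted'.
ngrams = set().union(*(extract_ngrams(acts) for _, acts in retrieve("sort the list y")))
print(rerank({"Name:sorted": 0.4, "Name:reversed": 0.6}, ["Call"], ngrams))
```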

2006

NER Systems that Suit User’s Preferences: Adjusting the Recall-Precision Trade-off for Entity Extraction
Einat Minkov | Richard Wang | Anthony Tomasic | William Cohen
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers