Edward Gehringer


2023

Labels are not necessary: Assessing peer-review helpfulness using domain adaptation based on self-training
Chengyuan Liu | Divyang Doshi | Muskaan Bhargava | Ruixuan Shang | Jialin Cui | Dongkuan Xu | Edward Gehringer
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)

A peer-assessment system allows students to provide feedback on each other’s work. An effective peer-assessment system requires helpful reviews that enable students to improve and make progress. Automated evaluation of review helpfulness, using deep learning models and natural language processing techniques, has gained much interest in the field of peer assessment. However, collecting labeled data with the “helpfulness” tag to build these prediction models remains challenging. A straightforward solution would be to use a supervised learning algorithm to train a prediction model on a similar domain and then apply it to our peer-review domain for inference. But doing so naively can degrade model performance in the presence of a distributional gap between domains. Such a gap can be effectively addressed by Domain Adaptation (DA), and self-training has recently been shown to be a powerful branch of DA for this purpose. The first goal of this study is to evaluate the performance of self-training-based DA in predicting the helpfulness of peer reviews, as well as its ability to overcome the distributional gap. Our second goal is to propose an advanced self-training framework that overcomes the weaknesses of existing self-training by incorporating knowledge distillation and noise injection, further improving model performance and better addressing the distributional gap.
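The self-training recipe the abstract describes can be pictured as a pseudo-labeling loop. The sketch below is a minimal illustration rather than the authors’ implementation: the teacher/student models, confidence threshold, Gaussian noise injection, and temperature-scaled distillation loss are all assumptions chosen to make the idea concrete.

```python
# Minimal self-training sketch for domain adaptation (hypothetical setup;
# teacher/student are any torch modules mapping feature tensors to logits).
import torch
import torch.nn.functional as F

def self_train_epoch(teacher, student, target_unlabeled, optimizer,
                     conf_threshold=0.9, noise_std=0.1, temperature=2.0):
    """Teacher pseudo-labels unlabeled target-domain reviews; the student
    trains on confident pseudo-labels, with noise injected into its inputs
    and a distillation term on the teacher's soft predictions."""
    teacher.eval()
    student.train()
    for x_tgt in target_unlabeled:                 # target-domain batches
        with torch.no_grad():
            soft = F.softmax(teacher(x_tgt) / temperature, dim=-1)
        conf, pseudo = soft.max(dim=-1)
        keep = conf > conf_threshold               # keep confident examples only
        if keep.sum() == 0:
            continue
        noisy = x_tgt[keep] + noise_std * torch.randn_like(x_tgt[keep])
        logits = student(noisy)
        # Soft targets (knowledge distillation) plus hard pseudo-label loss
        loss = (temperature ** 2) * F.kl_div(
            F.log_softmax(logits / temperature, dim=-1),
            soft[keep], reduction="batchmean")
        loss = loss + F.cross_entropy(logits, pseudo[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In a full pipeline, the teacher would first be trained on the labeled source domain (e.g., a product-review helpfulness dataset), and the student would periodically replace the teacher so that pseudo-labels improve over successive rounds.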

2022

Starting from “Zero”: An Incremental Zero-shot Learning Approach for Assessing Peer Feedback Comments
Qinjin Jia | Yupeng Cao | Edward Gehringer
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)

Peer assessment is an effective and efficient pedagogical strategy for delivering feedback to learners. Asking students to provide quality feedback, which offers suggestions and identifies problems, can promote metacognition in reviewers and better assist reviewees in revising their work. Thus, various supervised machine learning algorithms have been proposed to detect quality feedback. However, all these powerful algorithms share the same Achilles’ heel: a reliance on sufficient historical data. In other words, collecting adequate peer feedback to train a supervised algorithm can take several semesters before the model can be deployed to a new class. In this paper, we present a new paradigm, called incremental zero-shot learning (IZSL), to tackle the problem of insufficient historical data. Our results show that the method achieves acceptable “cold-start” performance without any domain data, and that it outperforms BERT when trained on the same incrementally collected data.
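The “cold-start” step described above can be approximated with an off-the-shelf entailment-based zero-shot classifier. The snippet below is a hedged stand-in, not the paper’s IZSL method: the NLI model, candidate labels, and example comment are all illustrative assumptions.

```python
# Hypothetical zero-shot "cold start" for feedback classification, using a
# standard NLI-based zero-shot pipeline (a stand-in, not the IZSL model).
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

comment = "Consider adding a test for the empty-input case in your parser."
labels = ["contains a suggestion", "mentions a problem", "neither"]

result = classifier(comment, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))
```

Once a few labeled comments accumulate each semester, the incremental part of such a paradigm would fine-tune on the growing labeled pool rather than retraining from scratch, which is what allows it to overtake a BERT model trained on the same data.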

2013

Graph-Structures Matching for Review Relevance Identification
Lakshmi Ramachandran | Edward Gehringer
Proceedings of TextGraphs-8: Graph-based Methods for Natural Language Processing