Ulrike Pado

Also published as: Ulrike Padó


2023

Working at your own Pace: Computer-based Learning for CL
Anselm Knebusch | Ulrike Padó
Proceedings of the 1st Workshop on Teaching for NLP

2022

A Transformer for SAG: What Does it Grade?
Nico Willms | Ulrike Pado
Proceedings of the 11th Workshop on NLP for Computer Assisted Language Learning

2019

Summarization Evaluation meets Short-Answer Grading
Margot Mieskes | Ulrike Padó
Proceedings of the 8th Workshop on NLP for Computer Assisted Language Learning

2018

Work Smart – Reducing Effort in Short-Answer Grading
Margot Mieskes | Ulrike Padó
Proceedings of the 7th Workshop on NLP for Computer Assisted Language Learning

2017

Question Difficulty – How to Estimate Without Norming, How to Use for Automated Grading
Ulrike Padó
Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications

Question difficulty estimates guide test creation, but are too costly for small-scale testing. We empirically verify that Bloom’s Taxonomy, a standard tool for difficulty estimation during question creation, reliably predicts question difficulty observed after testing in a short-answer corpus. We also find that difficulty is mirrored in the amount of variation in student answers, which can be computed before grading. We show that question difficulty and its approximations are useful for automated grading, allowing us to identify the optimal feature set for grading each question even in an unseen-question setting.

2016

Get Semantic With Me! The Usefulness of Different Feature Types for Short-Answer Grading
Ulrike Padó
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Automated short-answer grading is key to help close the automation loop for large-scale, computerised testing in education. A wide range of features on different levels of linguistic processing has been proposed so far. We investigate the relative importance of the different types of features across a range of standard corpora (both from a language skill and content assessment context, in English and in German). We find that features on the lexical, text similarity and dependency level often suffice to approximate full-model performance. Features derived from semantic processing particularly benefit the linguistically more varied answers in content assessment corpora.

2015

Short Answer Grading: When Sorting Helps and When it Doesn’t
Ulrike Pado | Cornelia Kiefer
Proceedings of the Fourth Workshop on NLP for Computer-Assisted Language Learning

2010

A Flexible, Corpus-Driven Model of Regular and Inverse Selectional Preferences
Katrin Erk | Sebastian Padó | Ulrike Padó
Computational Linguistics, Volume 36, Issue 4 - December 2010

2009

Automated Assessment of Spoken Modern Standard Arabic
Jian Cheng | Jared Bernstein | Ulrike Pado | Masanori Suzuki
Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications

2007

Flexible, Corpus-Based Modelling of Human Plausibility Judgements
Sebastian Padó | Ulrike Padó | Katrin Erk
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

Modelling Semantic Role Plausibility in Human Sentence Processing
Ulrike Padó | Matthew Crocker | Frank Keller
11th Conference of the European Chapter of the Association for Computational Linguistics