Ulrike Pado
Also published as: Ulrike Padó
2023
Working at your own Pace: Computer-based Learning for CL
Anselm Knebusch | Ulrike Padó
Proceedings of the 1st Workshop on Teaching for NLP
2022
A Transformer for SAG: What Does it Grade?
Nico Willms | Ulrike Pado
Proceedings of the 11th Workshop on NLP for Computer Assisted Language Learning
2019
Summarization Evaluation meets Short-Answer Grading
Margot Mieskes | Ulrike Padó
Proceedings of the 8th Workshop on NLP for Computer Assisted Language Learning
2018
Work Smart - Reducing Effort in Short-Answer Grading
Margot Mieskes | Ulrike Padó
Proceedings of the 7th Workshop on NLP for Computer Assisted Language Learning
2017
Question Difficulty – How to Estimate Without Norming, How to Use for Automated Grading
Ulrike Padó
Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications
Question difficulty estimates guide test creation, but are too costly for small-scale testing. We empirically verify that Bloom’s Taxonomy, a standard tool for difficulty estimation during question creation, reliably predicts question difficulty observed after testing in a short-answer corpus. We also find that difficulty is mirrored in the amount of variation in student answers, which can be computed before grading. We show that question difficulty and its approximations are useful for automated grading, allowing us to identify the optimal feature set for grading each question even in an unseen-question setting.
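As an illustration of the idea that answer variation can be computed before grading, the following is a minimal sketch, not taken from the paper: it measures variation as mean pairwise TF-IDF cosine dissimilarity among student answers to one question. The metric, the answer_variation helper and the example answers are assumptions made for this sketch.

```python
# Hypothetical sketch: approximate question difficulty via answer variation,
# measured here as mean pairwise TF-IDF cosine dissimilarity among student
# answers to one question. The concrete metric is an illustrative assumption,
# not necessarily the measure used in the paper.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def answer_variation(student_answers: list[str]) -> float:
    """Return mean pairwise dissimilarity (0 = identical answers, 1 = disjoint)."""
    if len(student_answers) < 2:
        return 0.0
    tfidf = TfidfVectorizer().fit_transform(student_answers)
    sims = cosine_similarity(tfidf)
    pairs = list(combinations(range(len(student_answers)), 2))
    mean_sim = sum(sims[i, j] for i, j in pairs) / len(pairs)
    return 1.0 - mean_sim


# Example (invented answers): higher values suggest more heterogeneous answers,
# which the abstract links to higher question difficulty.
answers = [
    "A lexicon maps word forms to their possible analyses.",
    "It stores words together with their morphological information.",
    "No idea, maybe something about grammar rules?",
]
print(f"Estimated answer variation: {answer_variation(answers):.2f}")
```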
2016
Get Semantic With Me! The Usefulness of Different Feature Types for Short-Answer Grading
Ulrike Padó
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
Automated short-answer grading is key to help close the automation loop for large-scale, computerised testing in education. A wide range of features on different levels of linguistic processing has been proposed so far. We investigate the relative importance of the different types of features across a range of standard corpora (both from a language skill and content assessment context, in English and in German). We find that features on the lexical, text similarity and dependency level often suffice to approximate full-model performance. Features derived from semantic processing particularly benefit the linguistically more varied answers in content assessment corpora.
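To make the notion of a text-similarity feature concrete, here is a minimal sketch, assuming TF-IDF cosine similarity between a student answer and a reference answer as one such feature; it is not the paper's feature set, and the similarity_feature helper and example answers are invented for illustration.

```python
# Hypothetical sketch: one text-similarity feature of the kind discussed in the
# abstract, computed as TF-IDF cosine similarity between a student answer and
# the reference answer. In a full grader this value would be combined with
# lexical, dependency and semantic features in a classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def similarity_feature(student_answer: str, reference_answer: str) -> float:
    """Cosine similarity between TF-IDF vectors of the two answers."""
    tfidf = TfidfVectorizer().fit_transform([student_answer, reference_answer])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])


reference = "The parser builds a syntactic tree from the token sequence."
student = "A parser constructs a syntax tree for the tokens."
print(f"Similarity feature: {similarity_feature(student, reference):.2f}")
```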
2015
Short Answer Grading: When Sorting Helps and When it Doesn’t
Ulrike Pado | Cornelia Kiefer
Proceedings of the Fourth Workshop on NLP for Computer-Assisted Language Learning
2010
A Flexible, Corpus-Driven Model of Regular and Inverse Selectional Preferences
Katrin Erk | Sebastian Padó | Ulrike Padó
Computational Linguistics, Volume 36, Issue 4 - December 2010
2009
Automated Assessment of Spoken Modern Standard Arabic
Jian Cheng | Jared Bernstein | Ulrike Pado | Masanori Suzuki
Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications
2007
Flexible, Corpus-Based Modelling of Human Plausibility Judgements
Sebastian Padó | Ulrike Padó | Katrin Erk
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)