Constructing Open Cloze Tests Using Generation and Discrimination Capabilities of Transformers

Mariano Felice, Shiva Taslimipoor, Paula Buttery


Abstract
This paper presents the first multi-objective transformer model for generating open cloze tests that exploits generation and discrimination capabilities to improve performance. Our model is further enhanced by tweaking its loss function and applying a post-processing re-ranking algorithm that improves overall test structure. Experiments using automatic and human evaluation show that our approach can achieve up to 82% accuracy according to experts, outperforming previous work and baselines. We also release a collection of high-quality open cloze tests along with sample system output and human annotations that can serve as a future benchmark.
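The abstract mentions a post-processing re-ranking step that improves overall test structure. The paper's actual algorithm is not reproduced here; as a purely illustrative sketch under assumed inputs, one simple way to shape a cloze test is to keep only high-scoring gap candidates while enforcing a minimum spacing between gaps (the function name `select_gaps` and the score format are hypothetical, not from the paper):

```python
# Illustrative sketch, NOT the authors' algorithm: greedily keep the
# highest-scoring cloze-gap candidates whose token positions are at
# least `min_spacing` tokens apart, up to `max_gaps` gaps in total.

def select_gaps(candidates, min_spacing=3, max_gaps=5):
    """candidates: list of (token_index, score) pairs; higher score
    means the model considers that position a better gap."""
    chosen = []
    for idx, score in sorted(candidates, key=lambda c: -c[1]):
        if all(abs(idx - kept) >= min_spacing for kept in chosen):
            chosen.append(idx)
        if len(chosen) == max_gaps:
            break
    return sorted(chosen)

# Example: positions 2 and 3 conflict, so only the better-scoring one survives.
cands = [(2, 0.9), (3, 0.85), (8, 0.7), (10, 0.6), (15, 0.4)]
print(select_gaps(cands, min_spacing=3, max_gaps=3))
```

This kind of spacing constraint is one plausible structural criterion; the paper's re-ranking may use entirely different features.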
Anthology ID:
2022.findings-acl.100
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1263–1273
URL:
https://aclanthology.org/2022.findings-acl.100
DOI:
10.18653/v1/2022.findings-acl.100
Cite (ACL):
Mariano Felice, Shiva Taslimipoor, and Paula Buttery. 2022. Constructing Open Cloze Tests Using Generation and Discrimination Capabilities of Transformers. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1263–1273, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Constructing Open Cloze Tests Using Generation and Discrimination Capabilities of Transformers (Felice et al., Findings 2022)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2022.findings-acl.100.pdf