Structurally Diverse Sampling for Sample-Efficient Training and Comprehensive Evaluation

Shivanshu Gupta, Sameer Singh, Matt Gardner


Abstract
A growing body of research has demonstrated the inability of NLP models to generalize compositionally and has tried to alleviate this through specialized architectures, training schemes, and data augmentation, among other approaches. In this work, we study a different approach: training on instances with diverse structures. We propose a model-agnostic algorithm for subsampling such sets of instances from a labeled instance pool with structured outputs. Evaluating on both compositional template splits and traditional IID splits of 5 semantic parsing datasets of varying complexity, we show that structurally diverse training using our algorithm leads to comparable or better generalization than prior algorithms in 9 out of 10 dataset-split type pairs. In general, we find structural diversity to consistently improve sample efficiency compared to random train sets. Moreover, we show that structurally diverse sampling yields comprehensive test sets that are far more challenging than IID test sets. Finally, we provide two explanations for improved generalization from diverse train sets: 1) improved coverage of output substructures, and 2) a reduction in spurious correlations between these substructures.
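The paper's exact algorithm is described in the full text; as a minimal sketch of the general idea only, the snippet below greedily selects instances that maximize coverage of not-yet-seen output substructures. The substructure definition here (token n-grams of the output) and the example pool are illustrative assumptions, not the authors' formulation.

```python
def substructures(output, n=2):
    """Illustrative proxy for output substructures: unigrams and n-grams
    of the structured output's tokens."""
    toks = output.split()
    ngrams = {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    return ngrams | {(t,) for t in toks}

def diverse_sample(pool, k, n=2):
    """Greedily pick k (input, output) pairs from the pool, each time
    choosing the instance that covers the most unseen substructures."""
    covered, chosen = set(), []
    remaining = list(range(len(pool)))
    for _ in range(min(k, len(pool))):
        best = max(remaining,
                   key=lambda i: len(substructures(pool[i][1], n) - covered))
        covered |= substructures(pool[best][1], n)
        chosen.append(pool[best])
        remaining.remove(best)
    return chosen

# Hypothetical semantic-parsing pool: (utterance, logical form) pairs.
pool = [
    ("list flights", "( list flight )"),
    ("list cheap flights", "( list ( filter flight cheap ) )"),
    ("book a flight", "( book flight )"),
    ("list flights again", "( list flight )"),
]
train = diverse_sample(pool, 2)
```

On this toy pool, the sampler first picks the structurally richest instance and then the one introducing the most new substructures, skipping the duplicate-template entries that a random sample might include.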
Anthology ID:
2022.findings-emnlp.365
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4966–4979
URL:
https://aclanthology.org/2022.findings-emnlp.365
Cite (ACL):
Shivanshu Gupta, Sameer Singh, and Matt Gardner. 2022. Structurally Diverse Sampling for Sample-Efficient Training and Comprehensive Evaluation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4966–4979, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Structurally Diverse Sampling for Sample-Efficient Training and Comprehensive Evaluation (Gupta et al., Findings 2022)
PDF:
https://preview.aclanthology.org/starsem-semeval-split/2022.findings-emnlp.365.pdf
Video:
https://preview.aclanthology.org/starsem-semeval-split/2022.findings-emnlp.365.mp4