Kaiser Sun
2023
Tokenization Consistency Matters for Generative Models on Extractive NLP Tasks
Kaiser Sun | Peng Qi | Yuhao Zhang | Lan Liu | William Wang | Zhiheng Huang
Findings of the Association for Computational Linguistics: EMNLP 2023
Generative models have been widely applied to solve extractive tasks, where parts of the input are extracted to form the desired output, and have achieved significant success. For example, in extractive question answering (QA), generative models have consistently yielded state-of-the-art results. In this work, we study the issue of tokenization inconsistency that is commonly neglected in training these models. This issue damages the extractive nature of these tasks when the input and output are tokenized inconsistently by the tokenizer, and thus leads to performance drops as well as hallucination. We propose a simple yet effective fix to this issue and conduct a case study on extractive QA. We show that, with consistent tokenization, the model performs better on both in-domain and out-of-domain datasets, with a notable average gain of +1.7 F1 when a BART model is trained on SQuAD and evaluated on 8 QA datasets. Furthermore, the model converges faster and becomes less likely to generate out-of-context answers. Our results demonstrate the need for increased scrutiny regarding how tokenization is done in extractive tasks and the benefits of consistent tokenization during training.
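For illustration, a minimal sketch of the kind of inconsistency the abstract describes, assuming a Hugging Face byte-level BPE tokenizer (facebook/bart-base is used here only as an example; the helper function is illustrative, not from the paper). Tokenizing the answer string on its own can yield token IDs that never appear as a contiguous span of the tokenized context, whereas slicing the answer tokens out of the already tokenized context keeps the target extractive by construction.

```python
from transformers import AutoTokenizer

# Example tokenizer; any BPE tokenizer with space-sensitive tokens shows the same effect.
tok = AutoTokenizer.from_pretrained("facebook/bart-base")

context = "The concert took place in New York City last night."
answer = "New York City"

ctx_ids = tok.encode(context, add_special_tokens=False)

# Inconsistent: the answer is tokenized on its own, so its first word loses the
# leading space it had inside the context and maps to a different token ID.
ans_ids_standalone = tok.encode(answer, add_special_tokens=False)

def is_contiguous_subsequence(span, seq):
    """Check whether `span` occurs as a contiguous slice of `seq`."""
    return any(seq[i:i + len(span)] == span for i in range(len(seq) - len(span) + 1))

print(is_contiguous_subsequence(ans_ids_standalone, ctx_ids))  # typically False

# Consistent: locate the answer's character span in the context and take the
# corresponding token slice from the tokenized context instead.
start_char = context.index(answer)
end_char = start_char + len(answer)
enc = tok(context, return_offsets_mapping=True, add_special_tokens=False)
ans_ids_consistent = [
    tid
    for tid, (s, e) in zip(enc["input_ids"], enc["offset_mapping"])
    if s < end_char and e > start_char  # keep tokens overlapping the answer span
]
print(is_contiguous_subsequence(ans_ids_consistent, ctx_ids))  # True by construction
```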
The Validity of Evaluation Results: Assessing Concurrence Across Compositionality Benchmarks
Kaiser Sun | Adina Williams | Dieuwke Hupkes
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
NLP models have progressed drastically in recent years, according to numerous datasets proposed to evaluate performance. Questions remain, however, about how particular dataset design choices may impact the conclusions we draw about model capabilities. In this work, we investigate this question in the domain of compositional generalization. We examine the performance of six modeling approaches across 4 datasets, split according to 8 compositional splitting strategies, ranking models by 18 compositional generalization splits in total. Our results show that: i) the datasets, although all designed to evaluate compositional generalization, rank modeling approaches differently; ii) datasets generated by humans align better with each other than with synthetic datasets, or than synthetic datasets among themselves; iii) generally, whether datasets are sampled from the same source is more predictive of the resulting model ranking than whether they maintain the same interpretation of compositionality; and iv) specific lexical items in a dataset impact measurement consistency. Overall, our results demonstrate that much work remains to be done when it comes to assessing whether popular evaluation datasets measure what they intend to measure, and suggest that elucidating more rigorous standards for establishing the validity of evaluation sets could benefit the field.
2021
Effective Attention Sheds Light On Interpretability
Kaiser Sun | Ana Marasović
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021