Abstract
While neural networks produce state-of-the-art performance in several NLP tasks, they generally depend heavily on lexicalized information, which transfers poorly between domains. We present a combination of two strategies that mitigates this dependence on lexicalized information in fact verification tasks. First, we present a data distillation technique for delexicalization, which we then combine with a model distillation method to prevent aggressive data distillation. We show that with our solution, the performance of an existing state-of-the-art model not only remains on par with that of the same model trained on fully lexicalized data, but also exceeds it when tested out of domain. We show that the technique we present encourages models to extract transferable facts from a given fact verification dataset.
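The delexicalization step can be pictured as replacing entity mentions in claim and evidence text with a semantic type plus a per-type index, so a classifier learns from structure rather than from specific names. The sketch below is an illustrative reconstruction, not the authors' released code: it uses spaCy's coarse entity labels as a stand-in for finer-grained types such as FIGER (listed under Data below), and the function name `delexicalize` is our own.

```python
# Illustrative sketch only (not the paper's code): replace each entity mention
# with its entity type plus a per-type index, e.g. "Barack Obama" -> "PERSON-1".
# spaCy's coarse labels stand in for finer-grained types such as FIGER.
import spacy

nlp = spacy.load("en_core_web_sm")

def delexicalize(text: str) -> str:
    doc = nlp(text)
    counts: dict[str, int] = {}
    pieces, last = [], 0
    for ent in doc.ents:
        counts[ent.label_] = counts.get(ent.label_, 0) + 1
        pieces.append(text[last:ent.start_char])              # text before the entity
        pieces.append(f"{ent.label_}-{counts[ent.label_]}")   # typed placeholder
        last = ent.end_char
    pieces.append(text[last:])
    return "".join(pieces)

print(delexicalize("Barack Obama visited Paris in 2016."))
# -> e.g. "PERSON-1 visited GPE-1 in DATE-1."
```

The model distillation half of the approach, described in the abstract as a counterweight to overly aggressive data distillation, is a separate teacher-student training step and is not shown here.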
- Anthology ID:
- 2021.naacl-main.360
- Volume:
- Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
- Month:
- June
- Year:
- 2021
- Address:
- Online
- Venue:
- NAACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 4546–4552
- URL:
- https://aclanthology.org/2021.naacl-main.360
- DOI:
- 10.18653/v1/2021.naacl-main.360
- Cite (ACL):
- Mitch Paul Mithun, Sandeep Suntwal, and Mihai Surdeanu. 2021. Data and Model Distillation as a Solution for Domain-transferable Fact Verification. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4546–4552, Online. Association for Computational Linguistics.
- Cite (Informal):
- Data and Model Distillation as a Solution for Domain-transferable Fact Verification (Mithun et al., NAACL 2021)
- PDF:
- https://preview.aclanthology.org/ingestion-script-update/2021.naacl-main.360.pdf
- Data
- FIGER