Coping with Noisy Training Data Labels in Paraphrase Detection

Teemu Vahtola, Mathias Creutz, Eetu Sjöblom, Sami Itkonen


Abstract
We present new state-of-the-art benchmarks for paraphrase detection on all six languages in the Opusparcus sentential paraphrase corpus: English, Finnish, French, German, Russian, and Swedish. We reach these results by fine-tuning BERT. The best results are achieved on smaller and cleaner subsets of the training sets than were used in previous research. Additionally, we study a translation-based approach that is competitive for the languages with more limited and noisier training data.
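As a rough illustration of the kind of fine-tuning setup the abstract describes, the sketch below trains a BERT model for sentence-pair paraphrase classification with the HuggingFace transformers library. The model name, hyperparameters, and toy data are illustrative assumptions, not the authors' actual configuration or the Opusparcus data format.

```python
# Minimal sketch: fine-tuning BERT for binary paraphrase detection.
# All names and hyperparameters below are assumptions for illustration.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed model; the paper covers six languages, so a multilingual
# checkpoint is one plausible choice.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy stand-in for an Opusparcus-style subset: (sentence1, sentence2, label),
# where label 1 = paraphrase, 0 = non-paraphrase.
train_pairs = [
    ("How are you doing?", "How is it going?", 1),
    ("How are you doing?", "Close the door.", 0),
]

def collate(batch):
    # Encode the two sentences jointly as a single BERT input pair.
    s1, s2, labels = zip(*batch)
    enc = tokenizer(list(s1), list(s2), padding=True, truncation=True,
                    return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

loader = DataLoader(train_pairs, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        optimizer.zero_grad()
        # Cross-entropy loss over the paraphrase / non-paraphrase labels.
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
```

In this setup, selecting a smaller, cleaner training subset would amount to filtering `train_pairs` before training, which is the dimension the abstract reports varying.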
Anthology ID: 2021.wnut-1.32
Volume: Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
Month: November
Year: 2021
Address: Online
Editors: Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
Venue: WNUT
Publisher: Association for Computational Linguistics
Pages: 291–296
URL: https://aclanthology.org/2021.wnut-1.32
DOI: 10.18653/v1/2021.wnut-1.32
Cite (ACL): Teemu Vahtola, Mathias Creutz, Eetu Sjöblom, and Sami Itkonen. 2021. Coping with Noisy Training Data Labels in Paraphrase Detection. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 291–296, Online. Association for Computational Linguistics.
Cite (Informal): Coping with Noisy Training Data Labels in Paraphrase Detection (Vahtola et al., WNUT 2021)
PDF: https://aclanthology.org/2021.wnut-1.32.pdf
Data: OpenSubtitles, Opusparcus