Shivendra Bhardwaj
2021
Knowledge Distillation with Noisy Labels for Natural Language Understanding
Shivendra Bhardwaj | Abbas Ghaddar | Ahmad Rashid | Khalil Bibi | Chengyang Li | Ali Ghodsi | Philippe Langlais | Mehdi Rezagholizadeh
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
Knowledge Distillation (KD) is extensively used to compress and deploy large pre-trained language models on edge devices for real-world applications. However, one neglected area of research is the impact of noisy (corrupted) labels on KD. We present, to the best of our knowledge, the first study on KD with noisy labels in Natural Language Understanding (NLU). We document the scope of the problem and present two methods to mitigate the impact of label noise. Experiments on the GLUE benchmark show that our methods are effective even under high noise levels. Nevertheless, our results indicate that more research is necessary to cope with label noise in the KD setting.
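The abstract does not spell out the paper's two noise-mitigation methods, so for orientation only, here is a minimal sketch of the standard KD objective the setting builds on (a weighted sum of cross-entropy on the hard, possibly noisy labels and a temperature-scaled KL term against the teacher). The function name `kd_loss` and the defaults for `T` and `alpha` are illustrative assumptions, not the paper's configuration.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic KD objective (Hinton et al., 2015), not the paper's
    noise-mitigation methods: cross-entropy on the (possibly noisy)
    hard labels plus a KL term matching the teacher's soft predictions."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude is independent of T
    return alpha * ce + (1.0 - alpha) * kl
```

Under label noise, the cross-entropy term is the one fed corrupted supervision, which is why mitigation methods typically target how much weight it receives relative to the teacher's soft targets.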
2020
Human or Neural Translation?
Shivendra Bhardwaj | David Alfonso Hermelo | Philippe Langlais | Gabriel Bernier-Colborne | Cyril Goutte | Michel Simard
Proceedings of the 28th International Conference on Computational Linguistics
Deep neural models have tremendously improved machine translation. In this context, we investigate whether distinguishing machine from human translations is still feasible. We trained and applied 18 classifiers under two settings: a monolingual task, in which the classifier only sees the translation; and a bilingual task, in which the source text is also taken into consideration. We report on extensive experiments involving 4 neural MT systems (Google Translate, DeepL, as well as two systems we trained) and varying text domains. We show that the bilingual task is the easier one and that transfer-based deep-learning classifiers perform best, with mean accuracies around 85% in-domain and 75% out-of-domain.
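The abstract does not describe the 18 classifiers in detail; as a rough illustration of the bilingual, transfer-based setting, here is a hedged sketch that encodes the (source, translation) pair with a generic pre-trained encoder. The checkpoint choice, the label convention, and the example sentences are all assumptions, and the classification head is randomly initialized until fine-tuned on human/machine data, so its raw outputs are meaningless as-is.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint; any pre-trained encoder fine-tuned for the task would do.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2  # assumed: 0 = human, 1 = machine
)

source = "Le chat dort sur le canapé."
translation = "The cat is sleeping on the couch."

# Bilingual task: encode source and translation together as a sentence pair;
# the monolingual task would encode the translation alone.
inputs = tokenizer(source, translation, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("P(machine) =", logits.softmax(dim=-1)[0, 1].item())
```

Encoding the pair jointly lets the classifier attend across languages, which is consistent with the finding that the bilingual task is easier than judging the translation in isolation.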
Co-authors
- Philippe Langlais 2
- Abbas Ghaddar 1
- Ahmad Rashid 1
- Khalil Bibi 1
- Chengyang Li 1