Robust Training under Linguistic Adversity

Yitong Li, Trevor Cohn, Timothy Baldwin


Abstract
Deep neural networks have achieved remarkable results across many language processing tasks; however, they have been shown to be susceptible to overfitting and highly sensitive to noise, including adversarial attacks. In this work, we propose a linguistically-motivated approach for training robust models based on exposing the model to corrupted text examples at training time. We consider several flavours of linguistically plausible corruption, including lexical semantic and syntactic methods. Empirically, we evaluate our method with a convolutional neural model across a range of sentiment analysis datasets. Compared with a baseline and the dropout method, our method achieves better overall performance.
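The corruption-based training the abstract describes can be illustrated with a small sketch. This is not the authors' implementation: the `synonyms` table, the corruption rates, and the function name are all illustrative assumptions, standing in for the paper's linguistically-motivated corruptions (e.g. lexical substitution, token deletion).

```python
import random

def corrupt_sentence(tokens, synonyms, p_swap=0.1, p_drop=0.05, seed=None):
    """Return a noisy copy of a token list by randomly dropping tokens
    and replacing tokens with synonyms. Illustrative only: the real
    method uses linguistically-plausible corruptions, not these rates."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p_drop:
            continue  # drop this token entirely
        if r < p_drop + p_swap and tok in synonyms:
            out.append(rng.choice(synonyms[tok]))  # lexical substitution
        else:
            out.append(tok)  # keep token unchanged
    return out

# Hypothetical synonym table; a real system would use a lexical resource.
synonyms = {"great": ["excellent", "fantastic"], "movie": ["film"]}
tokens = "a great movie with a weak ending".split()
noisy = corrupt_sentence(tokens, synonyms, seed=0)
```

At training time, each minibatch would mix clean and corrupted versions of the input, so the model learns representations that are stable under this kind of noise.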
Anthology ID:
E17-2004
Volume:
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
Month:
April
Year:
2017
Address:
Valencia, Spain
Editors:
Mirella Lapata, Phil Blunsom, Alexander Koller
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
21–27
URL:
https://aclanthology.org/E17-2004
Cite (ACL):
Yitong Li, Trevor Cohn, and Timothy Baldwin. 2017. Robust Training under Linguistic Adversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 21–27, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal):
Robust Training under Linguistic Adversity (Li et al., EACL 2017)
PDF:
https://preview.aclanthology.org/ml4al-ingestion/E17-2004.pdf
Code
 lrank/Linguistic_adversity
Data
SST