Abstract
Morphological inflection, like many sequence-to-sequence tasks, sees great performance from recurrent neural architectures when data is plentiful, but performance falls off sharply in lower-data settings. We investigate one aspect of neural seq2seq models that we hypothesize contributes to overfitting: teacher forcing. Because teacher forcing creates a mismatch between training and test conditions, the resulting exposure bias increases the likelihood that a system models its training data too closely. Experiments show that teacher-forced models struggle to recover when they enter unknown territory. However, a simple modification to the training algorithm, one that more closely mimics test conditions, creates models that are better able to generalize to unseen environments.
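The abstract contrasts teacher forcing (always feeding the gold symbol back into the decoder) with a training regime that more closely mimics test conditions, but does not spell out the modification here. The sketch below is an illustrative assumption of one such modification, a scheduled-sampling-style decoder loop in PyTorch, not the authors' implementation; the `Decoder` module, `decode_with_sampling` helper, and all hyperparameters are hypothetical.

```python
# A minimal sketch (not the paper's code) contrasting teacher forcing with a
# scheduled-sampling-style decoder loop.  All names and sizes are illustrative.
import random
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRUCell(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

    def step(self, prev_token, hidden):
        hidden = self.rnn(self.embed(prev_token), hidden)
        return self.out(hidden), hidden  # logits over the vocabulary, new state

def decode_with_sampling(decoder, hidden, target, sample_prob: float):
    """Run the decoder over `target` (batch x time gold token ids).

    sample_prob = 0.0 reproduces pure teacher forcing: the gold symbol is
    always fed back.  sample_prob > 0.0 feeds the model's own prediction
    with that probability, exposing it to its own (possibly noisy) history.
    """
    loss_fn = nn.CrossEntropyLoss()
    batch, length = target.shape
    prev = target[:, 0]                              # assume position 0 is BOS
    loss = 0.0
    for t in range(1, length):
        logits, hidden = decoder.step(prev, hidden)
        loss = loss + loss_fn(logits, target[:, t])
        if random.random() < sample_prob:
            prev = logits.argmax(dim=-1).detach()    # feed the model's own guess
        else:
            prev = target[:, t]                      # feed the gold symbol
    return loss / (length - 1)

if __name__ == "__main__":
    vocab, hidden_size = 40, 128
    dec = Decoder(vocab, hidden_size)
    gold = torch.randint(0, vocab, (8, 12))          # toy batch of inflected forms
    h0 = torch.zeros(8, hidden_size)                 # stand-in for an encoder state
    print(decode_with_sampling(dec, h0, gold, sample_prob=0.25).item())
```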
- Anthology ID: 2020.coling-main.255
- Volume: Proceedings of the 28th International Conference on Computational Linguistics
- Month: December
- Year: 2020
- Address: Barcelona, Spain (Online)
- Editors: Donia Scott, Nuria Bel, Chengqing Zong
- Venue: COLING
- Publisher: International Committee on Computational Linguistics
- Pages: 2837–2846
- URL: https://aclanthology.org/2020.coling-main.255
- DOI: 10.18653/v1/2020.coling-main.255
- Cite (ACL): Garrett Nicolai and Miikka Silfverberg. 2020. Noise Isn’t Always Negative: Countering Exposure Bias in Sequence-to-Sequence Inflection Models. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2837–2846, Barcelona, Spain (Online). International Committee on Computational Linguistics.
- Cite (Informal): Noise Isn’t Always Negative: Countering Exposure Bias in Sequence-to-Sequence Inflection Models (Nicolai & Silfverberg, COLING 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2020.coling-main.255.pdf