Few-Shot and Zero-Shot Learning for Historical Text Normalization

Marcel Bollmann, Natalia Korchagina, Anders Søgaard


Abstract
Historical text normalization often relies on small training datasets. Recent work has shown that multi-task learning can lead to significant improvements by exploiting synergies with related datasets, but there has been no systematic study of different multi-task learning architectures. This paper evaluates 63 multi-task learning configurations for sequence-to-sequence-based historical text normalization across ten datasets from eight languages, using autoencoding, grapheme-to-phoneme mapping, and lemmatization as auxiliary tasks. We observe consistent, significant improvements across languages when training data for the target task is limited, but minimal or no improvements when training data is abundant. We also show that zero-shot learning outperforms the simple, but relatively strong, identity baseline.
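
The abstract refers to an "identity baseline", i.e. simply copying each historical token unchanged as its own normalization. As a minimal sketch (not the authors' evaluation code; the function name and toy data below are purely illustrative), word-level accuracy of that baseline is just the fraction of tokens whose gold normalization already equals the historical form:

```python
from typing import Iterable, Tuple

def identity_baseline_accuracy(pairs: Iterable[Tuple[str, str]]) -> float:
    """Word-level accuracy of the identity baseline.

    Each pair is (historical_form, gold_normalization). The baseline
    "predicts" the historical form unchanged, so it is correct exactly
    when the two strings already match.
    """
    pairs = list(pairs)
    if not pairs:
        return 0.0
    correct = sum(1 for hist, norm in pairs if hist == norm)
    return correct / len(pairs)

# Illustrative toy examples (not taken from the paper's datasets):
dev_pairs = [
    ("vnnd", "und"),   # historical spelling differs from the modern form
    ("jn", "in"),
    ("haus", "haus"),  # already identical, so counted correct by the baseline
]
print(f"Identity baseline accuracy: {identity_baseline_accuracy(dev_pairs):.2f}")
```

Because many historical word forms are already identical to their modern normalizations, this baseline is relatively strong, which is why outperforming it in the zero-shot setting is a meaningful result.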
Anthology ID: D19-6112
Volume: Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)
Month: November
Year: 2019
Address: Hong Kong, China
Editors: Colin Cherry, Greg Durrett, George Foster, Reza Haffari, Shahram Khadivi, Nanyun Peng, Xiang Ren, Swabha Swayamdipta
Venue: WS
Publisher: Association for Computational Linguistics
Pages: 104–114
URL: https://aclanthology.org/D19-6112
DOI: 10.18653/v1/D19-6112
Cite (ACL): Marcel Bollmann, Natalia Korchagina, and Anders Søgaard. 2019. Few-Shot and Zero-Shot Learning for Historical Text Normalization. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 104–114, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal): Few-Shot and Zero-Shot Learning for Historical Text Normalization (Bollmann et al., 2019)
PDF: https://aclanthology.org/D19-6112.pdf