Learning to Segment Inputs for NMT Favors Character-Level Processing

Julia Kreutzer, Artem Sokolov


Abstract
Most modern neural machine translation (NMT) systems rely on presegmented inputs. The segmentation granularity determines the input and output sequence lengths, and hence the modeling depth, as well as the source and target vocabularies, which in turn determine the model size, the computational cost of softmax normalization, and the handling of out-of-vocabulary words. However, current practice is to use static, heuristic segmentations that are fixed before NMT training, which raises the question of whether the chosen segmentation is optimal for the translation task. To overcome suboptimal segmentation choices, we present an algorithm for dynamic segmentation that is trainable end-to-end and driven by the NMT objective. In an evaluation on four translation tasks, we found that, given the freedom to navigate between segmentation levels, the model prefers to operate at (almost) character level, providing support for purely character-level NMT models from a novel angle.
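To make the granularity trade-off described above concrete, here is a minimal Python sketch (illustrative only; it is not the paper's algorithm, and the toy corpus is invented for this example). It contrasts word-level and character-level segmentation of the same text: characters yield a small, closed vocabulary at the cost of much longer sequences, while words yield short sequences but an open-ended vocabulary that invites out-of-vocabulary problems.

```python
# Illustrative sketch (not the paper's method): how segmentation granularity
# trades off sequence length against vocabulary size on a toy corpus.
from collections import Counter

corpus = ["the cat sat", "the cats sat down"]  # hypothetical toy data

# Word-level: short sequences, but every new word form grows the vocabulary,
# and unseen forms at test time become out-of-vocabulary (OOV).
word_seqs = [line.split() for line in corpus]
word_vocab = Counter(tok for seq in word_seqs for tok in seq)

# Character-level: a small, closed vocabulary, at the cost of much longer
# sequences (and thus more recurrent steps / deeper unrolling in an RNN).
char_seqs = [list(line) for line in corpus]
char_vocab = Counter(tok for seq in char_seqs for tok in seq)

print("word vocab:", len(word_vocab), "max len:", max(map(len, word_seqs)))
print("char vocab:", len(char_vocab), "max len:", max(map(len, char_seqs)))
# On this toy corpus: word vocab 5 / max len 4 vs. char vocab 11 / max len 17.
```

Subword schemes such as BPE sit between these extremes; the paper's contribution is to let the NMT objective itself drive where on this spectrum the segmentation lands, rather than fixing it heuristically before training.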
Anthology ID:
2018.iwslt-1.25
Volume:
Proceedings of the 15th International Conference on Spoken Language Translation
Month:
October 29-30
Year:
2018
Address:
Brussels
Venue:
IWSLT
SIG:
SIGSLT
Publisher:
International Conference on Spoken Language Translation
Pages:
166–172
URL:
https://aclanthology.org/2018.iwslt-1.25
Cite (ACL):
Julia Kreutzer and Artem Sokolov. 2018. Learning to Segment Inputs for NMT Favors Character-Level Processing. In Proceedings of the 15th International Conference on Spoken Language Translation, pages 166–172, Brussels. International Conference on Spoken Language Translation.
Cite (Informal):
Learning to Segment Inputs for NMT Favors Character-Level Processing (Kreutzer & Sokolov, IWSLT 2018)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2018.iwslt-1.25.pdf