Princess Dickens
2020
Linguist vs. Machine: Rapid Development of Finite-State Morphological Grammars
Sarah Beemer | Zak Boston | April Bukoski | Daniel Chen | Princess Dickens | Andrew Gerlach | Torin Hopkins | Parth Anand Jawale | Chris Koski | Akanksha Malhotra | Piyush Mishra | Saliha Muradoglu | Lan Sang | Tyler Short | Sagarika Shreevastava | Elizabeth Spaulding | Testumichi Umada | Beilei Xiang | Changbing Yang | Mans Hulden
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
Sequence-to-sequence models have proven highly successful at learning morphological inflection from examples, as the series of SIGMORPHON/CoNLL shared tasks has shown. It is usually assumed, however, that a linguist working with inflectional examples could in principle develop a gold-standard morphological analyzer and generator that would surpass a trained neural network model in prediction accuracy, though this may require significant amounts of human labor. In this paper, we discuss an experiment in which a group of people with some linguistic training developed 25+ grammars as part of the shared task, and we weigh the cost/benefit ratio of developing grammars by hand. We also present tools that can help linguists triage difficult and complex morphophonological phenomena within a language and hypothesize inflectional class membership. We conclude that a significant development effort by trained linguists to analyze and model morphophonological patterns is required in order to surpass the accuracy of neural models.
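The finite-state grammars the abstract refers to follow the classic architecture of a lexicon composed with ordered rewrite rules, typically written in an FST toolkit such as foma or xfst. As a rough, hedged illustration of that architecture only (not the authors' actual grammars or data), the Python sketch below inflects a few toy English verbs; the names `SUFFIXES`, `RULES`, and `inflect`, the UniMorph-style tags, and the rewrite rules are all invented for the example.

```python
# Minimal sketch of a lexicon-plus-rewrite-rules inflection pipeline,
# mirroring the finite-state architecture described in the abstract.
# Toy English data; illustrative only, not taken from the paper.

import re

# Lexicon step: (lemma, UniMorph-style tag) -> underlying form,
# with "^" marking the morpheme boundary (as in a lexc-style lexicon).
SUFFIXES = {"V;PST": "ed", "V;PRS;3;SG": "s", "V;V.PTCP;PRS": "ing"}

def underlying(lemma: str, tag: str) -> str:
    return lemma + "^" + SUFFIXES[tag]

# Rule step: ordered rewrite rules applied in sequence, the Python
# analogue of composing xfst-style replace rules.
RULES = [
    (re.compile(r"e\^(?=[aeiou])"), ""),          # e-deletion: bake^ing -> baking
    (re.compile(r"([^aeiou])y\^(?=e)"), r"\1i"),  # y -> i: try^ed -> tried
    (re.compile(r"\^"), ""),                      # remove remaining boundaries
]

def surface(form: str) -> str:
    for pattern, replacement in RULES:
        form = pattern.sub(replacement, form)
    return form

def inflect(lemma: str, tag: str) -> str:
    return surface(underlying(lemma, tag))

if __name__ == "__main__":
    for lemma, tag in [("bake", "V;V.PTCP;PRS"), ("try", "V;PST"), ("walk", "V;PRS;3;SG")]:
        print(lemma, tag, "->", inflect(lemma, tag))  # baking, tried, walks
```

In a real grammar the two steps would be finite-state transducers composed into a single machine, so the same grammar both generates and analyzes forms; the sketch keeps only the generation direction for brevity.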