Demian Inostroza Améstica


2025

Can a Neural Model Guide Fieldwork? A Case Study on Morphological Data Collection
Aso Mahmudi | Borja Herce | Demian Inostroza Améstica | Andreas Scherbakov | Eduard H. Hovy | Ekaterina Vylomova
Proceedings of the 18th Workshop on Building and Using Comparable Corpora (BUCC)

Linguistic fieldwork is an important component of language documentation and the creation of comprehensive linguistic corpora. Despite its significance, the process is often lengthy, exhausting, and time-consuming. This paper presents a novel model that guides a linguist during fieldwork and accounts for the dynamics of linguist–speaker interactions. We introduce a framework that evaluates the efficiency of various sampling strategies for obtaining morphological data and assesses how well state-of-the-art neural models generalise morphological structures. Our experiments highlight two key strategies for improving efficiency: (1) increasing the diversity of annotated data by sampling uniformly across the cells of paradigm tables, and (2) using model confidence as a guide, providing reliable predictions during annotation to enhance positive interaction.
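The two strategies in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the paradigm table, cell labels, and confidence threshold below are invented for demonstration.

```python
import random

# Hypothetical paradigm table: cell label -> forms still unannotated.
# Cell names and entries are illustrative, not from the paper's data.
paradigm = {
    "NOM;SG": ["casa"],
    "NOM;PL": ["casas"],
    "GEN;SG": ["de casa", "de la casa"],
}

def uniform_cell_sample(table, rng=random):
    """Sample a cell uniformly (ignoring how many forms it holds),
    then pick one of its forms -- equalising coverage across paradigm
    cells instead of over-sampling the most populated ones."""
    cell = rng.choice(sorted(table))
    return cell, rng.choice(table[cell])

def confident_prediction(prob_dist, threshold=0.9):
    """Show the model's guess to the annotator only when its confidence
    clears the threshold; otherwise elicit the form from scratch."""
    form, prob = max(prob_dist.items(), key=lambda kv: kv[1])
    return form if prob >= threshold else None
```

A fieldwork loop would alternate these: pick the next cell uniformly, and pre-fill the elicitation prompt only when `confident_prediction` returns a form.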

A Joint Multitask Model for Morpho-Syntactic Parsing
Demian Inostroza Améstica | Meladel Mistica | Ekaterina Vylomova | Chris Guest | Kemal Kurniawan
Proceedings of The UniDive 2025 Shared Task on Multilingual Morpho-Syntactic Parsing

We present a joint multitask model for the UniDive 2025 Morpho-Syntactic Parsing shared task, in which systems predict both morphological and syntactic analyses following a novel UD annotation scheme. Our system uses a shared XLM-RoBERTa encoder with three specialized decoders for content word identification, dependency parsing, and morphosyntactic feature prediction. The model achieves the best overall performance on the shared task's leaderboard covering nine typologically diverse languages, with an average MSLAS score of 78.7%, LAS of 80.1%, and Feats F1 of 90.3%. Ablation studies show that matching the task's gold tokenization and content word identification are crucial to model performance. Error analysis reveals that the model struggles with core grammatical cases (particularly Nom–Acc) and nominal features across languages.
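The shared-encoder/multi-head design described above can be sketched schematically. Everything here is a stand-in: the stub encoder replaces XLM-RoBERTa, and each head uses a toy rule rather than the authors' trained decoders, purely to show how one encoding pass feeds three task-specific heads.

```python
def shared_encode(tokens):
    """Stub encoder: maps each token to a 'contextual' vector.
    A real system would run XLM-RoBERTa here."""
    return [[float(len(tok)), float(i)] for i, tok in enumerate(tokens)]

def content_word_head(states):
    """Binary head: flag likely content words (toy rule on stub features)."""
    return [s[0] > 3.0 for s in states]

def dependency_head(states):
    """Toy arc prediction: attach each token to the previous one (-1 = root)."""
    return [i - 1 for i in range(len(states))]

def feature_head(states):
    """Toy morphosyntactic feature prediction per token."""
    return [{"Number": "Plur" if s[0] > 4 else "Sing"} for s in states]

def parse(tokens):
    states = shared_encode(tokens)  # encoded once, shared by all three heads
    return {
        "content": content_word_head(states),
        "heads": dependency_head(states),
        "feats": feature_head(states),
    }
```

The design choice the sketch illustrates is that the expensive contextual encoding is computed once and reused, so the three decoders can be trained jointly against a shared representation.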