Abstract
Multilingual coreference resolution (MCR) has been a long-standing and challenging task. With the newly proposed multilingual coreference dataset CorefUD (Nedoluzhko et al., 2022), we investigate the task using its harmonized universal morphosyntactic and coreference annotations. First, we study coreference by examining the ground truth data at different linguistic levels, namely the mention, entity, and document levels, and across different genres, to gain insight into the characteristics of coreference across multiple languages. Second, we perform an error analysis of the most challenging cases that the state-of-the-art (SotA) system fails to resolve in the CRAC 2022 shared task, using the universal annotations. Last, based on this analysis, we extract features from the universal morphosyntactic annotations and integrate them into a baseline system to assess their potential benefits for the MCR task. Our results show that our best feature configuration improves the baseline by 0.9% F1 score.

- Anthology ID: 2023.findings-emnlp.671
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 10010–10024
- URL: https://aclanthology.org/2023.findings-emnlp.671
- DOI: 10.18653/v1/2023.findings-emnlp.671
- Cite (ACL): Haixia Chai and Michael Strube. 2023. Investigating Multilingual Coreference Resolution by Universal Annotations. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10010–10024, Singapore. Association for Computational Linguistics.
- Cite (Informal): Investigating Multilingual Coreference Resolution by Universal Annotations (Chai & Strube, Findings 2023)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2023.findings-emnlp.671.pdf