Chinese UMR annotation: Can LLMs help?
Haibo Sun, Nianwen Xue, Jin Zhao, Liulu Yue, Yao Sun, Keer Xu, Jiawei Wu
Abstract
We explore using LLMs, specifically GPT-4, to generate draft sentence-level Chinese Uniform Meaning Representations (UMRs) that human annotators can revise, thereby speeding up the UMR annotation process. In this study, we use few-shot learning and Think-Aloud prompting to guide GPT-4 to generate sentence-level UMR graphs. Our experimental results show that, compared with annotating UMRs from scratch, using LLMs as a preprocessing step reduces annotation time by two-thirds on average. This indicates great potential for integrating LLMs into the pipeline for complex semantic annotation tasks.

- Anthology ID: 2024.dmr-1.14
- Volume: Proceedings of the Fifth International Workshop on Designing Meaning Representations @ LREC-COLING 2024
- Month: May
- Year: 2024
- Address: Torino, Italia
- Editors: Claire Bonial, Julia Bonn, Jena D. Hwang
- Venues: DMR | WS
- Publisher: ELRA and ICCL
- Pages: 131–139
- URL: https://aclanthology.org/2024.dmr-1.14
- Cite (ACL): Haibo Sun, Nianwen Xue, Jin Zhao, Liulu Yue, Yao Sun, Keer Xu, and Jiawei Wu. 2024. Chinese UMR annotation: Can LLMs help?. In Proceedings of the Fifth International Workshop on Designing Meaning Representations @ LREC-COLING 2024, pages 131–139, Torino, Italia. ELRA and ICCL.
- Cite (Informal): Chinese UMR annotation: Can LLMs help? (Sun et al., DMR-WS 2024)
- PDF: https://preview.aclanthology.org/naacl24-info/2024.dmr-1.14.pdf