Annotating Situated Actions in Dialogue
Christopher Tam, Richard Brutti, Kenneth Lai, James Pustejovsky
Abstract
Actions are critical for interpreting dialogue: they provide context for demonstratives and definite descriptions in discourse, and they continually update the common ground. This paper describes how Abstract Meaning Representation (AMR) can be used to annotate actions in multimodal human-human and human-object interactions. We conduct initial annotations of shared task and first-person point-of-view videos. We show that AMRs can be interpreted by a proxy language, such as VoxML, as executable annotation structures in order to recreate and simulate a series of annotated events.
- Anthology ID: 2023.dmr-1.5
- Volume: Proceedings of the Fourth International Workshop on Designing Meaning Representations
- Month: June
- Year: 2023
- Address: Nancy, France
- Editors: Julia Bonn, Nianwen Xue
- Venues: DMR | WS
- SIG: SIGSEM
- Publisher: Association for Computational Linguistics
- Pages: 45–51
- URL: https://aclanthology.org/2023.dmr-1.5
- Cite (ACL): Christopher Tam, Richard Brutti, Kenneth Lai, and James Pustejovsky. 2023. Annotating Situated Actions in Dialogue. In Proceedings of the Fourth International Workshop on Designing Meaning Representations, pages 45–51, Nancy, France. Association for Computational Linguistics.
- Cite (Informal): Annotating Situated Actions in Dialogue (Tam et al., DMR-WS 2023)
- PDF: https://preview.aclanthology.org/proper-vol2-ingestion/2023.dmr-1.5.pdf
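To give a sense of the kind of action annotation the abstract describes, an AMR for an imperative manipulation instruction such as "Put the red block on the green block" might look like the following. This is a hypothetical sketch in standard PENMAN notation, using the PropBank roleset `put-01`; it is an illustration, not an example drawn from the paper's annotated data:

```
(p / put-01
   :mode imperative
   :ARG0 (y / you)              ; implicit addressee of the imperative
   :ARG1 (b / block             ; the thing being placed
            :mod (r / red))
   :ARG2 (b2 / block            ; the destination of the placement
             :mod (g / green)))
```

Per the abstract, graphs like this can then be interpreted through a proxy language such as VoxML as executable structures, so that the annotated event can be recreated and simulated.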