Abstract
Interoperability of annotated linguistic resources covers several aspects. First, it requires a representation framework that makes it possible to compare, and possibly merge, different annotation schemas. This paper proposes a general description level for representing multimodal linguistic annotations, focusing on time representation and on the representation of data content: it reconsiders and enhances the current, generalized representation of annotations. An XML schema for such annotations is proposed, along with a Python API. The framework is implemented in multi-platform software distributed under the terms of the GNU General Public License.
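As a rough illustration of what a Python API for time-anchored multimodal annotations might look like, here is a minimal sketch. All names (TimePoint, TimeInterval, Annotation, Tier) and the uncertainty radius on time points are hypothetical assumptions for illustration, not the paper's actual API.

```python
# Hypothetical sketch of a time-anchored annotation structure.
# Class names and the uncertainty "radius" are illustrative assumptions,
# not the API described in the paper.

from dataclasses import dataclass, field
from typing import List


@dataclass
class TimePoint:
    """A point on the timeline, with an uncertainty radius (in seconds)."""
    midpoint: float
    radius: float = 0.0

    def overlaps(self, other: "TimePoint") -> bool:
        # Two points are considered equal when their uncertainty ranges intersect.
        return abs(self.midpoint - other.midpoint) <= self.radius + other.radius


@dataclass
class TimeInterval:
    """A span between two (possibly uncertain) time points."""
    begin: TimePoint
    end: TimePoint


@dataclass
class Annotation:
    """A piece of data content (here, a plain label) attached to a location in time."""
    location: TimeInterval
    label: str


@dataclass
class Tier:
    """An ordered collection of annotations of the same kind (e.g. 'Tokens')."""
    name: str
    annotations: List[Annotation] = field(default_factory=list)

    def append(self, ann: Annotation) -> None:
        self.annotations.append(ann)


# Usage: one token on a "Tokens" tier, with 10 ms of boundary uncertainty.
tokens = Tier("Tokens")
tokens.append(Annotation(
    TimeInterval(TimePoint(0.25, 0.01), TimePoint(0.62, 0.01)),
    "hello",
))
```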
- Anthology ID:
- L14-1422
- Volume:
- Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
- Month:
- May
- Year:
- 2014
- Address:
- Reykjavik, Iceland
- Editors:
- Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, Stelios Piperidis
- Venue:
- LREC
- Publisher:
- European Language Resources Association (ELRA)
- Pages:
- 3386–3392
- URL:
- http://www.lrec-conf.org/proceedings/lrec2014/pdf/51_Paper.pdf
- Cite (ACL):
- Brigitte Bigi, Tatsuya Watanabe, and Laurent Prévot. 2014. Representing Multimodal Linguistic Annotated data. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3386–3392, Reykjavik, Iceland. European Language Resources Association (ELRA).
- Cite (Informal):
- Representing Multimodal Linguistic Annotated data (Bigi et al., LREC 2014)