LG-Eval: A Toolkit for Creating Online Language Evaluation Experiments

Eric Kow, Anja Belz


Abstract
In this paper we describe the LG-Eval toolkit for creating online language evaluation experiments. LG-Eval is the direct result of our work setting up and carrying out the human evaluation experiments in several of the Generation Challenges shared tasks. It provides tools for creating experiments with different kinds of rating tools, allocating items to evaluators, and collecting the evaluation scores.
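The abstract mentions allocating items to evaluators. A common experimental design for that kind of allocation is a Latin square, in which each evaluator sees every item exactly once and presentation order is rotated across evaluators. The Python sketch below illustrates the idea only; it assumes equal numbers of items and evaluators, and the function and variable names are hypothetical rather than part of LG-Eval's actual interface.

def latin_square_allocation(items, evaluators):
    """Cyclic (Latin-square-style) allocation: each evaluator sees every item
    exactly once, and each item appears in each presentation position exactly
    once across evaluators. Illustrative sketch, not LG-Eval's API."""
    if len(items) != len(evaluators):
        raise ValueError("this simple sketch assumes as many evaluators as items")
    allocation = {}
    for offset, evaluator in enumerate(evaluators):
        # Rotate the item list by `offset` so no two evaluators see the
        # items in the same order.
        allocation[evaluator] = items[offset:] + items[:offset]
    return allocation

if __name__ == "__main__":
    items = ["output_A", "output_B", "output_C", "output_D"]
    evaluators = ["evaluator_1", "evaluator_2", "evaluator_3", "evaluator_4"]
    for evaluator, ordered_items in latin_square_allocation(items, evaluators).items():
        print(evaluator, ordered_items)

In practice an evaluation toolkit would also balance which system produced each item and randomise order within rows; the sketch only shows the core rotation.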
Anthology ID: L12-1570
Volume: Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
Month: May
Year: 2012
Address: Istanbul, Turkey
Editors: Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue: LREC
Publisher: European Language Resources Association (ELRA)
Pages: 4033–4037
URL: http://www.lrec-conf.org/proceedings/lrec2012/pdf/957_Paper.pdf
Cite (ACL): Eric Kow and Anja Belz. 2012. LG-Eval: A Toolkit for Creating Online Language Evaluation Experiments. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 4033–4037, Istanbul, Turkey. European Language Resources Association (ELRA).
Cite (Informal): LG-Eval: A Toolkit for Creating Online Language Evaluation Experiments (Kow & Belz, LREC 2012)
PDF: http://www.lrec-conf.org/proceedings/lrec2012/pdf/957_Paper.pdf