EASY-M: Evaluation System for Multilingual Summarizers


Abstract
Automatic text summarization aims to produce a shorter version of a document (or a document set). Evaluating summarization quality is a challenging task. Because human evaluations are expensive and evaluators often disagree with one another, many researchers prefer to evaluate their systems automatically, with the help of software tools. Such a tool usually requires a point of reference in the form of one or more human-written summaries for each text in the corpus. A system-generated summary is then compared to one or more human-written summaries, according to selected metrics. However, a single metric cannot reflect all quality-related aspects of a summary. In this paper, we present the EvAluation SYstem for Multilingual Summarization (EASY-M), which enables the evaluation of system-generated summaries in 17 different languages with several quality measures, based on comparison with their human-generated counterparts. The system also provides comparative results with two built-in baselines. The source code and both online and offline versions of EASY-M are freely available to the NLP community.
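
In most summarization evaluation tools, the comparison described above is an n-gram overlap measure in the ROUGE family. The abstract does not name EASY-M's specific metrics, so the Python sketch below is only an illustrative ROUGE-N-style recall against the best-matching reference, not the system's actual implementation; the function name rouge_n and the toy summaries are hypothetical.

from collections import Counter

def rouge_n(candidate, references, n=1):
    # Count n-grams of a whitespace-tokenized, lowercased text.
    def ngrams(text):
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate)
    best = 0.0
    for ref in references:
        refc = ngrams(ref)
        overlap = sum((cand & refc).values())  # clipped n-gram matches
        total = sum(refc.values())             # n-grams in this reference
        if total:
            best = max(best, overlap / total)  # recall vs. this reference
    return best

# One system summary scored against two human-written references.
system = "the cat sat on the mat"
humans = ["a cat was sitting on the mat", "the cat sat on a mat"]
print(rouge_n(system, humans, n=1))  # ~0.83: unigram recall vs. best reference

Taking the maximum over references is one common convention; averaging over all references is another.
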
Anthology ID:
W19-8908
Volume:
Proceedings of the Workshop MultiLing 2019: Summarization Across Languages, Genres and Sources
Month:
September
Year:
2019
Address:
Varna, Bulgaria
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
53–62
URL:
https://aclanthology.org/W19-8908
DOI:
10.26615/978-954-452-058-8_008
Cite (ACL):
2019. EASY-M: Evaluation System for Multilingual Summarizers. In Proceedings of the Workshop MultiLing 2019: Summarization Across Languages, Genres and Sources, pages 53–62, Varna, Bulgaria. INCOMA Ltd.
Cite (Informal):
EASY-M: Evaluation System for Multilingual Summarizers (RANLP 2019)
PDF:
https://preview.aclanthology.org/auto-file-uploads/W19-8908.pdf