Evaluation of Review Summaries via Question-Answering

Nannan Huang, Xiuzhen Zhang


Abstract
Summarisation of reviews aims to compress the opinions expressed in multiple review documents into a concise form while still covering the key opinions. Despite advances in summarisation models, evaluation metrics for opinionated text summaries lag behind and still rely on lexical-matching metrics such as ROUGE. In this paper, we propose to use the question-answering (QA) approach to evaluate summaries of opinions in reviews. We propose to identify opinion-bearing text spans in the reference summary to generate QA pairs so as to capture salient opinions. A QA model is then employed to probe the candidate summary to evaluate the information overlap between the candidate and reference summaries. We show that our metric, RunQA (Review Summary Evaluation via Question Answering), correlates well with human judgments in terms of coverage and focus of information. Finally, we design an adversarial task and demonstrate that the proposed approach is more robust than existing metrics at ranking summaries.
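To make the probing step described in the abstract concrete, below is a minimal sketch of a QA-based evaluation loop, assuming QA pairs have already been generated from opinion-bearing spans in the reference summary. The Hugging Face model name and the token-level F1 scorer are illustrative assumptions, not the authors' exact components.

```python
from transformers import pipeline

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted answer and a gold answer span."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = set(pred_tokens) & set(gold_tokens)
    if not pred_tokens or not gold_tokens or not common:
        return 0.0
    precision = len(common) / len(pred_tokens)
    recall = len(common) / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def qa_eval(candidate_summary: str, qa_pairs: list[tuple[str, str]]) -> float:
    """Probe the candidate summary with each question and average answer overlap.

    qa_pairs: (question, gold answer) tuples derived from the reference
    summary's opinion-bearing spans (generation step not shown here).
    The model choice below is a placeholder, not the paper's exact setup.
    """
    qa = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")
    scores = []
    for question, gold_answer in qa_pairs:
        pred = qa(question=question, context=candidate_summary)
        scores.append(token_f1(pred["answer"], gold_answer))
    return sum(scores) / len(scores) if scores else 0.0
```

Averaging per-question answer overlap rewards candidate summaries that preserve the reference's salient opinions, which reflects the coverage and focus dimensions the metric is evaluated on.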
Anthology ID:
2021.alta-1.9
Volume:
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association
Month:
December
Year:
2021
Address:
Online
Venue:
ALTA
Publisher:
Australasian Language Technology Association
Pages:
87–96
URL:
https://aclanthology.org/2021.alta-1.9
Cite (ACL):
Nannan Huang and Xiuzhen Zhang. 2021. Evaluation of Review Summaries via Question-Answering. In Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association, pages 87–96, Online. Australasian Language Technology Association.
Cite (Informal):
Evaluation of Review Summaries via Question-Answering (Huang & Zhang, ALTA 2021)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2021.alta-1.9.pdf