A Method for Automatically Estimating the Informativeness of Peer Reviews
Prabhat Bharti, Tirthankar Ghosal, Mayank Agarwal, Asif Ekbal
Abstract
Peer reviews are intended to give authors constructive and informative feedback. Reviewers are expected to comment in detail on specific aspects of the paper, e.g., novelty, clarity, and empirical and theoretical soundness, and on specific sections, e.g., problem definition/idea, datasets, methodology, experiments, and results. With this objective, we analyze the reviewer’s attitude towards the work. These aspects of a review are essential for determining how much weight the editor/chair should place on it when making a decision. In this paper, we use the publicly available Peer Review Analyze dataset of peer review texts manually annotated at the sentence level (∼13.22k sentences) across two layers: Paper Section Correspondence and Paper Aspect Category. We transform these categorical annotations into an informativeness score for each review, based on the review’s coverage of paper sections and aspects and on the reviewer-centric uncertainty associated with the review. We hope that our proposed methods for automatically estimating the quality of peer reviews in the form of informativeness scores will give editors an additional layer of confidence when judging review quality automatically. We make our code available at https://github.com/PrabhatkrBharti/informativeness.git.
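The scoring idea described in the abstract (coverage of paper sections and aspect categories, discounted by reviewer-centric uncertainty) can be illustrated with a minimal sketch. The label inventories, the equal-weight averaging, and all names below are assumptions for illustration only, not the authors' exact formulation from the paper.

```python
from dataclasses import dataclass

# Hypothetical label inventories; the actual Peer Review Analyze
# taxonomy may differ from the one assumed here.
PAPER_SECTIONS = {"problem", "datasets", "methodology", "experiments", "results"}
PAPER_ASPECTS = {"novelty", "clarity", "empirical_soundness", "theoretical_soundness"}

@dataclass
class ReviewSentence:
    section: str | None      # Paper Section Correspondence label (or None)
    aspect: str | None       # Paper Aspect Category label (or None)
    uncertain: bool = False  # reviewer-centric uncertainty (hedging) flag

def informativeness(sentences: list[ReviewSentence]) -> float:
    """Toy informativeness score in [0, 1]: the average of section coverage,
    aspect coverage, and (1 - fraction of uncertain sentences).
    A simplified stand-in for the paper's scoring scheme."""
    if not sentences:
        return 0.0
    section_cov = len({s.section for s in sentences if s.section in PAPER_SECTIONS}) / len(PAPER_SECTIONS)
    aspect_cov = len({s.aspect for s in sentences if s.aspect in PAPER_ASPECTS}) / len(PAPER_ASPECTS)
    certainty = 1.0 - sum(s.uncertain for s in sentences) / len(sentences)
    return (section_cov + aspect_cov + certainty) / 3.0

# Example: a short review covering two sections and two aspects, with one hedged sentence.
review = [
    ReviewSentence("methodology", "clarity"),
    ReviewSentence("experiments", "empirical_soundness", uncertain=True),
    ReviewSentence(None, None),
]
print(f"informativeness = {informativeness(review):.2f}")
```

Under this toy scheme, a review that touches more sections and aspects with fewer hedged sentences scores closer to 1, which mirrors the coverage-based intuition stated in the abstract.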
- Anthology ID: 2022.icon-main.34
- Volume: Proceedings of the 19th International Conference on Natural Language Processing (ICON)
- Month: December
- Year: 2022
- Address: New Delhi, India
- Editors: Md. Shad Akhtar, Tanmoy Chakraborty
- Venue: ICON
- Publisher: Association for Computational Linguistics
- Pages: 280–289
- URL: https://aclanthology.org/2022.icon-main.34
- Cite (ACL): Prabhat Bharti, Tirthankar Ghosal, Mayank Agarwal, and Asif Ekbal. 2022. A Method for Automatically Estimating the Informativeness of Peer Reviews. In Proceedings of the 19th International Conference on Natural Language Processing (ICON), pages 280–289, New Delhi, India. Association for Computational Linguistics.
- Cite (Informal): A Method for Automatically Estimating the Informativeness of Peer Reviews (Bharti et al., ICON 2022)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2022.icon-main.34.pdf