AutoRef: Generating Refinements of Reviews Given Guidelines
Soham Chitnis, Manasi Patwardhan, Ashwin Srinivasan, Tanmay Verlekar, Lovekesh Vig, Gautam Shroff
Abstract
When examining reviews of research papers, we can distinguish between two hypothetical referees: the maximally lenient referee who accepts any paper with a vacuous review, and the maximally strict one who rejects any paper with an overly pedantic review. Clearly, neither is of any practical value. Our interest is in a referee who makes a balanced judgement and provides a review that abides by the guidelines. In this paper, we present a case study of automatically correcting an existing machine-generated or human-written review. The AutoRef system implements an iterative approach that progressively “refines” a review by attempting to make it more compliant with pre-defined requirements of a “good” review. It implements the following steps: (1) translate the review requirements into a natural-language specification consisting of “yes/no” questions; (2) given a (paper, review) pair, extract answers to the questions; (3) use the results of (2) to generate a new review; and (4) return to step (2) with the paper and the new review. Here, (2) and (3) are implemented by large language model (LLM) based agents. We present a case study using papers and reviews made available for the International Conference on Learning Representations (ICLR). Our initial empirical results suggest that AutoRef progressively improves the compliance of the generated reviews with the specification. With the specification as currently designed, AutoRef generates progressively stricter reviews, shifting decisions towards “rejection”. This demonstrates the applicability of AutoRef for: (1) the progressive correction of overly lenient reviews, which is useful for referees and meta-reviewers; and (2) the generation of progressively stricter reviews for a paper, starting from a vacuous review (“Great paper. Accept.”), helping authors assess the weaknesses of their papers.
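The loop described above amounts to two LLM agents, an answer extractor (step 2) and a review rewriter (step 3), iterated until the review satisfies the specification. Below is a minimal sketch of that control flow, assuming a generic `call_llm` callable standing in for any LLM backend; the checklist questions and prompts are illustrative placeholders, not the authors' actual specification.

```python
# Hypothetical sketch of the AutoRef refinement loop described in the abstract.
# `call_llm` is a placeholder for any text-generation backend; the checklist
# and prompts below are illustrative, not the authors' actual specification.

from typing import Callable, Dict, List

# Step (1): review guidelines translated into "yes/no" questions (illustrative).
CHECKLIST: List[str] = [
    "Does the review summarise the paper's contributions?",
    "Does the review list concrete strengths with evidence from the paper?",
    "Does the review list concrete weaknesses with evidence from the paper?",
    "Does the review ask actionable questions to the authors?",
]

def extract_answers(call_llm: Callable[[str], str],
                    paper: str, review: str) -> Dict[str, bool]:
    """Step (2): answer each checklist question for a (paper, review) pair."""
    answers = {}
    for q in CHECKLIST:
        prompt = (f"Paper:\n{paper}\n\nReview:\n{review}\n\n"
                  f"Question: {q}\nAnswer strictly 'yes' or 'no'.")
        answers[q] = call_llm(prompt).strip().lower().startswith("yes")
    return answers

def refine_review(call_llm: Callable[[str], str],
                  paper: str, review: str, answers: Dict[str, bool]) -> str:
    """Step (3): rewrite the review so that the failed checks are addressed."""
    failed = [q for q, ok in answers.items() if not ok]
    prompt = (f"Paper:\n{paper}\n\nCurrent review:\n{review}\n\n"
              "Rewrite the review so that it satisfies these unmet requirements:\n- "
              + "\n- ".join(failed))
    return call_llm(prompt)

def autoref(call_llm: Callable[[str], str], paper: str, review: str,
            max_iters: int = 3) -> str:
    """Steps (2)-(4): iterate until the review passes every check or the budget runs out."""
    for _ in range(max_iters):
        answers = extract_answers(call_llm, paper, review)
        if all(answers.values()):   # fully compliant with the specification
            break
        review = refine_review(call_llm, paper, review, answers)
    return review
```

Starting from a vacuous review such as “Great paper. Accept.”, repeated passes through `autoref` would be expected to add the elements the checklist demands, mirroring the progressively stricter reviews reported in the paper.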
- Anthology ID: 2024.sdp-1.17
- Volume: Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)
- Month: August
- Year: 2024
- Address: Bangkok, Thailand
- Editors: Tirthankar Ghosal, Amanpreet Singh, Anita de Waard, Philipp Mayr, Aakanksha Naik, Orion Weller, Yoonjoo Lee, Shannon Shen, Yanxia Qin
- Venues: sdp | WS
- Publisher: Association for Computational Linguistics
- Pages: 175–190
- URL: https://aclanthology.org/2024.sdp-1.17
- Cite (ACL): Soham Chitnis, Manasi Patwardhan, Ashwin Srinivasan, Tanmay Verlekar, Lovekesh Vig, and Gautam Shroff. 2024. AutoRef: Generating Refinements of Reviews Given Guidelines. In Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024), pages 175–190, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal): AutoRef: Generating Refinements of Reviews Given Guidelines (Chitnis et al., sdp-WS 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-5/2024.sdp-1.17.pdf