Abstract
Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly arbitrary. We argue that part of the problem is that reviewers and area chairs face a poorly defined task that forces apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty lies in creating the incentives and mechanisms for their consistent implementation in the NLP community.
- Anthology ID: 2020.findings-emnlp.112
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
- Month: November
- Year: 2020
- Address: Online
- Editors: Trevor Cohn, Yulan He, Yang Liu
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 1256–1262
- URL: https://aclanthology.org/2020.findings-emnlp.112
- DOI: 10.18653/v1/2020.findings-emnlp.112
- Cite (ACL): Anna Rogers and Isabelle Augenstein. 2020. What Can We Do to Improve Peer Review in NLP?. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1256–1262, Online. Association for Computational Linguistics.
- Cite (Informal): What Can We Do to Improve Peer Review in NLP? (Rogers & Augenstein, Findings 2020)
- PDF: https://preview.aclanthology.org/finnlp-2volume-ingestion/2020.findings-emnlp.112.pdf