- Anthology ID:
- 2021.findings-acl.334
- Volume:
- Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
- Month:
- August
- Year:
- 2021
- Address:
- Online
- Editors:
- Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 3813–3827
- URL:
- https://aclanthology.org/2021.findings-acl.334
- DOI:
- 10.18653/v1/2021.findings-acl.334
- Cite (ACL):
- Ruiqi Zhong, Dhruba Ghosh, Dan Klein, and Jacob Steinhardt. 2021. Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3813–3827, Online. Association for Computational Linguistics.
- Cite (Informal):
- Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level (Zhong et al., Findings 2021)
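- Cite (BibTeX):
- A BibTeX record assembled from the metadata fields above; the entry key (zhong-etal-2021-larger) is assumed from the Anthology's usual author-year-keyword convention rather than copied from the official page.

```bibtex
@inproceedings{zhong-etal-2021-larger,
    title = "Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level",
    author = "Zhong, Ruiqi and Ghosh, Dhruba and Klein, Dan and Steinhardt, Jacob",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.334",
    doi = "10.18653/v1/2021.findings-acl.334",
    pages = "3813--3827",
}
```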
- PDF:
https://aclanthology.org/2021.findings-acl.334.pdf
- Code:
- ruiqi-zhong/acl2021-instance-level
- Data:
- GLUE, SST, SST-2