Jacob Steinhardt
2021
Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level
Ruiqi Zhong | Dhruba Ghosh | Dan Klein | Jacob Steinhardt
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Co-authors
Dhruba Ghosh (1)
Dan Klein (1)
Ruiqi Zhong (1)
Venues
findings (1)