ACL Anthology
Rui Chaves

2020
Assessing the ability of Transformer-based Neural Models to represent structurally unbounded dependencies
Jillian Da Costa | Rui Chaves
Proceedings of the Society for Computation in Linguistics 2020
What Don’t RNN Language Models Learn About Filler-Gap Dependencies?
Rui Chaves
Proceedings of the Society for Computation in Linguistics 2020
Co-authors: Jillian Da Costa (1)

Venues: SCiL (2)