@inproceedings{jumelet-hupkes-2018-language,
    title = "Do Language Models Understand Anything? On the Ability of {LSTM}s to Understand Negative Polarity Items",
    author = "Jumelet, Jaap  and
      Hupkes, Dieuwke",
    editor = "Linzen, Tal  and
      Chrupa{\l}a, Grzegorz  and
      Alishahi, Afra",
    booktitle = "Proceedings of the 2018 {EMNLP} Workshop {B}lackbox{NLP}: Analyzing and Interpreting Neural Networks for {NLP}",
    month = nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/iwcs-25-ingestion/W18-5424/",
    doi = "10.18653/v1/W18-5424",
    pages = "222--231",
    abstract = "In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the \textit{scope} of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning."
}