@inproceedings{strubell-mccallum-2018-syntax,
    title = "Syntax Helps {ELM}o Understand Semantics: Is Syntax Still Relevant in a Deep Neural Architecture for {SRL}?",
    author = "Strubell, Emma  and
      McCallum, Andrew",
    editor = "Dinu, Georgiana  and
      Ballesteros, Miguel  and
      Sil, Avirup  and
      Bowman, Sam  and
      Hamza, Wael  and
      S{\o}gaard, Anders  and
      Naseem, Tahira  and
      Goldberg, Yoav",
    booktitle = "Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for {NLP}",
    month = jul,
    year = "2018",
    address = "Melbourne, Australia",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/iwcs-25-ingestion/W18-2904/",
    doi = "10.18653/v1/W18-2904",
    pages = "19--27",
    abstract = "Do unsupervised methods for learning rich, contextualized token representations obviate the need for explicit modeling of linguistic structure in neural network models for semantic role labeling (SRL)? We address this question by incorporating the massively successful ELMo embeddings (Peters et al., 2018) into LISA (Strubell and McCallum, 2018), a strong, linguistically-informed neural network architecture for SRL. In experiments on the CoNLL-2005 shared task we find that though ELMo out-performs typical word embeddings, beginning to close the gap in F1 between LISA with predicted and gold syntactic parses, syntactically-informed models still out-perform syntax-free models when both use ELMo, especially on out-of-domain data. Our results suggest that linguistic structures are indeed still relevant in this golden age of deep learning for NLP."
}