@inproceedings{brarda-etal-2017-sequential,
    title = "Sequential Attention: A Context-Aware Alignment Function for Machine Reading",
    author = "Brarda, Sebastian  and
      Yeres, Philip  and
      Bowman, Samuel",
    editor = "Blunsom, Phil  and
      Bordes, Antoine  and
      Cho, Kyunghyun  and
      Cohen, Shay  and
      Dyer, Chris  and
      Grefenstette, Edward  and
      Hermann, Karl Moritz  and
      Rimell, Laura  and
      Weston, Jason  and
      Yih, Scott",
    booktitle = "Proceedings of the 2nd Workshop on Representation Learning for {NLP}",
    month = aug,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/iwcs-25-ingestion/W17-2610/",
    doi = "10.18653/v1/W17-2610",
    pages = "75--80",
    abstract = "In this paper we propose a neural network model with a novel Sequential Attention layer that extends soft attention by assigning weights to words in an input sequence in a way that takes into account not just how well that word matches a query, but how well surrounding words match. We evaluate this approach on the task of reading comprehension (on the Who did What and CNN datasets) and show that it dramatically improves a strong baseline{---}the Stanford Reader{---}and is competitive with the state of the art."
}
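The abstract only sketches the Sequential Attention idea, so below is a minimal, hypothetical NumPy illustration of the underlying intuition: a word's attention weight depends not only on its own query-match score but also on how well its neighbors match. Note that the paper uses a learned layer over the scores rather than the fixed windowed average used here, and all names (sequential_attention, window) are illustrative, not from the paper.

# Minimal sketch of the intuition behind Sequential Attention, NOT the
# authors' exact model: smooth each word's query-match score with its
# neighbors' scores before the softmax, so surrounding matches also
# raise a word's attention weight.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sequential_attention(H, q, window=2):
    """H: (T, d) contextual word representations; q: (d,) query vector."""
    raw = H @ q                            # plain soft-attention match scores
    T = raw.shape[0]
    ctx = np.empty(T)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        ctx[t] = raw[lo:hi].mean()         # blend in neighboring match scores
    return softmax(ctx)                    # context-aware attention weights

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))
q = rng.normal(size=4)
print(sequential_attention(H, q))

In the paper itself this smoothing is learned (a recurrent layer run over the match scores); the fixed average above is just the simplest stand-in that exhibits the context-aware behavior the abstract describes.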