@inproceedings{jiang-etal-2020-inserting,
    title = "{I}nserting {I}nformation {B}ottlenecks for {A}ttribution in {T}ransformers",
    author = "Jiang, Zhiying  and
      Tang, Raphael  and
      Xin, Ji  and
      Lin, Jimmy",
    editor = "Cohn, Trevor  and
      He, Yulan  and
      Liu, Yang",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2020.findings-emnlp.343/",
    doi = "10.18653/v1/2020.findings-emnlp.343",
    pages = "3850--3857",
    abstract = "Pretrained transformers achieve the state of the art across tasks in natural language processing, motivating researchers to investigate their inner mechanisms. One common direction is to understand what features are important for prediction. In this paper, we apply information bottlenecks to analyze the attribution of each feature for prediction on a black-box model. We use BERT as the example and evaluate our approach both quantitatively and qualitatively. We show the effectiveness of our method in terms of attribution and the ability to provide insight into how information flows through layers. We demonstrate that our technique outperforms two competitive methods in degradation tests on four datasets. Code is available at \url{https://github.com/bazingagin/IBA}."
}
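The abstract names the core technique: insert an information bottleneck at a hidden layer so that activations are partially replaced with noise, and learn a per-feature mask whose learned values indicate how much each feature's information must be kept for prediction. Below is a minimal PyTorch sketch of such a bottleneck layer, under assumed conventions; the class, parameter, and variable names here are hypothetical, and this is not the authors' implementation, which is available at the repository linked in the abstract (https://github.com/bazingagin/IBA).

```python
import torch
import torch.nn as nn

class InformationBottleneck(nn.Module):
    """Hypothetical bottleneck: z = lambda * h + (1 - lambda) * noise.

    lambda in (0, 1) is a learned per-feature mask; a KL term upper-bounds
    the information that flows through, and serves as the attribution score.
    """
    def __init__(self, hidden_dim: int):
        super().__init__()
        # alpha parameterizes the mask lambda = sigmoid(alpha), one per feature;
        # initialized large so the bottleneck starts nearly open.
        self.alpha = nn.Parameter(torch.full((hidden_dim,), 5.0))

    def forward(self, h: torch.Tensor):
        # Match the noise distribution to the empirical feature statistics.
        mu = h.mean(dim=0, keepdim=True)
        std = h.std(dim=0, keepdim=True) + 1e-6
        lam = torch.sigmoid(self.alpha)            # mask in (0, 1)
        eps = mu + std * torch.randn_like(h)       # noise ~ N(mu, std^2)
        z = lam * h + (1.0 - lam) * eps            # partially noised signal

        # Closed-form KL( N(lam*h + (1-lam)*mu, ((1-lam)*std)^2) || N(mu, std^2) ),
        # per feature: an upper bound on the information passed through.
        var_ratio = (1.0 - lam) ** 2
        mean_term = (lam * (h - mu) / std) ** 2
        kl = 0.5 * (var_ratio + mean_term - torch.log(var_ratio + 1e-10) - 1.0)
        return z, kl.mean()

# Usage sketch: with the pretrained model frozen, apply the bottleneck to one
# layer's hidden states and optimize alpha to minimize task_loss + beta * kl.
ib = InformationBottleneck(hidden_dim=768)
h = torch.randn(32, 768)                           # stand-in for a BERT hidden state
z, info_loss = ib(h)
# total = task_loss(model_tail(z), labels) + beta * info_loss
```

In this reading, features whose mask stays near 1 after optimization carry information the prediction cannot do without, which is the attribution signal the paper evaluates.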