@inproceedings{murray-etal-2019-efficiency,
    title = "Efficiency through Auto-Sizing: {N}otre {D}ame {NLP}{'}s Submission to the {WNGT} 2019 Efficiency Task",
    author = "Murray, Kenton  and
      DuSell, Brian  and
      Chiang, David",
    editor = "Birch, Alexandra  and
      Finch, Andrew  and
      Hayashi, Hiroaki  and
      Konstas, Ioannis  and
      Luong, Thang  and
      Neubig, Graham  and
      Oda, Yusuke  and
      Sudoh, Katsuhito",
    booktitle = "Proceedings of the 3rd Workshop on Neural Generation and Translation",
    month = nov,
    year = "2019",
    address = "Hong Kong",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/D19-5634/",
    doi = "10.18653/v1/D19-5634",
    pages = "297--301",
    abstract = "This paper describes the Notre Dame Natural Language Processing Group{'}s (NDNLP) submission to the WNGT 2019 shared task (Hayashi et al., 2019). We investigated the impact of auto-sizing (Murray and Chiang, 2015; Murray et al., 2019) to the Transformer network (Vaswani et al., 2017) with the goal of substantially reducing the number of parameters in the model. Our method was able to eliminate more than 25{\%} of the model{'}s parameters while suffering a decrease of only 1.1 BLEU."
}