@inproceedings{nguyen-salazar-2019-transformers,
    title = "Transformers without Tears: Improving the Normalization of Self-Attention",
    author = "Nguyen, Toan Q.  and
      Salazar, Julian",
    editor = {Niehues, Jan  and
      Cattoni, Rolando  and
      St{\"u}ker, Sebastian  and
      Negri, Matteo  and
      Turchi, Marco  and
      Ha, Thanh-Le  and
      Salesky, Elizabeth  and
      Sanabria, Ramon  and
      Barrault, Lo{\"i}c  and
      Specia, Lucia  and
      Federico, Marcello},
    booktitle = "Proceedings of the 16th International Conference on Spoken Language Translation",
    month = nov # " 2-3",
    year = "2019",
    address = "Hong Kong",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2019.iwslt-1.17/",
    abstract = "We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PRENORM) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose {\ensuremath{\ell}}2 normalization with a single scale parameter (SCALENORM) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FIXNORM). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT {'}15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT {'}14 English-German), SCALENORM and FIXNORM remain competitive but PRENORM degrades performance."
}
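
For orientation, here is a minimal PyTorch sketch of the SCALENORM operation the abstract describes: ℓ2-normalize each activation vector and rescale it by a single learned scalar g. This is an illustrative reading of the abstract, not the authors' released code; the class name `ScaleNorm`, the √d initialization of g, and the `eps` guard are assumptions.

```python
import torch
import torch.nn as nn


class ScaleNorm(nn.Module):
    """SCALENORM sketch: l2-normalize the feature vector, then rescale it
    by one learned scalar g. The sqrt(d_model) initialization and the eps
    guard against division by zero are assumptions, not verified details."""

    def __init__(self, d_model: int, eps: float = 1e-5):
        super().__init__()
        self.g = nn.Parameter(torch.tensor(float(d_model) ** 0.5))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize along the last (feature) dimension, then rescale by g.
        norm = x.norm(dim=-1, keepdim=True).clamp(min=self.eps)
        return self.g * x / norm
```

Used in place of LayerNorm inside each residual block, this swaps LayerNorm's per-dimension gain and bias for a single scalar, which is consistent with the faster training the abstract reports.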