Srinadh Bhojanapalli


2021

A Simple and Effective Positional Encoding for Transformers
Pu-Chin Chen | Henry Tsai | Srinadh Bhojanapalli | Hyung Won Chung | Yin-Wen Chang | Chun-Sung Ferng
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Transformer models are permutation equivariant. To supply the order and type information of the input tokens, position and segment embeddings are usually added to the input. Recent works have proposed variations of positional encodings, with relative position encodings achieving better performance. Our analysis shows that the gain actually comes from moving positional information from the input to the attention layers. Motivated by this, we introduce Decoupled Positional Attention for Transformers (DIET), a simple yet effective mechanism to encode position and segment information into Transformer models. The proposed method has faster training and inference time, while achieving competitive performance on the GLUE, XTREME and WMT benchmarks. We further generalize our method to long-range transformers and show performance gains.
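To make the decoupling concrete, below is a minimal sketch of a self-attention module in which position enters as a learned additive bias on the attention logits instead of being added to the input embeddings. The class name, the `pos_bias` parameterization, and the full per-position bias table are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledPositionalAttention(nn.Module):
    """Sketch: positional information is injected as an additive bias on the
    attention logits; the token embeddings carry no position information."""

    def __init__(self, dim, num_heads, max_len):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        # Learned per-head bias over (query position, key position) pairs (assumed form).
        self.pos_bias = nn.Parameter(torch.zeros(num_heads, max_len, max_len))

    def forward(self, x):  # x: (batch, seq_len, dim), no position embeddings added
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        logits = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        logits = logits + self.pos_bias[:, :n, :n]   # position enters the attention layer here
        attn = F.softmax(logits, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(out)
```

Because the bias is added inside attention, content and position interactions are computed separately, which is what allows the faster training and inference claimed above.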

2020

Semantic Label Smoothing for Sequence to Sequence Problems
Michal Lukasik | Himanshu Jain | Aditya Menon | Seungyeon Kim | Srinadh Bhojanapalli | Felix Yu | Sanjiv Kumar
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Label smoothing has been shown to be an effective regularization strategy in classification that prevents overfitting and helps with label de-noising. However, extending such methods directly to seq2seq settings, such as Machine Translation, is challenging: the large target output space of such problems makes it intractable to apply label smoothing over all possible outputs. Most existing approaches for seq2seq settings either apply token-level smoothing or smooth over sequences generated by randomly substituting tokens in the target sequence. Unlike these works, we propose a technique that smooths over well-formed, relevant sequences that not only have sufficient n-gram overlap with the target sequence but are also semantically similar. Our method shows consistent and significant improvements over state-of-the-art techniques on different datasets.
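As an illustration of the sequence-level smoothing idea, the sketch below spreads the target probability mass between the gold sequence and a small set of related sequences, weighted by a relevance score (e.g. n-gram overlap or semantic similarity). The function name, interface, and weighting scheme are assumptions made for exposition, not the paper's implementation.

```python
import torch

def sequence_smoothing_loss(log_probs_per_seq, related_weights, epsilon=0.1):
    """Sketch of sequence-level label smoothing.

    log_probs_per_seq: (1 + R,) model log-probabilities of the gold sequence
        followed by R related, well-formed candidate sequences.
    related_weights:   (R,) non-negative relevance scores for the candidates.
    """
    weights = torch.zeros_like(log_probs_per_seq)
    weights[0] = 1.0 - epsilon                                        # mass kept on the gold sequence
    weights[1:] = epsilon * related_weights / related_weights.sum()   # spread over related sequences
    return -(weights * log_probs_per_seq).sum()                       # cross-entropy vs. smoothed target
```

Compared with uniform token-level smoothing, the smoothing distribution here is supported only on a handful of relevant sequences, which keeps the objective tractable despite the exponentially large output space.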