Abdul Waheed


2021

Analyzing the Domain Robustness of Pretrained Language Models, Layer by Layer
Abhinav Ramesh Kashyap | Laiba Mehnaz | Bhavitvya Malik | Abdul Waheed | Devamanyu Hazarika | Min-Yen Kan | Rajiv Ratn Shah
Proceedings of the Second Workshop on Domain Adaptation for NLP

The robustness of pretrained language models (PLMs) is generally measured using performance drops on two or more domains. However, we do not yet understand the inherent robustness achieved by contributions from different layers of a PLM. We systematically analyze the robustness of these representations layer by layer from two perspectives. First, we measure the robustness of representations by using domain divergence between two domains. We find that i) domain variance increases from the lower to the upper layers for vanilla PLMs; ii) models continuously pretrained on domain-specific data (DAPT) (Gururangan et al., 2020) exhibit more variance than their pretrained PLM counterparts; and iii) distilled models (e.g., DistilBERT) also show greater domain variance. Second, we investigate the robustness of representations by analyzing the encoded syntactic and semantic information using diagnostic probes. We find that similar layers have similar amounts of linguistic information for data from an unseen domain.
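
The following is a minimal, illustrative sketch of the general idea of layer-wise domain divergence, not the paper's exact method. It assumes the HuggingFace transformers and torch packages, uses "bert-base-uncased" as a stand-in PLM, uses hypothetical one-sentence domain samples, and uses cosine distance between mean-pooled layer embeddings as a stand-in divergence measure.

import torch
from transformers import AutoModel, AutoTokenizer

def layer_means(texts, model, tokenizer):
    # Mean-pool token states per layer: returns (num_layers + 1, hidden_size).
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, output_hidden_states=True)
    mask = enc["attention_mask"].unsqueeze(-1)  # (batch, seq, 1)
    pooled = [(h * mask).sum(dim=(0, 1)) / mask.sum() for h in out.hidden_states]
    return torch.stack(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Hypothetical samples from two domains (movie reviews vs. clinical notes).
source_texts = ["The film was a delight to watch from start to finish."]
target_texts = ["The patient presented with acute abdominal pain and fever."]

src = layer_means(source_texts, model, tokenizer)
tgt = layer_means(target_texts, model, tokenizer)

# One score per layer: higher means the two domains diverge more at that layer.
divergence = 1 - torch.nn.functional.cosine_similarity(src, tgt, dim=-1)
for i, d in enumerate(divergence.tolist()):
    print(f"layer {i}: divergence {d:.3f}")

With a sketch like this, plotting the per-layer scores makes it easy to see whether divergence grows toward the upper layers, which is the kind of trend the paper reports for vanilla PLMs.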

BloomNet: A Robust Transformer based model for Bloom’s Learning Outcome Classification
Abdul Waheed | Muskan Goyal | Nimisha Mittal | Deepak Gupta | Ashish Khanna | Moolchand Sharma
Proceedings of the 4th International Conference on Natural Language and Speech Processing (ICNLSP 2021)