Samay U. Shetty


2025

LPI-RIT at LeWiDi-2025: Improving Distributional Predictions via Metadata and Loss Reweighting with DisCo
Mandira Sawkar | Samay U. Shetty | Deepak Pandita | Tharindu Cyril Weerasooriya | Christopher M. Homan
Proceedings of the 4th Workshop on Perspectivist Approaches to NLP

The Learning With Disagreements (LeWiDi) 2025 shared task aims to model annotator disagreement through soft-label distribution prediction and perspectivist evaluation, which focuses on modeling individual annotators. We adapt DisCo (Distribution from Context), a neural architecture that jointly models item-level and annotator-level label distributions, and present detailed analyses and improvements. In this paper, we extend DisCo with annotator metadata embeddings, enhanced input representations, and multi-objective training losses to better capture disagreement patterns. Through extensive experiments, we demonstrate substantial improvements in both soft and perspectivist evaluation metrics across three datasets. We also conduct in-depth calibration and error analyses that reveal when and why disagreement-aware modeling improves performance. Our findings show that disagreement can be better captured by conditioning on annotator demographics and by optimizing directly for distributional metrics, yielding consistent improvements across datasets.
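To illustrate the soft-label setup the abstract refers to, here is a minimal, self-contained sketch (not the authors' actual implementation): per-item annotator votes are turned into a target distribution, and model logits are trained against it with a distributional loss such as cross-entropy. All function names here are illustrative.

```python
import math

def votes_to_soft_label(votes, num_classes):
    """Convert raw annotator votes into an empirical label distribution."""
    counts = [0] * num_classes
    for v in votes:
        counts[v] += 1
    total = len(votes)
    return [c / total for c in counts]

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def soft_label_cross_entropy(logits, soft_label):
    """Cross-entropy against a soft (distributional) target rather than
    a single hard label -- the kind of objective optimized when training
    directly for distributional metrics."""
    probs = softmax(logits)
    return -sum(t * math.log(p) for t, p in zip(soft_label, probs))

# Example: three annotators split 2-1 on a binary item.
target = votes_to_soft_label([1, 0, 1], num_classes=2)   # [1/3, 2/3]
loss = soft_label_cross_entropy([0.3, 1.2], target)
```

A model that predicts the full vote distribution, rather than the majority label, is what the soft evaluation in LeWiDi scores.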

McMaster at LeWiDi-2025: Demographic-Aware RoBERTa
Mandira Sawkar | Samay U. Shetty | Deepak Pandita | Tharindu Cyril Weerasooriya | Christopher M. Homan
Proceedings of the 4th Workshop on Perspectivist Approaches to NLP

We present our submission to the Learning With Disagreements (LeWiDi) 2025 shared task. Our team implemented a variety of BERT-based models that encode annotator metadata alongside the text to predict soft-label distributions and individual annotator labels. Across four tasks we show that combining demographic factors improves performance; however, ablations over all demographic variables reveal that in some cases a single variable performs best. Our approach placed 4th in the overall competition.
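One common way to feed annotator metadata to a text encoder, consistent with the demographic-aware approach the abstract describes, is to serialize demographic attributes into a prefix on the input string. This is a hedged sketch under that assumption; the attribute names and bracket format are hypothetical, not taken from the paper.

```python
def build_demographic_input(text, annotator_meta):
    """Prepend annotator demographic attributes as a metadata prefix so a
    BERT-style encoder can condition its prediction on the annotator.
    Attribute names (e.g. 'age', 'gender') are illustrative placeholders."""
    prefix = " ".join(
        f"[{key.upper()}={value}]"
        for key, value in sorted(annotator_meta.items())
    )
    return f"{prefix} {text}"

# Example: the same item paired with one annotator's metadata.
example = build_demographic_input(
    "I found this post offensive.",
    {"age": "25-34", "gender": "F"},
)
```

Ablating a single demographic variable then amounts to passing a metadata dict with only that key, which matches the kind of per-variable ablation the abstract reports.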