Atula Tejaswi Neerkaje



2024

RISE: Robust Early-exiting Internal Classifiers for Suicide Risk Evaluation
Ritesh Singh Soun | Atula Tejaswi Neerkaje | Ramit Sawhney | Nikolaos Aletras | Preslav Nakov
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Suicide is a serious public health issue, but it is preventable with timely intervention. Emerging studies suggest a noticeable increase in the number of individuals sharing suicidal thoughts online. As a result, utilising advanced Natural Language Processing techniques to build automated systems for risk assessment is a viable alternative. However, existing systems are prone to incorrectly predicting risk severity and have no early detection mechanisms. Therefore, we propose RISE, a novel robust mechanism for accurate early detection of suicide risk that ensembles Hyperbolic Internal Classifiers equipped with an abstention mechanism and early-exit inference capabilities. Through quantitative, qualitative and ablative experiments, we demonstrate that RISE is an efficient and robust human-in-the-loop approach for risk assessment on the Columbia Suicide Severity Risk Scale (C-SSRS) and CLPsych 2022 datasets. It successfully abstains from 84% of incorrect predictions on Reddit data while making predictions up to 3.5x earlier than state-of-the-art models.
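
To make the early-exit mechanism concrete, here is a minimal PyTorch sketch of inference with internal classifiers and an abstention fallback. The class and parameter names, the softmax-confidence exit criterion, and the thresholds are all illustrative assumptions; the paper's actual internal classifiers are hyperbolic and ensembled, which this sketch omits.

```python
# Illustrative sketch only; names and thresholds are assumptions, not the paper's code.
import torch
import torch.nn as nn

class EarlyExitClassifier(nn.Module):
    def __init__(self, encoder_layers, hidden_dim, num_classes, exit_threshold=0.9):
        super().__init__()
        self.layers = nn.ModuleList(encoder_layers)  # transformer blocks
        # one lightweight internal classifier ("exit head") per layer
        self.exits = nn.ModuleList(
            [nn.Linear(hidden_dim, num_classes) for _ in encoder_layers]
        )
        self.exit_threshold = exit_threshold

    @torch.no_grad()
    def forward(self, hidden):  # hidden: (batch=1, seq, dim) token embeddings
        for depth, (layer, exit_head) in enumerate(zip(self.layers, self.exits)):
            hidden = layer(hidden)
            # classify from the first ([CLS]) token after this layer
            probs = exit_head(hidden[:, 0]).softmax(dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= self.exit_threshold:  # confident enough: exit early
                return {"label": pred.item(), "exited_at": depth}
        # no layer was confident enough: abstain rather than guess
        return {"label": None, "exited_at": None, "abstained": True}
```

Abstaining instead of forcing a low-confidence prediction is what makes such an approach human-in-the-loop: deferred inputs can be routed to a clinician for manual review.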

Saliency-Aware Interpolative Augmentation for Multimodal Financial Prediction
Samyak Jain | Parth Chhabra | Atula Tejaswi Neerkaje | Puneet Mathur | Ramit Sawhney | Shivam Agarwal | Preslav Nakov | Sudheer Chava | Dinesh Manocha
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Predicting price variations of financial instruments for risk modeling and stock trading is challenging due to the stochastic nature of the stock market. While recent advances in financial AI have expanded the scope of data and methods used, such as textual and audio cues from financial earnings calls, limitations remain. Most datasets are small and exhibit domain distribution shifts due to the nature of their sources, motivating the exploration of robust data augmentation strategies such as Mixup. To tackle these challenges in the financial domain, we propose SH-Mix: a Saliency-guided Hierarchical Mixup augmentation technique for multimodal financial prediction tasks. SH-Mix combines multi-level embedding mixup strategies based on the contribution of each modality and of context subsequences. Through extensive quantitative and qualitative experiments on financial earnings and conference call datasets consisting of text and speech, we show that SH-Mix outperforms state-of-the-art methods by 3-7%. Additionally, we show that SH-Mix is generalizable across different modalities and models.
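
As a rough illustration of saliency-aware interpolation, the sketch below mixes two embedding sequences while biasing the interpolation toward the most salient tokens of one example. The gradient-magnitude saliency proxy, the function name, and the hyperparameters are assumptions for illustration; SH-Mix itself operates hierarchically across modalities and context subsequences, which this single-level sketch does not capture.

```python
# Illustrative sketch only; not the paper's implementation of SH-Mix.
import torch

def saliency_mixup(emb_a, emb_b, label_a, label_b, loss_a, alpha=0.4):
    """Mix two embedding sequences, biasing interpolation toward salient tokens.

    emb_a, emb_b: (seq, dim) embeddings, with emb_a.requires_grad == True
    loss_a: scalar task loss computed from emb_a (used only to score saliency)
    """
    # token-level saliency proxy: gradient magnitude of the loss w.r.t. emb_a
    grads, = torch.autograd.grad(loss_a, emb_a, retain_graph=True)
    saliency = grads.norm(dim=-1)                   # (seq,)
    saliency = saliency / (saliency.sum() + 1e-8)   # normalise per-token scores

    # global mixing ratio, sampled as in standard Mixup
    lam = torch.distributions.Beta(alpha, alpha).sample()

    # per-token ratio: salient tokens of example A are preserved more strongly
    lam_tok = (lam + (1 - lam) * saliency).unsqueeze(-1)      # (seq, 1)
    mixed_emb = lam_tok * emb_a + (1 - lam_tok) * emb_b
    mixed_label = lam * label_a + (1 - lam) * label_b         # soft (one-hot) labels
    return mixed_emb, mixed_label
```

Sampling the global ratio from a Beta distribution follows standard Mixup; the per-token adjustment is where the saliency guidance enters.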

2022

The Impact of Differential Privacy on Group Disparity Mitigation
Victor Petrén Bach Hansen | Atula Tejaswi Neerkaje | Ramit Sawhney | Lucie Flek | Anders Søgaard
Proceedings of the Fourth Workshop on Privacy in Natural Language Processing

The performance cost of differential privacy has, for some applications, been shown to be higher for minority groups; fairness, conversely, has been shown to disproportionately compromise the privacy of members of such groups. Most work in this area has been restricted to computer vision and risk assessment. In this paper, we evaluate the impact of differential privacy on fairness across four tasks, focusing on how attempts to mitigate privacy violations and between-group performance differences interact: does privacy inhibit attempts to ensure fairness? To this end, we train (ε, δ)-differentially private models with empirical risk minimization and group distributionally robust training objectives. Consistent with previous findings, we find that differential privacy increases between-group performance differences in the baseline setting; more interestingly, it reduces between-group performance differences in the robust setting. We explain this by reinterpreting differential privacy as regularization.
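
For intuition, the sketch below pairs a group-DRO-style loss with a simplified DP-SGD update, the two ingredients whose interaction the paper studies. All names and hyperparameters are illustrative; in particular, the batch-level gradient clipping is a readability simplification, since real DP-SGD clips per-example gradients and tracks the cumulative (ε, δ) budget with a privacy accountant.

```python
# Illustrative sketch only; names and hyperparameters are assumptions.
import torch

def group_dro_loss(per_example_loss, group_ids, group_weights, eta=0.01):
    """Group DRO: upweight worse-performing groups, return the weighted loss."""
    group_losses = torch.zeros_like(group_weights)
    for g in range(group_weights.numel()):
        mask = group_ids == g
        if mask.any():
            group_losses[g] = per_example_loss[mask].mean()
    # exponentiated-gradient update on group weights: higher loss -> more weight
    new_weights = group_weights * torch.exp(eta * group_losses.detach())
    new_weights = new_weights / new_weights.sum()
    return (new_weights * group_losses).sum(), new_weights

def dp_sgd_step(model, loss, optimizer, clip_norm=1.0, noise_multiplier=1.1):
    """One simplified DP-SGD step: clip the gradient, then add Gaussian noise."""
    optimizer.zero_grad()
    loss.backward()
    # NOTE: real DP-SGD clips *per-example* gradients; batch-level clipping
    # is a simplification for readability.
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    for p in model.parameters():
        if p.grad is not None:
            p.grad += noise_multiplier * clip_norm * torch.randn_like(p.grad)
    optimizer.step()
```

The regularization reading of differential privacy suggested in the abstract is visible here: clipping bounds the gradient norm and noise injection perturbs every update, both of which limit how closely the model can fit any individual example.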