2025
CoMuMDR: Code-mixed Multi-modal Multi-domain corpus for Discourse paRsing in conversations
Divyaksh Shukla | Ritesh Baviskar | Dwijesh Gohil | Aniket Tiwari | Atul Shree | Ashutosh Modi
Findings of the Association for Computational Linguistics: ACL 2025
Discourse parsing is an important task useful for NLU applications such as summarization, machine comprehension, and emotion recognition. Current conversation-based discourse parsing datasets consist of written English dialogues restricted to a single domain. In this resource paper, we introduce CoMuMDR: Code-mixed Multi-modal Multi-domain corpus for Discourse paRsing in conversations. The corpus (code-mixed in Hindi and English) contains both audio and transcribed text and is annotated with nine discourse relations. We experiment with various SoTA baseline models; their poor performance highlights the challenges posed by a multi-domain code-mixed corpus, pointing towards the need to develop better models for such realistic settings.
Towards Quantifying Commonsense Reasoning with Mechanistic Insights
Abhinav Joshi | Areeb Ahmad | Divyaksh Shukla | Ashutosh Modi
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
2024
Towards Robust Evaluation of Unlearning in LLMs via Data Transformations
Abhinav Joshi | Shaswati Saha | Divyaksh Shukla | Sriram Vema | Harsh Jhamtani | Manas Gaur | Ashutosh Modi
Findings of the Association for Computational Linguistics: EMNLP 2024
Large Language Models (LLMs) have shown great success in a wide range of applications, from regular NLP-based use cases to AI agents. LLMs are trained on vast corpora of text from various sources; despite the best efforts during the data pre-processing stage, they may pick up undesirable information such as personally identifiable information (PII). Consequently, research in the area of Machine Unlearning (MUL) has recently become active; the main idea is to force LLMs to forget (unlearn) certain information (e.g., PII) without suffering performance loss on regular tasks. In this work, we examine the robustness of existing MUL techniques in their ability to enable leakage-proof forgetting in LLMs. In particular, we examine the effect of data transformations on forgetting: can an unlearned LLM recall forgotten information if the format of the input changes? Our findings on the TOFU dataset highlight the necessity of using diverse data formats to quantify unlearning in LLMs more reliably.
IITK at SemEval-2024 Task 10: Who is the speaker? Improving Emotion Recognition and Flip Reasoning in Conversations via Speaker Embeddings
Shubham Patel | Divyaksh Shukla | Ashutosh Modi
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
This paper presents our approach for SemEval-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversations. We propose a transformer-based speaker-centric model for the Emotion Flip Reasoning (EFR) task and a masked-memory network, along with a speaker participation vector, for the Emotion Recognition in Conversations (ERC) task. We introduce a Probable Trigger Zone, a region of the conversation that is more likely to contain the utterances causing a speaker's emotion to flip. On EFR sub-task 3, the proposed approach achieves a 5.9-point F1 improvement over the provided task baseline. The ablation study results highlight the significance of various design choices in the proposed method.