2025
MINDS: A Cross-Cultural Dialogue Corpus for Social Norm Classification and Adherence Detection
Pritish Sahu | Anirudh Som | Ajay Divakaran | Dimitra Vergyri
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Social norms are implicit, culturally grounded expectations that guide interpersonal communication. Unlike factual commonsense, norm reasoning is subjective, context-dependent, and varies across cultures, posing challenges for computational models. Prior works provide valuable normative annotations but mostly target isolated utterances or synthetic dialogues, limiting their ability to capture the fluid, multi-turn nature of real-world conversations. In this work, we present Norm-RAG, a retrieval-augmented, agentic framework for nuanced social norm inference in multi-turn dialogues. Norm-RAG models utterance-level attributes, including communicative intent, speaker roles, interpersonal framing, and linguistic cues, and grounds them in structured normative documentation retrieved via a novel Semantic Chunking approach. This enables interpretable and context-aware reasoning about norm adherence and violation across multilingual dialogues. We further introduce MINDS (Multilingual Interactions with Norm-Driven Speech), a bilingual dataset comprising 31 multi-turn Mandarin-English and Spanish-English conversations. Each turn is annotated for norm category and adherence status using multi-annotator consensus, reflecting cross-cultural and realistic norm expression. Our experiments show that Norm-RAG improves norm detection and generalization, demonstrating improved performance for culturally adaptive and socially intelligent dialogue systems.
2024
Demonstrations Are All You Need: Advancing Offensive Content Paraphrasing using In-Context Learning
Anirudh Som | Karan Sikka | Helen Gent | Ajay Divakaran | Andreas Kathol | Dimitra Vergyri
Findings of the Association for Computational Linguistics: ACL 2024
Paraphrasing of offensive content is a better alternative to content removal and helps improve civility in a communication environment. Supervised paraphrasers, however, rely heavily on large quantities of labeled data to help preserve meaning and intent. They also often retain a large portion of the offensiveness of the original content, which raises questions about their overall usability. In this paper we aim to assist practitioners in developing usable paraphrasers by exploring In-Context Learning (ICL) with large language models (LLMs), i.e., using a limited number of input-label demonstration pairs to guide the model in generating desired outputs for specific queries. Our study focuses on key factors such as the number and order of demonstrations, exclusion of the prompt instruction, and reduction in measured toxicity. We perform a principled evaluation on three datasets, including our proposed Context-Aware Polite Paraphrase (CAPP) dataset, comprising dialogue-style rude utterances, polite paraphrases, and additional dialogue context. We evaluate our approach using four closed-source and one open-source LLM. Our results reveal that ICL is comparable to supervised methods in generation quality, while being qualitatively better by 25% on human evaluation and attaining 76% lower toxicity. Also, ICL-based paraphrasers show only a slight reduction in performance even with just 10% of the training data.
2009
Anchored Speech Recognition for Question Answering
Sibel Yaman | Gokhan Tur | Dimitra Vergyri | Dilek Hakkani-Tur | Mary Harper | Wen Wang
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers
2004
Limited-Domain Speech-to-Speech Translation between English and Pashto
Kristin Precoda | Horacio Franco | Ascander Dost | Michael Frandsen | John Fry | Andreas Kathol | Colleen Richey | Susanne Riehemann | Dimitra Vergyri | Jing Zheng | Christopher Culy
Demonstration Papers at HLT-NAACL 2004
Automatic Diacritization of Arabic for Acoustic Modeling in Speech Recognition
Dimitra Vergyri | Katrin Kirchhoff
Proceedings of the Workshop on Computational Approaches to Arabic Script-based Languages