Krithika Ramesh


2023

A Comparative Study on the Impact of Model Compression Techniques on Fairness in Language Models
Krithika Ramesh | Arnav Chavan | Shrey Pandit | Sunayana Sitaram
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Compression techniques for deep learning have become increasingly popular, particularly in settings where latency and memory constraints are imposed. Several methods, such as pruning, distillation, and quantization, have been adopted for compressing models, each providing distinct advantages. However, existing literature demonstrates that compressing deep learning models could affect their fairness. Our analysis involves a comprehensive evaluation of pruned, distilled, and quantized language models, which we benchmark across a range of intrinsic and extrinsic metrics for measuring bias in text classification. We also investigate the impact of using multilingual models and evaluation measures. Our findings highlight the significance of considering both the pre-trained model and the chosen compression strategy in developing equitable language technologies. The results also indicate that compression strategies can have an adverse effect on fairness measures.
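As a rough illustration of the kind of comparison involved, the sketch below applies post-training dynamic quantization to a sequence classifier and checks how the prediction gap on a demographically paired input shifts. The checkpoint, the toy sentence pair, and the gap statistic are illustrative assumptions, not the paper's actual benchmark suite of intrinsic and extrinsic metrics.

```python
# Minimal sketch, assuming torch + transformers are installed; the model name
# and the counterfactual pair below are illustrative, not from the paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

# Post-training dynamic quantization of the linear layers, one of the
# compression strategies the paper evaluates.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

pair = ["He is a nurse.", "She is a nurse."]  # toy counterfactual pair

def positive_prob_gap(m):
    """Absolute gap in positive-class probability across the pair."""
    with torch.no_grad():
        inputs = tokenizer(pair, return_tensors="pt", padding=True)
        probs = m(**inputs).logits.softmax(dim=-1)[:, 1]
    return (probs[0] - probs[1]).abs().item()

print("prediction gap, full model:", positive_prob_gap(model))
print("prediction gap, quantized :", positive_prob_gap(quantized))
```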

Fairness in Language Models Beyond English: Gaps and Challenges
Krithika Ramesh | Sunayana Sitaram | Monojit Choudhury
Findings of the Association for Computational Linguistics: EACL 2023

With language models becoming increasingly ubiquitous, it has become essential to address their inequitable treatment of diverse demographic groups and factors. Most research on evaluating and mitigating fairness harms has been concentrated on English, while multilingual models and non-English languages have received comparatively little attention. In this paper, we present a survey of fairness in multilingual and non-English contexts, highlighting the shortcomings of current research and the difficulties faced by methods designed for English. We contend that the multitude of diverse cultures and languages across the world makes it infeasible to achieve comprehensive coverage in terms of constructing fairness datasets. Thus, the measurement and mitigation of biases must evolve beyond the current dataset-driven practices, which are narrowly focused on specific dimensions and types of biases and therefore impossible to scale across languages and cultures.

MEGA: Multilingual Evaluation of Generative AI
Kabir Ahuja | Harshita Diddee | Rishav Hada | Millicent Ochieng | Krithika Ramesh | Prachi Jain | Akshay Nambi | Tanuja Ganu | Sameer Segal | Mohamed Ahmed | Kalika Bali | Sunayana Sitaram
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Generative AI models have shown impressive performance on many Natural Language Processing tasks such as language understanding, reasoning, and language generation. An important question being asked by the AI community today concerns the capabilities and limits of these models, and it is clear that evaluating generative AI is very challenging. Most studies of generative LLMs have been restricted to English, and it is unclear how capable these models are at understanding and generating text in other languages. We present MEGA, the first comprehensive benchmarking of generative LLMs, which evaluates models on standard NLP benchmarks covering 16 NLP datasets across 70 typologically diverse languages. We compare the performance of generative LLMs, including ChatGPT and GPT-4, to state-of-the-art (SOTA) non-autoregressive models on these tasks to determine how well generative models perform compared to the previous generation of LLMs. We present a thorough analysis of model performance across languages and tasks and discuss challenges in improving the performance of generative LLMs on low-resource languages. We create a framework for evaluating generative LLMs in the multilingual setting and provide directions for future progress in the field.
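To make the shape of such an evaluation concrete, here is a minimal, model-agnostic sketch of a multilingual prompting loop: the `generate` callable stands in for any generative LLM client, and the prompt template, languages, and toy examples are assumptions for illustration, not MEGA's actual datasets or prompts.

```python
# Minimal sketch of per-language prompted evaluation; everything named here
# (PROMPT, toy_data, the stub model) is hypothetical.
from typing import Callable, Dict, List, Tuple

PROMPT = ('Classify the sentiment of this {lang} sentence as "positive" or '
          '"negative".\nSentence: {text}\nLabel:')

def evaluate(generate: Callable[[str], str],
             data: Dict[str, List[Tuple[str, str]]]) -> Dict[str, float]:
    """Return per-language accuracy on (text, gold_label) examples."""
    scores = {}
    for lang, examples in data.items():
        correct = 0
        for text, gold in examples:
            prediction = generate(PROMPT.format(lang=lang, text=text))
            correct += int(gold in prediction.strip().lower())
        scores[lang] = correct / len(examples)
    return scores

# Usage with a trivial stub; a real run would wrap an LLM API call instead.
toy_data = {"Hindi": [("यह फिल्म शानदार थी।", "positive")],
            "Swahili": [("Filamu hii ilikuwa mbaya.", "negative")]}
print(evaluate(lambda prompt: "positive", toy_data))
```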

2022

Revisiting Queer Minorities in Lexicons
Krithika Ramesh | Sumeet Kumar | Ashiqur Khudabukhsh
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)

Lexicons play an important role in content moderation, often serving as the first line of defense. However, little literature exists analyzing how queer-related words are represented in them. In this paper, we consider twelve well-known lexicons containing inappropriate words and analyze how gender and sexual minorities are represented in these lexicons. Our analyses reveal that several of these lexicons barely make any distinction between pejorative and non-pejorative queer-related words. We express concern that such indiscriminate inclusion of non-pejorative queer-related words may suppress queer presence in mainstream discourse. Our analyses further reveal that the lexicons have poor overlap in queer-related words. We finally present a quantifiable measure of consistency and show that several of these lexicons are not consistent in how they include (or omit) queer-related words.
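One simple way to see the overlap finding (though not the paper's own consistency measure, which is defined in the paper itself) is a pairwise Jaccard statistic over the queer-related subsets of each lexicon; the word sets below are hypothetical placeholders.

```python
# Sketch: pairwise Jaccard overlap between (hypothetical) queer-related
# subsets of moderation lexicons; low values mirror the "poor overlap" finding.
from itertools import combinations

lexicons = {
    "lexicon_a": {"term1", "term2", "queer"},
    "lexicon_b": {"term1", "gay", "queer"},
    "lexicon_c": {"term3", "gay"},
}

def jaccard(a: set, b: set) -> float:
    """Intersection over union of two word sets."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

for (name_a, set_a), (name_b, set_b) in combinations(lexicons.items(), 2):
    print(f"{name_a} vs {name_b}: {jaccard(set_a, set_b):.2f}")
```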

2021

Evaluating Gender Bias in Hindi-English Machine Translation
Krithika Ramesh | Gauri Gupta | Sanjay Singh
Proceedings of the 3rd Workshop on Gender Bias in Natural Language Processing

With language models being deployed increasingly in the real world, it is essential to address the fairness of their outputs. The word embedding representations of these language models often implicitly encode unwanted associations that form a social bias within the model. Gendered languages like Hindi pose an additional problem for the quantification and mitigation of bias, since the forms of words in a sentence change with the gender of the subject. Additionally, little work has been done on measuring and debiasing systems for Indic languages. In our work, we evaluate and quantify the gender bias within a Hindi-English machine translation system. We implement a modified version of the existing TGBI metric based on the grammatical considerations for Hindi. We also compare and contrast the resulting bias measurements across multiple metrics for pre-trained embeddings and the ones learned by our machine translation model.
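For reference, here is a sketch of the original TGBI formulation (Cho et al., 2019) that the paper modifies: each evaluation set of gender-neutral source sentences is scored by the mix of feminine, masculine, and gender-neutral forms in its translations, and the set scores are averaged. The Hindi-specific grammatical modifications from the paper are not reproduced here.

```python
# Sketch of the base TGBI metric as usually stated: for each sentence set,
# P = sqrt(p_f * p_m) + p_n, where p_f, p_m, p_n are the fractions of
# translations realized as feminine, masculine, or gender-neutral forms
# (p_f + p_m + p_n = 1); P = 1 only when every output stays gender-neutral.
from math import sqrt

def tgbi(sentence_sets):
    """Average P over sets; labels are 'f', 'm', or 'n' per translated sentence."""
    scores = []
    for labels in sentence_sets:
        n = len(labels)
        p_f, p_m, p_n = (labels.count(g) / n for g in ("f", "m", "n"))
        scores.append(sqrt(p_f * p_m) + p_n)
    return sum(scores) / len(scores)

# Toy usage: one gender-balanced set and one masculine-skewed set.
print(tgbi([["f", "m", "f", "m"], ["m", "m", "m", "f"]]))
```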

2020


Outcomes of coming out: Analyzing stories of LGBTQ+
Krithika Ramesh | Tanvi Anand
Proceedings of the Fourth Widening Natural Language Processing Workshop

The Internet is frequently used as a platform through which opinions and views on various topics can be expressed. One such topic that draws controversial attention is LGBTQ+ rights. This paper analyzes the reactions that members of the LGBTQ+ community face when they reveal their gender or sexuality, or, in other words, when they ‘come out of the closet’. We aim to classify the experiences they share as positive or negative. We collected data from various sources, primarily Twitter. We applied deep learning techniques and compared their results to those of other classifiers, as well as to the results of classical sentiment analysis techniques.
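As a hint of what the classical-baseline side of such a comparison might look like, here is a minimal TF-IDF plus logistic regression sketch; the two toy stories and labels are invented for illustration, and the paper's deep learning models and data are not reproduced.

```python
# Minimal sketch of a classical text classifier for positive/negative
# coming-out experiences; the stories and labels here are fabricated toys.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

stories = [
    "My parents hugged me and said they loved me no matter what.",
    "I was told to leave the house the day I came out.",
]
labels = ["positive", "negative"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(stories, labels)  # a real experiment would use a held-out test split
print(clf.predict(["They accepted me immediately."]))
```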