2023
Proceedings of the Seventh Widening NLP Workshop (WiNLP 2023)
Bonaventure F. P. Dossou | Isidora Tourni | Hatem Haddad | Shaily Bhatt | Fatemehsadat Mireshghallah | Sunipa Dev | Tanvi Anand | Weijia Xu | Atnafu Lambebo Tonja | Alfredo Gomez | Chanjun Park
Proceedings of the Seventh Widening NLP Workshop (WiNLP 2023)
2022
Mitigating Gender Stereotypes in Hindi and Marathi
Neeraja Kirtane | Tanvi Anand
Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
As the use of natural language processing increases in our day-to-day life, the need to address gender bias inherent in these systems also amplifies. This is because the inherent bias interferes with the semantic structure of the output of these systems while performing tasks in natural language processing. While research is being done in English to quantify and mitigate bias, debiasing methods for Indic languages are either relatively nascent or absent for some Indic languages altogether. Most Indic languages are gendered, i.e., each noun is assigned a gender according to each language’s rules of grammar. As a consequence, evaluation differs from what is done in English. This paper evaluates gender stereotypes in the Hindi and Marathi languages. The methodologies differ from those used for English because some words have masculine and feminine counterparts. We create a dataset of neutral and gendered occupation and emotion words, and measure bias with the help of the Embedding Coherence Test (ECT) and Relative Norm Distance (RND). We also attempt to mitigate this bias in the embeddings. Experiments show that our proposed debiasing techniques reduce gender bias in these languages.
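For readers unfamiliar with the second metric, a minimal sketch of the Relative Norm Distance computation might look like the following. This is an illustrative approximation, not the authors' released code: `embeddings` is assumed to be a word-to-vector dictionary (e.g., loaded from pre-trained vectors), and the word lists passed in are hypothetical placeholders rather than the paper's dataset.

```python
# Illustrative sketch of the Relative Norm Distance (RND) bias measure.
# Assumes `embeddings` maps words to NumPy vectors of equal dimension.
import numpy as np

def relative_norm_distance(embeddings, male_words, female_words, neutral_words):
    """Average difference of Euclidean distances from each neutral word
    to the mean "male" vector versus the mean "female" vector.
    Values near zero suggest weaker gender association; the sign
    indicates which group the neutral words sit closer to."""
    male_avg = np.mean([embeddings[w] for w in male_words], axis=0)
    female_avg = np.mean([embeddings[w] for w in female_words], axis=0)
    distances = [
        np.linalg.norm(embeddings[w] - male_avg)
        - np.linalg.norm(embeddings[w] - female_avg)
        for w in neutral_words
    ]
    return float(np.mean(distances))
```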
2021
“Hold on honey, men at work”: A semi-supervised approach to detecting sexism in sitcoms
Smriti Singh | Tanvi Anand | Arijit Ghosh Chowdhury | Zeerak Waseem
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop
Television shows play an important role in propagating societal norms. Owing to the popularity of the situational comedy (sitcom) genre, it contributes significantly to the overall development of society. In an effort to analyze the content of television shows belonging to this genre, we present a dataset of dialogue turns from popular sitcoms annotated for the presence of sexist remarks. We train a text classification model to detect sexism using domain adaptive learning. We apply the model to our dataset to analyze the evolution of sexist content over the years. We propose a domain-specific semi-supervised architecture for the aforementioned detection of sexism. Through extensive experiments, we show that our model often yields better classification performance than generic deep learning-based sentence classification that does not employ domain-specific training. We find that while sexism decreases over time on average, the proportion of sexist dialogue for the most sexist sitcom actually increases. A quantitative analysis along with a detailed error analysis presents the case for our proposed methodology.
2020
Outcomes of coming out: Analyzing stories of LGBTQ+
Krithika Ramesh | Tanvi Anand
Proceedings of the Fourth Widening Natural Language Processing Workshop
The Internet is frequently used as a platform through which opinions and views on various topics can be expressed. One such topic that draws controversial attention is LGBTQ+ rights. This paper attempts to analyze the reaction that members of the LGBTQ+ community face when they reveal their gender or sexuality, or in other words, when they ‘come out of the closet’. We aim to classify the experiences shared by them as positive or negative. We collected data from various sources, primarily Twitter. We applied deep learning techniques and compared the results with those of other classifiers, as well as with the results obtained from applying classical sentiment analysis techniques to the data.