Monojit Choudhury


2024

pdf
Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation?
Rishav Hada | Varun Gumma | Adrian de Wynter | Harshita Diddee | Mohamed Ahmed | Monojit Choudhury | Kalika Bali | Sunayana Sitaram
Findings of the Association for Computational Linguistics: EACL 2024

Large Language Models (LLMs) excel in various Natural Language Processing (NLP) tasks, yet their evaluation, particularly in languages beyond the top 20, remains inadequate due to the limitations of existing benchmarks and metrics. Employing LLMs as evaluators to rank or score other models’ outputs emerges as a viable solution, addressing the constraints tied to human annotators and established benchmarks. In this study, we explore the potential of LLM-based evaluators in enhancing multilingual evaluation by calibrating them against 20K human judgments across three text-generation tasks, five metrics, and eight languages. Our analysis reveals a bias in LLM-based evaluators towards higher scores, underscoring the necessity of calibration with native speaker judgments, especially in low-resource and non-Latin script languages, to ensure accurate evaluation of LLM performance across diverse languages.
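
A minimal illustration of the kind of meta-evaluation described above (not the paper’s code; the scores below are made-up placeholders): compare LLM-evaluator ratings against native-speaker judgments, measuring both their correlation and the over-scoring bias the abstract reports.

```python
# Minimal sketch: agreement between an LLM evaluator and native-speaker
# judgments on a 1-5 quality scale. Scores are illustrative placeholders.
import numpy as np
from scipy.stats import pearsonr

human_scores = np.array([3, 4, 2, 5, 4, 3, 1, 4])  # native-speaker ratings
llm_scores   = np.array([4, 5, 3, 5, 5, 4, 2, 5])  # LLM evaluator ratings

r, p = pearsonr(human_scores, llm_scores)
bias = float(np.mean(llm_scores - human_scores))  # positive => LLM over-scores
print(f"correlation={r:.2f} (p={p:.3f}), mean score inflation={bias:+.2f}")
```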

pdf
Evaluating Large Language Models for Health-related Queries with Presuppositions
Navreet Kaur | Monojit Choudhury | Danish Pruthi
Findings of the Association for Computational Linguistics: ACL 2024

As corporations rush to integrate large language models (LLMs), it is critical that these models provide factually accurate information that is robust to any presuppositions a user may express. In this work, we introduce UPHILL, a dataset consisting of health-related queries with varying degrees of presuppositions. Using UPHILL, we evaluate the factual accuracy and consistency of InstructGPT, ChatGPT, GPT-4 and Bing Copilot models. We find that while model responses rarely contradict true health claims (posed as questions), all investigated models fail to challenge false claims. Alarmingly, responses from these models agree with 23-32% of existing false claims, and with 49-55% of novel fabricated claims. As we increase the extent of presupposition in input queries, responses from all models except Bing Copilot agree with the claim considerably more often, regardless of its veracity. Given the models' moderate factual accuracy and their inability to challenge false assumptions, our work calls for a careful assessment of current LLMs for use in high-stakes scenarios.

pdf
Ethical Reasoning and Moral Value Alignment of LLMs Depend on the Language We Prompt Them in
Utkarsh Agarwal | Kumar Tanmay | Aditi Khandelwal | Monojit Choudhury
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Ethical reasoning is a crucial skill for Large Language Models (LLMs). However, moral values are not universal, but rather influenced by language and culture. This paper explores how three prominent LLMs – GPT-4, ChatGPT, and Llama2Chat-70B – perform ethical reasoning in different languages and whether their moral judgements depend on the language in which they are prompted. We extend the study of ethical reasoning of LLMs by (CITATION) to a multilingual setup, following their framework of probing LLMs with ethical dilemmas and policies from three branches of normative ethics: deontology, virtue, and consequentialism. We experiment with six languages: English, Spanish, Russian, Chinese, Hindi, and Swahili. We find that GPT-4 is the most consistent and unbiased ethical reasoner across languages, while ChatGPT and Llama2Chat-70B show significant moral value bias when we move to languages other than English. Interestingly, the nature of this bias varies significantly across languages for all LLMs, including GPT-4.

pdf
INMT-Lite: Accelerating Low-Resource Language Data Collection via Offline Interactive Neural Machine Translation
Harshita Diddee | Anurag Shukla | Tanuja Ganu | Vivek Seshadri | Sandipan Dandapat | Monojit Choudhury | Kalika Bali
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

A steady increase in the performance of Massively Multilingual Models (MMLMs) has contributed to their rapidly increasing use in data collection pipelines. Interactive Neural Machine Translation (INMT) systems are one class of tools that can utilize MMLMs to promote such data collection in several under-resourced languages. However, these tools are often not adapted to the deployment constraints that native language speakers operate in, as they are driven by bloated MMLMs oriented towards online inference and trained for data-rich languages. INMT-Lite addresses these challenges through its support of (1) three different modes of Internet-independent deployment and (2) a suite of four assistive interfaces suitable for (3) data-sparse languages. We perform an extensive user study of INMT-Lite with an under-resourced language community, Gondi, and find that INMT-Lite improves the data generation experience of community members along multiple axes, such as cognitive load, task productivity, and interface interaction time and effort, without compromising the quality of the generated translations. INMT-Lite’s code is open-sourced to further research in this domain.

pdf
Tricking LLMs into Disobedience: Formalizing, Analyzing, and Detecting Jailbreaks
Abhinav Sukumar Rao | Atharva Roshan Naik | Sachin Vashistha | Somak Aditya | Monojit Choudhury
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Recent explorations with commercial Large Language Models (LLMs) have shown that non-expert users can jailbreak LLMs by simply manipulating their prompts, resulting in degenerate output behavior, privacy and security breaches, offensive outputs, and violations of content regulator policies. Limited studies have been conducted to formalize and analyze these attacks and their mitigations. We bridge this gap by proposing a formalism and a taxonomy of known (and possible) jailbreaks. We survey existing jailbreak methods and their effectiveness on open-source and commercial LLMs (such as GPT-based models, OPT, BLOOM, and FLAN-T5-XXL). We further discuss the challenges of jailbreak detection in terms of effectiveness against known attacks. For further analysis, we release a dataset of model outputs across 3700 jailbreak prompts over 4 tasks.

pdf
Do Moral Judgment and Reasoning Capability of LLMs Change with Language? A Study using the Multilingual Defining Issues Test
Aditi Khandelwal | Utkarsh Agarwal | Kumar Tanmay | Monojit Choudhury
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

This paper explores the moral judgment and moral reasoning abilities exhibited by Large Language Models (LLMs) across languages through the Defining Issues Test. It is a well-known fact that moral judgment depends on the language in which the question is asked. We extend the work of (CITATION) beyond English to five new languages (Chinese, Hindi, Russian, Spanish and Swahili), and probe three LLMs – ChatGPT, GPT-4 and Llama2Chat-70B – that show substantial multilingual text processing and generation abilities. Our study shows that the moral reasoning ability of all models, as indicated by the post-conventional score, is substantially inferior for Hindi and Swahili compared to Spanish, Russian, Chinese and English, while there is no clear trend for the performance of the latter four languages. The moral judgments, too, vary considerably by language.

2023

pdf
Prover: Generating Intermediate Steps for NLI with Commonsense Knowledge Retrieval and Next-Step Prediction
Deepanway Ghosal | Somak Aditya | Monojit Choudhury
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf bib
Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching
Genta Winata | Sudipta Kar | Marina Zhukova | Thamar Solorio | Mona Diab | Sunayana Sitaram | Monojit Choudhury | Kalika Bali
Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching

pdf
Everything you need to know about Multilingual LLMs: Towards fair, performant and reliable models for languages of the world
Sunayana Sitaram | Monojit Choudhury | Barun Patra | Vishrav Chaudhary | Kabir Ahuja | Kalika Bali
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts)

This tutorial will describe various aspects of scaling up language technologies to many of the world’s languages by describing the latest research in Massively Multilingual Language Models (MMLMs). We will cover topics such as data collection, training and fine-tuning of models, Responsible AI issues such as fairness, bias and toxicity, linguistic diversity and evaluation in the context of MMLMs, specifically focusing on issues in non-English and low-resource languages. Further, we will also talk about some of the real-world challenges in deploying these models in language communities in the field. With the performance of MMLMs improving in the zero-shot setting for many languages, it is now becoming feasible to use them for building language technologies in many languages of the world, and this tutorial will provide the computational linguistics community with unique insights from the latest research in multilingual models.

pdf
Fairness in Language Models Beyond English: Gaps and Challenges
Krithika Ramesh | Sunayana Sitaram | Monojit Choudhury
Findings of the Association for Computational Linguistics: EACL 2023

With language models becoming increasingly ubiquitous, it has become essential to address their inequitable treatment of diverse demographic groups and factors. Most research on evaluating and mitigating fairness harms has been concentrated on English, while multilingual models and non-English languages have received comparatively little attention. This paper presents a survey of fairness in multilingual and non-English contexts, highlighting the shortcomings of current research and the difficulties faced by methods designed for English. We contend that the multitude of diverse cultures and languages across the world makes it infeasible to achieve comprehensive coverage in terms of constructing fairness datasets. Thus, the measurement and mitigation of biases must evolve beyond current dataset-driven practices, which are narrowly focused on specific dimensions and types of biases and therefore impossible to scale across languages and cultures.

pdf
Performance and Risk Trade-offs for Multi-word Text Prediction at Scale
Aniket Vashishtha | S Sai Prasad | Payal Bajaj | Vishrav Chaudhary | Kate Cook | Sandipan Dandapat | Sunayana Sitaram | Monojit Choudhury
Findings of the Association for Computational Linguistics: EACL 2023

Large Language Models such as GPT-3 are well-suited for text prediction tasks, which can help and delight users during text composition. At the same time, LLMs are known to generate ethically inappropriate predictions even for seemingly innocuous contexts. Toxicity detection followed by filtering is a common strategy for mitigating the harm from such predictions. However, as we shall argue in this paper, in the context of text prediction, it is not sufficient to detect and filter toxic content. One also needs to ensure factual correctness and group-level fairness of the predictions; failing to do so can make the system ineffective and nonsensical at best, and unfair and detrimental to the users at worst. We discuss the gaps and challenges of toxicity detection approaches - from blocklist-based approaches to sophisticated state-of-the-art neural classifiers - by evaluating them on the text prediction task for English against a manually crafted CheckList of harms targeted at different groups and different levels of severity.

pdf
X-RiSAWOZ: High-Quality End-to-End Multilingual Dialogue Datasets and Few-shot Agents
Mehrad Moradshahi | Tianhao Shen | Kalika Bali | Monojit Choudhury | Gael de Chalendar | Anmol Goel | Sungkyun Kim | Prashant Kodali | Ponnurangam Kumaraguru | Nasredine Semmar | Sina Semnani | Jiwon Seo | Vivek Seshadri | Manish Shrivastava | Michael Sun | Aditya Yadavalli | Chaobin You | Deyi Xiong | Monica Lam
Findings of the Association for Computational Linguistics: ACL 2023

Task-oriented dialogue research has mainly focused on a few popular languages like English and Chinese, due to the high dataset creation cost for a new language. To reduce the cost, we apply manual editing to automatically translated data. We create a new multilingual benchmark, X-RiSAWOZ, by translating the Chinese RiSAWOZ to 4 languages: English, French, Hindi, Korean; and a code-mixed English-Hindi language. X-RiSAWOZ has more than 18,000 human-verified dialogue utterances for each language, and unlike most multilingual prior work, is an end-to-end dataset for building fully-functioning agents. The many difficulties we encountered in creating X-RiSAWOZ led us to develop a toolset to accelerate the post-editing of a new language dataset after translation. This toolset improves machine translation with a hybrid entity alignment technique that combines neural with dictionary-based methods, along with many automated and semi-automated validation checks. We establish strong baselines for X-RiSAWOZ by training dialogue agents in the zero- and few-shot settings where limited gold data is available in the target language. Our results suggest that our translation and post-editing methodology and toolset can be used to create new high-quality multilingual dialogue agents cost-effectively. Our dataset, code, and toolkit are released open-source.

pdf
Ethical Reasoning over Moral Alignment: A Case and Framework for In-Context Ethical Policies in LLMs
Abhinav Sukumar Rao | Aditi Khandelwal | Kumar Tanmay | Utkarsh Agarwal | Monojit Choudhury
Findings of the Association for Computational Linguistics: EMNLP 2023

In this position paper, we argue that instead of morally aligning LLMs to a specific set of ethical principles, we should infuse generic ethical reasoning capabilities into them so that they can handle value pluralism at a global scale. When provided with an ethical policy, an LLM should be capable of making decisions that are ethically consistent with the policy. We develop a framework that integrates moral dilemmas with moral principles pertaining to different formalisms of normative ethics, and at different levels of abstraction. Initial experiments with GPT-x models show that while GPT-4 is a nearly perfect ethical reasoner, the models still exhibit bias towards the moral values of Western and English-speaking societies.

pdf
DiTTO: A Feature Representation Imitation Approach for Improving Cross-Lingual Transfer
Shanu Kumar | Soujanya Abbaraju | Sandipan Dandapat | Sunayana Sitaram | Monojit Choudhury
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Zero-shot cross-lingual transfer is promising; however, it has been shown to be sub-optimal, with inferior transfer performance for low-resource languages. In this work, we envision languages as domains for improving zero-shot transfer by jointly reducing the feature incongruity between the source and the target language and increasing the generalization capabilities of pre-trained multilingual transformers. We show that our approach, DiTTO, significantly outperforms the standard zero-shot fine-tuning method on multiple datasets across all languages using solely unlabeled instances in the target language. Empirical results show that jointly reducing feature incongruity for multiple target languages is vital for successful cross-lingual transfer. Moreover, our model enables better cross-lingual transfer than standard fine-tuning methods, even in the few-shot setting.

pdf
LLM-powered Data Augmentation for Enhanced Cross-lingual Performance
Chenxi Whitehouse | Monojit Choudhury | Alham Fikri Aji
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

This paper explores the potential of leveraging Large Language Models (LLMs) for data augmentation in multilingual commonsense reasoning datasets where the available training data is extremely limited. To achieve this, we utilise several LLMs, namely Dolly-v2, StableVicuna, ChatGPT, and GPT-4, to augment three datasets: XCOPA, XWinograd, and XStoryCloze. Subsequently, we evaluate the effectiveness of fine-tuning smaller multilingual models, mBERT and XLMR, using the synthesised data. We compare the performance of training with data generated in English and in target languages, as well as with translated English-generated data, revealing the overall advantages of incorporating data generated by LLMs, e.g., a notable 13.4-point accuracy improvement in the best case. Furthermore, we conduct a human evaluation by asking native speakers to assess the naturalness and logical coherence of the generated examples across different languages. The results of the evaluation indicate that LLMs such as ChatGPT and GPT-4 excel at producing natural and coherent text in most languages; however, they struggle to generate meaningful text in certain languages like Tamil. We also observe that ChatGPT falls short in generating plausible alternatives compared to the original dataset, whereas examples from GPT-4 exhibit competitive logical consistency.
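
As a rough sketch of the augmentation loop described above (the exact prompts and model APIs differ per LLM; `call_llm` is a hypothetical stand-in, and the few-shot example is illustrative):

```python
# Hedged sketch of LLM-based augmentation in the spirit of the abstract.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in Dolly-v2 / StableVicuna / ChatGPT / GPT-4")

FEW_SHOT = (
    "Premise: The man broke his toe. What was the CAUSE?\n"
    "Choice 1: He got a hole in his sock.\n"
    "Choice 2: He dropped a hammer on his foot.\n"
    "Answer: 2\n\n"
)

def augment_copa(n_examples: int, language: str = "English") -> list[str]:
    """Ask the LLM for new COPA-style items, either directly in the target
    language or in English for later translation (the two settings the
    abstract compares)."""
    prompt = (FEW_SHOT +
              f"Write {n_examples} new examples of this kind in {language}, "
              "in exactly the same format.")
    return call_llm(prompt).split("\n\n")
```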

pdf
DUBLIN: Visual Document Understanding By Language-Image Network
Kriti Aggarwal | Aditi Khandelwal | Kumar Tanmay | Owais Khan Mohammed | Qiang Liu | Monojit Choudhury | Hardik Chauhan | Subhojit Som | Vishrav Chaudhary | Saurabh Tiwary
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track

In this paper, we present DUBLIN, a pixel-based model for visual document understanding that does not rely on OCR. DUBLIN can process both images and texts in documents just by the pixels and handle diverse document types and tasks. DUBLIN is pretrained on a large corpus of document images with novel tasks that enhance its visual and linguistic abilities. We evaluate DUBLIN on various benchmarks and show that it achieves state-of-the-art performance on extractive tasks such as DocVQA, InfoVQA, AI2D, OCR-VQA, RefExp, and CORD, as well as strong performance on abstraction datasets such as VisualMRC and text captioning. Our model demonstrates the potential of OCR-free document processing and opens new avenues for applications and research.

2022

pdf
On the Calibration of Massively Multilingual Language Models
Kabir Ahuja | Sunayana Sitaram | Sandipan Dandapat | Monojit Choudhury
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Massively Multilingual Language Models (MMLMs) have recently gained popularity due to their surprising effectiveness in cross-lingual transfer. While there has been much work in evaluating these models for their performance on a variety of tasks and languages, little attention has been paid to how well calibrated these models are with respect to the confidence in their predictions. We first investigate the calibration of MMLMs in the zero-shot setting and observe a clear case of miscalibration in low-resource languages or those which are typologically diverse from English. Next, we empirically show that calibration methods like temperature scaling and label smoothing do reasonably well in improving calibration in the zero-shot scenario. We also find that few-shot examples in the language can further help reduce calibration errors, often substantially. Overall, our work contributes towards building more reliable multilingual models by highlighting the issue of their miscalibration, understanding what language- and model-specific factors influence it, and pointing out strategies to improve it.
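
For readers unfamiliar with the two ingredients mentioned here, the following is a self-contained sketch (not the paper’s code) of expected calibration error (ECE) and a simple grid-search variant of temperature scaling:

```python
# Sketch: ECE and post-hoc temperature scaling on held-out logits.
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ece(probs, labels, n_bins=10):
    # Gap between confidence and accuracy, averaged over confidence bins.
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf > lo) & (conf <= hi)
        if m.any():
            total += m.mean() * abs((pred[m] == labels[m]).mean() - conf[m].mean())
    return total

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    # Pick the temperature minimising ECE on a held-out set (grid search
    # for simplicity; NLL minimisation is the more common choice).
    return min(grid, key=lambda T: ece(softmax(logits, T), labels))
```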

pdf
On the Economics of Multilingual Few-shot Learning: Modeling the Cost-Performance Trade-offs of Machine Translated and Manual Data
Kabir Ahuja | Monojit Choudhury | Sandipan Dandapat
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Borrowing ideas from production functions in micro-economics, in this paper we introduce a framework to systematically evaluate the performance and cost trade-offs between machine-translated and manually-created labelled data for task-specific fine-tuning of massively multilingual language models. We illustrate the effectiveness of our framework through a case study on the TyDIQA-GoldP dataset. One of the interesting conclusions of the study is that if the cost of machine translation is greater than zero, the optimal performance at least cost is always achieved with at least some, or only, manually-created data. To our knowledge, this is the first attempt at extending the concept of production functions to study data collection strategies for training multilingual models, and it can serve as a valuable tool for analyzing other similar cost-vs-data trade-offs in NLP.
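
The following toy sketch conveys the framing only (the functional form, exponents, and costs are illustrative, not the paper’s): choose the mix of machine-translated and manually-created examples that maximises a production-function-style performance estimate under a fixed budget.

```python
# Toy illustration of the cost-performance framing described above.
def performance(n_mt, n_man, a=0.3, b=0.7):
    # Diminishing returns in both data types (Cobb-Douglas-like; illustrative).
    return (1 + n_mt) ** a * (1 + n_man) ** b

def best_mix(budget, cost_mt=0.1, cost_man=1.0, step=10):
    """Grid-search the number of manual examples; spend the rest on MT data."""
    best = (0, 0, 0.0)
    n_man = 0
    while n_man * cost_man <= budget:
        n_mt = int((budget - n_man * cost_man) / cost_mt)
        perf = performance(n_mt, n_man)
        if perf > best[2]:
            best = (n_mt, n_man, perf)
        n_man += step
    return best  # (machine-translated, manual, estimated performance)

print(best_mix(budget=500))
```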

pdf
Global Readiness of Language Technology for Healthcare: What Would It Take to Combat the Next Pandemic?
Ishani Mondal | Kabir Ahuja | Mohit Jain | Jacki O’Neill | Kalika Bali | Monojit Choudhury
Proceedings of the 29th International Conference on Computational Linguistics

The COVID-19 pandemic has brought out both the best and worst of language technology (LT). On one hand, conversational agents for information dissemination and basic diagnosis have seen widespread use, and arguably had an important role in fighting the pandemic. On the other hand, it has also become clear that such technologies are readily available only for a handful of languages, and the vast majority of the Global South is completely bereft of these benefits. What is the state of LT, especially conversational agents, for healthcare across the world’s languages? And what would it take to ensure global readiness of LT before the next pandemic? In this paper, we try to answer these questions through a survey of existing literature and resources, as well as through a rapid chatbot-building exercise for 15 Asian and African languages with varying amounts of resource availability. The study confirms the pitiful state of LT even for languages with large speaker bases, such as Sinhala and Hausa, and identifies the gaps that could help us prioritize research and investment strategies in LT for healthcare.

pdf
NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis
Shamsuddeen Hassan Muhammad | David Ifeoluwa Adelani | Sebastian Ruder | Ibrahim Sa’id Ahmad | Idris Abdulmumin | Bello Shehu Bello | Monojit Choudhury | Chris Chinenye Emezue | Saheed Salahudeen Abdullahi | Anuoluwapo Aremu | Alípio Jorge | Pavel Brazdil
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Sentiment analysis is one of the most widely studied applications in NLP, but most work focuses on languages with large amounts of data. We introduce the first large-scale human-annotated Twitter sentiment dataset for the four most widely spoken languages in Nigeria—Hausa, Igbo, Nigerian-Pidgin, and Yorùbá—consisting of around 30,000 annotated tweets per language, including a significant fraction of code-mixed tweets. We propose text collection, filtering, processing and labeling methods that enable us to create datasets for these low-resource languages. We evaluate a range of pre-trained models and transfer strategies on the dataset. We find that language-specific models and language-adaptive fine-tuning generally perform best. We release the datasets, trained models, sentiment lexicons, and code to incentivize research on sentiment analysis in under-represented languages.

pdf
Language Patterns and Behaviour of the Peer Supporters in Multilingual Healthcare Conversational Forums
Ishani Mondal | Kalika Bali | Mohit Jain | Monojit Choudhury | Jacki O’Neill | Millicent Ochieng | Kagonya Awori | Keshet Ronen
Proceedings of the Thirteenth Language Resources and Evaluation Conference

In this work, we conduct a quantitative linguistic analysis of the language usage patterns of multilingual peer supporters in two health-focused WhatsApp groups in Kenya comprising youth living with HIV. Even though the language of communication for the group was predominantly English, we observe frequent use of Kiswahili, Sheng and code-mixing among the three languages. We present an analysis of language choice and its accommodation, different functions of code-mixing, and the relationship between sentiment and code-mixing. To explore the effectiveness of off-the-shelf Language Technologies (LT) in such situations, we attempt to build a sentiment analyzer for this dataset. Our experiments demonstrate the challenges of developing LT, and therefore effective interventions, for such forums and languages. We provide recommendations for language resources that should be built to address these challenges.

pdf
Too Brittle to Touch: Comparing the Stability of Quantization and Distillation towards Developing Low-Resource MT Models
Harshita Diddee | Sandipan Dandapat | Monojit Choudhury | Tanuja Ganu | Kalika Bali
Proceedings of the Seventh Conference on Machine Translation (WMT)

Leveraging shared learning through Massively Multilingual Models, state-of-the-art Machine Translation (MT) models are often able to adapt to the paucity of data for low-resource languages. However, this performance comes at the cost of significantly bloated models which aren’t practically deployable. Knowledge Distillation is one popular technique for developing competitive lightweight models. In this work, we first evaluate its use in compressing MT models, focusing specifically on languages with extremely limited training data. Through our analysis across 8 languages, we find that the distilled models’ performance varies considerably with priors such as the amount of synthetic data used for distillation, the student architecture, the training hyper-parameters, and the confidence of the teacher models, making distillation a brittle compression mechanism. To mitigate this, we further explore the use of post-training quantization for the compression of these models. Here, we find that while distillation provides gains across some low-resource languages, quantization provides more consistent performance trends for the entire range of languages, especially the lowest-resource languages in our target set.
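
As a point of reference, post-training dynamic quantization of the kind compared above can be applied in a few lines with PyTorch; this is a generic sketch, not the paper’s setup:

```python
# Generic sketch: int8 dynamic quantization as a compression baseline.
import torch

def quantize_mt_model(model: torch.nn.Module) -> torch.nn.Module:
    # Quantizes the weights of all Linear layers to int8; activations are
    # quantized dynamically at inference time. No retraining is required,
    # which is one reason it behaves more predictably than distillation.
    return torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )
```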

pdf
Multi Task Learning For Zero Shot Performance Prediction of Multilingual Models
Kabir Ahuja | Shanu Kumar | Sandipan Dandapat | Monojit Choudhury
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Massively Multilingual Transformer based Language Models have been observed to be surprisingly effective on zero-shot transfer across languages, though the performance varies from language to language depending on the pivot language(s) used for fine-tuning. In this work, we build upon some of the existing techniques for predicting the zero-shot performance on a task, by modeling it as a multi-task learning problem. We jointly train predictive models for different tasks which helps us build more accurate predictors for tasks where we have test data in very few languages to measure the actual performance of the model. Our approach also lends us the ability to perform a much more robust feature selection, and identify a common set of features that influence zero-shot performance across a variety of tasks.
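
A single-task version of this idea can be sketched as a plain regression from language/data features to observed zero-shot accuracy (the features and numbers below are illustrative, and the paper’s multi-task model is more elaborate):

```python
# Sketch of performance prediction: regress zero-shot accuracy on features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative features per (task, language) pair: pretraining data size,
# syntactic distance from the pivot language, subword vocabulary overlap.
X = np.array([[2.1, 0.30, 0.55],
              [0.4, 0.70, 0.20],
              [1.2, 0.45, 0.40],
              [0.1, 0.80, 0.10]])
y = np.array([0.78, 0.52, 0.66, 0.43])  # observed zero-shot accuracies

predictor = GradientBoostingRegressor().fit(X, y)
print(predictor.predict([[0.8, 0.60, 0.25]]))  # estimate for an unseen language
```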

pdf
Vector Space Interpolation for Query Expansion
Deepanway Ghosal | Somak Aditya | Sandipan Dandapat | Monojit Choudhury
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Topic-sensitive query set expansion is an important area of research that aims to improve search results for information retrieval. It is particularly crucial for queries related to sensitive and emerging topics. In this work, we describe a method for query set expansion about emerging topics using vector space interpolation. We use a transformer model called OPTIMUS, which is suitable for vector space manipulation due to its variational autoencoder nature. One of our proposed methods, Dirichlet interpolation, shows promising results for query expansion. Our methods effectively generate new queries about the sensitive topic by incorporating set-level diversity, which is not captured by traditional sentence-level augmentation methods such as paraphrasing or back-translation.

pdf
SyMCoM - Syntactic Measure of Code Mixing A Study Of English-Hindi Code-Mixing
Prashant Kodali | Anmol Goel | Monojit Choudhury | Manish Shrivastava | Ponnurangam Kumaraguru
Findings of the Association for Computational Linguistics: ACL 2022

Code mixing is the linguistic phenomenon where bilingual speakers tend to switch between two or more languages in conversation. Recent work on code-mixing in computational settings has leveraged social media code-mixed texts to train NLP models. To capture the variety of code-mixing within and across corpora, measures based on Language ID (LID) tags, such as CMI, have been proposed. However, the syntactic variety and patterns of code-mixing, and their relationship to computational models’ performance, remain underexplored. In this work, we investigate a collection of English(en)-Hindi(hi) code-mixed datasets through a syntactic lens to propose SyMCoM, an indicator of syntactic variety in code-mixed text with intuitive theoretical bounds. We train a SoTA en-hi PoS tagger (accuracy of 93.4%) to reliably compute PoS tags on a corpus, demonstrate the utility of SyMCoM by applying it to various syntactic categories across a collection of datasets, and compare the datasets using the measure.
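
Reading from the abstract, a SyMCoM-style statistic for one syntactic category can be sketched as follows (a hedged reconstruction, not the authors’ reference implementation):

```python
# Sketch of a SyMCoM-style statistic for a syntactic category (e.g. NOUN):
# compare how often tokens of that category come from each language.
def symcom(tokens, category):
    """tokens: list of (word, pos_tag, lang) triples, lang in {'en', 'hi'}."""
    en = sum(1 for _, pos, lang in tokens if pos == category and lang == "en")
    hi = sum(1 for _, pos, lang in tokens if pos == category and lang == "hi")
    if en + hi == 0:
        return None  # category absent in this sentence
    # Bounded in [-1, 1]: +1 all-English, -1 all-Hindi, 0 perfectly balanced.
    return (en - hi) / (en + hi)
```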

pdf
“Diversity and Uncertainty in Moderation” are the Key to Data Selection for Multilingual Few-shot Transfer
Shanu Kumar | Sandipan Dandapat | Monojit Choudhury
Findings of the Association for Computational Linguistics: NAACL 2022

Few-shot transfer often shows substantial gains over zero-shot transfer (CITATION), which is a practically useful trade-off between fully supervised and unsupervised learning approaches for systems based on multilingual pretrained models. This paper explores various strategies for selecting data for annotation that can result in better few-shot transfer. The proposed approaches rely on multiple measures such as data entropy using an n-gram language model, predictive entropy, and gradient embedding. We propose a loss embedding method for sequence labeling tasks, which induces diversity and uncertainty sampling similar to gradient embedding. The proposed data selection strategies are evaluated and compared for POS tagging, NER, and NLI tasks for up to 20 languages. Our experiments show that the gradient and loss embedding-based strategies consistently outperform random data selection baselines, with gains varying with the initial performance of the zero-shot transfer. Furthermore, the proposed method shows similar trends in improvement even when the model is fine-tuned using a lower proportion of the original task-specific labeled training data for zero-shot transfer.
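
Of the measures listed, predictive entropy is the simplest to illustrate; the sketch below selects the most uncertain unlabeled target-language examples for annotation (the paper’s gradient- and loss-embedding strategies are more involved):

```python
# Sketch: uncertainty sampling via predictive entropy.
import numpy as np

def predictive_entropy(probs):
    # probs: (n_examples, n_classes) model probabilities on unlabeled data.
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def select_for_annotation(probs, k):
    """Return indices of the k highest-entropy (most uncertain) examples."""
    return np.argsort(-predictive_entropy(probs))[:k]
```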

pdf
Multilingual CheckList: Generation and Evaluation
Karthikeyan K | Shaily Bhatt | Pankaj Singh | Somak Aditya | Sandipan Dandapat | Sunayana Sitaram | Monojit Choudhury
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022

Multilingual evaluation benchmarks usually contain limited high-resource languages and do not test models for specific linguistic capabilities. CheckList is a template-based evaluation approach that tests models for specific capabilities. The CheckList template creation process requires native speakers, posing a challenge in scaling to hundreds of languages. In this work, we explore multiple approaches to generate Multilingual CheckLists. We devise an algorithm, the Template Extraction Algorithm (TEA), for automatically extracting target-language CheckList templates from machine-translated instances of source-language templates. We compare the TEA CheckLists with CheckLists created with different levels of human intervention. We further introduce metrics along the dimensions of cost, diversity, utility, and correctness to compare the CheckLists. We thoroughly analyze different approaches to creating CheckLists in Hindi, and further experiment with 9 more languages. We find that TEA followed by human verification is ideal for scaling CheckList-based evaluation to multiple languages, while TEA alone gives a good estimate of model performance. We release the code of TEA and the CheckLists created at aka.ms/multilingualchecklist
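
At a high level, TEA abstracts away the parts of the machine-translated instances that vary across fillers of the same source template; the following is a heavily simplified, hypothetical sketch of that idea, not the published algorithm:

```python
# Simplified sketch of the template-extraction idea: tokens that vary across
# translations of instances of one template are turned back into slots.
from collections import Counter

def extract_template(translated_instances, slot_fillers):
    """translated_instances: tokenized translations of several instances of
    one source template; slot_fillers: translations of the slot values used."""
    filler_set = set(slot_fillers)
    counts = Counter(tok for sent in translated_instances for tok in sent)
    template = []
    for tok in translated_instances[0]:
        if tok in filler_set or counts[tok] < len(translated_instances):
            template.append("{slot}")  # token varies across instances
        else:
            template.append(tok)       # token is part of the fixed template
    return " ".join(template)
```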

pdf bib
Proceedings of the First Workshop on Scaling Up Multilingual Evaluation
Kabir Ahuja | Antonios Anastasopoulos | Barun Patra | Graham Neubig | Monojit Choudhury | Sandipan Dandapat | Sunayana Sitaram | Vishrav Chaudhary
Proceedings of the First Workshop on Scaling Up Multilingual Evaluation

pdf bib
The SUMEval 2022 Shared Task on Performance Prediction of Multilingual Pre-trained Language Models
Kabir Ahuja | Antonios Anastasopoulos | Barun Patra | Graham Neubig | Monojit Choudhury | Sandipan Dandapat | Sunayana Sitaram | Vishrav Chaudhary
Proceedings of the First Workshop on Scaling Up Multilingual Evaluation

pdf
Beyond Static models and test sets: Benchmarking the potential of pre-trained models across tasks and languages
Kabir Ahuja | Sandipan Dandapat | Sunayana Sitaram | Monojit Choudhury
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP

Although recent Massively Multilingual Language Models (MMLMs) like mBERT and XLMR support around 100 languages, most existing multilingual NLP benchmarks provide evaluation data in only a handful of these languages, with little linguistic diversity. We argue that this makes existing practices in multilingual evaluation unreliable and fails to provide a full picture of the performance of MMLMs across the linguistic landscape. We propose that recent work on Performance Prediction for NLP tasks can serve as a potential solution for fixing benchmarking in multilingual NLP, by utilizing features related to data and language typology to estimate the performance of an MMLM on different languages. We compare performance prediction with translating test data in a case study on four different multilingual datasets, and observe that these methods can provide reliable estimates of performance that are often on par with translation-based approaches, without incurring any additional translation or evaluation costs.

2021

pdf
Sample-efficient Linguistic Generalizations through Program Synthesis: Experiments with Phonology Problems
Saujas Vaduguru | Aalok Sathe | Monojit Choudhury | Dipti Sharma
Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology

Neural models excel at extracting statistical patterns from large amounts of data, but struggle to learn patterns or reason about language from only a few examples. In this paper, we ask: Can we learn explicit rules that generalize well from only a few examples? We explore this question using program synthesis. We develop a synthesis model to learn phonology rules as programs in a domain-specific language. We test the ability of our models to generalize from few training examples using our new dataset of problems from the Linguistics Olympiad, a challenging set of tasks that require strong linguistic reasoning ability. In addition to being highly sample-efficient, our approach generates human-readable programs, and allows control over the generalizability of the learnt programs.

pdf
Comparing Grammatical Theories of Code-Mixing
Adithya Pratapa | Monojit Choudhury
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)

Code-mixed text generation systems have found applications in many downstream tasks, including speech recognition, translation and dialogue. A paradigm of these generation systems relies on well-defined grammatical theories of code-mixing, and there is a lack of comparison of these theories. We present a large-scale human evaluation of two popular grammatical theories, Matrix-Embedded Language (ML) and Equivalence Constraint (EC). We compare them against three heuristic-based models and quantitatively demonstrate the effectiveness of the two grammatical theories.

pdf
A Linguistic Annotation Framework to Study Interactions in Multilingual Healthcare Conversational Forums
Ishani Mondal | Kalika Bali | Mohit Jain | Monojit Choudhury | Ashish Sharma | Evans Gitau | Jacki O’Neill | Kagonya Awori | Sarah Gitau
Proceedings of the Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop

In recent years, remote digital healthcare using online chats has gained momentum, especially in the Global South. Though prior work has studied interaction patterns in online (health) forums, such as TalkLife, Reddit and Facebook, there has been limited work in understanding interactions in small, close-knit communities on instant messengers. In this paper, we propose a linguistic annotation framework to facilitate analysis of health-focused WhatsApp groups. The primary aim of the framework is to understand interpersonal relationships among peer supporters, in order to help develop NLP solutions for remote patient care and reduce the burden on overworked healthcare providers. Our framework consists of fine-grained peer support categorization and message-level sentiment tagging. Additionally, due to the prevalence of code-mixing in such groups, we incorporate word-level language annotations. We use the proposed framework to study two WhatsApp groups in Kenya for youth living with HIV, facilitated by a healthcare provider.

pdf
Use of Formal Ethical Reviews in NLP Literature: Historical Trends and Current Practices
Sebastin Santy | Anku Rani | Monojit Choudhury
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

pdf
GCM: A Toolkit for Generating Synthetic Code-mixed Text
Mohd Sanad Zaki Rizvi | Anirudh Srinivasan | Tanuja Ganu | Monojit Choudhury | Sunayana Sitaram
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations

Code-mixing is common in multilingual communities around the world, and processing it is challenging due to the lack of labeled and unlabeled data. We describe a tool that can automatically generate code-mixed data given parallel data in two languages. We implement two linguistic theories of code-mixing, the Equivalence Constraint theory and the Matrix Language theory to generate all possible code-mixed sentences in the language-pair, followed by sampling of the generated data to generate natural code-mixed sentences. The toolkit provides three modes: a batch mode, an interactive library mode and a web-interface to address the needs of researchers, linguists and language experts. The toolkit can be used to generate unlabeled text data for pre-trained models, as well as visualize linguistic theories of code-mixing. We plan to release the toolkit as open source and extend it by adding more implementations of linguistic theories, visualization techniques and better sampling techniques. We expect that the release of this toolkit will help facilitate more research in code-mixing in diverse language pairs.

pdf
BERTologiCoMix: How does Code-Mixing interact with Multilingual BERT?
Sebastin Santy | Anirudh Srinivasan | Monojit Choudhury
Proceedings of the Second Workshop on Domain Adaptation for NLP

Models such as mBERT and XLMR have shown success in solving Code-Mixed NLP tasks even though they were not exposed to such text during pretraining. Code-Mixed NLP models have relied on using synthetically generated data along with naturally occurring data to improve their performance. Finetuning mBERT on such data improves its code-mixed performance, but the benefits of using the different types of Code-Mixed data aren’t clear. In this paper, we study the impact of finetuning with different types of code-mixed data and outline the changes that occur to the model during such finetuning. Our findings suggest that using naturally occurring code-mixed data brings in the best performance improvement after finetuning, and that finetuning with any type of code-mixed text improves the responsivity of its attention heads to code-mixed text inputs.

pdf
On the Universality of Deep Contextual Language Models
Shaily Bhatt | Poonam Goyal | Sandipan Dandapat | Monojit Choudhury | Sunayana Sitaram
Proceedings of the 18th International Conference on Natural Language Processing (ICON)

Deep Contextual Language Models (LMs) like ELMO, BERT, and their successors dominate the landscape of Natural Language Processing due to their ability to scale across multiple tasks rapidly by pre-training a single model, followed by task-specific fine-tuning. Furthermore, multilingual versions of such models like XLM-R and mBERT have given promising results in zero-shot cross-lingual transfer, potentially enabling NLP applications in many under-served and under-resourced languages. Due to this initial success, pre-trained models are being used as ‘Universal Language Models’ as the starting point across diverse tasks, domains, and languages. This work explores the notion of ‘Universality’ by identifying seven dimensions across which a universal model should be able to scale, that is, perform equally well or reasonably well, to be useful across diverse settings. We outline the current theoretical and empirical results that support model performance across these dimensions, along with extensions that may help address some of their current limitations. Through this survey, we lay the foundation for understanding the capabilities and limitations of massive contextual language models and help discern research gaps and directions for future work to make these LMs inclusive and fair to diverse applications, users, and linguistic phenomena.

pdf
Stress Rules from Surface Forms: Experiments with Program Synthesis
Saujas Vaduguru | Partho Sarthi | Monojit Choudhury | Dipti Sharma
Proceedings of the 18th International Conference on Natural Language Processing (ICON)

Learning linguistic generalizations from only a few examples is a challenging task. Recent work has shown that program synthesis – a method to learn rules from data in the form of programs in a domain-specific language – can be used to learn phonological rules in highly data-constrained settings. In this paper, we use the problem of phonological stress placement as a case to study how the design of the domain-specific language influences the generalization ability when using the same learning algorithm. We find that encoding the distinction between consonants and vowels results in much better performance, and providing syllable-level information further improves generalization. Program synthesis, thus, provides a way to investigate how access to explicit linguistic information influences what can be learnt from a small number of examples.

pdf
Analyzing the Effects of Reasoning Types on Cross-Lingual Transfer Performance
Karthikeyan K | Aalok Sathe | Somak Aditya | Monojit Choudhury
Proceedings of the 1st Workshop on Multilingual Representation Learning

Multilingual language models achieve impressive zero-shot accuracies in many languages in complex tasks such as Natural Language Inference (NLI). Examples in NLI (and equivalent complex tasks) often pertain to various types of sub-tasks, requiring different kinds of reasoning. Certain types of reasoning have proven to be more difficult to learn in a monolingual context, and in the crosslingual context, similar observations may shed light on zero-shot transfer efficiency and few-shot sample selection. Hence, to investigate the effects of types of reasoning on transfer performance, we propose a category-annotated multilingual NLI dataset and discuss the challenges to scale monolingual annotations to multiple languages. We statistically observe interesting effects that the confluence of reasoning types and language similarities have on transfer performance.

2020

pdf
GLUECoS: An Evaluation Benchmark for Code-Switched NLP
Simran Khanuja | Sandipan Dandapat | Anirudh Srinivasan | Sunayana Sitaram | Monojit Choudhury
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Code-switching is the use of more than one language in the same conversation or utterance. Recently, multilingual contextual embedding models, trained on multiple monolingual corpora, have shown promising results on cross-lingual and multilingual tasks. We present an evaluation benchmark, GLUECoS, for code-switched languages, that spans several NLP tasks in English-Hindi and English-Spanish. Specifically, our evaluation benchmark includes Language Identification from text, POS tagging, Named Entity Recognition, Sentiment Analysis, Question Answering and a new task for code-switching, Natural Language Inference. We present results on all these tasks using cross-lingual word embedding models and multilingual models. In addition, we fine-tune multilingual models on artificially generated code-switched data. Although multilingual models perform significantly better than cross-lingual models, our results show that in most tasks, across both language pairs, multilingual models fine-tuned on code-switched data perform best, showing that multilingual models can be further optimized for code-switching tasks.

pdf
The State and Fate of Linguistic Diversity and Inclusion in the NLP World
Pratik Joshi | Sebastin Santy | Amar Budhiraja | Kalika Bali | Monojit Choudhury
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Language technologies contribute to promoting multilingualism and linguistic diversity around the world. However, only a very small number of the over 7000 languages of the world are represented in the rapidly evolving language technologies and applications. In this paper we look at the relation between the types of languages, resources, and their representation in NLP conferences to understand the trajectory that different languages have followed over time. Our quantitative investigation underlines the disparity between languages, especially in terms of their resources, and calls into question the “language agnostic” status of current models and systems. Through this paper, we attempt to convince the ACL community to prioritise the resolution of the predicaments highlighted here, so that no language is left behind.

pdf
TaxiNLI: Taking a Ride up the NLU Hill
Pratik Joshi | Somak Aditya | Aalok Sathe | Monojit Choudhury
Proceedings of the 24th Conference on Computational Natural Language Learning

Pre-trained Transformer-based neural architectures have consistently achieved state-of-the-art performance in the Natural Language Inference (NLI) task. Since NLI examples encompass a variety of linguistic, logical, and reasoning phenomena, it remains unclear as to which specific concepts are learnt by the trained systems and where they can achieve strong generalization. To investigate this question, we propose a taxonomic hierarchy of categories that are relevant for the NLI task. We introduce TaxiNLI, a new dataset, that has 10k examples from the MNLI dataset with these taxonomic labels. Through various experiments on TaxiNLI, we observe that whereas for certain taxonomic categories SOTA neural models have achieved near perfect accuracies—a large jump over the previous models—some categories still remain difficult. Our work adds to the growing body of literature that shows the gaps in the current NLI systems and datasets through a systematic presentation and analysis of reasoning categories.

pdf bib
Proceedings of the 4th Workshop on Computational Approaches to Code Switching
Thamar Solorio | Monojit Choudhury | Kalika Bali | Sunayana Sitaram | Amitava Das | Mona Diab
Proceedings of the 4th Workshop on Computational Approaches to Code Switching

pdf bib
A New Dataset for Natural Language Inference from Code-mixed Conversations
Simran Khanuja | Sandipan Dandapat | Sunayana Sitaram | Monojit Choudhury
Proceedings of the 4th Workshop on Computational Approaches to Code Switching

Natural Language Inference (NLI) is the task of inferring the logical relationship, typically entailment or contradiction, between a premise and hypothesis. Code-mixing is the use of more than one language in the same conversation or utterance, and is prevalent in multilingual communities all over the world. In this paper, we present the first dataset for code-mixed NLI, in which both the premises and hypotheses are in code-mixed Hindi-English. We use data from Hindi movies (Bollywood) as premises, and crowd-source hypotheses from Hindi-English bilinguals. We conduct a pilot annotation study and describe the final annotation protocol based on observations from the pilot. Currently, the data collected consists of 400 premises in the form of code-mixed conversation snippets and 2240 code-mixed hypotheses. We conduct an extensive analysis to infer the linguistic phenomena commonly observed in the dataset obtained. We evaluate the dataset using a standard mBERT-based pipeline for NLI and report results.

pdf
Understanding Script-Mixing: A Case Study of Hindi-English Bilingual Twitter Users
Abhishek Srivastava | Kalika Bali | Monojit Choudhury
Proceedings of the 4th Workshop on Computational Approaches to Code Switching

In a multi-lingual and multi-script society such as India, many users resort to code-mixing while typing on social media. While code-mixing has received a lot of attention in the past few years, it has mostly been studied within a single-script scenario. In this work, we present a case study of Hindi-English bilingual Twitter users while considering the nuances that come with the intermixing of different scripts. We present a concise analysis of how scripts and languages interact in communities and cultures where code-mixing is rampant and offer certain insights into the findings. Our analysis shows that both intra-sentential and inter-sentential script-mixing are present on Twitter and show different behavior in different contexts. Examples suggest that script can be employed as a tool for emphasizing certain phrases within a sentence or disambiguating the meaning of a word. Script choice can also be an indicator of whether a word is borrowed or not. We present our analysis along with examples that bring out the nuances of the different cases.

pdf
Code-mixed parse trees and how to find them
Anirudh Srinivasan | Sandipan Dandapat | Monojit Choudhury
Proceedings of the 4th Workshop on Computational Approaches to Code Switching

In this paper, we explore methods of obtaining parse trees of code-mixed sentences and analyse the obtained trees. Existing work has shown that linguistic theories can be used to generate code-mixed sentences from a set of parallel sentences. We build upon this work, using one of these theories, the Equivalence-Constraint theory, to obtain the parse trees of synthetically generated code-mixed sentences and evaluate them with a neural constituency parser. We highlight the lack of a dataset of non-synthetic code-mixed constituency parse trees and how this makes our evaluation difficult. To complete our evaluation, we convert a code-mixed dependency parse tree set into “pseudo constituency trees” and find that a parser trained on synthetically generated trees is able to decently parse these as well.

pdf
Crowdsourcing Speech Data for Low-Resource Languages from Low-Income Workers
Basil Abraham | Danish Goel | Divya Siddarth | Kalika Bali | Manu Chopra | Monojit Choudhury | Pratik Joshi | Preethi Jyoti | Sunayana Sitaram | Vivek Seshadri
Proceedings of the Twelfth Language Resources and Evaluation Conference

Voice-based technologies are essential to cater to the hundreds of millions of new smartphone users. However, most of the languages spoken by these new users have little to no labelled speech data. Unfortunately, collecting labelled speech data in any language is an expensive and resource-intensive task. Moreover, existing platforms typically collect speech data only from urban speakers familiar with digital technology whose dialects are often very different from low-income users. In this paper, we explore the possibility of collecting labelled speech data directly from low-income workers. In addition to providing diversity to the speech dataset, we believe this approach can also provide valuable supplemental earning opportunities to these communities. To this end, we conducted a study where we collected labelled speech data in the Marathi language from three different user groups: low-income rural users, low-income urban users, and university students. Overall, we collected 109 hours of data from 36 participants. Our results show that the data collected from low-income participants is of comparable quality to the data collected from university students (who are typically employed to do this work) and that crowdsourcing speech data from low-income rural and urban workers is a viable method of gathering speech data.

2019

pdf
Unsung Challenges of Building and Deploying Language Technologies for Low Resource Language Communities
Pratik Joshi | Christian Barnes | Sebastin Santy | Simran Khanuja | Sanket Shah | Anirudh Srinivasan | Satwik Bhattamishra | Sunayana Sitaram | Monojit Choudhury | Kalika Bali
Proceedings of the 16th International Conference on Natural Language Processing

In this paper, we examine and analyze the challenges associated with developing and introducing language technologies to low-resource language communities. In doing so, we bring to light the successes and failures of past work in this area, the challenges currently being faced, and what has been achieved so far. Throughout this paper, we take a problem-facing approach and describe the essential factors on which the success of such technologies hinges. We present the various aspects in a manner that clarifies and lays out the different tasks involved, which can aid organizations looking to make an impact in this area. We take the example of Gondi, an extremely low-resource Indian language, to reinforce and complement our discussion.

bib
Processing and Understanding Mixed Language Data
Monojit Choudhury | Anirudh Srinivasan | Sandipan Dandapat
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts

Multilingual communities exhibit code-mixing, that is, mixing of two or more socially stable languages in a single conversation, sometimes even in a single utterance. This phenomenon has been widely studied by linguists and interaction scientists in the spoken language of such communities. However, with the prevalence of social media and other informal interactive platforms, code-switching is now also ubiquitously observed in user-generated text. As multilingual communities are more the norm than the exception from a global perspective, it is essential that code-switched text and speech are adequately handled by language technologies and NUIs. Code-mixing is extremely prevalent in all multilingual societies. Current studies have shown that as much as 20% of user-generated content from some geographies, like South Asia, parts of Europe, and Singapore, is code-mixed. Thus, it is very important to handle code-mixed content in NLP systems and applications for these geographies. In the past 5 years, there has been active interest in computational models for code-mixing, with substantive research outcomes in terms of publications, datasets and systems. However, it is not easy to find a single point of access for a complete and coherent overview of the research. This tutorial aims to fill this gap and provide new researchers in the area with a foundation in both the linguistic and computational aspects of code-mixing. We hope that this then becomes a starting point for those who wish to pursue research, design, development and deployment of code-mixed systems in multilingual societies.

pdf
INMT: Interactive Neural Machine Translation Prediction
Sebastin Santy | Sandipan Dandapat | Monojit Choudhury | Kalika Bali
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations

In this paper, we demonstrate an Interactive Machine Translation interface that assists human translators with on-the-fly hints and suggestions. This makes the end-to-end translation process faster and more efficient, and yields high-quality translations. We augment the OpenNMT backend with a mechanism to accept user input and generate translations conditioned on it.
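
Below is a minimal, runnable sketch of the core mechanism behind such interactive suggestions: prefix-constrained greedy decoding, where the decoder is forced to follow the tokens the translator has already accepted and then continues on its own. The toy bigram table stands in for one decoder step of a real NMT model; the names and values here are illustrative assumptions, not the paper's actual OpenNMT integration.

    # Hypothetical stand-in for one decoder step of a trained NMT model:
    # maps the previous target token to a next-token distribution.
    TOY_MODEL = {
        "<s>": {"the": 0.6, "a": 0.4},
        "the": {"cat": 0.7, "dog": 0.3},
        "cat": {"sleeps": 0.8, "</s>": 0.2},
        "sleeps": {"</s>": 1.0},
    }

    def suggest_completion(accepted_prefix, max_len=10):
        """Force-decode the translator's accepted prefix, then continue greedily."""
        hyp = ["<s>"] + list(accepted_prefix)
        while len(hyp) < max_len:
            dist = TOY_MODEL.get(hyp[-1], {})
            if not dist:
                break
            tok = max(dist, key=dist.get)   # greedy choice; beam search also works
            if tok == "</s>":
                break
            hyp.append(tok)
        return hyp[1 + len(accepted_prefix):]  # show only the unseen suffix

    print(suggest_completion(["the"]))  # -> ['cat', 'sleeps']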

2018

pdf bib
Phone Merging For Code-Switched Speech Recognition
Sunit Sivasankaran | Brij Mohan Lal Srivastava | Sunayana Sitaram | Kalika Bali | Monojit Choudhury
Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching

Speakers in multilingual communities often switch between or mix multiple languages in the same conversation. Automatic Speech Recognition (ASR) of code-switched speech faces many challenges, including the influence of phones of different languages on each other. This paper shows evidence that phone sharing between languages improves Acoustic Model performance for Hindi-English code-switched speech. We compare a baseline system built with separate phones for Hindi and English against systems in which the phones were manually merged based on linguistic knowledge. Encouraged by the improved ASR performance after manually merging the phones, we further investigate multiple data-driven methods to identify phones to be merged across the languages. We present a detailed analysis of automatic phone merging in this language pair and its impact on individual phone accuracies and WER. Though the best performance gain of 1.2% WER was observed with manually merged phones, we show experimentally that the manual phone merge is not optimal.
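
For reference, WER figures like the one quoted above are computed as the word-level edit distance between the reference transcript and the ASR hypothesis, normalised by reference length. A standard dynamic-programming sketch follows; the example strings are made up.

    def wer(reference, hypothesis):
        """Word error rate: word-level edit distance / reference length."""
        r, h = reference.split(), hypothesis.split()
        d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            d[i][0] = i          # cost of deleting all reference words
        for j in range(len(h) + 1):
            d[0][j] = j          # cost of inserting all hypothesis words
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                sub = d[i-1][j-1] + (r[i-1] != h[j-1])
                d[i][j] = min(sub, d[i-1][j] + 1, d[i][j-1] + 1)
        return d[len(r)][len(h)] / len(r)

    print(wer("mera naam kya hai", "mera name kya hai"))  # 0.25 (one substitution)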

pdf
Accommodation of Conversational Code-Choice
Anshul Bawa | Monojit Choudhury | Kalika Bali
Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching

Bilingual speakers often freely mix languages. However, in such bilingual conversations, are the language choices of the speakers coordinated? How much does one speaker’s choice of language affect other speakers? In this paper, we formulate code-choice as a linguistic style, and show that speakers are indeed sensitive to and accommodating of each other’s code-choice. We find that the saliency or markedness of a language in context directly affects the degree of accommodation observed. More importantly, we discover that accommodation of code-choices persists over several conversational turns. We also propose an alternative interpretation of conversational accommodation as a retrieval problem, and show that the differences in accommodation characteristics of code-choices are based on their markedness in context.

pdf
User Perception of Code-Switching Dialog Systems
Anshul Bawa | Monojit Choudhury | Kalika Bali
Proceedings of the 15th International Conference on Natural Language Processing

pdf
An Integrated Representation of Linguistic and Social Functions of Code-Switching
Silvana Hartmann | Monojit Choudhury | Kalika Bali
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf
Discovering Canonical Indian English Accents: A Crowdsourcing-based Approach
Sunayana Sitaram | Varun Manjunath | Varun Bharadwaj | Monojit Choudhury | Kalika Bali | Michael Tjalve
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf
Word Embeddings for Code-Mixed Language Processing
Adithya Pratapa | Monojit Choudhury | Sunayana Sitaram
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We compare three existing bilingual word embedding approaches, and a novel approach of training skip-grams on synthetic code-mixed text generated through linguistic models of code-mixing, on two tasks: sentiment analysis and POS tagging for code-mixed text. Our results show that while CVM- and CCA-based embeddings perform as well as the proposed embedding technique on semantic and syntactic tasks respectively, the proposed approach provides the best performance on both tasks overall. Thus, this study demonstrates that existing bilingual embedding techniques are not ideal for code-mixed text processing and that there is a need to learn multilingual word embeddings from code-mixed text.
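
A minimal sketch of the proposed approach, using gensim's Word2Vec (4.x API) with sg=1 to train skip-gram embeddings directly on code-mixed sentences. The tiny Hindi-English corpus below is a made-up stand-in for the linguistically generated synthetic CM text used in the paper.

    from gensim.models import Word2Vec

    # Toy code-mixed sentences; the paper trains on synthetic CM text
    # generated via linguistic models of code-mixing.
    synthetic_cm_corpus = [
        "mujhe yeh movie bahut achhi lagi".split(),
        "this film was bahut hi boring".split(),
        "weekend par hum shopping karenge".split(),
    ]

    model = Word2Vec(
        sentences=synthetic_cm_corpus,
        vector_size=50,   # embedding dimension
        window=3,
        min_count=1,
        sg=1,             # 1 = skip-gram, 0 = CBOW
    )
    print(model.wv["movie"].shape)  # (50,)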

pdf
Language Modeling for Code-Mixing: The Role of Linguistic Theory based Synthetic Data
Adithya Pratapa | Gayatri Bhat | Monojit Choudhury | Sunayana Sitaram | Sandipan Dandapat | Kalika Bali
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Training language models for code-mixed (CM) language is known to be a difficult problem because of a lack of data, compounded by the increased confusability due to the presence of more than one language. We present a computational technique for creating grammatically valid artificial CM data based on the Equivalence Constraint Theory. We show that when training examples are sampled appropriately from this synthetic data and presented in a certain order (i.e., a training curriculum) along with monolingual and real CM data, they can significantly reduce the perplexity of an RNN-based language model. We also show that randomly generated CM data does not help in decreasing the perplexity of the LMs.
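
The sketch below illustrates what a training curriculum means in this setting: language-model batches are drawn from monolingual, synthetic CM, and real CM data in a fixed order rather than uniformly at random. The stage boundaries (30%/80%) are illustrative assumptions, not the schedule tuned in the paper.

    def curriculum(mono, synthetic_cm, real_cm, steps):
        """Yield one training-batch source per step: monolingual first,
        then synthetic CM, with the scarce real CM data saved for last."""
        for t in range(steps):
            frac = t / steps
            if frac < 0.3:            # stage 1: monolingual grounding
                yield mono[t % len(mono)]
            elif frac < 0.8:          # stage 2: grammatically valid synthetic CM
                yield synthetic_cm[t % len(synthetic_cm)]
            else:                     # stage 3: real CM data
                yield real_cm[t % len(real_cm)]

    print(list(curriculum(["m1", "m2"], ["s1", "s2", "s3"], ["r1"], steps=10)))
    # ['m1', 'm2', 'm1', 's1', 's2', 's3', 's1', 's2', 'r1', 'r1']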

2017

pdf
All that is English may be Hindi: Enhancing language identification through automatic ranking of the likeliness of word borrowing in social media
Jasabanta Patro | Bidisha Samanta | Saurabh Singh | Abhipsa Basu | Prithwish Mukherjee | Monojit Choudhury | Animesh Mukherjee
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

In this paper, we present a set of computational methods to identify the likeliness of a word being borrowed, based on signals from social media. In terms of Spearman’s correlation values, our methods perform more than two times better (∼0.62) at predicting borrowing likeliness than the best-performing baseline (∼0.26) reported in the literature. Based on this likeliness estimate, we asked annotators to re-annotate the language tags of foreign words in predominantly native contexts. In 88% of cases the annotators felt that the foreign-language tag should be replaced by the native-language tag, indicating substantial scope for improving automatic language identification systems.
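
The Spearman correlation quoted above compares the predicted ranking of words by borrowing likeliness against the ground-truth ranking. A small illustration with made-up scores, using scipy:

    from scipy.stats import spearmanr

    ground_truth = [0.9, 0.7, 0.5, 0.3, 0.1]     # annotated borrowing likeliness
    predicted    = [0.8, 0.75, 0.4, 0.35, 0.05]  # model scores (toy values)

    rho, pval = spearmanr(ground_truth, predicted)
    print(round(rho, 2))  # 1.0 here, since the two rankings agree perfectly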

pdf
Estimating Code-Switching on Twitter with a Novel Generalized Word-Level Language Detection Technique
Shruti Rijhwani | Royal Sequiera | Monojit Choudhury | Kalika Bali | Chandra Shekhar Maddila
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Word-level language detection is necessary for analyzing code-switched text, where multiple languages could be mixed within a sentence. Existing models are restricted to code-switching between two specific languages and fail in real-world scenarios, as text input rarely comes with a priori information on the languages used. We present a novel unsupervised word-level language detection technique for code-switched text covering an arbitrarily large number of languages, which does not require any manually annotated training data. Our experiments with tweets in seven languages show a 74% relative error reduction in word-level labeling with respect to competitive baselines. We then use this system to conduct a large-scale quantitative analysis of code-switching patterns on Twitter, both global and region-specific, over 58M tweets.
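
For clarity, "relative error reduction" here is the fraction of the baseline's word-level labelling errors that the new system removes; the error rates in the example below are made up.

    def relative_error_reduction(baseline_err, system_err):
        """Fraction of baseline errors eliminated by the new system."""
        return (baseline_err - system_err) / baseline_err

    # e.g. a baseline that mislabels 20% of words vs. a system at 5.2%:
    print(relative_error_reduction(0.20, 0.052))  # 0.74 -> 74% relative reduction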

pdf
Curriculum Design for Code-switching: Experiments with Language Identification and Language Modeling with Deep Neural Networks
Monojit Choudhury | Kalika Bali | Sunayana Sitaram | Ashutosh Baheti
Proceedings of the 14th International Conference on Natural Language Processing (ICON-2017)

pdf
Quantitative Characterization of Code Switching Patterns in Complex Multi-Party Conversations: A Case Study on Hindi Movie Scripts
Adithya Pratapa | Monojit Choudhury
Proceedings of the 14th International Conference on Natural Language Processing (ICON-2017)

2016

pdf
Understanding Language Preference for Expression of Opinion and Sentiment: What do Hindi-English Speakers do on Twitter?
Koustav Rudra | Shruti Rijhwani | Rafiya Begum | Kalika Bali | Monojit Choudhury | Niloy Ganguly
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
Functions of Code-Switching in Tweets: An Annotation Framework and Some Initial Experiments
Rafiya Begum | Kalika Bali | Monojit Choudhury | Koustav Rudra | Niloy Ganguly
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Code-Switching (CS) between two languages is extremely common in communities with societal multilingualism, where speakers switch between two or more languages when interacting with each other. CS has been extensively studied in spoken language by linguists for several decades, but with the popularity of social media and less formal computer-mediated communication, we now see a sharp rise in the use of CS in text. This poses interesting challenges and creates a need for computational processing of such code-switched data. As with any computational linguistic analysis and natural language processing tools and applications, we need annotated data for understanding, processing, and generating code-switched language. In this study, we focus on CS between English and Hindi in tweets extracted from the Twitter stream of Hindi-English bilinguals. We present an annotation scheme for annotating the pragmatic functions of CS in Hindi-English (Hi-En) code-switched tweets based on a linguistic analysis, along with some initial experiments.

2015

pdf
POS Tagging of Hindi-English Code Mixed Text from Social Media: Some Machine Learning Experiments
Royal Sequiera | Monojit Choudhury | Kalika Bali
Proceedings of the 12th International Conference on Natural Language Processing

2014

pdf
Word-level Language Identification using CRF: Code-switching Shared Task Report of MSR India System
Gokul Chittaranjan | Yogarshi Vyas | Kalika Bali | Monojit Choudhury
Proceedings of the First Workshop on Computational Approaches to Code Switching

pdf
“I am borrowing ya mixing?” An Analysis of English-Hindi Code Mixing in Facebook
Kalika Bali | Jatin Sharma | Monojit Choudhury | Yogarshi Vyas
Proceedings of the First Workshop on Computational Approaches to Code Switching

pdf
Hierarchical Recursive Tagset for Annotating Cooking Recipes
Sharath Reddy Gunamgari | Sandipan Dandapat | Monojit Choudhury
Proceedings of the 11th International Conference on Natural Language Processing

pdf
“ye word kis lang ka hai bhai?” Testing the Limits of Word level Language Identification
Spandana Gella | Kalika Bali | Monojit Choudhury
Proceedings of the 11th International Conference on Natural Language Processing

pdf
Automatic Discovery of Adposition Typology
Rishiraj Saha Roy | Rahul Katare | Niloy Ganguly | Monojit Choudhury
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

pdf
POS Tagging of English-Hindi Code-Mixed Social Media Content
Yogarshi Vyas | Spandana Gella | Jatin Sharma | Kalika Bali | Monojit Choudhury
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

pdf
Crowd Prefers the Middle Path: A New IAA Metric for Crowdsourcing Reveals Turker Biases in Query Segmentation
Rohan Ramanath | Monojit Choudhury | Kalika Bali | Rishiraj Saha Roy
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Entailment: An Effective Metric for Comparing and Evaluating Hierarchical and Non-hierarchical Annotation Schemes
Rohan Ramanath | Monojit Choudhury | Kalika Bali
Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse

2012

pdf
An Empirical Study of the Occurrence and Co-Occurrence of Named Entities in Natural Language Corpora
K Saravanan | Monojit Choudhury | Raghavendra Udupa | A Kumaran
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Named Entities (NEs) that occur in natural language text are important, especially with the advent of social media, and they play a critical role in the development of many natural language technologies. In this paper, we systematically analyze the patterns of occurrence and co-occurrence of NEs in standard large English news corpora, providing valuable insight into the corpora and paving the way for the development of technologies that rely critically on handling NEs. We use two distinct approaches: standard statistical analysis, which measures and reports the occurrence patterns of NEs in terms of frequency, growth, etc., and a complex-networks-based analysis, which measures the co-occurrence patterns in terms of connectivity, degree distribution, the small-world phenomenon, etc. Our analysis indicates that: (i) NEs form an open set in corpora and grow linearly; (ii) there are kernel and peripheral NEs, with the large periphery occurring rarely; and (iii) there is strong evidence of the small-world phenomenon. Our findings suggest effective ways to construct NE lexicons to aid the efficient development of several natural language technologies.
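
A sketch of the complex-networks side of this kind of analysis: build an NE co-occurrence graph (nodes are named entities; an edge links two NEs that co-occur in a document) and inspect its degrees and clustering, the quantities behind the small-world observation. The toy documents and the networkx-based recipe are illustrative assumptions, not the paper's exact pipeline.

    import networkx as nx
    from itertools import combinations

    docs = [  # toy documents, each reduced to its list of NEs
        ["Obama", "Washington", "Congress"],
        ["Obama", "Merkel", "Berlin"],
        ["Merkel", "Berlin", "EU"],
    ]

    G = nx.Graph()
    for entities in docs:
        G.add_edges_from(combinations(set(entities), 2))  # co-occurrence edges

    print(sorted(dict(G.degree()).items()))  # degree of each NE
    print(nx.average_clustering(G))          # high clustering hints at small-world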

pdf
Mining Hindi-English Transliteration Pairs from Online Hindi Lyrics
Kanika Gupta | Monojit Choudhury | Kalika Bali
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This paper describes a method to mine Hindi-English transliteration pairs from online Hindi song lyrics. The technique is based on the observation that lyrics are transliterated word by word, maintaining the precise word order. The mining task is nevertheless challenging because the Hindi lyrics and their transliterations are usually available from different, often unrelated, websites. Therefore, it is a non-trivial task to match the Hindi lyrics to their transliterated counterparts. Moreover, there are various types of noise in lyrics data that need to be appropriately handled before songs can be aligned at the word level. The mined dataset of 30,823 unique Hindi-English transliteration pairs, with an accuracy of more than 92%, is publicly available. Although the present work reports mining of Hindi-English word pairs, the same technique can easily be adapted to other languages for which song lyrics are available online in native and Roman scripts.
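
A minimal sketch of the key observation: once a native-script lyric line is matched to its Roman-script counterpart, the word-by-word, order-preserving nature of the transliteration means pairs can be read off by zipping the two lines. The example line is made up, and this skips the noise handling and cross-site line matching the paper describes.

    native_line = "मेरा दिल ये पुकारे आजा".split()
    roman_line  = "mera dil ye pukare aaja".split()

    # Keep only cleanly aligned lines; mismatched lengths signal noise.
    if len(native_line) == len(roman_line):
        for hindi, roman in zip(native_line, roman_line):
            print(hindi, "->", roman)   # mined transliteration pair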

pdf bib
Proceedings of the Second Workshop on Advances in Text Input Methods
Kalika Bali | Monojit Choudhury | Yoh Okuno
Proceedings of the Second Workshop on Advances in Text Input Methods

2011

pdf bib
Challenges in Designing Input Method Editors for Indian Languages: The Role of Word-Origin and Context
Umair Z Ahmed | Kalika Bali | Monojit Choudhury | Sowmya VB
Proceedings of the Workshop on Advances in Text Input Methods (WTIM 2011)

2010

pdf
Resource Creation for Training and Testing of Transliteration Systems for Indian Languages
Sowmya V. B. | Monojit Choudhury | Kalika Bali | Tirthankar Dasgupta | Anupam Basu
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Machine transliteration is used in a number of NLP applications, ranging from machine translation and information retrieval to input mechanisms for non-Roman scripts. Many popular Input Method Editors for Indian languages, such as Baraha, Akshara, and Quillpad, use back-transliteration as a mechanism to allow users to input text in a number of Indian languages. The lack of a standard dataset to evaluate these systems makes it difficult to make any meaningful comparisons of their relative accuracies. In this paper, we describe the methodology for the creation of a dataset of ~2500 transliterated sentence pairs each in Bangla, Hindi, and Telugu. The data was collected across three different modes from a total of 60 users. We believe that this dataset will prove useful not only for the evaluation and training of back-transliteration systems but also for the linguistic analysis of the process of transliterating Indian languages from native scripts to the Roman script.

pdf
Global topology of word co-occurrence networks: Beyond the two-regime power-law
Monojit Choudhury | Diptesh Chatterjee | Animesh Mukherjee
Coling 2010: Posters

2009

pdf
Large-Coverage Root Lexicon Extraction for Hindi
Cohan Sujay Carlos | Monojit Choudhury | Sandipan Dandapat
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)

pdf
Discovering Global Patterns in Linguistic Networks through Spectral Analysis: A Case Study of the Consonant Inventories
Animesh Mukherjee | Monojit Choudhury | Ravi Kannan
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)

pdf
Language Diversity across the Consonant Inventories: A Study in the Framework of Complex Networks
Monojit Choudhury | Animesh Mukherjee | Anupam Basu | Niloy Ganguly | Ashish Garg | Vaibhav Jalan
Proceedings of the EACL 2009 Workshop on Cognitive Aspects of Computational Language Acquisition

pdf bib
Complex Linguistic Annotation – No Easy Way Out! A Case from Bangla and Hindi POS Labeling Tasks
Sandipan Dandapat | Priyanka Biswas | Monojit Choudhury | Kalika Bali
Proceedings of the Third Linguistic Annotation Workshop (LAW III)

pdf bib
Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4)
Monojit Choudhury | Samer Hassan | Animesh Mukherjee | Smaranda Muresan
Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4)

pdf
Syntax is from Mars while Semantics from Venus! Insights from Spectral Analysis of Distributional Similarity Networks
Chris Biemann | Monojit Choudhury | Animesh Mukherjee
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

2008

pdf bib
Coling 2008: Proceedings of the 3rd Textgraphs workshop on Graph-based Algorithms for Natural Language Processing
Irina Matveeva | Chris Biemann | Monojit Choudhury | Mona Diab
Coling 2008: Proceedings of the 3rd Textgraphs workshop on Graph-based Algorithms for Natural Language Processing

pdf
Social Network Inspired Models of NLP and Language Evolution
Monojit Choudhury | Animesh Mukherjee | Niloy Ganguly
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-II

pdf
Invited Talk: Breaking the Zipfian Barrier of NLP
Monojit Choudhury
Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages

pdf
Modeling the Structure and Dynamics of the Consonant Inventories: A Complex Network Approach
Animesh Mukherjee | Monojit Choudhury | Anupam Basu | Niloy Ganguly
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

pdf
Unsupervised Parts-of-Speech Induction for Bengali
Joydeep Nath | Monojit Choudhury | Animesh Mukherjee | Christian Biemann | Niloy Ganguly
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

We present a study of the word interaction networks of Bengali in the framework of complex networks. The topological properties of these networks reveal interesting insights into the morpho-syntax of the language, while clustering helps in the induction of natural word classes, leading to a principled way of designing POS tagsets. We compare different network construction techniques and clustering algorithms based on the cohesiveness of the word clusters. Cohesiveness is measured against two gold-standard tagsets by means of the novel metric of tag-entropy. The approach presented here is a generic one that can easily be extended to any language.
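
One plausible reading of a tag-entropy measure, sketched below: for each induced cluster, take the Shannon entropy of the gold-standard tags of its words (0 for a pure cluster) and average over clusters weighted by size. The exact definition in the paper may differ, and the clusters shown are toy examples.

    from math import log2
    from collections import Counter

    def tag_entropy(clusters):
        """clusters: list of lists of gold POS tags, one list per cluster."""
        total = sum(len(c) for c in clusters)
        score = 0.0
        for c in clusters:
            counts = Counter(c)
            h = -sum((n / len(c)) * log2(n / len(c)) for n in counts.values())
            score += (len(c) / total) * h   # size-weighted average entropy
        return score

    print(tag_entropy([["NN", "NN", "NN"], ["VB", "VB", "NN"]]))  # ~0.459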

pdf
A Common Parts-of-Speech Tagset Framework for Indian Languages
Baskaran Sankaran | Kalika Bali | Monojit Choudhury | Tanmoy Bhattacharya | Pushpak Bhattacharyya | Girish Nath Jha | S. Rajendran | K. Saravanan | L. Sobha | K.V. Subbarao
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

We present a universal Parts-of-Speech (POS) tagset framework covering most of the Indian languages (ILs), following a hierarchical and decomposable tagset schema. In spite of their significant numbers of speakers, most ILs lack a workable POS tagset and tagger, which are fundamental building blocks for NLP research. Existing IL POS tagsets are often designed for a specific language; the few that have been designed for multiple languages cover only shallow linguistic features, ignoring linguistic richness and idiosyncrasies. The new framework proposed here addresses these deficiencies in an efficient and principled manner. We follow a hierarchical schema similar to that of EAGLES, which enables the framework to be flexible enough to capture the rich features of a language or language family, even while capturing shared linguistic structures in a methodical way. The proposed common framework further facilitates the sharing and reusability of scarce resources in these languages and ensures cross-linguistic compatibility.

2007

pdf
Redundancy Ratio: An Invariant Property of the Consonant Inventories of the World’s Languages
Animesh Mukherjee | Monojit Choudhury | Anupam Basu | Niloy Ganguly
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

pdf
How Difficult is it to Develop a Perfect Spell-checker? A Cross-Linguistic Analysis through Complex Network Approach
Monojit Choudhury | Markose Thomas | Animesh Mukherjee | Anupam Basu | Niloy Ganguly
Proceedings of the Second Workshop on TextGraphs: Graph-Based Algorithms for Natural Language Processing

pdf
Evolution, Optimization, and Language Change: The Case of Bengali Verb Inflections
Monojit Choudhury | Vaibhav Jalan | Sudeshna Sarkar | Anupam Basu
Proceedings of Ninth Meeting of the ACL Special Interest Group in Computational Morphology and Phonology

pdf
Emergence of Community Structures in Vowel Inventories: An Analysis Based on Complex Networks
Animesh Mukherjee | Monojit Choudhury | Anupam Basu | Niloy Ganguly
Proceedings of Ninth Meeting of the ACL Special Interest Group in Computational Morphology and Phonology

2006

pdf
Analysis and Synthesis of the Distribution of Consonants over Languages: A Complex Network Approach
Monojit Choudhury | Animesh Mukherjee | Anupam Basu | Niloy Ganguly
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

2004

pdf
A Diachronic Approach for Schwa Deletion in Indo Aryan Languages
Monojit Choudhury | Anupam Basu | Sudeshna Sarkar
Proceedings of the 7th Meeting of the ACL Special Interest Group in Computational Phonology: Current Themes in Computational Phonology and Morphology
