Sudeshna Sarkar


2022

pdf
Does Meta-learning Help mBERT for Few-shot Question Generation in a Cross-lingual Transfer Setting for Indic Languages?
Aniruddha Roy | Rupak Kumar Thakur | Isha Sharma | Ashim Gupta | Amrith Krishna | Sudeshna Sarkar | Pawan Goyal
Proceedings of the 29th International Conference on Computational Linguistics

Few-shot Question Generation (QG) is an important and challenging problem in the Natural Language Generation (NLG) domain. Multilingual BERT (mBERT) has been successfully used in various Natural Language Understanding (NLU) applications. However, the question of how to utilize mBERT for few-shot QG, possibly with cross-lingual transfer, remains. In this paper, we explore how mBERT performs in few-shot QG with cross-lingual transfer and whether applying meta-learning on mBERT further improves the results. In our setting, we consider mBERT as the base model and fine-tune it using a sequence-to-sequence language modeling framework in a cross-lingual setting. Further, we apply the model-agnostic meta-learning (MAML) approach to our base model. We evaluate our model for two low-resource Indian languages, Bengali and Telugu, using the TyDi QA dataset. The proposed approach consistently improves the performance of the base model in few-shot settings and even works better than some heavily parameterized models. Human evaluation also confirms the effectiveness of our approach.
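
To make the meta-learning setup concrete, the sketch below shows a first-order MAML-style loop over per-language few-shot episodes. It is only an illustration of the general technique under stated assumptions: the tiny linear model, random episodes, and MSE loss are hypothetical stand-ins for the mBERT-based QG model, the TyDi QA episodes, and the sequence-to-sequence LM loss used in the paper.

```python
# A first-order MAML-style loop over per-language few-shot "episodes".
# The tiny linear model, random episodes and MSE loss are hypothetical
# stand-ins for the mBERT-based QG model, TyDi QA episodes, and the
# sequence-to-sequence LM loss; only the meta-learning skeleton matters.
import copy
import torch
import torch.nn as nn

def sample_episode(n=8, dim=16):
    # One hypothetical few-shot episode for a single language,
    # split into a support set (adaptation) and a query set (evaluation).
    x, y = torch.randn(n, dim), torch.randn(n, 1)
    return (x[:4], y[:4]), (x[4:], y[4:])

model = nn.Linear(16, 1)                      # stand-in for the base QG model
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
inner_lr = 1e-2

for step in range(100):                       # outer (meta) loop
    meta_opt.zero_grad()
    for _ in range(4):                        # episodes per meta-batch
        (xs, ys), (xq, yq) = sample_episode()
        fast = copy.deepcopy(model)           # task-specific copy of the model
        fast_params = list(fast.parameters())
        # Inner loop: one gradient step of adaptation on the support set.
        grads = torch.autograd.grad(loss_fn(fast(xs), ys), fast_params)
        with torch.no_grad():
            for p, g in zip(fast_params, grads):
                p -= inner_lr * g
        # Outer objective: loss of the adapted model on the query set.
        # First-order MAML: copy its gradients back onto the shared init.
        q_grads = torch.autograd.grad(loss_fn(fast(xq), yq), fast_params)
        for p, g in zip(model.parameters(), q_grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()
```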

pdf
ArgGen: Prompting Text Generation Models for Document-Level Event-Argument Aggregation
Debanjana Kar | Sudeshna Sarkar | Pawan Goyal
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022

Most existing discourse-level Information Extraction tasks have been modeled to be extractive in nature. However, we argue that extracting information from larger bodies of discourse, such as documents, requires more natural language understanding and reasoning capabilities. In our work, we propose the novel task of document-level event argument aggregation, which generates consolidated event arguments at a document level with minimal loss of information. More specifically, we focus on generating precise document-level information frames in a multilingual setting using prompt-based methods. In this paper, we show the effectiveness of a prompt-based text generation approach to generate document-level argument spans in a low-resource and zero-shot setting. We also release the first-of-its-kind multilingual event argument aggregation dataset, which can be leveraged in other related multilingual text generation tasks as well: https://github.com/DebanjanaKar/ArgGen.

pdf
PESE: Event Structure Extraction using Pointer Network based Encoder-Decoder Architecture
Alapan Kuila | Sudeshna Sarkar
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

The task of event extraction (EE) aims to find the events and event-related argument information from the text and represent them in a structured format. Most previous works try to solve the problem by separately identifying multiple substructures and aggregating them to get the complete event structure. The problem with these methods is that they fail to identify all the interdependencies among the event participants (event triggers, arguments, and roles). In this paper, we represent each event record in a unique tuple format that contains the trigger phrase, trigger type, argument phrase, and corresponding role information. Our proposed pointer network-based encoder-decoder model generates an event tuple in each time step by exploiting the interactions among event participants, presenting a truly end-to-end solution to the EE task. We evaluate our model on the ACE2005 dataset, and experimental results demonstrate the effectiveness of our model, which achieves competitive performance compared to the state-of-the-art methods.
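
The following sketch illustrates, under simplified assumptions, what one decoding step of a pointer-network-style tuple generator can look like: the decoder state attends over encoder token representations to point at trigger and argument span boundaries and classifies the trigger type and role. The dimensions, label inventories, and single-step interface are illustrative, not the paper's exact architecture.

```python
# One decoding step of a pointer-network-style event tuple generator:
# the decoder state attends over encoder token states to point at
# trigger/argument span boundaries and classifies trigger type and role.
# Dimensions and label-set sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class TupleDecoderStep(nn.Module):
    def __init__(self, hid=64, n_trigger_types=8, n_roles=12):
        super().__init__()
        # One pointer per span boundary; each scores every encoder position.
        self.pointers = nn.ModuleDict({
            name: nn.Linear(2 * hid, 1)
            for name in ("trig_start", "trig_end", "arg_start", "arg_end")
        })
        self.trigger_clf = nn.Linear(hid, n_trigger_types)
        self.role_clf = nn.Linear(hid, n_roles)

    def point(self, name, dec_state, enc_states):
        # Pair the decoder state with every encoder state and score the pair.
        n = enc_states.size(0)
        pairs = torch.cat([enc_states, dec_state.expand(n, -1)], dim=-1)
        return self.pointers[name](pairs).squeeze(-1).softmax(dim=0)

    def forward(self, dec_state, enc_states):
        spans = {name: self.point(name, dec_state, enc_states).argmax().item()
                 for name in self.pointers}
        return {
            "trigger_span": (spans["trig_start"], spans["trig_end"]),
            "argument_span": (spans["arg_start"], spans["arg_end"]),
            "trigger_type": self.trigger_clf(dec_state).argmax(-1).item(),
            "role": self.role_clf(dec_state).argmax(-1).item(),
        }

enc_states = torch.randn(20, 64)   # encoder states for a 20-token sentence
dec_state = torch.randn(64)        # decoder state at the current time step
print(TupleDecoderStep()(dec_state, enc_states))
```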

2021

pdf
ArgFuse: A Weakly-Supervised Framework for Document-Level Event Argument Aggregation
Debanjana Kar | Sudeshna Sarkar | Pawan Goyal
Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021)

Most of the existing information extraction frameworks (Wadden et al., 2019; Veyseh et al., 2020) focus on sentence-level tasks and are hardly able to capture the consolidated information from a given document. In our endeavour to generate precise document-level information frames from lengthy textual records, we introduce the task of Information Aggregation or Argument Aggregation. More specifically, our aim is to filter irrelevant and redundant argument mentions that were extracted at a sentence level and render a document-level information frame. The majority of existing works resolve the related tasks of document-level event argument extraction (Yang et al., 2018; Zheng et al., 2019) and salient entity identification (Jain et al., 2020) using supervised techniques. To remove the dependency on large amounts of labelled data, we explore the task of information aggregation using weakly supervised techniques. In particular, we present an extractive algorithm with multiple sieves which adopts active learning strategies to work efficiently in low-resource settings. For this task, we have annotated our own test dataset comprising 131 document information frames and have released the code and dataset to further research prospects in this new domain. To the best of our knowledge, we are the first to establish baseline results for this task in English. Our data and code are publicly available at https://github.com/DebanjanaKar/ArgFuse.

2020

pdf
Event Argument Extraction using Causal Knowledge Structures
Debanjana Kar | Sudeshna Sarkar | Pawan Goyal
Proceedings of the 17th International Conference on Natural Language Processing (ICON)

Event argument extraction refers to the task of extracting structured information from unstructured text for a particular event of interest. Existing works exhibit poor capabilities to extract causal event arguments like Reason and After-Effect. Furthermore, most existing works model this task at a sentence level, restricting the context to a local scope. While this may be effective for short spans of text, for longer bodies of text such as news articles, it has often been observed that the arguments for an event do not necessarily occur in the same sentence as the one containing the event trigger. To tackle the issue of argument scattering across sentences, the use of global context becomes imperative in this task. In our work, we propose an external knowledge-aided approach to infuse document-level event information to aid the extraction of complex event arguments. We develop a causal network for our event-annotated dataset by extracting relevant event causal structures from ConceptNet and phrases from Wikipedia. We use the extracted event causal features in a bi-directional transformer encoder to effectively capture long-range inter-sentence dependencies. We report the effectiveness of our proposed approach through both qualitative and quantitative analysis. In this task, we establish our findings on an event-annotated dataset in 5 Indian languages. This dataset adds further complexity to the task by labeling arguments of entity type (like Time, Place) as well as more complex argument types (like Reason, After-Effect). Our approach achieves state-of-the-art performance across all five languages. Since our work does not rely on any language-specific features, it can be easily extended to other languages as well.

pdf bib
A Graph Convolution Network-based System for Technical Domain Identification
Alapan Kuila | Ayan Das | Sudeshna Sarkar
Proceedings of the 17th International Conference on Natural Language Processing (ICON): TechDOfication 2020 Shared Task

This paper presents the IITKGP contribution to the Technical DOmain Identification (TechDOfication) shared task at ICON 2020. In the preprocessing stage, we applied part-of-speech (PoS) taggers and dependency parsers to tag the data. We trained a graph convolution neural network (GCNN) based system that uses the tokens along with their PoS and dependency relations as features to identify the domain of a given document. We participated in the subtasks for coarse-grained domain classification in English (Subtask 1a), Bengali (Subtask 1b) and Hindi (Subtask 1d), and in the subtask for fine-grained domain classification within the Computer Science domain in English (Subtask 2a).

2019

pdf
A little perturbation makes a difference: Treebank augmentation by perturbation improves transfer parsing
Ayan Das | Sudeshna Sarkar
Proceedings of the 16th International Conference on Natural Language Processing

We present an approach for cross-lingual transfer of dependency parsers so that a parser trained on a single source language can more effectively cater to diverse target languages. In this work, we show that the cross-lingual performance of the parsers can be enhanced by over-generating the source language treebank. For this, the source language treebank is augmented with its perturbed version, in which controlled perturbation is introduced into the parse trees by stochastically reordering the positions of the dependents with respect to their heads while keeping the structure of the parse trees unchanged. This enables the parser to capture diverse syntactic patterns in addition to those found in the source language. The resulting parser is found to more effectively parse target languages with different syntactic structures. With English as the source language, our system shows an average improvement of 6.7% and 7.7% in terms of UAS and LAS over 29 target languages compared to the baseline single-source parser trained on the unperturbed source language treebank. This also results in a significant improvement over the transfer parser proposed by (CITATION) that involves an “order-free” parser algorithm.
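
A minimal sketch of the perturbation idea, assuming a head-index encoding of dependency trees and an illustrative flip probability: during linearisation, each dependent is stochastically moved to the other side of its head, while the heads themselves (the tree structure) are left untouched.

```python
# Controlled perturbation of dependency trees: each dependent is
# stochastically moved to the other side of its head during linearisation,
# while heads (the tree structure) are left untouched. The head-index
# encoding and the flip probability are illustrative assumptions.
import random

def perturb_order(tokens, heads, p_flip=0.3, seed=None):
    """tokens: list of words; heads: 1-based head index per token, 0 = root."""
    rng = random.Random(seed)
    children = {i: [] for i in range(len(tokens) + 1)}
    for i, h in enumerate(heads, start=1):
        children[h].append(i)

    def linearise(node):
        left, right = [], []
        for d in children[node]:
            before_head = d < node
            if rng.random() < p_flip:          # flip to the other side
                before_head = not before_head
            (left if before_head else right).append(d)
        out = []
        for d in sorted(left):
            out.extend(linearise(d))
        if node != 0:                          # 0 is the artificial root
            out.append(node)
        for d in sorted(right):
            out.extend(linearise(d))
        return out

    return [tokens[i - 1] for i in linearise(0)]

# "she reads books quietly": all words attach to "reads" (token 2, the root).
print(perturb_order(["she", "reads", "books", "quietly"], [2, 0, 2, 2], seed=1))
```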

pdf
Biomedical Relation Classification by single and multiple source domain adaptation
Sinchani Chakraborty | Sudeshna Sarkar | Pawan Goyal | Mahanandeeshwar Gattu
Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)

Relation classification is crucial for inferring semantic relatedness between entities in a piece of text. These systems can be trained given labelled data. However, relation classification is very domain-specific, and it takes a lot of effort to label data for a new domain. In this paper, we explore domain adaptation techniques for this task. While past works have focused on single-source domain adaptation for biomedical relation classification, we classify relations in an unlabeled target domain by transferring useful knowledge from one or more related source domains. Our experiments show that the model improves the state-of-the-art F1 score on 3 benchmark biomedical corpora in the single-domain setting and on 2 out of 3 in the multi-domain setting. When used with contextualized embeddings, there is a further boost in performance, outperforming neural network-based domain adaptation baselines in both cases.

pdf
Medical Entity Linking using Triplet Network
Ishani Mondal | Sukannya Purkayastha | Sudeshna Sarkar | Pawan Goyal | Jitesh Pillai | Amitava Bhattacharyya | Mahanandeeshwar Gattu
Proceedings of the 2nd Clinical Natural Language Processing Workshop

Entity linking (or normalization) is an essential task in text mining that maps the entity mentions in medical text to standard entities in a given Knowledge Base (KB). This task is of great importance in the medical domain. It can also be used for merging different medical and clinical ontologies. In this paper, we focus on the problem of disease linking or normalization. This task is executed in two phases: candidate generation and candidate scoring. We present an approach to rank the candidate Knowledge Base entries based on their similarity with the disease mention, using a Triplet Network for candidate ranking. While existing methods have used carefully generated sieves and external resources for candidate generation, we introduce a robust and portable candidate generation scheme that does not rely on hand-crafted rules. Experimental results on the standard benchmark NCBI disease dataset demonstrate that our system outperforms the prior methods by a significant margin.
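
The sketch below illustrates the triplet-ranking idea under toy assumptions: a shared encoder maps the disease mention (anchor), the correct KB entry (positive), and a wrong candidate (negative) into one embedding space, is trained with a triplet margin loss, and candidates are then ranked by distance to the mention. The character-hashing featuriser and the toy triplets are hypothetical stand-ins, not the paper's actual encoder or data.

```python
# Triplet-based candidate ranking sketch: train an encoder so that the
# correct KB entry sits closer to the mention than a wrong candidate,
# then rank candidates by embedding distance. Encoder and data are toys.
import torch
import torch.nn as nn

def featurise(text, dim=64):
    # Toy character-hashing bag-of-features; a real system would use a
    # learned text encoder over the mention / candidate strings.
    v = torch.zeros(dim)
    for ch in text.lower():
        v[hash(ch) % dim] += 1.0
    return v

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

triplets = [  # (mention, correct KB name, wrong candidate) - toy examples
    ("breast carcinoma", "breast cancer", "lung cancer"),
    ("colon tumour", "colorectal neoplasm", "brain neoplasm"),
]

for epoch in range(50):
    for anchor, pos, neg in triplets:
        a, p, n = (encoder(featurise(t)) for t in (anchor, pos, neg))
        loss = loss_fn(a.unsqueeze(0), p.unsqueeze(0), n.unsqueeze(0))
        opt.zero_grad()
        loss.backward()
        opt.step()

def rank(mention, candidates):
    # Candidate scoring: smaller distance to the mention = better match.
    with torch.no_grad():
        m = encoder(featurise(mention))
        return sorted(candidates,
                      key=lambda c: torch.dist(m, encoder(featurise(c))).item())

print(rank("breast carcinoma", ["lung cancer", "breast cancer"]))
```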

2017

pdf
Delexicalized transfer parsing for low-resource languages using transformed and combined treebanks
Ayan Das | Affan Zaffar | Sudeshna Sarkar
Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies

This paper describes our dependency parsing system for the CoNLL 2017 shared task on Multilingual Parsing from Raw Text to Universal Dependencies. We primarily focus on the low-resource (surprise) languages. We have developed a framework to combine multiple treebanks to train parsers for low-resource languages using delexicalization. We apply transformations to the source language treebanks based on syntactic features of the low-resource language to improve parser performance. In the official evaluation, our system achieves macro-averaged LAS scores of 67.61 and 37.16 on the entire blind test data and the surprise language test data, respectively.
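
As a rough illustration of the delexicalization step, the snippet below replaces lexical forms and lemmas in a CoNLL-U-style treebank with their POS tags, so that a parser trained on the result relies only on language-neutral features; the sample sentence and the choice to substitute the UPOS column are illustrative assumptions, not the system's exact recipe.

```python
# Delexicalisation sketch: replace FORM and LEMMA in CoNLL-U rows with the
# UPOS tag so the trained parser depends only on language-neutral features.
# The sample sentence below is an illustrative assumption.
def delexicalise(conllu_text):
    out = []
    for line in conllu_text.splitlines():
        if not line or line.startswith("#"):
            out.append(line)              # keep comments and blank lines
            continue
        cols = line.split("\t")
        cols[1] = cols[3]                 # FORM  -> UPOS
        cols[2] = cols[3]                 # LEMMA -> UPOS
        out.append("\t".join(cols))
    return "\n".join(out)

sample = "\n".join([
    "1\tshe\tshe\tPRON\tPRP\t_\t2\tnsubj\t_\t_",
    "2\treads\tread\tVERB\tVBZ\t_\t0\troot\t_\t_",
    "3\tbooks\tbook\tNOUN\tNNS\t_\t2\tobj\t_\t_",
])
print(delexicalise(sample))
```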

2016

pdf
Development of a Bengali parser by cross-lingual transfer from Hindi
Ayan Das | Agnivo Saha | Sudeshna Sarkar
Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016)

In recent years there has been a lot of interest in cross-lingual parsing for developing treebanks for languages with small or no annotated treebanks. In this paper, we explore the development of a cross-lingual transfer parser from Hindi to Bengali using a Hindi parser and a Hindi-Bengali parallel corpus. A parser is trained and applied to the Hindi sentences of the parallel corpus and the parse trees are projected to construct probable parse trees of the corresponding Bengali sentences. Only about 14% of these trees are complete (transferred trees contain all the target sentence words) and they are used to construct a Bengali parser. We relax the criteria of completeness to consider well-formed trees (43% of the trees) leading to an improvement. We note that the words often do not have a one-to-one mapping in the two languages but considering sentences at the chunk-level results in better correspondence between the two languages. Based on this we present a method to use chunking as a preprocessing step and do the transfer on the chunk trees. We find that about 72% of the projected parse trees of Bengali are now well-formed. The resultant parser achieves significant improvement in both Unlabeled Attachment Score (UAS) as well as Labeled Attachment Score (LAS) over the baseline word-level transferred parser.

pdf
Query Translation for Cross-Language Information Retrieval using Multilingual Word Clusters
Paheli Bhattacharya | Pawan Goyal | Sudeshna Sarkar
Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016)

In Cross-Language Information Retrieval, finding the appropriate translation of the source language query has always been a difficult problem to solve. We propose a technique towards solving this problem with the help of multilingual word clusters obtained from multilingual word embeddings. We use word embeddings of the languages projected to a common vector space, on which a community-detection algorithm is applied to find clusters such that words that represent the same concept in different languages fall in the same group. We utilize these multilingual word clusters to perform query translation for Cross-Language Information Retrieval for three languages: English, Hindi and Bengali. We have experimented with the FIRE 2012 and Wikipedia datasets and have shown improvements over several standard methods such as a dictionary-based method, a transliteration-based model and Google Translate.
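
A small sketch of the cluster-based translation idea, with toy vectors and an assumed similarity threshold: words from different languages already embedded in one shared space are linked by cosine similarity, communities are detected on the resulting graph (here with NetworkX's greedy modularity method rather than the paper's specific algorithm), and a query word is translated by picking same-cluster words of the target language.

```python
# Cluster-based query translation sketch: build a similarity graph over
# words from several languages in a shared embedding space, detect
# communities, and translate a query term via same-cluster target words.
# Toy vectors, the 0.95 threshold and the language tags are assumptions.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

emb = {  # (language, word) -> vector in a shared cross-lingual space
    ("en", "river"): np.array([0.90, 0.10, 0.00]),
    ("hi", "नदी"):   np.array([0.88, 0.12, 0.02]),
    ("bn", "নদী"):   np.array([0.85, 0.15, 0.05]),
    ("en", "bank"):  np.array([0.10, 0.90, 0.10]),
    ("hi", "बैंक"):  np.array([0.12, 0.88, 0.08]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

g = nx.Graph()
g.add_nodes_from(emb)
for u in emb:
    for v in emb:
        if u < v and cos(emb[u], emb[v]) > 0.95:   # similarity threshold
            g.add_edge(u, v)

clusters = list(greedy_modularity_communities(g))

def translate(word, src, tgt):
    # Return all target-language words that share a cluster with the query word.
    for c in clusters:
        if (src, word) in c:
            return [w for (lang, w) in c if lang == tgt]
    return []

print(translate("river", "en", "hi"))   # expected: ['नदी']
```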

pdf
A study of attention-based neural machine translation model on Indian languages
Ayan Das | Pranay Yerra | Ken Kumar | Sudeshna Sarkar
Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016)

Neural machine translation (NMT) models have recently been shown to be very successful in machine translation (MT). The use of LSTMs in machine translation has significantly improved translation performance for longer sentences by capturing the context and long-range correlations of the sentences in their hidden layers. The attention-based NMT system (Bahdanau et al., 2014) has become the state of the art, performing equal to or better than other statistical MT approaches. In this paper, we study the performance of the attention-based NMT system (Bahdanau et al., 2014) on the Indian language pair Hindi-Bengali and analyse the types of errors that occur when the languages are morphologically rich and large parallel training corpora are scarce. We then carry out certain post-processing heuristic steps to improve the quality of the translated sentences and suggest further measures that can be carried out.

pdf
Cross-lingual transfer parser from Hindi to Bengali using delexicalization and chunking
Ayan Das | Agnivo Saha | Sudeshna Sarkar
Proceedings of the 13th International Conference on Natural Language Processing

2014

pdf
Accurate Identification of the Karta (Subject) Relation in Bangla
Arnab Dhar | Sudeshna Sarkar
Proceedings of the 11th International Conference on Natural Language Processing

pdf
Handling Plurality in Bengali Noun Phrases
Biswanath Barik | Sudeshna Sarkar
Proceedings of the 11th International Conference on Natural Language Processing

2012

pdf
An Efficient Technique for De-Noising Sentences using Monolingual Corpus and Synonym Dictionary
Sanjay Chatterji | Diptesh Chatterjee | Sudeshna Sarkar
Proceedings of COLING 2012: Demonstration Papers

pdf
A Hybrid Dependency Parser for Bangla
Arnab Dhar | Sanjay Chatterji | Sudeshna Sarkar | Anupam Basu
Proceedings of the 10th Workshop on Asian Language Resources

pdf
Repairing Bengali Verb Chunks for Improved Bengali to Hindi Machine Translation
Sanjay Chatterji | Nabanita Datta | Arnab Dhar | Biswanath Barik | Sudeshna Sarkar | Anupam Basu
Proceedings of the 10th Workshop on Asian Language Resources

pdf
Translations of Ambiguous Hindi Pronouns to Possible Bengali Pronouns
Sanjay Chatterji | Sudeshna Sarkar | Anupam Basu
Proceedings of the 10th Workshop on Asian Language Resources

pdf
A Three Stage Hybrid Parser for Hindi
Sanjay Chatterji | Arnab Dhar | Sudeshna Sarkar | Anupam Basu
Proceedings of the Workshop on Machine Translation and Parsing in Indian Languages

2010

pdf bib
Proceedings of the 4th Workshop on Cross Lingual Information Access
Sudeshna Sarkar | Min Zhang | Adam Lopez | Raghavendra Udupa
Proceedings of the 4th Workshop on Cross Lingual Information Access

pdf
Co-occurrence Graph Based Iterative Bilingual Lexicon Extraction From Comparable Corpora
Diptesh Chatterjee | Sudeshna Sarkar | Arpit Mishra
Proceedings of the 4th Workshop on Cross Lingual Information Access

2009

pdf bib
Proceedings of the Third International Workshop on Cross Lingual Information Access: Addressing the Information Need of Multilingual Societies (CLIAWS3)
Sivaji Bandyopadhyay | Pushpak Bhattacharyya | Vasudeva Varma | Sudeshna Sarkar | A Kumaran | Raghavendra Udupa
Proceedings of the Third International Workshop on Cross Lingual Information Access: Addressing the Information Need of Multilingual Societies (CLIAWS3)

pdf
Learning Multi Character Alignment Rules and Classification of Training Data for Transliteration
Dipankar Bose | Sudeshna Sarkar
Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009)

2008

pdf
A Hybrid Feature Set based Maximum Entropy Hindi Named Entity Recognition
Sujan Kumar Saha | Sudeshna Sarkar | Pabitra Mitra
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I

pdf
A Hybrid Named Entity Recognition System for South and South East Asian Languages
Sujan Kumar Saha | Sanjay Chatterji | Sandipan Dandapat | Sudeshna Sarkar | Pabitra Mitra
Proceedings of the IJCNLP-08 Workshop on Named Entity Recognition for South and South East Asian Languages

pdf
Bengali and Hindi to English CLIR Evaluation
Debasis Mandal | Sandipan Dandapat | Mayank Gupta | Pratyush Banerjee | Sudeshna Sarkar
Proceedings of the 2nd workshop on Cross Lingual Information Access (CLIA) Addressing the Information Need of Multilingual Societies

pdf bib
Gazetteer Preparation for Named Entity Recognition in Indian Languages
Sujan Kumar Saha | Sudeshna Sarkar | Pabitra Mitra
Proceedings of the 6th Workshop on Asian Language Resources

pdf
Word Clustering and Word Selection Based Feature Reduction for MaxEnt Based Hindi NER
Sujan Kumar Saha | Pabitra Mitra | Sudeshna Sarkar
Proceedings of ACL-08: HLT

2007

pdf
Automatic Part-of-Speech Tagging for Bengali: An Approach for Morphologically Rich Languages in a Poor Resource Scenario
Sandipan Dandapat | Sudeshna Sarkar | Anupam Basu
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions

pdf
Evolution, Optimization, and Language Change: The Case of Bengali Verb Inflections
Monojit Choudhury | Vaibhav Jalan | Sudeshna Sarkar | Anupam Basu
Proceedings of Ninth Meeting of the ACL Special Interest Group in Computational Morphology and Phonology

2006

pdf
A Conceptual Analysis of the Notion of Instrumentality via a Multilingual Analysis
Asanee Kawtrakul | Mukda Suktarachan | Bali Ranaivo-Malancon | Pek Kuan | Achla Raina | Sudeshna Sarkar | Alda Mari | Sina Zarriess | Elixabete Murguia | Patrick Saint-Dizier
Proceedings of the Third ACL-SIGSEM Workshop on Prepositions

2004

pdf
A Diachronic Approach for Schwa Deletion in Indo Aryan Languages
Monojit Choudhury | Anupam Basu | Sudeshna Sarkar
Proceedings of the 7th Meeting of the ACL Special Interest Group in Computational Phonology: Current Themes in Computational Phonology and Morphology