2022
pdf
abs
TCS WITM 2022@FinSim4-ESG: Augmenting BERT with Linguistic and Semantic features for ESG data classification
Tushar Goel
|
Vipul Chauhan
|
Suyash Sangwan
|
Ishan Verma
|
Tirthankar Dasgupta
|
Lipika Dey
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
Advanced neural network architectures have provided several opportunities to develop systems that automatically capture information from domain-specific unstructured text sources. The FinSim4-ESG shared task, co-located with the FinNLP workshop, proposed two sub-tasks. In sub-task 1, the challenge was to design systems that could utilize contextual word embeddings along with sustainability resources to elaborate an ESG taxonomy. In the second sub-task, participants were asked to design a system that could classify sentences as sustainable or unsustainable. In this paper, we utilize semantic similarity features along with BERT embeddings to segregate domain terms into a fixed number of class labels. The proposed model not only considers contextual BERT embeddings but also incorporates Word2Vec, cosine, and Jaccard similarity features, which give word-level importance signals to the model. For sentence classification, several linguistic elements along with BERT embeddings were used as classification features. We present a detailed ablation study for the proposed models.
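The feature combination described in this abstract can be sketched roughly as follows: a minimal illustration, not the authors' implementation, in which the term embedding, class descriptions, and the downstream classifier are all assumptions made for the example.

```python
# Hypothetical sketch: combine a term's (precomputed) BERT embedding with
# cosine and Jaccard similarity features against each ESG class description,
# then feed the concatenated vector to a simple classifier.
import numpy as np
from numpy.linalg import norm
from sklearn.linear_model import LogisticRegression

def cosine(a, b):
    # cosine similarity between two dense vectors
    return float(np.dot(a, b) / (norm(a) * norm(b) + 1e-9))

def jaccard(text_a, text_b):
    # token-level Jaccard overlap between a term and a class description
    sa, sb = set(text_a.lower().split()), set(text_b.lower().split())
    return len(sa & sb) / (len(sa | sb) or 1)

def build_features(term, term_emb, class_descs, class_embs):
    # term_emb: embedding of the term; class_embs: embeddings of class descriptions
    sims = [cosine(term_emb, ce) for ce in class_embs]
    overlaps = [jaccard(term, desc) for desc in class_descs]
    return np.concatenate([term_emb, sims, overlaps])

# Usage (shapes and inputs are illustrative):
# X = np.stack([build_features(t, e, CLASS_DESCS, CLASS_EMBS)
#               for t, e in zip(terms, term_embs)])
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
```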
pdf
abs
ATL at FinCausal 2022: Transformer Based Architecture for Automatic Causal Sentence Detection and Cause-Effect Extraction
Abir Naskar
|
Tirthankar Dasgupta
|
Sudeshna Jana
|
Lipika Dey
Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022
Automatic extraction of cause-effect relationships from natural language texts is a challenging open problem in Artificial Intelligence. Most of the early attempts at its solution used manually constructed linguistic and syntactic rules on restricted-domain data sets. With the advent of big data and the recent popularization of deep learning, the paradigm to tackle this problem has slowly shifted. In this work we propose a transformer-based architecture to automatically detect causal sentences from textual mentions and then identify the corresponding cause-effect relations. We describe our submission to the FinCausal 2022 shared task based on this method. Our model achieves an F1-score of 0.99 for Task-1 and an F1-score of 0.60 for Task-2 on the shared task data set of financial documents.
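For orientation only, the causal-sentence detection step could look like the following sketch, which uses a generic Hugging Face sequence classifier; the checkpoint name and label convention are placeholders, not the authors' setup.

```python
# Illustrative sketch: score sentences as causal / non-causal with a
# fine-tuned BERT-style sequence classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder; a fine-tuned checkpoint would be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def is_causal(sentence: str) -> bool:
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return bool(logits.argmax(dim=-1).item())  # 1 = causal (by convention here)

print(is_causal("Profits fell because of rising input costs."))
```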
2020
pdf
abs
Extracting Semantic Aspects for Structured Representation of Clinical Trial Eligibility Criteria
Tirthankar Dasgupta
|
Ishani Mondal
|
Abir Naskar
|
Lipika Dey
Proceedings of the 3rd Clinical Natural Language Processing Workshop
Eligibility criteria in clinical trials specify the characteristics that a patient must or must not possess in order to be treated according to a standard clinical care guideline. As the process of manual eligibility determination is time-consuming, automatic structuring of the eligibility criteria into various semantic categories or aspects is the need of the hour. Existing methods use hand-crafted rules and feature-based statistical machine learning methods to induce semantic aspects. To deal with the paucity of aspect-annotated clinical trial data, we propose a novel weakly-supervised co-training based method which can exploit a large pool of unlabeled criteria sentences to augment the limited supervised training data and consequently enhance performance. Experiments with 0.2M criteria sentences show that the proposed approach outperforms the competitive supervised baselines by 12% in terms of micro-averaged F1 score across all the aspects. Probing deeper into the analysis, we observe that domain-specific information boosts performance by a significant margin.
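A generic co-training loop of the kind mentioned above might look like the sketch below; the two feature views, confidence threshold, and classifier are assumptions for illustration, not the paper's configuration.

```python
# Simplified co-training sketch: two feature "views" X1/X2 of the same criteria
# sentences, a small labeled set (X1_l, X2_l, y_l), and a large unlabeled pool.
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1_l, X2_l, y_l, X1_u, X2_u, rounds=5, k=50, conf=0.9):
    y_l = list(y_l)
    for _ in range(rounds):
        c1 = LogisticRegression(max_iter=1000).fit(X1_l, y_l)
        c2 = LogisticRegression(max_iter=1000).fit(X2_l, y_l)
        if len(X1_u) == 0:
            break
        p1, p2 = c1.predict_proba(X1_u), c2.predict_proba(X2_u)
        # pick the unlabeled examples that either view predicts most confidently
        best = np.maximum(p1.max(1), p2.max(1))
        idx = np.argsort(-best)[:k]
        idx = idx[best[idx] >= conf]
        if len(idx) == 0:
            break
        # pseudo-label with whichever view is more confident on each example
        pseudo = np.where(p1[idx].max(1) >= p2[idx].max(1),
                          c1.classes_[p1[idx].argmax(1)],
                          c2.classes_[p2[idx].argmax(1)])
        X1_l = np.vstack([X1_l, X1_u[idx]])
        X2_l = np.vstack([X2_l, X2_u[idx]])
        y_l.extend(pseudo.tolist())
        keep = np.setdiff1d(np.arange(len(X1_u)), idx)
        X1_u, X2_u = X1_u[keep], X2_u[keep]
    return c1, c2
```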
pdf
abs
Learning Domain Terms - Empirical Methods to Enhance Enterprise Text Analytics Performance
Gargi Roy
|
Lipika Dey
|
Mohammad Shakir
|
Tirthankar Dasgupta
Proceedings of the 28th International Conference on Computational Linguistics: Industry Track
The performance of standard text analytics algorithms is known to degrade substantially on consumer-generated data, which is often very noisy. These algorithms also do not work well on enterprise data, whose nature is very different from news repositories, storybooks, or Wikipedia data. Text cleaning is a mandatory step which aims at noise removal and correction to improve performance. However, enterprise data needs special cleaning methods, since it contains many domain terms which appear to be noise against a standard dictionary but in reality are not. In this work we present a detailed analysis of the characteristics of enterprise data and suggest unsupervised methods for cleaning these repositories after domain terms have been automatically segregated from true noise terms. Noise terms are thereafter corrected in a contextual fashion. The effectiveness of the method is established through careful manual evaluation of error corrections over several standard data sets, including those available for hate speech detection, where there is deliberate distortion to avoid detection. We also share results showing the enhancement in classification accuracy after noise correction.
2018
pdf
abs
Automatic Curation and Visualization of Crime Related Information from Incrementally Crawled Multi-source News Reports
Tirthankar Dasgupta
|
Lipika Dey
|
Rupsa Saha
|
Abir Naskar
Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations
In this paper, we demonstrate a system for the automatic extraction and curation of crime-related information from multi-source digitally published news articles collected over a period of five years. We leverage a deep convolutional recurrent neural network model to analyze crime articles and extract different crime-related entities and events. The proposed methods are not restricted to detecting known crimes only but contribute actively towards maintaining an updated crime ontology. We have conducted experiments with a collection of 5000 crime-reporting news articles spanning time and multiple sources. The end product of our experiments is a crime register that contains details of crimes committed across geographies and time. This register can be further utilized for analytical and reporting purposes.
pdf
abs
Augmenting Textual Qualitative Features in Deep Convolution Recurrent Neural Network for Automatic Essay Scoring
Tirthankar Dasgupta
|
Abir Naskar
|
Lipika Dey
|
Rupsa Saha
Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications
In this paper we present a qualitatively enhanced deep convolutional recurrent neural network for computing the quality of a text in an automatic essay scoring task. The novelty of the work lies in the fact that, instead of considering only the word and sentence representations of a text, we augment the complex linguistic, cognitive, and psychological features associated with a text document within a hierarchical convolutional recurrent neural network framework. Our preliminary investigation shows that incorporating such qualitative feature vectors along with standard word/sentence embeddings can give a better understanding of how to improve the overall evaluation of the input essays.
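The fusion of hand-crafted qualitative features with a learned text representation can be sketched as below; this is a minimal stand-in (a single LSTM encoder rather than the paper's hierarchical convolutional recurrent model), and all dimensions are assumptions.

```python
# Minimal sketch: fuse hand-crafted essay-level features with a learned
# recurrent representation of the text, then regress an essay score.
import torch
import torch.nn as nn

class FeatureAugmentedScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hid_dim=128, n_qual_feats=10):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim + n_qual_feats, 1)  # score regression head

    def forward(self, token_ids, qual_feats):
        # token_ids: (batch, seq_len); qual_feats: (batch, n_qual_feats)
        _, (h, _) = self.rnn(self.emb(token_ids))
        fused = torch.cat([h[-1], qual_feats], dim=-1)
        return self.out(fused).squeeze(-1)

# Toy forward pass with illustrative shapes
model = FeatureAugmentedScorer(vocab_size=5000)
scores = model(torch.randint(1, 5000, (4, 60)), torch.rand(4, 10))
```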
pdf
bib
Proceedings of the First International Workshop on Language Cognition and Computational Models
Manjira Sinha
|
Tirthankar Dasgupta
Proceedings of the First International Workshop on Language Cognition and Computational Models
pdf
abs
Automatic Extraction of Causal Relations from Text using Linguistically Informed Deep Neural Networks
Tirthankar Dasgupta
|
Rupsa Saha
|
Lipika Dey
|
Abir Naskar
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue
In this paper we propose a linguistically informed recursive neural network architecture for automatic extraction of cause-effect relations from text. These relations can be expressed in arbitrarily complex ways. The architecture uses word-level embeddings and other linguistic features to detect causal events and their effects mentioned within a sentence. The extracted events and their relations are used to build a causal graph after clustering and appropriate generalization, which is then used for predictive purposes. We have evaluated the performance of the proposed extraction model against two baseline systems, one a rule-based classifier and the other a conditional random field (CRF) based supervised model. We have also compared our results with related work reported in the past by other authors on the SemEval data set, and found that the proposed bi-directional LSTM model enhanced with an additional linguistic layer performs better. We have also worked extensively on creating new annotated datasets from publicly available data, which we are willing to share with the community.
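A bi-directional LSTM augmented with a linguistic feature channel, as referenced in the abstract, could be sketched roughly like this; the POS-tag embedding is used here as a stand-in for the paper's linguistic layer, and the tag set and sizes are assumptions.

```python
# Hedged sketch: a BiLSTM tagger over word + POS-tag embeddings that labels
# each token as Cause / Effect / Other (a simplification for illustration).
import torch
import torch.nn as nn

class CausalTagger(nn.Module):
    def __init__(self, vocab_size, pos_size, n_tags=3, emb=100, pos_emb=25, hid=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.pos_emb = nn.Embedding(pos_size, pos_emb, padding_idx=0)
        self.bilstm = nn.LSTM(emb + pos_emb, hid, batch_first=True, bidirectional=True)
        self.classify = nn.Linear(2 * hid, n_tags)

    def forward(self, words, pos_tags):
        # concatenate lexical and linguistic (POS) embeddings per token
        x = torch.cat([self.word_emb(words), self.pos_emb(pos_tags)], dim=-1)
        h, _ = self.bilstm(x)
        return self.classify(h)  # per-token logits over {Cause, Effect, Other}

# Toy forward pass
tagger = CausalTagger(vocab_size=8000, pos_size=50)
logits = tagger(torch.randint(1, 8000, (2, 20)), torch.randint(1, 50, (2, 20)))
```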
pdf
abs
Leveraging Web Based Evidence Gathering for Drug Information Identification from Tweets
Rupsa Saha
|
Abir Naskar
|
Tirthankar Dasgupta
|
Lipika Dey
Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task
In this paper, we explore web-based evidence gathering and different linguistic features to automatically extract drug names from tweets and further classify such tweets as describing Adverse Drug Events or not. We evaluate our proposed models on the datasets released for the SMM4H workshop shared Task-1 and Task-3 respectively. Our evaluation results show that the proposed model achieved good results, with Precision, Recall, and F-scores of 78.5%, 88%, and 82.9% respectively for Task-1, and 33.2%, 54.7%, and 41.3% for Task-3.
2017
pdf
Study on Visual Word Recognition in Bangla across Different Reader Groups
Manjira Sinha
|
Tirthankar Dasgupta
Proceedings of the 14th International Conference on Natural Language Processing (ICON-2017)
2016
pdf
Effect of Syntactic Features in Bangla Sentence Comprehension
Manjira Sinha
|
Tirthankar Dasgupta
|
Anupam Basu
Proceedings of the 13th International Conference on Natural Language Processing
pdf
abs
A Framework for Mining Enterprise Risk and Risk Factors from News Documents
Tirthankar Dasgupta
|
Lipika Dey
|
Prasenjit Dey
|
Rupsa Saha
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations
Any real-world event or trend that can affect a company’s growth trajectory can be considered a risk. There has been a growing need to automatically identify, extract, and analyze risk-related statements from news events. In this demonstration, we present a risk analytics framework that processes enterprise project management reports in the form of textual data and news documents and classifies them into valid and invalid risk categories. The framework also extracts information from the text pertaining to the different categories of risks, such as their possible causes and impacts. Accordingly, we have used machine learning based techniques and studied different linguistic features such as n-grams, POS, dependency relations, future timing, and uncertainty factors in texts, as well as their various combinations. A manual annotation study by management experts, using risk descriptions collected for a specific organization, was conducted to evaluate the framework. The evaluation showed promising results for automated risk analysis and identification.
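A feature pipeline of the kind listed above could be assembled as in the sketch below; the cue lexicons, feature choices, and classifier are hypothetical placeholders, not the framework's actual components.

```python
# Illustrative sketch: n-gram counts combined with simple future-timing and
# uncertainty cue counts, feeding a linear classifier for valid / invalid risk.
import numpy as np
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import FunctionTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

FUTURE_CUES = {"will", "may", "expected", "likely", "plans"}          # hypothetical lexicon
UNCERTAIN_CUES = {"might", "could", "possibly", "uncertain", "risk"}  # hypothetical lexicon

def cue_features(texts):
    # count how many future-timing and uncertainty cues appear in each sentence
    rows = []
    for t in texts:
        toks = set(t.lower().split())
        rows.append([len(toks & FUTURE_CUES), len(toks & UNCERTAIN_CUES)])
    return np.array(rows)

model = Pipeline([
    ("features", FeatureUnion([
        ("ngrams", CountVectorizer(ngram_range=(1, 2), min_df=1)),
        ("cues", FunctionTransformer(cue_features)),
    ])),
    ("clf", LinearSVC()),
])
# model.fit(train_sentences, train_labels); model.predict(test_sentences)
```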
2015
pdf
Compositionality in Bangla Compound Verbs and their Processing in the Mental Lexicon
Tirthankar Dasgupta
|
Manjira Sinha
|
Anupam Basu
Proceedings of the 12th International Conference on Natural Language Processing
2014
pdf
Influence of Target Reader Background and Text Features on Text Readability in Bangla: A Computational Approach
Manjira Sinha
|
Tirthankar Dasgupta
|
Anupam Basu
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers
pdf
Text Readability in Hindi: A Comparative Study of Feature Performances Using Support Vectors
Manjira Sinha
|
Tirthankar Dasgupta
|
Anupam Basu
Proceedings of the 11th International Conference on Natural Language Processing
pdf
abs
Design and Development of an Online Computational Framework to Facilitate Language Comprehension Research on Indian Languages
Manjira Sinha
|
Tirthankar Dasgupta
|
Anupam Basu
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
In this paper we present an open-source online computational framework that can be used by different research groups to conduct reading research on Indian language texts. The framework can be used to develop large annotated Indian language text comprehension datasets from different user-based experiments. The novelty of this framework lies in the fact that it brings different empirical data-collection techniques for text comprehension under one roof. The framework has been customized specifically to address the language particularities of Indian languages. It also offers many types of automatic analysis on the data at different levels, such as full text, sentence, and word level. To address the subjectivity of text difficulty perception, the framework allows user background to be captured against multiple factors. The assimilated data can be automatically cross-referenced against varying strata of readers.
2013
pdf
Psycholinguistically Motivated Computational Models on the Organization and Processing of Morphologically Complex Words
Tirthankar Dasgupta
51st Annual Meeting of the Association for Computational Linguistics Proceedings of the Student Research Workshop
2012
pdf
Forward Transliteration of Dzongkha Text to Braille
Tirthankar Dasgupta
|
Manjira Sinha
|
Anupam Basu
Proceedings of the Second Workshop on Advances in Text Input Methods
pdf
Automatic Extraction of Compound Verbs from Bangla Corpora
Sibanshu Mukhopadhayay
|
Tirthankar Dasgupta
|
Manjira Sinha
|
Anupam Basu
Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing
pdf
A New Semantic Lexicon and Similarity Measure in Bangla
Manjira Sinha
|
Abhik Jana
|
Tirthankar Dasgupta
|
Anupam Basu
Proceedings of the 3rd Workshop on Cognitive Aspects of the Lexicon
pdf
Modelling the Organization and Processing of Bangla Polymorphemic Words in the Mental Lexicon: A Computational Approach
Tirthankar Dasgupta
|
Manjira Sinha
|
Anupam Basu
Proceedings of COLING 2012: Posters
pdf
New Readability Measures for Bangla and Hindi Texts
Manjira Sinha
|
Sakshi Sharma
|
Tirthankar Dasgupta
|
Anupam Basu
Proceedings of COLING 2012: Posters
2010
pdf
abs
Resource Creation for Training and Testing of Transliteration Systems for Indian Languages
Sowmya V. B.
|
Monojit Choudhury
|
Kalika Bali
|
Tirthankar Dasgupta
|
Anupam Basu
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
Machine transliteration is used in a number of NLP applications, ranging from machine translation and information retrieval to input mechanisms for non-Roman scripts. Many popular Input Method Editors for Indian languages, such as Baraha, Akshara, and Quillpad, use back-transliteration as a mechanism to allow users to input text in a number of Indian languages. The lack of a standard dataset to evaluate these systems makes it difficult to make any meaningful comparisons of their relative accuracies. In this paper, we describe the methodology for the creation of a dataset of ~2500 transliterated sentence pairs each in Bangla, Hindi and Telugu. The data was collected across three different modes from a total of 60 users. We believe that this dataset will prove useful not only for the evaluation and training of back-transliteration systems but also for the linguistic analysis of the process of transliterating Indian languages from native scripts to Roman script.
2008
pdf
Prototype Machine Translation System From Text-To-Indian Sign Language
Tirthankar Dasgupta
|
Sandipan Dandpat
|
Anupam Basu
Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages
A Multilingual Multimedia Indian Sign Language Dictionary Tool
Tirthankar Dasgupta
|
Sambit Shukla
|
Sandeep Kumar
|
Synny Diwakar
|
Anupam Basu
Proceedings of the 6th Workshop on Asian Language Resources