Suma Bhat


2021

Idiomatic Expression Identification using Semantic Compatibility
Ziheng Zeng | Suma Bhat
Transactions of the Association for Computational Linguistics, Volume 9

Idiomatic expressions are an integral part of natural language and are constantly being added to a language. Owing to their non-compositionality and their ability to take on a figurative or literal meaning depending on the sentential context, they have been a classical challenge for NLP systems. To address this challenge, we study the task of detecting whether a sentence has an idiomatic expression and localizing it when it occurs in a figurative sense. Prior research for this task has studied specific classes of idiomatic expressions, offering limited views of their generalizability to new idioms. We propose a multi-stage neural architecture with attention flow as a solution. The network effectively fuses contextual and lexical information at different levels using word and sub-word representations. Empirical evaluations on three of the largest benchmark datasets with idiomatic expressions of varied syntactic patterns and degrees of non-compositionality show that our proposed model achieves new state-of-the-art results. A salient feature of the model is its ability to identify idioms unseen during training, with gains from 1.4% to 30.8% over competitive baselines on the largest dataset.

PIE: A Parallel Idiomatic Expression Corpus for Idiomatic Sentence Generation and Paraphrasing
Jianing Zhou | Hongyu Gong | Suma Bhat
Proceedings of the 17th Workshop on Multiword Expressions (MWE 2021)

Idiomatic expressions (IE) play an important role in natural language, and have long been a “pain in the neck” for NLP systems. Despite this, text generation tasks related to IEs remain largely under-explored. In this paper, we propose two new tasks of idiomatic sentence generation and paraphrasing to fill this research gap. We introduce a curated dataset of 823 IEs and, as the primary resource for our tasks, a parallel corpus pairing sentences containing these IEs with the same sentences in which the IEs are replaced by their literal paraphrases. To inspire further research on our proposed tasks, we benchmark existing deep learning models that achieve state-of-the-art performance on related tasks, using both automated and manual evaluation on our dataset. By establishing baseline models, we pave the way for more comprehensive and accurate modeling of IEs, both for generation and paraphrasing.

Generate, Prune, Select: A Pipeline for Counterspeech Generation against Online Hate Speech
Wanzheng Zhu | Suma Bhat
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Euphemistic Phrase Detection by Masked Language Model
Wanzheng Zhu | Suma Bhat
Findings of the Association for Computational Linguistics: EMNLP 2021

It is a well-known approach for fringe groups and organizations to use euphemisms—ordinary-sounding and innocent-looking words with a secret meaning—to conceal what they are discussing. For instance, drug dealers often use “pot” for marijuana and “avocado” for heroin. From a social media content moderation perspective, though recent advances in NLP have enabled the automatic detection of such single-word euphemisms, no existing work is capable of automatically detecting multi-word euphemisms, such as “blue dream” (marijuana) and “black tar” (heroin). Our paper tackles the problem of euphemistic phrase detection without human effort for the first time, as far as we are aware. We first perform phrase mining on a raw text corpus (e.g., social media posts) to extract quality phrases. Then, we utilize word embedding similarities to select a set of euphemistic phrase candidates. Finally, we rank those candidates by a masked language model—SpanBERT. Compared to strong baselines, we report 20-50% higher detection accuracies using our algorithm for detecting euphemistic phrases.
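
For illustration, here is a minimal sketch of the final ranking stage described above: candidate phrases are scored by how well a masked language model predicts them in a masked slot. The paper ranks with SpanBERT; the sketch below substitutes a generic BERT masked LM, and the template sentence and candidate list are hypothetical stand-ins for the output of phrase mining.

```python
# Hedged sketch of masked-LM candidate ranking; bert-base-uncased stands in
# for the SpanBERT ranker used in the paper.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def phrase_score(template: str, phrase: str) -> float:
    """Mean log-probability of the phrase's tokens when its slot is masked."""
    phrase_ids = tokenizer(phrase, add_special_tokens=False)["input_ids"]
    masked = template.replace("[SLOT]", " ".join([tokenizer.mask_token] * len(phrase_ids)))
    inputs = tokenizer(masked, return_tensors="pt")
    with torch.no_grad():
        log_probs = model(**inputs).logits.log_softmax(dim=-1)
    positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    return sum(log_probs[0, p, t].item() for p, t in zip(positions, phrase_ids)) / len(phrase_ids)

# Rank hypothetical mined phrases by how naturally they fill a drug-related slot.
template = "He was caught selling [SLOT] on the street corner."
candidates = ["blue dream", "black tar", "coffee table"]
for phrase in sorted(candidates, key=lambda p: -phrase_score(template, p)):
    print(phrase)
```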

Paraphrase Generation: A Survey of the State of the Art
Jianing Zhou | Suma Bhat
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

This paper focuses on paraphrase generation, which is a widely studied natural language generation task in NLP. With the development of neural models, paraphrase generation research has exhibited a gradual shift to neural methods in recent years. This has provided architectures for contextualized representation of an input text and for generating fluent, diverse and human-like paraphrases. This paper surveys various approaches to paraphrase generation with a main focus on neural methods.

2020

Context-Aware Automatic Text Simplification of Health Materials in Low-Resource Domains
Tarek Sakakini | Jong Yoon Lee | Aditya Duri | Renato F.L. Azevedo | Victor Sadauskas | Kuangxiao Gu | Suma Bhat | Dan Morrow | James Graumlich | Saqib Walayat | Mark Hasegawa-Johnson | Thomas Huang | Ann Willemsen-Dunlap | Donald Halpin
Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis

Healthcare systems have increased patients’ exposure to their own health materials in order to enhance their health levels, but this effort has been impeded by patients’ lack of understanding of these materials. We address potential barriers to comprehension by developing a context-aware text simplification system for health material. Given the scarcity of annotated parallel corpora in healthcare domains, we design our system to be independent of a parallel corpus, complementing data-driven neural methods when such corpora are available. Our system compensates for the lack of direct supervision using a biomedical lexical database, the Unified Medical Language System (UMLS). Compared to a competitive prior approach that uses a tool for identifying biomedical concepts and a consumer-directed vocabulary list, we empirically show the enhanced accuracy of our system due to improved handling of ambiguous terms. We also show the enhanced accuracy of our system over directly supervised neural methods in this low-resource setting. Finally, we show the direct impact of our system on laypeople’s comprehension of health material via a human subjects study (n=160).
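
As a rough illustration of the lexicon-backed substitution idea (not the paper's full context-aware pipeline), the sketch below replaces biomedical terms with lay equivalents from a lookup table; the tiny lay_terms dictionary is a hypothetical stand-in for consumer-oriented synonyms retrieved from UMLS.

```python
# Toy sketch of lexicon-driven term substitution; the entries below are
# hypothetical, and the paper's context-aware disambiguation is not modeled.
import re

lay_terms = {
    "myocardial infarction": "heart attack",
    "hypertension": "high blood pressure",
    "edema": "swelling",
}

def simplify(text: str) -> str:
    # Replace longer terms first so multi-word terms win over substrings.
    for term in sorted(lay_terms, key=len, reverse=True):
        text = re.sub(rf"\b{re.escape(term)}\b", lay_terms[term], text, flags=re.IGNORECASE)
    return text

print(simplify("Patients with hypertension face a higher risk of myocardial infarction."))
```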

Rich Syntactic and Semantic Information Helps Unsupervised Text Style Transfer
Hongyu Gong | Linfeng Song | Suma Bhat
Proceedings of the 13th International Conference on Natural Language Generation

Text style transfer aims to change an input sentence to an output sentence by changing its text style while preserving the content. Previous efforts on unsupervised text style transfer only use the surface features of words and sentences. As a result, the transferred sentences may have inaccurate or missing information compared to the inputs. We address this issue by explicitly enriching the inputs via syntactic and semantic structures, from which richer features are then extracted to better capture the original information. Experiments on two text-style-transfer tasks show that our approach improves the content preservation of a strong unsupervised baseline model, thereby demonstrating improved transfer performance.

GRUEN for Evaluating Linguistic Quality of Generated Text
Wanzheng Zhu | Suma Bhat
Findings of the Association for Computational Linguistics: EMNLP 2020

Automatic evaluation metrics are indispensable for evaluating generated text. To date, these metrics have focused almost exclusively on the content selection aspect of the system output, ignoring the linguistic quality aspect altogether. We bridge this gap by proposing GRUEN for evaluating Grammaticality, non-Redundancy, focUs, structure and coherENce of generated text. GRUEN utilizes a BERT-based model and a class of syntactic, semantic, and contextual features to examine the system output. Unlike most existing evaluation metrics, which require human references as input, GRUEN is reference-less and requires only the system output. It also has the advantage of being unsupervised, deterministic, and adaptable to various tasks. Experiments on seven datasets over four language generation tasks show that the proposed metric correlates highly with human judgments.

Enriching Word Embeddings with Temporal and Spatial Information
Hongyu Gong | Suma Bhat | Pramod Viswanath
Proceedings of the 24th Conference on Computational Natural Language Learning

The meaning of a word is closely linked to sociocultural factors that can change over time and location, resulting in corresponding meaning changes. Taking a global view of words and their meanings in a widely used language, such as English, may require us to capture more refined semantics for use in time-specific or location-aware situations, such as the study of cultural trends or language use. However, popular vector representations for words do not adequately include temporal or spatial information. In this work, we present a model for learning word representation conditioned on time and location. In addition to capturing meaning changes over time and location, we require that the resulting word embeddings retain salient semantic and geometric properties. We train our model on time- and location-stamped corpora, and show using both quantitative and qualitative evaluations that it can capture semantics across time and locations. We note that our model compares favorably with the state-of-the-art for time-specific embedding, and serves as a new benchmark for location-specific embeddings.

IlliniMet: Illinois System for Metaphor Detection with Contextual and Linguistic Information
Hongyu Gong | Kshitij Gupta | Akriti Jain | Suma Bhat
Proceedings of the Second Workshop on Figurative Language Processing

Metaphors are a rhetorical use of words based on conceptual mappings, as opposed to their literal use. Metaphor detection, an important task in language understanding, aims to identify metaphorical words in given sentences. We present IlliniMet, a system to automatically detect metaphorical words. Our model combines the strengths of the contextualized representations of the widely used RoBERTa model and the rich linguistic information from external resources such as WordNet. The proposed approach is shown to outperform strong baselines on a benchmark dataset. Our best model achieves F1 scores of 73.0% on VUA ALLPOS, 77.1% on VUA VERB, 70.3% on TOEFL ALLPOS and 71.9% on TOEFL VERB.
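
A minimal sketch of the combination described above, under stated assumptions: a contextual vector for the target word from roberta-base is concatenated with simple WordNet features. The specific features (synset count, maximum hypernym depth) are illustrative choices, not the paper's exact feature set.

```python
# Sketch: contextual word vector + WordNet features, as input to a classifier.
import torch
from transformers import AutoModel, AutoTokenizer
from nltk.corpus import wordnet as wn  # requires a prior nltk.download("wordnet")

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")
encoder.eval()

def contextual_vector(sentence: str, start: int, end: int) -> torch.Tensor:
    """Mean hidden state over the subword tokens covering chars [start, end)."""
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0].tolist()
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[0]
    idx = [i for i, (a, b) in enumerate(offsets) if a < end and b > start]
    return hidden[idx].mean(dim=0)

def wordnet_features(word: str) -> torch.Tensor:
    """Illustrative lexical features: synset count and maximum hypernym depth."""
    synsets = wn.synsets(word)
    depth = max((s.max_depth() for s in synsets), default=0)
    return torch.tensor([float(len(synsets)), float(depth)])

sent = "He planted the seed of doubt in her mind."
start = sent.index("planted")
feats = torch.cat([contextual_vector(sent, start, start + len("planted")),
                   wordnet_features("planted")])
print(feats.shape)  # 770-d input to a feedforward metaphor/literal classifier
```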

2019

Equipping Educational Applications with Domain Knowledge
Tarek Sakakini | Hongyu Gong | Jong Yoon Lee | Robert Schloss | JinJun Xiong | Suma Bhat
Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications

One of the challenges of building natural language processing (NLP) applications for education is finding a large domain-specific corpus for the subject of interest (e.g., history or science). To address this challenge, we propose a tool, Dexter, that extracts a subject-specific corpus from a heterogeneous corpus, such as Wikipedia, by relying on a small seed corpus and distributed document representations. We empirically show the impact of the generated corpus on language modeling, estimating word embeddings, and consequently, distractor generation, resulting in better performance than using a general-domain corpus, a heuristically constructed domain-specific corpus, or a corpus generated by a popular system, BootCaT.
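
The sketch below illustrates the seed-driven extraction idea under simplifying assumptions: documents are ranked by similarity to the centroid of a seed corpus. Dexter uses distributed document representations; TF-IDF vectors stand in here to keep the sketch self-contained, and all documents are hypothetical.

```python
# Hedged sketch of seed-driven corpus extraction with TF-IDF stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed_docs = [  # hypothetical seed corpus for a biology course
    "mitosis is the division of a cell nucleus",
    "enzymes catalyze biochemical reactions in the cell",
]
candidates = [  # hypothetical pool drawn from a heterogeneous corpus
    "the cell cycle includes interphase and mitosis",
    "the treaty of versailles ended the first world war",
    "proteins fold into specific three-dimensional shapes",
]

vectorizer = TfidfVectorizer().fit(seed_docs + candidates)
centroid = np.asarray(vectorizer.transform(seed_docs).mean(axis=0))
scores = cosine_similarity(vectorizer.transform(candidates), centroid).ravel()
# Keep the highest-scoring documents as the subject-specific corpus.
for score, doc in sorted(zip(scores, candidates), reverse=True):
    print(round(float(score), 3), doc)
```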

Reinforcement Learning Based Text Style Transfer without Parallel Training Corpus
Hongyu Gong | Suma Bhat | Lingfei Wu | JinJun Xiong | Wen-mei Hwu
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Text style transfer rephrases a text from a source style (e.g., informal) to a target style (e.g., formal) while keeping its original meaning. Despite the success existing works have achieved using a parallel corpus for the two styles, transferring text style has proven significantly more challenging when there is no parallel training corpus. In this paper, we address this challenge by using a reinforcement-learning-based generator-evaluator architecture. Our generator employs an attention-based encoder-decoder to transfer a sentence from the source style to the target style. Our evaluator is an adversarially trained style discriminator with semantic and syntactic constraints that scores the generated sentence for style, meaning preservation, and fluency. Experimental results on two different style transfer tasks, sentiment transfer and formality transfer, show that our model outperforms state-of-the-art approaches. Furthermore, we perform a manual evaluation that demonstrates the effectiveness of the proposed method using subjective metrics of generated text quality.

PaRe: A Paper-Reviewer Matching Approach Using a Common Topic Space
Omer Anjum | Hongyu Gong | Suma Bhat | Wen-Mei Hwu | JinJun Xiong
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Finding the right reviewers to assess the quality of conference submissions is a time-consuming process for conference organizers. Given the importance of this step, various automated reviewer-paper matching solutions have been proposed to alleviate the burden. Prior approaches, including bag-of-words models and probabilistic topic models, are less effective at dealing with the vocabulary mismatch and partial topic overlap between a submission and a reviewer’s profile. Our approach, the common topic model, jointly models the topics common to the submission and the reviewer’s profile while relying on abstract topic vectors. Experiments and insightful evaluations on two datasets demonstrate that the proposed method achieves consistent improvements compared to the state-of-the-art.

2018

Preposition Sense Disambiguation and Representation
Hongyu Gong | Jiaqi Mu | Suma Bhat | Pramod Viswanath
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Prepositions are highly polysemous, and their variegated senses encode significant semantic information. In this paper we relate each preposition’s left and right context, and their interplay, to the geometry of the word vectors to the left and right of the preposition. Extracting these features from a large corpus and using them with machine learning models yields an efficient preposition sense disambiguation (PSD) algorithm that is comparable to or better than the state of the art on two benchmark datasets. Because our approach relies on no linguistic tools, we can scale the PSD algorithm to a large corpus and learn sense-specific preposition representations. The crucial abstraction of preposition senses as word representations permits their use in downstream applications, phrasal verb paraphrasing and preposition selection, with new state-of-the-art results.
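
A minimal sketch of the kind of context features this describes, assuming a pretrained word-vector lookup (the `embed` dictionary below uses random stand-in vectors): a preposition occurrence is represented by the averaged vectors of its left and right contexts, concatenated, ready for a standard sense classifier.

```python
# Sketch of left/right-context features for preposition sense disambiguation.
import numpy as np

def preposition_features(tokens, prep_index, embed, dim):
    """Concatenate the averaged word vectors of the left and right contexts."""
    left = [embed[w] for w in tokens[:prep_index] if w in embed]
    right = [embed[w] for w in tokens[prep_index + 1:] if w in embed]
    left_avg = np.mean(left, axis=0) if left else np.zeros(dim)
    right_avg = np.mean(right, axis=0) if right else np.zeros(dim)
    return np.concatenate([left_avg, right_avg])

# Toy usage with random stand-in vectors.
rng = np.random.default_rng(0)
tokens = "she succeeded in the face of adversity".split()
embed = {w: rng.normal(size=50) for w in tokens}
x = preposition_features(tokens, tokens.index("in"), embed, 50)
print(x.shape)  # (100,): input to an SVM or logistic-regression sense classifier
```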

Document Similarity for Texts of Varying Lengths via Hidden Topics
Hongyu Gong | Tarek Sakakini | Suma Bhat | JinJun Xiong
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Measuring similarity between texts is an important task for several applications. Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual, and abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge into text matching.
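
A minimal sketch of matching through hidden topics, under the assumption that word vectors for both texts are available: the topics are the top left singular vectors of the long document's word-vector matrix, and the shorter text is scored by how well its word vectors are reconstructed by projection onto that subspace. The vectors below are random stand-ins.

```python
# Sketch of document matching in a common space of hidden topics.
import numpy as np

def hidden_topics(long_doc_vectors, k=4):
    """Top-k left singular vectors of the long document's word-vector matrix."""
    u, _, _ = np.linalg.svd(np.stack(long_doc_vectors, axis=1), full_matrices=False)
    return u[:, :k]  # (dim, k) orthonormal basis

def match_score(topics, short_doc_vectors):
    """Mean cosine between each word vector and its projection onto the topic span."""
    sims = []
    for v in short_doc_vectors:
        proj = topics @ (topics.T @ v)
        sims.append(v @ proj / (np.linalg.norm(v) * np.linalg.norm(proj) + 1e-12))
    return float(np.mean(sims))

rng = np.random.default_rng(0)
long_doc = [rng.normal(size=100) for _ in range(400)]  # word vectors of a long document
summary = [rng.normal(size=100) for _ in range(30)]    # word vectors of a candidate summary
print(match_score(hidden_topics(long_doc), summary))
```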

Embedding Syntax and Semantics of Prepositions via Tensor Decomposition
Hongyu Gong | Suma Bhat | Pramod Viswanath
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Prepositions are among the most frequent words in English and play complex roles in the syntax and semantics of sentences. Not surprisingly, they pose well-known difficulties in automatic processing of sentences (prepositional attachment ambiguities and idiosyncratic uses in phrases). Existing methods for preposition representation treat prepositions no differently from content words (e.g., word2vec and GloVe). In addition, recent studies aiming at solving prepositional attachment and preposition selection problems depend heavily on external linguistic resources and use dataset-specific word representations. In this paper we use word-triple counts (one of the triples being a preposition) to capture a preposition’s interaction with its attachment and complement. We then derive preposition embeddings via tensor decomposition on a large unlabeled corpus. We reveal a new geometry involving Hadamard products and empirically demonstrate its utility in paraphrasing phrasal verbs. Furthermore, our preposition embeddings are used as simple features in two challenging downstream tasks: preposition selection and prepositional attachment disambiguation. We achieve results comparable to or better than the state-of-the-art on multiple standardized datasets.
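
A toy sketch of the embedding-derivation step, assuming the tensorly library: a (left word, preposition, right word) count tensor is decomposed with CP/PARAFAC, and the preposition-mode factors serve as embeddings. The counts here are synthetic; the paper decomposes triple counts from a large unlabeled corpus.

```python
# Toy sketch: preposition embeddings from a word-triple count tensor.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# counts[i, j, k] = frequency of (left word i, preposition j, right word k);
# synthetic Poisson counts stand in for real corpus statistics here.
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(30, 10, 30)).astype(float)

# CP decomposition of the (log-scaled) count tensor.
weights, factors = parafac(tl.tensor(np.log1p(counts)), rank=8)
left_emb, prep_emb, right_emb = factors
print(prep_emb.shape)  # (10, 8): one 8-dimensional embedding per preposition
```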

2017

MORSE: Semantic-ally Drive-n MORpheme SEgment-er
Tarek Sakakini | Suma Bhat | Pramod Viswanath
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We present in this paper a novel framework for morpheme segmentation which uses the morpho-syntactic regularities preserved by word representations, in addition to orthographic features, to segment words into morphemes. This framework is the first to consider vocabulary-wide syntactico-semantic information for this task. We also analyze the deficiencies of available benchmarking datasets and introduce our own dataset that was created on the basis of compositionality. We validate our algorithm across datasets and present state-of-the-art results.
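
A toy sketch of the vector-regularity cue that such a framework can exploit (an illustration, not MORSE itself): a suffix hypothesis like "-ed" gains support when the difference vectors of word pairs sharing that suffix point in a consistent direction. The random embeddings below make the printed score meaningless; with real pretrained vectors the consistency is informative.

```python
# Toy sketch of checking a suffix hypothesis via embedding regularities.
import numpy as np

def suffix_consistency(pairs, embed):
    """Mean pairwise cosine of difference vectors for (stem, stem+suffix) pairs."""
    diffs = [embed[w2] - embed[w1] for w1, w2 in pairs if w1 in embed and w2 in embed]
    sims = []
    for i in range(len(diffs)):
        for j in range(i + 1, len(diffs)):
            a, b = diffs[i], diffs[j]
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims)) if sims else 0.0

rng = np.random.default_rng(0)
words = ["walk", "walked", "play", "played", "jump", "jumped"]
embed = {w: rng.normal(size=50) for w in words}  # random stand-in vectors
print(suffix_consistency([("walk", "walked"), ("play", "played"), ("jump", "jumped")], embed))
```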

Representing Sentences as Low-Rank Subspaces
Jiaqi Mu | Suma Bhat | Pramod Viswanath
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Sentences are important semantic units of natural language. A generic, distributional representation of sentences that can capture the latent semantics is beneficial to multiple downstream applications. We observe a simple geometry of sentences: the word representations of a given sentence (on average 10.23 words across all SemEval datasets, with a standard deviation of 4.84) roughly lie in a low-rank subspace (roughly, rank 4). Motivated by this observation, we represent a sentence by the low-rank subspace spanned by its word vectors. Such an unsupervised representation is empirically validated via semantic textual similarity tasks on 19 different datasets, where it outperforms sophisticated neural network models, including skip-thought vectors, by 15% on average.
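
Since the abstract fully specifies the representation, a short sketch: stack a sentence's word vectors, keep the top singular directions as the subspace, and compare two sentences through the cosines of the principal angles between their subspaces (the exact similarity normalization used in the paper may differ). The word vectors below are random stand-ins for pretrained embeddings.

```python
# Sketch of the rank-4 subspace sentence representation and a subspace similarity.
import numpy as np

def sentence_subspace(word_vectors, rank=4):
    """Orthonormal basis (dim x rank) of the span of a sentence's word vectors."""
    u, _, _ = np.linalg.svd(np.stack(word_vectors, axis=1), full_matrices=False)
    return u[:, :rank]

def subspace_similarity(a, b):
    """Aggregate the cosines of the principal angles between two subspaces."""
    cosines = np.linalg.svd(a.T @ b, compute_uv=False)
    return float(np.sqrt(np.mean(cosines ** 2)))

rng = np.random.default_rng(1)
s1 = sentence_subspace([rng.normal(size=300) for _ in range(9)])
s2 = sentence_subspace([rng.normal(size=300) for _ in range(12)])
print(subspace_similarity(s1, s2))
```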

2014

Predicting Attrition Along the Way: The UIUC Model
Bussaba Amnueypornsakul | Suma Bhat | Phakpoom Chinprutthiwong
Proceedings of the EMNLP 2014 Workshop on Analysis of Large Scale Social Interaction in MOOCs

Shallow Analysis Based Assessment of Syntactic Complexity for Automated Speech Scoring
Suma Bhat | Huichao Xue | Su-Youn Yoon
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Machine-guided Solution to Mathematical Word Problems
Bussaba Amnueypornsakul | Suma Bhat
Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing

2013

Statistical Stemming for Kannada
Suma Bhat
Proceedings of the 4th Workshop on South and Southeast Asian Natural Language Processing

2012

Assessment of ESL Learners’ Syntactic Competence Based on Similarity Measures
Su-Youn Yoon | Suma Bhat
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

Vocabulary Profile as a Measure of Vocabulary Sophistication
Su-Youn Yoon | Suma Bhat | Klaus Zechner
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP

Morpheme Segmentation for Kannada Standing on the Shoulder of Giants
Suma Bhat
Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing

2009

Knowing the Unseen: Estimating Vocabulary Size over Unseen Samples
Suma Bhat | Richard Sproat
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

2007

UIUC: A Knowledge-rich Approach to Identifying Semantic Relations between Nominals
Brandon Beamer | Suma Bhat | Brant Chee | Andrew Fister | Alla Rozovskaya | Roxana Girju
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)