Rajakrishnan Rajkumar

Also published as: Rajkumar Rajakrishnan


2021

Effects of Duration, Locality, and Surprisal in Speech Disfluency Prediction in English Spontaneous Speech
Samvit Dammalapati | Rajakrishnan Rajkumar | Sumeet Agarwal
Proceedings of the Society for Computation in Linguistics 2021

2019

Surprisal and Interference Effects of Case Markers in Hindi Word Order
Sidharth Ranjan | Sumeet Agarwal | Rajakrishnan Rajkumar
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

Based on the Production-Distribution-Comprehension (PDC) account of language processing, we formulate two distinct hypotheses about case marking, word order choices and processing in Hindi. Our first hypothesis is that Hindi tends to optimize for processing efficiency at both lexical and syntactic levels, and we quantify the role of case markers in this process. For the task of predicting, with a machine learning model, the reference sentence occurring in a corpus from among its meaning-equivalent grammatical variants, surprisal estimates from an artificial version of the language (i.e., Hindi without any case markers) result in lower prediction accuracy than estimates from natural Hindi. Our second hypothesis is that Hindi tends to minimize interference due to case markers while ordering preverbal constituents. We show that Hindi tends to avoid placing constituents whose heads are marked by identical case inflections next to each other. Our findings are consistent with PDC assumptions, and we discuss their implications for language production, learning, and universals.
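The prediction task described above can be illustrated with a toy sketch: a binary classifier decides which sentence of a pair is the corpus reference, using a surprisal-difference feature. Everything below is synthetic and constructed only to mirror the reported qualitative pattern (a weaker surprisal signal without case markers); it is not the paper's data or code.

```python
# Toy illustration of reference-sentence prediction from surprisal differences.
# The synthetic feature values are assumptions built to mimic the finding that
# surprisal from natural Hindi is more predictive than surprisal from an
# artificial, case-marker-free version of the language.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 800
y = rng.integers(0, 2, n)  # 1 = sentence A is the corpus reference
# Feature: surprisal(A) - surprisal(B); references tend to be less surprising.
x_natural = np.where(y == 1, -1.0, 1.0) + rng.normal(0.0, 2.0, n)  # strong signal
x_no_case = np.where(y == 1, -0.3, 0.3) + rng.normal(0.0, 2.0, n)  # weak signal

for name, x in [("natural Hindi", x_natural), ("no case markers", x_no_case)]:
    acc = cross_val_score(LogisticRegression(), x.reshape(-1, 1), y, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.3f}")
```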

A Simple Approach to Classify Fictional and Non-Fictional Genres
Mohammed Rameez Qureshi | Sidharth Ranjan | Rajakrishnan Rajkumar | Kushal Shah
Proceedings of the Second Workshop on Storytelling

In this work, we deploy a logistic regression classifier to ascertain whether a given document belongs to the fiction or non-fiction genre. For genre identification, previous work proposed three classes of features, viz., low-level (character-level and token counts), high-level (lexical and syntactic information) and derived features (type-token ratio, average word length or average sentence length). Using recursive feature elimination with cross-validation (RFECV), we perform feature selection experiments on an exhaustive set of nineteen features (belonging to all the classes mentioned above) extracted from Brown corpus text. Two simple features, viz., the ratio of adverbs to adjectives and the ratio of adjectives to pronouns, turn out to be the most significant. Subsequently, our classification experiments aimed at genre identification of documents from the Brown and Baby BNC corpora demonstrate that a classifier containing just the two aforementioned features performs on par with one containing the exhaustive feature set.
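Since the abstract names a concrete pipeline (logistic regression plus RFECV), here is a minimal runnable sketch using scikit-learn's RFECV. The synthetic data stands in for the nineteen document-level features; nothing here reproduces the paper's actual features or corpora.

```python
# Minimal RFECV feature-selection sketch with a logistic regression classifier.
# X stands in for 19 document-level features (low-level, high-level, derived);
# y stands in for fiction (1) vs. non-fiction (0) labels.
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 19))
y = rng.integers(0, 2, 500)

selector = RFECV(
    estimator=LogisticRegression(max_iter=1000),
    step=1,                      # eliminate one feature per round
    cv=StratifiedKFold(5),
    scoring="accuracy",
)
selector.fit(X, y)
print("selected features:", np.flatnonzero(selector.support_))
print("rankings:", selector.ranking_)
```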

Expectation and Locality Effects in the Prediction of Disfluent Fillers and Repairs in English Speech
Samvit Dammalapati | Rajakrishnan Rajkumar | Sumeet Agarwal
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop

This study examines the role of three influential theories of language processing, viz., Surprisal Theory, the Uniform Information Density (UID) hypothesis and Dependency Locality Theory (DLT), in predicting disfluencies in speech production. To this end, we incorporate features based on lexical surprisal, word duration and DLT integration and storage costs into logistic regression classifiers aimed at predicting disfluencies in the Switchboard corpus of English conversational speech. We find that disfluencies occur in the face of upcoming difficulties and that speakers tend to handle this by lessening cognitive load before disfluencies occur. Further, we see that reparanda behave differently from disfluent fillers, possibly because the lessening of cognitive load also happens in the word choice of the reparandum, i.e., in the disfluency itself. While the UID hypothesis does not seem to play a significant role in disfluency prediction, lexical surprisal and DLT costs do give promising results in explaining language production. Finally, we find that, as a means of lessening cognitive load before upcoming difficulties, speakers take more time on words preceding disfluencies, making duration a key element in understanding disfluencies.
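A hedged sketch of the kind of classifier the abstract describes: logistic regression over per-word surprisal, duration, and DLT cost features. The features and labels below are synthetic placeholders, with an assumed linear signal injected so the pipeline shows above-chance accuracy; this is not Switchboard data or the paper's model.

```python
# Sketch of a disfluency classifier over surprisal, duration, and DLT costs.
# All values are synthetic; the weights used to generate the labels are
# assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
surprisal = rng.gamma(2.0, 2.0, n)    # lexical surprisal (bits)
duration = rng.gamma(2.0, 0.1, n)     # word duration (seconds)
integration = rng.integers(0, 5, n)   # DLT integration cost
storage = rng.integers(0, 5, n)       # DLT storage cost

X = np.column_stack([surprisal, duration, integration, storage])
score = 0.5 * surprisal + 8.0 * duration + 0.3 * integration  # assumed signal
y = (score + rng.normal(0.0, 1.0, n) > np.median(score)).astype(int)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```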

2018

Uniform Information Density Effects on Syntactic Choice in Hindi
Ayush Jain | Vishal Singh | Sidharth Ranjan | Rajakrishnan Rajkumar | Sumeet Agarwal
Proceedings of the Workshop on Linguistic Complexity and Natural Language Processing

According to the UNIFORM INFORMATION DENSITY (UID) hypothesis (Levy and Jaeger, 2007; Jaeger, 2010), speakers tend to distribute information uniformly across the signal while producing language. The works cited above studied syntactic reduction in language production at particular choice points in a sentence. In contrast, we use a variant of the UID hypothesis to investigate the extent to which word order choices in Hindi are influenced by the drive to minimize the variance of information across entire sentences. To this end, we propose multiple lexical and syntactic measures (at both word and constituent levels) to capture the uniform spread of information across a sentence. Subsequently, we incorporate these measures into machine learning models aimed at distinguishing a naturally occurring corpus sentence from its grammatical variants expressing the same idea. Our results indicate that our UID measures are not a significant factor in predicting the corpus sentence in the presence of lexical surprisal, a competing control predictor. Finally, in light of other recent work, we conclude with a discussion of why UID may not be suitable as a theory of word order.
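One natural way to operationalize a sentence-level UID measure of the sort described above is the variance of per-word surprisal across a sentence: lower variance means a more uniform spread of information. The function and the surprisal values below are illustrative assumptions, not the paper's actual measures.

```python
# Illustrative sentence-level UID measure: variance of per-word surprisal.
# In practice surprisal would come from a language model,
# -log2 P(word | context); the numbers below are made up.
import statistics

def uid_variance(surprisals):
    """Population variance of per-word surprisal (lower = more uniform)."""
    return statistics.pvariance(surprisals)

reference = [4.1, 3.8, 4.3, 4.0, 3.9]  # hypothetical corpus sentence
variant = [1.2, 9.5, 2.0, 8.8, 1.5]    # hypothetical reordered variant

print(uid_variance(reference))  # small: information spread evenly
print(uid_variance(variant))    # large: information bunched up
```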

2016

Quantifying sentence complexity based on eye-tracking measures
Abhinav Deep Singh | Poojan Mehta | Samar Husain | Rajkumar Rajakrishnan
Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC)

Eye-tracking reading times have been shown to reflect cognitive processes underlying sentence comprehension. However, the use of reading times in NLP applications is an underexplored area of research. In this initial work, we build an automatic system to assess sentence complexity using automatically predicted eye-tracking reading time measures and demonstrate the efficacy of these reading times for a well-known NLP task, namely, readability assessment. We use a machine learning model and a set of features known to be significant predictors of reading times in order to learn per-word reading times from a corpus of English text annotated with human readers' reading times. Subsequently, we use the model to predict reading times for novel text in the context of the aforementioned task. A model based only on reading times gave competitive results compared to systems that use extensive syntactic features to compute linguistic complexity. Our work is, to the best of our knowledge, the first study to show that automatically predicted reading times can successfully model the difficulty of a text and can be deployed in practical text processing applications.
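The two-stage design the abstract outlines (learn per-word reading times from word-level predictors, then score novel text with the trained model) can be sketched as follows. The predictor set, generating weights, and data are assumptions for illustration; the paper's feature set and corpora are not reproduced here.

```python
# Sketch of the two-stage setup: (1) regress per-word reading times on
# word-level predictors, (2) aggregate predicted times as a complexity score.
# All data are synthetic; the generating weights are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# Stand-ins for known reading-time predictors, e.g. word length,
# log frequency, surprisal.
X_train = rng.normal(size=(2000, 3))
rt_train = 200.0 + X_train @ np.array([30.0, -20.0, 25.0]) \
    + rng.normal(0.0, 10.0, 2000)  # per-word reading times (ms)

model = LinearRegression().fit(X_train, rt_train)

def sentence_complexity(word_features):
    """Mean predicted per-word reading time (higher = harder to read)."""
    return model.predict(word_features).mean()

print(sentence_complexity(rng.normal(size=(12, 3))))  # score for a 12-word sentence
```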

2012

Minimal Dependency Length in Realization Ranking
Michael White | Rajakrishnan Rajkumar
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

A Word Reordering Model for Improved Machine Translation
Karthik Visweswariah | Rajakrishnan Rajkumar | Ankur Gandhe | Ananthakrishnan Ramanathan | Jiri Navratil
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

Linguistically Motivated Complementizer Choice in Surface Realization
Rajakrishnan Rajkumar | Michael White
Proceedings of the UCNLG+Eval: Language Generation and Evaluation Workshop

The OSU System for Surface Realization at Generation Challenges 2011
Rajakrishnan Rajkumar | Dominic Espinosa | Michael White
Proceedings of the 13th European Workshop on Natural Language Generation

2010

Further Meta-Evaluation of Broad-Coverage Surface Realization
Dominic Espinosa | Rajakrishnan Rajkumar | Michael White | Shoshana Berleant
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Designing Agreement Features for Realization Ranking
Rajakrishnan Rajkumar | Michael White
Coling 2010: Posters

2009

Exploiting Named Entity Classes in CCG Surface Realization
Rajakrishnan Rajkumar | Michael White | Dominic Espinosa
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers

Perceptron Reranking for CCG Realization
Michael White | Rajakrishnan Rajkumar
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

Grammar Engineering for CCG using Ant and XSLT
Scott Martin | Rajakrishnan Rajkumar | Michael White
Proceedings of the Workshop on Software Engineering, Testing, and Quality Assurance for Natural Language Processing (SETQA-NLP 2009)

2008

A More Precise Analysis of Punctuation for Broad-Coverage Surface Realization with CCG
Michael White | Rajakrishnan Rajkumar
Coling 2008: Proceedings of the workshop on Grammar Engineering Across Frameworks

2007

Towards broad coverage surface realization with CCG
Michael White | Rajakrishnan Rajkumar | Scott Martin
Proceedings of the Workshop on Using corpora for natural language generation