2025
Large Language Models Do Multi-Label Classification Differently
Marcus Ma | Georgios Chochlakis | Niyantha Maruthu Pandiyan | Jesse Thomason | Shrikanth Narayanan
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Multi-label classification is prevalent in real-world settings, but the behavior of Large Language Models (LLMs) in this setting is understudied. We investigate how autoregressive LLMs perform multi-label classification, focusing on subjective tasks, by analyzing the output distributions of the models at each label generation step. We find that the initial probability distribution for the first label often does not reflect the eventual final output, even in terms of relative order, and that LLMs tend to suppress all but one label at each generation step. We further observe that as model scale increases, token distributions exhibit lower entropy and higher single-label confidence, yet the internal relative ranking of the labels improves. Finetuning methods such as supervised finetuning and reinforcement learning amplify this phenomenon. We introduce the task of distribution alignment for multi-label settings: aligning LLM-derived label distributions with empirical distributions estimated from annotator responses in subjective tasks. We propose both zero-shot and supervised methods that improve both alignment and predictive performance over existing approaches. We find that one method, taking the max probability over all label generation distributions instead of just the initial probability distribution, improves both distribution alignment and overall classification F1 without adding any computation.
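As a rough illustration of the max-over-steps idea, the sketch below scores each candidate label by the maximum probability it receives across all label generation steps rather than only the first step; the data layout and helper function are hypothetical, not the paper's released code.

```python
# Illustrative sketch: aggregate per-step label probabilities by taking, for each
# candidate label, the maximum probability it receives across all generation steps,
# instead of reading probabilities only from the first step.
from typing import Dict, List

def aggregate_label_scores(step_distributions: List[Dict[str, float]]) -> Dict[str, float]:
    """step_distributions[t][label] = probability assigned to `label` at generation step t."""
    scores: Dict[str, float] = {}
    for dist in step_distributions:
        for label, prob in dist.items():
            scores[label] = max(scores.get(label, 0.0), prob)
    return scores

# Example: the first-step distribution concentrates on "joy", but "sadness" peaks later.
steps = [
    {"joy": 0.80, "sadness": 0.10, "anger": 0.05},
    {"joy": 0.05, "sadness": 0.85, "anger": 0.03},
]
print(aggregate_label_scores(steps))  # {'joy': 0.8, 'sadness': 0.85, 'anger': 0.05}
```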
Humans Hallucinate Too: Language Models Identify and Correct Subjective Annotation Errors With Label-in-a-Haystack Prompts
Georgios Chochlakis | Peter Wu | Tikka Arjun Singh Bedi | Marcus Ma | Kristina Lerman | Shrikanth Narayanan
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Modeling complex subjective tasks in Natural Language Processing, such as recognizing emotion and morality, is considerably challenging due to significant variation in human annotations. This variation often reflects reasonable differences in semantic interpretations rather than mere noise, necessitating methods to distinguish between legitimate subjectivity and error. We address this challenge by exploring label verification in these contexts using Large Language Models (LLMs). First, we propose a simple In-Context Learning binary filtering baseline that estimates the reasonableness of a document-label pair. We then introduce the Label-in-a-Haystack setting: the query and its label(s) are included in the demonstrations shown to LLMs, which are prompted to predict the label(s) again, while receiving task-specific instructions (e.g., for emotion recognition) rather than instructions to copy labels. We show that failures to copy the label(s) to the LLM's output are task-relevant and informative. Building on this, we propose the Label-in-a-Haystack Rectification (LiaHR) framework for subjective label correction: when the model outputs diverge from the reference gold labels, we assign the generated labels to the example instead of discarding it. This approach can be integrated into annotation pipelines to enhance signal-to-noise ratios. Comprehensive analyses, human evaluations, and ecological validity studies verify the utility of LiaHR for label correction. Code is available at https://github.com/gchochla/liahr.
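The rectification rule itself can be summarized in a few lines. The sketch below is a schematic of that decision logic, where predict_with_label_in_haystack is a hypothetical stand-in for the LLM call that re-predicts labels for a query shown together with its reference labels in the prompt; it is not an API from the released code.

```python
# Schematic sketch of Label-in-a-Haystack Rectification (LiaHR): if the model fails to
# copy the reference labels back for a query embedded in the demonstrations, keep the
# model's labels instead of discarding the example.
from typing import Callable, Set, Tuple

def rectify_labels(
    query: str,
    gold_labels: Set[str],
    predict_with_label_in_haystack: Callable[[str, Set[str]], Set[str]],
) -> Tuple[Set[str], bool]:
    """Returns the (possibly corrected) labels and whether a correction was applied."""
    predicted = predict_with_label_in_haystack(query, gold_labels)
    if predicted == gold_labels:
        return gold_labels, False   # model copied the labels: keep the annotation as-is
    return predicted, True          # divergence is treated as task-relevant signal
```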
Creating a Lens of Chinese Culture: A Multimodal Dataset for Chinese Pun Rebus Art Understanding
Tuo Zhang | Tiantian Feng | Yibin Ni | Mengqin Cao | Ruying Liu | Kiana Avestimehr | Katharine Butler | Yanjun Weng | Mi Zhang | Shrikanth Narayanan | Salman Avestimehr
Findings of the Association for Computational Linguistics: ACL 2025
Large vision-language models (VLMs) have demonstrated remarkable abilities in understanding everyday content. However, their performance in the domain of art, particularly culturally rich art forms, remains less explored. As a pearl of human wisdom and creativity, art encapsulates complex cultural narratives and symbolism. In this paper, we offer the Pun Rebus Art Dataset, a multimodal dataset for art understanding deeply rooted in traditional Chinese culture. We focus on three primary tasks: identifying salient visual elements, matching elements with their symbolic meanings, and explaining the conveyed messages. Our evaluation reveals that state-of-the-art VLMs struggle with these tasks, often providing biased and hallucinated explanations and showing limited improvement through in-context learning. By releasing the Pun Rebus Art Dataset, we aim to facilitate the development of VLMs that can better understand and interpret culturally specific content, promoting greater inclusiveness beyond English-based corpora. The dataset and evaluation code are available at [this link](https://github.com/zhang-tuo-pdf/Pun-Rebus-Art-Benchmark).
Aggregation Artifacts in Subjective Tasks Collapse Large Language Models’ Posteriors
Georgios Chochlakis | Alexandros Potamianos | Kristina Lerman | Shrikanth Narayanan
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
In-context Learning (ICL) has become the primary method for performing natural language tasks with Large Language Models (LLMs). The knowledge acquired during pre-training is crucial for this few-shot capability, providing the model with task priors. However, recent studies have shown that ICL predominantly relies on retrieving task priors rather than “learning” to perform tasks. This limitation is particularly evident in complex subjective domains such as emotion and morality, where priors significantly influence posterior predictions. In this work, we examine whether this is the result of the aggregation used in corresponding datasets, where trying to combine low-agreement, disparate annotations might lead to annotation artifacts that create detrimental noise in the prompt. Moreover, we evaluate the posterior bias towards certain annotators by grounding our study in appropriate, quantitative measures of LLM priors. Our results indicate that aggregation is a confounding factor in the modeling of subjective tasks, and advocate focusing on modeling individuals instead. However, aggregation does not explain the entire gap between ICL and the state of the art, meaning other factors in such tasks also account for the observed phenomena. Finally, by rigorously studying annotator-level labels, we find that it is possible for minority annotators to both better align with LLMs and have their perspectives further amplified.
CHATTER: A character-attribution dataset for narrative understanding
Sabyasachee Baruah | Shrikanth Narayanan
Proceedings of the 7th Workshop on Narrative Understanding
Computational narrative understanding studies the identification, description, and interaction of the elements of a narrative: characters, attributes, events, and relations. Narrative research has given considerable attention to defining and classifying character types. However, these character-type taxonomies do not generalize well because they are small, too simple, or specific to a domain. We require robust and reliable benchmarks to test whether narrative models truly understand the nuances of the character’s development in the story. Our work addresses this by curating the CHATTER dataset that labels whether a character portrays some attribute for 88,124 character-attribute pairs, encompassing 2,998 characters, 12,967 attributes, and 660 movies. We validate a subset of CHATTER, called CHATTEREVAL, using human annotations to serve as an evaluation benchmark for the character attribution task in movie scripts. CHATTEREVAL also assesses narrative understanding and the long-context modeling capacity of language models.
2023
Character Coreference Resolution in Movie Screenplays
Sabyasachee Baruah | Shrikanth Narayanan
Findings of the Association for Computational Linguistics: ACL 2023
Movie screenplays have a distinct narrative structure that segments the story into scenes containing interleaving descriptions of actions, locations, and character dialogues. A typical screenplay spans several scenes and can include long-range dependencies between characters and events. A holistic document-level understanding of the screenplay requires several natural language processing capabilities, such as parsing, character identification, coreference resolution, action recognition, summarization, and attribute discovery. In this work, we develop scalable and robust methods to extract the structural information and character coreference clusters from full-length movie screenplays. We curate two datasets for screenplay parsing and character coreference: MovieParse and MovieCoref, respectively. We build a robust screenplay parser to handle inconsistencies in screenplay formatting and leverage the parsed output to link co-referring character mentions. Our coreference models can scale to long screenplay documents without drastically increasing their memory footprints.
Domain Adaptation for Sentiment Analysis Using Robust Internal Representations
Mohammad Rostami | Digbalay Bose | Shrikanth Narayanan | Aram Galstyan
Findings of the Association for Computational Linguistics: EMNLP 2023
Sentiment analysis is a costly yet necessary task for enterprises that want to study the opinions of their customers in order to improve their products and determine optimal marketing strategies. Due to the existence of a wide range of domains across different products and services, cross-domain sentiment analysis methods have received significant attention. These methods mitigate the domain gap between different applications by training cross-domain generalizable classifiers which relax the need for data annotation for each domain. We develop a domain adaptation method which induces large margins between data representations that belong to different classes in an embedding space. This embedding space is trained to be domain-agnostic by matching the data distributions across the domains. Large interclass margins in the source domain help to reduce the effect of “domain shift” in the target domain. Theoretical and empirical analyses are provided to demonstrate that the proposed method is effective.
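A minimal PyTorch sketch of the kind of objective described above, combining a large-margin classification loss on source-domain embeddings with a simple distribution-matching term between source and target embeddings; the specific loss functions here are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative PyTorch sketch: large interclass margins on source-domain embeddings
# plus a simple distribution-matching term that pulls source and target embedding
# statistics together. Loss choices here are assumptions, not the paper's exact ones.
import torch
import torch.nn.functional as F

def domain_adaptation_loss(src_emb, src_labels, tgt_emb, classifier, margin=1.0, lam=0.1):
    # Large-margin classification on the labeled source domain.
    logits = classifier(src_emb)
    cls_loss = F.multi_margin_loss(logits, src_labels, margin=margin)
    # Crude distribution matching: align the means of source and target embeddings.
    match_loss = torch.norm(src_emb.mean(dim=0) - tgt_emb.mean(dim=0), p=2)
    return cls_loss + lam * match_loss
```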
Clinical note section classification on doctor-patient conversations in low-resourced settings
Zhuohao Chen | Jangwon Kim | Yang Liu | Shrikanth Narayanan
Proceedings of the Third Workshop on NLP for Medical Conversations
2022
Leveraging Open Data and Task Augmentation to Automated Behavioral Coding of Psychotherapy Conversations in Low-Resource Scenarios
Zhuohao Chen | Nikolaos Flemotomos | Zac Imel | David Atkins | Shrikanth Narayanan
Findings of the Association for Computational Linguistics: EMNLP 2022
In psychotherapy interactions, the quality of a session is assessed by codifying the communicative behaviors of participants during the conversation through manual observation and annotation. Developing computational approaches for automated behavioral coding can reduce the burden on human coders and facilitate the objective evaluation of the intervention. In the real world, however, implementing such algorithms is associated with data sparsity challenges since privacy concerns lead to limited available in-domain data. In this paper, we leverage a publicly available conversation-based dataset and transfer knowledge to the low-resource behavioral coding task by performing intermediate language model training via meta-learning. We introduce a task augmentation method to produce a large number of “analogy tasks” (tasks similar to the target one) and demonstrate that the proposed framework predicts target behaviors more accurately than all the other baseline models.
2021
Annotation and Evaluation of Coreference Resolution in Screenplays
Sabyasachee Baruah | Sandeep Nallan Chakravarthula | Shrikanth Narayanan
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2020
Towards end-2-end learning for predicting behavior codes from spoken utterances in psychotherapy conversations
Karan Singla | Zhuohao Chen | David Atkins | Shrikanth Narayanan
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Spoken language understanding tasks usually rely on pipelines involving complex processing blocks such as voice activity detection, speaker diarization, and automatic speech recognition (ASR). We propose a novel framework for predicting utterance-level labels directly from speech features, thus removing the dependency on first generating transcripts and enabling transcription-free behavioral coding. Our classifier uses a pretrained Speech-2-Vector encoder as a bottleneck to generate word-level representations from speech features. This pretrained encoder learns to encode speech features for a word using an objective similar to Word2Vec. Our proposed approach uses only speech features and word segmentation information to predict spoken utterance-level target labels. We show that our model achieves results competitive with other state-of-the-art approaches that use transcribed text for the task of predicting psychotherapy-relevant behavior codes.
Joint Estimation and Analysis of Risk Behavior Ratings in Movie Scripts
Victor Martinez | Krishna Somandepalli | Yalda Tehranian-Uhls | Shrikanth Narayanan
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Exposure to violent, sexual, or substance-abuse content in media increases the willingness of children and adolescents to imitate similar behaviors. Computational methods that identify portrayals of risk behaviors from audio-visual cues are limited in their applicability to films in post-production, where modifications might be prohibitively expensive. To address this limitation, we propose a model that estimates content ratings based on the language use in movie scripts, making our solution available at the earlier stages of creative production. Our model significantly improves the state-of-the-art by adapting novel techniques to learn better movie representations from the semantic and sentiment aspects of a character’s language use, and by leveraging the co-occurrence of risk behaviors, following a multi-task approach. Additionally, we show how this approach can be used to derive novel insights on the joint portrayal of these behaviors and on the subtleties that filmmakers may otherwise not pick up on.
Screenplay Quality Assessment: Can We Predict Who Gets Nominated?
Ming-Chang Chiu | Tiantian Feng | Xiang Ren | Shrikanth Narayanan
Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events
Deciding which scripts to turn into movies is a costly and time-consuming process for filmmakers. Thus, building a tool to aid script selection, an initial phase in movie production, can be very beneficial. Toward that goal, in this work, we present a method to evaluate the quality of a screenplay based on linguistic cues. We address this in a twofold approach: (1) we define the task as predicting nominations of scripts at major film awards, with the hypothesis that peer-recognized scripts should have a greater chance to succeed; (2) based on industry opinions and narratology, we extract and integrate domain-specific features into common classification techniques. We face two challenges: (1) scripts are much longer than the documents in other datasets, and (2) nominated scripts are limited and thus difficult to collect. However, with narratology-inspired modeling and domain features, our approach offers clear improvements over strong baselines. Our work provides a new approach for future work in screenplay analysis.
2018
A Multi-task Approach to Learning Multilingual Representations
Karan Singla | Dogan Can | Shrikanth Narayanan
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
We present a novel multi-task modeling approach to learning multilingual distributed representations of text. Our system learns word and sentence embeddings jointly by training a multilingual skip-gram model together with a cross-lingual sentence similarity model. Our architecture can transparently use both monolingual and sentence-aligned bilingual corpora to learn multilingual embeddings, thus covering a vocabulary significantly larger than the vocabulary of the bilingual corpora alone. Our model shows competitive performance on a standard cross-lingual document classification task. We also show the effectiveness of our method in a limited-resource scenario.
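A compact sketch of such a joint objective, with a negative-sampling skip-gram loss over words and a cosine-similarity loss over mean-pooled, sentence-aligned pairs; the exact losses and pooling here are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch of a joint objective: a (multilingual) skip-gram word loss plus a
# cross-lingual sentence-similarity loss over mean-pooled word embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedder(nn.Module):
    def __init__(self, vocab_size, dim=300):
        super().__init__()
        self.in_emb = nn.Embedding(vocab_size, dim)
        self.out_emb = nn.Embedding(vocab_size, dim)

    def skipgram_loss(self, centers, contexts, negatives):
        # centers, contexts: (B,) token ids; negatives: (B, K) sampled negative ids.
        c = self.in_emb(centers)                                               # (B, D)
        pos = (c * self.out_emb(contexts)).sum(-1)                             # (B,)
        neg = torch.bmm(self.out_emb(negatives), c.unsqueeze(-1)).squeeze(-1)  # (B, K)
        return -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg).mean())

    def sentence_similarity_loss(self, sent_a, sent_b):
        # Mean-pool word embeddings of aligned sentence pairs and pull them together.
        a = self.in_emb(sent_a).mean(dim=1)
        b = self.in_emb(sent_b).mean(dim=1)
        return (1 - F.cosine_similarity(a, b)).mean()
```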
NTUA-SLP at SemEval-2018 Task 1: Predicting Affective Content in Tweets with Deep Attentive RNNs and Transfer Learning
Christos Baziotis | Athanasiou Nikolaos | Alexandra Chronopoulou | Athanasia Kolovou | Georgios Paraskevopoulos | Nikolaos Ellinas | Shrikanth Narayanan | Alexandros Potamianos
Proceedings of the 12th International Workshop on Semantic Evaluation
In this paper we present the deep-learning models we submitted to the SemEval-2018 Task 1 competition: “Affect in Tweets”. We participated in all subtasks for English tweets. We propose a Bi-LSTM architecture equipped with a multi-layer self-attention mechanism. The attention mechanism improves model performance and allows us to identify salient words in tweets, as well as gain insight into the models, making them more interpretable. Our model utilizes a set of word2vec word embeddings trained on a large collection of 550 million Twitter messages, augmented by a set of word affective features. Due to the limited amount of task-specific training data, we opted for a transfer learning approach by pretraining the Bi-LSTMs on the dataset of SemEval-2017 Task 4A. The proposed approach ranked 1st in Subtask E “Multi-Label Emotion Classification”, 2nd in Subtask A “Emotion Intensity Regression”, and achieved competitive results in other subtasks.
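A simplified PyTorch sketch of a Bi-LSTM classifier with self-attention over the hidden states, in the spirit of the architecture described above; the single attention layer and the dimensions are simplifications, not the authors' released model.

```python
# Illustrative sketch: Bi-LSTM over token embeddings, a learned attention distribution
# over time steps, and a linear output layer over the attended representation.
import torch
import torch.nn as nn

class AttentiveBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=150, num_labels=11):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # init from word2vec in practice
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, num_labels)

    def forward(self, tokens):
        h, _ = self.lstm(self.emb(tokens))                        # (B, T, 2H)
        weights = torch.softmax(self.attn(h).squeeze(-1), -1)     # (B, T) attention over tokens
        context = torch.bmm(weights.unsqueeze(1), h).squeeze(1)   # (B, 2H) weighted sum
        return self.out(context)                                  # label scores
```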
2017
Linguistic analysis of differences in portrayal of movie characters
Anil Ramakrishna | Victor R. Martínez | Nikolaos Malandrakis | Karan Singla | Shrikanth Narayanan
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
We examine differences in the portrayal of characters in movies using psycholinguistic and graph-theoretic measures computed directly from screenplays. Differences are examined with respect to characters’ gender, race, age, and other metadata. Psycholinguistic metrics are extrapolated to dialogues in movies using a linear regression model built on a set of manually annotated seed words. Interesting patterns are revealed about the relationship between the gender of the production team and the gender ratio of characters. Several correlations are noted between the gender, race, and age of characters and the linguistic metrics.
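A hedged sketch of the extrapolation step: fit a linear regression from word representations to a manually annotated psycholinguistic norm on seed words, then score a character's dialogue by averaging the predicted word-level values. The feature choice and the averaging are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch: extrapolate a psycholinguistic norm (e.g., valence) from annotated
# seed words to arbitrary dialogue via linear regression over word vectors, then average
# word-level predictions per character's dialogue.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_norm_model(seed_vectors: np.ndarray, seed_scores: np.ndarray) -> LinearRegression:
    """seed_vectors: (N, D) word vectors; seed_scores: (N,) annotated norm values."""
    return LinearRegression().fit(seed_vectors, seed_scores)

def score_dialogue(model: LinearRegression, word_vectors: np.ndarray) -> float:
    """Average predicted norm over the words of a character's dialogue (M, D)."""
    return float(model.predict(word_vectors).mean())
```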
Tweester at SemEval-2017 Task 4: Fusion of Semantic-Affective and pairwise classification models for sentiment analysis in Twitter
Athanasia Kolovou | Filippos Kokkinos | Aris Fergadis | Pinelopi Papalampidi | Elias Iosif | Nikolaos Malandrakis | Elisavet Palogiannidi | Haris Papageorgiou | Shrikanth Narayanan | Alexandros Potamianos
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)
In this paper, we describe our submission to SemEval-2017 Task 4: Sentiment Analysis in Twitter. Specifically, the proposed system participated in both the tweet polarity classification (two-, three-, and five-class) and tweet quantification (two- and five-class) subtasks.
2016
Tweester at SemEval-2016 Task 4: Sentiment Analysis in Twitter Using Semantic-Affective Model Adaptation
Elisavet Palogiannidi | Athanasia Kolovou | Fenia Christopoulou | Filippos Kokkinos | Elias Iosif | Nikolaos Malandrakis | Haris Papageorgiou | Shrikanth Narayanan | Alexandros Potamianos
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)
2015
A quantitative analysis of gender differences in movies using psycholinguistic normatives
Anil Ramakrishna | Nikolaos Malandrakis | Elizabeth Staruk | Shrikanth Narayanan
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
A Dynamic Programming Algorithm for Computing N-gram Posteriors from Lattices
Doğan Can | Shrikanth Narayanan
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
2014
SAIL-GRS: Grammar Induction for Spoken Dialogue Systems using CF-IRF Rule Similarity
Kalliopi Zervanou | Nikolaos Malandrakis | Shrikanth Narayanan
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)
SAIL: Sentiment Analysis using Semantic Similarity and Contrast Features
Nikolaos Malandrakis | Michael Falcone | Colin Vaz | Jesse James Bisogni | Alexandros Potamianos | Shrikanth Narayanan
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)
2013
DeepPurple: Lexical, String and Affective Feature Fusion for Sentence-Level Semantic Similarity Estimation
Nikolaos Malandrakis | Elias Iosif | Vassiliki Prokopi | Alexandros Potamianos | Shrikanth Narayanan
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity
SAIL: A hybrid approach to sentiment analysis
Nikolaos Malandrakis | Abe Kazemzadeh | Alexandros Potamianos | Shrikanth Narayanan
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)
Which ASR should I choose for my dialogue system?
Fabrizio Morbini | Kartik Audhkhasi | Kenji Sagae | Ron Artstein | Doğan Can | Panayiotis Georgiou | Shri Narayanan | Anton Leuski | David Traum
Proceedings of the SIGDIAL 2013 Conference
2012
The Twins Corpus of Museum Visitor Questions
Priti Aggarwal | Ron Artstein | Jillian Gerten | Athanasios Katsamanis | Shrikanth Narayanan | Angela Nazarian | David Traum
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
The Twins corpus is a collection of utterances spoken in interactions with two virtual characters who serve as guides at the Museum of Science in Boston. The corpus contains about 200,000 spoken utterances from museum visitors (primarily children) as well as from trained handlers who work at the museum. In addition to speech recordings, the corpus contains the outputs of speech recognition performed at the time of utterance as well as the system interpretation of the utterances. Parts of the corpus have been manually transcribed and annotated for question interpretation. The corpus has been used for improving performance of the museum characters and for a variety of research projects, such as phonetic-based Natural Language Understanding, creation of conversational characters from text resources, dialogue policy learning, and research on patterns of user interaction. It has the potential to be used for research on children's speech and on language used when talking to a virtual human.
A System for Real-time Twitter Sentiment Analysis of 2012 U.S. Presidential Election Cycle
Hao Wang | Dogan Can | Abe Kazemzadeh | François Bar | Shrikanth Narayanan
Proceedings of the ACL 2012 System Demonstrations
2009
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Mari Ostendorf | Michael Collins | Shri Narayanan | Douglas W. Oard | Lucy Vanderwende
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers
Mari Ostendorf | Michael Collins | Shri Narayanan | Douglas W. Oard | Lucy Vanderwende
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers
2008
Enriching Spoken Language Translation with Dialog Acts
Vivek Kumar Rangarajan Sridhar | Srinivas Bangalore | Shrikanth Narayanan
Proceedings of ACL-08: HLT, Short Papers
Mitigation of Data Sparsity in Classifier-Based Translation
Emil Ettelaie | Panayiotis G. Georgiou | Shrikanth S. Narayanan
Coling 2008: Proceedings of the workshop on Speech Processing for Safety Critical Translation and Pervasive Applications
2007
Hassan: A Virtual Human for Tactical Questioning
David Traum | Antonio Roque | Anton Leuski | Panayiotis Georgiou | Jillian Gerten | Bilyana Martinovski | Shrikanth Narayanan | Susan Robinson | Ashish Vaswani
Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue
Exploiting Acoustic and Syntactic Features for Prosody Labeling in a Maximum Entropy Framework
Vivek Kumar Rangarajan Sridhar | Srinivas Bangalore | Shrikanth Narayanan
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference
2006
Selecting relevant text subsets from web-data for building topic specific language models
Abhinav Sethy | Panayiotis Georgiou | Shrikanth Narayanan
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers
Text data acquisition for domain-specific language models
Abhinav Sethy | Panayiotis G. Georgiou | Shrikanth Narayanan
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
2005
Dealing with Doctors: A Virtual Human for Non-team Interaction
David Traum | William Swartout | Jonathan Gratch | Stacy Marsella | Patrick Kenny | Eduard Hovy | Shri Narayanan | Ed Fast | Bilyana Martinovski | Rahul Baghat | Susan Robinson | Andrew Marshall | Dagen Wang | Sudeep Gandhe | Anton Leuski
Proceedings of the 6th SIGdial Workshop on Discourse and Dialogue
Transonics: A Practical Speech-to-Speech Translator for English-Farsi Medical Dialogs
Robert Belvin | Emil Ettelaie | Sudeep Gandhe | Panayiotis Georgiou | Kevin Knight | Daniel Marcu | Scott Millward | Shrikanth Narayanan | Howard Neely | David Traum
Proceedings of the ACL Interactive Poster and Demonstration Sessions
2004
Creation of a Doctor-Patient Dialogue Corpus Using Standardized Patients
Robert S. Melvin | Win May | Shrikanth Narayanan | Panayiotis Georgiou | Shadi Ganjavi
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)
A Transcription Scheme for Languages Employing the Arabic Script Motivated by Speech Processing Applications
Shadi Ganjavi | Panayiotis G. Georgiou | Shrikanth Narayanan
Proceedings of the Workshop on Computational Approaches to Arabic Script-based Languages
2001
Amount of Information Presented in a Complex List: Effects on User Performance
Dawn Dutton | Marilyn Walker | Selina Chu | James Hubbell | Shrikanth Narayanan
Proceedings of the First International Conference on Human Language Technology Research
1998
Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email
Marilyn A. Walker | Jeanne C. Fromer | Shrikanth Narayanan
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics