2020
Predicting Depression in Screening Interviews from Latent Categorization of Interview Prompts
Alex Rinaldi | Jean Fox Tree | Snigdha Chaturvedi
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Accurately diagnosing depression is difficult, requiring time-intensive interviews, assessments, and analysis. Hence, automated methods that can assess linguistic patterns in these interviews could help psychiatric professionals make faster, more informed decisions about diagnosis. We propose JLPC, a model that analyzes interview transcripts to identify depression while jointly categorizing interview prompts into latent categories. This latent categorization allows the model to define high-level conversational contexts that influence patterns of language in depressed individuals. We show not only that the proposed model outperforms competitive baselines, but also that its latent prompt categories provide psycholinguistic insights about depression.
2018
Modeling Linguistic and Personality Adaptation for Natural Language Generation
Zhichao Hu | Jean Fox Tree | Marilyn Walker
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue
Previous work has shown that conversants adapt to many aspects of their partners’ language. Other work has shown that while every person is unique, people often share general patterns of behavior. Theories of personality aim to explain these shared patterns, and studies have shown that many linguistic cues are correlated with personality traits. We propose an adaptation measure for adaptive natural language generation for dialogs that integrates the predictions of both personality theories and adaptation theories and that can be applied as a dialog unfolds, on a turn-by-turn basis. We show that our measure meets criteria for validity, and that adaptation varies according to corpus and task, speaker, and the set of features used to model it. We also produce fine-grained models according to dialog segment or speaker, and demonstrate the decaying trend of adaptation.
2016
Coordinating Communication in the Wild: The Artwalk Dialogue Corpus of Pedestrian Navigation and Mobile Referential Communication
Kris Liu | Jean Fox Tree | Marilyn Walker
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
The Artwalk Corpus is a collection of 48 mobile phone conversations between 24 pairs of friends and 24 pairs of strangers performing a novel, naturalistically situated referential communication task. The dialogues produced by this task average just under 40 minutes. The task requires identifying public art while walking around and navigating pedestrian routes in downtown Santa Cruz, California: a Director on the UCSC campus, with access to maps, provides verbal instructions to a Follower executing the task. The task provides a setting for real-world situated dialogic language and is designed to (1) elicit entrainment and coordination of referring expressions between the dialogue participants, (2) examine the effect of friendship on dialogue strategies, and (3) examine how the need to complete the task while negotiating myriad, unanticipated events in the real world, such as avoiding cars and other pedestrians, affects linguistic coordination and other dialogue behaviors. Previous work on entrainment and coordinating communication has primarily focused on similar tasks in laboratory settings where there are no interruptions and no need to navigate from one point to another in a complex space. The corpus provides a general resource for studies on how coordinated task-oriented dialogue changes when we move outside the laboratory and into the world. It can also be used for studies of entrainment in dialogue, of the form and style of pedestrian instruction dialogues, and of the effect of friendship on dialogic behaviors.
A Corpus of Gesture-Annotated Dialogues for Monologue-to-Dialogue Generation from Personal Narratives
Zhichao Hu | Michelle Dick | Chung-Ning Chang | Kevin Bowden | Michael Neff | Jean Fox Tree | Marilyn Walker
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Story-telling is a fundamental and prevalent aspect of human social behavior. In the wild, stories are told conversationally in social settings, often as a dialogue and with accompanying gestures and other nonverbal behavior. This paper presents a new corpus, the Story Dialogue with Gestures (SDG) corpus, consisting of 50 personal narratives regenerated as dialogues, complete with annotations of gesture placement and accompanying gesture forms. The corpus includes dialogues generated by human annotators, gesture annotations on the human-generated dialogues, videos of story dialogues generated from this representation, video clips of each gesture used in the gesture annotations, and annotations of the original personal narratives with a deep representation of story called a Story Intention Graph. Our long-term goal is the automatic generation of story co-tellings as animated dialogues from the Story Intention Graph. We expect this corpus to be a useful resource for researchers interested in natural language generation, intelligent virtual agents, generation of nonverbal behavior, and story and narrative representations.
A Verbal and Gestural Corpus of Story Retellings to an Expressive Embodied Virtual Character
Jackson Tolins | Kris Liu | Michael Neff | Marilyn Walker | Jean Fox Tree
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
We present a corpus of 44 human-agent verbal and gestural story retellings designed to explore whether humans would gesturally entrain to an embodied intelligent virtual agent. We used a novel data collection method in which an agent presented story components in installments, which the human then retold to the agent; at the end of the installments, the human retold the story as a whole to the embodied animated agent. This method was designed to allow us to observe whether changes in the agent’s gestural behavior would result in human gestural changes. The agent modified its gestures over the course of the story, starting the first installment with gestural behaviors designed to manifest extraversion and slowly shifting its gestures to express introversion over time, or the reverse. The corpus contains the verbal and gestural transcripts of the human story retellings. The gestures were coded for type, handedness, temporal structure, spatial extent, and the degree to which the participants’ gestures match those produced by the agent. The corpus illustrates the variation in expressive behaviors produced by users interacting with embodied virtual characters, and the degree to which their gestures were influenced by the agent’s dynamic changes in personality-based expressive style.
A Multimodal Motion-Captured Corpus of Matched and Mismatched Extravert-Introvert Conversational Pairs
Jackson Tolins | Kris Liu | Yingying Wang | Jean E. Fox Tree | Marilyn Walker | Michael Neff
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
This paper presents a new corpus, the Personality Dyads Corpus, consisting of multimodal data for three conversations between three personality-matched, two-person dyads (a total of 9 separate dialogues). Participants were selected from a larger sample to be 0.8 of a standard deviation above or below the mean on the Big-Five Personality extraversion scale, to produce an Extravert-Extravert dyad, an Introvert-Introvert dyad, and an Extravert-Introvert dyad. Each pair carried out conversations for three different tasks. The conversations were recorded using optical motion capture for the body and data gloves for the hands. Dyads’ speech was transcribed and the gestural and postural behavior was annotated with ANVIL. The released corpus includes personality profiles, ANVIL files containing speech transcriptions and the gestural annotations, and BVH files containing body and hand motion in 3D.
2015
Using Summarization to Discover Argument Facets in Online Idealogical Dialog
Amita Misra | Pranav Anand | Jean E. Fox Tree | Marilyn Walker
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2012
A Corpus for Research on Deliberation and Debate
Marilyn Walker | Jean Fox Tree | Pranav Anand | Rob Abbott | Joseph King
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
Deliberative, argumentative discourse is an important component of opinion formation, belief revision, and knowledge discovery; it is a cornerstone of modern civil society. Argumentation is productively studied in branches ranging from theoretical artificial intelligence to political rhetoric, but empirical analysis has suffered from a lack of freely available, unscripted argumentative dialogs. This paper presents the Internet Argument Corpus (IAC), a set of 390,704 posts in 11,800 discussions extracted from the online debate site 4forums.com. A 2866 thread/130,206 post extract of the corpus has been manually sided for topic of discussion, and subsets of this topic-labeled extract have been annotated for several dialogic and argumentative markers: degrees of agreement with a previous post, cordiality, audience-direction, combativeness, assertiveness, emotionality of argumentation, and sarcasm. As an application of this resource, the paper closes with a discussion of the relationship between discourse marker pragmatics, agreement, emotionality, and sarcasm in the IAC corpus.
2011
How can you say such things?!?: Recognizing Disagreement in Informal Political Argument
Rob Abbott | Marilyn Walker | Pranav Anand | Jean E. Fox Tree | Robeson Bowmani | Joseph King
Proceedings of the Workshop on Language in Social Media (LSM 2011)
Cats Rule and Dogs Drool!: Classifying Stance in Online Debate
Pranav Anand | Marilyn Walker | Rob Abbott | Jean E. Fox Tree | Robeson Bowmani | Michael Minor
Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2.011)