Lara Martin


2022

Dungeons and Dragons as a Dialog Challenge for Artificial Intelligence
Chris Callison-Burch | Gaurav Singh Tomar | Lara Martin | Daphne Ippolito | Suma Bailis | David Reitter
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

AI researchers have posited Dungeons and Dragons (D&D) as a challenge problem to test systems on various language-related capabilities. In this paper, we frame D&D specifically as a dialogue system challenge, where the tasks are to both generate the next conversational turn in the game and predict the state of the game given the dialogue history. We create a gameplay dataset consisting of nearly 900 games, with a total of 7,000 players, 800,000 dialogue turns, 500,000 dice rolls, and 58 million words. We automatically annotate the data with partial state information about the game play. We train a large language model (LM) to generate the next game turn, conditioning it on different information. The LM can respond as a particular character or as the player who runs the game—i.e., the Dungeon Master (DM). It is trained to produce dialogue that is either in-character (roleplaying in the fictional world) or out-of-character (discussing rules or strategy). We perform a human evaluation to determine what factors make the generated output plausible and interesting. We further perform an automatic evaluation to determine how well the model can predict the game state given the history and examine how well tracking the game state improves its ability to produce plausible conversational output.
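As a rough illustration of the setup this abstract describes (not the paper's actual model, prompt format, or state schema), the sketch below conditions an off-the-shelf causal LM on a game-state summary and the dialogue history, then samples the next turn for a chosen speaker in either in-character or out-of-character mode. The model name, field layout, and decoding settings are placeholder assumptions.

```python
# Minimal sketch, assuming a generic causal LM and an invented prompt layout;
# the paper fine-tunes a much larger model on its own annotated gameplay data.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def build_prompt(history, speaker, game_state, in_character=True):
    """Concatenate partial game-state info and dialogue history into a conditioning prompt."""
    state = " | ".join(f"{k}: {v}" for k, v in game_state.items())
    mode = "IN-CHARACTER" if in_character else "OUT-OF-CHARACTER"
    turns = "\n".join(f"{t['speaker']}: {t['text']}" for t in history)
    return f"[STATE] {state}\n[MODE] {mode}\n{turns}\n{speaker}:"

def next_turn(history, speaker, game_state, in_character=True):
    """Sample the next game turn for the given speaker (a character or the DM)."""
    prompt = build_prompt(history, speaker, game_state, in_character)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9,
                            pad_token_id=tokenizer.eos_token_id)
    # Return only the newly generated continuation, not the prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

history = [{"speaker": "DM", "text": "A goblin blocks the bridge, snarling."},
           {"speaker": "Player (Rogue)", "text": "I try to sneak past it."}]
state = {"location": "stone bridge", "active class": "rogue", "hp": 14}
print(next_turn(history, "DM", state))
```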

2019

Guided Neural Language Generation for Automated Storytelling
Prithviraj Ammanabrolu | Ethan Tien | Wesley Cheung | Zhaochen Luo | William Ma | Lara Martin | Mark Riedl
Proceedings of the Second Workshop on Storytelling

Neural network based approaches to automated story plot generation attempt to learn how to generate novel plots from a corpus of natural language plot summaries. Prior work has shown that a semantic abstraction of sentences called events improves neural plot generation and allows one to decompose the problem into: (1) the generation of a sequence of events (event-to-event) and (2) the transformation of these events into natural language sentences (event-to-sentence). However, typical neural language generation approaches to event-to-sentence can ignore the event details and produce grammatically-correct but semantically-unrelated sentences. We present an ensemble-based model that generates natural language guided by events. Our method outperforms the baseline sequence-to-sequence model. Additionally, we provide results for a full end-to-end automated story generation system, demonstrating how our model works with existing systems designed for the event-to-event problem.
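A minimal sketch of the cascade idea behind an ensemble event-to-sentence stage (the event schema, thresholds, and generator set here are illustrative assumptions, not the paper's models): each generator reports a confidence for its output, and the pipeline falls through to the next member until one clears its threshold, with slot-filling templates as a last resort.

```python
# Illustrative sketch only; a real system would place neural generators
# (e.g., retrieve-and-edit, seq2seq) ahead of the template fallback.
from dataclasses import dataclass
from typing import Callable, List, Tuple

# An "event" here stands in for the semantic abstraction from prior work,
# roughly (subject, verb, direct object, modifier).
Event = Tuple[str, str, str, str]

@dataclass
class Generator:
    name: str
    generate: Callable[[Event], Tuple[str, float]]  # returns (sentence, confidence)
    threshold: float

def template_generate(event: Event) -> Tuple[str, float]:
    """Trivial slot-filling baseline; always produces something, with low fluency."""
    subj, verb, obj, mod = event
    return f"{subj} {verb} {obj} {mod}".strip(), 0.5

def event_to_sentence(event: Event, cascade: List[Generator]) -> str:
    """Return the first cascade member's output whose confidence clears its threshold."""
    for gen in cascade:
        sentence, confidence = gen.generate(event)
        if confidence >= gen.threshold:
            return sentence
    # Final fallback: templates never abstain.
    return template_generate(event)[0]

cascade = [Generator("templates", template_generate, 0.0)]
story_events: List[Event] = [("knight", "enter", "castle", ""),
                             ("dragon", "attack", "knight", "fiercely")]
print(" ".join(event_to_sentence(e, cascade) for e in story_events))
```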

2014

Identifying Student Leaders from MOOC Discussion Forums through Language Influence
Seungwhan Moon | Saloni Potdar | Lara Martin
Proceedings of the EMNLP 2014 Workshop on Analysis of Large Scale Social Interaction in MOOCs