Eric Fosler-Lussier

Also published as: Eric Fosler, J. Eric Fosler


2024

pdf
A Multi-Aspect Framework for Counter Narrative Evaluation using Large Language Models
Jaylen Jones | Lingbo Mo | Eric Fosler-Lussier | Huan Sun
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)

Counter narratives - informed responses to hate speech contexts designed to refute hateful claims and de-escalate encounters - have emerged as an effective hate speech intervention strategy. While previous work has proposed automatic counter narrative generation methods to aid manual interventions, the evaluation of these approaches remains underdeveloped. Previous automatic metrics for counter narrative evaluation lack alignment with human judgment, as they rely on superficial reference comparisons instead of incorporating key aspects of counter narrative quality as evaluation criteria. To address these limitations, we propose a novel evaluation framework that prompts LLMs to provide scores and feedback for generated counter narrative candidates along 5 defined aspects derived from guidelines published by NGOs specializing in counter narratives. We find that LLM evaluators achieve strong alignment with human-annotated scores and feedback and outperform alternative metrics, indicating their potential as multi-aspect, reference-free, and interpretable evaluators for counter narrative evaluation.
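As a rough illustration of the framework's shape, the sketch below builds a multi-aspect scoring prompt and parses an LLM's JSON reply. The aspect names, prompt wording, and the call_llm helper are placeholders for illustration, not the paper's actual prompts or aspect definitions.

```python
# Illustrative sketch only: the aspect list, prompt wording, and call_llm()
# helper are assumptions for demonstration, not the paper's actual framework.
import json

ASPECTS = ["relevance", "specificity", "fluency", "civility", "persuasiveness"]  # hypothetical names

def build_prompt(hate_speech: str, counter_narrative: str) -> str:
    """Ask the LLM for a 1-5 score and a short justification per aspect."""
    aspect_lines = "\n".join(f"- {a}" for a in ASPECTS)
    return (
        "You are evaluating a counter narrative written in response to hate speech.\n"
        f"Hate speech: {hate_speech}\n"
        f"Counter narrative: {counter_narrative}\n"
        "For each aspect below, give a score from 1 to 5 and one sentence of feedback.\n"
        f"{aspect_lines}\n"
        'Answer as JSON: {"aspect": {"score": int, "feedback": str}, ...}'
    )

def score_candidate(hate_speech: str, counter_narrative: str, call_llm) -> dict:
    """call_llm is any function str -> str backed by an LLM of your choice."""
    raw = call_llm(build_prompt(hate_speech, counter_narrative))
    return json.loads(raw)  # in practice, add error handling for malformed output
```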

2023

pdf
Selective Demonstrations for Cross-domain Text-to-SQL
Shuaichen Chang | Eric Fosler-Lussier
Findings of the Association for Computational Linguistics: EMNLP 2023

Large language models (LLMs) with in-context learning have demonstrated impressive generalization capabilities in the cross-domain text-to-SQL task, without the use of in-domain annotations. However, incorporating in-domain demonstration examples has been found to greatly enhance LLMs’ performance. In this paper, we delve into the key factors within in-domain examples that contribute to the improvement and explore whether we can harness these benefits without relying on in-domain annotations. Based on our findings, we propose a demonstration selection framework, ODIS, which utilizes both out-of-domain examples and synthetically generated in-domain examples to construct demonstrations. By retrieving demonstrations from hybrid sources, ODIS leverages the advantages of both, showcasing its effectiveness compared to baseline methods that rely on a single data source. Furthermore, ODIS outperforms state-of-the-art approaches on two cross-domain text-to-SQL datasets, with improvements of 1.1 and 11.8 points in execution accuracy, respectively.
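The sketch below is not the ODIS algorithm itself, but a minimal illustration of the hybrid-retrieval idea it builds on: rank candidate demonstrations from an out-of-domain pool and a synthetically generated in-domain pool by similarity to the test question, then concatenate the top examples. The similarity measure and pool sizes here are assumptions.

```python
# Hedged sketch of hybrid demonstration retrieval (not the ODIS algorithm itself):
# rank candidates from an out-of-domain pool and a synthetic in-domain pool by
# cosine similarity to the test question, then concatenate the top examples.
import numpy as np

def top_k_by_similarity(query_vec, pool_vecs, pool_texts, k):
    sims = pool_vecs @ query_vec / (
        np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-8
    )
    return [pool_texts[i] for i in np.argsort(-sims)[:k]]

def build_demonstrations(query_vec, out_domain, in_domain_synth, k_out=4, k_in=4):
    """Each pool is a (vectors, texts) pair; texts are question/SQL example strings."""
    demos = top_k_by_similarity(query_vec, *out_domain, k_out)
    demos += top_k_by_similarity(query_vec, *in_domain_synth, k_in)
    return "\n\n".join(demos)
```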

pdf
Bootstrapping a Conversational Guide for Colonoscopy Prep
Pulkit Arya | Madeleine Bloomquist | Subhankar Chakraborty | Andrew Perrault | William Schuler | Eric Fosler-Lussier | Michael White
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Creating conversational systems for niche domains is a challenging task, further exacerbated by a lack of quality datasets. We explore the construction of safer conversational systems for guiding patients in preparing for colonoscopies. This required a data generation pipeline to produce a minimum viable dataset for bootstrapping a semantic parser, augmented by automatic paraphrasing. Our study suggests large language models (e.g., GPT-3.5 and GPT-4) are a viable alternative to crowdsourced paraphrasing, but conversational systems that rely upon language models’ ability to do temporal reasoning struggle to provide accurate responses. A neural-symbolic system that performs temporal reasoning on an intermediate representation of user queries shows promising results compared to an end-to-end dialogue system, improving the number of correct responses while vastly reducing the number of incorrect or misleading ones.

2021

pdf
Learning Latent Structures for Cross Action Phrase Relations in Wet Lab Protocols
Chaitanya Kulkarni | Jany Chan | Eric Fosler-Lussier | Raghu Machiraju
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Wet laboratory protocols (WLPs) are critical for conveying reproducible procedures in biological research. They are composed of instructions written in natural language describing the step-wise processing of materials by specific actions. This process-flow description of reagent and material synthesis in WLPs can be captured by material state transfer graphs (MSTGs), which encode global temporal and causal relationships between actions. Here, we propose methods to automatically generate an MSTG for a given protocol by extracting all action relationships across multiple sentences. We also note that previous corpora and methods focused primarily on local intra-sentence relationships between actions and entities and did not address two critical issues: (i) resolution of implicit arguments and (ii) establishing long-range dependencies across sentences. We propose a new model that incrementally learns latent structures and is better suited to resolving inter-sentence relations and implicit arguments. This model draws upon a new corpus, WLP-MSTG, which was created by extending annotations in the WLP corpora for inter-sentence relations and implicit arguments. Our model achieves an F1 score of 54.53% for temporal and causal relations in protocols from our corpus, a significant improvement over previous models (DyGIE++: 28.17%; spERT: 27.81%). We make our annotated WLP-MSTG corpus available to the research community.

pdf
TextEssence: A Tool for Interactive Analysis of Semantic Shifts Between Corpora
Denis Newman-Griffis | Venkatesh Sivaraman | Adam Perer | Eric Fosler-Lussier | Harry Hochheiser
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations

Embeddings of words and concepts capture syntactic and semantic regularities of language; however, they have seen limited use as tools to study characteristics of different corpora and how they relate to one another. We introduce TextEssence, an interactive system designed to enable comparative analysis of corpora using embeddings. TextEssence includes visual, neighbor-based, and similarity-based modes of embedding analysis in a lightweight, web-based interface. We further propose a new measure of embedding confidence based on nearest neighborhood overlap, to assist in identifying high-quality embeddings for corpus analysis. A case study on COVID-19 scientific literature illustrates the utility of the system. TextEssence can be found at https://textessence.github.io.
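A plausible (not necessarily the paper's exact) formulation of the nearest-neighborhood-overlap confidence measure could look like the following, comparing a word's k nearest neighbors across two embedding runs over the same vocabulary:

```python
# Plausible sketch (not necessarily the paper's exact formula): estimate embedding
# confidence for a word as the overlap of its k nearest neighbors across two
# embedding runs trained on the same corpus.
import numpy as np

def nearest_neighbors(E: np.ndarray, idx: int, k: int) -> set:
    norms = np.linalg.norm(E, axis=1)
    sims = (E @ E[idx]) / (norms * np.linalg.norm(E[idx]) + 1e-8)
    sims[idx] = -np.inf  # exclude the word itself
    return set(np.argsort(-sims)[:k])

def neighborhood_overlap(E1: np.ndarray, E2: np.ndarray, idx: int, k: int = 10) -> float:
    """E1, E2: embedding matrices with rows aligned to the same vocabulary."""
    n1, n2 = nearest_neighbors(E1, idx, k), nearest_neighbors(E2, idx, k)
    return len(n1 & n2) / k
```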

2020

pdf bib
Sequence-to-Set Semantic Tagging for Complex Query Reformulation and Automated Text Categorization in Biomedical IR using Self-Attention
Manirupa Das | Juanxi Li | Eric Fosler-Lussier | Simon Lin | Steve Rust | Yungui Huang | Rajiv Ramnath
Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing

Novel contexts, comprising a set of terms referring to one or more concepts, often arise in complex querying scenarios such as evidence-based medicine (EBM) over biomedical literature. These may not explicitly refer to entities or canonical concept forms occurring in a fact-based knowledge source, e.g., the UMLS ontology. Moreover, hidden associations between related concepts that are meaningful in the current context may not exist within a single document, but only across documents in the collection. Predicting semantic concept tags for documents can therefore serve to associate documents related in unseen contexts, or to categorize them, in information filtering or retrieval scenarios. Thus, inspired by the success of sequence-to-sequence neural models, we develop a novel sequence-to-set framework with attention for learning document representations in a unique unsupervised setting: no human-annotated document labels or external knowledge resources are used, and only corpus-derived term statistics drive the training, enabling term transfer within a corpus for semantically tagging a large collection of documents. Our sequence-to-set approach to predicting semantic tags gives, to the best of our knowledge, state-of-the-art results on an unsupervised query expansion (QE) task for the TREC CDS 2016 challenge dataset when evaluated with an Okapi BM25–based document retrieval system, and also improves over the MLTM baseline (Soleimani and Miller, 2016) on both supervised and semi-supervised multi-label prediction tasks on the del.icio.us and Ohsumed datasets. We make our code and data publicly available.

pdf
How Self-Attention Improves Rare Class Performance in a Question-Answering Dialogue Agent
Adam Stiff | Qi Song | Eric Fosler-Lussier
Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue

Contextualized language modeling using deep Transformer networks has been applied to a variety of natural language processing tasks with remarkable success. However, we find that these models are not a panacea for a question-answering dialogue agent corpus task, which has hundreds of classes in a long-tailed frequency distribution, with only thousands of data points. Instead, we find substantial improvements in recall and accuracy on rare classes from a simple one-layer RNN with multi-headed self-attention and static word embeddings as inputs. While much research has used attention weights to illustrate what input is important for a task, the complexities of our dialogue corpus offer a unique opportunity to examine how the model represents what it attends to, and we offer a detailed analysis of how that contributes to improved performance on rare classes. A particularly interesting phenomenon we observe is that the model picks up implicit meanings by splitting different aspects of the semantics of a single word across multiple attention heads.
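A hedged sketch of the kind of architecture described (static embeddings feeding multi-headed self-attention and a one-layer RNN, followed by a classifier) is given below; hidden sizes, head count, and pooling are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of the general architecture described (static embeddings ->
# multi-headed self-attention -> one-layer RNN -> classifier); hidden sizes,
# head count, and pooling choice are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class AttnRNNClassifier(nn.Module):
    def __init__(self, embed_matrix, n_classes, n_heads=4, hidden=128):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(embed_matrix, freeze=True)  # static vectors
        dim = embed_matrix.size(1)                 # must be divisible by n_heads
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.rnn = nn.GRU(dim, hidden, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        attended, _ = self.attn(x, x, x)           # multi-headed self-attention
        _, h = self.rnn(attended)                  # h: (1, batch, hidden)
        return self.out(h[-1])                     # class logits
```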

pdf
Contextualized Embeddings for Enriching Linguistic Analyses on Politeness
Ahmad Aljanaideh | Eric Fosler-Lussier | Marie-Catherine de Marneffe
Proceedings of the 28th International Conference on Computational Linguistics

Linguistic analyses in natural language processing (NLP) have often been performed around the static notion of words, where the context (surrounding words) is not considered. For example, previous analyses on politeness have focused on comparing the use of static words such as personal pronouns across (im)polite requests without taking the context of those words into account. Current word embeddings in NLP do capture context and thus can be leveraged to enrich linguistic analyses. In this work, we introduce a model which leverages the pre-trained BERT model to cluster contextualized representations of a word based on (1) the context in which the word appears and (2) the labels of items the word occurs in. Using politeness as a case study, this model is able to automatically discover interpretable, fine-grained context patterns of words, some of which align with existing theories on politeness. Our model further discovers novel finer-grained patterns associated with (im)polite language. For example, the word please can occur in impolite contexts that are predictable from BERT clustering. The approach proposed here is validated by showing that features based on fine-grained patterns inferred from the clustering improve over politeness-word baselines.
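For illustration only, the sketch below gathers BERT representations of a target word across sentences and clusters them with k-means; the model checkpoint, single-wordpiece assumption, and clustering setup are simplifications, not the paper's pipeline.

```python
# Illustrative sketch: collect BERT representations of a target word across
# sentences and cluster them; the checkpoint and clustering setup are assumptions,
# not the paper's exact pipeline.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import KMeans

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vectors(sentences, target="please"):
    vecs = []
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
        if target not in tokens:          # assumes the target is a single wordpiece
            continue
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]
        vecs.append(hidden[tokens.index(target)].numpy())
    return vecs

def cluster_contexts(sentences, n_clusters=5):
    vecs = word_vectors(sentences)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vecs)
```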

2019

pdf bib
Characterizing the Impact of Geometric Properties of Word Embeddings on Task Performance
Brendan Whitaker | Denis Newman-Griffis | Aparajita Haldar | Hakan Ferhatosmanoglu | Eric Fosler-Lussier
Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP

Analysis of word embedding properties to inform their use in downstream NLP tasks has largely been studied by assessing nearest neighbors. However, geometric properties of the continuous feature space contribute directly to the use of embedding features in downstream models, and are largely unexplored. We consider four properties of word embedding geometry, namely: position relative to the origin, distribution of features in the vector space, global pairwise distances, and local pairwise distances. We define a sequence of transformations to generate new embeddings that expose subsets of these properties to downstream models and evaluate change in task performance to understand the contribution of each property to NLP models. We transform publicly available pretrained embeddings from three popular toolkits (word2vec, GloVe, and FastText) and evaluate on a variety of intrinsic tasks, which model linguistic information in the vector space, and extrinsic tasks, which use vectors as input to machine learning models. We find that intrinsic evaluations are highly sensitive to absolute position, while extrinsic tasks rely primarily on local similarity. Our findings suggest that future embedding models and post-processing techniques should focus primarily on similarity to nearby points in vector space.
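Two of the simplest transformations in this spirit, shown below as a minimal sketch, are centering (which changes absolute position while preserving pairwise distances) and unit normalization (which preserves cosine-similarity neighbor ordering); the paper's full transformation suite is broader than this.

```python
# Minimal sketch of geometry-altering transformations in the spirit of the paper;
# the exact transformation suite is described in the paper itself.
import numpy as np

def center(E: np.ndarray) -> np.ndarray:
    """Translate embeddings so their mean sits at the origin: changes absolute
    position relative to the origin, preserves all pairwise distances."""
    return E - E.mean(axis=0, keepdims=True)

def unit_normalize(E: np.ndarray) -> np.ndarray:
    """Project embeddings onto the unit sphere: changes global distances but
    preserves nearest-neighbor ordering under cosine similarity."""
    return E / (np.linalg.norm(E, axis=1, keepdims=True) + 1e-8)
```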

pdf
HARE: a Flexible Highlighting Annotator for Ranking and Exploration
Denis Newman-Griffis | Eric Fosler-Lussier
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations

Exploration and analysis of potential data sources is a significant challenge in the application of NLP techniques to novel information domains. We describe HARE, a system for highlighting relevant information in document collections to support ranking and triage, which provides tools for post-processing and qualitative analysis for model development and tuning. We apply HARE to the use case of narrative descriptions of mobility information in clinical data, and demonstrate its utility in comparing candidate embedding features. We provide a web-based interface for annotation visualization and document ranking, with a modular backend to support interoperability with existing annotation tools. Our system is available online at https://github.com/OSU-slatelab/HARE.

pdf
Writing habits and telltale neighbors: analyzing clinical concept usage patterns with sublanguage embeddings
Denis Newman-Griffis | Eric Fosler-Lussier
Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)

Natural language processing techniques are being applied to increasingly diverse types of electronic health records, and can benefit from in-depth understanding of the distinguishing characteristics of medical document types. We present a method for characterizing the usage patterns of clinical concepts among different document types, in order to capture semantic differences beyond the lexical level. By training concept embeddings on clinical documents of different types and measuring the differences in their nearest neighborhood structures, we are able to measure divergences in concept usage while correcting for noise in embedding learning. Experiments on the MIMIC-III corpus demonstrate that our approach captures clinically-relevant differences in concept usage and provides an intuitive way to explore semantic characteristics of clinical document collections.

2018

pdf
Phrase2VecGLM: Neural generalized language model–based semantic tagging for complex query reformulation in medical IR
Manirupa Das | Eric Fosler-Lussier | Simon Lin | Soheil Moosavinasab | David Chen | Steve Rust | Yungui Huang | Rajiv Ramnath
Proceedings of the BioNLP 2018 workshop

In this work, we develop a novel, completely unsupervised, neural language model-based document ranking approach to semantic tagging of documents, using the document to be tagged as a query into the GLM to retrieve candidate phrases from top-ranked related documents, thus associating every document with novel related concepts extracted from the text. For this we extend the word embedding-based general language model of Ganguly et al. (2015) to employ phrasal embeddings, and use the semantic tags thus obtained for downstream query expansion, both directly and in feedback-loop settings. Our method, evaluated using the TREC 2016 clinical decision support challenge dataset, shows statistically significant improvement not only over various baselines that use standard MeSH terms and UMLS concepts for query expansion, but also over baselines using human expert–assigned concept tags for the queries, run on top of a standard Okapi BM25–based document retrieval system.

pdf
Jointly Embedding Entities and Text with Distant Supervision
Denis Newman-Griffis | Albert M Lai | Eric Fosler-Lussier
Proceedings of the Third Workshop on Representation Learning for NLP

Learning representations for knowledge base entities and concepts is becoming increasingly important for NLP applications. However, recent entity embedding methods have relied on structured resources that are expensive to create for new domains and corpora. We present a distantly-supervised method for jointly learning embeddings of entities and text from an unannotated corpus, using only a list of mappings between entities and surface forms. We learn embeddings from open-domain and biomedical corpora, and compare against prior methods that rely on human-annotated text or large knowledge graph structure. Our embeddings capture entity similarity and relatedness better than prior work, both in existing biomedical datasets and a new Wikipedia-based dataset that we release to the community. Results on analogy completion and entity sense disambiguation indicate that entities and words capture complementary information that can be effectively combined for downstream use.

2017

pdf
Cross-Lingual Transfer Learning for POS Tagging without Cross-Lingual Resources
Joo-Kyung Kim | Young-Bum Kim | Ruhi Sarikaya | Eric Fosler-Lussier
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Training a POS tagging model with cross-lingual transfer learning usually requires linguistic knowledge and resources describing the relation between the source language and the target language. In this paper, we introduce a cross-lingual transfer learning model for POS tagging without ancillary resources such as parallel corpora. The proposed cross-lingual model utilizes a common BLSTM that enables knowledge transfer from other languages, and private BLSTMs for language-specific representations. The cross-lingual model is trained with language-adversarial training and bidirectional language modeling as auxiliary objectives to better represent language-general information while not losing the information about a specific target language. Evaluating on POS datasets from 14 languages in the Universal Dependencies corpus, we show that the proposed transfer learning model improves the POS tagging performance of the target languages without exploiting any linguistic knowledge between the source language and the target language.

pdf
Insights into Analogy Completion from the Biomedical Domain
Denis Newman-Griffis | Albert Lai | Eric Fosler-Lussier
BioNLP 2017

Analogy completion has been a popular task in recent years for evaluating the semantic properties of word embeddings, but the standard methodology makes a number of assumptions about analogies that do not always hold, either in recent benchmark datasets or when expanding into other domains. Through an analysis of analogies in the biomedical domain, we identify three assumptions: that of a Single Answer for any given analogy, that the pairs involved describe the Same Relationship, and that each pair is Informative with respect to the other. We propose modifying the standard methodology to relax these assumptions by allowing for multiple correct answers, reporting MAP and MRR in addition to accuracy, and using multiple example pairs. We further present BMASS, a novel dataset for evaluating linguistic regularities in biomedical embeddings, and demonstrate that the relationships described in the dataset pose significant semantic challenges to current word embedding methods.
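A minimal sketch of the relaxed evaluation, assuming a ranked candidate list and a set of acceptable answers per query, might compute MRR and accuracy as follows:

```python
# Sketch of the relaxed evaluation: score a ranked candidate list for an analogy
# query that admits multiple correct answers, reporting reciprocal rank of the
# first correct hit (MRR) alongside accuracy-at-1.
def reciprocal_rank(ranked: list, gold: set) -> float:
    for i, candidate in enumerate(ranked, start=1):
        if candidate in gold:
            return 1.0 / i
    return 0.0

def evaluate(queries):
    """queries: iterable of (ranked_candidates, gold_answer_set) pairs."""
    queries = list(queries)
    rrs = [reciprocal_rank(r, g) for r, g in queries]
    acc_at_1 = sum(1.0 for r, g in queries if r and r[0] in g) / len(queries)
    return {"MRR": sum(rrs) / len(rrs), "acc@1": acc_at_1}
```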

2016

pdf
Adjusting Word Embeddings with Semantic Intensity Orders
Joo-Kyung Kim | Marie-Catherine de Marneffe | Eric Fosler-Lussier
Proceedings of the 1st Workshop on Representation Learning for NLP

pdf
Identification, characterization, and grounding of gradable terms in clinical text
Chaitanya Shivade | Marie-Catherine de Marneffe | Eric Fosler-Lussier | Albert M. Lai
Proceedings of the 15th Workshop on Biomedical Natural Language Processing

2015

pdf
Interpreting Questions with a Log-Linear Ranking Model in a Virtual Patient Dialogue System
Evan Jaffe | Michael White | William Schuler | Eric Fosler-Lussier | Alex Rosenfeld | Douglas Danforth
Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications

pdf
Extending NegEx with Kernel Methods for Negation Detection in Clinical Text
Chaitanya Shivade | Marie-Catherine de Marneffe | Eric Fosler-Lussier | Albert M. Lai
Proceedings of the Second Workshop on Extra-Propositional Aspects of Meaning in Computational Semantics (ExProM 2015)

pdf
Neural word embeddings with multiplicative feature interactions for tensor-based compositions
Joo-Kyung Kim | Marie-Catherine de Marneffe | Eric Fosler-Lussier
Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing

pdf
Corpus-based discovery of semantic intensity scales
Chaitanya Shivade | Marie-Catherine de Marneffe | Eric Fosler-Lussier | Albert M. Lai
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

pdf
Cross-narrative Temporal Ordering of Medical Events
Preethi Raghavan | Eric Fosler-Lussier | Noémie Elhadad | Albert M. Lai
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

pdf
Associative and Semantic Features Extracted From Web-Harvested Corpora
Elias Iosif | Maria Giannoudaki | Eric Fosler-Lussier | Alexandros Potamianos
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

We address the problem of automatic classification of associative and semantic relations between words, and particularly those that hold between nouns. Lexical relations such as synonymy and hypernymy/hyponymy constitute the fundamental types of semantic relations. Associative relations are harder to define, since they include a long list of diverse relations, e.g., "Cause-Effect" and "Instrument-Agency". Motivated by findings from the literature of psycholinguistics and corpus linguistics, we propose features that take advantage of general linguistic properties. For evaluation, we merged three datasets assembled and validated by cognitive scientists. A proposed priming coefficient that measures the degree of asymmetry in the order of appearance of the words in text achieves the best classification results, followed by context-based similarity metrics. The web-based features achieve classification accuracy that exceeds 85%.
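The exact priming coefficient is defined in the paper; as one hedged way to operationalize "asymmetry in the order of appearance", the sketch below derives a simple score from directional co-occurrence counts.

```python
# The paper's exact priming coefficient is not reproduced here; this sketch
# computes a simple order-asymmetry score from directional co-occurrence counts
# as one way to operationalize "degree of asymmetry in order of appearance".
def directional_counts(corpus_sentences, w1, w2, window=10):
    before = after = 0
    for sent in corpus_sentences:
        toks = sent.lower().split()
        positions1 = [i for i, t in enumerate(toks) if t == w1]
        positions2 = [i for i, t in enumerate(toks) if t == w2]
        for i in positions1:
            for j in positions2:
                if 0 < j - i <= window:
                    before += 1      # w1 appears before w2
                elif 0 < i - j <= window:
                    after += 1       # w1 appears after w2
    return before, after

def order_asymmetry(corpus_sentences, w1, w2, window=10):
    b, a = directional_counts(corpus_sentences, w1, w2, window)
    return 0.0 if b + a == 0 else (b - a) / (b + a)
```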

pdf
Comparing human versus automatic feature extraction for fine-grained elementary readability assessment
Yi Ma | Ritu Singh | Eric Fosler-Lussier | Robert Lofthus
Proceedings of the First Workshop on Predicting and Improving Text Readability for target reader populations

pdf
Temporal Classification of Medical Events
Preethi Raghavan | Eric Fosler-Lussier | Albert Lai
BioNLP: Proceedings of the 2012 Workshop on Biomedical Natural Language Processing

pdf bib
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Eric Fosler-Lussier | Ellen Riloff | Srinivas Bangalore
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Ranking-based readability assessment for early primary children’s literature
Yi Ma | Eric Fosler-Lussier | Robert Lofthus
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Exploring Semi-Supervised Coreference Resolution of Medical Concepts using Semantic and Temporal Features
Preethi Raghavan | Eric Fosler-Lussier | Albert Lai
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Learning to Temporally Order Medical Events in Clinical Text
Preethi Raghavan | Albert Lai | Eric Fosler-Lussier
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2010

pdf
Investigations into the Crandem Approach to Word Recognition
Rohit Prabhavalkar | Preethi Jyothi | William Hartmann | Jeremy Morris | Eric Fosler-Lussier
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

2009

pdf
Using the Wiktionary Graph Structure for Synonym Detection
Timothy Weale | Chris Brew | Eric Fosler-Lussier
Proceedings of the 2009 Workshop on The People’s Web Meets NLP: Collaboratively Constructed Semantic Resources (People’s Web)

2008

pdf
Strategies for Teaching “Mixed” Computational Linguistics Classes
Eric Fosler-Lussier
Proceedings of the Third Workshop on Issues in Teaching Computational Linguistics

pdf
SCARE: a Situated Corpus with Annotated Referring Expressions
Laura Stoia | Darla Magdalene Shockley | Donna K. Byron | Eric Fosler-Lussier
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

Even though a wealth of speech data is available for the dialog systems research community, the particular field of situated language has yet to find an appropriate free resource. The corpus required to answer research questions related to situated language should connect world information to the human language. In this paper we report on the release of a corpus of English spontaneous instruction-giving situated dialogs. The corpus was collected using the Quake environment, a first-person virtual reality game, and consists of pairs of participants completing a direction-giver/direction-follower scenario. The corpus contains the collected audio and video, as well as word-aligned transcriptions and the positional/gaze information of the player. Referring expressions in the corpus are annotated with the IDs of their virtual world referents.

2007

pdf
Joint Versus Independent Phonological Feature Models within CRF Phone Recognition
Ilana Bromberg | Jeremy Morris | Eric Fosler-Lussier
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

2006

pdf
Sentence Planning for Realtime Navigational Instruction
Laura Stoia | Donna Byron | Darla Shockley | Eric Fosler-Lussier
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers

pdf
Noun Phrase Generation for Situated Dialogs
Laura Stoia | Darla Magdalene Shockley | Donna K. Byron | Eric Fosler-Lussier
Proceedings of the Fourth International Natural Language Generation Conference

pdf
The OSU Quake 2004 corpus of two-party situated problem-solving dialogs
Donna K. Byron | Eric Fosler-Lussier
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This report describes the Ohio State University Quake 2004 corpus of English spontaneous task-oriented two-person situated dialog. The corpus was collected using a first-person display of an interior space (rooms, corridors, stairs) in which the partners collaborate on a treasure hunt task. The corpus contains exciting new features such as deictic and exophoric reference, language that is calibrated against the spatial arrangement of objects in the world, and partial observability of the task world imposed by the perceptual limitations inherent in the physical arrangement of the world. The corpus differs from prior dialog collections which intentionally restricted the interacting subjects from sharing any perceptual context, and which allowed one subject (the direction-giver or system) to have total knowledge of the state of the task world. The corpus consists of audio/video recordings of each person's experience in the virtual world and orthographic transcriptions. The virtual world can also be used by other researchers who want to conduct additional studies using this stimulus.

2005

pdf
Robust Extraction of Subcategorization Data from Spoken Language
Jianguo Li | Chris Brew | Eric Fosler-Lussier
Proceedings of the Ninth International Workshop on Parsing Technology

pdf
A Cost-Benefit Analysis of Hybrid Phone-Manner Representations for ASR
Eric Fosler-Lussier | C. Anton Rytting
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2003

pdf
Discourse Segmentation of Multi-Party Conversation
Michel Galley | Kathleen R. McKeown | Eric Fosler-Lussier | Hongyan Jing
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

1996

pdf
On Reversing the Generation Process in Optimality Theory
J. Eric Fosler
34th Annual Meeting of the Association for Computational Linguistics

1995

pdf bib
Learning Phonological Rule Probabilities from Speech Corpora with Exploratory Computational Phonology
Gary Tajchman | Daniel Jurafsky | Eric Fosler
33rd Annual Meeting of the Association for Computational Linguistics