Jacob Eisenstein


2021

Sparse, Dense, and Attentional Representations for Text Retrieval
Yi Luan | Jacob Eisenstein | Kristina Toutanova | Michael Collins
Transactions of the Association for Computational Linguistics, Volume 9

Dual encoders perform retrieval by encoding documents and queries into dense low-dimensional vectors, scoring each document by its inner product with the query. We investigate the capacity of this architecture relative to sparse bag-of-words models and attentional neural networks. Using both theoretical and empirical analysis, we establish connections between the encoding dimension, the margin between gold and lower-ranked documents, and the document length, suggesting limitations in the capacity of fixed-length encodings to support precise retrieval of long documents. Building on these insights, we propose a simple neural model that combines the efficiency of dual encoders with some of the expressiveness of more costly attentional architectures, and explore sparse-dense hybrids to capitalize on the precision of sparse retrieval. These models outperform strong alternatives in large-scale retrieval.
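To make the scoring rule concrete, here is a minimal sketch of dual-encoder retrieval in Python (random vectors stand in for learned encoders; names and dimensions are illustrative, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 128                                      # fixed encoding dimension
    doc_encodings = rng.normal(size=(1000, d))   # stand-in for encode(document)
    query_encoding = rng.normal(size=d)          # stand-in for encode(query)

    scores = doc_encodings @ query_encoding      # one inner product per document
    top10 = np.argsort(-scores)[:10]             # indices of highest-scoring docs

The paper's analysis relates the dimension d to the ranking margin that such fixed-length encodings can guarantee as documents grow longer.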

On Writing a Textbook on Natural Language Processing
Jacob Eisenstein
Proceedings of the Fifth Workshop on Teaching NLP

There are thousands of papers about natural language processing and computational linguistics, but very few textbooks. I describe the motivation and process for writing a college textbook on natural language processing, and offer advice and encouragement for readers who may be interested in writing a textbook of their own.

¿Tuiteamos o pongamos un tuit? Investigating the Social Constraints of Loanword Integration in Spanish Social Media
Ian Stewart | Diyi Yang | Jacob Eisenstein
Proceedings of the Society for Computation in Linguistics 2021

Will it Unblend?
Yuval Pinter | Cassandra L. Jacobs | Jacob Eisenstein
Proceedings of the Society for Computation in Linguistics 2021

Learning to Recognize Dialect Features
Dorottya Demszky | Devyani Sharma | Jonathan Clark | Vinodkumar Prabhakaran | Jacob Eisenstein
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Building NLP systems that serve everyone requires accounting for dialect differences. But dialects are not monolithic entities: rather, distinctions between and within dialects are captured by the presence, absence, and frequency of dozens of dialect features in speech and text, such as the deletion of the copula in “He ∅ running”. In this paper, we introduce the task of dialect feature detection, and present two multitask learning approaches, both based on pretrained transformers. For most dialects, large-scale annotated corpora for these features are unavailable, making it difficult to train recognizers. We train our models on a small number of minimal pairs, building on how linguists typically define dialect features. Evaluation on a test set of 22 dialect features of Indian English demonstrates that these models learn to recognize many features with high accuracy, and that a few minimal pairs can be as effective for training as thousands of labeled examples. We also demonstrate the downstream applicability of dialect feature detection both as a measure of dialect density and as a dialect classifier.

Proceedings of the First Workshop on Causal Inference and NLP
Amir Feder | Katherine Keith | Emaad Manzoor | Reid Pryzant | Dhanya Sridhar | Zach Wood-Doughty | Jacob Eisenstein | Justin Grimmer | Roi Reichart | Molly Roberts | Uri Shalit | Brandon Stewart | Victor Veitch | Diyi Yang
Proceedings of the First Workshop on Causal Inference and NLP

2020

Will it Unblend?
Yuval Pinter | Cassandra L. Jacobs | Jacob Eisenstein
Findings of the Association for Computational Linguistics: EMNLP 2020

Natural language processing systems often struggle with out-of-vocabulary (OOV) terms, which do not appear in training data. Blends, such as “innoventor”, are one particularly challenging class of OOV, as they are formed by fusing together two or more bases that relate to the intended meaning in unpredictable manners and degrees. In this work, we run experiments on a novel dataset of English OOV blends to quantify the difficulty of interpreting the meanings of blends by large-scale contextual language models such as BERT. We first show that BERT’s processing of these blends does not fully access the component meanings, leaving their contextual representations semantically impoverished. We find this is mostly due to the loss of characters resulting from blend formation. Then, we assess how easily different models can recognize the structure and recover the origin of blends, and find that context-aware embedding systems outperform character-level and context-free embeddings, although their results are still far from satisfactory.

AdvAug: Robust Adversarial Augmentation for Neural Machine Translation
Yong Cheng | Lu Jiang | Wolfgang Macherey | Jacob Eisenstein
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

In this paper, we propose a new adversarial augmentation method for Neural Machine Translation (NMT). The main idea is to minimize the vicinal risk over virtual sentences sampled from two vicinity distributions, of which the crucial one is a novel vicinity distribution for adversarial sentences that describes a smooth interpolated embedding space centered around observed training sentence pairs. We then discuss our approach, AdvAug, to train NMT models using the embeddings of virtual sentences in sequence-to-sequence learning. Experiments on Chinese-English, English-French, and English-German translation benchmarks show that AdvAug achieves significant improvements over the Transformer (up to 4.9 BLEU points), and substantially outperforms other data augmentation techniques (e.g., back-translation) without using extra corpora.
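A minimal sketch of the interpolation idea in Python, assuming a mixup-style convex combination of sentence embeddings (illustrative shapes; the paper interpolates within adversarial neighborhoods of observed sentence pairs):

    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, emb_dim = 20, 512
    emb_a = rng.normal(size=(seq_len, emb_dim))   # embeddings of sentence A
    emb_b = rng.normal(size=(seq_len, emb_dim))   # embeddings of sentence B

    lam = rng.beta(4.0, 4.0)                      # mixing ratio
    virtual = lam * emb_a + (1.0 - lam) * emb_b   # a "virtual sentence"
    # `virtual` replaces a real sentence's embeddings in seq2seq training.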

2019

The Referential Reader: A Recurrent Entity Network for Anaphora Resolution
Fei Liu | Luke Zettlemoyer | Jacob Eisenstein
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

We present a new architecture for storing and accessing entity mentions during online text processing. While reading the text, entity references are identified, and may be stored by either updating or overwriting a cell in a fixed-length memory. The update operation implies coreference with the other mentions that are stored in the same cell; the overwrite operation causes these mentions to be forgotten. By encoding the memory operations as differentiable gates, it is possible to train the model end-to-end, using both a supervised anaphora resolution objective as well as a supplementary language modeling objective. Evaluation on a dataset of pronoun-name anaphora demonstrates strong performance with purely incremental text processing.
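A minimal sketch of the gated memory operations in Python (hand-set gate values for illustration; in the model the gates are differentiable functions of the reader state, which is what makes end-to-end training possible):

    import numpy as np

    num_cells, dim = 4, 64
    memory = np.zeros((num_cells, dim))               # fixed-length entity memory
    mention = np.random.default_rng(0).normal(size=dim)

    u = np.array([0.8, 0.0, 0.0, 0.0])                # update gates per cell
    o = np.array([0.0, 0.7, 0.0, 0.0])                # overwrite gates per cell

    memory = ((1 - u - o)[:, None] * memory           # retained contents
              + u[:, None] * (memory + mention) / 2   # update: implies coreference
              + o[:, None] * mention)                 # overwrite: mentions forgotten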

Correcting Whitespace Errors in Digitized Historical Texts
Sandeep Soni | Lauren Klein | Jacob Eisenstein
Proceedings of the 3rd Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature

Whitespace errors are common to digitized archives. This paper describes a lightweight unsupervised technique for recovering the original whitespace. Our approach is based on count statistics from Google n-grams, which are converted into a likelihood ratio test computed from interpolated trigram and bigram probabilities. To evaluate this approach, we annotate a small corpus of whitespace errors in a digitized corpus of newspapers from the 19th century United States. Our technique identifies and corrects most whitespace errors while introducing a minimal amount of oversegmentation: it achieves 77% recall at a false positive rate of less than 1%, and 91% recall at a false positive rate of less than 3%.
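A minimal sketch of the likelihood-ratio decision in Python (toy unigram counts stand in for the Google n-gram statistics; the paper interpolates bigram and trigram probabilities):

    import math
    from collections import Counter

    counts = Counter({"the": 50, "rail": 5, "road": 20, "railroad": 8})
    total = sum(counts.values())

    def prob(word, alpha=0.9):
        # interpolate relative frequency with a small uniform floor
        return alpha * counts[word] / total + (1 - alpha) / (len(counts) + 1)

    def split_llr(joined, left, right):
        # log likelihood ratio for inserting a space; positive favors the split
        return math.log(prob(left) * prob(right)) - math.log(prob(joined))

    print(split_llr("theroad", "the", "road"))    # > 0: recover the missing space
    print(split_llr("railroad", "rail", "road"))  # < 0: leave the real word intact

Tuning the decision threshold on this ratio trades recall against the false positive rates reported above.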

Character Eyes: Seeing Language through Character-Level Taggers
Yuval Pinter | Marc Marone | Jacob Eisenstein
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

Character-level models have been used extensively in recent years in NLP tasks as both supplements and replacements for closed-vocabulary token-level word representations. In one popular architecture, character-level LSTMs are used to feed token representations into a sequence tagger predicting token-level annotations such as part-of-speech (POS) tags. In this work, we examine the behavior of POS taggers across languages from the perspective of individual hidden units within the character LSTM. We aggregate the behavior of these units into language-level metrics which quantify the challenges that taggers face on languages with different morphological properties, and identify links between synthesis and affixation preference and emergent behavior of the hidden tagger layer. In a comparative experiment, we show how modifying the balance between forward and backward hidden units affects model arrangement and performance in these types of languages.

Clinical Concept Extraction for Document-Level Coding
Sarah Wiegreffe | Edward Choi | Sherry Yan | Jimeng Sun | Jacob Eisenstein
Proceedings of the 18th BioNLP Workshop and Shared Task

The text of clinical notes can be a valuable source of patient information and clinical assessments. Historically, the primary approach for exploiting clinical notes has been information extraction: linking spans of text to concepts in a detailed domain ontology. However, recent work has demonstrated the potential of supervised machine learning to extract document-level codes directly from the raw text of clinical notes. We propose to bridge the gap between the two approaches with two novel syntheses: (1) treating extracted concepts as features, which are used to supplement or replace the text of the note; (2) treating extracted concepts as labels, which are used to learn a better representation of the text. Unfortunately, the resulting concepts do not yield performance gains on the document-level clinical coding task. We explore possible explanations and future research directions.

Measuring and Modeling Language Change
Jacob Eisenstein
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials

This tutorial is designed to help researchers answer the following sorts of questions:
- Are people happier on the weekend?
- What was 1861’s word of the year?
- Are Democrats and Republicans more different than ever?
- When did “gay” stop meaning “happy”?
- Are gender stereotypes getting weaker, stronger, or just different?
- Who is a linguistic leader?
- How can we get internet users to be more polite and objective?
Such questions are fundamental to the social sciences and humanities, and scholars in these disciplines are increasingly turning to computational techniques for answers. Meanwhile, the ACL community is increasingly engaged with data that varies across time, and with the social insights that can be offered by analyzing temporal patterns and trends. The purpose of this tutorial is to facilitate this convergence in two main ways:
1. By synthesizing recent computational techniques for handling and modeling temporal data, such as dynamic word embeddings, the tutorial will provide a starting point for future computational research. It will also identify useful tools for social scientists and digital humanities scholars.
2. The tutorial will provide an overview of techniques and datasets from the quantitative social sciences and the digital humanities, which are not well-known in the computational linguistics community. These techniques include vector autoregressive models, multiple comparisons corrections for hypothesis testing, and causal inference. Datasets include historical newspaper archives and corpora of contemporary political speech.

Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling
Xiaochuang Han | Jacob Eisenstein
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Contextualized word embeddings such as ELMo and BERT provide a foundation for strong performance across a wide range of natural language processing tasks by pretraining on large corpora of unlabeled text. However, the applicability of this approach is unknown when the target domain varies substantially from the pretraining corpus. We are specifically interested in the scenario in which labeled data is available in only a canonical source domain such as news text, and the target domain is distinct from both the labeled and pretraining texts. To address this scenario, we propose domain-adaptive fine-tuning, in which the contextualized embeddings are adapted by masked language modeling on text from the target domain. We test this approach on sequence labeling in two challenging domains: Early Modern English and Twitter. Both domains differ substantially from existing pretraining corpora, and domain-adaptive fine-tuning yields substantial improvements over strong BERT baselines, with particularly impressive results on out-of-vocabulary words. We conclude that domain-adaptive fine-tuning offers a simple and effective approach for the unsupervised adaptation of sequence labeling to difficult new domains.
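A minimal sketch of domain-adaptive fine-tuning, assuming the Hugging Face transformers library (not the authors' release; the target-domain corpus below is a placeholder):

    import torch
    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                               mlm_probability=0.15)

    target_texts = ["unlabeled text from the target domain ..."]  # placeholder
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    for text in target_texts:                 # one pass, batch size 1 for brevity
        batch = collator([tokenizer(text, truncation=True)])  # random masking
        loss = model(**batch).loss            # masked language modeling loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    # The adapted encoder is then fine-tuned on labeled source-domain sequences.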

Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation
Vladimir Karpukhin | Omer Levy | Jacob Eisenstein | Marjan Ghazvininejad
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)

Contemporary machine translation systems achieve greater coverage by applying subword models such as BPE and character-level CNNs, but these methods are highly sensitive to orthographical variations such as spelling mistakes. We show how training on a mild amount of random synthetic noise can dramatically improve robustness to these variations, without diminishing performance on clean text. We focus on translation performance on natural typos, and show that robustness to such noise can be achieved using a balanced diet of simple synthetic noises at training time, without access to the natural noise data or distribution.
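A minimal sketch of the kind of synthetic noise involved (noise types and rates here are illustrative, not the paper's exact recipe):

    import random
    random.seed(0)

    def noise_word(word, p=0.1):
        # with probability p, apply one random character-level corruption
        if len(word) < 3 or random.random() > p:
            return word
        chars = list(word)
        i = random.randrange(len(chars) - 1)
        op = random.choice(["swap", "drop", "repeat"])
        if op == "swap":                      # transpose adjacent characters
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        elif op == "drop":                    # delete a character
            del chars[i]
        else:                                 # double a character
            chars.insert(i, chars[i])
        return "".join(chars)

    print(" ".join(noise_word(w) for w in "the quick brown fox jumps".split()))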

2018

Predicting Semantic Relations using Global Graph Properties
Yuval Pinter | Jacob Eisenstein
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Semantic graphs, such as WordNet, are resources which curate natural language on two distinguishable layers. On the local level, individual relations between synsets (semantic building blocks) such as hypernymy and meronymy enhance our understanding of the words used to express their meanings. Globally, analysis of graph-theoretic properties of the entire net sheds light on the structure of human language as a whole. In this paper, we combine global and local properties of semantic graphs through the framework of Max-Margin Markov Graph Models (M3GM), a novel extension of Exponential Random Graph Model (ERGM) that scales to large multi-relational graphs. We demonstrate how such global modeling improves performance on the local task of predicting semantic relations between synsets, yielding new state-of-the-art results on the WN18RR dataset, a challenging version of WordNet link prediction in which “easy” reciprocal cases are removed. In addition, the M3GM model identifies multi-relational motifs that are characteristic of well-formed lexical semantic ontologies.
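A minimal sketch of the local-plus-global scoring idea in Python (toy motif statistics; the actual M3GM features and max-margin training are more involved):

    import numpy as np

    def motif_features(adj):
        # toy global statistics: total edges and reciprocal (2-cycle) pairs
        return np.array([adj.sum(), (adj * adj.T).sum() / 2])

    def edge_score(adj, i, j, local_score, motif_weights):
        # score a candidate edge by its local model score plus the weighted
        # change it induces in the graph's global motif counts
        with_edge = adj.copy()
        with_edge[i, j] = 1
        delta = motif_features(with_edge) - motif_features(adj)
        return local_score + motif_weights @ delta

    adj = np.zeros((5, 5))
    print(edge_score(adj, 0, 1, local_score=1.3,
                     motif_weights=np.array([0.2, -1.0])))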

Making “fetch” happen: The influence of social and linguistic context on nonstandard word growth and decline
Ian Stewart | Jacob Eisenstein
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

In an online community, new words come and go: today’s “haha” may be replaced by tomorrow’s “lol.” Changes in online writing are usually studied as a social process, with innovations diffusing through a network of individuals in a speech community. But unlike other types of innovation, language change is shaped and constrained by the grammatical system in which it takes part. To investigate the role of social and structural factors in language change, we undertake a large-scale analysis of the frequencies of non-standard words in Reddit. Dissemination across many linguistic contexts is a predictor of success: words that appear in more linguistic contexts grow faster and survive longer. Furthermore, social dissemination plays a less important role in explaining word growth and decline than previously hypothesized.

Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts
Yoav Artzi | Jacob Eisenstein
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts

Explainable Prediction of Medical Codes from Clinical Text
James Mullenbach | Sarah Wiegreffe | Jon Duke | Jimeng Sun | Jacob Eisenstein
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Clinical notes are text documents that are created by clinicians for each patient encounter. They are typically accompanied by medical codes, which describe the diagnosis and treatment. Annotating these codes is labor intensive and error prone; furthermore, the connection between the codes and the text is not annotated, obscuring the reasons and details behind specific diagnoses and treatments. We present an attentional convolutional network that predicts medical codes from clinical text. Our method aggregates information across the document using a convolutional neural network, and uses an attention mechanism to select the most relevant segments for each of the thousands of possible codes. The method is accurate, achieving precision@8 of 0.71 and a Micro-F1 of 0.54, which are both better than the prior state of the art. Furthermore, through an interpretability evaluation by a physician, we show that the attention mechanism identifies meaningful explanations for each code assignment.
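A minimal sketch of per-label attention over convolutional features in PyTorch (shapes are illustrative; the real label space has thousands of codes):

    import torch

    tokens, conv_dim, num_codes = 500, 50, 100
    H = torch.randn(tokens, conv_dim)          # CNN outputs, one row per position
    U = torch.randn(num_codes, conv_dim)       # one attention query per code
    B = torch.randn(num_codes, conv_dim)       # one output vector per code

    alpha = torch.softmax(U @ H.T, dim=1)      # per-code attention over positions
    V = alpha @ H                              # code-specific document vectors
    probs = torch.sigmoid((B * V).sum(dim=1))  # one probability per code

The attention weights alpha are what the interpretability evaluation inspects: for each assigned code, the highest-weighted text segments serve as the explanation.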

Sí o no, què penses? Catalonian Independence and Linguistic Identity on Social Media
Ian Stewart | Yuval Pinter | Jacob Eisenstein
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

Political identity is often manifested in language variation, but the relationship between the two is still relatively unexplored from a quantitative perspective. This study examines the use of Catalan, a language local to the semi-autonomous region of Catalonia in Spain, on Twitter in discourse related to the 2017 independence referendum. We corroborate prior findings that pro-independence tweets are more likely to include the local language than anti-independence tweets. We also find that Catalan is used more often in referendum-related discourse than in other contexts, contrary to prior findings on language variation. This suggests a strong role for the Catalan language in the expression of Catalonian political identity.

Stylistic Variation in Social Media Part-of-Speech Tagging
Murali Raghu Babu Balusu | Taha Merghani | Jacob Eisenstein
Proceedings of the Second Workshop on Stylistic Variation

Social media features substantial stylistic variation, raising new challenges for syntactic analysis of online writing. However, this variation is often aligned with author attributes such as age, gender, and geography, as well as more readily-available social network metadata. In this paper, we report new evidence on the link between language and social networks in the task of part-of-speech tagging. We find that tagger error rates are correlated with network structure, with high accuracy in some parts of the network, and lower accuracy elsewhere. As a result, tagger accuracy depends on training from a balanced sample of the network, rather than training on texts from a narrow subcommunity. We also describe our attempts to add robustness to stylistic variation, by building a mixture-of-experts model in which each expert is associated with a region of the social network. While prior work found that similar approaches yield performance improvements in sentiment analysis and entity linking, we were unable to obtain performance improvements in part-of-speech tagging, despite strong evidence for the link between part-of-speech error rates and social network structure.

Interactional Stancetaking in Online Forums
Scott F. Kiesling | Umashanthi Pavalanathan | Jim Fitzpatrick | Xiaochuang Han | Jacob Eisenstein
Computational Linguistics, Volume 44, Issue 4 - December 2018

Language is shaped by the relationships between the speaker/writer and the audience, the object of discussion, and the talk itself. In turn, language is used to reshape these relationships over the course of an interaction. Computational researchers have succeeded in operationalizing sentiment, formality, and politeness, but each of these constructs captures only some aspects of social and relational meaning. Theories of interactional stancetaking have been put forward as holistic accounts, but until now, these theories have been applied only through detailed qualitative analysis of (portions of) a few individual conversations. In this article, we propose a new computational operationalization of interpersonal stancetaking. We begin with annotations of three linked stance dimensions—affect, investment, and alignment—on 68 conversation threads from the online platform Reddit. Using these annotations, we investigate thread structure and linguistic properties of stancetaking in online conversations. We identify lexical features that characterize the extremes along each stancetaking dimension, and show that these stancetaking properties can be predicted with moderate accuracy from bag-of-words features, even with a relatively small labeled training set. These quantitative analyses are supplemented by extensive qualitative analysis, highlighting the compatibility of computational and qualitative methods in synthesizing evidence about the creation of interactional meaning.

2017

A Kernel Independence Test for Geographical Language Variation
Dong Nguyen | Jacob Eisenstein
Computational Linguistics, Volume 43, Issue 3 - September 2017

Quantifying the degree of spatial dependence for linguistic variables is a key task for analyzing dialectal variation. However, existing approaches have important drawbacks. First, they are based on parametric models of dependence, which limits their power in cases where the underlying parametric assumptions are violated. Second, they are not applicable to all types of linguistic data: Some approaches apply only to frequencies, others to boolean indicators of whether a linguistic variable is present. We present a new method for measuring geographical language variation, which solves both of these problems. Our approach builds on Reproducing Kernel Hilbert Space (RKHS) representations for nonparametric statistics, and takes the form of a test statistic that is computed from pairs of individual geotagged observations without aggregation into predefined geographical bins. We compare this test with prior work using synthetic data as well as a diverse set of real data sets: a corpus of Dutch tweets, a Dutch syntactic atlas, and a data set of letters to the editor in North American newspapers. Our proposed test is shown to support robust inferences across a broad range of scenarios and types of data.
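A minimal sketch of the statistic's flavor in Python: an HSIC-style kernel dependence measure computed from pairwise kernels over individual geotagged observations, with no spatial binning (toy Gaussian kernels; the published test also supplies a calibrated null distribution for significance):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    geo = rng.uniform(size=(n, 2))                     # coordinates per observation
    ling = geo[:, :1] + 0.1 * rng.normal(size=(n, 1))  # a spatially dependent variable

    def gram(X, gamma=1.0):
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)                     # Gaussian kernel matrix

    K, L, H = gram(geo), gram(ling), np.eye(n) - np.ones((n, n)) / n
    hsic = np.trace(K @ H @ L @ H) / (n - 1) ** 2      # larger => more dependence
    print(hsic)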

Mimicking Word Embeddings using Subword RNNs
Yuval Pinter | Robert Guthrie | Jacob Eisenstein
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Word embeddings improve generalization over lexical features by placing each word in a lower-dimensional space, using distributional information obtained from unlabeled data. However, the effectiveness of word embeddings for downstream NLP tasks is limited by out-of-vocabulary (OOV) words, for which embeddings do not exist. In this paper, we present MIMICK, an approach to generating OOV word embeddings compositionally, by learning a function from spellings to distributional embeddings. Unlike prior work, MIMICK does not require re-training on the original word embedding corpus; instead, learning is performed at the type level. Intrinsic and extrinsic evaluations demonstrate the power of this simple approach. On 23 languages, MIMICK improves performance over a word-based baseline for tagging part-of-speech and morphosyntactic attributes. It is competitive with (and complementary to) a supervised character-based model in low resource settings.
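A minimal sketch of the MIMICK objective in PyTorch (toy character encoding, dimensions, and loss; not the released code):

    import torch
    import torch.nn as nn

    char_vocab, char_dim, hidden, emb_dim = 128, 20, 50, 100
    embed_chars = nn.Embedding(char_vocab, char_dim)
    encoder = nn.LSTM(char_dim, hidden, batch_first=True)
    project = nn.Linear(hidden, emb_dim)

    def mimick(word):
        # predict a word's embedding from its spelling alone
        ids = torch.tensor([[min(ord(c), char_vocab - 1) for c in word]])
        _, (h, _) = encoder(embed_chars(ids))
        return project(h[-1]).squeeze(0)

    pretrained = torch.randn(emb_dim)          # stand-in for a trained embedding
    loss = nn.functional.mse_loss(mimick("dog"), pretrained)  # type-level loss
    loss.backward()

At test time, mimick() supplies embeddings for OOV words never seen in the original embedding corpus.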

A Multidimensional Lexicon for Interpersonal Stancetaking
Umashanthi Pavalanathan | Jim Fitzpatrick | Scott Kiesling | Jacob Eisenstein
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The sociolinguistic construct of stancetaking describes the activities through which discourse participants create and signal relationships to their interlocutors, to the topic of discussion, and to the talk itself. Stancetaking underlies a wide range of interactional phenomena, relating to formality, politeness, affect, and subjectivity. We present a computational approach to stancetaking, in which we build a theoretically-motivated lexicon of stance markers, and then use multidimensional analysis to identify a set of underlying stance dimensions. We validate these dimensions intrinsically and extrinsically, showing that they are internally coherent, match pre-registered hypotheses, and correlate with social phenomena.

Overcoming Language Variation in Sentiment Analysis with Social Attention
Yi Yang | Jacob Eisenstein
Transactions of the Association for Computational Linguistics, Volume 5

Variation in language is ubiquitous, particularly in newer forms of writing such as social media. Fortunately, variation is not random; it is often linked to social properties of the author. In this paper, we show how to exploit social networks to make sentiment analysis more robust to social language variation. The key idea is linguistic homophily: the tendency of socially linked individuals to use language in similar ways. We formalize this idea in a novel attention-based neural network architecture, in which attention is divided among several basis models, depending on the author’s position in the social network. This has the effect of smoothing the classification function across the social network, and makes it possible to induce personalized classifiers even for authors for whom there is no labeled data or demographic metadata. This model significantly improves the accuracies of sentiment analysis on Twitter and on review data.
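A minimal sketch of the attention over basis models in PyTorch (illustrative shapes; binary sentiment for simplicity):

    import torch

    num_basis, author_dim, feat_dim = 4, 16, 300
    basis = torch.randn(num_basis, feat_dim)      # one linear sentiment model each
    attn_proj = torch.randn(num_basis, author_dim)

    author = torch.randn(author_dim)              # author's network embedding
    text = torch.randn(feat_dim)                  # features of the message

    attn = torch.softmax(attn_proj @ author, dim=0)  # author-specific mixture
    prob = torch.sigmoid(attn @ (basis @ text))      # smoothed prediction

Because the attention depends only on the author's network position, the same mixture can be computed for authors with no labeled data or metadata.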

2016

A Latent Variable Recurrent Neural Network for Discourse-Driven Language Models
Yangfeng Ji | Gholamreza Haffari | Jacob Eisenstein
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Part-of-Speech Tagging for Historical English
Yi Yang | Jacob Eisenstein
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Proceedings of the First Workshop on NLP and Computational Social Science
David Bamman | A. Seza Doğruöz | Jacob Eisenstein | Dirk Hovy | David Jurgens | Brendan O’Connor | Alice Oh | Oren Tsur | Svitlana Volkova
Proceedings of the First Workshop on NLP and Computational Social Science

Nonparametric Bayesian Storyline Detection from Microtexts
Vinodh Krishnan | Jacob Eisenstein
Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016)

A Joint Model of Rhetorical Discourse Structure and Summarization
Naman Goyal | Jacob Eisenstein
Proceedings of the Workshop on Structured Prediction for NLP

Morphological Priors for Probabilistic Neural Word Embeddings
Parminder Bhatia | Robert Guthrie | Jacob Eisenstein
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

Toward Socially-Infused Information Extraction: Embedding Authors, Mentions, and Entities
Yi Yang | Ming-Wei Chang | Jacob Eisenstein
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2015

One Vector is Not Enough: Entity-Augmented Distributed Semantics for Discourse Relations
Yangfeng Ji | Jacob Eisenstein
Transactions of the Association for Computational Linguistics, Volume 3

Discourse relations bind smaller linguistic units into coherent texts. Automatically identifying discourse relations is difficult, because it requires understanding the semantics of the linked arguments. A more subtle challenge is that it is not enough to represent the meaning of each argument of a discourse relation, because the relation may depend on links between lower-level components, such as entity mentions. Our solution computes distributed meaning representations for each discourse argument by composition up the syntactic parse tree. We also perform a downward compositional pass to capture the meaning of coreferent entity mentions. Implicit discourse relations are then predicted from these two representations, obtaining substantial improvements on the Penn Discourse Treebank.
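A minimal sketch of the upward compositional pass in Python (a toy tanh combiner; the paper learns the composition and adds a downward pass for coreferent entity mentions):

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 50
    W = rng.normal(scale=0.1, size=(dim, 2 * dim))    # binary composition matrix

    def compose(node, vecs):
        if isinstance(node, str):                     # leaf: look up the word
            return vecs[node]
        left, right = (compose(child, vecs) for child in node)
        return np.tanh(W @ np.concatenate([left, right]))

    vecs = {w: rng.normal(size=dim) for w in ["the", "vote", "failed"]}
    arg = compose((("the", "vote"), "failed"), vecs)  # one argument's representation
    # Two such argument vectors feed the implicit discourse relation classifier.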

Confounds and Consequences in Geotagged Twitter Data
Umashanthi Pavalanathan | Jacob Eisenstein
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Better Document-level Sentiment Analysis from RST Discourse Parsing
Parminder Bhatia | Yangfeng Ji | Jacob Eisenstein
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Closing the Gap: Domain Adaptation from Explicit to Implicit Discourse Relations
Yangfeng Ji | Gongbo Zhang | Jacob Eisenstein
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Unsupervised Multi-Domain Adaptation with Feature Embeddings
Yi Yang | Jacob Eisenstein
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

“You’re Mr. Lebowski, I’m the Dude”: Inducing Address Term Formality in Signed Social Networks
Vinodh Krishnan | Jacob Eisenstein
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science
Cristian Danescu-Niculescu-Mizil | Jacob Eisenstein | Kathleen McKeown | Noah A. Smith
Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science

Mining Themes and Interests in the Asperger’s and Autism Community
Yangfeng Ji | Hwajung Hong | Rosa Arriaga | Agata Rozga | Gregory Abowd | Jacob Eisenstein
Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality

Representation Learning for Text-level Discourse Parsing
Yangfeng Ji | Jacob Eisenstein
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

POS induction with distributional and morphological information using a distance-dependent Chinese restaurant process
Kairit Sirts | Jacob Eisenstein | Micha Elsner | Sharon Goldwater
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Modeling Factuality Judgments in Social Media Text
Sandeep Soni | Tanushree Mitra | Eric Gilbert | Jacob Eisenstein
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Fast Easy Unsupervised Domain Adaptation with Marginalized Structured Dropout
Yi Yang | Jacob Eisenstein
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2013

A Log-Linear Model for Unsupervised Text Normalization
Yi Yang | Jacob Eisenstein
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Discriminative Improvements to Distributional Sentence Similarity
Yangfeng Ji | Jacob Eisenstein
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

What to do about bad language on the internet
Jacob Eisenstein
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Discourse Connectors for Latent Subjectivity in Sentiment Analysis
Rakshit Trivedi | Jacob Eisenstein
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Phonological Factors in Social Media Writing
Jacob Eisenstein
Proceedings of the Workshop on Language Analysis in Social Media

2012

Bootstrapping a Unified Model of Lexical and Phonetic Acquisition
Micha Elsner | Sharon Goldwater | Jacob Eisenstein
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Tutorial Abstracts at the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Radu Florian | Jacob Eisenstein
Tutorial Abstracts at the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2011

Discovering Sociolinguistic Associations with Structured Sparsity
Jacob Eisenstein | Noah A. Smith | Eric P. Xing
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Part-of-Speech Tagging for Twitter: Annotation, Features, and Experiments
Kevin Gimpel | Nathan Schneider | Brendan O’Connor | Dipanjan Das | Daniel Mills | Jacob Eisenstein | Michael Heilman | Dani Yogatama | Jeffrey Flanigan | Noah A. Smith
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Structured Databases of Named Entities from Bayesian Nonparametrics
Jacob Eisenstein | Tae Yano | William Cohen | Noah Smith | Eric Xing
Proceedings of the First workshop on Unsupervised Learning in NLP

2010

A Latent Variable Model for Geographic Lexical Variation
Jacob Eisenstein | Brendan O’Connor | Noah A. Smith | Eric P. Xing
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Social Links from Latent Topics in Microblogs
Kriti Puniyani | Jacob Eisenstein | Shay B. Cohen | Eric Xing
Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics in a World of Social Media

2009

Adding More Languages Improves Unsupervised Multilingual Part-of-Speech Tagging: a Bayesian Non-Parametric Approach
Benjamin Snyder | Tahira Naseem | Jacob Eisenstein | Regina Barzilay
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Hierarchical Text Segmentation from Multi-Scale Lexical Cohesion
Jacob Eisenstein
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Reading to Learn: Constructing Features from Semantic Abstracts
Jacob Eisenstein | James Clarke | Dan Goldwasser | Dan Roth
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

2008

Bayesian Unsupervised Topic Segmentation
Jacob Eisenstein | Regina Barzilay
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

Unsupervised Multilingual Learning for POS Tagging
Benjamin Snyder | Tahira Naseem | Jacob Eisenstein | Regina Barzilay
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

Learning Document-Level Semantic Properties from Free-Text Annotations
S.R.K. Branavan | Harr Chen | Jacob Eisenstein | Regina Barzilay
Proceedings of ACL-08: HLT

Gestural Cohesion for Topic Segmentation
Jacob Eisenstein | Regina Barzilay | Randall Davis
Proceedings of ACL-08: HLT

2007

Conditional Modality Fusion for Coreference Resolution
Jacob Eisenstein | Randall Davis
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

2006

Gesture Improves Coreference Resolution
Jacob Eisenstein | Randall Davis
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers

Semantic Back-Pointers from Gesture
Jacob Eisenstein
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Doctoral Consortium

2004

A Salience-Based Approach to Gesture-Speech Alignment
Jacob Eisenstein | C. Mario Christoudias
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004
