Daniel Hardt


2021

Ellipsis Resolution as Question Answering: An Evaluation
Rahul Aralikatte | Matthew Lamm | Daniel Hardt | Anders Søgaard
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Most, if not all, forms of ellipsis (e.g., so does Mary) are similar to reading comprehension questions (what does Mary do), in that in order to resolve them, we need to identify an appropriate text span in the preceding discourse. Following this observation, we present an alternative approach for English ellipsis resolution relying on architectures developed for question answering (QA). We present both single-task models and joint models trained on auxiliary QA and coreference resolution datasets, clearly outperforming the current state of the art for Sluice Ellipsis (from 70.00 to 86.01 F1) and Verb Phrase Ellipsis (from 72.89 to 78.66 F1).

Universal Joy: A Data Set and Results for Classifying Emotions Across Languages
Sotiris Lamprinidis | Federico Bianchi | Daniel Hardt | Dirk Hovy
Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis

While emotions are universal aspects of human psychology, they are expressed differently across different languages and cultures. We introduce a new data set of over 530k anonymized public Facebook posts across 18 languages, labeled with five different emotions. Using multilingual BERT embeddings, we show that emotions can be reliably inferred both within and across languages. Zero-shot learning produces promising results for low-resource languages. Following established theories of basic emotions, we provide a detailed analysis of the possibilities and limits of cross-lingual emotion classification. We find that cross-lingual learning is facilitated both by structural and typological similarity between languages and by linguistic diversity in the training data. Our results suggest that there are commonalities underlying the expression of emotion in different languages. We publicly release the anonymized data for future research.

2018

Classifying Sluice Occurrences in Dialogue
Austin Baird | Anissa Hamza | Daniel Hardt
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Predicting News Headline Popularity with Syntactic and Semantic Knowledge Using Multi-Task Learning
Sotiris Lamprinidis | Daniel Hardt | Dirk Hovy
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Newspapers need to attract readers with headlines, anticipating their readers’ preferences. These preferences rely on topical, structural, and lexical factors. We model each of these factors in a multi-task GRU network to predict headline popularity. We find that pre-trained word embeddings provide significant improvements over untrained embeddings, as does the combination of two auxiliary tasks, news-section prediction and part-of-speech tagging. However, we also find that performance is very similar to that of a simple Logistic Regression model over character n-grams. Feature analysis reveals structural patterns of headline popularity, including the use of forward-looking deictic expressions and second person pronouns.

Sluice Resolution without Hand-Crafted Features over Brittle Syntax Trees
Ola Rønning | Daniel Hardt | Anders Søgaard
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

Sluice resolution in English is the problem of finding antecedents of wh-fronted ellipses. Previous work has relied on hand-crafted features over syntax trees that scale poorly to other languages and domains; in particular, to dialogue, which is one of the most interesting applications of sluice resolution. Syntactic information is arguably important for sluice resolution, but we show that multi-task learning with partial parsing as auxiliary tasks effectively closes the gap and buys us an additional 9% error reduction over previous work. Since we are not directly relying on features from partial parsers, our system is more robust to domain shifts, giving a 26% error reduction on embedded sluices in dialogue.

Linguistic Representations in Multi-Task Neural Networks for Ellipsis Resolution
Ola Rønning | Daniel Hardt | Anders Søgaard
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

Sluicing resolution is the task of identifying the antecedent to a question ellipsis. Antecedents are often sentential constituents, and previous work has therefore relied on syntactic parsing, together with complex linguistic features. A recent model instead used partial parsing as an auxiliary task in sequential neural network architectures to inject syntactic information. We explore the linguistic information being brought to bear by such networks, both by defining subsets of the data exhibiting relevant linguistic characteristics, and by examining the internal representations of the network. Both perspectives provide evidence for substantial linguistic knowledge being deployed by the neural networks.

2017

Predicting User Views in Online News
Daniel Hardt | Owen Rambow
Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism

We analyze user viewing behavior on an online news site. We collect data from 64,000 news articles and use text features to predict the frequency of user views. We compare the predictiveness of the headline and “teaser” (viewed before clicking) with that of the body (viewed after clicking). Both are predictive of clicking behavior, with the full article text being most predictive.

2016

Antecedent Selection for Sluicing: Structure and Content
Pranav Anand | Daniel Hardt
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2010

Incremental Re-training for Post-editing SMT
Daniel Hardt | Jakob Elming
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Research Papers

A method is presented for incremental re-training of an SMT system, in which a local phrase table is created and incrementally updated as a file is translated and post-edited. It is shown that translation data from within the same file has higher value than other domain-specific data. In two technical domains, within-file data increases the BLEU score by several full points. Furthermore, a strong recency effect is documented: nearby data within the file has greater value than more distant data. It is also shown that the value of translation data is strongly correlated with a metric defined over new occurrences of n-grams. Finally, it is argued that the incremental re-training prototype could serve as the basis for a practical system that could be interactively updated in real time in a post-editing setting. Based on the results here, such an interactive system has the potential to dramatically improve translation quality.

2005

Syntactic Identification of Attribution in the RST Treebank
Peter Rossen Skadhauge | Daniel Hardt
Proceedings of the Sixth International Workshop on Linguistically Interpreted Corpora (LINC-2005)

2004

Dynamic Centering
Daniel Hardt
Proceedings of the Conference on Reference Resolution and Its Applications

2001

Generation of VP Ellipsis: A Corpus-Based Approach
Daniel Hardt | Owen Rambow
Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics

1997

An Empirical Approach to VP Ellipsis
Daniel Hardt
Computational Linguistics, Volume 23, Number 4, December 1997

1996

Centering in Dynamic Semantics
Daniel Hardt
COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics

1992

VP Ellipsis and Contextual Interpretation
Daniel Hardt
COLING 1992 Volume 1: The 14th International Conference on Computational Linguistics

An Algorithm for VP Ellipsis
Daniel Hardt
30th Annual Meeting of the Association for Computational Linguistics

Some Problematic Cases of VP Ellipsis
Daniel Hardt
30th Annual Meeting of the Association for Computational Linguistics