Dan Gillick

Also published as: Daniel Gillick


2022

Time-Aware Language Models as Temporal Knowledge Bases
Bhuwan Dhingra | Jeremy R. Cole | Julian Martin Eisenschlos | Daniel Gillick | Jacob Eisenstein | William W. Cohen
Transactions of the Association for Computational Linguistics, Volume 10

Many facts come with an expiration date, from the name of the President to the basketball team LeBron James plays for. However, most language models (LMs) are trained on snapshots of data collected at a specific moment in time. This can limit their utility, especially in the closed-book setting where the pretraining corpus must contain the facts the model should memorize. We introduce a diagnostic dataset aimed at probing LMs for factual knowledge that changes over time, and highlight problems with LMs at either end of the spectrum: those trained on specific slices of temporal data, as well as those trained on a wide range of temporal data. To mitigate these problems, we propose a simple technique for jointly modeling text with its timestamp. This improves memorization of seen facts from the training time period, as well as calibration on predictions about unseen facts from future time periods. We also show that models trained with temporal context can be efficiently “refreshed” as new data arrives, without the need for retraining from scratch.
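
The "jointly modeling text with its timestamp" idea lends itself to a compact illustration. Below is a minimal sketch in which each training example is prefixed with its year so the model can condition on time; this is one plausible reading of the abstract, not necessarily the paper's exact scheme, and the example strings are invented.

def add_time_prefix(text: str, year: int) -> str:
    """Prepend a timestamp marker so the LM can condition on it."""
    return f"year: {year} {text}"

# Invented examples: the same fact template has different answers in
# different years, which a time-unaware LM would conflate.
examples = [
    ("The US President is Barack Obama.", 2012),
    ("The US President is Joe Biden.", 2021),
]

for text, year in examples:
    print(add_time_prefix(text, year))

# At inference, the same prefix scopes a query to a time period, e.g.
# "year: 2021 The US President is [MASK]"; "refreshing" the model then
# amounts to continued training on newly timestamped data.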

2021

Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorials
Greg Kondrak | Kalina Bontcheva | Dan Gillick

MOLEMAN: Mention-Only Linking of Entities with a Mention Annotation Network
Nicholas FitzGerald | Dan Bikel | Jan Botha | Daniel Gillick | Tom Kwiatkowski | Andrew McCallum
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

We present an instance-based nearest neighbor approach to entity linking. In contrast to most prior entity retrieval systems, which represent each entity with a single vector, we build a contextualized mention-encoder that learns to place similar mentions of the same entity closer in vector space than mentions of different entities. This approach allows all mentions of an entity to serve as “class prototypes”: inference retrieves from the full set of labeled entity mentions in the training set and applies the nearest mention neighbor’s entity label. Our model is trained on a large multilingual corpus of mention pairs derived from Wikipedia hyperlinks, and performs nearest neighbor inference on an index of 700 million mentions. It is simpler to train, gives more interpretable predictions, and outperforms all other systems on two multilingual entity linking benchmarks.
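
The instance-based inference step described above can be sketched in a few lines: encode the query mention, find its nearest labeled mention in the index, and copy that mention's entity label. The sketch below uses random vectors as stand-ins for the contextualized mention encoder, and brute-force cosine similarity in place of the approximate nearest-neighbor search needed at the scale of 700 million mentions.

import numpy as np

rng = np.random.default_rng(0)
index_vecs = rng.normal(size=(5, 64))          # encoded training mentions (toy)
index_labels = ["Q1", "Q1", "Q2", "Q3", "Q2"]  # entity label per indexed mention

def link(query_vec):
    """Return the entity label of the nearest indexed mention."""
    sims = index_vecs @ query_vec / (
        np.linalg.norm(index_vecs, axis=1) * np.linalg.norm(query_vec))
    return index_labels[int(np.argmax(sims))]

print(link(rng.normal(size=64)))  # label copied from the nearest mention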

2020

Entity Linking in 100 Languages
Jan A. Botha | Zifei Shan | Daniel Gillick
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

We propose a new formulation for multilingual entity linking, where language-specific mentions resolve to a language-agnostic Knowledge Base. We train a dual encoder in this new setting, building on prior work with improved feature representation, negative mining, and an auxiliary entity-pairing task, to obtain a single entity retrieval model that covers 100+ languages and 20 million entities. The model outperforms state-of-the-art results from a far more limited cross-lingual linking task. Rare entities and low-resource languages pose challenges at this large scale, so we advocate for an increased focus on zero- and few-shot evaluation. To this end, we provide Mewsli-9, a large new multilingual dataset matched to our setting, and show how frequency-based analysis provided key insights for our model and training enhancements.
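
The retrieval setup behind this formulation can be pictured as two towers mapping into one shared space: entity embeddings are precomputed once over the language-agnostic KB, and a mention in any language is scored against all of them by dot product. The sketch below substitutes toy linear projections and random features for the real encoders; names like W_mention and the dimensions are illustrative only.

import numpy as np

rng = np.random.default_rng(1)
D_IN, D_OUT = 32, 16
W_mention = rng.normal(size=(D_IN, D_OUT))    # mention tower (toy stand-in)
W_entity = rng.normal(size=(D_IN, D_OUT))     # entity tower (toy stand-in)

entity_feats = rng.normal(size=(1000, D_IN))  # e.g. entity names/descriptions
entity_index = entity_feats @ W_entity        # precomputed entity embeddings

def top_entities(mention_feats, k=5):
    """Score one mention against every entity; return the top-k entity ids."""
    scores = (mention_feats @ W_mention) @ entity_index.T
    return np.argsort(-scores)[:k]

print(top_entities(rng.normal(size=D_IN)))    # same index serves any language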

2019

Learning Dense Representations for Entity Retrieval
Daniel Gillick | Sayali Kulkarni | Larry Lansing | Alessandro Presta | Jason Baldridge | Eugene Ie | Diego Garcia-Olano
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

We show that it is feasible to perform entity linking by training a dual encoder (two-tower) model that encodes mentions and entities in the same dense vector space, where candidate entities are retrieved by approximate nearest neighbor search. Unlike prior work, this setup does not rely on an alias table followed by a re-ranker, and is thus the first fully learned entity retrieval model. We show that our dual encoder, trained using only anchor-text links in Wikipedia, outperforms discrete alias table and BM25 baselines, and is competitive with the best comparable results on the standard TACKBP-2010 dataset. In addition, it can retrieve candidates extremely fast, and generalizes well to a new dataset derived from Wikinews. On the modeling side, we demonstrate the dramatic value of an unsupervised negative mining algorithm for this task.
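
The negative mining step admits a short sketch: use the current model to retrieve each mention's top-scoring entities and keep the wrong ones as hard negatives for the next training round, alongside random in-batch negatives. This is one plausible reading of the abstract's "unsupervised negative mining", with brute-force scoring standing in for the approximate nearest-neighbor search used at scale.

import numpy as np

def mine_hard_negatives(mention_vecs, entity_vecs, gold_ids, k=10):
    """For each mention, return retrieved entity ids that are not the gold id."""
    scores = mention_vecs @ entity_vecs.T            # brute force; ANN at scale
    topk = np.argsort(-scores, axis=1)[:, :k]
    return [[int(e) for e in row if e != gold]
            for row, gold in zip(topk, gold_ids)]

rng = np.random.default_rng(2)
negs = mine_hard_negatives(rng.normal(size=(3, 8)),   # toy mention encodings
                           rng.normal(size=(50, 8)),  # toy entity encodings
                           gold_ids=[4, 17, 30])
print(negs)  # confusable entities to train against in the next round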

2018

A Fast, Compact, Accurate Model for Language Identification of Codemixed Text
Yuan Zhang | Jason Riesa | Daniel Gillick | Anton Bakalov | Jason Baldridge | David Weiss
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We address fine-grained multilingual language identification: providing a language code for every token in a sentence, including codemixed text containing multiple languages. Such text is prevalent online, in documents, social media, and message boards. We show that a feed-forward network with a simple globally constrained decoder can accurately and rapidly label both codemixed and monolingual text in 100 languages and 100 language pairs. This model outperforms previously published multilingual approaches in terms of both accuracy and speed, yielding an 800x speed-up and a 19.5% averaged absolute gain on three codemixed datasets. It furthermore outperforms several benchmark systems on monolingual language identification.
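
The "simple globally constrained decoder" can be illustrated with one common form of global constraint: restrict each sentence to its single best language or best language pair, then label each token within that restricted set. The per-token scores below are random stand-ins for the feed-forward network's outputs, and this exact constraint is an assumption made for illustration.

import itertools
import numpy as np

rng = np.random.default_rng(3)
LANGS = ["en", "es", "hi", "de"]
token_scores = rng.normal(size=(6, len(LANGS)))  # toy per-token language scores

def decode(scores):
    """Pick the best <= 2 languages for the sentence, then label each token."""
    best_labels, best_total = None, -np.inf
    for combo in itertools.chain(
            itertools.combinations(range(len(LANGS)), 1),
            itertools.combinations(range(len(LANGS)), 2)):
        cols = list(combo)
        total = scores[:, cols].max(axis=1).sum()  # best allowed label per token
        if total > best_total:
            best_total = total
            best_labels = [LANGS[cols[int(i)]]
                           for i in scores[:, cols].argmax(axis=1)]
    return best_labels

print(decode(token_scores))  # one label per token, from at most two languages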

2016

Multilingual Language Processing From Bytes
Dan Gillick | Cliff Brunk | Oriol Vinyals | Amarnag Subramanya
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Exploring the steps of Verb Phrase Ellipsis
Zhengzhong Liu | Edgar Gonzàlez Pellicer | Daniel Gillick
Proceedings of the Workshop on Coreference Resolution Beyond OntoNotes (CORBON 2016)

2015

Embedding Methods for Fine Grained Entity Type Classification
Dani Yogatama | Daniel Gillick | Nevena Lazic
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2014

A New Entity Salience Task with Millions of Training Examples
Jesse Dunietz | Daniel Gillick
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers

2011

Jointly Learning to Extract and Compress
Taylor Berg-Kirkpatrick | Dan Gillick | Dan Klein
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

Non-Expert Evaluation of Summarization Systems is Risky
Dan Gillick | Yang Liu
Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk

2009

A Scalable Global Model for Summarization
Dan Gillick | Benoit Favre
Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing

Sentence Boundary Detection and the Problem with the U.S.
Dan Gillick
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers

2006

Why Generative Phrase Models Underperform Surface Heuristics
John DeNero | Dan Gillick | James Zhang | Dan Klein
Proceedings on the Workshop on Statistical Machine Translation