2019
Paraphrase-Sense-Tagged Sentences
Anne Cocos | Chris Callison-Burch
Transactions of the Association for Computational Linguistics, Volume 7
Many natural language processing tasks require discriminating the particular meaning of a word in context, but building corpora for developing sense-aware models can be a challenge. We present a large resource of example usages for words having a particular meaning, called Paraphrase-Sense-Tagged Sentences (PSTS). Built on the premise that a word’s paraphrases instantiate its fine-grained meanings (e.g., bug has different meanings corresponding to its paraphrases fly and microbe), the resource contains up to 10,000 sentences for each of 3 million target-paraphrase pairs where the target word takes on the meaning of the paraphrase. We describe an automatic method based on bilingual pivoting used to enumerate sentences for PSTS, and present two models for ranking PSTS sentences by their quality. Finally, we demonstrate the utility of PSTS by using it to build a dataset for the task of hypernym prediction in context. Training a model on this automatically generated dataset produces accuracy that is competitive with a model trained on smaller datasets crafted with some manual effort.
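A minimal sketch of the bilingual-pivoting idea behind PSTS, for readers unfamiliar with the technique: an occurrence of the target is tagged with a paraphrase sense when the foreign word it aligns to is one that the paraphrase also translates to. The pivot table and helper names below are toy assumptions, not the released PSTS pipeline.

```python
# Illustrative sketch of sense tagging via bilingual pivoting (toy data only;
# not the released PSTS pipeline).

# Foreign (French) pivot translations shared with each paraphrase of "bug".
PARAPHRASE_PIVOTS = {
    "fly": {"mouche"},
    "microbe": {"microbe"},
}

def tag_occurrence(foreign_word, paraphrase_pivots=PARAPHRASE_PIVOTS):
    """Return paraphrases whose pivot set contains the word the target aligns to."""
    return [p for p, pivots in paraphrase_pivots.items() if foreign_word in pivots]

# A sentence in which "bug" is aligned to "mouche" in a parallel corpus is
# collected as an example of "bug" used in its "fly" (insect) sense.
print(tag_occurrence("mouche"))   # -> ['fly']
print(tag_occurrence("microbe"))  # -> ['microbe']
```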
A Comparison of Context-sensitive Models for Lexical Substitution
Aina Garí Soler | Anne Cocos | Marianna Apidianaki | Chris Callison-Burch
Proceedings of the 13th International Conference on Computational Semantics - Long Papers
Word embedding representations provide good estimates of word meaning and give state-of-the-art performance in semantic tasks. Embedding approaches differ as to whether and how they account for the context surrounding a word. We present a comparison of different word and context representations on the task of proposing substitutes for a target word in context (lexical substitution). We also experiment with tuning contextualized word embeddings on a dataset of sense-specific instances for each target word. We show that powerful contextualized word representations, which give high performance in several semantics-related tasks, deal less well with the subtle in-context similarity relationships needed for substitution. These relationships are better handled by models trained with this objective in mind, where the interdependence between word and context representations is explicitly modeled during training.
2018
Learning Scalar Adjective Intensity from Paraphrases
Anne Cocos | Skyler Wharton | Ellie Pavlick | Marianna Apidianaki | Chris Callison-Burch
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Adjectives like “warm”, “hot”, and “scalding” all describe temperature but differ in intensity. Understanding these differences between adjectives is a necessary part of reasoning about natural language. We propose a new paraphrase-based method to automatically learn the relative intensity relation that holds between a pair of scalar adjectives. Our approach analyzes over 36k adjectival pairs from the Paraphrase Database under the assumption that, for example, the paraphrase pair “really hot” ↔ “scalding” suggests that “hot” < “scalding”. We show that combining this paraphrase evidence with existing, complementary pattern- and lexicon-based approaches improves the quality of systems for automatically ordering sets of scalar adjectives and inferring the polarity of indirect answers to “yes/no” questions.
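A minimal sketch of the pairwise inference rule described in the abstract, under the assumption of a hypothetical intensifier list; it is a simplification for illustration, not the authors' full scoring model.

```python
# Illustrative sketch: if "RB JJ1" paraphrases "JJ2" and RB is an intensifying
# adverb, infer that JJ1 is weaker than JJ2 on the shared scale.

INTENSIFIERS = {"really", "very", "extremely"}  # example adverbs (assumed list)

def infer_order(paraphrase_pairs, intensifiers=INTENSIFIERS):
    """Yield (weaker, stronger) adjective pairs from adverb-adjective paraphrases."""
    for left, right in paraphrase_pairs:
        tokens = left.split()
        if len(tokens) == 2 and tokens[0] in intensifiers:
            yield tokens[1], right  # e.g., ("hot", "scalding")

pairs = [("really hot", "scalding"), ("very good", "great")]
print(list(infer_order(pairs)))  # -> [('hot', 'scalding'), ('good', 'great')]
```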
Comparing Constraints for Taxonomic Organization
Anne Cocos | Marianna Apidianaki | Chris Callison-Burch
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Building a taxonomy from the ground up involves several sub-tasks: selecting terms to include, predicting semantic relations between terms, and selecting a subset of relational instances to keep, given constraints on the taxonomy graph. Methods for this final step – taxonomic organization – vary both in the constraints they impose and in whether they enable discovery of synonymous terms. It is hard to isolate the impact of these factors on the quality of the resulting taxonomy because organization methods are rarely compared directly. In this paper, we present a head-to-head comparison of six taxonomic organization algorithms that vary with respect to their structural and transitivity constraints and their treatment of synonymy. We find that while transitive algorithms outperform their non-transitive counterparts, the top-performing transitive algorithm is prohibitively slow for taxonomies with as few as 50 entities. We propose a simple modification to a non-transitive optimum branching algorithm to explicitly incorporate synonymy, resulting in a method that is substantially faster than the best transitive algorithm while giving complementary performance.
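A minimal sketch of the plain non-transitive optimum-branching baseline (Edmonds' algorithm), shown here with networkx and hypothetical hypernym-prediction scores; it does not include the paper's synonymy-aware modification.

```python
# Optimum branching over scored hypernym edges: each term keeps its single
# best-scoring parent, subject to the result forming a tree.
import networkx as nx

edges = [  # (hypernym, hyponym, score) -- toy values, not real model output
    ("animal", "dog", 0.9), ("animal", "cat", 0.8),
    ("dog", "puppy", 0.7), ("cat", "puppy", 0.2),
]

G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# A virtual root with tiny-weight edges guarantees a spanning arborescence exists.
for n in list(G.nodes):
    G.add_edge("<ROOT>", n, weight=1e-6)

tree = nx.maximum_spanning_arborescence(G, attr="weight")
print(sorted(tree.edges()))  # keeps the highest-scoring parent for each term
```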
Automated Paraphrase Lattice Creation for HyTER Machine Translation Evaluation
Marianna Apidianaki | Guillaume Wisniewski | Anne Cocos | Chris Callison-Burch
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
We propose a variant of a well-known machine translation (MT) evaluation metric, HyTER (Dreyer and Marcu, 2012), which exploits reference translations enriched with meaning-equivalent expressions. The original HyTER metric relied on hand-crafted paraphrase networks, which restricted its applicability to new data. We test, for the first time, HyTER with automatically built paraphrase lattices. We show that although the metric obtains good results on small and carefully curated data with both manually and automatically selected substitutes, it achieves only moderate performance on much larger and noisier datasets, demonstrating the limits of the metric for tuning and evaluating current MT systems.
2017
KnowYourNyms? A Game of Semantic Relationships
Ross Mechanic | Dean Fulgoni | Hannah Cutler | Sneha Rajana | Zheyuan Liu | Bradley Jackson | Anne Cocos | Chris Callison-Burch | Marianna Apidianaki
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Semantic relation knowledge is crucial for natural language understanding. We introduce “KnowYourNyms?”, a web-based game for learning semantic relations. While providing users with an engaging experience, the application collects large amounts of data that can be used to improve semantic relation classifiers. The data also broadly informs us of how people perceive the relationships between words, providing useful insights for research in psychology and linguistics.
The Language of Place: Semantic Value from Geospatial Context
Anne Cocos | Chris Callison-Burch
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
There is a relationship between what we say and where we say it. Word embeddings are usually trained assuming that semantically similar words occur within the same textual contexts. We investigate the extent to which semantically similar words occur within the same geospatial contexts. We enrich a corpus of geolocated Twitter posts with physical data derived from Google Places and OpenStreetMap, and train word embeddings using the resulting geospatial contexts. Intrinsic evaluation of the resulting vectors shows that geographic context alone does provide useful information about semantic relatedness.
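A minimal sketch of training embeddings from geospatial rather than textual contexts, assuming toy data in which words from a geolocated post are paired with the categories of nearby places; the pairing scheme and data are illustrative, not the paper's corpus. Uses the gensim 4.x API.

```python
# Train word vectors where the "context" of a word is the set of place
# categories (e.g., from Google Places / OpenStreetMap) near where it was posted.
from gensim.models import Word2Vec

contexts = [  # toy geospatial "sentences"
    ["espresso", "cafe", "bakery"],
    ["latte", "cafe", "bookstore"],
    ["touchdown", "stadium", "sports_bar"],
]

model = Word2Vec(contexts, vector_size=50, window=5, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("espresso", topn=2))  # neighbors by shared geospatial context
```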
Mapping the Paraphrase Database to WordNet
Anne Cocos | Marianna Apidianaki | Chris Callison-Burch
Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)
WordNet has facilitated important research in natural language processing but its usefulness is somewhat limited by its relatively small lexical coverage. The Paraphrase Database (PPDB) covers 650 times more words, but lacks the semantic structure of WordNet that would make it more directly useful for downstream tasks. We present a method for mapping words from PPDB to WordNet synsets with 89% accuracy. The mapping also lays important groundwork for incorporating WordNet’s relations into PPDB so as to increase its utility for semantic reasoning in applications.
Word Sense Filtering Improves Embedding-Based Lexical Substitution
Anne Cocos | Marianna Apidianaki | Chris Callison-Burch
Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications
The role of word sense disambiguation in lexical substitution has been questioned due to the high performance of vector space models, which propose good substitutes without explicitly accounting for sense. We show that a filtering mechanism based on a sense inventory optimized for substitutability can improve the results of these models. Our sense inventory is constructed using a clustering method which generates paraphrase clusters that are congruent with lexical substitution annotations in a development set. The results show that lexical substitution can still benefit from sense information, which improves the output of vector space paraphrase ranking models.
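A minimal sketch of sense filtering applied to a substitute ranking, under assumed toy clusters; the cluster inventory, ranking, and sense-selection step are hypothetical stand-ins for the paper's components.

```python
# Keep only the ranked substitutes that belong to the paraphrase cluster
# (sense) chosen for the target word in its context.

SENSE_CLUSTERS = {  # toy paraphrase clusters of "bug" acting as a sense inventory
    "insect": {"fly", "beetle", "pest"},
    "flaw": {"glitch", "error", "defect"},
}

def filter_by_sense(ranked_substitutes, active_sense, clusters=SENSE_CLUSTERS):
    """Filter a substitute ranking by the sense cluster selected for the context."""
    allowed = clusters[active_sense]
    return [s for s in ranked_substitutes if s in allowed]

ranked = ["glitch", "fly", "error", "beetle"]  # output of a vector-space ranker
print(filter_by_sense(ranked, "flaw"))  # -> ['glitch', 'error']
```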
2016
Clustering Paraphrases by Word Sense
Anne Cocos | Chris Callison-Burch
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2015
Effectively Crowdsourcing Radiology Report Annotations
Anne Cocos | Aaron Masino | Ting Qian | Ellie Pavlick | Chris Callison-Burch
Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis