2022
Domain-specific knowledge distillation yields smaller and better models for conversational commerce
Kristen Howell | Jian Wang | Akshay Hazare | Joseph Bradley | Chris Brew | Xi Chen | Matthew Dunn | Beth Hockey | Andrew Maurer | Dominic Widdows
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
We demonstrate that knowledge distillation can be used not only to reduce model size, but to simultaneously adapt a contextual language model to a specific domain. We use Multilingual BERT (mBERT; Devlin et al., 2019) as a starting point and follow the knowledge distillation approach of Sanh et al. (2019) to train a smaller multilingual BERT model that is adapted to the domain at hand. We show that for in-domain tasks, the domain-specific model yields an average 2.3% improvement in F1 score relative to a model distilled on domain-general data. Whereas much previous work with BERT has fine-tuned the encoder weights during task training, we show that the model improvements from distillation on in-domain data persist even when the encoder weights are frozen during task training, allowing a single encoder to support classifiers for multiple tasks and languages.
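As a rough illustration of the distillation objective described above (a sketch, not the authors' code; the function names, temperature value, and vocabulary size are illustrative), the soft-target loss of Sanh et al. (2019) can be written as a temperature-scaled KL divergence between the teacher's and student's output distributions:

# Minimal sketch of a soft-target distillation loss (illustrative values).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 so gradients keep a comparable magnitude."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t * t)

# Example: random logits standing in for teacher/student outputs on a batch.
student = torch.randn(8, 30522)   # vocabulary-sized outputs, as in masked-LM distillation
teacher = torch.randn(8, 30522)
print(distillation_loss(student, teacher))

In the frozen-encoder setting reported above, only lightweight task classifiers on top of the distilled encoder would be trained, so one encoder can serve multiple tasks and languages.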
2021
Should Semantic Vector Composition be Explicit? Can it be Linear?
Dominic Widdows | Kristen Howell | Trevor Cohen
Proceedings of the 2021 Workshop on Semantic Spaces at the Intersection of NLP, Physics, and Cognitive Science (SemSpace)
Vector representations have become a central element in semantic language modelling, leading to mathematical overlaps with many fields including quantum theory. Compositionality is a core goal for such representations: given representations for ‘wet’ and ‘fish’, how should the concept ‘wet fish’ be represented? This position paper surveys this question from two points of view. The first considers the question of whether an explicit mathematical representation can be successful using only tools from within linear algebra, or whether other mathematical tools are needed. The second considers whether semantic vector composition should be explicitly described mathematically, or whether it can be a model-internal side-effect of training a neural network. A third and newer question is whether a compositional model can be implemented on a quantum computer. Given the fundamentally linear nature of quantum mechanics, we propose that these questions are related, and that this survey may help to highlight candidate operations for future quantum implementation.
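For concreteness, the following sketch (illustrative only, not from the paper) shows three candidate composition operators that stay within linear algebra: vector addition, the elementwise product, and circular convolution as a binding operation:

# Illustrative composition operators over semantic vectors.
import numpy as np

def compose_sum(u, v):
    return u + v

def compose_hadamard(u, v):
    return u * v

def compose_circular_convolution(u, v):
    # Binding via circular convolution, computed in the Fourier domain.
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

rng = np.random.default_rng(0)
wet, fish = rng.standard_normal(256), rng.standard_normal(256)
wet_fish = compose_circular_convolution(wet, fish)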
2018
Bringing Order to Neural Word Embeddings with Embeddings Augmented by Random Permutations (EARP)
Trevor Cohen | Dominic Widdows
Proceedings of the 22nd Conference on Computational Natural Language Learning
Word order is clearly a vital part of human language, but it has been used comparatively lightly in distributional vector models. This paper presents a new method for incorporating word order information into word vector embedding models by combining the benefits of permutation-based order encoding with the more recent method of skip-gram with negative sampling. The new method introduced here is called Embeddings Augmented by Random Permutations (EARP). It operates by applying permutations to the coordinates of context vector representations during the process of training. Results show an 8% improvement in accuracy on the challenging Bigger Analogy Test Set, and smaller but consistent improvements on other analogy reference sets. These findings demonstrate the importance of order-based information in analogical retrieval tasks, and the utility of random permutations as a means to augment neural embeddings.
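A minimal sketch of the core EARP idea, under the assumption of one fixed random permutation per relative context position (the dimension, window size, and function names here are illustrative):

# Position-dependent permutation of context-vector coordinates before the
# usual skip-gram-with-negative-sampling dot product.
import numpy as np

dim = 100
rng = np.random.default_rng(42)
# One fixed random permutation per relative position in a +/-2 window
# (the window size used in the paper may differ).
position_perms = {offset: rng.permutation(dim) for offset in (-2, -1, 1, 2)}

def sgns_score(word_vec, context_vec, offset):
    """Dot product between the word vector and the permuted context vector."""
    permuted_context = context_vec[position_perms[offset]]
    return float(word_vec @ permuted_context)

w, c = rng.standard_normal(dim), rng.standard_normal(dim)
print(sgns_score(w, c, offset=1))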
2008
Semantic Vectors: a Scalable Open Source Package and Online Technology Management Application
Dominic Widdows | Kathleen Ferraro
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
This paper describes the open source SemanticVectors package that efficiently creates semantic vectors for words and documents from a corpus of free text articles. We believe that this package can play an important role in furthering research in distributional semantics, and (perhaps more importantly) can help to significantly reduce the current gap between good research results and valuable applications in production software. Two clear principles have guided the creation of the package so far: ease of use and scalability. The basic package installs and runs easily on any Java-enabled platform, and depends only on Apache Lucene. Dimension reduction is performed using Random Projection, which enables the system to scale much more effectively than other algorithms used for the same purpose. This paper also describes a trial application in the Technology Management domain, which highlights some user-centred design challenges that we believe are also key to successful deployment of this technology.
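As an illustration of the Random Projection step (a sketch in Python rather than the package's Java implementation; the sparse ternary scheme and matrix sizes are assumptions), dimension reduction amounts to multiplying a term-by-document count matrix by a random matrix:

# Rough sketch of random projection for dimension reduction.
import numpy as np

def random_projection(term_doc_matrix, target_dim, seed=0):
    rng = np.random.default_rng(seed)
    n_docs = term_doc_matrix.shape[1]
    # Sparse ternary random vectors (+1, 0, -1), one column set per document.
    projection = rng.choice([-1.0, 0.0, 1.0], size=(n_docs, target_dim),
                            p=[0.05, 0.9, 0.05])
    return term_doc_matrix @ projection

counts = np.random.poisson(0.1, size=(5000, 2000))  # toy term-by-document counts
term_vectors = random_projection(counts, target_dim=200)
print(term_vectors.shape)  # (5000, 200)

Because the projection matrix is generated independently of the data, this scales linearly with corpus size, which is the property that lets the package avoid costlier factorizations such as SVD.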
2006
Ongoing Developments in Automatically Adapting Lexical Resources to the Biomedical Domain
Dominic Widdows | Adil Toumouh | Beate Dorow | Ahmed Lehireche
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
This paper describes a range of experiments using empirical methods to adapt the WordNet noun ontology for specific use in the biomedical domain. Our basic technique is to extract relationships between terms using the Ohsumed corpus, a large collection of abstracts from PubMed, and to compare the relationships extracted with those that would be expected for medical terms, given the structure of the WordNet ontology. The linguistic methods involve a variety of lexicosyntactic patterns that enable us to extract pairs of coordinate noun terms, and also related groups of adjectives and nouns, using Markov clustering. This enables us in many cases to analyse ambiguous words and select the correct meaning for the biomedical domain. While results are often encouraging, the paper also highlights evident problems and drawbacks with the method, and outlines suggestions for future work.
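A simplified sketch of the coordination-pattern extraction step (the actual patterns and preprocessing over the Ohsumed corpus are richer than this toy regular expression):

# Toy lexicosyntactic pattern for coordinate noun pairs ("X, Y, and Z").
import re
from itertools import combinations

COORD = re.compile(r"\b([a-z]+), ([a-z]+),? (?:and|or) ([a-z]+)\b")

def coordinate_pairs(sentence):
    pairs = set()
    for match in COORD.finditer(sentence.lower()):
        terms = match.groups()
        pairs.update(combinations(terms, 2))
    return pairs

print(coordinate_pairs("Treatment reduced fever, nausea, and headache."))
# {('fever', 'nausea'), ('fever', 'headache'), ('nausea', 'headache')}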
Collaborative Annotation that Lasts Forever: Using Peer-to-Peer Technology for Disseminating Corpora and Language Resources
Magesh Balasubramanya | Michael Higgins | Peter Lucas | Jeff Senn | Dominic Widdows
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
This paper describes a peer-to-peer architecture for representing and disseminating linguistic corpora, linguistic annotation, and resources such as lexical databases and gazetteers. The architecture is based upon a Universal Database technology in which all information is represented in globally identified, extensible bundles of attribute-value pairs. These objects are replicated at will between peers in the network, and the business rules that implement replication check digital signatures and proper attribution of data, to prevent tampering and copyright abuse. Universal identifiers enable comprehensive standoff annotation and commentary. A carefully constructed publication mechanism is described that enables different users to subscribe to material provided by trusted publishers on recognized topics or themes. Access to content and related annotation is provided by distributed indexes, represented using the same underlying data objects as the rest of the database.
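A loose sketch of the kind of globally identified attribute-value bundle described above (field names and the hash-based stand-in for a digital signature are illustrative, not the actual MAYA Universal Database schema):

# Hypothetical attribute-value bundle with a universal identifier.
import uuid
import hashlib
import json

def make_bundle(attributes, publisher):
    bundle = {
        "uuid": str(uuid.uuid4()),          # globally unique identity
        "publisher": publisher,             # attribution checked on replication
        "attributes": dict(attributes),     # extensible attribute-value pairs
    }
    payload = json.dumps(bundle, sort_keys=True).encode("utf-8")
    bundle["digest"] = hashlib.sha256(payload).hexdigest()  # stand-in for a digital signature
    return bundle

# A standoff annotation pointing at a corpus object by identifier.
annotation = make_bundle(
    {"target": "corpus-sentence-1234", "label": "NamedEntity", "span": [3, 9]},
    publisher="example-lab",
)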
The Information Commons Gazetteer
Peter Lucas | Magesh Balasubramanya | Dominic Widdows | Michael Higgins
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
Advances in location-aware computing and the convergence of geographic and textual information systems will require a comprehensive, extensible, information-rich framework, called the Information Commons Gazetteer, that can be freely disseminated to small devices in a modular fashion. This paper describes the infrastructure and datasets used to create such a resource. The Gazetteer makes use of MAYA Design's Universal Database Architecture, a peer-to-peer system based upon bundles of attribute-value pairs with universally unique identity, and sophisticated indexing and data fusion tools. The Gazetteer primarily comprises publicly available geographic information from various agencies, organized into a well-defined, scalable hierarchy of worldwide administrative divisions and populated places. The data from various sources are imported into the commons incrementally and fused with existing data in an iterative process, allowing rich information to evolve over time. Such a flexible and distributed public resource of geographic places and place names allows both researchers and practitioners to realize location-aware computing efficiently and usefully in the near future by eliminating redundant, time-consuming fusion of disparate sources.
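Illustrative only: a populated place represented as attribute-value data with a parent link, showing the kind of administrative hierarchy the Gazetteer organizes (identifiers and fields are not the actual schema):

# Hypothetical gazetteer records linked into an administrative hierarchy.
pittsburgh = {
    "uuid": "place-0001",              # hypothetical identifier
    "name": "Pittsburgh",
    "type": "populated place",
    "parent": "admin-div-pa",          # Pennsylvania administrative division
    "lat": 40.4406,
    "lon": -79.9959,
}
pennsylvania = {"uuid": "admin-div-pa", "name": "Pennsylvania",
                "type": "administrative division", "parent": "country-us"}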
2005
Automatic Extraction of Idioms using Graph Analysis and Asymmetric Lexicosyntactic Patterns
Dominic Widdows | Beate Dorow
Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition
2004
Evaluation Resources for Concept-based Cross-Lingual Information Retrieval in the Medical Domain
Paul Buitelaar | Diana Steffen | Martin Volk | Dominic Widdows | Bogdan Sacaleanu | Špela Vintar | Stanley Peters | Hans Uszkoreit
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)
2003
Discovering Corpus-Specific Word Senses
Beate Dorow | Dominic Widdows
10th Conference of the European Chapter of the Association for Computational Linguistics
Orthogonal Negation in Vector Spaces for Modelling Word-Meanings and Document Retrieval
Dominic Widdows
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics
Using LSA and Noun Coordination Information to Improve the Recall and Precision of Automatic Hyponymy Extraction
Scott Cederberg | Dominic Widdows
Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003
Unsupervised Monolingual and Bilingual Word-Sense Disambiguation of Medical Documents using UMLS
Dominic Widdows | Stanley Peters | Scott Cederberg | Chiu-Ki Chan | Diana Steffen | Paul Buitelaar
Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine
An Empirical Model of Multiword Expression Decomposability
Timothy Baldwin | Colin Bannard | Takaaki Tanaka | Dominic Widdows
Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment
Unsupervised methods for developing taxonomies by combining syntactic and statistical information
Dominic Widdows
Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics
Monolingual and Bilingual Concept Visualization from Corpora
Dominic Widdows | Scott Cederberg
Companion Volume of the Proceedings of HLT-NAACL 2003 - Demonstrations
2002
Using Parallel Corpora to enrich Multilingual Lexical Resources
Dominic Widdows | Beate Dorow | Chiu-Ki Chan
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)
A Graph Model for Unsupervised Lexical Acquisition
Dominic Widdows | Beate Dorow
COLING 2002: The 19th International Conference on Computational Linguistics