Proceedings of the 13th International Conference on Computational Semantics - Short Papers

Simon Dobnik, Stergios Chatzikyriakidis, Vera Demberg (Editors)


Anthology ID:
W19-05
Month:
May
Year:
2019
Address:
Gothenburg, Sweden
Venue:
IWCS
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/W19-05
PDF:
https://preview.aclanthology.org/emnlp22-frontmatter/W19-05.pdf

A Distributional Model of Affordances in Semantic Type Coercion
Stephen McGregor | Elisabetta Jezek

We explore a novel application for interpreting semantic type coercions, motivated by insight into the role that perceptual affordances play in the selection of the semantic roles of artefactual nouns which are observed as arguments for verbs which would stereotypically select for objects of a different type. In order to simulate affordances, which we take to be direct perceptions of context-specific opportunities for action, we perform a distributional analysis of dependency relationships between target words and their modifiers and adjuncts. We use these relationships as the basis for generating on-line transformations which project semantic subspaces in which the interpretations of coercive compositions are expected to emerge as salient word-vectors. We offer some preliminary examples of how this model operates on a dataset of sentences involving coercive interactions between verbs and objects specifically designed to evaluate this work.
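
As a rough illustration of the subspace-projection idea described above, here is a minimal numpy sketch: context words select the most salient vector dimensions, and candidate interpretations are ranked inside that subspace. The vocabulary, vectors and scoring are toy stand-ins, not the authors' actual model.

```python
import numpy as np

# Toy vocabulary of dense word vectors (in practice these would come
# from a dependency-based distributional model).
rng = np.random.default_rng(0)
vocab = ["finish", "beer", "drink", "glass", "read", "book"]
V = {w: rng.normal(size=50) for w in vocab}

def context_subspace(context_words, dim=10):
    """Pick the dimensions most activated by the context words."""
    weights = np.sum([np.abs(V[w]) for w in context_words], axis=0)
    return np.argsort(weights)[-dim:]          # top-d salient dimensions

def rank_interpretations(context_words, candidates, dim=10):
    """Rank candidate event words by similarity to the context centroid
    inside the context-specific subspace."""
    keep = context_subspace(context_words, dim)
    centroid = np.mean([V[w][keep] for w in context_words], axis=0)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(candidates, key=lambda c: -cos(V[c][keep], centroid))

# "finish the beer" -> which event is coerced? (toy random vectors,
# so the resulting ranking carries no real meaning)
print(rank_interpretations(["finish", "beer"], ["drink", "read"]))
```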

Natural Language Inference with Monotonicity
Hai Hu | Qi Chen | Larry Moss

This paper describes a working system which performs natural language inference using polarity-marked parse trees. The system handles all of the instances of monotonicity inference in the FraCaS data set. Except for the initial parse, it is entirely deterministic. It handles multi-premise arguments, and the kind of inference performed is essentially “logical”, but it goes beyond what is representable in first-order logic. In any case, the system works on surface forms rather than on representations of any kind.
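
A minimal sketch of the core monotonicity step the abstract alludes to: in an upward-entailing position a word may be replaced by a hypernym, in a downward-entailing position by a hyponym. The polarity tags and the two-entry lexicon here are toy assumptions; the described system derives polarities from parse trees and uses far richer resources.

```python
# Toy hypernym/hyponym lexicon; a real system would use e.g. WordNet.
HYPER = {"dog": "animal", "waltz": "dance"}
HYPO  = {"animal": "dog", "dance": "waltz"}

def entailed_sentences(tagged):
    """Given (token, polarity) pairs, generate sentences entailed by
    monotonicity: hypernym substitution in upward (+) positions,
    hyponym substitution in downward (-) positions."""
    out = []
    for i, (tok, pol) in enumerate(tagged):
        sub = HYPER.get(tok) if pol == "+" else HYPO.get(tok) if pol == "-" else None
        if sub:
            words = [t for t, _ in tagged]
            words[i] = sub
            out.append(" ".join(words))
    return out

# Under 'every', the restrictor is downward-entailing and the scope is
# upward-entailing, so "every animal waltzes" entails "every dog waltzes".
print(entailed_sentences([("every", "="), ("animal", "-"), ("waltzes", "+")]))
# -> ['every dog waltzes']
```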

Distributional Semantics in the Real World: Building Word Vector Representations from a Truth-Theoretic Model
Elizaveta Kuzmenko | Aurélie Herbelot

Distributional semantics models (DSMs) are known to produce excellent representations of word meaning, which correlate with a range of behavioural data. As lexical representations, they have been said to be fundamentally different from truth-theoretic models of semantics, where meaning is defined as a correspondence relation to the world. There are two main aspects to this difference: a) DSMs are built over corpus data which may or may not reflect ‘what is in the world’; b) they are built from word co-occurrences, that is, from lexical types rather than entities and sets. In this paper, we inspect the properties of a distributional model built over a set-theoretic approximation of ‘the real world’. To achieve this, we take the annotations of a large database of images marked with objects, attributes and relations, convert the data into a representation akin to first-order logic and build several distributional models using various combinations of features. We evaluate those models over both relatedness and similarity datasets, demonstrating their effectiveness in standard evaluations. This allows us to conclude that, despite prior claims, truth-theoretic models are good candidates for building graded lexical representations of meaning.
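
A minimal sketch of the general recipe, assuming entity-level annotations: each noun's vector records how often each attribute or relation holds of its instances, and similarity becomes graded overlap in what is true of those instances. The four-entity "world" below is a toy, not the image database used in the paper.

```python
from collections import Counter, defaultdict

# Toy set-theoretic "world": each entity instance carries a head noun plus
# the attributes and relations annotated for it (as in an image database).
entities = [
    {"head": "cat", "features": ["furry", "sit_on(mat)"]},
    {"head": "cat", "features": ["black", "chase(mouse)"]},
    {"head": "dog", "features": ["furry", "chase(cat)"]},
    {"head": "mat", "features": ["flat"]},
]

# A word's vector = how often each predicate holds of its instances.
vectors = defaultdict(Counter)
for e in entities:
    vectors[e["head"]].update(e["features"])

def cosine(u, v):
    shared = set(u) & set(v)
    num = sum(u[f] * v[f] for f in shared)
    den = (sum(x * x for x in u.values()) ** 0.5) * \
          (sum(x * x for x in v.values()) ** 0.5)
    return num / den if den else 0.0

print(cosine(vectors["cat"], vectors["dog"]))  # share 'furry' -> nonzero
print(cosine(vectors["cat"], vectors["mat"]))  # no shared predicates -> 0.0
```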

Linguistic Information in Neural Semantic Parsing with Multiple Encoders
Rik van Noord | Antonio Toral | Johan Bos

Recently, sequence-to-sequence models have achieved impressive performance on a number of semantic parsing tasks. However, they often do not exploit available linguistic resources, while these, when employed correctly, are likely to increase performance even further. Research in neural machine translation has shown that employing this information has a lot of potential, especially when using a multi-encoder setup. We employ a range of semantic and syntactic resources to improve performance for the task of Discourse Representation Structure Parsing. We show that (i) linguistic features can be beneficial for neural semantic parsing and (ii) the best method of adding these features is by using multiple encoders.
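
A minimal PyTorch sketch of a multi-encoder setup in this spirit: one encoder reads tokens, a second reads aligned linguistic features (e.g., POS tags), and their final states are combined to initialise the decoder. The sizes and the concatenate-and-project combination are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiEncoderSeq2Seq(nn.Module):
    """Two GRU encoders (tokens + linguistic features); their final hidden
    states are concatenated and projected to initialise the decoder."""
    def __init__(self, tok_vocab, feat_vocab, out_vocab, hid=128, emb=64):
        super().__init__()
        self.tok_emb  = nn.Embedding(tok_vocab, emb)
        self.feat_emb = nn.Embedding(feat_vocab, emb)
        self.enc_tok  = nn.GRU(emb, hid, batch_first=True)
        self.enc_feat = nn.GRU(emb, hid, batch_first=True)
        self.bridge   = nn.Linear(2 * hid, hid)
        self.out_emb  = nn.Embedding(out_vocab, emb)
        self.decoder  = nn.GRU(emb, hid, batch_first=True)
        self.proj     = nn.Linear(hid, out_vocab)

    def forward(self, tokens, feats, target):
        _, h_tok  = self.enc_tok(self.tok_emb(tokens))    # (1, B, hid)
        _, h_feat = self.enc_feat(self.feat_emb(feats))
        h0 = torch.tanh(self.bridge(torch.cat([h_tok, h_feat], dim=-1)))
        dec_out, _ = self.decoder(self.out_emb(target), h0)
        return self.proj(dec_out)                          # (B, T, out_vocab)

model = MultiEncoderSeq2Seq(tok_vocab=1000, feat_vocab=50, out_vocab=1200)
logits = model(torch.zeros(2, 7, dtype=torch.long),   # token ids
               torch.zeros(2, 7, dtype=torch.long),   # feature ids (e.g. POS)
               torch.zeros(2, 5, dtype=torch.long))   # shifted target ids
print(logits.shape)  # torch.Size([2, 5, 1200])
```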

Making Sense of Conflicting (Defeasible) Rules in the Controlled Natural Language ACE: Design of a System with Support for Existential Quantification Using Skolemization
Martin Diller | Adam Wyner | Hannes Strass

We present the design of a system for making sense of conflicting rules expressed in a fragment of the prominent controlled natural language ACE, extended with means of expressing defeasible rules in the form of normality assumptions. The approach we describe is ultimately based on answer set programming (ASP), simulating existential quantification by using Skolemization in a manner resembling a translation for ASP recently formalized in the context of ∃-ASP. We discuss the advantages of this approach compared to building on the existing ACE interface to rule systems, ACERules.
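
The core Skolemization step can be illustrated with a toy rewrite: an existential variable in a rule head is replaced by a Skolem term over the universally quantified body variables. This sketch only shows the term construction; the full ∃-ASP translation the paper builds on is considerably more involved.

```python
def skolemize(head_pred, body, univ_vars):
    """Rewrite an existential rule   exists Y: head(X.., Y) :- body
    into an ASP rule with a Skolem term:  head(X.., f_head(X..)) :- body."""
    skolem = f"f_{head_pred}({','.join(univ_vars)})"
    return f"{head_pred}({','.join(univ_vars + [skolem])}) :- {body}."

# "Every person has a parent":  exists Y: parent_of(X, Y) :- person(X)
print(skolemize("parent_of", "person(X)", ["X"]))
# -> parent_of(X,f_parent_of(X)) :- person(X).
```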

Distributional Interaction of Concreteness and Abstractness in Verb–Noun Subcategorisation
Diego Frassinelli | Sabine Schulte im Walde

In recent years, both cognitive and computational research has provided empirical analyses of contextual co-occurrence of concrete and abstract words, resulting in a partially inconsistent picture. In this work we provide a more fine-grained description of the distributional nature of the corpus-based interaction of verbs and nouns within subcategorisation, by investigating the concreteness of verbs and nouns that are in a specific syntactic relationship with each other, i.e., subject, direct object, and prepositional object. Overall, our experiments show consistent patterns in the distributional representation of subcategorising and subcategorised concrete and abstract words. At the same time, the studies reveal empirical evidence for why contextual abstractness represents a valuable indicator for automatic non-literal language identification.
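
A minimal sketch of the kind of aggregation involved: mean noun concreteness per syntactic relation, split by verb concreteness. The triples and the norm values (on a 1-5 scale, in the style of the Brysbaert et al. norms) are invented for illustration.

```python
from collections import defaultdict

# Toy (verb, relation, noun) triples as they would come from a parsed
# corpus, plus toy concreteness norms on a 1-5 scale.
triples = [("eat", "dobj", "apple"), ("discuss", "dobj", "idea"),
           ("eat", "nsubj", "child"), ("discuss", "nsubj", "committee")]
concreteness = {"apple": 5.0, "idea": 1.4, "child": 4.9, "committee": 3.3,
                "eat": 4.4, "discuss": 2.4}

# Mean noun concreteness per (verb-concreteness band, relation).
by_relation = defaultdict(list)
for verb, rel, noun in triples:
    band = "concrete-verb" if concreteness[verb] >= 3.0 else "abstract-verb"
    by_relation[(band, rel)].append(concreteness[noun])

for key, vals in sorted(by_relation.items()):
    print(key, round(sum(vals) / len(vals), 2))
```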

Generating a Novel Dataset of Multimodal Referring Expressions
Nikhil Krishnaswamy | James Pustejovsky

Referring expressions and definite descriptions of objects in space exploit information both about object characteristics and locations. To resolve potential ambiguity, referencing strategies in language can rely on increasingly abstract concepts to distinguish an object in a given location from similar ones elsewhere, yet the description of the intended location may still be imprecise or difficult to interpret. Meanwhile, modalities such as gesture may communicate spatial information such as locations in a more concise manner. In real peer-to-peer communication, humans use language and gesture together to reference entities, with a capacity for mixing and changing modalities where needed. While recent progress in AI and human-computer interaction has created systems where a human can interact with a computer multimodally, computers often lack the capacity to intelligently mix modalities when generating referring expressions. We present a novel dataset of referring expressions combining natural language and gesture, describe its creation and evaluation, and its use in training computational models for generating and interpreting multimodal referring expressions.
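
One hypothetical shape such a dataset record could take, combining an utterance with timed gesture events. Every field name below is invented for illustration and need not match the published dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GestureEvent:
    kind: str                    # e.g. "deixis" (pointing)
    target_xyz: List[float]      # pointed-at location in scene coordinates
    start_ms: int
    end_ms: int

@dataclass
class MultimodalRE:
    """One referring expression mixing speech and gesture (hypothetical
    schema for illustration only)."""
    utterance: str
    gestures: List[GestureEvent] = field(default_factory=list)
    referent_id: Optional[str] = None     # gold target object

ex = MultimodalRE(
    utterance="the block next to that one",
    gestures=[GestureEvent("deixis", [0.4, 0.0, 1.2], 300, 900)],
    referent_id="block_07",
)
print(ex.utterance, "+", len(ex.gestures), "gesture(s)")
```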

On Learning Word Embeddings From Linguistically Augmented Text Corpora
Amila Silva | Chathurika Amarathunga

Word embedding is a technique in Natural Language Processing (NLP) for mapping words into vector space representations. Since it has boosted the performance of many NLP downstream tasks, the task of learning word embeddings has received significant attention. Nevertheless, most of the underlying word embedding methods, such as word2vec and GloVe, fail to produce high-quality embeddings if the text corpus is small and sparse. This paper proposes a method to generate effective word embeddings from limited data. Through experiments, we show that our proposed model outperforms existing methods on the classical word similarity task and in a domain-specific application.
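
The abstract does not spell out the augmentation, but one common variant is to annotate tokens with linguistic information (e.g., POS tags) before training, so sparse corpora yield more informative contexts. The sketch below shows that variant with gensim (≥ 4 API) and is not necessarily the authors' method.

```python
from gensim.models import Word2Vec

# Toy pre-tagged corpus; a real pipeline would run a POS tagger first.
raw = [["the", "bank", "approved", "the", "loan"],
       ["she", "sat", "on", "the", "river", "bank"]]
pos = [["DET", "NOUN", "VERB", "DET", "NOUN"],
       ["PRON", "VERB", "ADP", "DET", "NOUN", "NOUN"]]

# Augment each token with its POS tag before training.
augmented = [[f"{w}_{t}" for w, t in zip(sent, tags)]
             for sent, tags in zip(raw, pos)]

# Tiny corpus, so hyperparameters are purely illustrative.
model = Word2Vec(sentences=augmented, vector_size=50, window=3,
                 min_count=1, sg=1, epochs=50, seed=1)
print(model.wv.most_similar("bank_NOUN", topn=3))
```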

Sentiment Independent Topic Detection in Rated Hospital Reviews
Christian Wartena | Uwe Sander | Christiane Patzelt

We present a simple method to find topics in user reviews that accompany ratings for products or services. Standard topic analysis performs sub-optimally on such data, since the word distributions in the documents are determined not only by the topics but by the sentiment as well. We reduce the influence of the sentiment on the topic selection by adding two explicit topics representing positive and negative sentiment. We evaluate the proposed method on a set of over 15,000 hospital reviews. We show that the proposed method, Latent Semantic Analysis with explicit word features, finds topics with a much smaller sentiment bias than other, similar methods.
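
A minimal sketch of the general idea of adding explicit sentiment features before the SVD step of LSA: two extra columns count sentiment words per review and, weighted strongly, absorb the sentiment variance, so the remaining components are closer to sentiment-independent topics. The lexicon, weighting and reviews are toy assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

reviews = ["friendly staff but long waiting time",
           "terrible food and rude staff",
           "excellent surgeons and quick recovery",
           "waiting time was awful and parking fine"]
POS = {"friendly", "excellent", "quick", "fine"}
NEG = {"long", "terrible", "rude", "awful"}

X = TfidfVectorizer().fit_transform(reviews)

# Two explicit columns counting positive/negative sentiment words per
# review; the weight 5.0 is an arbitrary choice for the toy example.
sent = np.array([[sum(w in POS for w in r.split()),
                  sum(w in NEG for w in r.split())] for r in reviews], float)
X_aug = hstack([X, csr_matrix(5.0 * sent)])

svd = TruncatedSVD(n_components=3, random_state=0)
doc_topics = svd.fit_transform(X_aug)
print(doc_topics.shape)   # (4, 3): per-review topic coordinates
```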

Investigating the Stability of Concrete Nouns in Word Embeddings
Bénédicte Pierrejean | Ludovic Tanguy

We know that word embeddings trained using neural-based methods (such as word2vec SGNS) suffer from stability problems: across two models trained using the exact same set of parameters, the nearest neighbors of a word are likely to change. Not all words are equally impacted by this internal instability, and recent studies have investigated features influencing the stability of word embeddings. This stability can be seen as a clue to the reliability of the semantic representation of a word. In this work, we investigate the influence of the degree of concreteness of nouns on the stability of their semantic representation. We show that for English generic corpora, abstract words are more affected by stability problems than concrete words. We also find that, to a certain extent, the difference between the degree of concreteness of a noun and that of its nearest neighbors can partly explain the stability or instability of its neighbors.
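
A minimal sketch of how such instability can be measured: train two SGNS models with identical parameters but different seeds, then compute the overlap of a word's top-k nearest neighbours (gensim ≥ 4 API; the corpus here is a toy, so the numbers are not meaningful).

```python
from gensim.models import Word2Vec

# Tiny corpus for illustration; real stability studies use large corpora.
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"],
          ["freedom", "is", "an", "abstract", "idea"]] * 50

def train(seed):
    # workers=1 makes each run deterministic for its seed.
    return Word2Vec(corpus, vector_size=50, window=3, min_count=1,
                    sg=1, epochs=5, seed=seed, workers=1)

def stability(word, m1, m2, k=5):
    """Overlap of the top-k nearest neighbours across two runs (0..1)."""
    n1 = {w for w, _ in m1.wv.most_similar(word, topn=k)}
    n2 = {w for w, _ in m2.wv.most_similar(word, topn=k)}
    return len(n1 & n2) / k

m1, m2 = train(seed=1), train(seed=2)
print("cat:", stability("cat", m1, m2), " idea:", stability("idea", m1, m2))
```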