Shalom Lappin


2023

Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD)
Ellen Breitholtz | Shalom Lappin | Sharid Loáiciga | Nikolai Ilinykh | Simon Dobnik
Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD)

2022

A Neural Model for Compositional Word Embeddings and Sentence Processing
Shalom Lappin | Jean-Philippe Bernardy
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

We propose a new neural model for word embeddings, which uses unitary matrices as the primary device for encoding lexical information. It uses simple matrix multiplication to derive matrices for larger units, yielding a sentence processing model that is strictly compositional, does not lose information over time steps, and is transparent, in the sense that word embeddings can be analysed regardless of context. This model does not employ activation functions, and so the network is fully accessible to analysis by the methods of linear algebra at each point in its operation on an input sequence. We test it on two NLP agreement tasks and obtain rule-like perfect accuracy, with greater stability than current state-of-the-art systems. Our proposed model goes some way towards offering a class of computationally powerful deep learning systems that can be fully understood and compared to human cognitive processes for natural language learning and representation.
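
To make the composition mechanism concrete, here is a minimal sketch in NumPy, assuming a toy four-dimensional lexicon; the construction and names are illustrative and not drawn from the authors' implementation:

```python
# Sketch: word embeddings as unitary matrices, composed by matrix
# multiplication. The product of unitary matrices is unitary, so the
# sentence representation preserves norms and loses no information.
import numpy as np

def random_unitary(n, rng):
    # QR decomposition of a random complex matrix yields a unitary Q;
    # rescaling by the phases of R's diagonal makes it uniformly random.
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(0)
lexicon = {w: random_unitary(4, rng) for w in ["the", "dogs", "bark"]}

# Strict composition: multiply the embeddings of the words in order.
sentence = lexicon["the"] @ lexicon["dogs"] @ lexicon["bark"]

# The product is still unitary, so it preserves vector norms exactly.
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.isclose(np.linalg.norm(sentence @ v), np.linalg.norm(v))
```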

2020

How Furiously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context
Jey Han Lau | Carlos Armendariz | Shalom Lappin | Matthew Purver | Chang Shu
Transactions of the Association for Computational Linguistics, Volume 8

We study the influence of context on sentence acceptability. First, we compare the acceptability ratings of sentences judged in isolation, with a relevant context, and with an irrelevant context. Our results show that context induces a cognitive load for humans, which compresses the distribution of ratings. Moreover, in relevant contexts we observe a discourse coherence effect that uniformly raises acceptability. Next, we test unidirectional and bidirectional language models on their ability to predict acceptability ratings. The bidirectional models show very promising results, with the best model achieving a new state of the art for unsupervised acceptability prediction. The two sets of experiments provide insights into the cognitive aspects of sentence processing and central issues in the computational modeling of text and discourse.
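
One acceptability measure used in this line of work is SLOR: the model's log probability for a sentence, normalized by its unigram log probability and length. A minimal sketch, with hypothetical log probabilities standing in for the output of a trained language model:

```python
def slor(logp_model, logp_unigram, length):
    """SLOR = (log P_model(s) - log P_unigram(s)) / |s|."""
    return (logp_model - logp_unigram) / length

# A 5-word sentence the model finds far more probable than its unigram
# baseline predicts receives a higher (more acceptable) score.
print(slor(logp_model=-18.2, logp_unigram=-31.0, length=5))  # 2.56
```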

2019

Bayesian Inference Semantics: A Modelling System and A Test Suite
Jean-Philippe Bernardy | Rasmus Blanck | Stergios Chatzikyriakidis | Shalom Lappin | Aleksandre Maskharashvili
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)

We present BIS, a Bayesian Inference Semantics, for probabilistic reasoning in natural language. The current system is based on the framework of Bernardy et al. (2018), but departs from it in important respects. BIS makes use of Bayesian learning for inferring a hypothesis from premises. This involves estimating the probability of the hypothesis, given the data supplied by the premises of an argument. It uses a syntactic parser to generate typed syntactic structures that serve as input to a model generation system. Sentences are compositionally interpreted as probabilistic programs, and the corresponding truth values are estimated using sampling methods. BIS successfully deals with various probabilistic semantic phenomena, including frequency adverbs, generalised quantifiers, generics, and vague predicates. It performs well on a number of interesting probabilistic reasoning tasks. It also sustains most classically valid inferences (instantiation, de Morgan’s laws, etc.). To test BIS we have built an experimental test suite with examples of a range of probabilistic and classical inference patterns.
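
A minimal sketch of the underlying inference step, estimating the probability of a hypothesis given premises by rejection sampling; this illustrates the kind of Bayesian estimation described above, not the actual parser-driven BIS implementation, and the toy example is invented:

```python
import random

def estimate(hypothesis, premises, sample_model, n=100_000):
    # Draw candidate models, keep those satisfying all premises, and
    # measure how often the hypothesis holds among the survivors.
    kept = hits = 0
    for _ in range(n):
        m = sample_model()
        if all(p(m) for p in premises):
            kept += 1
            hits += hypothesis(m)
    return hits / kept if kept else float("nan")

# Toy example: a model is the probability that a random bird flies.
p = estimate(
    hypothesis=lambda m: random.random() < m,   # "Tweety flies"
    premises=[lambda m: m > 0.5],               # "most birds fly"
    sample_model=lambda: random.random(),       # uniform prior
)
print(f"P(Tweety flies | most birds fly) = {p:.2f}")  # about 0.75
```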

The Effect of Context on Metaphor Paraphrase Aptness Judgments
Yuri Bizzoni | Shalom Lappin
Proceedings of the 13th International Conference on Computational Semantics - Long Papers

We conduct two experiments to study the effect of context on metaphor paraphrase aptness judgments. The first is an Amazon Mechanical Turk (AMT) crowdsourcing task in which speakers rank metaphor-paraphrase candidate sentence pairs, presented in short document contexts, for paraphrase aptness. In the second, we train a composite DNN to predict these human judgments, first in binary classifier mode, and then as gradient ratings. We found that for both mean human judgments and our DNN’s predictions, adding document context compresses the aptness scores towards the center of the scale, raising low out-of-context ratings and lowering high out-of-context scores. We offer a provisional explanation for this compression effect.

Language Modeling with Syntactic and Semantic Representation for Sentence Acceptability Predictions
Adam Ek | Jean-Philippe Bernardy | Shalom Lappin
Proceedings of the 22nd Nordic Conference on Computational Linguistics

In this paper, we investigate the effect of enhancing lexical embeddings in LSTM language models (LMs) with syntactic and semantic representations. We evaluate the language models using perplexity, and we assess their performance on the task of predicting human sentence acceptability judgments. We train LSTM language models on sentences automatically annotated with universal syntactic dependency roles (Nivre, 2016), dependency depth, and universal semantic tags (Abzianidze et al., 2017) to predict sentence acceptability judgments. Our experiments indicate that syntactic tags lower perplexity, while semantic tags increase it. They also show that neither syntactic nor semantic tags improve the performance of LSTM language models on the task of predicting sentence acceptability judgments.
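
The perplexity evaluation mentioned above is the exponentiated average negative log-likelihood per token. A minimal sketch, with hypothetical token log probabilities standing in for the output of a trained LSTM LM:

```python
import math

def perplexity(token_logprobs):
    # Lower perplexity means the LM predicts the tokens more confidently.
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

print(perplexity([-2.1, -0.4, -3.3, -1.0, -0.7]))  # about 4.48
```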

Predicates as Boxes in Bayesian Semantics for Natural Language
Jean-Philippe Bernardy | Rasmus Blanck | Stergios Chatzikyriakidis | Shalom Lappin | Aleksandre Maskharashvili
Proceedings of the 22nd Nordic Conference on Computational Linguistics

In this paper, we present a Bayesian approach to natural language semantics. Our main focus is on the inference task in an environment where judgments require probabilistic reasoning. We treat nouns, verbs, adjectives, etc. as unary predicates, and we model them as boxes in a bounded domain. We apply Bayesian learning to satisfy constraints expressed as premises. In this way we construct a model by specifying boxes for the predicates. The probability of the hypothesis (the conclusion) is then evaluated against the model that incorporates the premises as constraints.
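
A minimal sketch of the box representation, assuming axis-aligned boxes in a bounded two-dimensional domain; the predicates and coordinates are invented for illustration:

```python
import random

class Box:
    """A unary predicate as an axis-aligned box: an individual
    satisfies the predicate iff its point lies inside the box."""
    def __init__(self, lows, highs):
        self.lows, self.highs = lows, highs

    def contains(self, point):
        return all(lo <= x <= hi
                   for lo, x, hi in zip(self.lows, point, self.highs))

# A premise-style constraint: the "dog" box lies inside the "animal"
# box, so anything classified as a dog is also classified as an animal.
animal = Box([0.0, 0.0], [0.8, 0.8])
dog = Box([0.1, 0.1], [0.4, 0.4])

fido = [random.uniform(0.1, 0.4), random.uniform(0.1, 0.4)]
assert dog.contains(fido) and animal.contains(fido)
```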

2018

The Influence of Context on Sentence Acceptability Judgements
Jean-Philippe Bernardy | Shalom Lappin | Jey Han Lau
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

We investigate the influence that document context exerts on human acceptability judgements for English sentences, via two sets of experiments. The first compares ratings for sentences presented on their own with ratings for the same set of sentences given in their document contexts. The second assesses the accuracy with which two types of neural models — one that incorporates context during training and one that does not — predict these judgements. Our results indicate that: (1) context improves acceptability ratings for ill-formed sentences, but also reduces them for well-formed sentences; and (2) context helps unsupervised systems to model acceptability.

Predicting Human Metaphor Paraphrase Judgments with Deep Neural Networks
Yuri Bizzoni | Shalom Lappin
Proceedings of the Workshop on Figurative Language Processing

We propose a new annotated corpus for metaphor interpretation by paraphrase, and a novel DNN model for performing this task. Our corpus consists of 200 sets of 5 sentences, with each set containing one reference metaphorical sentence, and four ranked candidate paraphrases. Our model is trained for a binary classification of paraphrase candidates, and then used to predict graded paraphrase acceptability. It reaches an encouraging 75% accuracy on the binary classification task, and high Pearson (.75) and Spearman (.68) correlations on the gradient judgment prediction task.

A Compositional Bayesian Semantics for Natural Language
Jean-Philippe Bernardy | Rasmus Blanck | Stergios Chatzikyriakidis | Shalom Lappin
Proceedings of the First International Workshop on Language Cognition and Computational Models

We propose a compositional Bayesian semantics that interprets declarative sentences in a natural language by assigning them probability conditions. These are conditional probabilities that estimate the likelihood that a competent speaker would endorse an assertion, given certain hypotheses. Our semantics is implemented in a functional programming language. It estimates the marginal probability of a sentence through Markov Chain Monte Carlo (MCMC) sampling of objects in vector space models satisfying specified hypotheses. We apply our semantics to examples with several predicates and generalised quantifiers, including higher-order quantifiers. It captures the vagueness of predication (both gradable and non-gradable), without positing a precise boundary for classifier application. We present a basic account of semantic learning based on our semantic system. We compare our proposal to other current theories of probabilistic semantics, and we show that it offers several important advantages over these accounts.
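
A minimal sketch of a probability condition for a vague predicate, using direct Monte Carlo sampling rather than the paper's MCMC; the sigmoid endorsement curve and the prior over heights are assumptions for illustration:

```python
import math
import random

def p_tall(height_cm):
    # No crisp cutoff for "tall": the probability that a competent
    # speaker endorses the assertion rises smoothly with height.
    return 1 / (1 + math.exp(-(height_cm - 180) / 5))

# Marginal probability of "John is tall" under a prior for his height.
samples = [random.gauss(175, 8) for _ in range(100_000)]
marginal = sum(p_tall(h) for h in samples) / len(samples)
print(f"P(John is tall) = {marginal:.2f}")
```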

2017

Deep Learning of Binary and Gradient Judgements for Semantic Paraphrase
Yuri Bizzoni | Shalom Lappin
Proceedings of the 12th International Conference on Computational Semantics (IWCS) — Short papers

Using Deep Neural Networks to Learn Syntactic Agreement
Jean-Philippe Bernardy | Shalom Lappin
Linguistic Issues in Language Technology, Volume 15, 2017

We consider the extent to which different deep neural network (DNN) configurations can learn syntactic relations, by taking up Linzen et al.’s (2016) work on subject-verb agreement with LSTM RNNs. We test their methods on a much larger corpus than they used (a ~24 million example part of the WaCky corpus, instead of their ~1.35 million example corpus, both drawn from Wikipedia). We experiment with several different DNN architectures (LSTM RNNs, GRUs, and CNNs), and alternative parameter settings for these systems (vocabulary size, training-to-test ratio, number of layers, memory size, dropout rate, and lexical embedding dimension size). We also try out our own unsupervised DNN language model. Our results are broadly compatible with those that Linzen et al. report. However, we discovered some interesting, and in some cases surprising, features of DNNs and language models in their performance of the agreement learning task. In particular, we found that DNNs require large vocabularies to form substantive lexical embeddings in order to learn structural patterns. This finding has interesting consequences for our understanding of the way in which DNNs represent syntactic information. It suggests that DNNs learn syntactic patterns more efficiently through rich lexical embeddings, with semantic as well as syntactic cues, than from training on lexically impoverished strings that highlight structural patterns.
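
A minimal sketch of the agreement task setup in PyTorch: an LSTM classifier reads the words preceding a verb and predicts whether the verb should be singular or plural. The architecture and sizes here are illustrative, not the authors' configuration:

```python
import torch
import torch.nn as nn

class AgreementLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=50, hidden_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2)  # singular vs. plural

    def forward(self, token_ids):
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states[:, -1])  # classify from the final state

# Toy batch: two 4-token prefixes ending just before the target verb.
model = AgreementLSTM(vocab_size=100)
prefixes = torch.randint(0, 100, (2, 4))
labels = torch.tensor([0, 1])
loss = nn.CrossEntropyLoss()(model(prefixes), labels)
loss.backward()  # gradients for one training step
```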

2015

Probabilistic Type Theory and Natural Language Semantics
Robin Cooper | Simon Dobnik | Shalom Lappin | Staffan Larsson
Linguistic Issues in Language Technology, Volume 10, 2015

Type theory has played an important role in specifying the formal connection between syntactic structure and semantic interpretation within the history of formal semantics. In recent years rich type theories developed for the semantics of programming languages have become influential in the semantics of natural language. The use of probabilistic reasoning to model human learning and cognition has become an increasingly important part of cognitive science. In this paper we offer a probabilistic formulation of a rich type theory, Type Theory with Records (TTR), and we illustrate how this framework can be used to approach the problem of semantic learning. Our probabilistic version of TTR is intended to provide an interface between the cognitive process of classifying situations according to the types that they instantiate, and the compositional semantics of natural language.
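
A minimal sketch of how probabilistic type judgments can compose, with probabilities for meet and join types derived from judgments of the form p(s : T); the particular numbers are invented, and in practice p(s : T) would come from a trained classifier of situations:

```python
def p_meet(p_t1, p_t2_given_t1):
    # p(s : T1 ^ T2) = p(s : T1) * p(s : T2 | s : T1)
    return p_t1 * p_t2_given_t1

def p_join(p_t1, p_t2, p_both):
    # p(s : T1 v T2) = p(s : T1) + p(s : T2) - p(s : T1 ^ T2)
    return p_t1 + p_t2 - p_both

p_boy = 0.8                # p(s : Boy)
p_smile = 0.5              # p(s : Smile)
p_smile_given_boy = 0.5    # p(s : Smile | s : Boy)

p_both = p_meet(p_boy, p_smile_given_boy)
print(p_both)                      # 0.4
print(p_join(p_boy, p_smile, p_both))  # 0.9, up to float rounding
```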

Unsupervised Prediction of Acceptability Judgements
Jey Han Lau | Alexander Clark | Shalom Lappin
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

Proceedings of the EACL 2014 Workshop on Type Theory and Natural Language Semantics (TTNLS)
Robin Cooper | Simon Dobnik | Shalom Lappin | Staffan Larsson
Proceedings of the EACL 2014 Workshop on Type Theory and Natural Language Semantics (TTNLS)

A Probabilistic Rich Type Theory for Semantic Interpretation
Robin Cooper | Simon Dobnik | Shalom Lappin | Staffan Larsson
Proceedings of the EACL 2014 Workshop on Type Theory and Natural Language Semantics (TTNLS)

Intensions as Computable Functions
Shalom Lappin
Linguistic Issues in Language Technology, Volume 9, 2014 - Perspectives on Semantic Representations for Textual Inference

Classical intensional semantic frameworks, like Montague’s Intensional Logic (IL), identify intensional identity with logical equivalence. This criterion of co-intensionality is excessively coarse-grained, and it gives rise to several well-known difficulties. Theories of fine-grained intensionality have been proposed to avoid this problem. Several of these provide a formal solution to the problem, but they do not ground this solution in a substantive account of intensional difference. Applying the distinction between operational and denotational meaning, developed for the semantics of programming languages, to the interpretation of natural language expressions offers the basis for such an account. It permits us to escape some of the complications generated by the traditional modal characterization of intensions.
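
The operational/denotational contrast at the heart of this proposal can be illustrated with two programs that compute the same function: they share a denotation but differ operationally, and so can serve as distinct intensions. A minimal illustration (the example is ours, not the paper's):

```python
def sum_linear(n):
    # O(n): compute 0 + 1 + ... + n by iteration.
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum_closed(n):
    # O(1): the same denotation via Gauss's closed form.
    return n * (n + 1) // 2

# Extensionally equivalent: identical input-output behaviour...
assert all(sum_linear(n) == sum_closed(n) for n in range(100))
# ...but operationally distinct: different procedures, different costs.
```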

2013

Statistical Representation of Grammaticality Judgements: the Limits of N-Gram Models
Alexander Clark | Gianluca Giorgolo | Shalom Lappin
Proceedings of the Fourth Annual Workshop on Cognitive Modeling and Computational Linguistics (CMCL)

2009

Another Look at Indirect Negative Evidence
Alexander Clark | Shalom Lappin
Proceedings of the EACL 2009 Workshop on Cognitive Aspects of Computational Language Acquisition

2007

Classifying Non-Sentential Utterances in Dialogue: A Machine Learning Approach
Raquel Fernández | Jonathan Ginzburg | Shalom Lappin
Computational Linguistics, Volume 33, Number 3, September 2007

2005

Using Machine Learning for Non-Sentential Utterance Classification
Raquel Fernández | Jonathan Ginzburg | Shalom Lappin
Proceedings of the 6th SIGdial Workshop on Discourse and Dialogue

2004

Classifying Ellipsis in Dialogue: A Machine Learning Approach
Raquel Fernández | Jonathan Ginzburg | Shalom Lappin
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

2001

Generating Full Paraphrases of Fragments in a Dialogue Interpretation System
Christian Ebert | Shalom Lappin | Howard Gregory | Nicolas Nicolov
Proceedings of the Second SIGdial Workshop on Discourse and Dialogue

Introduction to the Special Issue on Computational Anaphora Resolution
Ruslan Mitkov | Branimir Boguraev | Shalom Lappin
Computational Linguistics, Volume 27, Number 4, December 2001

1996

A Generalized Reconstruction Algorithm for Ellipsis Resolution
Shalom Lappin | Hsue-Hueh Shih
COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics

1994

An Algorithm for Pronominal Anaphora Resolution
Shalom Lappin | Herbert J. Leass
Computational Linguistics, Volume 20, Number 4, December 1994

1990

Anaphora Resolution in Slot Grammar
Shalom Lappin | Michael McCord
Computational Linguistics, Volume 16, Number 4, December 1990

A Syntactic Filter on Pronominal Anaphora for Slot Grammar
Shalom Lappin | Michael McCord
28th Annual Meeting of the Association for Computational Linguistics

1988

An Active Bilingual Lexicon for Machine Translation
Igal Golan | Shalom Lappin | Mori Rimon
Coling Budapest 1988 Volume 1: International Conference on Computational Linguistics