Linguistic Issues in Language Technology (2014)



Linguistic Issues in Language Technology, Volume 9, 2014 - Perspectives on Semantic Representations for Textual Inference

Introduction
Annie Zaenen | Cleo Condoravdi | Valeria de Paiva

The BIUTEE Research Platform for Transformation-based Textual Entailment Recognition
Asher Stern | Ido Dagan

Recent progress in research on the Recognizing Textual Entailment (RTE) task shows a constantly increasing level of complexity in this research field. A way to keep this complexity from becoming a barrier for researchers, especially for newcomers to the field, is to provide a freely available RTE system with a high level of flexibility and extensibility. In this paper, we introduce our RTE system, BiuTee2, and suggest it as an effective research framework for RTE. In particular, BiuTee follows the prominent transformation-based paradigm for RTE, and offers an accessible platform for research within this approach. We describe each of BiuTee’s components and point out the mechanisms and properties which directly support adaptation and integration of new components. In addition, we describe BiuTee’s visual tracing tool, which provides notable assistance to researchers in refining and “debugging” their knowledge resources and inference components.
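
As a rough illustration of the transformation-based paradigm this abstract refers to, the sketch below searches for the cheapest sequence of knowledge-based rewrites that turns the text into the hypothesis. The rules, their costs, and the string-rewriting shortcut are toy assumptions for this sketch, not BiuTee's actual machinery, which operates on parse trees with learned rule costs.

```python
# Minimal sketch of transformation-based entailment search.
import heapq

RULES = {("purchased", "bought"): 0.1,   # hypothetical paraphrase rule
         ("bought", "owns"): 0.6}        # hypothetical inference rule

def entailment_cost(text, hypothesis):
    """Return the cost of the cheapest rewrite 'proof', or None."""
    frontier = [(0.0, text)]
    seen = set()
    while frontier:
        cost, sent = heapq.heappop(frontier)
        if sent == hypothesis:
            return cost
        if sent in seen:
            continue
        seen.add(sent)
        for (src, tgt), rule_cost in RULES.items():
            if src in sent:
                heapq.heappush(frontier, (cost + rule_cost, sent.replace(src, tgt)))
    return None  # no transformation sequence found

print(entailment_cost("John purchased a car", "John owns a car"))  # 0.7
```

A low final cost is then taken as evidence of entailment; the step-by-step rewrite sequence is the kind of "proof" a visual tracing tool lets a researcher inspect.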

Is there a place for logic in recognizing textual entailment?
Johan Bos

From a purely theoretical point of view, it makes sense to approach recognizing textual entailment (RTE) with the help of logic. After all, entailment matters are all about logic. In practice, only a few RTE systems follow the bumpy road from words to logic. This is probably because it requires a combination of robust, deep semantic analysis and logical inference—and why develop something this complex if you can perhaps get away with something simpler? In this article, with the help of an RTE system based on Combinatory Categorial Grammar, Discourse Representation Theory, and first-order theorem proving, we make an empirical assessment of the logic-based approach. High precision paired with low recall is a key characteristic of this system. The bottleneck in achieving high recall is the lack of a systematic way to produce relevant background knowledge. There is a place for logic in RTE, but it is (still) overshadowed by the knowledge acquisition problem.
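
To make the pipeline concrete, here is a minimal sketch of the logic-based recipe: parse both sentences into first-order logic, add background knowledge as axioms, and ask a theorem prover whether the hypothesis follows. The hand-written formulas stand in for a full CCG/DRT analysis, and the sketch uses NLTK's Prover9 wrapper, which assumes the Prover9 binary is installed.

```python
# Sketch of logic-based RTE: does T plus background knowledge prove H?
from nltk.sem import Expression
from nltk.inference import Prover9  # requires the Prover9 binary

read = Expression.fromstring
text = read(r'exists x.(dog(x) & bark(x))')           # "A dog barks."
hypothesis = read(r'exists x.(animal(x) & bark(x))')  # "An animal barks."
background = [read(r'all x.(dog(x) -> animal(x))')]   # WordNet-style axiom

print(Prover9().prove(hypothesis, [text] + background))  # True
```

Without the background axiom the proof fails, which is exactly the bottleneck the article points to: precision comes from the logic, but recall depends on supplying axioms like this one systematically.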

Decomposing Semantic Inference
Elena Cabrio | Bernardo Magnini

Besides formal approaches to semantic inference that rely on logical representations of meaning, the notion of Textual Entailment (TE) has been proposed as an applied framework to capture major semantic inference needs across applications in Computational Linguistics. Although several approaches have been tried and evaluation campaigns have shown improvements in TE, renewed interest is arising in the research community in a deeper and better understanding of the core phenomena involved in textual inference. Pursuing this direction, we are convinced that crucial progress will derive from a focus on decomposing the complexity of the TE task into basic phenomena and on their combination. In this paper, we carry out a deep analysis of TE data sets, investigating the relation between two relevant aspects of semantic inference: the logical dimension, i.e. the capacity of the inference to prove the conclusion from its premises, and the linguistic dimension, i.e. the linguistic devices used to accomplish the goal of the inference. We propose a decomposition approach over TE pairs, where single linguistic phenomena are isolated in what we have called atomic inference pairs, and we show that at this granularity level the actual correlation between the linguistic and the logical dimensions of semantic inference emerges and can be empirically observed.

Frege in Space: A Program for Composition Distributional Semantics
Marco Baroni | Raffaella Bernardi | Roberto Zamparelli

The lexicon of any natural language encodes a huge number of distinct word meanings. Just to understand this article, you will need to know what thousands of words mean. The space of possible sentential meanings is infinite: In this article alone, you will encounter many sentences that express ideas you have never heard before, we hope. Statistical semantics has addressed the issue of the vastness of word meaning by proposing methods to harvest meaning automatically from large collections of text (corpora). Formal semantics in the Fregean tradition has developed methods to account for the infinity of sentential meaning based on the crucial insight of compositionality, the idea that the meaning of sentences is built incrementally by combining the meanings of their constituents. This article sketches a new approach to semantics that brings together ideas from statistical and formal semantics to account, in parallel, for the richness of lexical meaning and the combinatorial power of sentential semantics. We adopt, in particular, the idea that word meaning can be approximated by the patterns of co-occurrence of words in corpora from statistical semantics, and the idea that compositionality can be captured in terms of a syntax-driven calculus of function application from formal semantics.
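
A toy sketch of the composition step described here, under the lexical-function view: a noun is a co-occurrence vector, and an adjective is a matrix mapping noun vectors to noun-phrase vectors. All numbers below are invented; real models estimate them from corpora.

```python
import numpy as np

dog = np.array([0.8, 0.1, 0.3])    # toy co-occurrence vector for "dog"
RED = np.array([[1.0, 0.2, 0.0],   # toy matrix for "red": a function
                [0.0, 0.9, 0.1],   # from noun meanings to
                [0.3, 0.0, 1.1]])  # noun-phrase meanings

red_dog = RED @ dog  # composition as syntax-driven function application
print(red_dog)
```

Because `red_dog` lives in the same space as plain noun vectors, further composition can proceed in the same syntax-driven way.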

Intensions as Computable Functions
Shalom Lappin

Classical intensional semantic frameworks, like Montague’s Intensional Logic (IL), identify intensional identity with logical equivalence. This criterion of co-intensionality is excessively coarse-grained, and it gives rise to several well-known difficulties. Theories of fine-grained intensionality have been proposed to avoid this problem. Several of these provide a formal solution to the problem, but they do not ground this solution in a substantive account of intensional difference. Applying the distinction between operational and denotational meaning, developed for the semantics of programming languages, to the interpretation of natural language expressions, offers the basis for such an account. It permits us to escape some of the complications generated by the traditional modal characterization of intensions.
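
The programming-language analogy can be made concrete: the two functions below denote the same number-theoretic function (the same extension), yet differ in their operational meaning, i.e. in how they compute it. This is a toy analogue of the distinction, not the paper's formal proposal.

```python
def sum_iterative(n):
    """Sum 0..n by explicit iteration: linearly many steps."""
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum_closed_form(n):
    """Sum 0..n via Gauss's formula: constantly many steps."""
    return n * (n + 1) // 2

# Denotationally identical, operationally distinct:
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
```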

Recent Progress on Monotonicity
Thomas F. Icard III | Lawrence S. Moss

This paper serves two purposes. It is a summary of much work concerning formal treatments of monotonicity and polarity in natural language, and it also discusses connections to related work on exclusion relations and to psycholinguistics and computational linguistics. The second part of the paper presents a summary of some new work on a formal Monotonicity Calculus.
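
As a minimal illustration of the kind of inference a monotonicity calculus licenses: "every" is downward monotone in its restrictor and upward monotone in its scope, so hyponym and hypernym substitutions are licensed in opposite directions. The two-fact taxonomy below is invented for the example.

```python
# Toy monotonicity check for the quantifier "every".
HYPONYM = {("poodle", "dog"), ("run", "move")}

def entails(w1, w2):
    """Lexical entailment: w1 is at least as specific as w2."""
    return w1 == w2 or (w1, w2) in HYPONYM

def every_entails(r1, s1, r2, s2):
    """Does 'every r1 s1' entail 'every r2 s2'?"""
    # restrictor: downward (r2 <= r1); scope: upward (s1 <= s2)
    return entails(r2, r1) and entails(s1, s2)

print(every_entails("dog", "run", "poodle", "move"))  # True
print(every_entails("poodle", "run", "dog", "run"))   # False
```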

The Relational Syllogistic Revisited
Ian Pratt-Hartmann

The relational syllogistic is an extension of the language of Classical syllogisms in which predicates are allowed to feature transitive verbs with quantified objects. It is known that the relational syllogistic does not admit a finite set of syllogism-like rules whose associated (direct) derivation relation is sound and complete. We present a modest extension of this language which does.
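
For illustration, here is a schematic syllogism-like rule in the relational fragment (a textbook-style example, not one of the paper's official rules): from "Every artist admires some beekeeper" and "Every beekeeper is a carpenter", infer "Every artist admires some carpenter".

```latex
\[
\frac{\forall x\,\bigl(A(x) \rightarrow \exists y\,(B(y) \wedge R(x,y))\bigr)
      \qquad
      \forall y\,\bigl(B(y) \rightarrow C(y)\bigr)}
     {\forall x\,\bigl(A(x) \rightarrow \exists y\,(C(y) \wedge R(x,y))\bigr)}
\]
```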

NLog-like Inference and Commonsense Reasoning
Lenhart Schubert

Recent implementations of Natural Logic (NLog) have shown that NLog provides a quite direct means of going from sentences in ordinary language to many of the obvious entailments of those sentences. We show here that Episodic Logic (EL) and its Epilog implementation are well-adapted to capturing NLog-like inferences, but beyond that, also support inferences that require a combination of lexical knowledge and world knowledge. However, broad language understanding and commonsense reasoning are still thwarted by the “knowledge acquisition bottleneck”, and we summarize some of our ongoing and contemplated attacks on that persistent difficulty.

Towards a Semantic Model for Textual Entailment Annotation
Assaf Toledo | Stavroula Alexandropoulou | Sophie Chesney | Sophia Katrenko | Heidi Klockmann | Pepijn Kokke | Benno Kruit | Yoad Winter

We introduce a new formal semantic model for annotating textual entailments that describes restrictive, intersective, and appositive modification. The model contains a formally defined interpreted lexicon, which specifies the inventory of symbols and the supported semantic operators, and an informally defined annotation scheme that instructs annotators how to bind words and constructions from a given pair of premise and hypothesis to the interpreted lexicon. We explore the applicability of the proposed model to the Recognizing Textual Entailment (RTE) 1–4 corpora and describe a first-stage annotation scheme on which we based the manual annotation work. The constructions we annotated were found to occur in 80.65% of the entailments in RTE 1–4 and were annotated with a cross-annotator agreement of 68% on average. The annotated parts of the RTE corpora are publicly available for further research.
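
Schematically, the three modification types differ in the entailments they license; the notation below is a standard textbook rendering, not the paper's interpreted lexicon itself.

```latex
\[
[\![\text{French pianist}]\!] = [\![\text{French}]\!] \cap [\![\text{pianist}]\!]
\quad\text{(intersective)}
\qquad
[\![\text{skilful pianist}]\!] \subseteq [\![\text{pianist}]\!]
\quad\text{(restrictive)}
\]
```

Appositive modification contributes a separately entailed proposition: "Ann, a pianist, smiled" entails both "Ann smiled" and "Ann is a pianist".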

Synthetic Logic
Alex J. Djalali

The role of inference as it relates to natural language (NL) semantics has often been neglected. Recently, some NL semanticists have moved away from the heavy machinery of, say, Montagovian-style semantics to a more proof-based approach. Although researchers tend to study each type of system independently, MacCartney (2009) and MacCartney and Manning (2009) (henceforth M&M) recently developed an algorithmic approach to natural logic that attempts to combine insights from both monotonicity calculi and various syllogistic fragments to derive compositionally the relation between two NL sentences from the relations of their parts. At the heart of their system, M&M begin with seven intuitive lexical-semantic relations that NL expressions can stand in, e.g., synonymy and antonymy, and then ask the question: if φ stands in some lexical-semantic relation to ψ, and ψ stands in (a possibly different) lexical-semantic relation to θ, what lexical-semantic relation (if any) can be concluded about the relation between φ and θ? This type of reasoning has the familiar shape of a logical inference rule. However, the logical properties of their join table have not been explored in any real detail. The purpose of this paper is to give M&M’s table a proper logical treatment. As I will show, the table has the underlying form of a syllogistic fragment and relies on a sort of generalized transitive reasoning.
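
The flavor of the join table can be seen in a small fragment: composing two lexical-semantic relations yields, at best, another relation, and the pattern behaves like generalized transitivity. The sketch below covers only four of the seven relations and collapses uninformative cells to independence; it is an illustrative reconstruction, not M&M's full table.

```python
# Illustrative fragment of an M&M-style join table.
EQ, FWD, REV, IND = "=", "<", ">", "#"  # equivalence, forward/reverse
                                        # entailment, independence

JOIN = {
    (EQ, EQ): EQ, (EQ, FWD): FWD, (FWD, EQ): FWD,
    (FWD, FWD): FWD,   # generalized transitivity of forward entailment
    (REV, REV): REV, (EQ, REV): REV, (REV, EQ): REV,
}

def join(r1, r2):
    """Compose two lexical-semantic relations; default to 'no relation'."""
    return JOIN.get((r1, r2), IND)

# poodle < dog and dog < animal, hence poodle < animal:
print(join(FWD, FWD))  # '<'
# forward then reverse entailment licenses no conclusion:
print(join(FWD, REV))  # '#'
```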


Linguistic Issues in Language Technology, Volume 10, 2014

Nominal Compound Interpretation by Intelligent Agents
Marjorie McShane | Stephen Beale | Petr Babkin

This paper presents a cognitively-inspired algorithm for the semantic analysis of nominal compounds by intelligent agents. The agents, modeled within the OntoAgent environment, are tasked to compute a full context-sensitive semantic interpretation of each compound using a battery of engines that rely on a high-quality computational lexicon and ontology. Rather than being treated as an isolated “task”, as in many NLP approaches, nominal compound analysis in OntoAgent represents a minimal extension to the core process of semantic analysis. We hypothesize that seeking similarities across language analysis tasks reflects the spirit of how people approach language interpretation, and that this approach will make feasible the long-term development of truly sophisticated, human-like intelligent agents. The initial evaluation of our approach excludes nominal compounds that are fixed expressions, as these require individual semantic specification at the lexical level.

CALL-SLT: A Spoken CALL System Based on Grammar and Speech Recognition
Manny Rayner | Nikos Tsourakis | Claudia Baur | Pierrette Bouillon | Johanna Gerlach

We describe CALL-SLT, a speech-enabled Computer-Assisted Language Learning application whose central idea is to prompt the student with an abstract representation of what they are supposed to say, and then use a combination of grammar-based speech recognition and rule-based translation to rate their response. The system has been developed to the level of a mature prototype, freely deployed on the web, with versions for several languages. We present an overview of the core system architecture and the various types of content we have developed. Finally, we describe several evaluations, the last of which is a study carried out over about a week using 130 subjects recruited through Amazon Mechanical Turk, in which CALL-SLT was contrasted with a control version where the speech recognition component was disabled. The difference in student learning performance between the two groups was significant at p < 0.02.


Linguistic Issues in Language Technology, Volume 11, 2014 - Theoretical and Computational Morphology: New Trends and Synergies

Theoretical and Computational Morphology: New Trends and Synergies
Bruno Cartoni | Delphine Bernhard | Delphine Tribout

What is grammar like? A usage-based constructionist perspective
Vsevolod Kapatsinski

This paper is intended to elucidate some implications of usage-based linguistic theory for statistical and computational models of language acquisition, focusing on morphology and morphophonology. I discuss the need for grammar (a.k.a. abstraction), the contents of individual grammars (a potentially infinite number of constructions, paradigmatic mappings and predictive relationships between phonological units), the computational characteristics of constructions (complex non-crossover interactions among partially redundant features), resolution of competition among constructions (probability matching), and the need for multimodel inference in modeling internal grammars underlying the linguistic performance of a community.

Kolmogorov complexity of morphs and constructions in English
Katharina Ehret

This chapter demonstrates how compression algorithms can be used to address morphological and syntactic complexity in detail by analysing the contribution of specific linguistic features to English texts. The point of departure is the ongoing complexity debate and quest for complexity metrics. After decades of adhering to the equal complexity axiom, recent research seeks to define and measure linguistic complexity (Dahl 2004; Kortmann and Szmrecsanyi 2012; Miestamo et al. 2008). Against this backdrop, I present a new flavour of the Juola-style compression technique (Juola 1998): targeted manipulation. Essentially, compression algorithms are used to measure linguistic complexity via the relative informativeness of text samples. Thus, I assess the contribution of morphs such as –ing or –ed, and functional constructions such as the progressive (be + verb-ing) or the perfect (have + past participle), to the syntactic and morphological complexity of a mixed-genre corpus of Alice’s Adventures in Wonderland, the Gospel of Mark and newspaper texts. I find that a higher number of marker types leads to higher amounts of morphological complexity in the corpus. Syntactic complexity is reduced because the presence of morphological markers enhances the algorithmic prediction of linguistic patterns. To conclude, I show that information-theoretic methods yield linguistically meaningful results and can be used to measure the complexity of specific linguistic features in naturalistic corpora.
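
The core measurement is simple enough to sketch: compress a sample before and after a targeted manipulation and compare the sizes. The sample text and the -ing manipulation below are toy choices; the chapter's experiments use full corpora and systematic distortions.

```python
# Sketch of compression-based complexity with targeted manipulation.
import re
import zlib

def compressed_size(text):
    return len(zlib.compress(text.encode("utf-8"), 9))

sample = "She was singing while the children were playing and laughing."
manipulated = re.sub(r"ing\b", "", sample)  # strip word-final -ing morphs

print(compressed_size(sample), compressed_size(manipulated))
```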

Polyfunctionality and inflectional economy
Gregory Stump

One compelling kind of evidence for the autonomy of a language’s morphology is the incidence of inflectional polyfunctionality, the systematic use of the same morphology to express distinct but related morphosyntactic content. Polyfunctionality is more complex than mere homophony. It can, in fact, arise in a number of ways: as an effect of rule invitation (wherein the same rule of exponence serves more than one function by interacting with other rules in more than one way), as an expression of morphosyntactic referral, as the effect of a rule of exponence realizing either a disjunction of property sets or a morphomic property set, or as the reflection of a morphosyntactic property set’s cross-categorial versatility. I distinguish these different sources of polyfunctionality in a formally precise way. It is inaccurate to see polyfunctionality as an ambiguating source of grammatical complexity; on the contrary, by enhancing the predictability of a language’s morphology, it may well enhance both the memorability of complex inflected forms and the ease with which they are processed.

Semi-separate exponence in cumulative paradigms: Information-theoretic properties exemplified by Ancient Greek verb endings
Paolo Milizia

By using the system of Ancient Greek verb endings as a case study, this paper deals with the cross-linguistically recurrent appearance of inflectional paradigms that, though generally characterized by cumulative exponence, contain segmentable “semi-separate” endings corresponding to low-frequency cells. Such an exponence system has information-theoretic properties which may be relevant from the point of view of morphological theory. In particular, both the phenomenon of semi-separate exponence and the instances of syncretism that conform to the Brøndalian Principle of Compensation may be viewed as different manifestations of the same cross-linguistic tendency not to let a paradigm’s exponent set be too distant from the situation of equiprobability.
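
The distance-from-equiprobability idea can be made concrete with entropies: compare the observed entropy of a paradigm's distribution over cells to the maximal (equiprobable) entropy. The cell frequencies below are invented for illustration.

```python
# Entropy gap between a paradigm's observed cell distribution and
# equiprobability; frequencies are toy values, not Ancient Greek data.
from math import log2

cell_freqs = {"1sg": 420, "2sg": 60, "3sg": 900,
              "1pl": 110, "2pl": 25, "3pl": 485}

total = sum(cell_freqs.values())
probs = [f / total for f in cell_freqs.values()]
entropy = -sum(p * log2(p) for p in probs)
max_entropy = log2(len(cell_freqs))  # the equiprobable case

print(f"H = {entropy:.2f} bits, H_max = {max_entropy:.2f} bits")
```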

Démonette, a French derivational morpho-semantic network
Nabil Hathout | Fiammetta Namer

Démonette is a derivational morphological network created from information provided by two existing lexical resources, DériF and Morphonette. It features a formal architecture in which words are associated with semantic types and where morphological relations, labelled with concrete and abstract bi-oriented definitions, connect derived words with their base and indirectly related words with each other.

Evaluative prefixes in translation: From automatic alignment to semantic categorization
Marie-Aude Lefer | Natalia Grabar

This article aims to assess to what extent translation can shed light on the semantics of French evaluative prefixation by adopting Noël’s (2003) ‘translations as evidence for semantics’ approach. In French, evaluative prefixes can be classified along two dimensions (cf. Fradin and Montermini 2009): (1) a quantity dimension along a maximum/minimum axis, with the semantic values big and small, and (2) a quality dimension along a positive/negative axis, with the values good (excess; higher degree) and bad (lack; lower degree). In order to provide corpus-based insights into this semantic categorization, we analyze French evaluative prefixes alongside their English translation equivalents in a parallel corpus. To do so, we focus on periphrastic translations, as they are likely to ‘spell out’ the meaning of the French prefixes. The data used were extracted from the Europarl parallel corpus (Koehn 2005; Cartoni and Meyer 2012). Using a tailor-made program, we first aligned the French prefixed words with the corresponding word(s) in the English target sentences, before proceeding to the evaluation of the aligned sequences and the manual analysis of the bilingual data. Results confirm that translation data can be used as evidence for semantics in morphological research and help refine existing semantic descriptions of evaluative prefixes.
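
A rough sketch of such an alignment step is given below; the authors' tailor-made program is not reproduced here, so the prefix list and the stem-matching heuristic are simplified assumptions for illustration.

```python
# Naive alignment of French evaluative-prefixed tokens with candidate
# English equivalents; prefix inventory and matching are toy choices.
EVAL_PREFIXES = ("sur", "sous", "hyper", "hypo", "mini", "maxi", "super")

def strip_prefix(word):
    for p in EVAL_PREFIXES:
        if word.lower().startswith(p):
            return p, word[len(p):].lower()
    return None, None

def align(fr_tokens, en_tokens):
    """Pair each prefixed French token with an English word sharing
    the start of its stem (a crude stand-in for real alignment)."""
    pairs = []
    for fr in fr_tokens:
        prefix, stem = strip_prefix(fr)
        if prefix is None or len(stem) < 4:
            continue
        match = next((en for en in en_tokens if stem[:4] in en.lower()), None)
        pairs.append((fr, match))
    return pairs

print(align("Le coût est largement surestimé .".split(),
            "The cost is vastly overestimated .".split()))
# [('surestimé', 'overestimated')]
```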