Eva Maria Vecchi

Also published as: Eva Vecchi


2023

Node Placement in Argument Maps: Modeling Unidirectional Relations in High & Low-Resource Scenarios
Iman Jundi | Neele Falk | Eva Maria Vecchi | Gabriella Lapesa
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Argument maps structure discourse into a tree of nodes, where each node is an argument that supports or opposes its parent argument. This format is more comprehensible and less redundant than an unstructured one. Exploring such maps and maintaining their structure by placing new arguments under suitable parents becomes more challenging for users as maps grow to the sizes typical of online discussions. To support these users, we introduce the task of node placement: suggesting candidate nodes as parents for a new contribution. We establish an upper bound based on human performance and conduct experiments with models of various sizes and training strategies. We experiment with a selection of maps from Kialo, drawn from a heterogeneous set of domains. Based on an annotation study, we highlight the ambiguity of the task, which makes it challenging for both humans and models. We examine the unidirectional relation between tree nodes and show that encoding a node into different embeddings for the parent and child cases improves performance. We further show the few-shot effectiveness of our approach.

Mining, Assessing, and Improving Arguments in NLP and the Social Sciences
Gabriella Lapesa | Eva Maria Vecchi | Serena Villata | Henning Wachsmuth
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts

Computational argumentation is an interdisciplinary research field, connecting Natural Language Processing (NLP) to other disciplines such as the social sciences. This tutorial will focus on a task that has recently moved to the center of attention in the community: argument quality assessment, that is, what makes an argument good or bad? We structure the tutorial along three main coordinates: (1) the notions of argument quality across disciplines (how do we recognize good and bad arguments?), (2) the modeling of subjectivity (who argues to whom; what are their beliefs?), and (3) the generation of improved arguments (what makes an argument better?). The tutorial highlights interdisciplinary aspects of the field, ranging from the collaboration between theory and practice (e.g., in NLP and the social sciences), to working with different types of linguistic structures (e.g., social media versus parliamentary texts), to facing the ethical issues involved (e.g., how to build applications for the social good). A key feature of this tutorial is its interactive nature: We will involve the participants in two annotation studies on the assessment and the improvement of quality, and we will encourage them to reflect on the challenges and potential of these tasks.

2021

Predicting Moderation of Deliberative Arguments: Is Argument Quality the Key?
Neele Falk | Iman Jundi | Eva Maria Vecchi | Gabriella Lapesa
Proceedings of the 8th Workshop on Argument Mining

Human moderation is commonly employed in deliberative contexts (argumentation and discussion targeting a shared decision on an issue relevant to a group, e.g., citizens arguing on how to employ a shared budget). As the scale of discussion grows in online settings, overall discussion quality risks dropping, and moderation becomes more important for helping participants have a cooperative and productive interaction. The scale also makes it more important to employ NLP methods for (semi-)automatic moderation, e.g., to prioritize when moderation is most needed. In this work, we make the first steps towards (semi-)automatic moderation by using state-of-the-art classification models to predict which posts require moderation, showing that while the task is undoubtedly difficult, performance is significantly above baseline. We further investigate whether argument quality is a key indicator of the need for moderation, showing that, surprisingly, high-quality arguments also trigger moderation. We make our code and data publicly available.

Towards Argument Mining for Social Good: A Survey
Eva Maria Vecchi | Neele Falk | Iman Jundi | Gabriella Lapesa
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

This survey builds an interdisciplinary picture of Argument Mining (AM), with a strong focus on its potential to address issues related to Social and Political Science. More specifically, we focus on AM challenges related to its applications to social media and in the multilingual domain, and then proceed to the widely debated notion of argument quality. We propose a novel definition of argument quality which is integrated with that of deliberative quality from the Social Science literature. Under our definition, the quality of a contribution needs to be assessed at multiple levels: the contribution itself, its preceding context, and the consequential effect on the development of the upcoming discourse. The latter has not received the attention it deserves within the community. We finally define an application of AM for Social Good: (semi-)automatic moderation, a highly integrative application which (a) represents a challenging testbed for the integrated notion of quality we advocate, (b) allows the empirical quantification of argument/deliberative quality to benefit from the developments in other NLP fields (e.g., hate speech detection, fact checking, debiasing), and (c) has clearly beneficial potential at the societal level thanks to its real-world application (even if extremely ambitious).

2016

Many speakers, many worlds: Interannotator variations in the quantification of feature norms
Aurélie Herbelot | Eva Maria Vecchi
Linguistic Issues in Language Technology, Volume 13, 2016

Quantification (see e.g. Peters and Westerståhl, 2006) is probably one of the most extensively studied phenomena in formal semantics. But because of the specific representation of meaning assumed by model-theoretic semantics (one where a true model of the world is a priori available), research in the area has primarily focused on one question: what is the relation of a quantifier to the truth value of a sentence? In contrast, relatively little has been said about the way the underlying model comes about, and its relation to individual speakers’ conceptual knowledge. In this paper, we make a first step in investigating how native speakers of English model relations between non-grounded sets, by observing how they quantify simple statements. We first give some motivation for our task, from both a theoretical linguistic and computational semantic point of view (§2). We then describe our annotation setup (§3) and follow on with an analysis of the produced dataset, conducting a quantitative evaluation which includes inter-annotator agreement for different classes of predicates (§4). We observe that there is significant agreement between speakers but also noticeable variations. We posit that in set-theoretic terms, there are as many worlds as there are speakers (§5), but the overwhelming use of underspecified quantification in ordinary language covers up the individual differences that might otherwise be observed.

SLEDDED: A Proposed Dataset of Event Descriptions for Evaluating Phrase Representations
Laura Rimell | Eva Maria Vecchi
Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP

2015

From distributional semantics to feature norms: grounding semantic models in human perceptual data
Luana Fagarasan | Eva Maria Vecchi | Stephen Clark
Proceedings of the 11th International Conference on Computational Semantics

Building a shared world: mapping distributional to model-theoretic semantic spaces
Aurélie Herbelot | Eva Maria Vecchi
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

2014

Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation
Dekai Wu | Marine Carpuat | Xavier Carreras | Eva Maria Vecchi
Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation

2013

Studying the Recursive Behaviour of Adjectival Modification with Compositional Distributional Semantics
Eva Maria Vecchi | Roberto Zamparelli | Marco Baroni
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Fish Transporters and Miracle Homes: How Compositional Distributional Semantics can Help NP Parsing
Angeliki Lazaridou | Eva Maria Vecchi | Marco Baroni
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

51st Annual Meeting of the Association for Computational Linguistics Proceedings of the Student Research Workshop
Anik Dey | Sebastian Krause | Ivelina Nikolova | Eva Vecchi | Steven Bethard | Preslav I. Nakov | Feiyu Xu
51st Annual Meeting of the Association for Computational Linguistics Proceedings of the Student Research Workshop

2012

First Order vs. Higher Order Modification in Distributional Semantics
Gemma Boleda | Eva Maria Vecchi | Miquel Cornudella | Louise McNally
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

(Linear) Maps of the Impossible: Capturing Semantic Anomalies in Distributional Space
Eva Maria Vecchi | Marco Baroni | Roberto Zamparelli
Proceedings of the Workshop on Distributional Semantics and Compositionality

2008

An Infrastructure, Tools and Methodology for Evaluation of Multicultural Name Matching Systems
Keith J. Miller | Mark Arehart | Catherine Ball | John Polk | Alan Rubenstein | Kenneth Samuel | Elizabeth Schroeder | Eva Vecchi | Chris Wolf
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper describes a Name Matching Evaluation Laboratory that is a joint effort across multiple projects. The lab houses our evaluation infrastructure as well as multiple name matching engines and customized analytical tools. Included is an explanation of the methodology used by the lab to carry out evaluations. This methodology is based on standard information retrieval evaluation, which requires a carefully constructed test data set. The paper describes how we created that test data set, including the “ground truth” used to score the systems’ performance. Descriptions and snapshots of the lab’s various tools are provided, as well as information on how the different tools are used throughout the evaluation process. By using this evaluation process, the lab has been able to identify strengths and weaknesses of different name matching engines. These findings have led the lab to an ongoing investigation into various techniques for combining results from multiple name matching engines to achieve optimal results, as well as into research on the more general problem of identity management and resolution.