María Leonor Pacheco

Also published as: Maria Leonor Pacheco


2024

pdf
Pauk at SemEval-2024 Task 4: A Neuro-Symbolic Method for Consistent Classification of Propaganda Techniques in Memes
Matt Pauk | Maria Leonor Pacheco
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)

Memes play a key role in most modern information campaigns, particularly propaganda campaigns. Identifying the persuasive techniques present in memes is an important step in developing systems to recognize and curtail propaganda. This work presents a framework to identify the persuasive techniques present in memes for the SemEval 2024 Task 4, according to a hierarchical taxonomy of propaganda techniques. The framework involves a knowledge distillation method, where the base model is a combination of DeBERTa and ResNet used to classify the text and image, and the teacher model consists of a group of weakly enforced logic rules that promote the hierarchy of persuasion techniques. The addition of the logic rule layer for knowledge distillation shows improvement in respecting the hierarchy of the taxonomy with a slight boost in performance.
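
A minimal sketch of the kind of rule-informed distillation the abstract describes, assuming a multi-label student that outputs logits over persuasion techniques and a hypothetical parent_of map encoding the taxonomy; the actual SemEval system and its rule set may differ.

# Hedged sketch: a logic-rule "teacher" that weakly enforces the taxonomy rule
# "child technique implies parent technique" on top of student predictions.
# `parent_of` (child index -> parent index) is an assumed encoding of the hierarchy.
import torch
import torch.nn.functional as F

def teacher_probs(student_logits, parent_of, strength=1.0):
    """Adjust student probabilities toward hierarchy consistency."""
    probs = torch.sigmoid(student_logits)      # multi-label probabilities
    adjusted = probs.clone()
    for child, parent in parent_of.items():
        # Weakly enforce p(parent) >= p(child) by raising the parent score.
        adjusted[:, parent] = torch.maximum(
            adjusted[:, parent], strength * adjusted[:, child]
        )
    return adjusted

def distillation_loss(student_logits, labels, parent_of, alpha=0.5):
    """Supervised BCE plus a term pulling the student toward the rule teacher."""
    with torch.no_grad():
        teacher = teacher_probs(student_logits, parent_of)
    bce = F.binary_cross_entropy_with_logits(student_logits, labels)
    distill = F.binary_cross_entropy_with_logits(student_logits, teacher)
    return alpha * bce + (1 - alpha) * distill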

pdf
Accelerating UMR Adoption: Neuro-Symbolic Conversion from AMR-to-UMR with Low Supervision
Claire Benet Post | Marie C. McGregor | Maria Leonor Pacheco | Alexis Palmer
Proceedings of the Fifth International Workshop on Designing Meaning Representations @ LREC-COLING 2024

Despite Uniform Meaning Representation’s (UMR) potential for cross-lingual semantics, limited annotated data has hindered its adoption. There are large datasets of English AMRs (Abstract Meaning Representations), but the process of converting AMR graphs to UMR graphs is non-trivial. In this paper we address a complex piece of that conversion process, namely cases where one AMR role can be mapped to multiple UMR roles through a non-deterministic process. We propose a neuro-symbolic method for role conversion, integrating animacy parsing and logic rules to guide a neural network, and minimizing human intervention. On test data, the model achieves promising accuracy, highlighting its potential to accelerate AMR-to-UMR conversion. Future work includes expanding animacy parsing, incorporating human feedback, and applying the method to broader aspects of conversion. This research demonstrates the benefits of combining symbolic and neural approaches for complex semantic tasks.
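
A minimal sketch of rule-guided role conversion in the spirit of the abstract, where logic rules over animacy restrict which UMR roles a neural scorer may choose for an ambiguous AMR role; the candidate table and the animacy rule below are illustrative assumptions, not the paper's actual conversion rules.

# Hedged sketch: pick a UMR role from neural scores, masked by a simple animacy rule.
CANDIDATES = {":ARG2": ["goal", "recipient", "instrument"]}

def convert_role(amr_role, neural_scores, animate):
    """Select among candidate UMR roles, filtering out rule-violating options."""
    candidates = CANDIDATES.get(amr_role, [amr_role])
    allowed = [
        r for r in candidates
        if not (r == "recipient" and not animate)   # recipients must be animate
    ] or candidates
    return max(allowed, key=lambda r: neural_scores.get(r, 0.0))

# Usage: convert_role(":ARG2", {"goal": 0.4, "recipient": 0.5}, animate=False) -> "goal"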

pdf
A First Step towards Measuring Interdisciplinary Engagement in Scientific Publications: A Case Study on NLP + CSS Research
Alexandria Leto | Shamik Roy | Alexander Hoyle | Daniel Acuna | Maria Leonor Pacheco
Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)

With the rise in the prevalence of cross-disciplinary research, there is a need to develop methods to characterize its practices. Current computational methods to evaluate interdisciplinary engagement—such as affiliation diversity, keywords, and citation patterns—are insufficient to model the degree of engagement between disciplines, as well as the way in which the complementary expertise of co-authors is harnessed. In this paper, we propose an automated framework to address some of these issues on a large scale. Our framework tracks interdisciplinary citations in scientific articles and models: 1) the section and position in which they appear, and 2) the argumentative role that they play in the writing. To showcase our framework, we perform a preliminary analysis of interdisciplinary engagement in published work at the intersection of natural language processing and computational social science in the last decade.
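
A minimal sketch of the kind of per-citation record the framework could produce, capturing the two signals the abstract models (where the citation appears and what argumentative role it plays); the field names and values are illustrative, not the paper's schema.

# Hedged sketch: one record per interdisciplinary citation, aggregated per paper.
from dataclasses import dataclass
from collections import Counter

@dataclass
class CitationMention:
    cited_field: str          # e.g. "CSS" cited from an NLP paper
    section: str              # e.g. "introduction", "method", "related work"
    relative_position: float  # 0.0 = start of paper, 1.0 = end
    argumentative_role: str   # e.g. "motivation", "method reuse", "comparison"

def engagement_profile(mentions):
    """Summarize where and how another discipline is cited within a paper."""
    return {
        "by_section": Counter(m.section for m in mentions),
        "by_role": Counter(m.argumentative_role for m in mentions),
    }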

pdf
Framing in the Presence of Supporting Data: A Case Study in U.S. Economic News
Alexandria Leto | Elliot Pickens | Coen Needell | David Rothschild | Maria Leonor Pacheco
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The mainstream media has much leeway in what it chooses to cover and how it covers it. These choices have real-world consequences on what people know and their subsequent behaviors. However, the lack of objective measures to evaluate editorial choices makes research in this area particularly difficult. In this paper, we argue that there are newsworthy topics where objective measures exist in the form of supporting data and propose a computational framework to analyze editorial choices in this setup. We focus on the economy because the reporting of economic indicators presents us with a relatively easy way to determine both the selection and framing of various publications. Their values provide a ground truth of how the economy is doing relative to how the publications choose to cover it. To do this, we define frame prediction as a set of interdependent tasks. At the article level, we learn to identify the reported stance towards the general state of the economy. Then, for every numerical quantity reported in the article, we learn to identify whether it corresponds to an economic indicator and whether it is being reported in a positive or negative way. To perform our analysis, we track six American publishers and each article that appeared in the top 10 slots of their landing page between 2015 and 2023.
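
A minimal sketch of the interdependent decision structure described above: one article-level stance decision plus, for every reported quantity, an indicator decision and a polarity decision. The classifier arguments are placeholders standing in for the paper's actual models.

# Hedged sketch of the two-level frame prediction output.
from dataclasses import dataclass
from typing import List

@dataclass
class QuantityFrame:
    text_span: str
    is_economic_indicator: bool
    polarity: str            # "positive" or "negative" framing, "n/a" otherwise

@dataclass
class ArticleFrame:
    stance: str              # stance toward the general state of the economy
    quantities: List[QuantityFrame]

def predict_article_frame(article_text, quantity_spans,
                          stance_clf, indicator_clf, polarity_clf):
    quantities = []
    for span in quantity_spans:
        if indicator_clf(span, article_text):
            quantities.append(
                QuantityFrame(span, True, polarity_clf(span, article_text)))
        else:
            quantities.append(QuantityFrame(span, False, "n/a"))
    return ArticleFrame(stance_clf(article_text), quantities)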

2023

pdf
On the Potential and Limitations of Few-Shot In-Context Learning to Generate Metamorphic Specifications for Tax Preparation Software
Dananjay Srinivas | Rohan Das | Saeid Tizpaz-Niari | Ashutosh Trivedi | Maria Leonor Pacheco
Proceedings of the Natural Legal Language Processing Workshop 2023

Due to the ever-increasing complexity of income tax laws in the United States, the number of US taxpayers filing their taxes using tax preparation software (henceforth, tax software) continues to increase. According to the U.S. Internal Revenue Service (IRS), in FY22, nearly 50% of taxpayers filed their individual income taxes using tax software. Given the legal consequences of incorrectly filing taxes for the taxpayer, ensuring the correctness of tax software is of paramount importance. Metamorphic testing has emerged as a leading solution to test and debug legal-critical tax software due to the absence of correctness requirements and trustworthy datasets. The key idea behind metamorphic testing is to express the properties of a system in terms of the relationship between one input and its slightly metamorphosed twinned input. Extracting metamorphic properties from IRS tax publications is a tedious and time-consuming process. In response, this paper formulates the task of generating metamorphic specifications as a translation task from properties extracted from tax documents, expressed in natural language, to a contrastive first-order logic form. We perform a systematic analysis of the potential and limitations of in-context learning with Large Language Models (LLMs) for this task, and outline a research agenda towards automating the generation of metamorphic specifications for tax preparation software.
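
A minimal sketch of how a few-shot in-context prompt for this translation task could be assembled. The exemplar property and its logic form are invented for illustration (they are not taken from IRS publications), and the LLM call itself is left abstract.

# Hedged sketch: build a few-shot prompt that asks an LLM to translate a
# natural-language tax property into a contrastive first-order logic form.
FEW_SHOT = [(
    "If the filer's income increases and nothing else changes, "
    "the tax owed should not decrease.",
    "forall x, y. twin(x, y) & income(y) > income(x) -> tax(y) >= tax(x)"
)]

def build_prompt(property_text, exemplars=FEW_SHOT):
    parts = ["Translate each tax property into a contrastive first-order logic "
             "formula over a return x and its metamorphosed twin y.\n"]
    for nl, fol in exemplars:
        parts.append(f"Property: {nl}\nLogic: {fol}\n")
    parts.append(f"Property: {property_text}\nLogic:")
    return "\n".join(parts)

# The resulting string would then be sent to whichever LLM is under study.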

pdf
Interactive Concept Learning for Uncovering Latent Themes in Large Text Collections
Maria Leonor Pacheco | Tunazzina Islam | Lyle Ungar | Ming Yin | Dan Goldwasser
Findings of the Association for Computational Linguistics: ACL 2023

Experts across diverse disciplines are often interested in making sense of large text collections. Traditionally, this challenge is approached either by noisy unsupervised techniques such as topic models, or by following a manual theme discovery process. In this paper, we expand the definition of a theme to account for more than just a word distribution, and include generalized concepts deemed relevant by domain experts. Then, we propose an interactive framework that receives and encodes expert feedback at different levels of abstraction. Our framework strikes a balance between automation and manual coding, allowing experts to maintain control of their study while reducing the manual effort required.

2022

pdf
A Holistic Framework for Analyzing the COVID-19 Vaccine Debate
Maria Leonor Pacheco | Tunazzina Islam | Monal Mahajan | Andrey Shor | Ming Yin | Lyle Ungar | Dan Goldwasser
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

The COVID-19 pandemic has led to an infodemic of low-quality information, resulting in poor health decisions. Combating the outcomes of this infodemic is not only a question of identifying false claims, but also of reasoning about the decisions individuals make. In this work we propose a holistic analysis framework connecting stance and reason analysis with fine-grained entity-level moral sentiment analysis. We study how to model the dependencies between the different levels of analysis and incorporate human insights into the learning process. Experiments show that our framework provides reliable predictions even in low-supervision settings.

pdf bib
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop
Daphne Ippolito | Liunian Harold Li | Maria Leonor Pacheco | Danqi Chen | Nianwen Xue
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop

pdf
Hands-On Interactive Neuro-Symbolic NLP with DRaiL
Maria Leonor Pacheco | Shamik Roy | Dan Goldwasser
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

We recently introduced DRaiL, a declarative neural-symbolic modeling framework designed to support a wide variety of NLP scenarios. In this paper, we enhance DRaiL with an easy-to-use Python interface, equipped with methods to define, modify and augment DRaiL models interactively, as well as with methods to debug and visualize the predictions made. We demonstrate this interface with a challenging NLP task: predicting sentence- and entity-level moral sentiment in political tweets.
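
A hypothetical illustration of what declaring a soft neuro-symbolic rule could look like in a framework of this kind. The class and method names below are invented for this sketch and do not reflect DRaiL's real API; they only convey the declarative, weighted-rule style the abstract describes.

# Hedged sketch: a weighted first-order rule tying tweet-level and entity-level
# moral sentiment, written declaratively. NOT the actual DRaiL interface.
class Rule:
    def __init__(self, body, head, weight="learned"):
        self.body, self.head, self.weight = body, head, weight

    def __repr__(self):
        return f"{' & '.join(self.body)} => {self.head} [w={self.weight}]"

# "If a tweet expresses a moral foundation, entities it mentions tend to be
# framed under the same foundation" -- expressed as a soft (weighted) rule.
rule = Rule(
    body=["Tweet(t)", "Mentions(t, e)", "TweetFoundation(t, f)"],
    head="EntityFoundation(e, f)",
)
print(rule)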

pdf
Interactively Uncovering Latent Arguments in Social Media Platforms: A Case Study on the Covid-19 Vaccine Debate
Maria Leonor Pacheco | Tunazzina Islam | Lyle Ungar | Ming Yin | Dan Goldwasser
Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances)

Automated methods for analyzing public opinion have grown in popularity with the proliferation of social media. While supervised methods can be very good at classifying text, the dynamic nature of social media discourse results in a moving target for supervised learning. Meanwhile, traditional unsupervised techniques for extracting themes from textual repositories, such as topic models, can result in incorrect outputs that are unusable to domain experts. For this reason, a non-trivial amount of research on social media discourse still relies on manual coding techniques. In this paper, we present an interactive, human-in-the-loop framework that strikes a balance between unsupervised techniques and manual coding for extracting latent arguments from social media discussions. We use the COVID-19 vaccination debate as a case study, and show that our methodology can be used to obtain a more accurate, interpretable set of arguments when compared to traditional topic models. We do this at a relatively low manual cost, as 3 experts take approximately 2 hours to code close to 100k tweets.

pdf
Tackling Fake News Detection by Continually Improving Social Context Representations using Graph Neural Networks
Nikhil Mehta | Maria Leonor Pacheco | Dan Goldwasser
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Easy access, variety of content, and fast widespread interactions are some of the reasons making social media increasingly popular. However, this rise has also enabled the propagation of fake news, text published by news sources with an intent to spread misinformation and sway beliefs. Detecting it is an important and challenging problem to prevent large scale misinformation and maintain a healthy society. We view fake news detection as reasoning over the relations between sources, articles they publish, and engaging users on social media in a graph framework. After embedding this information, we formulate inference operators which augment the graph edges by revealing unobserved interactions between its elements, such as similarity between documents’ contents and users’ engagement patterns. Our experiments over two challenging fake news detection tasks show that using inference operators leads to a better understanding of the social media framework enabling fake news spread, resulting in improved performance.
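
A minimal sketch of one inference operator of the kind described above: adding edges between documents whose content embeddings are similar. The cosine-similarity criterion and the 0.8 threshold are assumptions for illustration, not the paper's exact operators.

# Hedged sketch: augment a social-context graph with content-similarity edges.
import numpy as np
import networkx as nx

def add_similarity_edges(graph, doc_embeddings, threshold=0.8):
    """doc_embeddings: dict mapping document node id -> unit-normalized vector."""
    nodes = list(doc_embeddings)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            sim = float(np.dot(doc_embeddings[u], doc_embeddings[v]))
            if sim >= threshold:
                graph.add_edge(u, v, relation="content_similarity", weight=sim)
    return graph

# Usage: g = add_similarity_edges(nx.Graph(), {"d1": v1, "d2": v2})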

2021

pdf
Randomized Deep Structured Prediction for Discourse-Level Processing
Manuel Widmoser | Maria Leonor Pacheco | Jean Honorio | Dan Goldwasser
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Expressive text encoders such as RNNs and Transformer Networks have been at the center of NLP models in recent work. Most of the effort has focused on sentence-level tasks, capturing the dependencies between words in a single sentence, or pairs of sentences. However, certain tasks, such as argumentation mining, require accounting for longer texts and complicated structural dependencies between them. Deep structured prediction is a general framework to combine the complementary strengths of expressive neural encoders and structured inference for highly structured domains. Nevertheless, when the need arises to go beyond sentences, most work relies on combining the output scores of independently trained classifiers. One of the main reasons for this is that constrained inference comes at a high computational cost. In this paper, we explore the use of randomized inference to alleviate this concern and show that we can efficiently leverage deep structured prediction and expressive neural encoders for a set of tasks involving complicated argumentative structures.
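
A minimal sketch of the general idea behind randomized inference: rather than jointly decoding all output variables at once, repeatedly re-decode only a random subset while holding the rest fixed. This illustrates the idea under stated assumptions and is not the paper's exact procedure.

# Hedged sketch: randomized block updates over a structured output assignment.
import random

def randomized_inference(variables, assignment, local_decode, k=5, rounds=10):
    """variables: list of output variable ids; assignment: dict var -> label;
    local_decode(var, assignment): best label for `var` given the other variables."""
    for _ in range(rounds):
        for var in random.sample(variables, min(k, len(variables))):
            assignment[var] = local_decode(var, assignment)
    return assignment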

pdf
Modeling Content and Context with Deep Relational Learning
Maria Leonor Pacheco | Dan Goldwasser
Transactions of the Association for Computational Linguistics, Volume 9

Building models for realistic natural language tasks requires dealing with long texts and accounting for complicated structural dependencies. Neural-symbolic representations have emerged as a way to combine the reasoning capabilities of symbolic methods, with the expressiveness of neural networks. However, most of the existing frameworks for combining neural and symbolic representations have been designed for classic relational learning tasks that work over a universe of symbolic entities and relations. In this paper, we present DRaiL, an open-source declarative framework for specifying deep relational models, designed to support a variety of NLP scenarios. Our framework supports easy integration with expressive language encoders, and provides an interface to study the interactions between representation, inference and learning.

pdf
Modeling Human Mental States with an Entity-based Narrative Graph
I-Ta Lee | Maria Leonor Pacheco | Dan Goldwasser
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Understanding narrative text requires capturing characters’ motivations, goals, and mental states. This paper proposes an Entity-based Narrative Graph (ENG) to model the internal states of characters in a story. We explicitly model entities, their interactions and the context in which they appear, and learn rich representations for them. We experiment with different task-adaptive pre-training objectives, in-domain training, and symbolic inference to capture dependencies between different decisions in the output space. We evaluate our model on two narrative understanding tasks: predicting character mental states, and desire fulfillment, and conduct a qualitative analysis.

pdf
Identifying Morality Frames in Political Tweets using Relational Learning
Shamik Roy | Maria Leonor Pacheco | Dan Goldwasser
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Extracting moral sentiment from text is a vital component in understanding public opinion, social movements, and policy decisions. Moral Foundations Theory identifies five moral foundations, each associated with a positive and negative polarity. However, moral sentiment is often motivated by its targets, which can correspond to individuals or collective entities. In this paper, we introduce morality frames, a representation framework for organizing moral attitudes directed at different entities, and build a novel, high-quality annotated dataset of tweets written by US politicians. Then, we propose a relational learning model to jointly predict moral attitudes towards entities and moral foundations. We conduct qualitative and quantitative evaluations, showing that moral sentiment towards entities differs substantially across political ideologies.

2020

pdf
Weakly-Supervised Modeling of Contextualized Event Embedding for Discourse Relations
I-Ta Lee | Maria Leonor Pacheco | Dan Goldwasser
Findings of the Association for Computational Linguistics: EMNLP 2020

Representing, and reasoning over, long narratives requires models that can deal with complex event structures connected through multiple relationship types. This paper proposes to represent this type of information as a narrative graph and learn contextualized event representations over it using a relational graph neural network model. We train our model to capture event relations, derived from the Penn Discourse TreeBank, on a large corpus, and show that our multi-relational contextualized event representation can improve performance when learning script knowledge without direct supervision and provide a better representation for the implicit discourse sense classification task.
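
A minimal sketch of a multi-relational graph encoder over event nodes, in the spirit of the relational GNN described above, using PyTorch Geometric's RGCNConv; the hidden sizes and the number of discourse relation types are placeholders.

# Hedged sketch: encode event nodes over typed discourse-relation edges.
import torch
from torch_geometric.nn import RGCNConv

class NarrativeGraphEncoder(torch.nn.Module):
    def __init__(self, in_dim=768, hidden_dim=256, num_relations=4):
        super().__init__()
        self.conv1 = RGCNConv(in_dim, hidden_dim, num_relations)
        self.conv2 = RGCNConv(hidden_dim, hidden_dim, num_relations)

    def forward(self, x, edge_index, edge_type):
        # x: initial event embeddings; edge_type: discourse relation id per edge.
        h = torch.relu(self.conv1(x, edge_index, edge_type))
        return self.conv2(h, edge_index, edge_type)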

pdf
Identifying Collaborative Conversations using Latent Discourse Behaviors
Ayush Jain | Maria Leonor Pacheco | Steven Lancette | Mahak Goindani | Dan Goldwasser
Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue

In this work, we study collaborative online conversations. Such conversations are rich in content, constructive and motivated by a shared goal. Automatically identifying such conversations requires modeling complex discourse behaviors, which characterize the flow of information, sentiment and community structure within discussions. To help capture these behaviors, we define a hybrid relational model in which relevant discourse behaviors are formulated as discrete latent variables and scored using neural networks. These variables provide the information needed for predicting the overall collaborative characterization of the entire conversational thread. We show that adding inductive bias in the form of latent variables results in performance improvement, while providing a natural way to explain the decision.

2017

pdf
PurdueNLP at SemEval-2017 Task 1: Predicting Semantic Textual Similarity with Paraphrase and Event Embeddings
I-Ta Lee | Mahak Goindani | Chang Li | Di Jin | Kristen Marie Johnson | Xiao Zhang | Maria Leonor Pacheco | Dan Goldwasser
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)

This paper describes our proposed solution for SemEval 2017 Task 1: Semantic Textual Similarity (Daniel Cer and Specia, 2017). The task aims at measuring the degree of equivalence between sentences given in English. Performance is evaluated by computing Pearson correlation scores between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn paraphrase and event embeddings that incorporate paraphrasing characteristics and sentence structures into our system. The regression model combines these embeddings to make the final predictions. Experimental results show that our system achieves a Pearson correlation score of 0.8 on this task.
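
A minimal sketch of the described setup: features from the two embedding subsystems are concatenated, fed to a regression model, and the predictions are scored with Pearson correlation. Ridge regression is an assumption here; the paper's regressor may differ.

# Hedged sketch: combine two feature sets with a regressor and evaluate with Pearson r.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

def train_and_evaluate(paraphrase_feats, event_feats, gold_scores,
                       test_paraphrase, test_event, test_gold):
    X_train = np.hstack([paraphrase_feats, event_feats])
    X_test = np.hstack([test_paraphrase, test_event])
    model = Ridge(alpha=1.0).fit(X_train, gold_scores)
    predictions = model.predict(X_test)
    correlation, _ = pearsonr(predictions, test_gold)
    return correlation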

2016

pdf
Introducing DRAIL – a Step Towards Declarative Deep Relational Learning
Xiao Zhang | Maria Leonor Pacheco | Chang Li | Dan Goldwasser
Proceedings of the Workshop on Structured Prediction for NLP

pdf
Adapting Event Embedding for Implicit Discourse Relation Recognition
Maria Leonor Pacheco | I-Ta Lee | Xiao Zhang | Abdullah Khan Zehady | Pranjal Daga | Di Jin | Ayush Parolia | Dan Goldwasser
Proceedings of the CoNLL-16 shared task