2024
The Grid: A semi-automated tool to support expert-driven modeling
Allegra A. Beal Cohen | Maria Alexeeva | Keith Alcock | Mihai Surdeanu
Proceedings of the 1st Workshop on NLP for Science (NLP4Science)
When building models of human behavior, we often struggle to find data that capture important factors at the right level of granularity. In these cases, we must rely on expert knowledge to build models. To help partially automate the organization of expert knowledge for modeling, we combine natural language processing (NLP) and machine learning (ML) methods in a tool called the Grid. The Grid helps users organize textual knowledge into clickable cells along two dimensions using iterative, collaborative clustering. We conduct a user study to explore participants’ reactions to the Grid, as well as to investigate whether its clustering feature helps participants organize a corpus of expert knowledge. We find that participants using the Grid’s clustering feature appeared to work more efficiently than those without it, but their written feedback about the clustering was critical. We conclude that the general design of the Grid was positively received and that some of the user challenges can likely be mitigated through the use of LLMs.
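A minimal sketch of the kind of clustering step described above, assuming sentences are embedded and grouped with an off-the-shelf method; the TF-IDF features, k-means algorithm, and example sentences are illustrative placeholders, not the Grid's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy corpus of expert-knowledge sentences (placeholders for illustration).
sentences = [
    "Farmers plant maize after the first rains.",
    "Households sell livestock to cover school fees.",
    "Planting dates shift when the rains arrive late.",
    "Livestock sales increase before the school term starts.",
]

# Embed sentences with TF-IDF as a simple stand-in for richer text embeddings.
vectors = TfidfVectorizer().fit_transform(sentences)

# Group the sentences; cluster labels could then populate one dimension
# of a grid of clickable cells.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, sentence in zip(labels, sentences):
    print(label, sentence)
```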
Retrieval Augmented Generation of Subjective Explanations for Socioeconomic Scenarios
Razvan-Gabriel Dumitru | Maria Alexeeva | Keith Alcock | Nargiza Ludgate | Cheonkam Jeong | Zara Fatima Abdurahaman | Prateek Puri | Brian Kirchhoff | Santadarshan Sadhu | Mihai Surdeanu
Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)
We introduce a novel retrieval augmented generation approach that explicitly models causality and subjectivity. We use it to generate explanations for socioeconomic scenarios that capture the beliefs of local populations. Through intrinsic and extrinsic evaluation, we show that our explanations, contextualized using causal and subjective information retrieved from local news sources, are rated higher than those produced by other large language models, both in terms of mimicking the real population and in terms of explanation quality. We also discuss the role subjectivity plays in the evaluation of this natural language generation task.
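As a rough illustration of the retrieval step in such a pipeline (the corpus, similarity scoring, and prompt wording below are assumptions for the example, not the system described in the paper), one could rank local-news passages against a scenario and fold the top hits into a generation prompt:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical local-news passages carrying subjective or causal statements.
news_passages = [
    "Residents believe new irrigation canals mainly benefit large farms.",
    "Local traders say fuel prices drive up the cost of transporting produce.",
    "Many households report that remittances fund most school expenses.",
]
scenario = "Why might smallholder farmers resist a new irrigation project?"

vectorizer = TfidfVectorizer()
corpus_vectors = vectorizer.fit_transform(news_passages)
scenario_vector = vectorizer.transform([scenario])

# Rank passages by cosine similarity to the scenario and keep the top two.
scores = cosine_similarity(scenario_vector, corpus_vectors)[0]
top_context = [news_passages[i] for i in scores.argsort()[::-1][:2]]

# Assemble a prompt that grounds the explanation in the retrieved beliefs.
prompt = (
    "Using the local beliefs below, explain the scenario from the "
    "population's point of view.\n"
    "Beliefs:\n- " + "\n- ".join(top_context) +
    f"\nScenario: {scenario}"
)
print(prompt)  # this prompt would then be passed to a generator LLM
```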
2023
Annotating and Training for Population Subjective Views
Maria Alexeeva | Caroline Hyland | Keith Alcock | Allegra A. Beal Cohen | Hubert Kanyamahanga | Isaac Kobby Anni | Mihai Surdeanu
Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis
In this paper, we present a dataset of subjective views (beliefs and attitudes) held by individuals or groups. We analyze the usefulness of the dataset by training a neural classifier that identifies belief-containing sentences that are relevant for our broader project of interest—scientific modeling of complex systems. We also explore and discuss difficulties related to annotation of subjective views and propose ways of addressing them.
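The paper trains a neural classifier; as a lightweight stand-in, the same sentence-level binary task (belief-containing or not) can be illustrated with a TF-IDF and logistic regression pipeline on a toy annotated sample:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated sentences: 1 = expresses a subjective view, 0 = does not.
sentences = [
    "Farmers believe the new seeds fail in dry years.",
    "Many residents think the dam will flood their land.",
    "The dam is 40 meters tall.",
    "Maize was planted on 200 hectares last season.",
]
labels = [1, 1, 0, 0]

# Train a simple classifier as a proxy for the neural model in the paper.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(sentences, labels)

print(classifier.predict(["Villagers feel the program ignores their needs."]))
```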
2022
Combining Extraction and Generation for Constructing Belief-Consequence Causal Links
Maria Alexeeva | Allegra A. Beal Cohen | Mihai Surdeanu
Proceedings of the Third Workshop on Insights from Negative Results in NLP
In this paper, we introduce and justify a new task, causal link extraction based on beliefs, and perform a qualitative analysis of the ability of a large language model (InstructGPT-3) to generate implicit consequences of beliefs. Because the model-generated consequences are promising but not consistent, we propose directions for future work, including data collection, explicit consequence extraction using rule-based and language-model-based approaches, and using explicitly stated consequences of beliefs to fine-tune or prompt the language model to produce outputs suitable for the task.
2020
MathAlign: Linking Formula Identifiers to their Contextual Natural Language Descriptions
Maria Alexeeva | Rebecca Sharp | Marco A. Valenzuela-Escárcega | Jennifer Kadowaki | Adarsh Pyarelal | Clayton Morrison
Proceedings of the Twelfth Language Resources and Evaluation Conference
Extending machine reading approaches to extract mathematical concepts and their descriptions is useful for a variety of tasks, ranging from mathematical information retrieval to increasing accessibility of scientific documents for the visually impaired. This entails segmenting mathematical formulae into identifiers and linking them to their natural language descriptions. We propose a rule-based approach for this task, which extracts LaTeX representations of formula identifiers and links them to their in-text descriptions, given only the original PDF and the location of the formula of interest. We also present a novel evaluation dataset for this task, as well as the tool used to create it.
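A toy sketch of the identifier-description linking task follows; the regular expressions and example formula are illustrative assumptions, whereas the actual system is a richer rule-based pipeline operating over PDFs:

```python
import re

# Example formula (in LaTeX-like plain text) and its surrounding prose.
formula = r"E = m c^2"
context = ("Here E is the rest energy, m denotes the mass, "
           "and c is the speed of light.")

# Identifiers: single Latin letters appearing in the formula.
identifiers = sorted(set(re.findall(r"[A-Za-z]", formula)))

# Link each identifier to an in-text description of the form
# "<identifier> is/denotes <description>", stopping at the next comma/period.
links = {}
for ident in identifiers:
    match = re.search(
        rf"\b{re.escape(ident)}\b (?:is|denotes) (the [^,.]+)", context)
    if match:
        links[ident] = match.group(1)

print(links)  # {'E': 'the rest energy', 'c': 'the speed of light', 'm': 'the mass'}
```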
2019
Lightly-supervised Representation Learning with Global Interpretability
Andrew Zupon | Maria Alexeeva | Marco Valenzuela-Escárcega | Ajay Nagesh | Mihai Surdeanu
Proceedings of the Third Workshop on Structured Prediction for NLP
We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them from a few example entities per category. We demonstrate that this representation-based approach outperforms three other state-of-the-art bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes. Additionally, using these embeddings, our approach outputs a globally-interpretable model consisting of a decision list, by ranking patterns based on their proximity to the average entity embedding in a given class. We show that this interpretable model performs close to our complete bootstrapping model, proving that representation learning can be used to produce interpretable models with small loss in performance. This decision list can be edited by human experts to mitigate some of that loss and in some cases outperform the original model.
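A minimal sketch of the decision-list idea described above, with random placeholder embeddings standing in for the jointly learned entity and pattern embeddings: patterns are ranked by cosine similarity to the average entity embedding of a category.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder embeddings: 5 seed entities of one category and a few patterns.
entity_embeddings = rng.normal(size=(5, 50))
pattern_embeddings = {
    "X was born in": rng.normal(size=50),
    "X , the president of": rng.normal(size=50),
    "shares of X rose": rng.normal(size=50),
}

# Class centroid: the average embedding of the category's seed entities.
centroid = entity_embeddings.mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Higher similarity to the centroid places a pattern earlier in the decision list.
decision_list = sorted(
    pattern_embeddings,
    key=lambda p: cosine(pattern_embeddings[p], centroid),
    reverse=True,
)
print(decision_list)
```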