2023
Annotating and Training for Population Subjective Views
Maria Alexeeva | Caroline Hyland | Keith Alcock | Allegra A. Beal Cohen | Hubert Kanyamahanga | Isaac Kobby Anni | Mihai Surdeanu
Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis
In this paper, we present a dataset of subjective views (beliefs and attitudes) held by individuals or groups. We analyze the usefulness of the dataset by training a neural classifier that identifies belief-containing sentences that are relevant for our broader project of interest—scientific modeling of complex systems. We also explore and discuss difficulties related to annotation of subjective views and propose ways of addressing them.
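A minimal sketch of how such a sentence-level belief classifier could be trained, assuming binary belief/non-belief labels; the encoder choice, hyperparameters, and toy examples are placeholders, not the paper's actual setup:

```python
# Minimal sketch of a sentence-level belief classifier (not the authors'
# exact setup): fine-tune a pretrained encoder on binary belief labels.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-uncased"  # placeholder encoder choice

# Hypothetical toy data: (sentence, label) with 1 = contains a subjective view.
train_data = [
    ("Farmers believe early planting reduces drought risk.", 1),
    ("The field was planted on May 3rd.", 0),
]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for sentence, label in train_data:
        batch = tokenizer(sentence, return_tensors="pt", truncation=True)
        loss = model(**batch, labels=torch.tensor([label])).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```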
2022
Combining Extraction and Generation for Constructing Belief-Consequence Causal Links
Maria Alexeeva | Allegra A. Beal Cohen | Mihai Surdeanu
Proceedings of the Third Workshop on Insights from Negative Results in NLP
In this paper, we introduce and justify a new task, causal link extraction based on beliefs, and qualitatively analyze the ability of a large language model, InstructGPT-3, to generate implicit consequences of beliefs. Because the model-generated consequences are promising but not consistent, we propose directions for future work, including data collection, explicit consequence extraction using rule-based and language modeling-based approaches, and using explicitly stated consequences of beliefs to fine-tune or prompt the language model to produce outputs suitable for the task.
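One way to elicit implicit consequences from an instruction-tuned model is a prompt template like the sketch below; the wording is illustrative, not the prompt used in the paper:

```python
# Sketch of a prompt template for eliciting implicit consequences of a
# belief from an instruction-tuned LM; the phrasing is invented here.
def consequence_prompt(belief: str) -> str:
    return (
        "A person holds the following belief:\n"
        f'"{belief}"\n'
        "List two likely consequences of acting on this belief, "
        "one per line."
    )

print(consequence_prompt("Chemical fertilizer exhausts the soil."))
# The returned string would be sent to a model such as InstructGPT-3;
# the generated lines are candidate consequences for the causal link.
```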
2020
MathAlign: Linking Formula Identifiers to their Contextual Natural Language Descriptions
Maria Alexeeva | Rebecca Sharp | Marco A. Valenzuela-Escárcega | Jennifer Kadowaki | Adarsh Pyarelal | Clayton Morrison
Proceedings of the Twelfth Language Resources and Evaluation Conference
Extending machine reading approaches to extract mathematical concepts and their descriptions is useful for a variety of tasks, ranging from mathematical information retrieval to increasing accessibility of scientific documents for the visually impaired. This entails segmenting mathematical formulae into identifiers and linking them to their natural language descriptions. We propose a rule-based approach for this task, which extracts LaTeX representations of formula identifiers and links them to their in-text descriptions, given only the original PDF and the location of the formula of interest. We also present a novel evaluation dataset for this task, as well as the tool used to create it.
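A toy illustration of the rule-based idea (far simpler than MathAlign itself): link an identifier to its in-text description via an appositive "where <id> is <description>" pattern. The pattern and example sentence below are invented for illustration:

```python
# Toy rule: capture "where <identifier> is/denotes/represents <description>".
import re

WHERE_PATTERN = re.compile(
    r"where\s+(?P<id>\\?\w+)\s+(?:is|denotes|represents)\s+(?P<desc>[^,.]+)"
)

sentence = r"E = mc^2, where m is the rest mass of the object."
match = WHERE_PATTERN.search(sentence)
if match:
    print(match.group("id"), "->", match.group("desc").strip())
    # m -> the rest mass of the object
```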
2019
Lightly-supervised Representation Learning with Global Interpretability
Andrew Zupon | Maria Alexeeva | Marco Valenzuela-Escárcega | Ajay Nagesh | Mihai Surdeanu
Proceedings of the Third Workshop on Structured Prediction for NLP
We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them from a few example entities per category. We demonstrate that this representation-based approach outperforms three other state-of-the-art bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes. Additionally, using these embeddings, our approach outputs a globally-interpretable model consisting of a decision list, by ranking patterns based on their proximity to the average entity embedding in a given class. We show that this interpretable model performs close to our complete bootstrapping model, proving that representation learning can be used to produce interpretable models with small loss in performance. This decision list can be edited by human experts to mitigate some of that loss and in some cases outperform the original model.
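A sketch of the decision-list construction step described above: rank extraction patterns by cosine similarity between their learned embeddings and the average embedding of a category's entities. The embeddings and patterns below are random placeholders standing in for the learned ones:

```python
# Rank patterns by proximity to the class centroid of entity embeddings.
import numpy as np

rng = np.random.default_rng(0)
entity_embs = rng.normal(size=(5, 50))   # embeddings of known entities in one class
pattern_embs = {                         # hypothetical learned pattern embeddings
    "@ENTITY , the president": rng.normal(size=50),
    "@ENTITY said on Tuesday": rng.normal(size=50),
}

centroid = entity_embs.mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Patterns closest to the class centroid head the decision list.
decision_list = sorted(
    pattern_embs.items(),
    key=lambda kv: cosine(kv[1], centroid),
    reverse=True,
)
for pattern, emb in decision_list:
    print(f"{cosine(emb, centroid):+.3f}  {pattern}")
```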