Chunwei Liu
2025
Variable Extraction for Model Recovery in Scientific Literature
Chunwei Liu | Enrique Noriega-Atala | Adarsh Pyarelal | Clayton T Morrison | Mike Cafarella
Proceedings of the 1st Workshop on AI and Scientific Discovery: Directions and Opportunities
Due to the increasing productivity of the scientific community, it is difficult to keep up with the literature without the assistance of AI methods. This paper evaluates various methods for extracting mathematical model variables from epidemiological studies, such as "infection rate (α)," "recovery rate (γ)," and "mortality rate (μ)." Variable extraction appears to be a basic task, but it plays a pivotal role in recovering models from scientific literature. Once extracted, these variables can be used for automatic mathematical modeling, simulation, and replication of published results. We also introduce a benchmark dataset comprising manually-annotated variable descriptions and variable values extracted from scientific papers. Our analysis shows that LLM-based solutions perform the best. Despite the incremental benefits of combining rule-based extraction outputs with LLMs, the leap in performance attributable to the transfer-learning and instruction-tuning capabilities of LLMs themselves is far more significant. This investigation demonstrates the potential of LLMs to enhance automatic comprehension of scientific artifacts and to enable automatic model recovery and simulation.
2024
MDCR: A Dataset for Multi-Document Conditional Reasoning
Peter Baile Chen | Yi Zhang | Chunwei Liu | Sejal Gupta | Yoon Kim | Mike Cafarella
Findings of the Association for Computational Linguistics: EMNLP 2024
The same real-life question posed to different individuals may lead to different answers based on their unique situations. For instance, whether a student is eligible for a scholarship depends on eligibility conditions, such as the major or degree required. ConditionalQA was proposed to evaluate models' capability of reading a document and answering eligibility questions while considering *unmentioned* conditions. However, it is limited to questions on single documents, neglecting harder cases that may require *cross-document reasoning* and *optimization*, for example, "What is the maximum number of scholarships attainable?" Such questions over multiple documents are not only more challenging because there is more context to understand, but also because the model has to (1) explore all possible combinations of unmentioned conditions and (2) understand the relationships between conditions across documents in order to reason about the optimal outcome. To evaluate models' capability of answering such questions, we propose a new dataset, MDCR, which reflects real-world challenges and serves as a new test bed for complex conditional reasoning that requires optimization. We evaluate this dataset using the most recent LLMs and demonstrate their limitations in solving this task. We believe this dataset will facilitate future research in answering optimization questions with unknown conditions.