Tianmai M. Zhang
2025
UW-BioNLP at ChemoTimelines 2025: Thinking, Fine-Tuning, and Dictionary-Enhanced LLM Systems for Chemotherapy Timeline Extraction
Tianmai M. Zhang | Zhaoyi Sun | Sihang Zeng | Chenxi Li | Neil F. Abernethy | Barbara D. Lam | Fei Xia | Meliha Yetisgen
Proceedings of the 7th Clinical Natural Language Processing Workshop
The ChemoTimelines shared task benchmarks methods for constructing timelines of systemic anticancer treatment from the electronic health records of cancer patients. This paper describes our methods, results, and findings for subtask 2: generating patient chemotherapy timelines from raw clinical notes. We evaluated strategies involving chain-of-thought thinking, supervised fine-tuning, direct preference optimization, and dictionary-based lookup to improve timeline extraction. All of our approaches followed a two-step workflow, wherein an LLM first extracted chemotherapy events from individual clinical notes, and an algorithm then normalized and aggregated the events into patient-level timelines. The methods differed in how the underlying LLM was used and trained. Multiple approaches yielded competitive performance on the test-set leaderboard, with a fine-tuned Qwen3-14B achieving the best official score of 0.678. Our results and analyses may provide useful insights for future attempts at this task, as well as for the design of similar tasks.
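To make the two-step workflow concrete, below is a minimal, hypothetical Python sketch. The event schema, the helper names (`extract_events_with_llm`, `build_timeline`), and the stubbed LLM call are illustrative assumptions for exposition, not the paper's implementation.

```python
# Hypothetical sketch of the two-step workflow described in the abstract:
# step 1, an LLM extracts chemotherapy events from each note; step 2, a
# deterministic pass normalizes, de-duplicates, and aggregates them into
# one patient-level timeline. All names here are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen -> hashable, so events de-duplicate in a set
class Event:
    chemo: str     # normalized drug or regimen mention
    relation: str  # temporal relation, e.g., "begins-on", "ends-on", "contains"
    date: str      # normalized ISO date, e.g., "2021-03-15"


def extract_events_with_llm(note_text: str) -> list[Event]:
    """Step 1 (assumed): prompt an LLM to emit (chemo, relation, date)
    tuples for a single clinical note. Stubbed here; a real system would
    call a model and parse its structured output."""
    return []  # placeholder for the LLM call plus output parsing


def build_timeline(notes: list[str]) -> list[Event]:
    """Step 2 (assumed): pool per-note events, drop duplicates, and sort
    chronologically into a single patient-level timeline."""
    events: set[Event] = set()
    for note in notes:
        events.update(extract_events_with_llm(note))
    return sorted(events, key=lambda e: (e.date, e.chemo, e.relation))
```

Freezing the dataclass is one simple way to let set semantics handle the aggregation step; the actual normalization rules (date resolution, regimen mapping) would live inside the two helpers.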
2024
BLADE: Benchmarking Language Model Agents for Data-Driven Science
Ken Gu | Ruoxi Shang | Ruien Jiang | Keying Kuang | Richard-John Lin | Donghe Lyu | Yue Mao | Youran Pan | Teng Wu | Jiaqian Yu | Yikun Zhang | Tianmai M. Zhang | Lanyi Zhu | Mike A Merrill | Jeffrey Heer | Tim Althoff
Findings of the Association for Computational Linguistics: EMNLP 2024
Data-driven scientific discovery requires the iterative integration of scientific domain knowledge, statistical expertise, and an understanding of data semantics to make nuanced analytical decisions, e.g., about which variables, transformations, and statistical models to consider. LM-based agents equipped with planning, memory, and code execution capabilities have the potential to support data-driven science. However, evaluating agents on such open-ended tasks is challenging due to multiple valid approaches, partially correct steps, and different ways to express the same decisions. To address these challenges, we present BLADE, a benchmark to automatically evaluate agents’ multifaceted approaches to open-ended research questions. BLADE consists of 12 datasets and research questions drawn from existing scientific literature, with ground truth collected from independent analyses by expert data scientists and researchers. To automatically evaluate agent responses, we developed corresponding computational methods to match different representations of analyses to this ground truth. Though language models possess considerable world knowledge, our evaluation shows that they are often limited to basic analyses. However, agents capable of interacting with the underlying data demonstrate improved, but still non-optimal, diversity in their analytical decision making. Our work enables the evaluation of agents for data-driven science and provides researchers deeper insights into agents’ analysis approaches.
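The abstract's mention of matching different representations of analyses to expert ground truth suggests a set-overlap comparison over discrete analysis decisions. The sketch below illustrates that idea only; the facet names and the per-facet overlap score are assumptions, not BLADE's actual matching procedure.

```python
# Hypothetical illustration of matching an agent's analysis decisions
# against pooled expert ground truth, facet by facet. The facets
# ("variables", "models") and the precision-style score are illustrative
# assumptions, not BLADE's evaluation code.
def match_decisions(agent: dict[str, set[str]],
                    ground_truth: dict[str, set[str]]) -> dict[str, float]:
    """For each decision facet, return the fraction of the agent's
    choices that also appear in the expert analyses."""
    scores = {}
    for facet, truth in ground_truth.items():
        chosen = agent.get(facet, set())
        scores[facet] = len(chosen & truth) / len(chosen) if chosen else 0.0
    return scores


agent_analysis = {
    "variables": {"age", "income"},
    "models": {"linear_regression"},
}
expert_ground_truth = {
    "variables": {"age", "income", "education"},
    "models": {"linear_regression", "mixed_effects"},
}
print(match_decisions(agent_analysis, expert_ground_truth))
# {'variables': 1.0, 'models': 1.0}
```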