Serena Yeung-Levy
2026
PaperSearchQA: Learning to Search and Reason over Scientific Papers with RLVR
James Burgess | Jan N. Hansen | Duo Peng | Yuhui Zhang | Alejandro Lozano | Min Woo Sun | Emma Lundberg | Serena Yeung-Levy
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Search agents are language models (LMs) that reason and search knowledge bases (or the web) to answer questions; recent methods supervise only final-answer accuracy using reinforcement learning with verifiable rewards (RLVR). Most RLVR search agents tackle general-domain QA, which limits their relevance to technical AI systems in science, engineering, and medicine. In this work we propose training agents to search and reason over scientific papers – this setting tests technical question answering, is directly relevant to real scientists, and exercises capabilities that will be crucial to future AI Scientist systems. Concretely, we release a search corpus of 16 million biomedical paper abstracts and construct a challenging factoid QA dataset, PaperSearchQA, with 60k samples answerable from the corpus, along with benchmarks. We train search agents in this environment to outperform non-RL retrieval baselines; further quantitative analysis reveals interesting agent behaviors such as planning, reasoning, and self-verification. Our corpus, datasets, and benchmarks are compatible with the popular Search-R1 codebase for RLVR training and are available on Hugging Face. Finally, our data creation methods are scalable and easily extendable to other scientific domains.
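The RLVR setup described in the abstract supervises only final-answer correctness. A minimal sketch of such a verifiable reward is shown below; the `<answer>` tag convention follows Search-R1-style agents, but the normalization details are an assumption for illustration, not the paper's exact implementation:

```python
import re
import string


def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def verifiable_reward(response: str, gold_answer: str) -> float:
    """Binary reward: 1.0 iff the agent's final answer matches the gold answer.

    Only the text inside the final <answer>...</answer> tags is scored;
    intermediate reasoning and search steps receive no direct supervision.
    """
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if match is None:
        return 0.0  # malformed output earns no reward
    return 1.0 if normalize(match.group(1)) == normalize(gold_answer) else 0.0
```

During RL training, this scalar is the only learning signal per rollout, which is what makes the reward "verifiable": it is computed mechanically from the gold answer, with no learned judge.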
2025
NegVQA: Can Vision Language Models Understand Negation?
Yuhui Zhang | Yuchang Su | Yiming Liu | Serena Yeung-Levy
Findings of the Association for Computational Linguistics: ACL 2025
Negation is a fundamental linguistic phenomenon that can entirely reverse the meaning of a sentence. As vision language models (VLMs) continue to advance and are deployed in high-stakes applications, assessing their ability to comprehend negation becomes essential. To address this, we introduce NegVQA, a visual question answering (VQA) benchmark consisting of 7,379 two-choice questions covering diverse negation scenarios and image-question distributions. We construct NegVQA by leveraging large language models to generate negated versions of questions from existing VQA datasets. Evaluating 20 state-of-the-art VLMs across seven model families, we find that these models struggle significantly with negation, exhibiting a substantial performance drop compared to their responses to the original questions. Furthermore, we uncover a U-shaped scaling trend, where increasing model size initially degrades performance on NegVQA before leading to improvements. Our benchmark reveals critical gaps in VLMs’ negation understanding and offers insights into future VLM development. Project page available at https://yuhui-zh15.github.io/NegVQA/.
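The construction recipe above — prompting an LLM to negate existing VQA questions, then flipping the answer — can be sketched as follows. The `call_llm` helper and the prompt wording are hypothetical stand-ins, not the authors' actual pipeline:

```python
def build_negation_prompt(question: str) -> str:
    """Hypothetical prompt asking an LLM to negate a VQA question."""
    return (
        "Rewrite the following visual question so that it asks about the "
        "negation of the original property, preserving grammaticality.\n"
        f"Original: {question}\nNegated:"
    )


def negate_vqa_sample(sample: dict, call_llm) -> dict:
    """Turn one two-choice VQA sample into its negated counterpart.

    `call_llm` is an injected function (hypothetical) mapping a prompt
    string to the model's completion. The correct answer flips, because
    negation reverses the truth value of a yes/no question.
    """
    negated_question = call_llm(build_negation_prompt(sample["question"]))
    flipped = {"yes": "no", "no": "yes"}[sample["answer"]]
    return {"image": sample["image"], "question": negated_question, "answer": flipped}
```

Injecting `call_llm` keeps the sketch independent of any particular API; in practice the generated questions would also need filtering for fluency and faithfulness before entering a benchmark.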
Data or Language Supervision: What Makes CLIP Better than DINO?
Yiming Liu | Yuhui Zhang | Dhruba Ghosh | Ludwig Schmidt | Serena Yeung-Levy
Findings of the Association for Computational Linguistics: EMNLP 2025
CLIP outperforms self-supervised models like DINO as vision encoders for vision-language models (VLMs), but it remains unclear whether this advantage stems from CLIP’s language supervision or its much larger training data. To disentangle these factors, we pre-train CLIP and DINO under controlled settings—using the same architecture, dataset, and training configuration—achieving similar ImageNet accuracy. Embedding analysis shows that CLIP captures high-level semantics (e.g., object categories, text), while DINO is more responsive to low-level features like colors and styles. When integrated into VLMs and evaluated on 20 VQA benchmarks, CLIP excels at text-intensive tasks, while DINO slightly outperforms on vision-centric ones. Variants of language supervision (e.g., sigmoid loss, pre-trained language encoders) yield limited gains. Our findings provide scientific insights into vision encoder design and its impact on VLM performance.
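One simple way to probe what an encoder's embeddings capture, in the spirit of the embedding analysis above, is a nearest-centroid classification over attribute labels (e.g. object category vs. color): the more separable a label is in the space, the better the encoder represents that attribute. The sketch below is purely illustrative, with plain Python lists standing in for real CLIP or DINO features:

```python
from collections import defaultdict
from math import dist


def nearest_centroid_accuracy(embeddings, labels):
    """Probe how well a label is captured by an embedding space.

    embeddings: list of equal-length coordinate lists (feature vectors).
    labels: one label per embedding (e.g. object category, or color).
    Returns the fraction of embeddings whose nearest class centroid
    matches their own label.
    """
    # Group embeddings by label and compute per-class centroids.
    grouped = defaultdict(list)
    for emb, lab in zip(embeddings, labels):
        grouped[lab].append(emb)
    centroids = {
        lab: [sum(coords) / len(vecs) for coords in zip(*vecs)]
        for lab, vecs in grouped.items()
    }
    # Classify each embedding by its nearest centroid (Euclidean distance).
    preds = [min(centroids, key=lambda lab: dist(centroids[lab], emb))
             for emb in embeddings]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

Running such a probe twice per encoder — once with semantic labels, once with low-level labels like color — gives a rough, assumption-laden analogue of the paper's finding that CLIP favors high-level semantics while DINO is more responsive to low-level features.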