Keyang Xuan
2025
TinyScientist: An Interactive, Extensible, and Controllable Framework for Building Research Agents
Haofei Yu | Keyang Xuan | Fenghai Li | Kunlun Zhu | Zijie Lei | Jiaxun Zhang | Ziheng Qi | Kyle Richardson | Jiaxuan You
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Automatic research with Large Language Models (LLMs) is rapidly gaining importance, driving the development of increasingly complex workflows involving multi-agent systems, planning, tool usage, code execution, and human-agent interaction to accelerate research processes. However, as more researchers and developers begin to use and build upon these tools and platforms, the complexity and difficulty of extending and maintaining such agentic workflows have become a significant challenge, particularly as algorithms and architectures continue to advance. To address this growing complexity, TinyScientist identifies the essential components of the automatic research workflow and proposes an interactive, extensible, and controllable framework that adapts easily to new tools and supports iterative growth. We provide an open-source codebase, an interactive web demonstration, and a PyPI Python package to make state-of-the-art auto-research pipelines broadly accessible to every researcher and developer.
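Since the abstract advertises an open-source codebase and a PyPI package, a minimal usage sketch follows. Everything in it is an assumption for illustration: the package name, the `TinyScientist` class, its constructor, and the `think`/`write`/`review` stage methods are hypothetical and are not documented in the abstract; consult the project's actual codebase and PyPI page for the real interface.

```python
# Hypothetical usage sketch only. The package name, class, constructor
# arguments, and method names below are assumptions, not confirmed API.
# Assumed install: pip install tiny-scientist

from tiny_scientist import TinyScientist  # assumed import path

# Assumed constructor: choose a backbone LLM for the research agent.
scientist = TinyScientist(model="gpt-4o")

# Assumed pipeline stages mirroring the workflow the abstract describes:
# idea generation, paper drafting, and automated review.
idea = scientist.think(intent="token-adaptive pruning for multimodal LLMs")
paper_path = scientist.write(idea=idea, experiment_dir="./experiments")
review = scientist.review(pdf_path=paper_path)
print(review)
```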
TAMP: Token-Adaptive Layerwise Pruning in Multimodal Large Language Models
Jaewoo Lee | Keyang Xuan | Chanakya Ekbote | Sandeep Polisetty | Yi R. Fung | Paul Pu Liang
Findings of the Association for Computational Linguistics: ACL 2025
Multimodal Large Language Models (MLLMs) have shown remarkable versatility in understanding diverse multimodal data and tasks. However, these capabilities come at the cost of increased model scale. While post-training pruning effectively reduces model size in unimodal models, its application to MLLMs often yields limited success. Our analysis reveals that conventional methods fail to account for the unique token attributes across layers and modalities inherent to MLLMs. Motivated by this observation, we propose TAMP, a simple yet effective pruning framework tailored for MLLMs, featuring two key components: (1) Diversity-Aware Sparsity, which adjusts the sparsity ratio of each layer based on the diversity among its multimodal output tokens, preserving more parameters in high-diversity layers; and (2) Adaptive Multimodal Input Activation, which identifies representative multimodal input tokens using attention scores to guide unstructured weight pruning. We validate our method on two state-of-the-art MLLMs: LLaVA-NeXT, designed for vision-language tasks, and VideoLLaMA2, which processes audio, visual, and language modalities. Experiments across various multimodal evaluation benchmarks demonstrate that each component of our approach substantially outperforms existing pruning techniques. Our code is available at https://github.com/G-JWLee/TAMP.
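To make the Diversity-Aware Sparsity idea concrete, here is a minimal PyTorch sketch: measure each layer's output-token diversity, allocate higher sparsity to low-diversity layers, then apply magnitude pruning. The diversity metric (mean pairwise cosine distance) and the linear allocation rule are assumptions chosen for illustration, not the paper's exact formulation; see the official code at https://github.com/G-JWLee/TAMP for the real method.

```python
# Assumed sketch of diversity-aware layerwise sparsity; metric and
# allocation rule are illustrative choices, not the paper's exact ones.
import torch


def token_diversity(tokens: torch.Tensor) -> float:
    """Mean pairwise cosine distance among one layer's output tokens.

    tokens: (num_tokens, hidden_dim). Higher values indicate more
    diverse multimodal token representations at that layer.
    """
    normed = torch.nn.functional.normalize(tokens, dim=-1)
    cos = normed @ normed.T                       # pairwise cosine similarity
    n = tokens.shape[0]
    mean_sim = (cos.sum() - n) / (n * (n - 1))    # exclude self-similarity
    return float(1.0 - mean_sim)


def allocate_sparsity(diversities, target=0.5, spread=0.2):
    """Per-layer sparsity: high-diversity layers keep more parameters.

    Diversities are min-max normalized, then mapped around the target
    average sparsity (an assumed allocation rule for illustration).
    """
    d = torch.tensor(diversities)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    return (target - spread * (d - d.mean())).clamp(0.0, 0.95).tolist()


def magnitude_prune_(weight: torch.Tensor, sparsity: float) -> None:
    """Zero the smallest-magnitude entries of `weight` in place."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return
    threshold = weight.abs().flatten().kthvalue(k).values
    weight.masked_fill_(weight.abs() <= threshold, 0.0)


if __name__ == "__main__":
    # Toy example: three "layers" with random output tokens and weights.
    torch.manual_seed(0)
    layer_tokens = [torch.randn(64, 128) for _ in range(3)]
    layer_weights = [torch.randn(128, 128) for _ in range(3)]

    ratios = allocate_sparsity([token_diversity(t) for t in layer_tokens])
    for w, s in zip(layer_weights, ratios):
        magnitude_prune_(w, s)
        print(f"sparsity={s:.2f}, zero fraction={(w == 0).float().mean():.2f}")
```

The second component, Adaptive Multimodal Input Activation, would replace the plain magnitude criterion above with an activation-weighted one computed from attention-selected input tokens; the sketch keeps magnitude pruning for brevity.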