Yifei Xin
2025
Chain of Ideas: Revolutionizing Research Via Novel Idea Development with LLM Agents
Long Li | Weiwen Xu | Jiayan Guo | Ruochen Zhao | Xingxuan Li | Yuqian Yuan | Boqiang Zhang | Yuming Jiang | Yifei Xin | Ronghao Dang | Yu Rong | Deli Zhao | Tian Feng | Lidong Bing
Findings of the Association for Computational Linguistics: EMNLP 2025
Research ideation is crucial for scientific progress, but the exponential growth of scientific literature makes it challenging to stay updated and identify impactful directions. Recent developments in large language models (LLMs) offer a promising avenue for automating this process. However, existing methods for idea generation either trivially prompt LLMs or expose LLMs to extensive literature without indicating which information is useful. Inspired by human research processes, we propose a Chain-of-Ideas (CoI) agent, an LLM-based agent that organizes relevant literature in a chain structure to effectively mirror the progressive development of a research domain. This organization helps LLMs better grasp current advancements, thereby improving their ideation capabilities. Further, we present Idea Arena, a protocol for evaluating idea-generation methods from different perspectives, which aligns closely with the preferences of human researchers. Experiments show that the CoI agent consistently outperforms existing methods and matches human quality in idea generation. Moreover, the CoI agent is budget-friendly, requiring only $0.50 to generate a candidate idea and its experimental design.
2024
Soul-Mix: Enhancing Multimodal Machine Translation with Manifold Mixup
Xuxin Cheng | Ziyu Yao | Yifei Xin | Hao An | Hongxiang Li | Yaowei Li | Yuexian Zou
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Multimodal machine translation (MMT) aims to improve the performance of machine translation with the help of visual information, and has received widespread attention recently. It has been verified that visual information brings greater performance gains when textual information is limited. However, most previous works fail to take advantage of complete textual inputs and limited textual inputs at the same time, which limits overall performance. To address this issue, we propose a mixup method termed Soul-Mix that enhances MMT by using visual information more effectively: we mix the predicted translations of the complete textual inputs and the limited textual inputs. Experimental results on three translation directions of the Multi30K dataset show that Soul-Mix significantly outperforms existing approaches and achieves new state-of-the-art performance with fewer parameters than some previous models. Moreover, the strength of Soul-Mix is more pronounced on the more challenging MSCOCO dataset, which includes more out-of-domain instances with many ambiguous verbs.