Yeqin Zhang
2024
Retrospex: Language Agent Meets Offline Reinforcement Learning Critic
Yufei Xiang | Yiqun Shen | Yeqin Zhang | Nguyen Cam-Tu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) possess extensive knowledge and commonsense reasoning capabilities, making them valuable for creating powerful agents. However, existing LLM agent frameworks have not fully utilized past experiences for improvement. This work introduces a new LLM-based agent framework called Retrospex, which addresses this challenge by analyzing past experiences in depth. Unlike previous approaches, Retrospex does not directly integrate experiences into the LLM’s context. Instead, it combines the LLM’s action likelihood with action values estimated by a Reinforcement Learning (RL) Critic, which is trained on past experiences through an offline “retrospection” process. Additionally, Retrospex employs a dynamic action rescoring mechanism that increases the importance of experience-based values for tasks that require more interaction with the environment. We evaluate Retrospex in ScienceWorld, ALFWorld and Webshop environments, demonstrating its advantages over strong baselines.
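The abstract's core mechanism, blending the LLM's action likelihood with an offline-trained RL critic's action values under a dynamic weight, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `rescore_actions`, the linear weighting schedule, and all parameter names are assumptions.

```python
# Hypothetical sketch of Retrospex-style dynamic action rescoring.
# The agent scores each candidate action by combining the LLM's
# log-likelihood with a Q-value from an RL critic trained offline on
# past experiences; the critic's weight grows as the task demands
# more interaction steps with the environment.

def rescore_actions(llm_logprobs, critic_values, step, max_steps, w_max=0.5):
    """Blend LLM likelihoods with critic values; the critic weight
    (capped at w_max) increases with the interaction step."""
    w = w_max * min(step / max_steps, 1.0)  # dynamic critic weight
    return {
        action: (1 - w) * llm_logprobs[action] + w * critic_values[action]
        for action in llm_logprobs
    }

# Choose the highest-scoring action among candidates.
scores = rescore_actions(
    llm_logprobs={"open door": -0.2, "go north": -1.5},
    critic_values={"open door": 0.1, "go north": 0.9},
    step=5, max_steps=10,
)
best = max(scores, key=scores.get)
```

Early in an episode the LLM's prior dominates; later, experience-based values carry more weight, which matches the paper's claim that experience matters most for interaction-heavy tasks.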
2022
Doc2Bot: Accessing Heterogeneous Documents via Conversational Bots
Haomin Fu | Yeqin Zhang | Haiyang Yu | Jian Sun | Fei Huang | Luo Si | Yongbin Li | Cam Tu Nguyen
Findings of the Association for Computational Linguistics: EMNLP 2022
This paper introduces Doc2Bot, a novel dataset for building machines that help users seek information via conversations. This is of particular interest for companies and organizations that own a large number of manuals or instruction books. Despite its potential, the nature of our task poses several challenges: (1) documents contain varied structures that hinder machine comprehension, and (2) user information needs are often underspecified. Compared to prior datasets that either focus on a single structural type or overlook the role of questioning in uncovering user needs, the Doc2Bot dataset is developed to target such challenges systematically. Our dataset contains over 100,000 turns based on Chinese documents from five domains, larger than any prior document-grounded dialog dataset for information seeking. We propose three tasks in Doc2Bot: (1) dialog state tracking, to track user intentions; (2) dialog policy learning, to plan system actions and contents; and (3) response generation, which produces responses based on the outputs of the dialog policy. Baseline methods based on the latest deep learning models are presented, indicating that our proposed tasks are challenging and worthy of further research.