Abstract
One major challenge for Large Language Models (LLMs) is completing complex tasks involving multiple entities, such as tool APIs. To tackle this, one approach is to retrieve relevant entities to enhance LLMs in task completion. A crucial issue here is obtaining accurate natural language representations for each entity to aid in retriever precision. In this paper, we propose the Natural Language Representation Optimization Problem, which aims to refine entity descriptions for improved retrieval and LLM utilization. We introduce the Learning to Represent with Natural Language method, which utilizes LLMs to optimize entity representations consisting of text patterns based on environmental feedback. We iteratively prompt LLMs to enhance or adjust patterns based on entity samples and evaluate their effectiveness through environmental feedback. Our method successfully learns human-understandable representations for classification tasks (e.g., instructions and documents) and API call tasks (e.g., APIBench and Virtual Home), significantly improving GPT-4’s task performance.
- Anthology ID:
- 2024.findings-acl.542
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2024
- Month:
- August
- Year:
- 2024
- Address:
- Bangkok, Thailand and virtual meeting
- Editors:
- Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 9145–9154
- URL:
- https://aclanthology.org/2024.findings-acl.542
- Cite (ACL):
- Yiduo Guo, Yaobo Liang, Dongyan Zhao, and Nan Duan. 2024. Large Language Models Can Learn Representation in Natural Language. In Findings of the Association for Computational Linguistics: ACL 2024, pages 9145–9154, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
- Cite (Informal):
- Large Language Models Can Learn Representation in Natural Language (Guo et al., Findings 2024)
- PDF:
- https://aclanthology.org/2024.findings-acl.542.pdf
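The abstract describes an iterative loop: prompt an LLM to propose a revised entity description from entity samples, score the candidate with environmental feedback, and keep it only if the score improves. The following is a minimal sketch of that loop under stated assumptions, not the paper's actual method: `llm_propose` and `environment_score` are hypothetical stand-ins (a random keyword appender and a token-overlap retrieval proxy) for the real LLM call and environment.

```python
import random

def llm_propose(pattern, samples):
    # Hypothetical stand-in for an LLM call that rewrites an entity
    # description ("pattern") given example entity samples. Here we
    # simulate the rewrite by appending a keyword drawn from the samples.
    keywords = [w for s in samples for w in s.split()]
    return pattern + " " + random.choice(keywords)

def environment_score(pattern, held_out_queries):
    # Hypothetical environmental feedback: the fraction of held-out
    # queries sharing at least one token with the candidate description
    # (a crude proxy for retrieval hits).
    hits = sum(any(w in pattern for w in q.split()) for q in held_out_queries)
    return hits / len(held_out_queries)

def optimize_representation(init_pattern, samples, held_out, iters=20, seed=0):
    """Iteratively refine an entity's natural-language representation,
    accepting a candidate only when environmental feedback improves."""
    random.seed(seed)
    best = init_pattern
    best_score = environment_score(best, held_out)
    for _ in range(iters):
        candidate = llm_propose(best, samples)
        score = environment_score(candidate, held_out)
        if score > best_score:  # greedy accept on improvement
            best, best_score = candidate, score
    return best, best_score
```

Because the loop only accepts strictly improving candidates, the returned score is never below that of the initial description; the paper's actual acceptance criterion and feedback signal may differ.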