How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances
Zihan Zhang, Meng Fang, Ling Chen, Mohammad-Reza Namazi-Rad, Jun Wang
Abstract
Although large language models (LLMs) are impressive in solving various tasks, they can quickly become outdated after deployment. Keeping them up to date is a pressing concern in the current era. This paper provides a comprehensive review of recent advances in aligning deployed LLMs with the ever-changing world knowledge. We categorize research works systematically and provide in-depth comparisons and discussions. We also discuss existing challenges and highlight future directions to facilitate research in this field.
- Anthology ID: 2023.emnlp-main.516
- Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 8289–8311
- URL: https://aclanthology.org/2023.emnlp-main.516
- DOI: 10.18653/v1/2023.emnlp-main.516
- Cite (ACL): Zihan Zhang, Meng Fang, Ling Chen, Mohammad-Reza Namazi-Rad, and Jun Wang. 2023. How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8289–8311, Singapore. Association for Computational Linguistics.
- Cite (Informal): How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances (Zhang et al., EMNLP 2023)
- PDF: https://aclanthology.org/2023.emnlp-main.516.pdf