AdaptEval: Evaluating Large Language Models on Domain Adaptation for Text Summarization
Anum Afzal, Ribin Chalumattu, Florian Matthes, Laura Mascarell
Abstract
Despite the advances in the abstractive summarization task using Large Language Models (LLMs), there is a lack of research assessing their ability to adapt easily to different domains. We evaluate the domain adaptation abilities of a wide range of LLMs on the summarization task across various domains in both fine-tuning and in-context learning settings. We also present AdaptEval, the first domain adaptation evaluation suite. AdaptEval includes a domain benchmark and a set of metrics to facilitate the analysis of domain adaptation. Our results demonstrate that LLMs exhibit comparable performance in the in-context learning setting, regardless of their parameter scale.
- Anthology ID: 2024.customnlp4u-1.8
- Volume: Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Application, Group, or Individual (CustomNLP4U)
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Sachin Kumar, Vidhisha Balachandran, Chan Young Park, Weijia Shi, Shirley Anugrah Hayati, Yulia Tsvetkov, Noah Smith, Hannaneh Hajishirzi, Dongyeop Kang, David Jurgens
- Venue: CustomNLP4U
- Publisher: Association for Computational Linguistics
- Pages: 76–85
- URL: https://aclanthology.org/2024.customnlp4u-1.8
- DOI: 10.18653/v1/2024.customnlp4u-1.8
- Cite (ACL): Anum Afzal, Ribin Chalumattu, Florian Matthes, and Laura Mascarell. 2024. AdaptEval: Evaluating Large Language Models on Domain Adaptation for Text Summarization. In Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Application, Group, or Individual (CustomNLP4U), pages 76–85, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): AdaptEval: Evaluating Large Language Models on Domain Adaptation for Text Summarization (Afzal et al., CustomNLP4U 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.customnlp4u-1.8.pdf