On the role of resources in the age of large language models

Simon Dobnik, John Kelleher


Abstract
We evaluate the role of expert-based domain knowledge and resources in training large language models, drawing on our work on training and evaluating neural models, including in under-resourced scenarios, which we believe also informs the training of models for “well-resourced” languages and domains. We argue that our community needs both large-scale datasets and small but high-quality data based on expert knowledge, and that the two activities should work hand in hand.
Anthology ID:
2023.clasp-1.20
Volume:
Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD)
Month:
September
Year:
2023
Address:
Gothenburg, Sweden
Editors:
Ellen Breitholtz, Shalom Lappin, Sharid Loáiciga, Nikolai Ilinykh, Simon Dobnik
Venue:
CLASP
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
191–197
URL:
https://aclanthology.org/2023.clasp-1.20
Cite (ACL):
Simon Dobnik and John Kelleher. 2023. On the role of resources in the age of large language models. In Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD), pages 191–197, Gothenburg, Sweden. Association for Computational Linguistics.
Cite (Informal):
On the role of resources in the age of large language models (Dobnik & Kelleher, CLASP 2023)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2023.clasp-1.20.pdf