Abstract
Large language models (LLMs) show extraordinary performance in a broad range of cognitive tasks, yet their capability to reproduce human semantic similarity judgements remains disputed. We report an experiment in which we fine-tune two LLMs for Slovene, a monolingual SloT5 and a multilingual mT5, as well as an mT5 for English, to generate word associations. The models are fine-tuned on human word association norms created within the Small World of Words project, which recently started to collect data for Slovene. Since our aim was to explore differences between human and model-generated outputs, the model parameters were minimally adjusted to fit the association task. We perform automatic evaluation using a set of methods to measure overlap and ranking, and in addition a subset of human and model-generated responses were manually classified into four categories (meaning-, position-, and form-based, and erratic). Results show that human-machine overlap is very small, but that the models produce a distribution of association categories similar to that of humans.
- Anthology ID:
- 2024.cogalex-1.5
- Volume:
- Proceedings of the Workshop on Cognitive Aspects of the Lexicon @ LREC-COLING 2024
- Month:
- May
- Year:
- 2024
- Address:
- Torino, Italia
- Editors:
- Michael Zock, Emmanuele Chersoni, Yu-Yin Hsu, Simon de Deyne
- Venue:
- CogALex
- Publisher:
- ELRA and ICCL
- Pages:
- 42–48
- URL:
- https://aclanthology.org/2024.cogalex-1.5
- Cite (ACL):
- Špela Vintar, Mojca Brglez, and Aleš Žagar. 2024. How Human-Like Are Word Associations in Generative Models? An Experiment in Slovene. In Proceedings of the Workshop on Cognitive Aspects of the Lexicon @ LREC-COLING 2024, pages 42–48, Torino, Italia. ELRA and ICCL.
- Cite (Informal):
- How Human-Like Are Word Associations in Generative Models? An Experiment in Slovene (Vintar et al., CogALex 2024)
- PDF:
- https://aclanthology.org/2024.cogalex-1.5.pdf