Are Decoder-Only Language Models Better than Encoder-Only Language Models in Understanding Word Meaning?

Muhammad Qorib, Geonsik Moon, Hwee Tou Ng


Abstract
The natural language processing field has been evolving around language models for the past few years, from the use of n-gram language models for re-ranking, to transfer learning with encoder-only (BERT-like) language models, and finally to large language models (LLMs) as general solvers. LLMs are dominated by the decoder-only type, and they are popular for their efficacy across numerous tasks. LLMs are regarded as having strong comprehension abilities and the capability to solve new, unseen tasks. As such, people may quickly assume that decoder-only LLMs always perform better than encoder-only ones, especially for understanding word meaning. In this paper, we demonstrate that decoder-only LLMs perform worse on word meaning comprehension than an encoder-only language model that has vastly fewer parameters.
Anthology ID: 2024.findings-acl.967
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 16339–16347
URL: https://aclanthology.org/2024.findings-acl.967
Cite (ACL): Muhammad Qorib, Geonsik Moon, and Hwee Tou Ng. 2024. Are Decoder-Only Language Models Better than Encoder-Only Language Models in Understanding Word Meaning?. In Findings of the Association for Computational Linguistics ACL 2024, pages 16339–16347, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): Are Decoder-Only Language Models Better than Encoder-Only Language Models in Understanding Word Meaning? (Qorib et al., Findings 2024)
PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.findings-acl.967.pdf