Thought2Text: Text Generation from EEG Signal using Large Language Models (LLMs)

Abhijit Mishra, Shreya Shukla, Jose Torres, Jacek Gwizdka, Shounak Roychowdhury


Abstract
Decoding and expressing brain activity in a comprehensible form is a challenging frontier in AI. This paper presents *Thought2Text*, which uses instruction-tuned Large Language Models (LLMs) fine-tuned with EEG data to achieve this goal. The approach involves three stages: (1) training an EEG encoder for visual feature extraction, (2) fine-tuning LLMs on image and text data to enable multimodal description generation, and (3) further fine-tuning on EEG embeddings to generate text directly from EEG during inference. Experiments on a public EEG dataset collected from six subjects viewing image stimuli paired with text captions demonstrate the efficacy of multimodal LLMs (*LLaMA-v3*, *Mistral-v0.3*, *Qwen2.5*), validated using traditional language generation evaluation metrics as well as *fluency* and *adequacy* measures. This approach marks a significant advancement towards portable, low-cost “thoughts-to-text” technology with potential applications in both neuroscience and natural language processing.
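To make the three-stage pipeline concrete, the following PyTorch sketch illustrates one plausible realization under stated assumptions; it is not the authors' released implementation. The module names (`EEGEncoder`, `Projector`), the EEG tensor shape, and all dimensions are hypothetical placeholders.

```python
# Illustrative sketch of the Thought2Text stages described in the abstract.
# All shapes, dimensions, and module names are assumptions, not the paper's code.
import torch
import torch.nn as nn

class EEGEncoder(nn.Module):
    """Stage 1 (assumed form): map raw EEG segments to a visual feature space."""
    def __init__(self, n_channels=128, n_samples=440, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_samples, 1024),
            nn.ReLU(),
            nn.Linear(1024, feat_dim),
        )

    def forward(self, eeg):
        return self.net(eeg)

class Projector(nn.Module):
    """Bridge for stages 2-3: project features into the LLM's embedding space."""
    def __init__(self, feat_dim=512, llm_dim=4096):
        super().__init__()
        self.proj = nn.Linear(feat_dim, llm_dim)

    def forward(self, feats):
        # One "soft token" per EEG segment, to be prepended to the text prompt.
        return self.proj(feats).unsqueeze(1)

# Stage 1: align EEG features with visual embeddings of the image stimuli
# (e.g., via a regression loss against precomputed image features).
encoder = EEGEncoder()
eeg = torch.randn(8, 128, 440)        # hypothetical batch of EEG segments
image_feats = torch.randn(8, 512)     # hypothetical visual features of stimuli
loss_stage1 = nn.functional.mse_loss(encoder(eeg), image_feats)

# Stages 2-3: the projected embeddings stand in for image tokens when
# fine-tuning the LLM, first on image/text pairs, then on EEG embeddings.
projector = Projector()
soft_tokens = projector(encoder(eeg))  # shape (8, 1, 4096)
print(loss_stage1.item(), soft_tokens.shape)
```

In this sketch the LLM itself is omitted; the key design idea is that once the projector maps EEG features into the LLM's token-embedding space, the same generation machinery used for image-conditioned captioning can produce text from EEG alone at inference time.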
Anthology ID: 2025.findings-naacl.207
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3747–3759
URL: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.207/
Cite (ACL): Abhijit Mishra, Shreya Shukla, Jose Torres, Jacek Gwizdka, and Shounak Roychowdhury. 2025. Thought2Text: Text Generation from EEG Signal using Large Language Models (LLMs). In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3747–3759, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): Thought2Text: Text Generation from EEG Signal using Large Language Models (LLMs) (Mishra et al., Findings 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.207.pdf