AAD-LLM: Neural Attention-Driven Auditory Scene Understanding
Xilin Jiang | Sukru Samet Dindar | Vishal Choudhari | Stephan Bickel | Ashesh Mehta | Guy M McKhann | Daniel Friedman | Adeen Flinker | Nima Mesgarani
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025
Auditory foundation models, including auditory large language models (LLMs), process all sound inputs equally, independent of listener perception. However, human auditory perception is inherently selective: listeners focus on specific speakers while ignoring others in complex auditory scenes. Existing models do not incorporate this selectivity, limiting their ability to generate perception-aligned responses. To address this, we introduce intention-informed auditory scene understanding (II-ASU) and present Auditory Attention-Driven LLM (AAD-LLM), a prototype system that integrates brain signals to infer listener attention. AAD-LLM extends an auditory LLM by incorporating intracranial electroencephalography (iEEG) recordings to decode which speaker a listener is attending to and refine responses accordingly. The model first predicts the attended speaker from neural activity, then conditions response generation on this inferred attentional state. We evaluate AAD-LLM on speaker description, speech transcription and extraction, and question answering in multitalker scenarios, with both objective and subjective ratings showing improved alignment with listener intention. By taking a first step toward intention-aware auditory AI, this work explores a new paradigm where listener perception informs machine listening, paving the way for future listener-centered auditory systems. Demo available.
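The abstract describes a two-stage pipeline: first decode which speaker the listener is attending to from iEEG, then condition the auditory LLM's response generation on that inferred attentional state. The sketch below is only an illustration of that flow under assumed interfaces; the class names, method signatures, and decoding logic are hypothetical and do not reflect the authors' actual AAD-LLM implementation.

```python
# Illustrative sketch of the two-stage pipeline described in the abstract.
# All names, signatures, and shapes are hypothetical assumptions, not the
# authors' implementation of AAD-LLM.

from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class SeparatedSpeaker:
    """One speaker stream separated from the multitalker mixture."""
    speaker_id: int
    waveform: np.ndarray  # mono audio samples


class AttentionDecoder:
    """Stage 1 (hypothetical): infer the attended speaker from intracranial
    EEG recorded while the listener hears the multitalker mixture."""

    def predict_attended(self, ieeg: np.ndarray,
                         speakers: List[SeparatedSpeaker]) -> int:
        # A real decoder might relate neural responses to each speaker's
        # speech envelope (classic auditory attention decoding); this stub
        # only marks where that prediction would happen.
        return 0


class AuditoryLLM:
    """Stage 2 (hypothetical): an auditory LLM whose response generation is
    conditioned on the inferred attentional state."""

    def generate(self, speakers: List[SeparatedSpeaker],
                 attended_id: int, query: str) -> str:
        # Conditioning could mean prompting with the attended stream or
        # biasing attention toward it; this stub only shows the data flow.
        return f"[response about speaker {attended_id} for query: {query!r}]"


def answer_with_attention(ieeg: np.ndarray,
                          speakers: List[SeparatedSpeaker],
                          query: str) -> str:
    """Decode the attended speaker, then generate an intention-aligned answer."""
    decoder, llm = AttentionDecoder(), AuditoryLLM()
    attended_id = decoder.predict_attended(ieeg, speakers)
    return llm.generate(speakers, attended_id, query)
```

The point of the sketch is the ordering: neural decoding of attention happens before, and feeds into, response generation, which is what distinguishes the described system from auditory LLMs that treat all input speakers equally.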