MIDI-PHOR: Multi-View Distillation for Music Understanding and Captioning

Steven Au


Abstract
A central limitation of current music understanding frameworks is their reliance on audio embeddings, which frequently yield interpretations lacking traceable ties to explicit musical elements such as notes, dynamics, and instrumentation. We address this gap with MIDI-PHOR, a MIDI-first framework that converts symbolic data into structured, queryable representations for reasoning. MIDI-PHOR distills each piece into three complementary views: a symbolic view capturing pitch, meter, and key; a time-series (TS) view that tracks rhythmic salience, texture, and role activity; and an instrument-role graph encoding ensemble interactions. By grounding its claims in evidence linked to these views, MIDI-PHOR reduces hallucinations relative to raw-MIDI baselines in our experiments and offers a robust, auditable bridge between symbolic data and semantic music understanding.
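To make the three-view distillation concrete, the following is a minimal sketch of what such a pipeline could look like. The paper does not specify its schemas; the class names, the `(tick, pitch, instrument)` note encoding, and the co-occurrence heuristic for the role graph are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical containers for the three complementary views.
@dataclass
class SymbolicView:          # pitch, meter, and key summary
    pitches: list
    time_signature: tuple
    key: str

@dataclass
class TimeSeriesView:        # per-bar onset counts as a crude texture proxy
    onsets_per_bar: list

@dataclass
class RoleGraph:             # undirected instrument-role interaction edges
    edges: set

def distill(notes, ticks_per_bar=1920, time_signature=(4, 4), key="C major"):
    """Distill a toy note list of (tick, pitch, instrument) tuples
    into the three views. Meter and key are passed in here; a real
    system would estimate them from the MIDI file."""
    pitches = sorted({p for _, p, _ in notes})
    n_bars = max(t for t, _, _ in notes) // ticks_per_bar + 1
    onsets = [0] * n_bars
    by_bar = {}
    for t, _, inst in notes:
        bar = t // ticks_per_bar
        onsets[bar] += 1
        by_bar.setdefault(bar, set()).add(inst)
    # Heuristic: connect instruments that sound within the same bar.
    edges = {tuple(sorted((a, b)))
             for insts in by_bar.values()
             for a in insts for b in insts if a != b}
    return (SymbolicView(pitches, time_signature, key),
            TimeSeriesView(onsets),
            RoleGraph(edges))
```

A downstream captioning model would then query these structured views (e.g., "which instruments co-occur in bar 3?") instead of attending over raw MIDI events, which is what makes each generated claim traceable.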
Anthology ID:
2026.nlp4musa-1.6
Volume:
Proceedings of the 4th Workshop on NLP for Music and Audio (NLP4MusA 2026)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Elena V. Epure, Sergio Oramas, SeungHeon Doh, Pedro Ramoneda, Anna Kruspe, Mohamed Sordo
Venues:
NLP4MusA | WS
Publisher:
Association for Computational Linguistics
Pages:
33–43
URL:
https://preview.aclanthology.org/volume-cite/2026.nlp4musa-1.6/
DOI:
10.18653/v1/2026.nlp4musa-1.6
Bibkey:
Cite (ACL):
Steven Au. 2026. MIDI-PHOR: Multi-View Distillation for Music Understanding and Captioning. In Proceedings of the 4th Workshop on NLP for Music and Audio (NLP4MusA 2026), pages 33–43, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
MIDI-PHOR: Multi-View Distillation for Music Understanding and Captioning (Au, NLP4MusA 2026)
PDF:
https://preview.aclanthology.org/volume-cite/2026.nlp4musa-1.6.pdf