MV-CLAM: Multi-View Molecular Interpretation with Cross-Modal Projection via Language Model

Sumin Ha, Jun Hyeong Kim, Yinhua Piao, Changyun Cho, Sun Kim


Abstract
Deciphering molecular meaning in chemistry and biomedicine depends on context — a capability that large language models (LLMs) can enhance by aligning molecular structures with language. However, existing molecule-text models ignore complementary information in different molecular views and rely on single-view representations, limiting structural understanding of molecules. Moreover, naïve multi-view alignment strategies face two challenges: (1) the aligned spaces differ across views due to inconsistent molecule-text mappings, and (2) existing loss objectives fail to preserve the complementary information necessary for fine-grained alignment. To enhance LLMs' ability to understand molecular structure, we propose MV-CLAM, a novel framework that aligns multi-view molecular representations into a unified textual space using a multi-querying transformer (MQ-Former). Our approach ensures cross-view consistency, while the proposed token-level contrastive loss preserves diverse molecular features across textual queries. MV-CLAM enhances molecular reasoning, improving retrieval and captioning accuracy. The source code of MV-CLAM is available at https://github.com/sumin124/mv-clam.
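The abstract's token-level contrastive objective can be illustrated with a minimal sketch. The snippet below is a hypothetical, FILIP-style token-level similarity between learnable query tokens and text tokens (max-over-text pooling, averaged over queries); the function name, pooling choice, and dimensions are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def token_level_similarity(query_tokens, text_tokens):
    """Hypothetical token-level score between molecular query tokens and
    text tokens: each query token matches its most similar text token
    (max-pooling), and the matches are averaged. A sketch only, not the
    paper's exact objective."""
    # L2-normalize so dot products become cosine similarities
    q = query_tokens / np.linalg.norm(query_tokens, axis=-1, keepdims=True)
    t = text_tokens / np.linalg.norm(text_tokens, axis=-1, keepdims=True)
    sim = q @ t.T                      # (num_queries, num_text_tokens)
    # best-matching text token per query token, then average over queries
    return float(sim.max(axis=1).mean())

rng = np.random.default_rng(0)
queries = rng.normal(size=(8, 64))     # e.g. 8 learnable query tokens, dim 64
text = rng.normal(size=(16, 64))       # e.g. 16 text tokens
score = token_level_similarity(queries, text)
```

In a full contrastive loss, such molecule-text scores for matched and mismatched pairs would feed an InfoNCE-style softmax over the batch.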
Anthology ID:
2025.findings-emnlp.1174
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
21528–21549
URL:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.1174/
DOI:
10.18653/v1/2025.findings-emnlp.1174
Cite (ACL):
Sumin Ha, Jun Hyeong Kim, Yinhua Piao, Changyun Cho, and Sun Kim. 2025. MV-CLAM: Multi-View Molecular Interpretation with Cross-Modal Projection via Language Model. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 21528–21549, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
MV-CLAM: Multi-View Molecular Interpretation with Cross-Modal Projection via Language Model (Ha et al., Findings 2025)
PDF:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.1174.pdf
Checklist:
2025.findings-emnlp.1174.checklist.pdf