Shenguang Huang




2025

LCAN: A Label-Aware Contrastive Attention Network for Multi-Intent Recognition and Slot Filling in Task-Oriented Dialogue Systems
Shuli Zhang | Zhiqiang You | Xiao Xiang Qi | Peng Liu | Gaode Wu | Kan Xia | Shenguang Huang
Findings of the Association for Computational Linguistics: EMNLP 2025

Processing multi-intent utterances remains a persistent challenge due to intricate intent-slot dependencies and semantic ambiguities. Traditional methods struggle to model these complex interactions, particularly when handling overlapping slot structures across multiple intents. This paper introduces the label-aware contrastive attention network (LCAN), a joint modeling approach for multi-intent recognition and slot filling in task-oriented dialogue systems. LCAN addresses these challenges by integrating label-aware attention with contrastive learning strategies, improving semantic understanding and generalization in multi-intent scenarios. Extensive experiments on the MixATIS and MixSNIPS datasets demonstrate LCAN’s superiority over existing models, achieving improved intent recognition and slot filling performance, particularly when handling overlapping or complex semantic structures in multi-intent settings.