Xuyang Xie

Also published as: 谢旭阳




2025

MIRA: Empowering One-Touch AI Services on Smartphones with MLLM-based Instruction Recommendation
Zhipeng Bian | Jieming Zhu | Xuyang Xie | Quanyu Dai | Zhou Zhao | Zhenhua Dong
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)

The rapid advancement of generative AI technologies is driving the integration of diverse AI-powered services into smartphones, transforming how users interact with their devices. To simplify access to predefined AI services, this paper introduces MIRA, a pioneering framework for task instruction recommendation that enables intuitive one-touch AI tasking on smartphones. With MIRA, users can long-press on images or text objects to receive contextually relevant instruction recommendations for executing AI tasks. Our work introduces three key innovations: 1) A multimodal large language model (MLLM)-based recommendation pipeline with structured reasoning to extract key entities, infer user intent, and generate precise instructions; 2) A template-augmented reasoning mechanism that integrates high-level reasoning templates, enhancing task inference accuracy; 3) A prefix-tree-based constrained decoding strategy that restricts outputs to predefined instruction candidates, ensuring coherence and intent alignment. Through evaluation using a real-world annotated dataset and a user study, MIRA has demonstrated substantial improvements in recommendation accuracy. The encouraging results highlight MIRA’s potential to revolutionize the way users engage with AI services on their smartphones, offering a more seamless and efficient experience.
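The prefix-tree-based constrained decoding mentioned in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical Python illustration, not the paper's implementation: it assumes the predefined instruction candidates are tokenized into word lists, and a score_next callback stands in for the MLLM's next-token scores.

# Minimal sketch of prefix-tree (trie) constrained decoding.
# Assumptions: candidates are lists of word tokens; score_next(prefix, allowed)
# returns model scores restricted to the allowed next tokens. Illustrative only.

from typing import Callable, Dict, List


class TrieNode:
    def __init__(self) -> None:
        self.children: Dict[str, "TrieNode"] = {}
        self.is_end: bool = False


def build_trie(candidates: List[List[str]]) -> TrieNode:
    # Index the predefined instruction candidates in a prefix tree.
    root = TrieNode()
    for tokens in candidates:
        node = root
        for tok in tokens:
            node = node.children.setdefault(tok, TrieNode())
        node.is_end = True
    return root


def constrained_decode(
    root: TrieNode,
    score_next: Callable[[List[str], List[str]], Dict[str, float]],
) -> List[str]:
    # Greedily decode one instruction, only emitting tokens allowed by the trie.
    node, prefix = root, []
    while node.children:
        allowed = list(node.children)             # tokens that stay inside some candidate
        scores = score_next(prefix, allowed)      # model scores over the allowed tokens
        best = max(allowed, key=lambda t: scores.get(t, float("-inf")))
        prefix.append(best)
        node = node.children[best]
        if node.is_end and not node.children:
            break
    return prefix


if __name__ == "__main__":
    candidates = [
        ["translate", "this", "text"],
        ["summarize", "this", "text"],
        ["extract", "contact", "info"],
    ]
    trie = build_trie(candidates)

    # Toy scorer preferring tokens that start with "s"; a real system would use MLLM logits.
    def toy_scores(prefix: List[str], allowed: List[str]) -> Dict[str, float]:
        return {tok: (1.0 if tok.startswith("s") else 0.0) for tok in allowed}

    print(constrained_decode(trie, toy_scores))   # ['summarize', 'this', 'text']

Because the search never leaves the trie, every decoded sequence is one of the predefined candidates, which is the coherence and intent-alignment guarantee the abstract describes.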

2021

融合多层语义特征图的缅甸语图像文本识别方法(Burmese Image Text Recognition Method Fused with Multi-layer Semantic Feature Maps)
Fuhao Liu (刘福浩) | Cunli Mao (毛存礼) | Zhengtao Yu (余正涛) | Chengxiang Gao (高盛祥) | Linqin Wang (王琳钦) | Xuyang Xie (谢旭阳)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Because Burmese has special character-combination structures, image text recognition for it is considerably difficult: directly applying existing image text recognition methods to Burmese images suffers from missing characters and poor recognition under complex backgrounds. This paper therefore proposes a Burmese image text recognition method that fuses multi-layer semantic feature maps: a deep convolutional network extracts multi-layer image features, which are fused to obtain multi-layer semantic information, alleviating the feature loss caused by nested characters in Burmese images. In addition, a MixUp strategy is adopted during training to optimize the network parameters, improving the model's generalization ability and reducing its dependence on the training samples at test time. Experimental results show that the proposed method improves accuracy by 2.2% over the baseline model.
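The MixUp strategy mentioned in the abstract can be sketched briefly. The snippet below is an illustrative assumption, not the paper's training code: the mixup_batch helper, the alpha value, and the toy batch shapes are made up for demonstration, and labels are assumed to be one-hot so they can be mixed linearly.

# Minimal sketch of MixUp on a generic (inputs, one-hot labels) batch with NumPy.
# x' = lam * x_i + (1 - lam) * x_j, y' = lam * y_i + (1 - lam) * y_j, lam ~ Beta(alpha, alpha).

import numpy as np


def mixup_batch(x: np.ndarray, y: np.ndarray, alpha: float = 0.2):
    # Return a convexly mixed batch by pairing each sample with a random partner.
    lam = np.random.beta(alpha, alpha)            # mixing coefficient in (0, 1)
    perm = np.random.permutation(len(x))          # random pairing of samples
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]     # labels must be one-hot / soft targets
    return x_mixed, y_mixed, lam


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = rng.random((8, 32, 128, 1)).astype(np.float32)   # toy batch of text-line images
    labels = np.eye(10)[rng.integers(0, 10, size=8)]          # toy one-hot labels
    xm, ym, lam = mixup_batch(images, labels, alpha=0.2)
    print(xm.shape, ym.shape, round(float(lam), 3))

Training on such mixed samples regularizes the network, which matches the abstract's stated goal of improving generalization and reducing reliance on the training samples at test time.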