Qiang Huang


2025

Don’t Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation
Yingchaojie Feng | Yiqun Sun | Yandong Sun | Minfeng Zhu | Qiang Huang | Anthony Kum Hoe Tung | Wei Chen
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In this work, we investigate an important task named instruction-following text embedding, which generates dynamic text embeddings that adapt to user instructions by highlighting specific attributes of the text. Despite recent advances, existing approaches suffer from significant computational overhead, as they must re-encode the entire corpus for each new instruction. To address this challenge, we propose GSTransform, a novel instruction-following text embedding framework based on Guided Space Transformation. Our key observation is that instruction-relevant information is inherently encoded in generic embeddings but remains underutilized. Instead of repeatedly encoding the corpus for each instruction, GSTransform applies a lightweight transformation that adapts pre-computed embeddings in real time to align with user instructions, guided by a small amount of text annotated with instruction-focused labels. We conduct extensive experiments on three instruction-aware downstream tasks across nine real-world datasets, demonstrating that GSTransform improves instruction-following text embedding quality over state-of-the-art methods while achieving speedups of 6–300× in real-time processing on large-scale datasets. The source code is available at https://github.com/YingchaojieFeng/GSTransform.
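
To make the idea concrete, the following is a minimal, hypothetical sketch of label-guided space transformation: generic embeddings are computed once, a cheap transformation is fitted on a small instruction-labeled sample, and the whole corpus is then transformed without re-encoding. Linear discriminant analysis stands in for whatever transformation GSTransform actually learns; the data, dimensions, and labels below are made up for illustration.

```python
# Hypothetical sketch, NOT the paper's implementation: adapt pre-computed
# generic embeddings to an instruction with a cheap, label-guided transform.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Pre-computed generic embeddings for the whole corpus (encoded once).
corpus_emb = rng.normal(size=(10_000, 128)).astype(np.float32)

# A small sample annotated with instruction-focused labels
# (e.g., the attribute the user's instruction asks to emphasize).
sample_idx = rng.choice(len(corpus_emb), size=500, replace=False)
sample_emb = corpus_emb[sample_idx]
sample_labels = rng.integers(0, 4, size=500)  # 4 instruction-relevant classes

# Fit a lightweight transformation on the labeled sample only
# (LDA used here purely as a stand-in).
transform = LinearDiscriminantAnalysis(n_components=3)
transform.fit(sample_emb, sample_labels)

# Apply it to all pre-computed embeddings in real time --
# no re-encoding of the corpus is needed for a new instruction.
instruction_emb = transform.transform(corpus_emb)
print(instruction_emb.shape)  # (10000, 3)
```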

PRISM: A Framework for Producing Interpretable Political Bias Embeddings with Political-Aware Cross-Encoder
Yiqun Sun | Qiang Huang | Anthony Kum Hoe Tung | Jun Yu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Semantic Text Embedding is a fundamental NLP task that encodes textual content into vector representations, where proximity in the embedding space reflects semantic similarity. While existing embedding models excel at capturing general meaning, they often overlook ideological nuances, limiting their effectiveness in tasks that require an understanding of political bias. To address this gap, we introduce PRISM, the first framework designed to Produce inteRpretable polItical biaS eMbeddings. PRISM operates in two key stages: (1) Controversial Topic Bias Indicator Mining, which systematically extracts fine-grained political topics and corresponding bias indicators from weakly labeled news data, and (2) Cross-Encoder Political Bias Embedding, which assigns structured bias scores to news articles based on their alignment with these indicators. This approach ensures that embeddings are explicitly tied to bias-revealing dimensions, enhancing both interpretability and predictive power. Through extensive experiments on large-scale datasets, we demonstrate that PRISM outperforms state-of-the-art text embedding models in political bias classification while offering highly interpretable representations that facilitate diversified retrieval and ideological analysis. The source code is available at https://anonymous.4open.science/r/PRISM-80B4/.
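
As an illustration of the second stage, the sketch below scores one article against a handful of bias indicators with a cross-encoder, so every dimension of the resulting embedding maps to a human-readable indicator. The indicator texts and the off-the-shelf sentence-transformers model are illustrative assumptions, not the political-aware cross-encoder trained in the paper.

```python
# Hypothetical sketch of indicator-based bias scoring with a cross-encoder.
# The indicators and the generic model below are stand-ins, not PRISM's own.
from sentence_transformers import CrossEncoder

# Assumed output of the first stage: fine-grained topics with bias indicators.
bias_indicators = [
    "Immigration: portrays stricter border enforcement favorably",
    "Immigration: portrays stricter border enforcement unfavorably",
    "Healthcare: frames government-run insurance positively",
    "Healthcare: frames government-run insurance negatively",
]

article = "The new border policy was praised as a long-overdue step ..."

# Second stage: one score per indicator -> interpretable bias embedding.
scorer = CrossEncoder("cross-encoder/stsb-roberta-base")
pairs = [(article, indicator) for indicator in bias_indicators]
bias_embedding = scorer.predict(pairs)  # one score per indicator

for indicator, score in zip(bias_indicators, bias_embedding):
    print(f"{score:.3f}  {indicator}")
```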

2020

A Joint Model for Aspect-Category Sentiment Analysis with Shared Sentiment Prediction Layer
Yuncong Li | Zhe Yang | Cunxiang Yin | Xu Pan | Lunan Cui | Qiang Huang | Ting Wei
Proceedings of the 19th Chinese National Conference on Computational Linguistics

Aspect-category sentiment analysis (ACSA) aims to predict the aspect categories mentioned in a text and their corresponding sentiment polarities. Several joint models have been proposed for this task: given a text, they detect all the aspect categories it mentions and predict the sentiment polarities toward them at once. Although these joint models achieve promising performance, they train separate parameters for each aspect category and therefore suffer when data for some categories is scarce. To solve this problem, we propose a novel joint model with a shared sentiment prediction layer, which transfers sentiment knowledge between aspect categories and alleviates the problem caused by data deficiency. Experiments on the SemEval-2016 datasets demonstrate the effectiveness of our model.
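
A minimal PyTorch sketch of the core idea is given below: each aspect category gets its own attention query and detector, but a single sentiment prediction layer is shared by all categories so that sentiment knowledge transfers between them. The dimensions and pooling scheme are assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a joint ACSA model with a shared sentiment layer.
import torch
import torch.nn as nn

class JointACSA(nn.Module):
    def __init__(self, hidden_dim: int, n_categories: int, n_polarities: int = 3):
        super().__init__()
        # Category-specific attention queries and a category detector.
        self.category_queries = nn.Parameter(torch.randn(n_categories, hidden_dim))
        self.category_detector = nn.Linear(hidden_dim, 1)
        # Shared across ALL aspect categories: the sentiment prediction layer.
        self.shared_sentiment = nn.Linear(hidden_dim, n_polarities)

    def forward(self, token_states: torch.Tensor):
        # token_states: (batch, seq_len, hidden_dim) from any text encoder.
        attn = torch.einsum("bsh,ch->bcs", token_states, self.category_queries)
        attn = attn.softmax(dim=-1)
        cat_repr = torch.einsum("bcs,bsh->bch", attn, token_states)

        category_logits = self.category_detector(cat_repr).squeeze(-1)  # (b, c)
        sentiment_logits = self.shared_sentiment(cat_repr)              # (b, c, p)
        return category_logits, sentiment_logits

model = JointACSA(hidden_dim=256, n_categories=12)
dummy = torch.randn(2, 40, 256)
cat_logits, sent_logits = model(dummy)
print(cat_logits.shape, sent_logits.shape)  # (2, 12) and (2, 12, 3)
```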

2015

Chinese Spelling Check System Based on N-gram Model
Weijian Xie | Peijie Huang | Xinrui Zhang | Kaiduo Hong | Qiang Huang | Bingzhou Chen | Lei Huang
Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing

2014

Ch2R: A Chinese Chatter Robot for Online Shopping Guide
Peijie Huang | Xianmao Lin | Zeqi Lian | De Yang | Xiaoling Tang | Li Huang | Qiang Huang | Xiupeng Wu | Guisheng Wu | Xinrui Zhang
Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing

Chinese Spelling Check System Based on Tri-gram Model
Qiang Huang | Peijie Huang | Xinrui Zhang | Weijian Xie | Kaiduo Hong | Bingzhou Chen | Lei Huang
Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing

2004

Automatic Call Routing with Multiple Language Models
Qiang Huang | Stephen Cox
Proceedings of the HLT-NAACL 2004 Workshop on Spoken Language Understanding for Conversational Systems and Higher Level Linguistic Information for Speech Processing