Jing Su


2024

An Expert is Worth One Token: Synergizing Multiple Expert LLMs as Generalist via Expert Token Routing
Ziwei Chai | Guoyin Wang | Jing Su | Tianjie Zhang | Xuanwen Huang | Xuwu Wang | Jingjing Xu | Jianbo Yuan | Hongxia Yang | Fei Wu | Yang Yang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We present Expert-Token-Routing, a unified generalist framework that enables the seamless integration of multiple expert LLMs. Our framework represents each expert LLM as a special expert token within the vocabulary of a meta LLM. The meta LLM routes a query to an expert LLM simply by generating that expert's token, just as it generates ordinary tokens. Expert-Token-Routing not only supports learning the implicit expertise of expert LLMs from existing instruction datasets, but also allows new expert LLMs to be added dynamically in a plug-and-play manner. It also conceals the collaboration process from the user, who interacts with the system as though it were a single LLM. Our framework outperforms various existing multi-LLM collaboration paradigms on benchmarks spanning six diverse expert domains, demonstrating its effectiveness and robustness in building a generalist LLM system by synergizing multiple expert LLMs.
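The routing mechanism lends itself to a compact illustration. Below is a minimal, hypothetical sketch of the core idea, not the authors' implementation: all names (meta_llm_next_token, the expert tokens, the expert callables) are invented placeholders. The meta LLM's vocabulary is extended with one special token per expert, and emitting that token at decode time delegates the query to the corresponding expert LLM.

```python
from typing import Callable, Dict

# Hypothetical expert LLMs, each wrapped as a plain callable.
experts: Dict[str, Callable[[str], str]] = {
    "<EXPERT_MED>": lambda q: f"[medical expert answer to: {q}]",
    "<EXPERT_LAW>": lambda q: f"[legal expert answer to: {q}]",
}

def meta_llm_next_token(query: str) -> str:
    """Stand-in for the meta LLM's decoding step: it emits either an
    ordinary answer or one of the special expert tokens."""
    if "symptom" in query:
        return "<EXPERT_MED>"
    if "contract" in query:
        return "<EXPERT_LAW>"
    return "[meta LLM answers directly]"

def route(query: str) -> str:
    token = meta_llm_next_token(query)
    # Routing is just token generation: an expert token hands the
    # whole query to that expert; any other output is a normal answer.
    if token in experts:
        return experts[token](query)
    return token

print(route("What does this symptom indicate?"))
print(route("Is this contract clause enforceable?"))
```

Because routing is just token generation, adding a new expert only requires registering a new token and its model, which is what makes plug-and-play extension possible.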

2023

Supervised Gradual Machine Learning for Aspect-Term Sentiment Analysis
Yanyan Wang | Qun Chen | Murtadha H.M. Ahmed | Zhaoqiang Chen | Jing Su | Wei Pan | Zhanhuai Li
Transactions of the Association for Computational Linguistics, Volume 11

Recent work has shown that Aspect-Term Sentiment Analysis (ATSA) can be effectively performed by Gradual Machine Learning (GML). However, the performance of the current unsupervised solution is limited by inaccurate and insufficient knowledge conveyance. In this paper, we propose a supervised GML approach for ATSA that effectively exploits labeled training data to improve knowledge conveyance. It leverages binary polarity relations between instances, which can be either similar or opposite, to enable supervised knowledge conveyance. Besides the explicit polarity relations indicated by discourse structures, it also trains a polarity classification DNN and a binary Siamese network separately to extract implicit polarity relations. The proposed approach fulfills knowledge conveyance by modeling the detected relations as binary features in a factor graph. Our extensive experiments on real benchmark data show that it achieves state-of-the-art performance across all test workloads. Our work demonstrates clearly that, in collaboration with DNNs for feature extraction, GML outperforms pure DNN solutions.
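As a rough illustration of the knowledge-conveyance idea, the toy sketch below shows how similar/opposite relations let labeled instances gradually determine the polarity of unlabeled ones. It is hypothetical: the paper performs inference over a factor graph with these relations as binary features, not this simple propagation loop, and the instance names and relations are invented.

```python
# Seed polarities from labeled training data (+1 positive, -1 negative).
labels = {"s1": +1, "s2": -1}
# Detected binary polarity relations between instances:
# "similar" = same polarity, "opposite" = flipped polarity.
relations = [
    ("s1", "s3", "similar"),
    ("s3", "s4", "opposite"),
    ("s2", "s5", "similar"),
]

changed = True
while changed:                      # gradually label instances that are
    changed = False                 # directly connected to labeled ones
    for a, b, rel in relations:
        sign = 1 if rel == "similar" else -1
        if a in labels and b not in labels:
            labels[b] = sign * labels[a]
            changed = True
        elif b in labels and a not in labels:
            labels[a] = sign * labels[b]
            changed = True

print(labels)  # {'s1': 1, 's2': -1, 's3': 1, 's4': -1, 's5': -1}
```

The factor-graph formulation generalizes this intuition: each relation becomes a binary feature whose weight reflects how reliably it conveys polarity, so noisy relations are down-weighted rather than trusted unconditionally.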

2018

Generating Description for Sequential Images with Local-Object Attention Conditioned on Global Semantic Context
Jing Su | Chenghua Lin | Mian Zhou | Qingyun Dai | Haoyu Lv
Proceedings of the Workshop on Intelligent Interactive Systems and Language Generation (2IS&NLG)

2016

Topic Stability over Noisy Sources
Jing Su | Derek Greene | Oisín Boydell
Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)

Topic modelling techniques such as LDA have recently been applied to speech transcripts and OCR output. These corpora may contain noisy or erroneous text that can undermine topic stability. It is therefore important to know how well a topic modelling algorithm will perform when applied to noisy data. In this paper we show that different types of textual noise can have diverse effects on the stability of topic models. Moreover, topic model stability is not consistent across different levels of the same type of noise. We introduce a dictionary filtering approach to address this challenge, with the result that a topic model with the correct number of topics is always identified across different noise levels.
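A minimal sketch of the two ingredients the abstract mentions: dictionary filtering of noisy tokens, and a simple topic-stability measure. The word list, topic descriptors, and the average-Jaccard metric here are illustrative assumptions, not the paper's exact setup; in practice, topics from different runs would also need to be matched before comparison.

```python
# Toy reference dictionary; real use would load a full word list.
VALID_WORDS = {"budget", "election", "health", "policy", "vote"}

def dictionary_filter(tokens):
    """Drop OCR/ASR artifacts: keep only tokens found in the dictionary."""
    return [t for t in tokens if t in VALID_WORDS]

def avg_jaccard(topics_a, topics_b):
    """Agreement between two topic models, each given as a list of
    top-term sets; higher means the topics are more stable."""
    scores = [len(a & b) / len(a | b) for a, b in zip(topics_a, topics_b)]
    return sum(scores) / len(scores)

clean = dictionary_filter(["budg3t", "budget", "vote", "v0te", "health"])
print(clean)  # ['budget', 'vote', 'health']

run1 = [{"budget", "policy"}, {"election", "vote"}]
run2 = [{"budget", "health"}, {"election", "vote"}]
print(avg_jaccard(run1, run2))  # 0.666...
```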

2010

Assessing the effectiveness of conversational features for dialogue segmentation in medical team meetings and in the AMI corpus
Saturnino Luz | Jing Su
Proceedings of the SIGDIAL 2010 Conference