Jing Su


2023

Supervised Gradual Machine Learning for Aspect-Term Sentiment Analysis
Yanyan Wang | Qun Chen | Murtadha H.M. Ahmed | Zhaoqiang Chen | Jing Su | Wei Pan | Zhanhuai Li
Transactions of the Association for Computational Linguistics, Volume 11

Recent work has shown that Aspect-Term Sentiment Analysis (ATSA) can be effectively performed by Gradual Machine Learning (GML). However, the performance of the current unsupervised solution is limited by inaccurate and insufficient knowledge conveyance. In this paper, we propose a supervised GML approach for ATSA that effectively exploits labeled training data to improve knowledge conveyance. It leverages binary polarity relations between instances, which can be either similar or opposite, to enable supervised knowledge conveyance. Besides the explicit polarity relations indicated by discourse structures, it also separately supervises a polarity classification DNN and a binary Siamese network to extract implicit polarity relations. The proposed approach fulfills knowledge conveyance by modeling detected relations as binary features in a factor graph. Our extensive experiments on real benchmark data show that it achieves state-of-the-art performance across all the test workloads. Our work clearly demonstrates that, when combined with DNNs for feature extraction, GML outperforms pure DNN solutions.

2018

Generating Description for Sequential Images with Local-Object Attention Conditioned on Global Semantic Context
Jing Su | Chenghua Lin | Mian Zhou | Qingyun Dai | Haoyu Lv
Proceedings of the Workshop on Intelligent Interactive Systems and Language Generation (2IS&NLG)

2016

Topic Stability over Noisy Sources
Jing Su | Derek Greene | Oisín Boydell
Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)

Topic modelling techniques such as LDA have recently been applied to speech transcripts and OCR output. These corpora may contain noisy or erroneous texts which may undermine topic stability. Therefore, it is important to know how well a topic modelling algorithm will perform when applied to noisy data. In this paper we show that different types of textual noise can have diverse effects on the stability of topic models. Moreover, topic model stability is not consistent across different levels of the same type of noise. We introduce a dictionary filtering approach to address this challenge, with the result that a topic model with the correct number of topics is always identified across different levels of noise.

2010

Assessing the effectiveness of conversational features for dialogue segmentation in medical team meetings and in the AMI corpus
Saturnino Luz | Jing Su
Proceedings of the SIGDIAL 2010 Conference