Chang Shu


2023

Improving Visual-Semantic Embedding with Adaptive Pooling and Optimization Objective
Zijian Zhang | Chang Shu | Ya Xiao | Yuan Shen | Di Zhu | Youxin Chen | Jing Xiao | Jey Han Lau | Qian Zhang | Zheng Lu
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Visual-Semantic Embedding (VSE) aims to learn an embedding space where related visual and semantic instances are close to each other. Recent VSE models tend to design complex structures to pool visual and semantic features into fixed-length vectors and use hard triplet loss for optimization. However, we find that: (1) combining simple pooling methods is no worse than these sophisticated methods; and (2) only considering the most difficult-to-distinguish negative sample leads to slow convergence and poor Recall@K improvement. To this end, we propose an adaptive pooling strategy that allows the model to learn how to aggregate features through a combination of simple pooling methods. We also introduce a strategy to dynamically select a group of negative samples to make the optimization converge faster and perform better. Experimental results on Flickr30K and MS-COCO demonstrate that a standard VSE using our pooling and optimization strategies outperforms current state-of-the-art systems (by at least 1.0% on recall metrics) in image-to-text and text-to-image retrieval. Source code of our experiments is available at https://github.com/96-Zachary/vse_2ad.
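The two ideas in the abstract, learning how to mix simple pooling operators and penalizing a group of hard negatives rather than only the single hardest one, can be sketched in a few lines of PyTorch. The module below (AdaptivePool, with softmax-normalized weights over mean/max/k-max pooling) and the group_hard_triplet_loss function are illustrative assumptions about the general approach, not the authors' exact implementation; the linked repository contains the real code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptivePool(nn.Module):
    """Sketch: aggregate a set of feature vectors (image regions or tokens)
    by a learned convex combination of simple pooling operators."""

    def __init__(self, k: int = 5):
        super().__init__()
        self.k = k
        # one learnable logit per pooling operator (mean, max, k-max)
        self.logits = nn.Parameter(torch.zeros(3))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_items, dim)
        mean_pool = feats.mean(dim=1)
        max_pool = feats.max(dim=1).values
        k = min(self.k, feats.size(1))
        kmax_pool = feats.topk(k, dim=1).values.mean(dim=1)
        w = F.softmax(self.logits, dim=0)   # learned mixing weights
        return w[0] * mean_pool + w[1] * max_pool + w[2] * kmax_pool


def group_hard_triplet_loss(sim: torch.Tensor, margin: float = 0.2, m: int = 5) -> torch.Tensor:
    """Hinge triplet loss averaged over the m hardest negatives per anchor,
    a stand-in for dynamically selecting a group of negatives instead of
    only the single hardest one."""
    n = sim.size(0)
    pos = sim.diag().unsqueeze(1)                      # (n, 1) positive-pair scores
    mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(mask, float("-inf"))         # exclude the positive pair
    hardest = neg.topk(min(m, n - 1), dim=1).values    # m hardest negatives per row
    return F.relu(margin + hardest - pos).mean()
```

In use, both modalities would be pooled with their own AdaptivePool instance and the loss applied to the resulting batch similarity matrix; averaging over a group of hard negatives gives denser gradients than the single-hardest-negative variant, which is the convergence argument the abstract makes.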

2021

Logic-Consistency Text Generation from Semantic Parses
Chang Shu | Yusen Zhang | Xiangyu Dong | Peng Shi | Tao Yu | Rui Zhang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

How Furiously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context
Jey Han Lau | Carlos Armendariz | Shalom Lappin | Matthew Purver | Chang Shu
Transactions of the Association for Computational Linguistics, Volume 8

We study the influence of context on sentence acceptability. First we compare the acceptability ratings of sentences judged in isolation, with a relevant context, and with an irrelevant context. Our results show that context induces a cognitive load for humans, which compresses the distribution of ratings. Moreover, in relevant contexts we observe a discourse coherence effect that uniformly raises acceptability. Next, we test unidirectional and bidirectional language models in their ability to predict acceptability ratings. The bidirectional models show very promising results, with the best model achieving a new state-of-the-art for unsupervised acceptability prediction. The two sets of experiments provide insights into the cognitive aspects of sentence processing and central issues in the computational modeling of text and discourse.
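A common way to use a bidirectional (masked) language model for unsupervised acceptability prediction is pseudo-log-likelihood scoring: mask each token in turn and sum the log-probabilities the model assigns to the original tokens. The sketch below uses Hugging Face transformers with bert-base-cased as an assumed model choice; it illustrates the scoring family, not necessarily the paper's exact setup or normalization.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Assumed model choice for illustration only.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased").eval()

@torch.no_grad()
def pseudo_log_likelihood(sentence: str) -> float:
    """Mask each token in turn and sum the log-probability the masked LM
    assigns to the original token; higher scores suggest higher acceptability."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # skip the special tokens at positions 0 and -1 ([CLS] and [SEP] for BERT)
    for i in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[input_ids[i]].item()
    return total
```

Acceptability studies typically also consider length- or frequency-normalized variants of such scores, since raw log-likelihoods favour shorter sentences with frequent words.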

2019

Early Rumour Detection
Kaimin Zhou | Chang Shu | Binyang Li | Jey Han Lau
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Rumours can spread quickly through social media, and malicious ones can bring about significant economic and social impact. Motivated by this, our paper focuses on the task of rumour detection; particularly, we are interested in understanding how early we can detect them. Although there are numerous studies on rumour detection, few are concerned with the timing of the detection. A successfully-detected malicious rumour can still cause significant damage if it isn’t detected in a timely manner, and so timing is crucial. To address this, we present a novel methodology for early rumour detection. Our model treats social media posts (e.g. tweets) as a data stream and integrates reinforcement learning to learn the minimum number of posts required before we classify an event as a rumour. Experiments on Twitter and Weibo demonstrate that our model identifies rumours earlier than state-of-the-art systems while maintaining comparable accuracy.
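The core loop can be sketched as follows: a recurrent encoder consumes posts one at a time, and a small policy head decides at each step whether to wait for more posts or to stop and classify. The module names, dimensions, and greedy decision rule below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class EarlyDetector(nn.Module):
    """Sketch: stream post embeddings through a GRU; at each step a policy
    head chooses WAIT or STOP, and a classifier head predicts rumour / not."""

    def __init__(self, post_dim: int = 300, hidden: int = 128):
        super().__init__()
        self.gru = nn.GRUCell(post_dim, hidden)
        self.stop_head = nn.Linear(hidden, 2)   # action logits: [WAIT, STOP]
        self.cls_head = nn.Linear(hidden, 2)    # rumour vs. non-rumour

    def forward(self, posts: torch.Tensor):
        # posts: (num_posts, post_dim), one event as a stream of post embeddings
        h = posts.new_zeros(1, self.gru.hidden_size)
        for t in range(posts.size(0)):
            h = self.gru(posts[t].unsqueeze(0), h)
            action = self.stop_head(h).argmax(dim=-1)  # greedy choice at inference
            if action.item() == 1 or t == posts.size(0) - 1:
                # STOP (or stream exhausted): classify with what has been read
                return self.cls_head(h), t + 1  # class logits, number of posts used
```

During training, a policy-gradient method with a reward that trades off classification correctness against a per-post delay penalty would push the stopping policy toward reading as few posts as accuracy allows, which is the early-detection objective described in the abstract.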