Liangliang Cao


2020

pdf
A Large Scale Speech Sentiment Corpus
Eric Chen | Zhiyun Lu | Hao Xu | Liangliang Cao | Yu Zhang | James Fan
Proceedings of the Twelfth Language Resources and Evaluation Conference

We present a multimodal corpus for sentiment analysis based on the existing Switchboard-1 Telephone Speech Corpus released by the Linguistic Data Consortium. This corpus extends the Switchboard-1 Telephone Speech Corpus by adding sentiment labels from three different human annotators for every transcript segment. Each sentiment label is one of three options: positive, negative, or neutral. Annotators were recruited using Google Cloud’s data labeling service, and the labeling task was conducted over the internet. The corpus contains a total of 49,500 labeled speech segments covering 140 hours of audio. To the best of our knowledge, this is the largest multimodal corpus for sentiment analysis that includes both speech and text features.

pdf
Zero-shot Entity Linking with Efficient Long Range Sequence Modeling
Zonghai Yao | Liangliang Cao | Huapu Pan
Findings of the Association for Computational Linguistics: EMNLP 2020

This paper considers the problem of zero-shot entity linking, in which entities encountered at test time may not be present in training. Following the prevailing BERT-based research efforts, we find that a simple yet effective approach is to expand long-range sequence modeling. Unlike many previous methods, our method does not require expensive pre-training of BERT with long position embeddings. Instead, we propose an efficient position-embedding initialization method called Embedding-repeat, which initializes larger position embeddings based on BERT-Base. On the zero-shot entity linking dataset, our method improves the state of the art from 76.06% to 79.08%, and on its long-document subset the corresponding improvement is from 74.57% to 82.14%. Our experiments suggest the effectiveness of long-range sequence modeling without retraining the BERT model.
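
A minimal sketch of the Embedding-repeat idea as described in the abstract (not the authors' released code; the 2048-position target, the function name, and the exact end-to-end tiling scheme are assumptions), seeding a longer position-embedding table by repeating BERT-Base's 512 learned position embeddings:

    import torch
    from transformers import BertConfig, BertModel

    def embedding_repeat(base_model: BertModel, new_max_len: int) -> torch.Tensor:
        # Tile BERT-Base's learned (512, hidden) position-embedding table
        # end to end until it covers new_max_len positions. Hypothetical
        # helper illustrating the initialization; details are assumptions.
        base_emb = base_model.embeddings.position_embeddings.weight.data
        base_len, _ = base_emb.shape
        repeats = -(-new_max_len // base_len)  # ceiling division
        return base_emb.repeat(repeats, 1)[:new_max_len]

    # Usage: build a BERT with 2048 positions, seeded from BERT-Base,
    # instead of pre-training long position embeddings from scratch.
    base = BertModel.from_pretrained("bert-base-uncased")
    long_cfg = BertConfig.from_pretrained("bert-base-uncased",
                                          max_position_embeddings=2048)
    long_model = BertModel(long_cfg)
    with torch.no_grad():
        long_model.embeddings.position_embeddings.weight.copy_(
            embedding_repeat(base, 2048))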