Haiyun Peng
2021
Cross-lingual Aspect-based Sentiment Analysis with Aspect Term Code-Switching
Wenxuan Zhang | Ruidan He | Haiyun Peng | Lidong Bing | Wai Lam
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Many efforts have been made to solve the aspect-based sentiment analysis (ABSA) task. While most existing studies focus on English texts, handling ABSA in resource-poor languages remains a challenging problem. In this paper, we consider unsupervised cross-lingual transfer for the ABSA task, where labeled data is available only in the source language and we aim to transfer its knowledge to a target language that has no labeled data. To this end, we propose an alignment-free label projection method that uses a translation system to obtain high-quality pseudo-labeled data in the target language, preserving more accurate task-specific knowledge. To better utilize the source and translated data, and to enhance cross-lingual alignment, we design an aspect code-switching mechanism that augments the training data with code-switched bilingual sentences. To further investigate the importance of language-specific knowledge in solving the ABSA problem, we distill the above model on unlabeled target-language data, which improves performance to the same level as the supervised method.
2019
Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications
Wei Zhao | Haiyun Peng | Steffen Eger | Erik Cambria | Min Yang
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Obstacles hindering the development of capsule networks for challenging NLP applications include poor scalability to large output spaces and less reliable routing processes. In this paper, we introduce: (i) an agreement score to evaluate the performance of routing processes at the instance level; (ii) an adaptive optimizer to enhance the reliability of routing; and (iii) capsule compression and partial routing to improve the scalability of capsule networks. We validate our approach on two NLP tasks, namely multi-label text classification and question answering. Experimental results show that our approach considerably improves over strong competitors on both tasks. In addition, we achieve the best results in low-resource settings with few training instances.