Sijie Cheng
2022
Can Pre-trained Language Models Interpret Similes as Smart as Human?
Qianyu He, Sijie Cheng, Zhixu Li, Rui Xie, Yanghua Xiao
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Simile interpretation is a crucial task in natural language processing. Pre-trained language models (PLMs) have achieved state-of-the-art performance on many tasks, but it remains under-explored whether PLMs can interpret similes. In this paper, we investigate the ability of PLMs to interpret similes by designing a novel task named Simile Property Probing, i.e., letting the PLMs infer the shared properties of similes. We construct our simile property probing datasets from both general textual corpora and human-designed questions, containing 1,633 examples covering seven main categories. Our empirical study based on the constructed datasets shows that PLMs can infer similes' shared properties while still underperforming humans. To bridge the gap with human performance, we additionally design a knowledge-enhanced training objective that incorporates simile knowledge into PLMs via knowledge embedding methods. Our method yields a gain of 8.58% on the probing task and 1.37% on the downstream task of sentiment classification. The datasets and code are publicly available at https://github.com/Abbey4799/PLMs-Interpret-Simile.
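Concretely, the probing task asks a masked language model to fill in the property shared by the topic and vehicle of a simile. Below is a minimal sketch of such a probe with an off-the-shelf BERT model; the example sentence, candidate properties, and scoring scheme are illustrative assumptions, while the full datasets and the knowledge-enhanced objective are in the linked repository.

```python
# Minimal sketch: score candidate shared properties of a simile with a
# masked language model. Sentence, candidates, and model are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Probe: which property is shared by the topic and the vehicle of the simile?
sentence = f"The room is as {tokenizer.mask_token} as an icebox."
candidates = ["cold", "bright", "quiet", "large"]  # hypothetical options

inputs = tokenizer(sentence, return_tensors="pt")
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Score each candidate property by its masked-token logit.
scores = {c: logits[tokenizer.convert_tokens_to_ids(c)].item() for c in candidates}
print(max(scores, key=scores.get))  # the property the PLM prefers
```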
A Self-supervised Joint Training Framework for Document Reranking
Xiaozhi Zhu, Tianyong Hao, Sijie Cheng, Fu Lee Wang, Hai Liu
Findings of the Association for Computational Linguistics: NAACL 2022
Pre-trained language models such as BERT have been successfully applied to a wide range of natural language processing tasks and have also achieved impressive performance on document reranking tasks. Recent work indicates that further pre-training the language models on task-specific datasets before fine-tuning helps improve reranking performance. However, pre-training tasks such as masked language modeling and next sentence prediction are based on the context of documents rather than encouraging the model to understand the content of queries in the document reranking task. In this paper, we propose a new self-supervised joint training framework (SJTF) with a self-supervised method called Masked Query Prediction (MQP) to establish semantic relations between given queries and positive documents. The framework randomly masks a token of the query, encodes the masked query paired with a positive document, and uses a linear layer as a decoder to predict the masked token. In addition, MQP is used to jointly optimize the model with the supervised ranking objective during the fine-tuning stage, without an extra further pre-training stage. Extensive experiments on the MS MARCO passage ranking and TREC Robust datasets show that models trained with our framework obtain significant improvements over the original models.
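The following is a rough sketch of the joint objective described above, assuming a BERT-style cross-encoder: a pairwise ranking loss over positive and negative documents plus an MQP loss that predicts a masked query token from the (masked query, positive document) pair. The model choice, single-token masking, and equal loss weighting are assumptions for illustration, not the authors' exact implementation.

```python
# Sketch of a joint ranking + Masked Query Prediction (MQP) loss.
# Model, masking of one query token, and 1:1 loss weighting are assumptions.
import random
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
rank_head = nn.Linear(encoder.config.hidden_size, 1)                    # relevance score
mqp_head = nn.Linear(encoder.config.hidden_size, tokenizer.vocab_size)  # token decoder

def joint_loss(query: str, pos_doc: str, neg_doc: str) -> torch.Tensor:
    # Mask one random query token for the MQP objective.
    q_tokens = tokenizer.tokenize(query)
    mask_idx = random.randrange(len(q_tokens))
    target_id = tokenizer.convert_tokens_to_ids(q_tokens[mask_idx])
    q_tokens[mask_idx] = tokenizer.mask_token
    masked_query = tokenizer.convert_tokens_to_string(q_tokens)

    # Ranking loss: the positive document should outscore the negative one.
    pos = tokenizer(query, pos_doc, return_tensors="pt", truncation=True)
    neg = tokenizer(query, neg_doc, return_tensors="pt", truncation=True)
    pos_score = rank_head(encoder(**pos).last_hidden_state[:, 0])
    neg_score = rank_head(encoder(**neg).last_hidden_state[:, 0])
    rank_loss = nn.functional.cross_entropy(
        torch.cat([pos_score, neg_score], dim=1), torch.zeros(1, dtype=torch.long)
    )

    # MQP loss: predict the masked query token from (masked query, positive doc).
    mqp_inputs = tokenizer(masked_query, pos_doc, return_tensors="pt", truncation=True)
    hidden = encoder(**mqp_inputs).last_hidden_state[0]
    mask_pos = (mqp_inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    mqp_logits = mqp_head(hidden[mask_pos]).unsqueeze(0)
    mqp_loss = nn.functional.cross_entropy(mqp_logits, torch.tensor([target_id]))

    return rank_loss + mqp_loss  # joint optimization during fine-tuning
```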
2021
On Commonsense Cues in BERT for Solving Commonsense Tasks
Leyang Cui, Sijie Cheng, Yu Wu, Yue Zhang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021