PromptRank: Unsupervised Keyphrase Extraction Using Prompt
Aobo Kong | Shiwan Zhao | Hao Chen | Qicheng Li | Yong Qin | Ruiqi Sun | Xiaoyan Bai
2023
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The keyphrase extraction task refers to the automatic selection of phrases from a given document to summarize its core content. State-of-the-art (SOTA) performance has recently been achieved by embedding-based algorithms, which rank candidates according to how similar their embeddings are to the document embedding. However, such solutions either struggle with the length discrepancy between documents and candidates or fail to fully utilize the pre-trained language model (PLM) without further fine-tuning. To this end, in this paper, we propose a simple yet effective unsupervised approach, PromptRank, based on a PLM with an encoder-decoder architecture. Specifically, PromptRank feeds the document into the encoder and uses the decoder to calculate the probability of generating each candidate with a designed prompt. We extensively evaluate the proposed PromptRank on six widely used benchmarks. PromptRank outperforms the SOTA approach MDERank, improving the F1 score by a relative 34.18%, 24.87%, and 17.57% for 5, 10, and 15 returned results, respectively. This demonstrates the great potential of using prompts for unsupervised keyphrase extraction. We release our code at https://github.com/HLT-NLP/PromptRank.
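The sketch below illustrates how prompt-based candidate scoring with an encoder-decoder PLM can look in practice. The model choice (t5-base), the prompt wording, and the use of the full decoder-sequence loss as the score are illustrative assumptions; PromptRank's actual template, length penalty, and position-based weighting are described in the paper and released code, not reproduced here.

```python
# Minimal sketch: score keyphrase candidates by the likelihood of generating
# them from the document via a prompt, using an encoder-decoder PLM.
# Assumptions: t5-base as the PLM and a hypothetical prompt template.
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

def candidate_score(document: str, candidate: str) -> float:
    """Average log-likelihood of the prompted decoder sequence (higher = better)."""
    # Encoder sees the document wrapped in a simple template.
    enc = tokenizer("Book: " + document, return_tensors="pt",
                    truncation=True, max_length=512)
    # Decoder target: a prompt that ends with the candidate phrase.
    # Note: this scores the whole target sequence; the paper restricts
    # scoring to the candidate tokens and adds length/position adjustments.
    target = tokenizer("This book mainly talks about " + candidate,
                       return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=enc.input_ids,
                     attention_mask=enc.attention_mask,
                     labels=target).loss
    return -loss.item()

doc = "PromptRank is an unsupervised keyphrase extraction method ..."
candidates = ["keyphrase extraction", "prompt", "weather"]
ranked = sorted(candidates, key=lambda c: candidate_score(doc, c), reverse=True)
print(ranked)
```

Candidates whose generation is more probable under the prompt rank higher, which is the core ranking signal the abstract describes; no fine-tuning of the PLM is involved.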