Ge Yu
2023
Structure-Aware Language Model Pretraining Improves Dense Retrieval on Structured Data
Xinze Li | Zhenghao Liu | Chenyan Xiong | Shi Yu | Yu Gu | Zhiyuan Liu | Ge Yu
Findings of the Association for Computational Linguistics: ACL 2023
This paper presents the Structure-Aware Dense Retrieval (SANTA) model, which encodes user queries and structured data in one universal embedding space for retrieving structured data. SANTA proposes two pretraining methods to make language models structure-aware and to learn effective representations for structured data: 1) Structured Data Alignment, which exploits the natural alignment between structured and unstructured data for structure-aware pretraining; it contrastively trains language models to represent multi-modal text data and teaches them to distinguish the matched structured data for unstructured texts. 2) Masked Entity Prediction, which designs an entity-oriented masking strategy and asks language models to fill in the masked entities. Our experiments show that SANTA achieves state-of-the-art results on code search and product search and yields convincing results in the zero-shot setting. SANTA learns tailored representations for multi-modal text data by aligning structured and unstructured data pairs, and it captures structural semantics by masking and predicting entities in the structured data. All code is available at https://github.com/OpenMatch/OpenMatch.
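To make the two objectives concrete, here is a minimal, hypothetical PyTorch sketch of both pretraining losses. It assumes a bi-encoder that pools each input to a single vector and a Hugging Face-style language model that accepts a `labels` argument; the function names, the temperature `tau`, the `entity_spans` input, and the batch layout are illustrative assumptions, not the authors' actual OpenMatch implementation.

```python
import torch
import torch.nn.functional as F

def structured_data_alignment_loss(encoder, unstructured_ids, structured_ids, tau=0.05):
    """Structured Data Alignment (sketch): in-batch contrastive loss.
    Each unstructured text (e.g. a docstring or product description) should
    score highest against its paired structured datum (code snippet or
    product record) among all structured data in the batch."""
    q = F.normalize(encoder(unstructured_ids), dim=-1)  # (B, d) unstructured embeddings
    d = F.normalize(encoder(structured_ids), dim=-1)    # (B, d) structured embeddings
    logits = q @ d.T / tau                              # (B, B) similarities
    labels = torch.arange(q.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)              # diagonal = matched pairs

def masked_entity_prediction_loss(lm, input_ids, entity_spans, mask_id):
    """Masked Entity Prediction (sketch): mask only entity tokens
    (identifiers, attribute values) and train the model to recover them.
    For simplicity this assumes the same spans apply across the batch."""
    masked = input_ids.clone()
    labels = torch.full_like(input_ids, -100)           # -100 = ignored by the loss
    for start, end in entity_spans:
        labels[:, start:end] = input_ids[:, start:end]  # supervise entity tokens only
        masked[:, start:end] = mask_id                  # replace entities with [MASK]
    return lm(input_ids=masked, labels=labels).loss
```

The key design point the abstract describes is that masking is entity-oriented rather than random: by restricting reconstruction to entities, the model is pushed to learn the structural semantics that dense retrieval over structured data depends on.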
2013
Is Twitter A Better Corpus for Measuring Sentiment Similarity?
Shi Feng | Le Zhang | Binyang Li | Daling Wang | Ge Yu | Kam-Fai Wong
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing