A Multi-Perspective Architecture for Semantic Code Search

Rajarshi Haldar, Lingfei Wu, JinJun Xiong, Julia Hockenmaier


Abstract
The ability to match pieces of code to their corresponding natural language descriptions and vice versa is fundamental for natural language search interfaces to software repositories. In this paper, we propose a novel multi-perspective cross-lingual neural framework for code–text matching, inspired in part by a previous model for monolingual text-to-text matching, to capture both global and local similarities. Our experiments on the CoNaLa dataset show that our proposed model yields better performance on this cross-lingual text-to-code matching task than previous approaches that map code and text to a single joint embedding space.
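The core "multi-perspective" matching idea the abstract refers to (adapted from the earlier monolingual text-to-text matching model the authors cite as inspiration) can be sketched as below. This is a minimal illustration only: the function name, tensor shapes, and the use of PyTorch are assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def multi_perspective_match(v1: torch.Tensor, v2: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """Compare two encoder states under several learned 'perspectives'.

    v1, v2: (hidden,) vectors, e.g. a code-token state and a text-token state.
    W:      (num_perspectives, hidden) learned weights; each row re-weights
            the hidden dimensions before a cosine similarity is computed,
            yielding one similarity score per perspective.
    """
    return F.cosine_similarity(W * v1, W * v2, dim=-1)  # -> (num_perspectives,)

# Toy usage: 4 perspectives over a 8-dimensional hidden space.
W = torch.randn(4, 8)
scores = multi_perspective_match(torch.randn(8), torch.randn(8), W)
```

In this style of matching, the per-perspective scores provide the "local" similarity signals, which a model can aggregate alongside a single global comparison of the pooled code and text representations.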
Anthology ID:
2020.acl-main.758
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
8563–8568
URL:
https://aclanthology.org/2020.acl-main.758
DOI:
10.18653/v1/2020.acl-main.758
Cite (ACL):
Rajarshi Haldar, Lingfei Wu, JinJun Xiong, and Julia Hockenmaier. 2020. A Multi-Perspective Architecture for Semantic Code Search. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8563–8568, Online. Association for Computational Linguistics.
Cite (Informal):
A Multi-Perspective Architecture for Semantic Code Search (Haldar et al., ACL 2020)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2020.acl-main.758.pdf
Video:
http://slideslive.com/38929341
Data
CoNaLa