Low-resource Entity Set Expansion: A Comprehensive Study on User-generated Text

Yutong Shao, Nikita Bhutani, Sajjadur Rahman, Estevam Hruschka


Abstract
Entity set expansion (ESE) aims at obtaining a more complete set of entities of a concept given a textual corpus and a seed set of entities. Although it is a critical task in many NLP applications, existing benchmarks are limited to well-formed text (e.g., Wikipedia) and well-defined concepts (e.g., countries and diseases). Furthermore, only a small number of predictions are evaluated compared to the actual size of an entity set. A rigorous assessment of ESE methods warrants more comprehensive benchmarks and evaluation. In this paper, we consider user-generated text to understand the generalizability of ESE methods. We develop new benchmarks and propose more rigorous evaluation metrics for assessing the performance of ESE methods. Additionally, we identify phenomena such as non-named entities, multifaceted entities, and vague concepts that are more prevalent in user-generated text than in well-formed text, and use them to profile ESE methods. We observe that the strong performance of state-of-the-art ESE methods does not generalize well to user-generated text. We conduct a comprehensive empirical analysis and draw insights from the findings.
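To make the task definition concrete, the sketch below shows one common embedding-similarity baseline for ESE, not the methods benchmarked in this paper: candidate entities are ranked by cosine similarity to the centroid of the seed entities' embeddings. The entity names, toy embedding vectors, and the `expand` helper are hypothetical and chosen only to keep the example self-contained and runnable.

```python
import numpy as np

def expand(seeds, candidates, embeddings, top_k=3):
    """Rank candidate entities by cosine similarity to the seed-set centroid."""
    centroid = np.mean([embeddings[s] for s in seeds], axis=0)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Score every non-seed candidate and return the top-k expansions.
    scored = [(c, cosine(embeddings[c], centroid))
              for c in candidates if c not in seeds]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

# Toy, made-up embeddings purely for illustration.
emb = {
    "france":  np.array([0.90, 0.10, 0.00]),
    "germany": np.array([0.85, 0.15, 0.05]),
    "spain":   np.array([0.88, 0.12, 0.02]),
    "banana":  np.array([0.05, 0.90, 0.30]),
}

# Expanding the concept "countries" from two seeds should rank "spain" first.
print(expand(["france", "germany"], list(emb), emb))
```

In practice, the entity embeddings would be derived from the corpus (e.g., contextual representations of entity mentions), and the ranking step is where state-of-the-art methods differ; the benchmarks in this paper evaluate how well such rankings hold up on user-generated text.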
Anthology ID:
2022.findings-naacl.100
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1343–1353
URL:
https://aclanthology.org/2022.findings-naacl.100
DOI:
10.18653/v1/2022.findings-naacl.100
Cite (ACL):
Yutong Shao, Nikita Bhutani, Sajjadur Rahman, and Estevam Hruschka. 2022. Low-resource Entity Set Expansion: A Comprehensive Study on User-generated Text. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1343–1353, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Low-resource Entity Set Expansion: A Comprehensive Study on User-generated Text (Shao et al., Findings 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.findings-naacl.100.pdf
Video:
https://preview.aclanthology.org/ingestion-script-update/2022.findings-naacl.100.mp4
Code:
megagonlabs/esebench