LT Expertfinder: An Evaluation Framework for Expert Finding Methods

Tim Fischer, Steffen Remus, Chris Biemann


Abstract
Expert finding is the task of ranking persons for a predefined topic or search query. Finding experts for a specified area is an important task that has attracted much attention in the information retrieval community. Most approaches to this task are evaluated in a supervised fashion, which depends on predefined topics of interest as well as gold-standard expert rankings. Well-known representatives of such datasets are the enriched versions of DBLP provided by the ArnetMiner project and the W3C Corpus of TREC. However, manually ranking experts is highly subjective, and fine-grained rankings are hardly distinguishable. Good or bad performance on these datasets therefore does not necessarily reflect the quality of the system. Particularly for dynamic systems, where topics are not predefined but formulated as a search query, we believe a more informative approach is to perform user studies that directly compare different methods in the same view. To accomplish this in a user-friendly way, we present the LT Expert Finder web application, which is equipped with various query-based expert finding methods that can easily be extended, a detailed expert profile view, detailed evidence in the form of relevant documents and statistics, and an evaluation component that allows the qualitative comparison of different rankings.
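To make the phrase "query-based expert finding method" concrete, the following is a minimal, hypothetical sketch of a document-centric scoring scheme that aggregates per-document relevance over each candidate's authored documents. It is not the implementation shipped with LT Expert Finder; the toy corpus, the function names, and the simple term-count relevance score are assumptions made purely for illustration.

```python
# Illustrative sketch of a document-centric expert-finding method:
# score each author by summing the relevance of their documents to the query.
# The corpus and scoring function below are hypothetical toy examples.

from collections import Counter, defaultdict

# Toy corpus: each document has a text and a list of author names (assumed data).
CORPUS = [
    {"text": "neural networks for word embeddings", "authors": ["A. Smith", "B. Jones"]},
    {"text": "expert finding with language models", "authors": ["B. Jones"]},
    {"text": "graph based ranking of authors", "authors": ["C. Lee"]},
]

def doc_relevance(query: str, text: str) -> float:
    """Very simple relevance score: number of query-term occurrences in the document."""
    terms = Counter(text.lower().split())
    return float(sum(terms[t] for t in query.lower().split()))

def rank_experts(query: str):
    """Aggregate document relevance per author and return authors ranked by score."""
    scores = defaultdict(float)
    for doc in CORPUS:
        rel = doc_relevance(query, doc["text"])
        if rel > 0:
            for author in doc["authors"]:
                scores[author] += rel
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    print(rank_experts("expert finding"))  # e.g. [('B. Jones', 2.0)]
```

In a real system of this kind, the term-count score would be replaced by a retrieval model (e.g., scores returned by a search engine), but the author-level aggregation step stays the same.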
Anthology ID:
N19-4017
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Waleed Ammar, Annie Louis, Nasrin Mostafazadeh
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
98–104
URL:
https://aclanthology.org/N19-4017
DOI:
10.18653/v1/N19-4017
Cite (ACL):
Tim Fischer, Steffen Remus, and Chris Biemann. 2019. LT Expertfinder: An Evaluation Framework for Expert Finding Methods. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 98–104, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
LT Expertfinder: An Evaluation Framework for Expert Finding Methods (Fischer et al., NAACL 2019)
PDF:
https://aclanthology.org/N19-4017.pdf
Code:
uhh-lt/lt-expertfinder