Thomas Bailleux
2025
Grouping Entities with Shared Properties using Multi-Facet Prompting and Property Embeddings
Amit Gajbhiye | Thomas Bailleux | Zied Bouraoui | Luis Espinosa-Anke | Steven Schockaert
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Methods for learning taxonomies from data have been widely studied. We study a specific version of this task, called commonality identification, where only the set of entities is given and we need to find meaningful ways to group those entities. While LLMs should intuitively excel at this task, it is difficult to use such models directly in large domains. In this paper, we instead use LLMs to describe the different properties that are satisfied by each of the entities individually. We then use pre-trained embeddings to cluster these properties, and finally group entities whose properties belong to the same cluster. To achieve good results, it is paramount that the properties predicted by the LLM are sufficiently diverse. We find that this diversity can be improved by prompting the LLM to structure the predicted properties into different facets of knowledge.
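To make the described pipeline concrete, here is a minimal Python sketch of the four steps the abstract outlines: prompt an LLM per entity and per facet for properties, embed the properties, cluster the embeddings, and group entities whose properties share a cluster. The `ask_llm_for_properties` stub, the facet list, and the choice of sentence-transformers with agglomerative clustering are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the commonality-identification pipeline, assuming a
# multi-facet prompt, sentence-transformer embeddings and agglomerative
# clustering. The facets, the LLM stub and the clustering choice are
# illustrative assumptions, not details from the paper.
from collections import defaultdict

from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

FACETS = ["appearance", "behaviour", "habitat"]  # hypothetical facets

def ask_llm_for_properties(entity: str, facet: str) -> list[str]:
    """Stand-in for the multi-facet prompt, e.g. 'List properties of
    {entity} that relate to its {facet}.' Replace with a real LLM call."""
    canned = {
        ("penguin", "behaviour"): ["cannot fly", "swims well"],
        ("ostrich", "behaviour"): ["cannot fly", "runs fast"],
        ("eagle", "behaviour"): ["flies", "hunts from the air"],
    }
    return canned.get((entity, facet), [])

entities = ["penguin", "ostrich", "eagle"]

# Step 1: collect (entity, property) pairs across all facets.
pairs = [(e, p) for e in entities for f in FACETS
         for p in ask_llm_for_properties(e, f)]

# Step 2: embed the predicted properties.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([p for _, p in pairs])

# Step 3: cluster the property embeddings.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(embeddings)

# Step 4: entities whose properties share a cluster form a candidate group.
groups = defaultdict(set)
for (entity, _), label in zip(pairs, labels):
    groups[label].add(entity)

for label, members in sorted(groups.items()):
    if len(members) > 1:
        print(f"cluster {label}: {sorted(members)}")
```

Here, "cannot fly" for penguin and ostrich should embed close together, so the two entities end up grouped; diversifying the properties across facets gives the clustering more such dimensions along which entities can be grouped.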
Connecting Concept Layers and Rationales to Enhance Language Model Interpretability
Thomas Bailleux | Tanmoy Mukherjee | Pierre Marquis | Zied Bouraoui
Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
With the introduction of large language models, NLP has undergone a paradigm shift where these models now serve as the backbone of most developed systems. However, while highly effective, they remain opaque and difficult to interpret, which limits their adoption in critical applications that require transparency and trust. Two major approaches aim to address this: rationale extraction, which highlights input spans that justify predictions, and concept bottleneck models, which make decisions through human-interpretable concepts. Yet each has limitations. Crucially, current models lack a unified framework that connects where a model looks (rationales) with why it makes a decision (concepts). We introduce CLARITY, a model that first selects key input spans, maps them to interpretable concepts, and then predicts using only those concepts. This design supports faithful, multi-level explanations and allows users to intervene at both the rationale and concept levels. CLARITY achieves competitive accuracy while offering improved transparency and controllability.
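As a rough illustration of the rationale-then-concept design, the following PyTorch sketch selects the top-k token positions as a rationale, pools them into concept scores, and classifies from the concepts alone. The module names, the hard top-k selector, and all dimensions are assumptions; this is not the paper's actual CLARITY architecture or training procedure.

```python
# A minimal PyTorch sketch of a rationale -> concept -> label pipeline,
# assuming contextual token representations are already computed. All
# names, sizes and the hard top-k selector are hypothetical.
import torch
import torch.nn as nn

class ClarityLikeModel(nn.Module):
    def __init__(self, hidden_dim=128, n_concepts=10, n_labels=2, k=5):
        super().__init__()
        self.k = k                                              # rationale length
        self.span_scorer = nn.Linear(hidden_dim, 1)             # where it looks
        self.concept_head = nn.Linear(hidden_dim, n_concepts)   # why it decides
        self.classifier = nn.Linear(n_concepts, n_labels)       # concepts-only predictor

    def forward(self, token_reprs):  # (batch, seq_len, hidden_dim)
        scores = self.span_scorer(token_reprs).squeeze(-1)      # (batch, seq_len)
        top = scores.topk(self.k, dim=1).indices                # rationale positions
        # Hard selection; a trained system would need a differentiable relaxation.
        mask = torch.zeros_like(scores).scatter(1, top, 1.0)
        pooled = (token_reprs * mask.unsqueeze(-1)).sum(1) / self.k
        concepts = torch.sigmoid(self.concept_head(pooled))     # interpretable bottleneck
        return self.classifier(concepts), mask, concepts

model = ClarityLikeModel()
logits, rationale_mask, concepts = model(torch.randn(2, 32, 128))
```

Because the classifier sees only the concept scores, a user can inspect `rationale_mask` to see where the model looked and override entries of `concepts` before the final layer, mirroring the two intervention points the abstract describes.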
2024
CONTOR: Benchmarking Strategies for Completing Ontologies with Plausible Missing Rules
Na Li | Thomas Bailleux | Zied Bouraoui | Steven Schockaert
Findings of the Association for Computational Linguistics: EMNLP 2024
We consider the problem of finding plausible rules that are missing from a given ontology. A number of strategies for this problem have already been considered in the literature. Little is known about the relative performance of these strategies, however, as they have thus far been evaluated on different ontologies. Moreover, existing evaluations have focused on distinguishing held-out ontology rules from randomly corrupted ones, which often makes the task unrealistically easy and leads to the presence of incorrectly labelled negative examples. To address these concerns, we introduce a benchmark with manually annotated hard negatives and use this benchmark to evaluate ontology completion models. In addition to previously proposed models, we test the effectiveness of several approaches that have not yet been considered for this task, including LLMs and simple but effective hybrid strategies.
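By way of illustration only, the snippet below shows the shape of such an evaluation: candidate rules are verbalised as body/head text pairs with gold labels (1 for a plausible missing rule, 0 for an annotated hard negative), each strategy assigns a plausibility score, and strategies are compared by AUC. The cosine-similarity scorer and the example rules are invented stand-ins, not the benchmark data or the models evaluated in the paper.

```python
# Illustrative sketch of scoring candidate ontology rules against
# manually annotated hard negatives. The scorer and examples are
# assumptions made for this sketch, not material from the paper.
from sentence_transformers import SentenceTransformer, util
from sklearn.metrics import roc_auc_score

candidates = [
    ("X is a carnivorous plant", "X traps insects", 1),
    ("X is a carnivorous plant", "X migrates in winter", 0),
    ("X is a deciduous tree", "X sheds its leaves in autumn", 1),
    ("X is a deciduous tree", "X hunts small mammals", 0),
]

model = SentenceTransformer("all-MiniLM-L6-v2")

def plausibility(body: str, head: str) -> float:
    """Toy strategy: cosine similarity between body and head embeddings.
    A hybrid strategy could combine such a score with an LLM judgement."""
    emb = model.encode([body, head])
    return float(util.cos_sim(emb[0], emb[1]))

scores = [plausibility(body, head) for body, head, _ in candidates]
labels = [label for _, _, label in candidates]
print("AUC:", roc_auc_score(labels, scores))
```

The point of the hard negatives is visible even in this toy: randomly corrupted heads would be trivially far from the body in embedding space, whereas annotated negatives keep the task discriminative.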