Pascal Wullschleger


2025

FoodTaxo: Generating Food Taxonomies with Large Language Models
Pascal Wullschleger | Majid Zarharan | Donnacha Daly | Marc Pouly | Jennifer Foster
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)

We investigate the utility of Large Language Models for automated taxonomy generation and completion, applied specifically to taxonomies from the food technology industry. We explore the extent to which taxonomies can be completed from a seed taxonomy, or generated without a seed from a set of known concepts, in an iterative fashion using recent prompting techniques. Experiments on five taxonomies using an open-source LLM (Llama-3), while promising, point to the difficulty of correctly placing inner nodes.

2024

Tell Me Why: Explainable Public Health Fact-Checking with Large Language Models
Majid Zarharan | Pascal Wullschleger | Babak Behkam Kia | Mohammad Taher Pilehvar | Jennifer Foster
Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024)

This paper presents a comprehensive analysis of explainable fact-checking through a series of experiments, focusing on the ability of large language models to verify public health claims and to provide explanations or justifications for their veracity assessments. We examine the effectiveness of zero-/few-shot prompting and parameter-efficient fine-tuning across various open- and closed-source models, assessing their performance on both the isolated and joint tasks of veracity prediction and explanation generation. Importantly, we employ a dual evaluation approach comprising previously established automatic metrics and a novel set of criteria assessed through human evaluation. Our automatic evaluation indicates that, in the zero-shot scenario, GPT-4 emerges as the standout performer, but in few-shot and parameter-efficient fine-tuning contexts, open-source models demonstrate their capacity not only to bridge the performance gap but, in some instances, to surpass GPT-4. Human evaluation reveals further nuance and points to potential problems with the gold explanations.