Mustapha Lebbah


2025

Leveraging Text-to-Text Transformers as Classifier Chain for Few-Shot Multi-Label Classification
Quang Anh Nguyen | Nadi Tomeh | Mustapha Lebbah | Thierry Charnois | Hanane Azzag
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Multi-label text classification (MLTC) is an essential task in NLP applications. Traditional methods require extensive labeled data and are limited to fixed label sets. Extracting labels with LLMs is more effective and universal, but incurs high computational costs. In this work, we introduce a distillation-based T5 generalist model for zero-shot MLTC and few-shot fine-tuning. Our model accommodates variable label sets through general, domain-agnostic pretraining while modeling dependencies between labels. Experiments show that our approach outperforms baselines of similar size on three few-shot tasks. Our code is available at https://anonymous.4open.science/r/t5-multilabel-0C32/README.md
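To make the classifier-chain idea concrete, here is a minimal, hypothetical sketch of how a text-to-text model such as T5 can predict labels sequentially, conditioning each decision on the labels accepted so far. The prompt format, candidate label set, and yes/no verbalization are illustrative assumptions, not the paper's exact setup.

# Hypothetical sketch of a classifier-chain formulation with T5 for
# multi-label classification. Prompt wording and labels are assumptions.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def chain_classify(text, candidate_labels):
    """Query each label in turn, conditioning on labels already accepted."""
    accepted = []
    for label in candidate_labels:
        context = ", ".join(accepted) if accepted else "none"
        prompt = (
            f"text: {text} "
            f"labels so far: {context} "
            f"question: does the label '{label}' apply? answer yes or no:"
        )
        input_ids = tokenizer(prompt, return_tensors="pt").input_ids
        output = model.generate(input_ids, max_new_tokens=3)
        answer = tokenizer.decode(output[0], skip_special_tokens=True)
        if answer.strip().lower().startswith("yes"):
            accepted.append(label)  # later decisions condition on this label
    return accepted

print(chain_classify(
    "The central bank raised interest rates amid rising inflation.",
    ["economics", "sports", "politics"],
))

Because each query sees the labels predicted so far, the chain captures label dependencies that independent per-label classifiers would miss; an off-the-shelf checkpoint will need fine-tuning before the answers are reliable.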

2024

Enhancing Few-Shot Topic Classification with Verbalizers: A Study on Automatic Verbalizer and Ensemble Methods
Quang Anh Nguyen | Nadi Tomeh | Mustapha Lebbah | Thierry Charnois | Hanane Azzag | Santiago Cordoba Muñoz
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

As pretrained language models emerge and continue to develop, prompt-based training has become a well-studied paradigm for better exploiting these models in many natural language processing tasks. Moreover, prompting delivers strong performance compared to conventional fine-tuning in scenarios with limited annotated data, such as zero-shot or few-shot settings. Verbalizers are crucial in this context, as they interpret the masked-word distributions generated by language models into output predictions. This study introduces a benchmarking approach to assess three common verbalizer baselines for topic classification in few-shot learning scenarios. Additionally, we find that increasing the number of label words in automatic label-word search enhances model performance. Moreover, we investigate the effectiveness of template assembling with various aggregation strategies to build stronger classifiers that outperform models trained with individual templates. Our approach achieves results comparable to prior research while using significantly fewer resources. Our code is available at https://github.com/quang-anh-nguyen/verbalizer_benchmark.git.
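As an illustration of the verbalizer mechanism described above, the following sketch maps a masked language model's word distribution at the mask position to class scores via label words. The template, the label words, and mean aggregation over label-word logits are assumptions for demonstration, not the benchmarked configurations.

# Minimal sketch of a verbalizer: interpret the masked-LM distribution at the
# [MASK] position as class predictions. Template and label words are assumed.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Verbalizer: each class is represented by one or more label words.
verbalizer = {
    "sports": ["sports", "game"],
    "business": ["business", "finance"],
}

def classify(text):
    prompt = f"{text} This text is about {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # scores over the vocab
    scores = {}
    for cls, words in verbalizer.items():
        ids = tokenizer.convert_tokens_to_ids(words)
        # Aggregate label-word logits into one class score (mean here).
        scores[cls] = logits[ids].mean().item()
    return max(scores, key=scores.get)

print(classify("The team won the championship after a dramatic final."))

Adding more label words per class, as the abstract notes, amounts to widening each list in the verbalizer dictionary; template assembling would run several prompts of this form and aggregate their class scores.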