Teddy Ferdinan


2024

Self-training Large Language Models through Knowledge Detection
Yeo Wei Jie | Teddy Ferdinan | Przemyslaw Kazienko | Ranjan Satapathy | Erik Cambria
Findings of the Association for Computational Linguistics: EMNLP 2024

Large language models (LLMs) often require extensive labeled datasets and substantial training compute to achieve strong performance on downstream tasks. This paper explores a self-training paradigm in which the LLM autonomously curates its own labels and selectively trains on unknown data samples identified through a reference-free consistency method. Empirical evaluations demonstrate significant reductions in hallucination during generation across multiple subjects. Furthermore, the selective training framework mitigates catastrophic forgetting on out-of-distribution benchmarks, addressing a critical limitation of training LLMs. Our findings suggest that such an approach can substantially reduce the dependency on large labeled datasets, paving the way for more scalable and cost-effective language model training.
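
A minimal sketch of how a reference-free consistency check of this kind might look in Python, assuming an LLM client exposed as a generate(prompt) callable that returns a sampled answer string; the function names, sampling count, exact-match agreement metric, and threshold are illustrative assumptions, not the paper's exact procedure.

    # Hypothetical sketch: detect "unknown" prompts by sampling the model
    # several times and measuring how often its answers agree with each other.
    from collections import Counter

    def consistency_score(generate, prompt, n_samples=5):
        """Sample n answers and return the relative frequency of the most
        common one (1.0 = fully consistent, 1/n = fully inconsistent)."""
        answers = [generate(prompt) for _ in range(n_samples)]
        most_common_count = Counter(answers).most_common(1)[0][1]
        return most_common_count / n_samples

    def select_unknown_samples(generate, prompts, threshold=0.6):
        """Keep prompts whose sampled answers disagree; these are treated
        as unknown to the model and queued for self-training."""
        return [p for p in prompts if consistency_score(generate, p) < threshold]

In this sketch, no reference answers are needed: disagreement among the model's own samples serves as the signal that a sample is unknown and worth training on.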

2022

StudEmo: A Non-aggregated Review Dataset for Personalized Emotion Recognition
Anh Ngo | Agri Candri | Teddy Ferdinan | Jan Kocon | Wojciech Korczynski
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022

Humans’ emotional perception is subjective by nature: different individuals can perceive different emotions in the same textual content. Existing datasets for emotion analysis commonly depend on a single ground truth per data sample, derived by majority voting or by averaging the opinions of all annotators. In this paper, we introduce a new non-aggregated dataset, StudEmo, which contains 5,182 customer reviews, each annotated by 25 people with intensities of the eight emotions from Plutchik’s model, extended with valence and arousal. We also propose three personalized models that use not only the textual content but also the individual human perspective, each providing the model with a different approach to learning human representations. The experiments were carried out as multitask classification on two datasets: our StudEmo dataset and the GoEmotions dataset, which contains 28 emotion categories. The proposed personalized methods significantly improve prediction results, especially for emotions with low inter-annotator agreement.
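
As one way to picture a personalized model of this kind, here is a minimal PyTorch sketch that conditions an emotion classifier on a learned per-annotator embedding; the architecture, layer sizes, and names are assumptions for illustration, not the three models proposed in the paper. The ten outputs correspond to Plutchik's eight emotions plus valence and arousal.

    # Hypothetical sketch: concatenate a learned annotator embedding with a
    # pre-computed text embedding before the emotion-intensity head.
    import torch
    import torch.nn as nn

    class PersonalizedEmotionClassifier(nn.Module):
        def __init__(self, text_dim, num_annotators,
                     annotator_dim=32, num_emotions=10):
            super().__init__()
            # One learned vector per annotator captures individual perspective.
            self.annotator_emb = nn.Embedding(num_annotators, annotator_dim)
            self.head = nn.Sequential(
                nn.Linear(text_dim + annotator_dim, 256),
                nn.ReLU(),
                nn.Linear(256, num_emotions),  # one intensity per dimension
            )

        def forward(self, text_emb, annotator_id):
            person = self.annotator_emb(annotator_id)
            return self.head(torch.cat([text_emb, person], dim=-1))

Because the dataset is non-aggregated, each (review, annotator) pair is a separate training example, so the model can learn that the same text maps to different intensity profiles for different people.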