Teddy Ferdinan
2026
Breaking the Illusion of Reasoning in Polish LLMs: Quality over Quantity of Thought
Dzmitry Pihulski | Mikołaj Langner | Jan Eliasz | Przemyslaw Kazienko | Jan Kocon | Teddy Ferdinan
Findings of the Association for Computational Linguistics: EACL 2026
Recent advances in large language models (LLMs) have introduced explicit reasoning capabilities, yet the factors that truly drive their improved performance remain unclear. In this work, we disentangle the effects of reasoning quality and sequence length by fine-tuning 8B models on several Polish variants of the Mixture-of-Thoughts (MoT-PL) dataset, each representing a distinct reasoning style: *Detailed*, *Summarized*, *BabyThink*, *Lengthy*. We find that the model trained on high-quality reasoning traces achieves better average performance than all other models; neither *longer reasoning of similar quality* nor *low-quality reasoning of similar length* yields comparable gains. Qualitative and quantitative analyses further reveal that reasoning clarity, rather than verbosity, is the dominant factor driving model performance. These findings underscore the importance of reasoning content quality in LLM training and provide new insights into designing more effective reasoning-oriented datasets and models.
2025
Crowdsource, Crawl, or Generate? Creating SEA-VL, a Multicultural Vision-Language Dataset for Southeast Asia
Samuel Cahyawijaya | Holy Lovenia | Joel Ruben Antony Moniz | Tack Hwa Wong | Mohammad Rifqi Farhansyah | Thant Thiri Maung | Frederikus Hudi | David Anugraha | Muhammad Ravi Shulthan Habibi | Muhammad Reza Qorib | Amit Agarwal | Joseph Marvin Imperial | Hitesh Laxmichand Patel | Vicky Feliren | Bahrul Ilmi Nasution | Manuel Antonio Rufino | Genta Indra Winata | Rian Adam Rajagede | Carlos Rafael Catalan | Mohamed Fazli Mohamed Imam | Priyaranjan Pattnayak | Salsabila Zahirah Pranida | Kevin Pratama | Yeshil Bangera | Adisai Na-Thalang | Patricia Nicole Monderin | Yueqi Song | Christian Simon | Lynnette Hui Xian Ng | Richardy Lobo Sapan | Taki Hasan Rafi | Bin Wang | Supryadi | Kanyakorn Veerakanjana | Piyalitt Ittichaiwong | Matthew Theodore Roque | Karissa Vincentio | Takdanai Kreangphet | Phakphum Artkaew | Kadek Hendrawan Palgunadi | Yanzhi Yu | Rochana Prih Hastuti | William Nixon | Mithil Bangera | Adrian Xuan Wei Lim | Aye Hninn Khine | Hanif Muhammad Zhafran | Teddy Ferdinan | Audra Aurora Izzani | Ayushman Singh | Evan Evan | Jauza Akbar Krito | Michael Anugraha | Fenal Ashokbhai Ilasariya | Haochen Li | John Amadeo Daniswara | Filbert Aurelian Tjiaranata | Eryawan Presma Yulianrifat | Can Udomcharoenchaikit | Fadil Risdian Ansori | Mahardika Krisna Ihsani | Giang Nguyen | Anab Maulana Barik | Dan John Velasco | Rifo Ahmad Genadi | Saptarshi Saha | Chengwei Wei | Isaiah Edri W. Flores | Kenneth Chen Ko Han | Anjela Gail D. Santos | Wan Shen Lim | Kaung Si Phyo | Tim Santos | Meisyarah Dwiastuti | Jiayun Luo | Jan Christian Blaise Cruz | Ming Shan Hee | Ikhlasul Akmal Hanif | M.Alif Al Hakim | Muhammad Rizky Sya’ban | Kun Kerdthaisong | Lester James Validad Miranda | Fajri Koto | Tirana Noor Fatyanosa | Alham Fikri Aji | Jostin Jerico Rosal | Jun Kevin | Robert Wijaya | Onno P. Kampman | Ruochen Zhang | Börje F. Karlsson | Peerat Limkonchotiwat
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Despite Southeast Asia’s (SEA) extraordinary linguistic and cultural diversity, the region remains significantly underrepresented in vision-language (VL) research, resulting in AI models that inadequately capture SEA cultural nuances. To fill this gap, we present SEA-VL, an open-source initiative dedicated to developing culturally relevant, high-quality datasets for SEA languages. By involving contributors from SEA countries, SEA-VL ensures better cultural relevance and diversity, fostering greater inclusivity of underrepresented languages and cultural depictions in VL research. Our methodology employed three approaches: community-driven crowdsourcing with SEA contributors, automated image crawling, and synthetic image generation. We evaluated each method’s effectiveness in capturing cultural relevance. We found that image crawling achieves approximately 85% cultural relevance while being more cost- and time-efficient than crowdsourcing, whereas synthetic image generation failed to accurately reflect SEA cultural nuances and contexts. Collectively, we gathered 1.28 million culturally relevant SEA images, more than 50 times larger than other existing datasets. This work bridges the representation gap in SEA, establishes a foundation for developing culturally aware AI systems for this region, and provides a replicable framework for addressing representation gaps in other underrepresented regions.
2024
Self-training Large Language Models through Knowledge Detection
Yeo Wei Jie | Teddy Ferdinan | Przemyslaw Kazienko | Ranjan Satapathy | Erik Cambria
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language models (LLMs) often necessitate extensive labeled datasets and training compute to achieve impressive performance across downstream tasks. This paper explores a self-training paradigm, where the LLM autonomously curates its own labels and selectively trains on unknown data samples identified through a reference-free consistency method. Empirical evaluations demonstrate significant improvements in reducing hallucination in generation across multiple subjects. Furthermore, the selective training framework mitigates catastrophic forgetting in out-of-distribution benchmarks, addressing a critical limitation in training LLMs. Our findings suggest that such an approach can substantially reduce the dependency on large labeled datasets, paving the way for more scalable and cost-effective language model training.
2022
StudEmo: A Non-aggregated Review Dataset for Personalized Emotion Recognition
Anh Ngo | Agri Candri | Teddy Ferdinan | Jan Kocon | Wojciech Korczynski
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
Humans’ emotional perception is inherently subjective: each individual may express different emotions in response to the same textual content. Existing datasets for emotion analysis commonly depend on a single ground truth per data sample, derived from majority voting or averaging the opinions of all annotators. In this paper, we introduce a new non-aggregated dataset, namely StudEmo, that contains 5,182 customer reviews, each annotated by 25 people with intensities of eight emotions from Plutchik’s model, extended with valence and arousal. We also propose three personalized models that use not only textual content but also the individual human perspective, providing the model with different approaches to learning human representations. The experiments were carried out as a multitask classification on two datasets: our StudEmo dataset and the GoEmotions dataset, which contains 28 emotional categories. The proposed personalized methods significantly improve prediction results, especially for emotions that have low inter-annotator agreement.
Co-authors
- Przemyslaw Kazienko 2
- Jan Kocon 2
- Amit Agarwal 1
- Alham Fikri Aji 1
- Fadil Risdian Ansori 1
- David Anugraha 1
- Michael Anugraha 1
- Phakphum Artkaew 1
- Yeshil Bangera 1
- Mithil Bangera 1
- Anab Maulana Barik 1
- Samuel Cahyawijaya 1
- Erik Cambria 1
- Agri Candri 1
- Carlos Rafael Catalan 1
- Jan Christian Blaise Cruz 1
- John Amadeo Daniswara 1
- Meisyarah Dwiastuti 1
- Jan Eliasz 1
- Evan Evan 1
- Mohammad Rifqi Farhansyah 1
- Tirana Noor Fatyanosa 1
- Vicky Feliren 1
- Isaiah Edri W. Flores 1
- Rifo Ahmad Genadi 1
- Muhammad Ravi Shulthan Habibi 1
- M.Alif Al Hakim 1
- Kenneth Chen Ko Han 1
- Ikhlasul Akmal Hanif 1
- Rochana Prih Hastuti 1
- Ming Shan Hee 1
- Frederikus Hudi 1
- Mahardika Krisna Ihsani 1
- Fenal Ashokbhai Ilasariya 1
- Mohamed Fazli Mohamed Imam 1
- Joseph Marvin Imperial 1
- Piyalitt Ittichaiwong 1
- Audra Aurora Izzani 1
- Onno P. Kampman 1
- Börje F. Karlsson 1
- Kun Kerdthaisong 1
- Jun Kevin 1
- Aye Hninn Khine 1
- Wojciech Korczynski 1
- Fajri Koto 1
- Takdanai Kreangphet 1
- Jauza Akbar Krito 1
- Mikołaj Langner 1
- Haochen Li 1
- Adrian Xuan Wei Lim 1
- Wan Shen Lim 1
- Peerat Limkonchotiwat 1
- Holy Lovenia 1
- Jiayun Luo 1
- Thant Thiri Maung 1
- Lester James Validad Miranda 1
- Patricia Nicole Monderin 1
- Joel Ruben Antony Moniz 1
- Adisai Na-Thalang 1
- Bahrul Ilmi Nasution 1
- Lynnette Hui Xian Ng 1
- Anh Ngo 1
- Giang Nguyen 1
- William Nixon 1
- Kadek Hendrawan Palgunadi 1
- Hitesh Laxmichand Patel 1
- Priyaranjan Pattnayak 1
- Kaung Si Phyo 1
- Dzmitry Pihulski 1
- Salsabila Zahirah Pranida 1
- Kevin Pratama 1
- Muhammad Reza Qorib 1
- Taki Hasan Rafi 1
- Rian Adam Rajagede 1
- Matthew Theodore Roque 1
- Jostin Jerico Rosal 1
- Manuel Antonio Rufino 1
- Saptarshi Saha 1
- Anjela Gail D. Santos 1
- Tim Santos 1
- Richardy Lobo Sapan 1
- Ranjan Satapathy 1
- Christian Simon 1
- Ayushman Singh 1
- Yueqi Song 1
- Supryadi 1
- Muhammad Rizky Sya’ban 1
- Filbert Aurelian Tjiaranata 1
- Can Udomcharoenchaikit 1
- Kanyakorn Veerakanjana 1
- Dan John Velasco 1
- Karissa Vincentio 1
- Bin Wang 1
- Chengwei Wei 1
- Yeo Wei Jie 1
- Robert Wijaya 1
- Genta Indra Winata 1
- Tack Hwa Wong 1
- Yanzhi Yu 1
- Eryawan Presma Yulianrifat 1
- Hanif Muhammad Zhafran 1
- Ruochen Zhang 1