@inproceedings{rahman-caragea-2025-llm,
    title = "{LLM}-Guided Co-Training for Text Classification",
    author = "Rahman, Md Mezbaur  and
      Caragea, Cornelia",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1583/",
    pages = "31092--31109",
    ISBN = "979-8-89176-332-6",
    abstract = "In this paper, we introduce a novel weighted co-training approach that is guided by Large Language Models (LLMs). Namely, in our co-training approach, we use LLM labels on unlabeled data as target labels and co-train two encoder-only based networks that train each other over multiple iterations: first, all samples are forwarded through each network and historical estimates of each network{'}s confidence in the LLM label are recorded; second, a dynamic importance weight is derived for each sample according to each network{'}s belief (or confidence) in the quality of the LLM label for that sample; finally, the two networks exchange importance weights with each other{---}each network back-propagates all samples weighted with the importance weights coming from its peer network and updates its own parameters. By strategically utilizing LLM-generated guidance, our approach significantly outperforms conventional SSL methods, particularly in settings with abundant unlabeled data. Empirical results show that it achieves state-of-the-art performance on 4 out of 5 benchmark datasets and ranks first among 14 compared methods according to the Friedman test. Our results highlight a new direction in semi-supervised learning{---}where LLMs serve as knowledge amplifiers, enabling backbone co-training models to achieve SOTA performance efficiently."
}

Markdown (Informal)
[LLM-Guided Co-Training for Text Classification](https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1583/) (Rahman & Caragea, EMNLP 2025)
ACL
Md Mezbaur Rahman and Cornelia Caragea. 2025. LLM-Guided Co-Training for Text Classification. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 31092–31109, Suzhou, China. Association for Computational Linguistics.
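The abstract describes a three-step loop: each network records a historical confidence in the LLM-provided label, that history is turned into per-sample importance weights, and the two networks exchange weights before back-propagating. Below is a minimal PyTorch sketch of one such iteration, written from the abstract alone. The EMA confidence tracking, the weight normalization, and all names (`cotrain_step`, `conf_a`, `conf_b`, etc.) are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of one LLM-guided weighted co-training iteration,
# reconstructed from the paper's abstract. Details are assumptions.
import torch
import torch.nn.functional as F

def cotrain_step(net_a, net_b, opt_a, opt_b, x, llm_labels,
                 conf_a, conf_b, idx, ema=0.9):
    """One iteration over a batch of LLM-labeled samples.

    conf_a / conf_b are dataset-sized buffers holding each network's
    historical confidence in the LLM label; idx maps the batch into them.
    """
    logits_a, logits_b = net_a(x), net_b(x)

    with torch.no_grad():
        # Step 1: record each network's current belief in the LLM label
        # and fold it into the running (historical) estimate via an EMA
        # (the EMA itself is an assumption about "historical estimates").
        p_a = F.softmax(logits_a, dim=1).gather(1, llm_labels[:, None]).squeeze(1)
        p_b = F.softmax(logits_b, dim=1).gather(1, llm_labels[:, None]).squeeze(1)
        conf_a[idx] = ema * conf_a[idx] + (1 - ema) * p_a
        conf_b[idx] = ema * conf_b[idx] + (1 - ema) * p_b

        # Step 2: map each confidence history to per-sample importance
        # weights; normalizing to mean 1 is an illustrative choice.
        w_a = conf_a[idx] / conf_a[idx].sum().clamp_min(1e-8) * len(idx)
        w_b = conf_b[idx] / conf_b[idx].sum().clamp_min(1e-8) * len(idx)

    # Step 3: exchange weights -- each network trains on ALL samples,
    # weighted by its peer's belief in the LLM label quality.
    loss_a = (F.cross_entropy(logits_a, llm_labels, reduction="none") * w_b).mean()
    loss_b = (F.cross_entropy(logits_b, llm_labels, reduction="none") * w_a).mean()

    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()
```

Detaching the weights (the `no_grad` block) keeps gradients flowing only through each network's own cross-entropy term, so the peer influences *how much* each sample counts rather than the gradient path itself; that separation is what makes the exchanged weights act as a cross-network quality signal in this sketch.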