@inproceedings{lee-park-2025-dunamu,
    title = "Dunamu {ML} at the Financial Misinformation Detection Challenge Task: Improving Supervised Fine-Tuning with {LLM}-based Data Augmentation",
    author = "Lee, Dongjun  and
      Park, Heesoo",
    editor = "Chen, Chung-Chi  and
      Moreno-Sandoval, Antonio  and
      Huang, Jimin  and
      Xie, Qianqian  and
      Ananiadou, Sophia  and
      Chen, Hsin-Hsi",
    booktitle = "Proceedings of the Joint Workshop of the 9th Financial Technology and Natural Language Processing (FinNLP), the 6th Financial Narrative Processing (FNP), and the 1st Workshop on Large Language Models for Finance and Legal (LLMFinLegal)",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.finnlp-1.34/",
    pages = "297--301",
    abstract = "In this paper, we describe Dunamu ML{'}s submission to the Financial Misinformation Detection (FMD) 2025 shared task. To address the low-resource challenge in FMD, we augmented a general domain misinformation detection dataset for training. We first collected claims, contexts, and misinformation labels from a public dataset. Then, we generated evidence for each label based on a closed LLM with few-shot examples extracted from the FMD training dataset. Finally, we oversampled the training data specific to the financial domain and augmented it with the generated data to perform supervised fine-tuning (SFT) on the LLM. When evaluated on the blind test dataset, our model achieved an F1 score of 84.67 in misinformation classification and a ROUGE-1 score of 81.21 in evidence generation, ranking first on the leaderboard in both aspects."
}