Benoît Laurent
2026
Can Small-Scale LLMs Balance Content Accuracy and Speaker Faithfulness in Noisy French Dialogue Summarization?
Rim Abrougui | Guillaume Lechien | Elisabeth Savatier | Benoît Laurent
Proceedings of the 16th International Workshop on Spoken Dialogue System Technology
Summarizing domain-specific and multi-speaker conversations, such as political debates, remains challenging under noisy ASR conditions. In industrial contexts, large language models (LLMs) are often impractical due to resource and confidentiality constraints. This work evaluates whether smaller LLMs (up to 8B parameters) can produce reliable summaries in such settings. Experiments on French debates show that noise significantly degrades accuracy and readability, while fine-tuning on clean, domain-related data improves robustness and reduces hallucinations. We also analyze person-name mentions as indicators of speaker faithfulness, finding that fine-tuning can help identify all speakers in far more debates than chain-of-thought prompting. However, evaluations on limited industrial data show that fine-tuning still struggles to generalize to unseen speakers and topics.