Juseon Do
Also published as: Juseon-Do
2026
ConRAS: Contrastive In-context Learning Framework for Retrieval-Augmented Summarization
Juseon Do | Sungwoo Han | Jingun Kwon | Hidetaka Kamigaito | Manabu Okumura
Findings of the Association for Computational Linguistics: EACL 2026
Contrastive learning (CL) has achieved remarkable progress in natural language processing (NLP), primarily as a paradigm for pre-training and fine-tuning. However, its potential during the generation phase, particularly in in-context learning (ICL)-based retrieval-augmented summarization, remains largely unexplored. While previous studies have attempted to incorporate negative samples into ICL prompts, these methods do not enforce a true contrastive objective that encourages separation of positive and negative samples in the representation space. In this paper, we first demonstrate through preliminary experiments that small language models (SLMs) can interpret contrastive prompts and effectively distinguish between positive and negative samples during inference, without any parameter updates. Building on these findings, we propose ConRAS, a novel framework that injects contrastive objectives into ICL-based retrieval-augmented summarization. Extensive experiments and in-depth analysis on three summarization benchmarks using four SLMs show that ConRAS consistently outperforms state-of-the-art retrieval-augmented methods, achieving significant improvements in summary quality.
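The abstract describes injecting a contrastive objective into the ICL prompt itself, so the model sees both positive exemplars to imitate and negative exemplars to separate from. A minimal sketch of such prompt assembly is below; the function name, instruction wording, and exemplar format are illustrative assumptions, not the paper's exact prompt.

```python
def build_contrastive_prompt(document, positives, negatives):
    """Assemble an ICL prompt that presents retrieved exemplars as
    positive demonstrations (to imitate) and negative demonstrations
    (to contrast against), so the model can separate the two styles
    at inference time without any parameter updates."""
    parts = []
    for src, summ in positives:
        parts.append(f"Good summary example:\nArticle: {src}\nSummary: {summ}")
    for src, summ in negatives:
        parts.append(f"Bad summary example (avoid this style):\nArticle: {src}\nSummary: {summ}")
    parts.append(
        "Write a summary that follows the good examples and avoids the "
        f"mistakes of the bad examples.\nArticle: {document}\nSummary:"
    )
    return "\n\n".join(parts)
```

The key difference from ordinary exemplar retrieval is that negatives are labeled as such in the prompt, giving the model an explicit contrast rather than more of the same signal.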
Beyond Sampling: Self-Sorting for Long-Context Ranking
Juseon Do | Sungwoo Han | Jingun Kwon | Hidetaka Kamigaito | Katsuhiko Hayashi | Taro Watanabe
Findings of the Association for Computational Linguistics: EACL 2026
Ranking is a fundamental component in a wide range of AI applications. However, large language models (LLMs) remain unstable on long-context ranking: sliding-window processing is costly, and listwise prompting over the full candidate set still yields inconsistent orders. We show that sampling alone, even with selection-based methods, cannot stabilize ranking, because LLM consistency decomposes into within-list order and cross-list preference, two factors that a single stochastic process cannot jointly align. To address this, we introduce Self-Sorting (SS), which generates m candidate lists and performs n selection-time re-rankings over those lists. SS fuses explicit within-list positions with implicit cross-list preferences to score entities and return a top-k set. Experimental results on five widely used ranking benchmarks show significant improvements in nDCG@1,5,10, highlighting the critical role of implicit consistency.
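The fusion step described above combines two signals: explicit within-list position and implicit cross-list preference. A rough sketch of one plausible scoring scheme follows, assuming Borda-style position points for the explicit signal and pairwise win margins for the implicit one; the exact scoring in the paper may differ.

```python
from collections import defaultdict

def self_sort(candidate_lists, k):
    """Fuse m sampled candidate rankings into a single top-k list.

    Explicit signal: within-list position (earlier items get more points).
    Implicit signal: cross-list preference (how often item a precedes b
    across lists), added as a normalized win margin.
    """
    scores = defaultdict(float)
    # Explicit within-list order: position-weighted (Borda-style) points.
    for lst in candidate_lists:
        n = len(lst)
        for pos, item in enumerate(lst):
            scores[item] += (n - pos) / n
    # Implicit cross-list preference: count pairwise precedence wins.
    wins = defaultdict(int)
    for lst in candidate_lists:
        for i, a in enumerate(lst):
            for b in lst[i + 1:]:
                wins[(a, b)] += 1
    for (a, b), w in wins.items():
        losses = wins.get((b, a), 0)
        if w > losses:  # a beats b on balance across lists
            scores[a] += (w - losses) / len(candidate_lists)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

For example, fusing `[["a","b","c"], ["a","c","b"], ["b","a","c"]]` ranks `a` first, since it both appears early within lists and beats `b` and `c` in pairwise preferences.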
2025
Considering Length Diversity in Retrieval-Augmented Summarization
Juseon-Do | Jaesung Hwang | Jingun Kwon | Hidetaka Kamigaito | Manabu Okumura
Findings of the Association for Computational Linguistics: NAACL 2025
This study investigates retrieval-augmented summarization by specifically examining the impact of exemplar summary lengths, because previous methods have not considered length constraints. We propose a Diverse Length-aware Maximal Marginal Relevance (DL-MMR) algorithm to better control summary lengths. This algorithm combines query relevance with diverse target lengths in retrieval-augmented summarization. Unlike previous methods that require exhaustive exemplar-exemplar relevance comparisons using MMR, DL-MMR also considers the exemplar target length and avoids comparing exemplars to each other, thereby reducing computational cost and conserving memory during the construction of an exemplar pool. Experimental results showed the effectiveness of DL-MMR, which considers length diversity, compared with the original MMR algorithm. DL-MMR also reduced memory usage by a factor of 781,513 and computational cost by a factor of 500,092, while maintaining the same level of informativeness.
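The abstract's key idea is replacing MMR's pairwise exemplar-exemplar relevance term with a length-diversity term, so selection needs no exemplar-to-exemplar comparisons. A minimal sketch under that reading is below; the scoring formula, normalization, and the `lam` trade-off weight are assumptions for illustration, not the paper's exact algorithm.

```python
def dl_mmr(query_sims, lengths, pool_size, lam=0.7):
    """Greedily select an exemplar pool that balances query relevance
    against diversity of summary lengths.

    query_sims[i] : relevance of exemplar i to the query
    lengths[i]    : summary length of exemplar i
    The diversity term depends only on lengths already chosen, so no
    exemplar-exemplar relevance comparison is ever computed.
    """
    selected, chosen_lengths = [], set()
    candidates = list(range(len(query_sims)))
    max_len = max(lengths)
    while candidates and len(selected) < pool_size:
        def score(i):
            if not chosen_lengths:
                div = 1.0  # first pick: no length constraint yet
            else:
                # distance to the closest already-chosen length, normalized
                div = min(abs(lengths[i] - l) for l in chosen_lengths) / max_len
            return lam * query_sims[i] + (1 - lam) * div
        best = max(candidates, key=score)
        selected.append(best)
        chosen_lengths.add(lengths[best])
        candidates.remove(best)
    return selected
```

With `query_sims=[0.9, 0.85, 0.6]` and `lengths=[10, 10, 50]`, a pool of size 2 picks indices `[0, 2]`: the slightly less relevant but length-diverse third exemplar beats the near-duplicate-length second one.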
2024
InstructCMP: Length Control in Sentence Compression through Instruction-based Large Language Models
Juseon-Do | Jingun Kwon | Hidetaka Kamigaito | Manabu Okumura
Findings of the Association for Computational Linguistics: ACL 2024
Extractive summarization can produce faithful summaries but often requires additional constraints, such as a desired summary length. Traditional sentence compression models typically cannot handle such constraints without model modifications because of their restricted capabilities. To bridge this gap, we propose Instruction-based Compression (InstructCMP), an approach to sentence compression that can satisfy the length constraint through instructions by leveraging the zero-shot task-solving abilities of Large Language Models (LLMs). For this purpose, we created new evaluation datasets by transforming traditional sentence compression datasets into an instruction format. Using these datasets, we first reveal that current LLMs still struggle to accurately control the length of compressed text. To address this issue, we propose length priming, which incorporates additional length information into the instructions without external resources. While length priming works effectively in a zero-shot setting, training on instruction data can further improve length control. Thus, we additionally created an instruction-format training dataset to fine-tune the model. Experimental results and analysis show that length priming significantly improves the performance of InstructCMP in both zero-shot and fine-tuning settings, without any model modifications.
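Length priming, as described above, makes the length budget explicit inside the instruction rather than relying on external resources. A minimal sketch of such an instruction template follows; the function name and the exact wording are illustrative assumptions, not the paper's template.

```python
def primed_instruction(sentence, keep_words):
    """Build a length-primed compression instruction: state both the
    source length and the target length explicitly, so the model is
    primed with the exact word budget before compressing."""
    src_len = len(sentence.split())
    return (
        f"Compress the following sentence, which has {src_len} words, "
        f"into exactly {keep_words} words by deleting words:\n{sentence}"
    )
```

The contrast with a plain instruction ("compress this sentence to N words") is that the source length is also stated, giving the model both endpoints of the required reduction.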