Yi Yu
2025
MMLF: Multi-query Multi-passage Late Fusion Retrieval
Yuan-Ching Kuo | Yi Yu | Chih-Ming Chen | Chuan-Ju Wang
Findings of the Association for Computational Linguistics: NAACL 2025
Leveraging large language models (LLMs) for query expansion has proven highly effective across diverse tasks and languages. Yet challenges remain in optimizing query formats and prompts, and the handling of retrieval results has received comparatively little attention. In this paper, we introduce Multi-query Multi-passage Late Fusion (MMLF), a straightforward yet potent pipeline that generates sub-queries, expands them into pseudo-documents, retrieves them individually, and aggregates results using reciprocal rank fusion. Our experiments demonstrate that MMLF exhibits superior performance across five BEIR benchmark datasets, achieving an average improvement of 4% and a maximum gain of up to 8% in both Recall@1k and nDCG@10 over state-of-the-art baselines.
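To make the fusion step concrete, the snippet below is a minimal sketch of reciprocal rank fusion over per-sub-query rankings. The constant k=60, the function name, and the toy document ids are illustrative assumptions, not the paper's released code.

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked lists of document ids with RRF.

    ranked_lists: list of lists, each ordered best-first.
    k: smoothing constant (60 is a common default, assumed here).
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Higher fused score means better; sort descending.
    return sorted(scores, key=scores.get, reverse=True)

# Each sub-query (expanded into a pseudo-document) is retrieved independently,
# then the per-query rankings are fused into a single final ranking.
per_query_rankings = [
    ["d3", "d1", "d7"],   # results for sub-query 1
    ["d1", "d4", "d3"],   # results for sub-query 2
    ["d7", "d1", "d9"],   # results for sub-query 3
]
print(reciprocal_rank_fusion(per_query_rankings))
```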
2024
Syllable-level lyrics generation from melody exploiting character-level language model
Zhe Zhang | Karol Lasocki | Yi Yu | Atsuhiro Takasu
Findings of the Association for Computational Linguistics: EACL 2024
The generation of lyrics tightly connected to accompanying melodies involves establishing a mapping between musical notes and syllables of lyrics. This process requires a deep understanding of music constraints and of semantic patterns at the syllable, word, and sentence levels. However, pre-trained language models specifically designed at the syllable level are not publicly available. To address these challenges, we propose fine-tuning character-level language models for syllable-level lyrics generation from symbolic melody. In particular, our method fine-tunes a character-level pre-trained language model, allowing the linguistic knowledge of the language model to be incorporated into the beam search process of a syllable-level Transformer generator network. In addition, by exploring ChatGPT-based evaluation of generated lyrics alongside human subjective evaluation, we show that our approach improves the coherence and correctness of generated lyrics without the need to train expensive new language models.
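A rough sketch of how a character-level LM score might be folded into syllable-level beam search is given below. The scoring functions `generator_logprob` and `char_lm_logprob`, the mixing weight, and the toy candidates are all placeholders (assumptions), not the authors' models.

```python
import math

def char_lm_logprob(text):
    # Placeholder for a fine-tuned character-level LM score (assumption):
    # here it simply favours shorter character sequences.
    return -0.1 * len(text)

def generator_logprob(prefix, syllable):
    # Placeholder for the syllable-level Transformer generator (assumption):
    # a uniform log-probability keeps the sketch self-contained.
    return math.log(1.0 / 3.0)

def beam_search(candidates_per_step, beam_size=2, lam=0.5):
    """Beam search whose pruning score mixes generator and char-LM scores."""
    beams = [([], 0.0)]  # (syllable prefix, cumulative generator log-prob)
    for candidates in candidates_per_step:
        expanded = []
        for prefix, gen_score in beams:
            for syl in candidates:
                new_prefix = prefix + [syl]
                new_gen = gen_score + generator_logprob(prefix, syl)
                expanded.append((new_prefix, new_gen))
        # Re-rank the candidate pool with the character-level LM before pruning.
        def combined(beam):
            prefix, gen_score = beam
            return gen_score + lam * char_lm_logprob("".join(prefix))
        beams = sorted(expanded, key=combined, reverse=True)[:beam_size]
    return beams[0][0]

# Three melody notes, each with a few candidate syllables (toy data).
steps = [["shine", "glow", "light"], ["on", "in"], ["me", "you", "night"]]
print(beam_search(steps))
```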
2021
Multi-TimeLine Summarization (MTLS): Improving Timeline Summarization by Generating Multiple Summaries
Yi Yu | Adam Jatowt | Antoine Doucet | Kazunari Sugiyama | Masatoshi Yoshikawa
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
In this paper, we address a novel task, Multiple TimeLine Summarization (MTLS), which extends the flexibility and versatility of TimeLine Summarization (TLS). Given any collection of time-stamped news articles, MTLS automatically discovers important yet distinct stories and generates a corresponding timeline for each story. To achieve this, we propose a novel unsupervised summarization framework based on two-stage affinity propagation. We also introduce a quantitative evaluation measure for MTLS based on previous TLS evaluation methods. Experimental results show that our MTLS framework is highly effective and that the MTLS task can give better results than TLS.
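The two-stage clustering idea can be sketched as follows. The feature representation, the toy articles, and the reading of "two-stage" as story-level then exemplar-level clustering are assumptions for illustration, not the paper's exact pipeline.

```python
from sklearn.cluster import AffinityPropagation
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical time-stamped news articles (dates omitted for brevity).
articles = [
    "Volcano erupts on the island; thousands are evacuated",
    "Evacuations continue as the volcano remains active",
    "Ash cloud from the eruption disrupts regional flights",
    "Election campaign opens with the first televised debate",
    "Candidates clash in the second debate before the vote",
    "Polls tighten in the final week of the election campaign",
]

X = TfidfVectorizer().fit_transform(articles).toarray()

# Stage 1: group articles into stories.
stage1 = AffinityPropagation(random_state=0).fit(X)

# Stage 2: within each story, cluster again to select exemplar articles,
# which would anchor the dated entries of that story's timeline.
for story in sorted(set(stage1.labels_)):
    idx = [i for i, lbl in enumerate(stage1.labels_) if lbl == story]
    stage2 = AffinityPropagation(random_state=0).fit(X[idx])
    centers = stage2.cluster_centers_indices_
    # Fall back to the first article if affinity propagation does not converge.
    exemplars = [articles[idx[i]] for i in centers] if len(centers) else [articles[idx[0]]]
    print(f"Story {story}:", exemplars)
```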