Michael Günther


2025

jina-embeddings-v4: Universal Embeddings for Multimodal Multilingual Retrieval
Michael Günther | Saba Sturua | Mohammad Kalim Akram | Isabelle Mohr | Andrei Ungureanu | Bo Wang | Sedigheh Eslami | Scott Martens | Maximilian Werk | Nan Wang | Han Xiao
Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025)

We introduce jina-embeddings-v4, a 3.8 billion parameter embedding model that unifies text and image representations, with a novel architecture supporting both single-vector and multi-vector embeddings. It achieves high performance on both single-modal and cross-modal retrieval tasks, and is particularly strong in processing visually rich content such as tables, charts, diagrams, and mixed-media formats that incorporate both image and textual information. We also introduce JVDR, a novel benchmark for visually rich document retrieval that includes more diverse materials and query types than previous efforts. We use JVDR to show that jina-embeddings-v4 greatly improves on state-of-the-art performance for these kinds of tasks.
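
As a rough sketch of the two output modes described in the abstract, the snippet below scores a query-document pair once with a pooled single vector and once with token-level multi-vectors (ColBERT-style MaxSim, named here since the abstract does not specify the scoring rule). The dimensions (2048 pooled, 128 per token) and the random stand-in embeddings are illustrative assumptions, not the model's actual outputs or API.

```python
# Illustrative only: random stand-ins for model outputs.
import numpy as np

rng = np.random.default_rng(0)

# Single-vector mode: one pooled, L2-normalized vector per input;
# relevance is a single cosine similarity.
q_vec = rng.standard_normal(2048)   # assumed pooled dimension
d_vec = rng.standard_normal(2048)
q_vec /= np.linalg.norm(q_vec)
d_vec /= np.linalg.norm(d_vec)
single_score = float(q_vec @ d_vec)

# Multi-vector mode: one vector per token; relevance is late interaction
# (MaxSim): each query token matches its best document token.
Q = rng.standard_normal((8, 128))    # 8 query tokens, assumed 128 dims each
D = rng.standard_normal((200, 128))  # 200 document tokens
Q /= np.linalg.norm(Q, axis=1, keepdims=True)
D /= np.linalg.norm(D, axis=1, keepdims=True)
multi_score = float((Q @ D.T).max(axis=1).sum())

print(single_score, multi_score)
```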

2024

Jina-ColBERT-v2: A General-Purpose Multilingual Late Interaction Retriever
Rohan Jha | Bo Wang | Michael Günther | Georgios Mastrapas | Saba Sturua | Isabelle Mohr | Andreas Koukounas | Mohammad Kalim Akram | Nan Wang | Han Xiao
Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)

Multi-vector dense models, such as ColBERT, have proven highly effective in information retrieval. ColBERT’s late interaction scoring approximates the joint query-document attention seen in cross-encoders while maintaining inference efficiency closer to traditional dense retrieval models, thanks to its bi-encoder architecture and recent optimizations in indexing and search. In this paper, we introduce a novel architecture and a training framework to support long context windows and multilingual retrieval. Leveraging Matryoshka Representation Loss, we further demonstrate that reducing the embedding dimensionality from 128 to 64 has an insignificant impact on the model’s retrieval performance while cutting storage requirements by up to 50%. Our new model, Jina-ColBERT-v2, demonstrates strong performance across a range of English and multilingual retrieval tasks.
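
To make the two ideas in this abstract concrete, here is a minimal sketch of late interaction (MaxSim) scoring combined with Matryoshka-style truncation from 128 to 64 dimensions. Shapes and values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def maxsim(Q: np.ndarray, D: np.ndarray) -> float:
    """Late-interaction score: for each query token, take its best cosine
    similarity against any document token, then sum over query tokens."""
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)
    return float((Qn @ Dn.T).max(axis=1).sum())

rng = np.random.default_rng(0)
Q = rng.standard_normal((6, 128))    # query token embeddings (illustrative)
D = rng.standard_normal((120, 128))  # document token embeddings

full_score = maxsim(Q, D)
# Matryoshka-style truncation: keep only the leading 64 dimensions,
# halving index storage; the Matryoshka loss trains these prefixes
# to remain useful on their own.
half_score = maxsim(Q[:, :64], D[:, :64])
print(full_score, half_score)
```

Because maxsim renormalizes its inputs, the truncated prefixes are rescaled before scoring, which is the usual way truncated Matryoshka embeddings are compared.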

2023

Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models
Michael Günther | Louis Milliken | Jonathan Geuter | Georgios Mastrapas | Bo Wang | Han Xiao
Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)

Jina Embeddings constitutes a set of high-performance sentence embedding models adept at translating textual inputs into numerical representations, capturing the semantics of the text. These models excel in applications like dense retrieval and semantic textual similarity. This paper details the development of Jina Embeddings, starting with the creation of high-quality pairwise and triplet datasets. It underlines the crucial role of data cleaning in dataset preparation, offers in-depth insights into the model training process, and concludes with a comprehensive performance evaluation using the Massive Text Embedding Benchmark (MTEB). Furthermore, to increase the model’s awareness of grammatical negation, we construct a novel training and evaluation dataset of negated and non-negated statements, which we make publicly available to the community.
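
As an illustration of the dense-retrieval use case (including the negation issue raised above), the sketch below ranks a tiny corpus by cosine similarity. The checkpoint id is an assumption (one of the released Jina v1 models on Hugging Face); any sentence-transformers-compatible embedding model would work the same way.

```python
# Hedged sketch: the model id below is an assumption, not verified here.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("jinaai/jina-embedding-b-en-v1")  # assumed id

corpus = [
    "The hotel allows pets.",
    "The hotel does not allow pets.",  # negation: similar wording, opposite meaning
    "Breakfast is served from 7 to 10.",
]
query = "Can I bring my dog?"

doc_vecs = model.encode(corpus, normalize_embeddings=True)
q_vec = model.encode(query, normalize_embeddings=True)

scores = doc_vecs @ q_vec  # cosine similarity on normalized vectors
for idx in np.argsort(-scores):
    print(f"{scores[idx]:.3f}  {corpus[idx]}")
```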