DesignCLIP: Multimodal Learning with CLIP for Design Patent Understanding

Zhu Wang, Homaira Huda Shomee, Sathya N. Ravi, Sourav Medya


Abstract
In the field of design patent analysis, traditional tasks such as patent classification and patent image retrieval heavily depend on image data. However, patent images, which typically consist of sketches capturing abstract and structural elements of an invention, often fall short in conveying comprehensive visual context and semantic information. This inadequacy can lead to ambiguities in evaluation during prior art searches. Recent advancements in vision-language models, such as CLIP, offer promising opportunities for more reliable and accurate AI-driven patent analysis. In this work, we leverage CLIP models to develop a unified framework, DesignCLIP, for design patent applications, built on a large-scale dataset of U.S. design patents. To address the unique characteristics of patent data, DesignCLIP incorporates class-aware classification and contrastive learning, utilizing detailed generated captions for patent images and multi-view image learning. We validate the effectiveness of DesignCLIP across various downstream tasks, including patent classification and patent retrieval. Additionally, we explore multimodal patent retrieval, which has the potential to enhance creativity and innovation in design by offering more diverse sources of inspiration. Our experiments show that DesignCLIP consistently outperforms baseline and state-of-the-art models in the patent domain on all tasks. Our findings underscore the promise of multimodal approaches in advancing patent analysis. The codebase is available at: https://github.com/AI4Patents/DesignCLIP.
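The abstract mentions two training signals, CLIP-style contrastive learning over generated captions and a class-aware classification objective, without giving implementation details. The following is a minimal illustrative sketch only, not the authors' code: the loss functions, temperature value, and the use of class-description prompts are assumptions made for illustration, and the encoders, caption generation, and multi-view handling are omitted.

# Sketch (PyTorch) of a CLIP-style symmetric contrastive loss plus a
# hypothetical class-aware loss that scores images against text embeddings
# of class prompts. Not the paper's implementation.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/caption embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature           # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def class_aware_loss(img_emb, class_txt_emb, labels, temperature=0.07):
    """Classify each patent image against text embeddings of class prompts
    (assumed here to be design-class descriptions)."""
    img_emb = F.normalize(img_emb, dim=-1)
    class_txt_emb = F.normalize(class_txt_emb, dim=-1)
    logits = img_emb @ class_txt_emb.t() / temperature     # (B, num_classes)
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for CLIP encoder outputs.
B, D, C = 8, 512, 33
img, cap, cls_txt = torch.randn(B, D), torch.randn(B, D), torch.randn(C, D)
labels = torch.randint(0, C, (B,))
loss = clip_contrastive_loss(img, cap) + class_aware_loss(img, cls_txt, labels)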
Anthology ID:
2025.findings-emnlp.553
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10468–10490
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.553/
DOI:
10.18653/v1/2025.findings-emnlp.553
Cite (ACL):
Zhu Wang, Homaira Huda Shomee, Sathya N. Ravi, and Sourav Medya. 2025. DesignCLIP: Multimodal Learning with CLIP for Design Patent Understanding. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 10468–10490, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
DesignCLIP: Multimodal Learning with CLIP for Design Patent Understanding (Wang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.553.pdf
Checklist:
2025.findings-emnlp.553.checklist.pdf