Janghan Yoon
2025
Are Any-to-Any Models More Consistent Across Modality Transfers Than Specialists?
Jiwan Chung | Janghan Yoon | Junhyeong Park | Sangeyl Lee | Joowon Yang | Sooyeon Park | Youngjae Yu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Any-to-any generative models aim to enable seamless interpretation and generation across multiple modalities within a unified framework, yet their ability to preserve relationships across modalities remains uncertain. Do unified models truly achieve cross-modal coherence, or is this coherence merely perceived? To explore this, we introduce ACON, a dataset of 1,000 images (500 newly contributed) paired with captions, editing instructions, and Q&A pairs to evaluate cross-modal transfers rigorously. Using three consistency criteria—cyclic consistency, forward equivariance, and conjugated equivariance—our experiments reveal that any-to-any models do not consistently demonstrate greater cross-modal consistency than specialized models in pointwise evaluations such as cyclic consistency. However, equivariance evaluations uncover weak but observable consistency through structured analyses of the intermediate latent space enabled by multiple editing operations. We release our code and data at https://github.com/JiwanChung/ACON.
C2: Scalable Auto-Feedback for LLM-based Chart Generation
Woosung Koh | Janghan Yoon | MinHyung Lee | Youngjin Song | Jaegwan Cho | Jaehyun Kang | Taehyeon Kim | Se-Young Yun | Youngjae Yu | Bongshin Lee
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)