Xiaojun Ma
2025
SheetDesigner: MLLM-Powered Spreadsheet Layout Generation with Rule-Based and Vision-Based Reflection
Qin Chen | Yuanyi Ren | Xiaojun Ma | Mugeng Liu | Shi Han | Dongmei Zhang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Spreadsheets are critical to data-centric tasks, with rich, structured layouts that enable efficient information transmission. Given the time and expertise required for manual spreadsheet layout design, there is an urgent need for automated solutions. However, existing automated layout models are ill-suited to spreadsheets, as they often (1) treat components as axis-aligned rectangles with continuous coordinates, overlooking the inherently discrete, grid-based structure of spreadsheets; and (2) neglect interrelated semantics, such as data dependencies and contextual links, unique to spreadsheets. In this paper, we first formalize the spreadsheet layout generation task, supported by a seven-criterion evaluation protocol and a dataset of 3,326 spreadsheets. We then introduce SheetDesigner, a zero-shot, training-free framework using Multimodal Large Language Models (MLLMs) that combines rule-based and vision-based reflection for component placement and content population. SheetDesigner outperforms five baselines by at least 22.6%. We further find that, through the vision modality, MLLMs handle overlap and balance well but struggle with alignment, necessitating hybrid rule-based and visual reflection strategies. Our code and data are available on GitHub.
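The rule-based reflection the abstract mentions lends itself to a small illustration. The Python sketch below is a hypothetical reading of such a check on a discrete grid; the component fields, the specific criteria, and the messages are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a rule-based layout check in the spirit of rule
# reflection on a spreadsheet grid; names and criteria are illustrative
# assumptions, not SheetDesigner's implementation.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    row: int     # top-left row (1-based, as in spreadsheets)
    col: int     # top-left column (1-based)
    height: int  # rows spanned
    width: int   # columns spanned

def overlaps(a: Component, b: Component) -> bool:
    """Grid-aligned rectangles overlap iff they intersect on both axes."""
    return (a.row < b.row + b.height and b.row < a.row + a.height and
            a.col < b.col + b.width and b.col < a.col + a.width)

def rule_reflection(components: list[Component]) -> list[str]:
    """Collect human-readable violations that could be fed back to the MLLM."""
    issues = []
    # Overlap: no two components may occupy the same cells.
    for i, a in enumerate(components):
        for b in components[i + 1:]:
            if overlaps(a, b):
                issues.append(f"{a.name} overlaps {b.name}")
    # Alignment (illustrative): flag left edges that are off by one column,
    # the kind of near-miss the abstract says vision alone tends to accept.
    cols = sorted({c.col for c in components})
    for x, y in zip(cols, cols[1:]):
        if y - x == 1:
            issues.append(f"left edges at columns {x} and {y} are off by one")
    return issues
```

For example, `rule_reflection([Component("Title", 1, 1, 1, 6), Component("Table", 3, 2, 10, 5)])` reports no overlap but flags the one-column misalignment, which a reflection loop could then ask the model to repair.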
Large Language Models for Predictive Analysis: How Far Are They?
Qin Chen | Yuanyi Ren | Xiaojun Ma | Yuyang Shi
Findings of the Association for Computational Linguistics: ACL 2025
Predictive analysis is a cornerstone of modern decision-making, with applications in various domains. Large Language Models (LLMs) have emerged as powerful tools for nuanced, knowledge-intensive conversations, aiding complex decision-making tasks. With the burgeoning expectation of harnessing LLMs for predictive analysis, there is an urgent need to systematically assess their capability in this domain; however, existing studies offer no relevant evaluation. To bridge this gap, we introduce the PredictiQ benchmark, which integrates 1,130 sophisticated predictive analysis queries drawn from 44 real-world datasets across 8 diverse fields. We design an evaluation protocol covering text analysis, code generation, and their alignment. Twelve renowned LLMs are evaluated, offering insights into their practical use in predictive analysis.
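A minimal sketch of how a protocol with these three dimensions might be recorded and aggregated is given below in Python; the field names and the plain averaging scheme are assumptions, not the benchmark's actual scoring.

```python
# Hypothetical per-query record for a PredictiQ-style evaluation; the fields
# and the averaging are illustrative assumptions, not the published protocol.
from dataclasses import dataclass

@dataclass
class QueryResult:
    text_score: float  # quality of the textual analysis (0-1)
    code_score: float  # correctness of the generated code (0-1)
    alignment: float   # agreement between the text and the code (0-1)

def aggregate(results: list[QueryResult]) -> dict[str, float]:
    """Average each evaluation dimension over all benchmark queries."""
    n = len(results)
    return {
        "text": sum(r.text_score for r in results) / n,
        "code": sum(r.code_score for r in results) / n,
        "alignment": sum(r.alignment for r in results) / n,
    }
```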
2014
Twitter User Gender Inference Using Combined Analysis of Text and Image Processing
Shigeyuki Sakaki | Yasuhide Miura | Xiaojun Ma | Keigo Hattori | Tomoko Ohkuma
Proceedings of the Third Workshop on Vision and Language
Co-authors
- Qin Chen (陈琴) 2
- Yuanyi Ren 2
- Shi Han 1
- Keigo Hattori 1
- Mugeng Liu 1