Da-Chen Lian
2026
When Structure Matters: Cross-Lingual Hyperbolic Embeddings for Chinese and English Wordnets
Mao-Chang Ku | Da-Chen Lian | Pin-Er Chen | Po-Ya Angela Wang | Wei-Ling Chen | Shu-Kai Hsieh
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Hyperbolic embeddings such as the Poincaré model effectively represent lexical hierarchies with low distortion, yet their cross-lingual generalizability remains largely unexplored. This study investigates cross-lingual transfer by training 20-dimensional Poincaré embeddings exclusively on Open English WordNet (OEWN) hypernymy relations and evaluating on aligned Chinese Wordnet (CWN) synsets under a vocabulary-constrained transfer setting, where CWN-relevant synsets appear in OEWN training data but no Chinese-language supervision is used. We report robust statistical evidence based on the final 10 training checkpoints: Poincaré embeddings achieve 2.57× higher Mean Reciprocal Rank (MRR) than Euclidean embeddings on CWN (0.030 ± 0.001 vs 0.012 ± 0.000, p < 0.001, Cohen’s d = 34.48) and 5.61× higher on OEWN (0.016 ± 0.000 vs 0.003 ± 0.000, p < 0.001, d = 42.48). Furthermore, hierarchical filtering leveraging the radial dimension of hyperbolic space provides substantial additional gains: +74.6% MRR improvement on CWN and +25.8% on OEWN (both p < 0.001). The model achieves higher absolute performance on the zero-shot CWN test set (MRR = 0.052 ± 0.002) than on the in-domain OEWN test set (MRR = 0.020 ± 0.001). We attribute this to structural alignment: CWN’s broader branching factor (4.32 vs 1.10) and moderate depth naturally suit hyperbolic geometry’s capacity to compactly represent hierarchies. Our findings demonstrate that geometric properties learned from English hypernymy transfer robustly across languages when semantic structures align. We release the aligned CWN–OEWN hypernymy evaluation dataset and complete evaluation framework to facilitate future research on geometry-based cross-lingual semantic modeling.
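As a minimal sketch of the geometry behind these numbers (an illustration, not the authors' released code), the snippet below implements the standard Poincaré-ball distance and one plausible form of the radial hierarchical filter: in a trained Poincaré model, more general synsets sit closer to the origin, so candidates lying farther from the origin than the query can be demoted when ranking hypernyms. The fixed demotion penalty and the MRR helper are illustrative assumptions.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance in the Poincare ball (both points must have norm < 1)."""
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq_diff / denom))

def rank_candidates(query: np.ndarray,
                    candidates: list[np.ndarray],
                    radial_filter: bool = True) -> list[int]:
    """Rank candidate hypernyms by hyperbolic distance to the query.

    With radial_filter on, candidates farther from the origin than the query
    (i.e. apparently deeper in the hierarchy) are demoted, since hypernyms
    should lie closer to the origin than their hyponyms.
    """
    q_radius = np.linalg.norm(query)
    scored = []
    for i, c in enumerate(candidates):
        d = poincare_distance(query, c)
        if radial_filter and np.linalg.norm(c) >= q_radius:
            d += 1e6  # demote hierarchically implausible candidates
        scored.append((d, i))
    return [i for _, i in sorted(scored)]

def mean_reciprocal_rank(gold: list[int], rankings: list[list[int]]) -> float:
    """MRR over queries: mean of 1 / (1-based rank of the gold candidate)."""
    return float(np.mean([1.0 / (r.index(g) + 1) for g, r in zip(gold, rankings)]))
```

Both the distance and the norm test use only the trained vectors themselves, which is consistent with the paper's vocabulary-constrained setting: no language-specific supervision enters the ranking step.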
2025
LOBSTER: Linguistics Olympiad Benchmark for Structured Evaluation on Reasoning
Da-Chen Lian | Ri-Sheng Huang | Pin-Er Chen | Chunki Lim | You-Kuan Lin | Guan-Yu Tseng | Zhen-Yu Lin | Pin-Cheng Chen | Shu-Kai Hsieh
Proceedings of the 37th Conference on Computational Linguistics and Speech Processing (ROCLING 2025)
We propose the Linguistics Olympiad Benchmark for Structured Evaluation on Reasoning, or LOBSTER, a linguistically informed benchmark designed to evaluate large language models (LLMs) on complex linguistic puzzles from the International Linguistics Olympiad (IOL). Unlike prior benchmarks that focus solely on final-answer accuracy, our benchmark provides concrete evaluation protocols and rich typological metadata across over 90 low-resource and cross-cultural languages alongside the puzzles. Through systematic evaluations of state-of-the-art models' multilingual abilities, we demonstrate that LLMs struggle with low-resource languages, underscoring the need for such a benchmark. Experiments with various models on our benchmark show that IOL problems remain challenging for reasoning models, though there are ways to enhance performance: for example, iterative reasoning outperforms single-pass approaches in both final answers and explanations. Our benchmark offers a comprehensive foundation for advancing linguistically grounded, culturally informed, and cognitively plausible reasoning in LLMs.
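To make the iterative-reasoning finding concrete, here is a minimal draft-critique-revise loop of the kind that could realize it. This is a sketch under stated assumptions: query_model is a hypothetical prompt-to-text callable, and the prompts shown are placeholders rather than LOBSTER's actual evaluation protocol.

```python
from typing import Callable

def solve_iteratively(puzzle: str,
                      query_model: Callable[[str], str],
                      max_rounds: int = 3) -> str:
    """One way to realize iterative reasoning on an IOL-style puzzle.

    query_model is a hypothetical callable mapping a prompt to a model
    response; the real prompts and scoring live in the benchmark itself.
    """
    # Single-pass baseline: stop after this first call.
    answer = query_model(
        "Solve this linguistics puzzle step by step, then give the final "
        f"answer and the rules you inferred:\n{puzzle}"
    )
    # Iterative variant: alternate self-critique and revision.
    for _ in range(max_rounds - 1):
        critique = query_model(
            "List any inconsistencies between this solution and the data "
            f"given in the puzzle:\n{puzzle}\n\nSolution:\n{answer}"
        )
        answer = query_model(
            "Revise the solution to resolve the critique.\n"
            f"Puzzle:\n{puzzle}\n\nSolution:\n{answer}\n\nCritique:\n{critique}"
        )
    return answer
```

The single-pass baseline is just the first call; the extra critique-and-revise rounds are where, per the abstract, both final answers and explanations improve.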
2024
The Semantic Relations in LLMs: An Information-theoretic Compression Approach
Yu-Hsiang Tseng | Pin-Er Chen | Da-Chen Lian | Shu-Kai Hsieh
Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024
Compressibility is closely related to predictability from an information-theoretic viewpoint. Because large language models (LLMs) are trained to maximize the conditional probabilities of upcoming words, they may capture the subtle semantic constraints underlying texts, and texts that align with the encoded constraints should be more compressible than those that do not. This paper systematically tests whether and how LLMs can act as compressors of semantic pairs. Using semantic relations from the English and Chinese Wordnets, we empirically demonstrate that texts with correct semantic pairings are more compressible than those with incorrect ones, as measured by the proposed compression advantages index. We also show, with the Pythia model suite and a model fine-tuned on Chinese Wordnet, that compression capacity is modulated by the data the model has seen. These findings are consistent with the view that LLMs encode semantic knowledge as underlying constraints learned from texts and can act as compressors of semantic information, and potentially of other structured knowledge.
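The link between prediction and compression is Shannon's source-coding bound: a text's ideal code length under a language model is -log2 p(text) bits, so more predictable strings are more compressible. The sketch below computes that code length with a Pythia model via Hugging Face transformers and a simple difference-based advantage score; the paper's compression advantages index may be defined differently, so treat compression_advantage here as an illustrative assumption, not the paper's exact metric.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/pythia-70m"  # smallest member of the Pythia suite
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def code_length_bits(text: str) -> float:
    """Ideal Shannon code length of `text` under the LM: -log2 p(text), in bits."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean NLL per predicted token, in nats
    n_predicted = ids.shape[1] - 1      # labels are shifted by one internally
    return loss.item() * n_predicted / math.log(2)

def compression_advantage(correct: str, incorrect: str) -> float:
    """Illustrative advantage score: positive when the correctly paired text
    is more compressible (needs fewer bits) than the incorrect pairing."""
    return code_length_bits(incorrect) - code_length_bits(correct)

# Example with a correct vs. mismatched hypernymy pairing (illustrative inputs):
print(compression_advantage("A dog is a kind of animal.",
                            "A dog is a kind of furniture."))
```

Because code length is derived directly from the model's conditional probabilities, any difference between paired and mispaired texts reflects semantic constraints the model has internalized from its training data, which is the quantity the paper probes across the Pythia suite.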