Yin Luo
2024
Dual Complex Number Knowledge Graph Embeddings
Yao Dong | Qingchao Kong | Lei Wang | Yin Luo
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Knowledge graph embedding, which aims to learn representations of entities and relations in large-scale knowledge graphs, plays a crucial role in various downstream applications. The performance of knowledge graph embedding models depends mainly on their ability to model relation patterns, such as symmetry/antisymmetry, inversion and composition (commutative and non-commutative composition). Most existing methods fail to model non-commutative composition patterns. Several methods support this pattern by modeling in quaternion space or the dihedral group. However, extending to such sophisticated spaces leads to a substantial increase in the number of parameters, which greatly reduces parameter efficiency. In this paper, we propose a new knowledge graph embedding method called dual complex number knowledge graph embeddings (DCNE), which maps entities to the dual complex number space and represents relations as rotations in 2D space via dual complex number multiplication. The non-commutativity of dual complex number multiplication enables DCNE to model non-commutative composition patterns. At the same time, modeling relations as rotations in 2D space effectively improves parameter efficiency. Extensive experiments on multiple benchmark knowledge graphs empirically show that DCNE achieves strong performance on link prediction and path query answering.
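To make the role of non-commutativity concrete, the sketch below implements one standard convention for the anti-commutative dual complex number product (ε² = 0, εi = -iε), writing each element as a pair of complex numbers p + qε. This is an illustrative assumption about the underlying algebra, not the paper's scoring function or implementation.

```python
# Minimal sketch (not the authors' code) of a dual complex number product,
# assuming the anti-commutative convention eps**2 = 0 and eps*i = -i*eps,
# which gives (p1 + q1*eps)(p2 + q2*eps) = p1*p2 + (p1*q2 + q1*conj(p2))*eps.

from typing import NamedTuple

class DualComplex(NamedTuple):
    p: complex  # ordinary complex part (a unit p with q = 0 is a 2D rotation)
    q: complex  # dual (eps) part

    def __mul__(self, other: "DualComplex") -> "DualComplex":
        # eps**2 = 0 removes the q1*q2 term; eps*z = conj(z)*eps conjugates other.p.
        return DualComplex(
            self.p * other.p,
            self.p * other.q + self.q * other.p.conjugate(),
        )

if __name__ == "__main__":
    r1 = DualComplex(1j, 1 + 0j)
    r2 = DualComplex(1 + 0j, 1 + 0j)
    print(r1 * r2)  # DualComplex(p=1j, q=(1+1j))
    print(r2 * r1)  # DualComplex(p=1j, q=(1-1j))  -> composition order matters
```

When the dual parts are zero the product reduces to ordinary complex multiplication, i.e. pure rotations in 2D, which is consistent with the abstract's point that rotation-based relations keep the parameter count low while the full product remains non-commutative.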
PromISe: Releasing the Capabilities of LLMs with Prompt Introspective Search
Minzheng Wang | Nan Xu | Jiahao Zhao | Yin Luo | Wenji Mao
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
The development of large language models (LLMs) raises the importance of assessing the fairness and completeness of various evaluation benchmarks. Regrettably, these benchmarks predominantly utilize uniform manual prompts, which may not fully capture the expansive capabilities of LLMs and can therefore lead to an underestimation of their performance. To unlock the potential of LLMs, researchers have turned to automated prompt search methods, which employ LLMs as optimizers to discover optimal prompts. However, previous methods generate solutions implicitly, overlooking the underlying thought process and lacking explicit feedback. In this paper, we propose a novel prompt introspective search framework, namely PromISe, to better release the capabilities of LLMs. It converts prompt optimization into an explicit chain of thought through a step-by-step procedure that integrates self-introspection and self-refinement. Extensive experiments, conducted over 73 tasks on two major benchmarks, demonstrate that our proposed PromISe significantly boosts the performance of 12 well-known LLMs compared to the baseline approach. Moreover, our study offers enhanced insights into the interaction between humans and LLMs, potentially serving as a foundation for future designs and implementations.
Keywords: large language models, prompt search, self-introspect, self-refine
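The sketch below illustrates, under stated assumptions, what an introspect-then-refine prompt search loop of the kind described above might look like. The functions `query_llm` and `evaluate`, the prompt templates, and the scoring loop are hypothetical stand-ins for exposition, not the PromISe implementation.

```python
# Hypothetical introspect-and-refine prompt search loop (illustrative only).

from typing import Callable, List, Tuple

def introspective_search(
    query_llm: Callable[[str], str],    # optimizer LLM: prompt text -> response
    evaluate: Callable[[str], float],   # scores a candidate prompt on dev tasks
    seed_prompt: str,
    n_rounds: int = 5,
) -> Tuple[str, float]:
    best_prompt, best_score = seed_prompt, evaluate(seed_prompt)
    history: List[str] = []
    for _ in range(n_rounds):
        # Self-introspect: ask the optimizer LLM to reason step by step about
        # why the current prompt may under-elicit the task model.
        critique = query_llm(
            f"Prompt under test:\n{best_prompt}\n"
            f"Score: {best_score:.3f}\n"
            "Analyse step by step what weaknesses this prompt has."
        )
        # Self-refine: request a revised prompt conditioned on the critique and
        # on earlier attempts, so feedback is explicit rather than implicit.
        candidate = query_llm(
            "Previous attempts:\n" + "\n".join(history) +
            f"\nCritique:\n{critique}\nWrite an improved prompt."
        )
        score = evaluate(candidate)
        history.append(f"{candidate} (score {score:.3f})")
        if score > best_score:
            best_prompt, best_score = candidate, score
    return best_prompt, best_score
```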
Co-authors
- Yao Dong 1
- Qingchao Kong 1
- Lei Wang (王雷) 1
- Minzheng Wang 1
- Nan Xu 1