Hailong Yang
Also published as: 海龙 杨
2025
Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models
Tongxuan Liu | Wenjiang Xu | Weizhe Huang | Yuting Zeng | Jiaxing Wang | Xingyu Wang | Hailong Yang | Jing Li
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, but their performance in complex logical reasoning tasks remains unsatisfactory. Although some prompting methods, such as Chain-of-Thought, can improve the reasoning ability of LLMs to some extent, they suffer from an unfaithfulness issue in which the derived conclusions may not align with the generated reasoning chain. To address this issue, some studies employ propositional logic to further enhance the logical reasoning abilities of LLMs. However, potential omissions in the extraction of logical expressions in these methods can cause information loss in the logical reasoning process, thereby producing incorrect results. To this end, we propose Logic-of-Thought (LoT) prompting, which employs propositional logic to generate expanded logical information descriptions and uses them as an additional augmentation of the original context, thereby ensuring information completeness and enhancing logical reasoning ability. LoT is orthogonal to existing prompting methods and can be seamlessly integrated with them. Extensive experiments demonstrate that LoT boosts the performance of various prompting methods by a striking margin across five logical reasoning tasks. In particular, LoT enhances Chain-of-Thought's performance on the ReClor dataset by +4.35%, improves Chain-of-Thought with Self-Consistency's performance on the RuleTaker dataset by +3.52%, and boosts the performance of Tree-of-Thoughts on the ProofWriter dataset by +8%.
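As a rough illustration of the idea described in the abstract, the hypothetical Python sketch below expands a hand-extracted set of implications with contraposition and transitivity and appends a verbalised description to the original context. The helper names, the toy propositions, and the verbalisation format are assumptions for illustration, not the authors' implementation.

```python
from itertools import product

def negate(p):
    """Return the propositional negation, collapsing double negation."""
    return p[4:] if p.startswith("not ") else f"not {p}"

def expand_implications(implications):
    """Close a set of implications (p -> q) under contraposition and transitivity."""
    closed = set(implications)
    changed = True
    while changed:
        changed = False
        snapshot = list(closed)
        # Contraposition: (p -> q) yields (not q -> not p).
        for p, q in snapshot:
            contra = (negate(q), negate(p))
            if contra not in closed:
                closed.add(contra)
                changed = True
        # Transitivity: (p -> q) and (q -> r) yield (p -> r).
        for (p, q), (q2, r) in product(snapshot, repeat=2):
            if q == q2 and p != r and (p, r) not in closed:
                closed.add((p, r))
                changed = True
    return closed

def augment_context(context, implications):
    """Verbalise the expanded implications and append them to the original context."""
    extra = ". ".join(f"if {p} then {q}" for p, q in sorted(expand_implications(implications)))
    return f"{context}\nExtended logical information: {extra}."

# Toy usage with two hand-extracted implications.
context = "Anyone who keeps promises is trustworthy. Trustworthy people are respected."
implications = [
    ("someone keeps promises", "they are trustworthy"),
    ("they are trustworthy", "they are respected"),
]
print(augment_context(context, implications))
```

In this sketch the augmented context, rather than a formal proof, is what gets passed to the LLM, which is the sense in which the abstract describes LoT as orthogonal to prompting methods such as Chain-of-Thought.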
2023
CCL23-Eval 任务6系统报告:基于预训练语言模型的双策略分类优化算法 (System Report for CCL23-Eval Task 6: Double-strategy Classification Optimization Algorithm Based on a Pre-trained Language Model)
Yongqing Huang (黄永清) | Hailong Yang (杨海龙) | Fu Xuelin (傅薛林)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
The classification of fraud cases is a key step in combating telecom and online fraud. Categorizing cases by fraud method and technique makes it easier to compile statistics on the current situation, helps public security departments grasp the distribution characteristics of telecom and online fraud cases, and in turn enables targeted prevention, supervision, deterrence, and investigation measures for each category. Fraud case classification is a text classification task in natural language processing. Traditional classification models based on LSTM and CNN achieve some effect, but the limited parameter capacity of their architectures makes it difficult to reach ideal performance. This paper builds on the pre-trained language model Nezha and combines adversarial perturbation with an exponential moving average strategy, which helps the telecom and online fraud case classification task achieve better results and makes full use of the fraud case data. Our team did not use multi-model ensembling and ultimately ranked third in this evaluation task with an evaluation score of 0.8625.
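For intuition only, the following hypothetical PyTorch sketch shows the two training strategies the abstract mentions: FGM-style adversarial perturbation on the embedding layer and an exponential moving average (EMA) of model weights. A generic toy classifier stands in for the Nezha encoder, and all names, shapes, and hyperparameters are illustrative assumptions, not the team's code.

```python
import torch
import torch.nn as nn

class FGM:
    """Fast Gradient Method: perturb embedding weights along the gradient direction."""
    def __init__(self, model, emb_name="embedding", epsilon=1.0):
        self.model, self.emb_name, self.epsilon = model, emb_name, epsilon
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0:
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

class EMA:
    """Keep an exponential moving average of parameters for evaluation."""
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = {n: p.data.clone() for n, p in model.named_parameters() if p.requires_grad}

    def update(self, model):
        for n, p in model.named_parameters():
            if n in self.shadow:
                self.shadow[n].mul_(self.decay).add_(p.data, alpha=1 - self.decay)

# Toy model and one training step illustrating the combination.
model = nn.Sequential(nn.Embedding(100, 32), nn.Flatten(), nn.Linear(32 * 8, 14))
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
fgm, ema = FGM(model, emb_name="0.weight"), EMA(model)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 100, (4, 8))       # batch of token ids
labels = torch.randint(0, 14, (4,))          # fraud-category labels
loss_fn(model(tokens), labels).backward()    # gradients for the clean batch
fgm.attack()                                 # perturb the embedding weights
loss_fn(model(tokens), labels).backward()    # accumulate adversarial gradients
fgm.restore()                                # restore original embeddings
opt.step(); opt.zero_grad(); ema.update(model)
```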
Co-authors
- Yongqing Huang (黄永清) 1
- Weizhe Huang 1
- Jing Li (李婧) 1
- Tongxuan Liu 1
- Jiaxing Wang 1