Weizhe Chen
2025
Flaming-hot Initiation with Regular Execution Sampling for Large Language Models
Weizhe Chen | Zhicheng Zhang | Guanlin Liu | Renjie Zheng | Wenlei Shi | Chen Dun | Zheng Wu | Xing Jin | Lin Yan
Findings of the Association for Computational Linguistics: NAACL 2025
Since the release of ChatGPT, large language models (LLMs) have demonstrated remarkable capabilities across various domains. A key challenge in developing these general capabilities is efficiently sourcing diverse, high-quality data. This becomes especially critical in reasoning-related tasks with sandbox checkers, such as math or code, where the goal is to generate correct solutions to specific problems with higher probability. In this work, we introduce Flaming-hot Initiation with Regular Execution (FIRE) sampling, a simple yet highly effective method to efficiently find good responses. Our empirical findings show that FIRE sampling enhances inference-time generation quality and also benefits training in the alignment stage. Furthermore, we explore how FIRE sampling improves performance by promoting diversity and analyze the impact of employing FIRE at different positions within a response.
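As the abstract does not spell out the sampling mechanism, the following is only a minimal sketch of what a FIRE-style decoding loop might look like, assuming (consistent with the method's name) that the initial token is sampled at a high "flaming-hot" temperature and subsequent tokens at a regular temperature. The callable `next_token_logits` and the specific temperature values are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def sample_token(logits, temperature):
    """Sample one token id from logits using temperature-scaled softmax."""
    scaled = logits / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(np.random.choice(len(probs), p=probs))

def fire_style_sample(next_token_logits, max_new_tokens,
                      init_temperature=3.0, regular_temperature=1.0):
    """Sketch of FIRE-style decoding (assumption based on the method's name):
    draw the first token at a high temperature to diversify the start of the
    response, then continue with regular-temperature sampling.

    `next_token_logits(tokens)` is a hypothetical callable that returns the
    model's next-token logits given the tokens generated so far.
    """
    tokens = []
    for step in range(max_new_tokens):
        logits = next_token_logits(tokens)
        temp = init_temperature if step == 0 else regular_temperature
        tokens.append(sample_token(logits, temp))
    return tokens
```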
Gender Bias in Large Language Models across Multiple Languages: A Case Study of ChatGPT
YiTian Ding | Jinman Zhao | Chen Jia | Yining Wang | Zifan Qian | Weizhe Chen | Xingyu Yue
Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)
With the growing deployment of large language models (LLMs) across various applications, assessing the influence of gender biases embedded in LLMs becomes crucial. The topic of gender bias in natural language processing (NLP) has gained considerable attention, particularly in the context of English. Nonetheless, gender bias in languages other than English remains relatively under-explored and insufficiently analyzed. In this work, we examine gender bias in LLM-generated outputs for different languages. We use three measurements: 1) gender bias in selecting descriptive words given a gender-related context; 2) gender bias in selecting gender-related pronouns (she/he) given descriptive words; 3) gender bias in the topics of LLM-generated dialogues. We investigate the outputs of the GPT series of LLMs in various languages using these three measurement methods. Our findings reveal significant gender biases across all the languages we examined.