Po-Chun Hsu
2025
Enhancing Function-Calling Capabilities in LLMs: Strategies for Prompt Formats, Data Integration, and Multilingual Translation
Yi-Chang Chen | Po-Chun Hsu | Chan-Jan Hsu | Da-shan Shiu
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)
Large language models (LLMs) have significantly advanced autonomous agents, particularly in zero-shot tool usage, also known as function calling. This research delves into enhancing the function-calling capabilities of LLMs by exploring different approaches, including prompt formats for integrating function descriptions, blending function-calling and instruction-following data, introducing a novel Decision Token for conditional prompts, leveraging chain-of-thought reasoning, and overcoming multilingual challenges with a translation pipeline. Our key findings and contributions are as follows: (1) Instruction-following data improves both function-calling accuracy and relevance detection. (2) The use of the newly proposed Decision Token, combined with synthetic non-function-call data, enhances relevance detection. (3) A tailored translation pipeline effectively overcomes multilingual limitations, demonstrating significant improvements in Traditional Chinese. These insights highlight the potential for improved function-calling capabilities and multilingual applications in LLMs.
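The abstract does not reproduce the paper's actual prompt format, but the Decision Token idea can be sketched roughly as follows: the model is asked to emit an explicit call/no-call token before generating anything else, so relevance detection becomes a one-token conditional decision. Everything below (the token strings, `build_prompt`, the sample function schema) is an illustrative assumption, not the authors' implementation.

```python
import json

# Hypothetical decision tokens: the model emits one of these first,
# conditioning the rest of its output on whether a function call is
# relevant. The paper's actual tokens and format may differ.
DECISION_CALL = "<|use_function|>"
DECISION_NO_CALL = "<|no_function|>"

def build_prompt(user_query: str, functions: list[dict]) -> str:
    """Assemble a function-calling prompt that asks the model to emit
    a decision token before any function call or free-form answer."""
    tool_block = json.dumps(functions, indent=2)
    return (
        "You have access to the following functions:\n"
        f"{tool_block}\n\n"
        f"First output {DECISION_CALL} if a function call is needed, "
        f"or {DECISION_NO_CALL} if the query should be answered directly.\n\n"
        f"User: {user_query}\nAssistant:"
    )

# Example: a single weather-lookup function in JSON Schema style.
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

print(build_prompt("What's the weather in Taipei?", functions))
```

Making the call/no-call choice an explicit first token is what the abstract credits for improved relevance detection, particularly when training also includes synthetic non-function-call examples that teach the model when to emit the no-call token.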
RAD-Bench: Evaluating Large Language Models’ Capabilities in Retrieval Augmented Dialogues
Tzu-Lin Kuo | FengTing Liao | Mu-Wei Hsieh | Fu-Chieh Chang | Po-Chun Hsu | Da-shan Shiu
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)
Co-authors
- Da-shan Shiu 2
- Fu-Chieh Chang 1
- Yi-Chang Chen 1
- Mu-Wei Hsieh 1
- Chan-Jan Hsu 1