Lujun Zhang


2023

System Report for CCL23-Eval Task 9: HUST1037 Explore Proper Prompt Strategy for LLM in MRC Task
Xiao Liu | Junfeng Yu | Yibo He | Lujun Zhang | Kaiyichen Wei | Hongbo Sun | Gang Tu
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)

“Our research paper delves into the Adversarial Robustness Evaluation for Chinese Gaokao Reading Comprehension (GCRC advRobust). While Chinese reading comprehension tasks have gained significant attention in recent years, previous methods have not proven effective for this challenging dataset. We focus on exploring how prompt engineering can impact a model’s reading comprehension ability. Through our experiments using ChatGLM, GPT3.5, and GPT4, we discovered a correlation between prompt and LLM reading comprehension ability, and found that prompt engineering improves the performance of each model. Our team submitted the results of our system evaluation, which ranked first in three indexes and total scores.”

Keywords: LLM, Prompt, Chinese Reading Comprehension