Huy Quoc To
2024
DeakinNLP at BioLaySumm: Evaluating Fine-tuning Longformer and GPT-4 Prompting for Biomedical Lay Summarization
Huy Quoc To | Ming Liu | Guangyan Huang
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
This paper presents our approaches for the BioLaySumm 2024 Shared Task. We evaluate two methods for generating lay summaries from biomedical articles: (1) fine-tuning the Longformer-Encoder-Decoder (LED) model, and (2) zero-shot and few-shot prompting with GPT-4. In the fine-tuning approach, we fine-tune the LED model on two datasets individually: PLOS and eLife. This is done under two settings: one using 50% of the training data, and the other using the full 100%. We then compare both fine-tuned models against GPT-4 with zero-shot and few-shot prompting. The experimental results show that fine-tuning with 100% of the training data outperforms GPT-4 prompting. However, under data-scarce conditions, prompting GPT-4 appears to be the better option.
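A minimal sketch of the fine-tuning approach described in the abstract, using Hugging Face transformers. The checkpoint name, file paths, field names ("article", "lay_summary"), and hyperparameters below are illustrative assumptions, not the authors' released configuration.

```python
# Sketch of fine-tuning LED for biomedical lay summarization.
# Assumptions (not from the paper): the "allenai/led-base-16384" checkpoint,
# local JSON training files, and all hyperparameter values below.
from datasets import load_dataset
from transformers import (
    DataCollatorForSeq2Seq,
    LEDForConditionalGeneration,
    LEDTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "allenai/led-base-16384"
tokenizer = LEDTokenizer.from_pretrained(checkpoint)
model = LEDForConditionalGeneration.from_pretrained(checkpoint)

# The paper fine-tunes on PLOS and eLife separately; swap in the other
# dataset file to reproduce the second run.
data = load_dataset("json", data_files={"train": "plos_train.json"})

def preprocess(batch):
    # Truncate long biomedical articles to a manageable input window and
    # tokenize the lay summaries as decoder targets.
    model_inputs = tokenizer(batch["article"], max_length=4096, truncation=True)
    labels = tokenizer(batch["lay_summary"], max_length=512, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_set = data["train"].map(
    preprocess, batched=True, remove_columns=data["train"].column_names
)
# For the paper's 50% setting, subsample the training split, e.g.:
# train_set = train_set.select(range(len(train_set) // 2))

args = Seq2SeqTrainingArguments(
    output_dir="led-biolaysumm",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=3e-5,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_set,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```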
2021
Monolingual versus multilingual BERTology for Vietnamese extractive multi-document summarization
Huy Quoc To | Kiet Van Nguyen | Ngan Luu-Thuy Nguyen | Anh Gia-Tuan Nguyen
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation