Abstract
This paper presents our two deep learning-based approaches for subtask 1 of the Chemotimelines 2024 shared task. The first uses a fine-tuning strategy on a relatively small general-domain Masked Language Model (MLM), with additional normalization steps obtained through a simple Large Language Model (LLM) prompting technique. The second is an LLM-based approach combining advanced automated prompt search with few-shot in-context learning using the DSPy framework. Our results confirm the continued relevance of the smaller fine-tuned MLM. They also suggest that the automated few-shot LLM approach can perform close to the fine-tuning-based method without the extra LLM normalization step, and can be advantageous when access to data is scarce. Finally, we hint at the possibility of trading off fewer training examples against lower computing resource requirements when choosing between the two methods.
- Anthology ID: 2024.clinicalnlp-1.39
- Volume: Proceedings of the 6th Clinical Natural Language Processing Workshop
- Month: June
- Year: 2024
- Address: Mexico City, Mexico
- Editors: Tristan Naumann, Asma Ben Abacha, Steven Bethard, Kirk Roberts, Danielle Bitterman
- Venues: ClinicalNLP | WS
- Publisher: Association for Computational Linguistics
- Pages: 406–416
- URL: https://aclanthology.org/2024.clinicalnlp-1.39
- DOI: 10.18653/v1/2024.clinicalnlp-1.39
- Cite (ACL): Nesrine Bannour, Judith Jeyafreeda Andrew, and Marc Vincent. 2024. Team NLPeers at Chemotimelines 2024: Evaluation of two timeline extraction methods, can generative LLM do it all or is smaller model fine-tuning still relevant ?. In Proceedings of the 6th Clinical Natural Language Processing Workshop, pages 406–416, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal): Team NLPeers at Chemotimelines 2024: Evaluation of two timeline extraction methods, can generative LLM do it all or is smaller model fine-tuning still relevant ? (Bannour et al., ClinicalNLP-WS 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.clinicalnlp-1.39.pdf