Harshit Gupta
2024
iREL at SemEval-2024 Task 9: Improving Conventional Prompting Methods for Brain Teasers
Harshit Gupta | Manav Chaudhary | Shivansh Subramanian | Tathagata Raha | Vasudeva Varma
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
This paper describes our approach to SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense. The BRAINTEASER task is a multiple-choice question answering benchmark designed to evaluate models' lateral thinking capabilities. It consists of Sentence Puzzle and Word Puzzle subtasks that require models to defy default commonsense associations and exhibit unconventional thinking. We propose a unique strategy to improve the performance of pre-trained language models, notably the Gemini 1.0 Pro model, on both subtasks. We employ static and dynamic few-shot prompting techniques and introduce a model-generated reasoning strategy that uses the LLM's own reasoning capabilities to improve performance. Our approach demonstrated significant improvements, outperforming the baseline models by a considerable margin but falling short of the human annotators, highlighting the efficacy of the proposed strategies.
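The abstract does not spell out how the dynamic few-shot prompting is implemented, but a minimal sketch of one plausible setup is shown below: for each test puzzle, the most similar training puzzles are retrieved and inserted into the prompt as in-context demonstrations before querying Gemini. The sentence-embedding retrieval, the `all-MiniLM-L6-v2` embedder, `k=3`, the prompt wording, and the helper names (`select_examples`, `build_prompt`) are all illustrative assumptions, not the paper's actual configuration.

```python
# Sketch of dynamic few-shot prompting (assumptions noted above):
# the k training puzzles most similar to the test question, by
# sentence-embedding cosine similarity, become in-context examples.
import numpy as np
from sentence_transformers import SentenceTransformer
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
llm = genai.GenerativeModel("gemini-1.0-pro")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def select_examples(question, train_set, k=3):
    """Pick the k training puzzles most similar to the test question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)
    t_vecs = embedder.encode([ex["question"] for ex in train_set],
                             normalize_embeddings=True)
    scores = (t_vecs @ q_vec.T).ravel()  # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [train_set[i] for i in top]

def build_prompt(question, choices, examples):
    """Assemble a multiple-choice prompt with retrieved demonstrations."""
    parts = ["Solve each brain teaser by picking the correct option.\n"]
    for ex in examples:
        parts.append(f"Q: {ex['question']}\nOptions: {ex['choices']}\n"
                     f"A: {ex['answer']}\n")
    parts.append(f"Q: {question}\nOptions: {choices}\nA:")
    return "\n".join(parts)

def answer(question, choices, train_set):
    examples = select_examples(question, train_set)
    response = llm.generate_content(build_prompt(question, choices, examples))
    return response.text.strip()
```

A static few-shot variant would simply fix the demonstration set in advance rather than retrieving it per question; the model-generated reasoning strategy the abstract mentions would additionally have the LLM produce a rationale for each demonstration, which is not shown here.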
2023
Cross-Lingual Fact Checking: Automated Extraction and Verification of Information from Wikipedia using References
Shivansh Subramanian | Ankita Maity | Aakash Jain | Bhavyajeet Singh | Harshit Gupta | Lakshya Khanna | Vasudeva Varma
Proceedings of the 20th International Conference on Natural Language Processing (ICON)