Abstract
This paper describes my solution to SemEval-2020 Task 7: Assessing the Funniness of Edited News Headlines. I propose a Siamese Transformer-based approach, coupled with a global attention mechanism, that uses contextual embeddings and focus words to generate features that are fed to a 2-layer perceptron to rate the funniness of the edited headline. I detail various experiments that show the performance of the system. The proposed approach outperforms a baseline Bi-LSTM architecture, finished 5th (out of 49 teams) in sub-task 1 and 4th (out of 32 teams) in sub-task 2 of the competition, and was the best non-ensemble model in both tasks.
- Anthology ID:
- 2020.semeval-1.134
- Volume:
- Proceedings of the Fourteenth Workshop on Semantic Evaluation
- Month:
- December
- Year:
- 2020
- Address:
- Barcelona (online)
- Editors:
- Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
- Venue:
- SemEval
- SIG:
- SIGLEX
- Publisher:
- International Committee for Computational Linguistics
- Pages:
- 1026–1032
- URL:
- https://aclanthology.org/2020.semeval-1.134
- DOI:
- 10.18653/v1/2020.semeval-1.134
- Cite (ACL):
- Pramodith Ballapuram. 2020. LMML at SemEval-2020 Task 7: Siamese Transformers for Rating Humor in Edited News Headlines. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1026–1032, Barcelona (online). International Committee for Computational Linguistics.
- Cite (Informal):
- LMML at SemEval-2020 Task 7: Siamese Transformers for Rating Humor in Edited News Headlines (Ballapuram, SemEval 2020)
- PDF:
- https://preview.aclanthology.org/gem-23-ingestion/2020.semeval-1.134.pdf
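The architecture named in the abstract can be illustrated with a minimal PyTorch sketch. The class name, vocabulary size, and layer dimensions below are hypothetical stand-ins (the actual system builds on pretrained contextual embeddings rather than a randomly initialized encoder), but the overall shape follows the description: a shared (Siamese) encoder processes both headlines, a global attention layer pools token embeddings into a sentence vector, and a 2-layer perceptron regresses the funniness score.

```python
import torch
import torch.nn as nn

class SiameseHumorScorer(nn.Module):
    """Hypothetical sketch of a Siamese Transformer humor regressor."""

    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        # One encoder applied to both inputs = shared (Siamese) weights.
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Global attention: one scalar score per token, softmax-normalized.
        self.attn = nn.Linear(d_model, 1)
        # 2-layer perceptron head over the concatenated sentence vectors.
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, 1),
        )

    def pool(self, ids):
        h = self.encoder(self.embed(ids))        # contextual token embeddings
        w = torch.softmax(self.attn(h), dim=1)   # attention weight per token
        return (w * h).sum(dim=1)                # attention-weighted sentence vector

    def forward(self, original_ids, edited_ids):
        v = torch.cat([self.pool(original_ids), self.pool(edited_ids)], dim=-1)
        return self.mlp(v).squeeze(-1)           # predicted funniness score

# Usage: score a batch of two (original, edited) headline pairs of length 10.
model = SiameseHumorScorer()
orig = torch.randint(0, 1000, (2, 10))
edit = torch.randint(0, 1000, (2, 10))
scores = model(orig, edit)
print(scores.shape)  # torch.Size([2])
```

In a Siamese setup like this, applying the same encoder to both headlines keeps the two representations in a shared space, so the perceptron head can learn from the contrast between the original and edited versions.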