LSTMs with Attention for Aggression Detection
Nishant Nikhil, Ramit Pahwa, Mehul Kumar Nirala, Rohan Khilnani
Abstract
In this paper, we describe the system submitted by team Nishnik for the shared task on Aggression Identification in Facebook posts and comments. Previous work demonstrates that LSTMs achieve remarkable performance on natural language processing tasks. We deploy an LSTM model with an attention unit on top of it. Our system ranks 6th in the Hindi subtask for Facebook comments and 4th in the Hindi subtask for generalized social media data, and 17th and 10th in the corresponding English subtasks.
- Anthology ID: W18-4406
- Volume: Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)
- Month: August
- Year: 2018
- Address: Santa Fe, New Mexico, USA
- Venue: TRAC
- Publisher: Association for Computational Linguistics
- Pages: 52–57
- URL: https://aclanthology.org/W18-4406
- Cite (ACL): Nishant Nikhil, Ramit Pahwa, Mehul Kumar Nirala, and Rohan Khilnani. 2018. LSTMs with Attention for Aggression Detection. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 52–57, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
- Cite (Informal): LSTMs with Attention for Aggression Detection (Nikhil et al., TRAC 2018)
- PDF: https://preview.aclanthology.org/ingestion-script-update/W18-4406.pdf
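The abstract describes an LSTM classifier with an attention unit over its outputs. Below is a minimal, hypothetical PyTorch sketch of that kind of architecture; the layer sizes, the attention formulation, and the three-way label set (overtly aggressive, covertly aggressive, non-aggressive, as defined by the TRAC shared task) are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of an LSTM-with-attention classifier; hyperparameters
# and the attention scoring are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Attention unit: score each hidden state, softmax over time,
        # then take the weighted sum as the sequence representation.
        self.attn = nn.Linear(hidden_dim, 1)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)          # (batch, seq, embed)
        outputs, _ = self.lstm(embedded)              # (batch, seq, hidden)
        scores = self.attn(outputs).squeeze(-1)       # (batch, seq)
        weights = F.softmax(scores, dim=1)            # attention weights
        context = torch.bmm(weights.unsqueeze(1), outputs).squeeze(1)  # (batch, hidden)
        return self.fc(context)                       # class logits


# Example usage on dummy token ids (3 classes as in the TRAC label set).
model = AttentionLSTMClassifier(vocab_size=10000)
dummy_batch = torch.randint(1, 10000, (4, 20))  # 4 comments, 20 tokens each
logits = model(dummy_batch)
print(logits.shape)  # torch.Size([4, 3])
```

The attention weights give a per-token importance distribution, so the same sketch can also be used to inspect which tokens the model attends to when flagging a comment as aggressive.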