Deepthi Mave
2019
Jointly Learning Author and Annotated Character N-gram Embeddings: A Case Study in Literary Text
Suraj Maharjan | Deepthi Mave | Prasha Shrestha | Manuel Montes | Fabio A. González | Thamar Solorio
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
The way an author presents a story through their writing style has a great impact on whether readers will like the story or not. In this paper, we learn representations for authors of literary texts together with representations for character n-grams annotated with their functional roles. We train a neural character n-gram based language model using an external corpus of literary texts and transfer the learned representations for use in downstream tasks. We show that augmenting the model with knowledge from external works of the authors produces results competitive with other style-based methods for book likability prediction, genre classification, and authorship attribution.
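The abstract above describes jointly learned author and character n-gram embeddings. As a toy illustration only (not the authors' code; the corpus, dimensions, and random initialization are all assumptions), the sketch below extracts overlapping character n-grams and pairs their averaged embeddings with a per-author embedding:

```python
# Illustrative sketch, NOT the paper's implementation: character n-gram
# extraction plus a toy author/n-gram embedding lookup. In the paper these
# tables would be trained jointly with a neural language-model objective.
import numpy as np

def char_ngrams(text, n=3):
    """Return the overlapping character n-grams of `text`."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# Hypothetical toy corpus: one snippet per author.
corpus = {"author_a": "call me ishmael", "author_b": "it was the best of times"}

# Shared n-gram vocabulary across all authors.
vocab = sorted({g for text in corpus.values() for g in char_ngrams(text)})
ngram_to_id = {g: i for i, g in enumerate(vocab)}

rng = np.random.default_rng(0)
dim = 16
# Randomly initialized embedding tables (training is omitted here).
ngram_emb = rng.normal(size=(len(vocab), dim))
author_emb = {a: rng.normal(size=dim) for a in corpus}

def text_vector(author, text):
    """Average the text's n-gram embeddings, concatenate the author embedding."""
    ids = [ngram_to_id[g] for g in char_ngrams(text) if g in ngram_to_id]
    return np.concatenate([ngram_emb[ids].mean(axis=0), author_emb[author]])

vec = text_vector("author_a", corpus["author_a"])  # shape: (2 * dim,)
```

In a trained version, vectors like `vec` would be the style representations transferred to downstream tasks such as likability or genre prediction.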
2018
Language Identification and Analysis of Code-Switched Social Media Text
Deepthi Mave | Suraj Maharjan | Thamar Solorio
Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching
In this paper, we detail our work on comparing different word-level language identification systems for code-switched Hindi-English data and a standard Spanish-English dataset. To this end, we build a new code-switched dataset for Hindi-English. To understand the code-switching patterns in these language pairs, we investigate different code-switching metrics. We find that the CRF model outperforms the neural network-based models by a margin of 2-5 percentage points for Spanish-English and 3-5 percentage points for Hindi-English.
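The abstract above reports that a CRF beats neural models at word-level language identification. A sketch of the kind of per-token feature dictionaries typically fed to such a CRF is shown below; the specific feature names and the example sentence are assumptions, not the paper's actual feature set:

```python
# Illustrative sketch, NOT the paper's system: hand-crafted per-token
# features of the sort commonly used for word-level CRF language
# identification in code-switched text.
def token_features(tokens, i):
    """Build a feature dict for the token at position i, with context."""
    tok = tokens[i]
    feats = {
        "lower": tok.lower(),
        "prefix3": tok[:3].lower(),
        "suffix3": tok[-3:].lower(),
        "is_title": tok.istitle(),
        "has_digit": any(c.isdigit() for c in tok),
    }
    # Neighboring-token context; boundary markers at sentence edges.
    if i > 0:
        feats["prev_lower"] = tokens[i - 1].lower()
    else:
        feats["BOS"] = True
    if i < len(tokens) - 1:
        feats["next_lower"] = tokens[i + 1].lower()
    else:
        feats["EOS"] = True
    return feats

sent = "I love chai in the morning".split()
X = [token_features(sent, i) for i in range(len(sent))]
```

Feature sequences like `X`, paired with per-token language labels, would then be passed to a linear-chain CRF trainer (e.g. a library such as sklearn-crfsuite) to fit the tagger.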
RiTUAL-UH at TRAC 2018 Shared Task: Aggression Identification
Niloofar Safi Samghabadi | Deepthi Mave | Sudipta Kar | Thamar Solorio
Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)
This paper presents our system for “TRAC 2018 Shared Task on Aggression Identification”. Our best systems for the English dataset use a combination of lexical and semantic features. However, for Hindi data using only lexical features gave us the best results. We obtained weighted F1-measures of 0.5921 for the English Facebook task (ranked 12th), 0.5663 for the English Social Media task (ranked 6th), 0.6292 for the Hindi Facebook task (ranked 1st), and 0.4853 for the Hindi Social Media task (ranked 2nd).
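The abstract above reports weighted F1-measures for each subtask. For readers unfamiliar with the metric, here is a minimal self-contained implementation on made-up predictions (the toy labels and data are assumptions, not results from the shared task):

```python
# Illustrative sketch: the weighted F1-measure (per-class F1 averaged with
# class-support weights) used to score the shared task, on toy data.
from collections import Counter

def weighted_f1(gold, pred):
    """Support-weighted mean of per-class F1 scores."""
    support = Counter(gold)
    total = 0.0
    for lab in set(gold):
        tp = sum(1 for g, p in zip(gold, pred) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += support[lab] * f1
    return total / len(gold)

# Hypothetical gold/predicted labels (aggressive vs. non-aggressive classes).
gold = ["OAG", "CAG", "NAG", "NAG"]
pred = ["OAG", "NAG", "NAG", "NAG"]
score = weighted_f1(gold, pred)  # 0.65 on this toy example
```

Weighting by support means frequent classes dominate the average, which matters for skewed aggression datasets where non-aggressive text is the majority class.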
Co-authors
- Thamar Solorio 3
- Suraj Maharjan 2
- Prasha Shrestha 1
- Manuel Montes 1
- Fabio A. González 1