Ashwin Ittoo


2021

A Comprehensive Comparison of Word Embeddings in Event & Entity Coreference Resolution
Judicael Poumay | Ashwin Ittoo
Findings of the Association for Computational Linguistics: EMNLP 2021

Coreference Resolution is an important NLP task, and most state-of-the-art methods rely on word embeddings for word representation. However, one issue that has been largely overlooked in the literature is how different embeddings compare in performance, both within and across families. We therefore frame our study in the context of Event and Entity Coreference Resolution (EvCR & EnCR) and address two questions: 1) Is there a trade-off between performance (predictive and run-time) and embedding size? 2) How do the embeddings compare within and across families? Our experiments reveal several interesting findings. First, we observe diminishing returns in performance with respect to embedding size: for example, a model using solely a character embedding achieves 86% of the performance of the largest model (ELMo, GloVe, character) while being 1.2% of its size. Second, the larger models that use multiple embeddings learn faster despite being slower per epoch, though they remain slower at test time. Finally, ELMo performs best on both EvCR and EnCR, while GloVe and FastText are the strongest alternatives on EvCR and EnCR, respectively.
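As a concrete illustration of the kind of model compared in this abstract, the sketch below shows how coreference systems commonly concatenate embeddings from several families (a character-level CNN, a GloVe-style lookup table, and a contextual encoder) into a single word representation. This is a minimal, hypothetical PyTorch example, not the paper's implementation; all dimensions, and the BiLSTM standing in for an ELMo-style encoder, are assumptions.

import torch
import torch.nn as nn

class MultiEmbeddingEncoder(nn.Module):
    """Illustrative word encoder concatenating three embedding families."""
    def __init__(self, vocab_size=10000, char_vocab=100,
                 glove_dim=300, char_dim=50, ctx_dim=1024):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, glove_dim)   # GloVe-style lookup
        self.char_emb = nn.Embedding(char_vocab, char_dim)    # character embeddings
        self.char_cnn = nn.Conv1d(char_dim, char_dim, kernel_size=3, padding=1)
        # BiLSTM as a stand-in for an ELMo-style contextual encoder (assumption).
        self.ctx = nn.LSTM(glove_dim, ctx_dim // 2,
                           bidirectional=True, batch_first=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, max_word_len)
        w = self.word_emb(word_ids)                                # (B, T, glove_dim)
        b, t, c = char_ids.shape
        ch = self.char_emb(char_ids.view(b * t, c))                # (B*T, L, char_dim)
        ch = self.char_cnn(ch.transpose(1, 2)).max(dim=2).values   # pool over characters
        ch = ch.view(b, t, -1)                                     # (B, T, char_dim)
        ctx, _ = self.ctx(w)                                       # (B, T, ctx_dim)
        return torch.cat([w, ch, ctx], dim=-1)                     # concatenated families

# Usage on toy inputs:
enc = MultiEmbeddingEncoder()
words = torch.randint(0, 10000, (2, 7))        # batch of 2 sentences, 7 tokens each
chars = torch.randint(0, 100, (2, 7, 12))      # up to 12 characters per token
print(enc(words, chars).shape)                 # torch.Size([2, 7, 1374])

Dropping families from the concatenation (e.g. keeping only the character branch) is how one would probe the size/performance trade-off the abstract describes.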

2010

On Learning Subtypes of the Part-Whole Relation: Do Not Mix Your Seeds
Ashwin Ittoo | Gosse Bouma
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics