Suraj Pandey
2022
Combining Structured and Unstructured Knowledge in an Interactive Search Dialogue System
Svetlana Stoyanchev | Suraj Pandey | Simon Keizer | Norbert Braunschweiler | Rama Sanand Doddipatla
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Users of interactive search dialogue systems specify their preferences with natural language utterances. However, a schema-driven system is limited to handling preferences that correspond to the predefined database content. In this work, we present a methodology for extending a schema-driven interactive search dialogue system with the ability to handle unconstrained user preferences. Using unsupervised semantic similarity metrics and the text snippets associated with the search items, the system identifies suitable items for the user’s unconstrained natural language query. In a crowd-sourced evaluation, users chat with our extended restaurant search system. Based on objective metrics and subjective user ratings, we demonstrate the feasibility of using an unsupervised, low-latency approach to extend a schema-driven search dialogue system to handle unconstrained user preferences.
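As a rough illustration of the matching step described in the abstract, the sketch below ranks restaurant text snippets against an unconstrained user preference with an off-the-shelf sentence-embedding model and cosine similarity. The model name, the snippet texts, and the ranking function are assumptions for illustration; they are not the paper's exact similarity metric or data.

```python
# Hypothetical sketch: rank item snippets against an unconstrained user query
# with an unsupervised semantic similarity score (embedding cosine similarity).
from sentence_transformers import SentenceTransformer, util

# Assumed model choice; any pretrained sentence encoder could stand in.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Illustrative restaurant items and their associated text snippets.
snippets = {
    "Bella Roma": "Cosy trattoria with wood-fired pizza and outdoor seating.",
    "Green Leaf": "Plant-based cafe known for vegan brunch and fresh juices.",
    "Spice Route": "Family-run Indian restaurant with a long vegetarian menu.",
}

def rank_items(query: str, top_k: int = 2):
    """Return the top-k items whose snippets are most similar to the query."""
    names = list(snippets)
    query_emb = model.encode(query, convert_to_tensor=True)
    snippet_embs = model.encode([snippets[n] for n in names], convert_to_tensor=True)
    scores = util.cos_sim(query_emb, snippet_embs)[0]  # one score per snippet
    return sorted(zip(names, scores.tolist()), key=lambda x: -x[1])[:top_k]

print(rank_items("somewhere romantic with vegan options"))
```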
2020
SemEval-2020 Task 9: Overview of Sentiment Analysis of Code-Mixed Tweets
Parth Patwa | Gustavo Aguilar | Sudipta Kar | Suraj Pandey | Srinivas PYKL | Björn Gambäck | Tanmoy Chakraborty | Thamar Solorio | Amitava Das
Proceedings of the Fourteenth Workshop on Semantic Evaluation
In this paper, we present the results of the SemEval-2020 Task 9 on Sentiment Analysis of Code-Mixed Tweets (SentiMix 2020). We also release and describe our Hinglish (Hindi-English) and Spanglish (Spanish-English) corpora annotated with word-level language identification and sentence-level sentiment labels. These corpora comprise 20K and 19K examples, respectively. The sentiment labels are Positive, Negative, and Neutral. SentiMix attracted 89 submissions in total, including 61 teams that participated in the Hinglish contest and 28 systems submitted to the Spanglish competition. The best performance achieved was a 75.0% F1 score for Hinglish and 80.6% F1 for Spanglish. We observe that BERT-like models and ensemble methods are the most common and successful approaches among the participants.
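To make the task setup concrete, the toy baseline below trains a bag-of-words classifier on a handful of invented code-mixed examples with the task's three sentence-level labels and reports F1. The example texts, labels, and model are placeholders for illustration only; participant systems were typically BERT-like models or ensembles, not this sketch.

```python
# Illustrative baseline for a SentiMix-style setup: sentence-level sentiment
# (positive / negative / neutral) over code-mixed text, scored with F1.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Hypothetical code-mixed tweets; not drawn from the released corpora.
train_texts = [
    "yeh movie bahut achhi thi, loved it",
    "la comida was terrible, never again",
    "meeting kal hai at 10 am",
    "what a great din tha yaar",
]
train_labels = ["positive", "negative", "neutral", "positive"]

test_texts = ["esta serie is amazing", "bura experience tha"]
test_labels = ["positive", "negative"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
preds = clf.predict(test_texts)

# Systems in the shared task were ranked by F1 over the three labels.
print("weighted F1:", f1_score(test_labels, preds, average="weighted"))
```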