Mining Tweets that refer to TV programs with Deep Neural Networks

Takeshi Kobayakawa, Taro Miyazaki, Hiroki Okamoto, Simon Clippingdale


Abstract
The automatic analysis of expressions of opinion has been well studied in opinion mining, but robustness to user-generated text remains a problem. Consumer-generated texts are valuable because they contain a large number and wide variety of user evaluations, yet inconsistent spelling and diverse phrasing make them difficult to analyze. To address this, we apply a model reported to capture context well across many natural language processing tasks to the problem of extracting references to the opinion target from text. Experiments on tweets that refer to television programs show that the model can extract such references with more than 90% accuracy.
Anthology ID: D19-5517
Volume: Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)
Month: November
Year: 2019
Address: Hong Kong, China
Editors: Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
Venue: WNUT
Publisher: Association for Computational Linguistics
Pages: 126–130
URL: https://aclanthology.org/D19-5517
DOI: 10.18653/v1/D19-5517
Cite (ACL): Takeshi Kobayakawa, Taro Miyazaki, Hiroki Okamoto, and Simon Clippingdale. 2019. Mining Tweets that refer to TV programs with Deep Neural Networks. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 126–130, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal): Mining Tweets that refer to TV programs with Deep Neural Networks (Kobayakawa et al., WNUT 2019)
PDF: https://preview.aclanthology.org/nschneid-patch-3/D19-5517.pdf