Learning from Domain Complexity

Robert Remus, Dominique Ziegelmayer


Abstract
Sentiment analysis is genre and domain dependent, i.e. the same method performs differently when applied to text that originates from different genres and domains. Intuitively, this is due to different language use in different genres and domains. We measure such differences in a sentiment analysis gold standard dataset that contains texts from 1 genre and 10 domains. Differences in language use are quantified using certain language statistics, viz. domain complexity measures. We investigate 4 domain complexity measures: percentage of rare words, word richness, relative entropy and corpus homogeneity. We relate domain complexity measurements to performance of a standard machine learning-based classifier and find strong correlations. We show that we can accurately estimate its performance based on domain complexity using linear regression models fitted using robust loss functions. Moreover, we illustrate how domain complexity may guide us in model selection, viz. in deciding what word n-gram order to employ in a discriminative model and whether to employ aggressive or conservative word n-gram feature selection.
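The abstract names three measures that can be computed directly from token counts: percentage of rare words, word richness, and relative entropy against a background corpus. The sketch below is an illustrative re-implementation guessed from the measure names only; the thresholds, the type-token-ratio definition of richness, and the unsmoothed KL divergence are assumptions, not the authors' exact formulations.

```python
from collections import Counter
import math

def domain_complexity(tokens, background_counts=None, rare_threshold=2):
    """Simple domain complexity measures over a token list.

    Illustrative sketch; definitions are assumptions inferred from the
    measure names in the abstract, not the paper's exact formulas.
    """
    counts = Counter(tokens)
    n = len(tokens)
    # Percentage of rare words: share of tokens whose type occurs fewer
    # than `rare_threshold` times (the threshold is an assumption).
    rare_pct = sum(c for c in counts.values() if c < rare_threshold) / n
    # Word richness, here taken as the type-token ratio
    # (vocabulary size divided by corpus size).
    richness = len(counts) / n
    result = {"rare_pct": rare_pct, "richness": richness}
    # Relative entropy (KL divergence, base 2) of the domain's unigram
    # distribution against a background distribution, if one is given.
    # Words absent from the background are skipped for simplicity;
    # a real implementation would smooth both distributions.
    if background_counts is not None:
        bg_n = sum(background_counts.values())
        kl = 0.0
        for word, c in counts.items():
            p = c / n
            q = background_counts.get(word, 0) / bg_n
            if q > 0:
                kl += p * math.log(p / q, 2)
        result["rel_entropy"] = kl
    return result
```

Measures like these are cheap to compute per domain, which is what makes them usable as predictors in the regression setup the abstract describes.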
Anthology ID:
L14-1405
Volume:
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Month:
May
Year:
2014
Address:
Reykjavik, Iceland
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
2021–2028
URL:
http://www.lrec-conf.org/proceedings/lrec2014/pdf/480_Paper.pdf
Cite (ACL):
Robert Remus and Dominique Ziegelmayer. 2014. Learning from Domain Complexity. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2021–2028, Reykjavik, Iceland. European Language Resources Association (ELRA).
Cite (Informal):
Learning from Domain Complexity (Remus & Ziegelmayer, LREC 2014)