Latent Semantic Analysis Models on Wikipedia and TASA

Dan Ștefănescu, Rajendra Banjade, Vasile Rus


Abstract
This paper introduces a collection of freely available Latent Semantic Analysis (LSA) models built on the entire English Wikipedia and on the TASA corpus. The models differ not only in their source, Wikipedia versus TASA, but also in the linguistic items they focus on: all words, content words only, nouns and verbs only, and main concepts. Generating such models from large datasets such as Wikipedia, which provide broad coverage of the vocabulary in actual use, is computationally challenging, which is why large LSA models are rarely available. Our experiments show that for the task of word-to-word similarity, the scores assigned by these models correlate strongly with human judgments, outperforming many other frequently used measures and remaining comparable to the state of the art.
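In LSA models of this kind, word-to-word similarity is conventionally scored as the cosine between the words' reduced-dimensionality vectors. The sketch below illustrates that standard computation only; the vectors, words, and dimensionality are invented for illustration and are not taken from the paper's models.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0  # a zero vector has no direction; treat as dissimilar
    return dot / (norm_u * norm_v)

# Toy 3-dimensional vectors standing in for a real LSA space
# (actual LSA models typically use a few hundred dimensions).
car = [0.80, 0.10, 0.20]
automobile = [0.70, 0.20, 0.25]
banana = [0.05, 0.90, 0.10]

# Related words should score higher than unrelated ones.
print(cosine_similarity(car, automobile))
print(cosine_similarity(car, banana))
```

In an evaluation like the one in the paper, such scores for word pairs would be correlated (e.g. with Pearson's or Spearman's coefficient) against human similarity ratings.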
Anthology ID:
L14-1349
Volume:
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Month:
May
Year:
2014
Address:
Reykjavik, Iceland
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
1417–1422
URL:
http://www.lrec-conf.org/proceedings/lrec2014/pdf/403_Paper.pdf
Cite (ACL):
Dan Ștefănescu, Rajendra Banjade, and Vasile Rus. 2014. Latent Semantic Analysis Models on Wikipedia and TASA. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1417–1422, Reykjavik, Iceland. European Language Resources Association (ELRA).
Cite (Informal):
Latent Semantic Analysis Models on Wikipedia and TASA (Ștefănescu et al., LREC 2014)