Abstract
Representing a word by its co-occurrences with other words in context is an effective way to capture the meaning of the word. However, the theory behind this approach remains a challenge. In this work, using a word classification task as an example, we give a theoretical analysis of approaches that represent a word X by a function f(P(C|X)), where C is a context feature, P(C|X) is the conditional probability estimated from a text corpus, and the function f maps the co-occurrence measure to a prediction score. We investigate the impact of the context feature C and the function f. We also explain why using the co-occurrences with multiple context features may be better than using a single one. In addition, based on the analysis, we propose a hypothesis about the conditional probability on zero-probability events.
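To make the representation concrete, below is a minimal sketch, not the paper's actual setup: the toy corpus, the same-sentence context window, the chosen context features, and the linear weights are all illustrative assumptions. It estimates P(C|X) from raw co-occurrence counts and maps it through a simple linear f to a prediction score.

```python
# Sketch: represent a word X by conditional probabilities P(C|X) of context
# features C estimated from a toy corpus, then map them through a simple
# function f to a prediction score. Illustrative only, not the paper's method.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "stocks rose on the market today",
]

# Count co-occurrences of each word X with context words C in the same sentence.
cooc = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, x in enumerate(tokens):
        for j, c in enumerate(tokens):
            if i != j:
                cooc[x][c] += 1

def p_context_given_word(x, c):
    """Estimate P(C=c | X=x) from raw co-occurrence counts."""
    total = sum(cooc[x].values())
    return cooc[x][c] / total if total else 0.0

# A hypothetical f: a linear score over a few hand-picked context features,
# standing in for the mapping f(P(C|X)) discussed in the abstract.
context_features = ["sat", "market", "mat"]
weights = {"sat": 1.0, "market": -1.0, "mat": 0.5}  # illustrative weights only

def score(x):
    return sum(weights[c] * p_context_given_word(x, c) for c in context_features)

for word in ["cat", "stocks"]:
    print(word, round(score(word), 3))
```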
- Anthology ID: C18-1241
- Volume: Proceedings of the 27th International Conference on Computational Linguistics
- Month: August
- Year: 2018
- Address: Santa Fe, New Mexico, USA
- Editors: Emily M. Bender, Leon Derczynski, Pierre Isabelle
- Venue: COLING
- Publisher: Association for Computational Linguistics
- Pages: 2846–2854
- URL: https://aclanthology.org/C18-1241
- Cite (ACL): Yanpeng Li. 2018. Learning Features from Co-occurrences: A Theoretical Analysis. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2846–2854, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
- Cite (Informal): Learning Features from Co-occurrences: A Theoretical Analysis (Li, COLING 2018)
- PDF: https://preview.aclanthology.org/naacl24-info/C18-1241.pdf