@inproceedings{schwarzenberg-etal-2019-neural,
    title = "Neural Vector Conceptualization for Word Vector Space Interpretation",
    author = "Schwarzenberg, Robert  and
      Raithel, Lisa  and
      Harbecke, David",
    editor = "Rogers, Anna  and
      Drozd, Aleksandr  and
      Rumshisky, Anna  and
      Goldberg, Yoav",
    booktitle = "Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for {NLP}",
    month = jun,
    year = "2019",
    address = "Minneapolis, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W19-2001/",
    doi = "10.18653/v1/W19-2001",
    pages = "1--7",
    abstract = "Distributed word vector spaces are considered hard to interpret, which hinders the understanding of natural language processing (NLP) models. In this work, we introduce a new method to interpret arbitrary samples from a word vector space. To this end, we train a neural model to conceptualize word vectors, which means that it activates higher-order concepts it recognizes in a given vector. Contrary to prior approaches, our model operates in the original vector space and is capable of learning non-linear relations between word vectors and concepts. Furthermore, we show that it produces considerably less entropic concept activation profiles than the popular cosine similarity."
}
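The abstract describes training a neural model that maps a word vector, in its original embedding space, to activations over higher-order concepts. Below is a minimal PyTorch sketch of such a conceptualizer; the layer sizes, concept count, and activation choice are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of neural vector conceptualization: a small
# feed-forward net that maps a word vector (in its original space)
# to a concept activation profile. All hyperparameters below are
# placeholder assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class NeuralVectorConceptualizer(nn.Module):
    def __init__(self, embed_dim: int = 300, num_concepts: int = 1000):
        super().__init__()
        # Non-linear mapping lets the model capture relations between
        # word vectors and concepts that cosine similarity cannot.
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_concepts),
        )

    def forward(self, word_vecs: torch.Tensor) -> torch.Tensor:
        # Sigmoid so each concept activates independently in [0, 1],
        # yielding an activation profile per input vector.
        return torch.sigmoid(self.net(word_vecs))

model = NeuralVectorConceptualizer()
vec = torch.randn(1, 300)   # stand-in for a 300-d word2vec vector
activations = model(vec)    # concept activation profile
print(activations.shape)    # torch.Size([1, 1000])
```

In this sketch the model would be trained on word-concept supervision (e.g., with a binary cross-entropy loss over concept memberships), so that at inference time it conceptualizes arbitrary points of the vector space, not just vectors of known words.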