@inproceedings{shi-etal-2018-learning,
    title = "Learning Visually-Grounded Semantics from Contrastive Adversarial Samples",
    author = "Shi, Haoyue  and
      Mao, Jiayuan  and
      Xiao, Tete  and
      Jiang, Yuning  and
      Sun, Jian",
    editor = "Bender, Emily M.  and
      Derczynski, Leon  and
      Isabelle, Pierre",
    booktitle = "Proceedings of the 27th International Conference on Computational Linguistics",
    month = aug,
    year = "2018",
    address = "Santa Fe, New Mexico, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/iwcs-25-ingestion/C18-1315/",
    pages = "3715--3727",
    abstract = "We study the problem of grounding distributional representations of texts on the visual domain, namely visual-semantic embeddings (VSE for short). Begin with an insightful adversarial attack on VSE embeddings, we show the limitation of current frameworks and image-text datasets (e.g., MS-COCO) both quantitatively and qualitatively. The large gap between the number of possible constitutions of real-world semantics and the size of parallel data, to a large extent, restricts the model to establish a strong link between textual semantics and visual concepts. We alleviate this problem by augmenting the MS-COCO image captioning datasets with textual contrastive adversarial samples. These samples are synthesized using language priors of human and the WordNet knowledge base, and enforce the model to ground learned embeddings to concrete concepts within the image. This simple but powerful technique brings a noticeable improvement over the baselines on a diverse set of downstream tasks, in addition to defending known-type adversarial attacks. Codes are available at \url{https://github.com/ExplorerFreda/VSE-C}."
}