@inproceedings{wang-etal-2022-rovist,
    title = "{R}o{V}i{ST}: Learning Robust Metrics for Visual Storytelling",
    author = "Wang, Eileen  and
      Han, Caren  and
      Poon, Josiah",
    editor = "Carpuat, Marine  and
      de Marneffe, Marie-Catherine  and
      Meza Ruiz, Ivan Vladimir",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.findings-naacl.206/",
    doi = "10.18653/v1/2022.findings-naacl.206",
    pages = "2691--2702",
    abstract = "Visual storytelling (VST) is the task of generating a story paragraph that describes a given image sequence. Most existing storytelling approaches have evaluated their models using traditional natural language generation metrics like BLEU or CIDEr. However, such metrics based on $n$-gram matching tend to have poor correlation with human evaluation scores and do not explicitly consider other criteria necessary for storytelling such as sentence structure or topic coherence. Moreover, a single score is not enough to assess a story as it does not inform us about what specific errors were made by the model. In this paper, we propose 3 evaluation metric sets that analyse the aspects we would look for in a good story: 1) visual grounding, 2) coherence, and 3) non-redundancy. We measure the reliability of our metric sets by analysing their correlation with human judgement scores on a sample of machine stories obtained from 4 state-of-the-art models trained on the Visual Storytelling Dataset (VIST). Our metric sets outperform other metrics on human correlation and can serve as a learning-based evaluation metric set that is complementary to existing rule-based metrics."
}