Mert Inan
2022
Modeling Intensification for Sign Language Generation: A Computational Approach
Mert Inan | Yang Zhong | Sabit Hassan | Lorna Quandt | Malihe Alikhani
Findings of the Association for Computational Linguistics: ACL 2022
End-to-end sign language generation models do not accurately represent the prosody in sign language. A lack of temporal and spatial variations leads to poor-quality generated presentations that confuse human interpreters. In this paper, we aim to improve the prosody in generated sign languages by modeling intensification in a data-driven manner. We present different strategies grounded in the linguistics of sign language that inform how intensity modifiers can be represented in gloss annotations. To employ our strategies, we first annotate a subset of the benchmark PHOENIX-14T, a German Sign Language dataset, with different levels of intensification. We then use a supervised intensity tagger to extend the annotated dataset and obtain labels for the remaining portion of it. This enhanced dataset is then used to train state-of-the-art transformer models for sign language generation. We find that our efforts in intensification modeling yield better results when evaluated with automatic metrics. Human evaluation also indicates a higher preference for the videos generated using our model.
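The pipeline described above (annotate a subset, train a supervised intensity tagger, then label the rest of the corpus) can be illustrated with a minimal sketch. The classifier choice, the gloss strings, and the three intensity levels below are illustrative assumptions, not the authors' actual model or data.

```python
# Minimal sketch of a supervised intensity tagger, assuming hypothetical
# gloss strings annotated with intensity levels (0 = none, 1 = moderate, 2 = strong).
# Illustration only; the paper's tagger and features may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical seed annotations standing in for the manually labeled PHOENIX-14T subset.
seed_glosses = ["REGEN STARK", "REGEN", "WIND SEHR STARK", "SONNE"]
seed_labels = [2, 0, 2, 0]

tagger = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-gram features
    LogisticRegression(max_iter=1000),
)
tagger.fit(seed_glosses, seed_labels)

# Extend intensity labels to the unannotated remainder of the corpus.
unlabeled = ["WIND STARK", "SONNE SCHOEN"]
print(list(zip(unlabeled, tagger.predict(unlabeled))))
```

The predicted labels would then be merged back into the gloss annotations before training the sign language generation model.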
Zero-shot Cross-Linguistic Learning of Event Semantics
Malihe Alikhani | Thomas Kober | Bashar Alhafni | Yue Chen | Mert Inan | Elizabeth Nielsen | Shahab Raji | Mark Steedman | Matthew Stone
Proceedings of the 15th International Conference on Natural Language Generation
2021
COSMic: A Coherence-Aware Generation Metric for Image Descriptions
Mert Inan | Piyush Sharma | Baber Khalid | Radu Soricut | Matthew Stone | Malihe Alikhani
Findings of the Association for Computational Linguistics: EMNLP 2021
Developers of text generation models rely on automated evaluation metrics as a stand-in for slow and expensive manual evaluations. However, image captioning metrics have struggled to give accurate learned estimates of the semantic and pragmatic success of output text. We address this weakness by introducing the first discourse-aware learned generation metric for evaluating image descriptions. Our approach is inspired by computational theories of discourse for capturing information goals using coherence. We present a dataset of image–description pairs annotated with coherence relations. We then train a coherence-aware metric on a subset of the Conceptual Captions dataset and measure its effectiveness—its ability to predict human ratings of output captions—on a test set composed of out-of-domain images. On the outputs of several state-of-the-art coherence-aware caption generation models, our proposed metric achieves a higher Kendall correlation coefficient with human judgments than a number of other metrics, including the recently proposed learned metrics BLEURT and BERTScore.
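The evaluation protocol in the last sentence, correlating a metric's caption scores with human ratings via Kendall's tau, can be sketched as follows. The scores below are made-up placeholders, not results from the paper.

```python
# Minimal sketch: rank correlation between hypothetical metric scores and human ratings.
from scipy.stats import kendalltau

human_ratings = [4.0, 2.5, 3.5, 1.0, 4.5]       # hypothetical human judgments per caption
metric_scores = [0.81, 0.40, 0.66, 0.22, 0.90]  # hypothetical learned-metric outputs

tau, p_value = kendalltau(human_ratings, metric_scores)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```

A higher tau indicates that the metric ranks captions more consistently with human raters.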