Xiaodong Gu
2022
Building Joint Relationship Attention Network for Image-Text Generation
Changzhi Wang | Xiaodong Gu
Proceedings of the 29th International Conference on Computational Linguistics
Attention-based methods for image-text generation often attend to visual features individually, ignoring the relationship information among image features that provides important guidance for generating sentences. To alleviate this issue, we propose the Joint Relationship Attention Network (JRAN), which explores the relationships among image features in a novel way. Specifically, unlike previous relationship-based approaches that explore only a single relationship in the image, JRAN effectively learns two relationships, the visual relationships among region features and the visual-semantic relationships between region features and semantic features, and dynamically trades off between them when producing the relationship representation. Moreover, we devise a new relationship-based attention that adaptively focuses on the output relationship representation when predicting different words. Extensive experiments on the large-scale MSCOCO and small-scale Flickr30k datasets show that JRAN achieves state-of-the-art performance. More remarkably, JRAN reaches new scores of 28.3% BLEU-4 and 58.2% CIDEr on the Flickr30k dataset.
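Below is a minimal PyTorch sketch of the two-relationship idea described in the abstract: one attention module over region features, one between regions and semantic features, and a learned gate for the dynamic trade-off. All names (JointRelationship, gate, the head count, dimensions) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the joint-relationship idea; not the paper's code.
import torch
import torch.nn as nn

class JointRelationship(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Visual relationships among region features (region <-> region).
        self.vis_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Visual-semantic relationships (region <-> semantic features).
        self.sem_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Gate implementing a dynamic trade-off between the two relationships.
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, regions, semantics):
        # regions:   (B, N, dim) region features, e.g. from an object detector
        # semantics: (B, M, dim) semantic/attribute features
        vis, _ = self.vis_attn(regions, regions, regions)
        sem, _ = self.sem_attn(regions, semantics, semantics)
        g = torch.sigmoid(self.gate(torch.cat([vis, sem], dim=-1)))
        # Gated mixture as the output relationship representation: (B, N, dim)
        return g * vis + (1.0 - g) * sem

regions = torch.randn(2, 36, 512)
semantics = torch.randn(2, 10, 512)
rel = JointRelationship(512)(regions, semantics)
print(rel.shape)  # torch.Size([2, 36, 512])
```

A decoder could then apply a second, relationship-based attention over this representation at each word-prediction step, as the abstract describes.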
Continuous Decomposition of Granularity for Neural Paraphrase Generation
Xiaodong Gu | Zhaowei Zhang | Sang-Woo Lee | Kang Min Yoo | Jung-Woo Ha
Proceedings of the 29th International Conference on Computational Linguistics
While Transformers have had significant success in paragraph generation, they treat sentences as linear sequences of tokens and often neglect their hierarchical information. Prior work has shown that decomposing the levels of granularity (e.g., word, phrase, or sentence) for input tokens produces substantial improvements, suggesting the possibility of enhancing Transformers via more fine-grained modeling of granularity. In this work, we present continuous decomposition of granularity for neural paraphrase generation (C-DNPG): an advanced extension of multi-head self-attention with 1) a granularity head that automatically infers the hierarchical structure of a sentence by neurally estimating the granularity level of each input token, and 2) two novel attention masks, namely granularity resonance and granularity scope, to efficiently encode granularity into attention. Experiments on two benchmarks, Quora question pairs and Twitter URLs, show that C-DNPG outperforms baseline models by a significant margin. Qualitative analysis reveals that C-DNPG indeed effectively captures fine-grained levels of granularity.
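The following is a minimal, hypothetical PyTorch sketch of granularity-aware self-attention in the spirit of the abstract: a granularity head scores each token, and the pairwise scores are turned into a soft attention mask. The resonance-style formula below is an illustrative assumption; the paper defines its own resonance and scope masks.

```python
# Hypothetical sketch of granularity-aware self-attention; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GranularityAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.gran_head = nn.Linear(dim, 1)  # estimates a granularity level per token

    def forward(self, x):
        # x: (B, T, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        g = torch.sigmoid(self.gran_head(x))            # (B, T, 1), 0=coarse .. 1=fine
        # Resonance-style soft mask: tokens at similar granularity attend more.
        resonance = 1.0 - (g - g.transpose(1, 2)).abs()  # (B, T, T)
        scores = q @ k.transpose(1, 2) / (x.size(-1) ** 0.5)
        attn = F.softmax(scores, dim=-1) * resonance
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-9)  # renormalize
        return attn @ v

x = torch.randn(2, 16, 64)
out = GranularityAttention(64)(x)
print(out.shape)  # torch.Size([2, 16, 64])
```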
2014
Reducing Over-Weighting in Supervised Term Weighting for Sentiment Analysis
Haibing Wu | Xiaodong Gu
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers