Haoyu Kuang
2024
SoMeLVLM: A Large Vision Language Model for Social Media Processing
Xinnong Zhang | Haoyu Kuang | Xinyi Mou | Hanjia Lyu | Kun Wu | Siming Chen | Jiebo Luo | Xuanjing Huang | Zhongyu Wei
Findings of the Association for Computational Linguistics: ACL 2024
The growth of social media, characterized by its multimodal nature, has given rise to diverse phenomena and challenges, calling for an effective approach to handle automated tasks uniformly. Powerful Large Vision Language Models make it possible to handle a variety of tasks simultaneously, but even with carefully designed prompting methods, general-domain models often fall short of aligning with the unique speaking style and context of social media tasks. In this paper, we introduce a Large Vision Language Model for Social Media Processing (SoMeLVLM), a cognitive framework equipped with five key capabilities: knowledge & comprehension, application, analysis, evaluation, and creation. SoMeLVLM is designed to understand and generate realistic social media behavior. We develop a 654k multimodal social media instruction-tuning dataset to support this cognitive framework and fine-tune our model. Our experiments demonstrate that SoMeLVLM achieves state-of-the-art performance on multiple social media tasks, and further analysis shows its significant advantages over baselines in terms of cognitive abilities.
2023
Unleashing the Power of Language Models in Text-Attributed Graph
Haoyu Kuang | Jiarong Xu | Haozhe Zhang | Zuyu Zhao | Qi Zhang | Xuanjing Huang | Zhongyu Wei
Findings of the Association for Computational Linguistics: EMNLP 2023
Representation learning on graphs has been demonstrated to be a powerful tool for solving real-world problems. Among different types of graphs, text-attributed graphs carry both semantic and structural information. Existing works have paved the way for knowledge extraction from this type of data by leveraging language models, graph neural networks, or a combination of the two. However, these works suffer from issues such as underutilization of the relationships between nodes or words, or unaffordable memory costs. In this paper, we propose a Node Representation Update Pre-training Architecture based on Co-modeling Text and Graph (NRUP). In NRUP, we construct a hierarchical text-attributed graph that incorporates both original nodes and word nodes. Meanwhile, we apply four self-supervised tasks at different levels of the constructed graph. We further design the pre-training framework to update node features during training epochs. We conduct experiments on the benchmark dataset ogbn-arxiv. Our method outperforms the baselines, fully demonstrating its validity and generalization.
Co-authors
- Xuanjing Huang 2
- Zhongyu Wei 2
- Jiarong Xu 1
- Haozhe Zhang 1
- Zuyu Zhao 1