Zhiyue Liu
2025
Task-Specific Information Decomposition for End-to-End Dense Video Captioning
Zhiyue Liu | Xinru Zhang | Jinyuan Liu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Dense video captioning aims to localize events within input videos and generate concise descriptive texts for each event. Advanced end-to-end methods require both tasks to share the same intermediate features that serve as event queries, thereby enabling the two tasks to promote each other. However, relying on shared queries limits the model’s ability to extract task-specific information, as event semantic perception and localization demand distinct perspectives on video understanding. To address this, we propose a decomposed dense video captioning framework that derives localization and captioning queries from event queries, enabling task-specific representations while maintaining inter-task collaboration. Considering the roles of different queries, we design a contrastive semantic optimization strategy that guides localization queries to focus on event-level visual features and captioning queries to align with textual semantics. Moreover, existing methods consider only localization information for label assignment, which fails to ensure the relevance of the selected queries to the descriptions. We jointly consider localization and captioning losses to achieve a semantically balanced assignment process. Extensive experiments on the YouCook2 and ActivityNet Captions datasets demonstrate that our framework achieves state-of-the-art performance.
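As a rough illustration of the semantically balanced assignment idea, the minimal sketch below scores each query–event pair with both a localization cost and a captioning cost before one-to-one Hungarian matching; the cost terms, weights, and function names are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a jointly balanced label assignment (illustrative only; the
# cost terms and weights are assumptions, not the paper's exact formulation).
import numpy as np
from scipy.optimize import linear_sum_assignment

def balanced_assignment(pred_segments, gt_segments, caption_nll, w_loc=1.0, w_cap=1.0):
    """pred_segments: (Q, 2) predicted (start, end) times;
    gt_segments: (E, 2) ground-truth event segments;
    caption_nll: (Q, E) negative log-likelihood of each reference caption
                 decoded from each query's captioning representation."""
    # Localization cost: L1 distance between predicted and ground-truth boundaries.
    loc_cost = np.abs(pred_segments[:, None, :] - gt_segments[None, :, :]).sum(-1)
    # The joint cost weighs localization and captioning, so a selected query must
    # both cover the event and be able to describe it.
    cost = w_loc * loc_cost + w_cap * caption_nll
    q_idx, e_idx = linear_sum_assignment(cost)  # one-to-one Hungarian matching
    return list(zip(q_idx.tolist(), e_idx.tolist()))
```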
2023
RMLM: A Flexible Defense Framework for Proactively Mitigating Word-level Adversarial Attacks
Zhaoyang Wang | Zhiyue Liu | Xiaopeng Zheng | Qinliang Su | Jiahai Wang
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Adversarial attacks on deep neural networks keep raising security concerns in natural language processing research. Existing defenses focus on improving the robustness of the victim model in the training stage but often neglect to proactively mitigate adversarial attacks during inference. To address this overlooked aspect, we propose a defense framework that mitigates attacks by confusing attackers and correcting adversarial contexts caused by malicious perturbations. Our framework comprises three components: (1) a synonym-based transformation that randomly corrupts adversarial contexts at the word level, (2) a BERT-based defender that corrects abnormal contexts at the representation level, and (3) a simple detection method that filters out adversarial examples; any of these can be flexibly combined. Additionally, our framework helps improve the robustness of the victim model during training. Extensive experiments demonstrate the effectiveness of our framework in defending against word-level adversarial attacks.
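As a rough illustration of how the three components could be composed, the sketch below wires a random synonym substitution, a corrector, and a detector into one pipeline; the placeholder functions and their ordering are assumptions for illustration, not the paper's implementation.

```python
# Illustrative composition of the three defense stages (placeholders only; the
# actual framework uses a trained BERT defender and a learned detector).
import random
from typing import Callable, Dict, List, Optional

def synonym_randomize(tokens: List[str], synonyms: Dict[str, List[str]], p: float = 0.3) -> List[str]:
    # Randomly swap words for synonyms to corrupt possible adversarial perturbations.
    return [random.choice(synonyms[t]) if t in synonyms and random.random() < p else t
            for t in tokens]

def defend(tokens: List[str],
           synonyms: Dict[str, List[str]],
           correct: Callable[[List[str]], List[str]],
           is_adversarial: Callable[[List[str]], bool],
           classify: Callable[[List[str]], int]) -> Optional[int]:
    # (3) Detection: filter out inputs flagged as adversarial.
    if is_adversarial(tokens):
        return None
    # (1) Word-level randomization, then (2) representation-level correction,
    # before the victim classifier makes its prediction.
    return classify(correct(synonym_randomize(tokens, synonyms)))
```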
2022
UECA-Prompt: Universal Prompt for Emotion Cause Analysis
Xiaopeng Zheng | Zhiyue Liu | Zizhen Zhang | Zhaoyang Wang | Jiahai Wang
Proceedings of the 29th International Conference on Computational Linguistics
Emotion cause analysis (ECA) aims to extract emotion clauses and find the corresponding causes of those emotions. Existing methods adopt the fine-tuning paradigm to solve certain types of ECA tasks. These task-specific methods lack universality, and the relations among multiple objectives within one task are not explicitly modeled. Moreover, the relative position information introduced in most existing methods may make the model suffer from dataset bias. To address the first two problems, this paper proposes a universal prompt tuning method that solves different ECA tasks in a unified framework. For the third problem, this paper designs a directional constraint module and a sequential learning module to ease the bias. Considering the commonalities among different tasks, this paper proposes a cross-task training method to further explore the capability of the model. The experimental results show that our method achieves competitive performance on the ECA datasets.
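As a toy illustration of casting different ECA subtasks into one prompt format, the sketch below builds cloze-style prompts from a shared template interface; the templates and task names are invented for this sketch and are not the paper's actual prompts.

```python
# Toy sketch of a unified prompt interface for ECA subtasks (the templates are
# invented for illustration and differ from the paper's actual prompts).
def build_prompt(document: str, clause: str, task: str) -> str:
    templates = {
        # Emotion clause identification as a cloze question.
        "emotion": f'{document} Question: does the clause "{clause}" express an emotion? Answer: [MASK].',
        # Cause clause identification for a given emotion clause.
        "cause": f'{document} Question: is the clause "{clause}" the cause of the emotion? Answer: [MASK].',
    }
    return templates[task]

# Example: the same masked language model can then fill [MASK] for either subtask.
print(build_prompt("He lost his keys and felt upset.", "felt upset", "emotion"))
```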
Co-authors
- Zhaoyang Wang 2
- Jiahai Wang 2
- Xiaopeng Zheng 2
- Jinyuan Liu 1
- Qinliang Su 1