2024
Matching Varying-Length Texts via Topic-Informed and Decoupled Sentence Embeddings
Xixi Zhou
|
Chunbin Gu
|
Xin Jie
|
Jiajun Bu
|
Haishuai Wang
Findings of the Association for Computational Linguistics: NAACL 2024
Measuring semantic similarity between texts is a crucial task in natural language processing. While existing semantic text matching focuses on pairs of similar-length sequences, matching texts with non-comparable lengths has broader applications in specific domains, such as comparing professional document summaries and content. Current approaches struggle with text pairs of non-comparable lengths due to truncation issues. To address this, we split texts into natural sentences and decouple sentence representations using supervised contrastive learning (SCL). Meanwhile, we adopt the embedded topic model (ETM) for specific domain data. Our experiments demonstrate the effectiveness of our model, based on decoupled and topic-informed sentence embeddings, in matching texts of significantly different lengths across three well-studied datasets.
MMAD: Multi-modal Movie Audio Description
Xiaojun Ye
|
Junhao Chen
|
Xiang Li
|
Haidong Xin
|
Chao Li
|
Sheng Zhou
|
Jiajun Bu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Audio Description (AD) aims to generate narrations of information that is not accessible through hearing alone in movies, to aid the visually impaired in following film narratives. Current solutions rely heavily on manual work, resulting in high costs and limited scalability. While automatic methods have been introduced, they often yield descriptions that are sparse and omit key details. Addressing these challenges, we propose a novel automated pipeline, the Multi-modal Movie Audio Description (MMAD). MMAD harnesses the capabilities of three key modules as well as the power of Llama2 to augment the depth and breadth of the generated descriptions. Specifically, first, we propose an Audio-aware Feature Enhancing Module to provide the model with multi-modal perception capabilities, enriching the background descriptions with a more comprehensive understanding of the environmental features. Second, we propose an Actor-tracking-aware Story Linking Module to aid in the generation of contextual and character-centric descriptions, thereby enhancing the richness of character depictions. Third, we incorporate a Subtitled Movie Clip Contextual Alignment Module, supplying semantic information about various time periods throughout the movie, which facilitates the consideration of the full movie narrative context when describing silent segments, thereby enhancing the richness of the descriptions. Experiments on widely used datasets convincingly demonstrate that MMAD significantly surpasses existing strong baselines in performance, establishing a new state-of-the-art in the field. Our code will be released at https://github.com/Daria8976/MMAD.
2023
Translate the Beauty in Songs: Jointly Learning to Align Melody and Translate Lyrics
Chengxi Li
|
Kai Fan
|
Jiajun Bu
|
Boxing Chen
|
Zhongqiang Huang
|
Zhi Yu
Findings of the Association for Computational Linguistics: EMNLP 2023
Song translation requires both translation of lyrics and alignment of music notes so that the resulting verse can be sung to the accompanying melody, a challenging problem that has attracted interest in different aspects of the translation process. In this paper, we propose Lyrics-Melody Translation with Adaptive Grouping (LTAG), a holistic solution to automatic song translation that jointly models lyric translation and lyrics-melody alignment. It is a novel encoder-decoder framework that can simultaneously translate the source lyrics and determine the number of aligned notes at each decoding step through an adaptive note grouping module. To address data scarcity, we commissioned a small amount of training data annotated specifically for this task and used large amounts of automatic training data obtained through back-translation. Experiments conducted on an English-Chinese song translation dataset show the effectiveness of our model in both automatic and human evaluations.
GEM: Gestalt Enhanced Markup Language Model for Web Understanding via Render Tree
Zirui Shao
|
Feiyu Gao
|
Zhongda Qi
|
Hangdi Xing
|
Jiajun Bu
|
Zhi Yu
|
Qi Zheng
|
Xiaozhong Liu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Inexhaustible web content carries abundant perceptible information beyond text. Unfortunately, most prior efforts in pre-trained Language Models (LMs) ignore such cyber-richness; the few that go further employ only plain HTML, excluding crucial information in the rendered web such as visual, layout, and style features. Intuitively, this perceptible web information can provide essential intelligence to facilitate content understanding tasks. This study presents an innovative Gestalt Enhanced Markup (GEM) Language Model, inspired by Gestalt psychological theory, for hosting heterogeneous visual information from the render tree in the language model without requiring additional visual input. Comprehensive experiments on multiple downstream tasks, i.e., web question answering and web information extraction, validate GEM's superiority.
Training Simultaneous Speech Translation with Robust and Random Wait-k-Tokens Strategy
Linlin Zhang
|
Kai Fan
|
Jiajun Bu
|
Zhongqiang Huang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Simultaneous Speech Translation (SimulST) is a task focused on ensuring high-quality translation of speech in low-latency situations. However, the modality gap (e.g., unknown word boundaries) between audio and text presents a challenge. This gap hinders the effective application of policies from simultaneous text translation (SimulMT) and compromises the performance of offline speech translation. To address this issue, we first leverage the Montreal Forced Aligner (MFA), utilize audio-transcription pairs in pre-training the acoustic encoder, and introduce a token-level cross-modal alignment that allows the wait-k policy from SimulMT to better adapt to SimulST. This token-level boundary alignment simplifies the decision-making process for predicting read/write actions, as if the decoder were directly processing text tokens. Subsequently, to optimize the SimulST task, we propose a robust and random wait-k-tokens strategy. This strategy allows a single model to meet various latency requirements and minimizes error accumulation of boundary alignment during inference. Our experiments on the MuST-C dataset show that our method achieves a better trade-off between translation quality and latency.
2011
Opinion Word Expansion and Target Extraction through Double Propagation
Guang Qiu
|
Bing Liu
|
Jiajun Bu
|
Chun Chen
Computational Linguistics, Volume 37, Issue 1 - March 2011
2007
Exploration of Term Dependence in Sentence Retrieval
Keke Cai
|
Jiajun Bu
|
Chun Chen
|
Kangmiao Liu
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions
Manifolds Based Emotion Recognition in Speech
Mingyu You
|
Chun Chen
|
Jiajun Bu
|
Jia Liu
|
Jianhua Tao
International Journal of Computational Linguistics & Chinese Language Processing, Volume 12, Number 1, March 2007: Special Issue on Affective Speech Processing