Ching Yun Chang

Also published as: Ching-Yun Chang


2023

Translation-Enhanced Multilingual Text-to-Image Generation
Yaoyiran Li | Ching-Yun Chang | Stephen Rawls | Ivan Vulić | Anna Korhonen
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Research on text-to-image generation (TTI) still predominantly focuses on the English language due to the lack of annotated image-caption data in other languages; in the long run, this might widen inequitable access to TTI technology. In this work, we thus investigate multilingual TTI (termed mTTI) and the current potential of neural machine translation (NMT) to bootstrap mTTI systems. We provide two key contributions. 1) Relying on a multilingual multi-modal encoder, we provide a systematic empirical study of standard methods used in cross-lingual NLP when applied to mTTI: Translate Train, Translate Test, and Zero-Shot Transfer. 2) We propose Ensemble Adapter (EnsAd), a novel parameter-efficient approach that learns to weigh and consolidate the multilingual text knowledge within the mTTI framework, mitigating the language gap and thus improving mTTI performance. Our evaluations on standard mTTI datasets COCO-CN, Multi30K Task2, and LAION-5B demonstrate the potential of translation-enhanced mTTI systems and also validate the benefits of the proposed EnsAd which derives consistent gains across all datasets. Further investigations on model variants, ablation studies, and qualitative analyses provide additional insights on the inner workings of the proposed mTTI approaches.
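
As a rough illustration of the weigh-and-consolidate idea (not the actual EnsAd architecture, whose details are in the paper), the following numpy sketch attends over the encodings of a source caption and its NMT translations and returns a single fused text vector; all parameter names are hypothetical.

    # Illustrative only: a generic learned weighting over several text
    # encodings, in the spirit of (but not identical to) EnsAd.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def ensemble_adapter(encodings, w, v):
        """encodings: (k, d) embeddings of the source caption and its NMT
        translations; w: (d, d) and v: (d,) are learned parameters."""
        scores = np.tanh(encodings @ w) @ v   # (k,) unnormalised scores
        alpha = softmax(scores)               # attention over the encodings
        return alpha @ encodings              # (d,) consolidated text vector

    rng = np.random.default_rng(0)
    k, d = 3, 8                               # e.g. source + two translations
    enc = rng.normal(size=(k, d))
    fused = ensemble_adapter(enc, rng.normal(size=(d, d)), rng.normal(size=d))
    print(fused.shape)                        # (8,)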

2022

Amazon Alexa AI’s System for IWSLT 2022 Offline Speech Translation Shared Task
Akshaya Shanbhogue | Ran Xue | Ching-Yun Chang | Sarah Campbell
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)

This paper describes Amazon Alexa AI’s submission to the IWSLT 2022 Offline Speech Translation Task. Our system is an end-to-end speech translation model that leverages pretrained models and cross-modality transfer learning. We detail two improvements to the knowledge transfer schema. First, we implement a new loss function that effectively reduces the knowledge gap between the audio and text modalities in the translation task. Second, we investigate multiple finetuning strategies, including sampling loss, language grouping, and domain adaptation, which aim to bridge the gap between the speech and text translation tasks. We also implement a multi-stage segmentation and merging strategy that yields improvements on the unsegmented development datasets. Results show that the proposed loss function consistently improves BLEU scores on the development datasets for both English-German and multilingual models. Additionally, certain language pairs see BLEU score improvements with specific finetuning strategies.
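
The abstract does not give the exact form of the new loss, so the sketch below shows only a generic cross-modal matching term of the kind it alludes to: a penalty pulling the pooled audio-encoder output toward the pooled text-encoder output for the same utterance, added to the usual translation loss. The names and the weighting factor are assumptions.

    # Hypothetical sketch: a cross-modal gap penalty added to the
    # translation loss; not the paper's actual loss function.
    import numpy as np

    def modality_gap_loss(audio_repr, text_repr):
        """Mean squared distance between pooled audio and text encodings.
        audio_repr: (frames, d); text_repr: (tokens, d)."""
        a = audio_repr.mean(axis=0)   # pool over audio frames -> (d,)
        t = text_repr.mean(axis=0)    # pool over source tokens -> (d,)
        return float(((a - t) ** 2).mean())

    def total_loss(translation_loss, audio_repr, text_repr, lam=0.1):
        # lam trades off translation quality against closing the modality gap
        return translation_loss + lam * modality_gap_loss(audio_repr, text_repr)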

2020

Cross-lingual Alignment Methods for Multilingual BERT: A Comparative Study
Saurabh Kulshreshtha | Jose Luis Redondo Garcia | Ching-Yun Chang
Findings of the Association for Computational Linguistics: EMNLP 2020

Multilingual BERT (mBERT) has shown reasonable capability for zero-shot cross-lingual transfer when fine-tuned on downstream tasks. Since mBERT is not pre-trained with explicit cross-lingual supervision, transfer performance can be further improved by aligning mBERT with a cross-lingual signal. Prior work proposes several approaches to align contextualised embeddings. In this paper we analyse how different forms of cross-lingual supervision and various alignment methods influence the transfer capability of mBERT in the zero-shot setting. Specifically, we compare parallel-corpus vs. dictionary-based supervision and rotational vs. fine-tuning-based alignment methods. We evaluate the performance of the different alignment methodologies across eight languages on two tasks: Named Entity Recognition and Semantic Slot Filling. In addition, we propose a novel normalisation method which consistently improves the performance of rotation-based alignment, including a notable 3% F1 improvement for distant and typologically dissimilar languages. Importantly, we identify the biases of the alignment methods towards the type of task and the proximity to the transfer language. We also find that supervision from parallel corpora is generally superior to dictionary-based alignment.
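
Rotation-based alignment of this kind is usually solved with the orthogonal Procrustes recipe, sketched below in numpy; the simple centering-and-length-normalisation step stands in for the paper's novel normalisation method, whose exact form is not given in the abstract.

    # Standard orthogonal Procrustes alignment over word-aligned
    # contextual embeddings; normalise() is a stand-in, not the paper's
    # normalisation method.
    import numpy as np

    def normalise(x):
        x = x - x.mean(axis=0)                          # centre the space
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    def procrustes_rotation(src, tgt):
        """src, tgt: (n, d) embeddings of word-aligned pairs. Returns the
        orthogonal map W minimising ||src @ W - tgt||_F."""
        u, _, vt = np.linalg.svd(src.T @ tgt)
        return u @ vt

    rng = np.random.default_rng(0)
    src, tgt = rng.normal(size=(100, 16)), rng.normal(size=(100, 16))
    w = procrustes_rotation(normalise(src), normalise(tgt))
    aligned = normalise(src) @ w                        # rotated into target space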

2018

Learning Target-Specific Representations of Financial News Documents For Cumulative Abnormal Return Prediction
Junwen Duan | Yue Zhang | Xiao Ding | Ching-Yun Chang | Ting Liu
Proceedings of the 27th International Conference on Computational Linguistics

Texts from the Internet serve as important data sources for financial market modeling. Early statistical approaches rely on manually defined features to capture lexical, sentiment, and event information, and suffer from feature sparsity. Recent work has considered learning dense representations for news titles and abstracts. Compared to news titles, full documents contain more potentially helpful information, but also more noise than finer-grained units such as events and sentences, and they have been less investigated in previous work. To fill this gap, we propose a novel target-specific, abstract-guided news document representation model. The model uses a target-sensitive representation of the news abstract to weigh sentences in the news content, so as to select and combine the most informative sentences for market modeling. Results show that document representations give better performance for estimating cumulative abnormal returns of companies than titles and abstracts. Our model is especially effective when used to combine information from multiple document sources, compared to the sentence-level baselines.
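
A minimal sketch of the abstract-guided weighting idea (illustrative only; the paper's model is richer): score each sentence vector against a target-sensitive abstract vector and take the attention-weighted sum as the document representation.

    # Hypothetical sketch of abstract-guided sentence weighting.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def document_repr(sentences, abstract):
        """sentences: (n, d) sentence vectors of the news content;
        abstract: (d,) target-sensitive abstract vector."""
        alpha = softmax(sentences @ abstract)   # relevance of each sentence
        return alpha @ sentences                # (d,) document vector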

2017

Integrating Order Information and Event Relation for Script Event Prediction
Zhongqing Wang | Yue Zhang | Ching-Yun Chang
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

There has been a recent line of work automatically learning scripts from unstructured texts by modeling narrative event chains. While the dominant approach groups events using event-pair relations, LSTMs have been used to encode full chains of narrative events. The latter has the advantage of learning long-range temporal orders, yet the former is more adaptive to partial orders. We propose a neural model that leverages the advantages of both methods by using LSTM hidden states as features for event-pair modeling. A dynamic memory network is used to automatically induce weights on existing events for inferring a subsequent event. Standard evaluation shows that our method significantly outperforms both of the above methods, giving the best results reported so far.
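
A minimal sketch of the combination the abstract describes, assuming an LSTM has already encoded the event chain: the LSTM hidden states serve as event-pair features, and plain attention stands in here for the dynamic memory network's induced weights.

    # Hypothetical sketch; the paper uses a dynamic memory network rather
    # than the single attention step shown here.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def score_candidate(context_h, cand_h):
        """context_h: (n, d) LSTM hidden states of the observed events;
        cand_h: (d,) hidden state of a candidate subsequent event."""
        pair = context_h @ cand_h      # pairwise chain-candidate scores
        alpha = softmax(pair)          # induced weights over existing events
        return float(alpha @ pair)     # weighted score for the candidate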

2016

Measuring the Information Content of Financial News
Ching-Yun Chang | Yue Zhang | Zhiyang Teng | Zahn Bozanic | Bin Ke
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Measuring the information content of news text is useful for decision makers in their investments, since news information can influence the intrinsic values of companies. We propose a model that automatically measures the information content of news text, trained using news and the corresponding cumulative abnormal returns of listed companies. Existing methods in the finance literature exploit sentiment signal features, which are limited by not considering factors such as events. We address this issue by leveraging deep neural models to extract rich semantic features from news text. In particular, a novel tree-structured LSTM is used to find target-specific representations of news text given syntax structures. Empirical results show that the neural models outperform sentiment-based models, demonstrating the effectiveness of recent NLP technology advances for computational finance.
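
The paper's tree-structured LSTM is a novel variant, so as background the sketch below shows only the standard child-sum tree-LSTM cell (Tai et al., 2015) that such models compose bottom-up over a syntax tree; all parameter names are illustrative.

    # Standard child-sum tree-LSTM cell, not the paper's novel variant.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def child_sum_cell(x, child_h, child_c, W, U, b):
        """x: (d,) input at this tree node; child_h, child_c: (k, d) states
        of its k children; W, U: dicts of (d, d) matrices; b: dict of (d,)."""
        h_sum = child_h.sum(axis=0)                        # (d,) summed children
        i = sigmoid(W['i'] @ x + U['i'] @ h_sum + b['i'])  # input gate
        o = sigmoid(W['o'] @ x + U['o'] @ h_sum + b['o'])  # output gate
        u = np.tanh(W['u'] @ x + U['u'] @ h_sum + b['u'])  # candidate update
        f = sigmoid(child_h @ U['f'].T + W['f'] @ x + b['f'])  # (k, d) forget gates
        c = i * u + (f * child_c).sum(axis=0)              # (d,) cell state
        h = o * np.tanh(c)                                 # (d,) hidden state
        return h, c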

Expectation-Regulated Neural Model for Event Mention Extraction
Ching-Yun Chang | Zhiyang Teng | Yue Zhang
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

Practical Linguistic Steganography using Contextual Synonym Substitution and a Novel Vertex Coding Method
Ching-Yun Chang | Stephen Clark
Computational Linguistics, Volume 40, Issue 2 - June 2014

2012

Adjective Deletion for Linguistic Steganography and Secret Sharing
Ching-Yun Chang | Stephen Clark
Proceedings of COLING 2012

The Secret’s in the Word Order: Text-to-Text Generation for Linguistic Steganography
Ching-Yun Chang | Stephen Clark
Proceedings of COLING 2012

2010

Practical Linguistic Steganography Using Contextual Synonym Substitution and Vertex Colour Coding
Ching-Yun Chang | Stephen Clark
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Linguistic Steganography Using Automatically Generated Paraphrases
Ching-Yun Chang | Stephen Clark
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics