Ruey-Cheng Chen


2024

BAMBINO-LM: (Bilingual-)Human-Inspired Continual Pre-training of BabyLM
Zhewen Shen | Aditya Joshi | Ruey-Cheng Chen
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

Children from bilingual backgrounds benefit from interactions with parents and teachers to re-acquire their heritage language. In this paper, we investigate how this insight from behavioral studies can be incorporated into the learning of small-scale language models. We introduce BAMBINO-LM, a continual pre-training strategy for BabyLM that uses a novel combination of alternation and a PPO-based perplexity reward induced from a parent Italian model. Upon evaluation on zero-shot classification tasks for English and Italian, BAMBINO-LM improves the Italian language capability of a BabyLM baseline. Our ablation analysis demonstrates that employing both the alternation strategy and PPO-based modeling is key to this effectiveness gain. We also show that, as a side effect, the proposed method leads to a degradation in L1 effectiveness similar to what human children would exhibit in an equivalent learning scenario. Through its modeling and findings, BAMBINO-LM makes a focused contribution to the pre-training of small-scale language models by first developing a human-inspired strategy for pre-training and then showing that it results in behaviours similar to those of humans.
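To make the abstract's two key ingredients concrete, below is a minimal sketch of (a) a perplexity-based reward computed under a parent Italian model and (b) an alternation schedule over L1/L2 batches. The model name, reward scaling, and schedule are illustrative assumptions, not the paper's exact configuration; a full PPO training loop (e.g. via a library such as trl) is omitted.

```python
# Hedged sketch: perplexity reward from a parent model, plus L1/L2 alternation.
# "gpt2" stands in for the parent Italian model; swap in an Italian checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

parent_name = "gpt2"  # placeholder, NOT the paper's parent Italian model
parent_tok = AutoTokenizer.from_pretrained(parent_name)
parent_lm = AutoModelForCausalLM.from_pretrained(parent_name).eval()

@torch.no_grad()
def perplexity_reward(texts):
    """Score child-model generations by how plausible the parent model
    finds them: lower parent perplexity -> higher reward."""
    rewards = []
    for text in texts:
        ids = parent_tok(text, return_tensors="pt").input_ids
        loss = parent_lm(ids, labels=ids).loss  # mean per-token NLL
        rewards.append(-loss.exp().item())      # negative perplexity
    return rewards

def alternating_batches(l1_batches, l2_batches):
    """Alternate English (L1) and Italian (L2) batches during continual
    pre-training, mimicking mixed-language exposure; the strict 1:1
    interleaving here is an assumption."""
    for b1, b2 in zip(l1_batches, l2_batches):
        yield ("lm_step", b1)   # standard LM objective on English data
        yield ("ppo_step", b2)  # RL step on Italian data, scored by perplexity_reward
```

In this reading, the PPO step would maximize `perplexity_reward` over the child model's Italian generations while the interleaved LM steps preserve English ability; the trade-off between the two is exactly the L1 degradation the abstract reports.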

2020

Incorporating Behavioral Hypotheses for Query Generation
Ruey-Cheng Chen | Chia-Jung Lee
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Generative neural networks have been shown to be effective for query suggestion. Commonly posed as a conditional generation problem, the task aims to leverage earlier inputs from users in a search session to predict queries that they will likely issue at a later time. User inputs come in various forms, such as querying and clicking, each of which can carry different semantic signals channeled through the corresponding behavioral patterns. This paper encodes these behavioral biases as hypotheses for query generation and presents a generic encoder-decoder Transformer framework that aggregates an arbitrary set of such hypotheses. Our experimental results show that the proposed approach leads to significant improvements in top-k word error rate and BERT F1 score compared to a recent BART model.
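One plausible way to feed behavior-typed session events into an encoder-decoder Transformer is to prefix each event with a tag naming the behavior that produced it, then let the model generate the next query. The sketch below uses a BART checkpoint for illustration; the tag scheme, separators, and untuned base model are assumptions, not the paper's exact input format.

```python
# Hedged sketch: tagging session events by behavior type before a seq2seq
# model. An off-the-shelf, untuned BART is used purely for illustration.
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def encode_session(events):
    """events: list of (behavior, text) pairs, e.g. ("query", "laptop reviews")
    or ("click", "Best laptops of 2020"). Each event is prefixed with a tag
    naming the behavioral hypothesis it instantiates (assumed format)."""
    return " </s> ".join(f"[{behavior.upper()}] {text}" for behavior, text in events)

session = [("query", "laptop reviews"), ("click", "Best laptops of 2020")]
inputs = tok(encode_session(session), return_tensors="pt", truncation=True)
out = model.generate(**inputs, max_length=16, num_beams=4)
print(tok.decode(out[0], skip_special_tokens=True))  # candidate next query
```

Keeping the behavior tag separate from the event text is what lets a single encoder aggregate arbitrary hypotheses: adding a new behavioral signal only requires a new tag, not a new architecture.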

2013

An improved MDL-based compression algorithm for unsupervised word segmentation
Ruey-Cheng Chen
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2012

A Regularized Compression Method to Unsupervised Word Segmentation
Ruey-Cheng Chen | Chiung-Min Tsai | Jieh Hsiang
Proceedings of the Twelfth Meeting of the Special Interest Group on Computational Morphology and Phonology

2009

Web Mining for Unsupervised Classification
Wei-Yen Day | Chun-Yi Chi | Ruey-Cheng Chen | Pu-Jen Cheng | Pei-Sen Liu
Proceedings of the 21st Conference on Computational Linguistics and Speech Processing

Query Formulation by Selecting Good Terms
Chia-Jung Lee | Yi-Chun Lin | Ruey-Cheng Chen | Pei-Sen Liu | Pu-Jen Cheng
Proceedings of the 21st Conference on Computational Linguistics and Speech Processing