2023
Lexical Retrieval Hypothesis in Multimodal Context
Po-Ya Angela Wang | Pin-Er Chen | Hsin-Yu Chou | Yu-Hsiang Tseng | Shu-Kai Hsieh
Proceedings of the 4th Conference on Language, Data and Knowledge
Exploring Affordance and Situated Meaning in Image Captions: A Multimodal Analysis
Pin-Er Chen | Po-Ya Angela Wang | Hsin-Yu Chou | Yu-Hsiang Tseng | Shu-Kai Hsieh
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
2022
Analyzing discourse functions with acoustic features and phone embeddings: non-lexical items in Taiwan Mandarin
Pin-Er Chen | Yu-Hsiang Tseng | Chi-Wei Wang | Fang-Chi Yeh | Shu-Kai Hsieh
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
Non-lexical items are expressive devices used in conversation that are not words but are nevertheless meaningful. These items play crucial roles, such as signaling turn-taking or marking stances in interactions. However, because non-lexical items do not correspond stably to written or phonological forms, past studies have tended to focus on their acoustic properties, such as pitch and duration. In this paper, we investigate the discourse functions of non-lexical items through their acoustic properties and phone embeddings extracted from a deep learning model. First, we create a non-lexical item dataset based on interpellation video clips from Taiwan’s Legislative Yuan. Then, we manually identify the non-lexical items and their discourse functions in the videos. Next, we analyze the acoustic properties of these items through statistical modeling and build classifiers based on phone embeddings extracted from a phone recognition model. We show that (1) discourse functions have significant effects on the acoustic features, and (2) the classifiers built on phone embeddings outperform those built on conventional acoustic properties. These results suggest that phone embeddings may capture phonetic variations crucial to differentiating the discourse functions of non-lexical items.
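A minimal sketch of the comparison described above, assuming acoustic features and phone embeddings have already been extracted for each annotated item; the arrays, dimensions, label set, and the choice of logistic regression are illustrative placeholders, not the paper’s actual data or classifiers.

```python
# Sketch: compare discourse-function classifiers trained on conventional
# acoustic features vs. phone embeddings. All arrays are synthetic stand-ins;
# in the paper's setting they would come from the annotated Legislative Yuan
# clips and a phone recognition model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_items = 200
acoustic = rng.normal(size=(n_items, 6))      # e.g., pitch/duration statistics
phone_emb = rng.normal(size=(n_items, 128))   # embeddings from a phone recognizer
labels = rng.integers(0, 4, size=n_items)     # discourse-function categories

for name, X in [("acoustic features", acoustic), ("phone embeddings", phone_emb)]:
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: 5-fold accuracy = {acc:.3f}")
```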
CxLM: A Construction and Context-aware Language Model
Yu-Hsiang Tseng | Cing-Fang Shih | Pin-Er Chen | Hsin-Yu Chou | Mao-Chang Ku | Shu-Kai Hsieh
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Constructions are direct form-meaning pairs with possible schematic slots. These slots are simultaneously constrained by the embedded construction itself and the sentential context. We propose that this constraint can be described by a conditional probability distribution; however, because this distribution is inevitably complex, we use language models to capture it. We therefore build CxLM, a deep learning-based masked language model explicitly tuned to constructions’ schematic slots. We first compile a construction dataset consisting of over ten thousand constructions in Taiwan Mandarin. Next, we conduct an experiment on the dataset to examine to what extent a pretrained masked language model is aware of these constructions. We then fine-tune the model specifically to perform a cloze task on the open slots. We find that the fine-tuned model predicts masked slots more accurately than baselines and generates both structurally and semantically plausible word samples. Finally, we release CxLM and its dataset as publicly available resources and hope they will serve as new quantitative tools for studying construction grammar.
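To make the cloze setup concrete, the sketch below queries a masked language model for fillers of a construction’s open slot. The generic bert-base-chinese checkpoint and the invented 越...越... example sentence are stand-ins chosen for illustration; the released CxLM model and constructions from its dataset would take their place in the actual setup.

```python
# Sketch of the cloze task on a construction's schematic slot. The checkpoint
# and example sentence are placeholders, not the paper's released model or data.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-chinese")

# A "越 X 越 [slot]" construction with the open slot masked.
sentence = "他越想越[MASK]。"
for cand in fill(sentence, top_k=5):
    print(cand["token_str"], f"{cand['score']:.3f}")
```

Ranking the top-k fillers in this way is the kind of output on which accuracy and plausibility comparisons against baselines would operate.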
2021
What confuses BERT? Linguistic Evaluation of Sentiment Analysis on Telecom Customer Opinion
Cing-Fang Shih | Yu-Hsiang Tseng | Ching-Wen Yang | Pin-Er Chen | Hsin-Yu Chou | Lian-Hui Tan | Tzu-Ju Lin | Chun-Wei Wang | Shu-Kai Hsieh
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)
Ever-expanding evaluative texts on online forums have become an important source of data for sentiment analysis. This paper presents an aspect-based annotated dataset of telecom reviews from social media. We introduce a category of implicit evaluative texts, impevals for short, to investigate how deep learning models handle such implicit reviews. We first compare two models, BertSimple and BertImpvl, and find that while both models handle simple evaluative texts competently, they are confused when classifying impevals. To investigate the factors underlying the correctness of the model’s predictions, we conduct a series of analyses, including qualitative error analysis and quantitative analysis of linguistic features with logistic regressions. The results show that local features that affect the overall sentential sentiment confuse the model: multiple target entities, transitional words, sarcasm, and rhetorical questions. Crucially, these linguistic features are independent of the model’s confidence as measured by the classifier’s softmax probabilities. Interestingly, sentence complexity, as indicated by syntax-tree depth, is not correlated with the model’s correctness. In sum, this paper sheds light on the characteristics of modern deep learning models and, through linguistic evaluation, on when they may need more supervision.
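A hedged sketch of the kind of logistic-regression analysis mentioned above, relating annotated linguistic features to whether the classifier’s prediction was correct. The feature columns mirror the factors named in the abstract, but the data, coding scheme, and use of statsmodels are illustrative assumptions rather than the paper’s materials.

```python
# Sketch: logistic regression of prediction correctness on annotated
# linguistic features. All data below is randomly generated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
features = np.column_stack([
    rng.integers(0, 2, n),    # multiple target entities (0/1)
    rng.integers(0, 2, n),    # transitional words (0/1)
    rng.integers(0, 2, n),    # sarcasm (0/1)
    rng.integers(0, 2, n),    # rhetorical question (0/1)
    rng.integers(2, 15, n),   # syntax-tree depth
])
correct = rng.integers(0, 2, n)   # 1 = the model's prediction was correct

result = sm.Logit(correct, sm.add_constant(features)).fit(disp=False)
print(result.summary())
```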