Yao-Ting Sung

Also published as: Yao-Ting Hung


2025

With the proliferation of digital learning, an increasing number of learners are engaging with audio-visual materials. For preschool and lower elementary students, whose literacy skills are still limited, knowledge acquisition relies more heavily on spoken and visual content. Traditional readability models were primarily developed for written texts, and their applicability to spoken materials remains uncertain. To address this issue, this study investigates the impact of different word segmentation tools and language models on the performance of automatic grade classification models for Chinese spoken materials. Support Vector Machines were employed for grade prediction, aiming to automatically determine the appropriate grade level of learning resources and assist learners in selecting suitable materials. The results show that language models with higher-dimensional word embeddings achieved better classification performance, with an accuracy of up to 61% and an adjacent accuracy of 76%. These findings may contribute to future digital learning platforms or educational resource recommendation systems by automatically providing students with appropriate listening materials to enhance learning outcomes.
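The abstract above reports both exact accuracy (61%) and adjacent accuracy (76%) for grade prediction. A minimal sketch of the adjacent-accuracy metric, assuming the common definition of counting a prediction as correct when it falls within one grade level of the true label (the function name and toy data are illustrative, not from the paper):

```python
def adjacent_accuracy(y_true, y_pred, tolerance=1):
    """Proportion of predictions within `tolerance` grade levels of the truth."""
    assert len(y_true) == len(y_pred) and y_true
    hits = sum(abs(t - p) <= tolerance for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

# Toy grade labels (1-6): exact accuracy is 2/5, adjacent accuracy is 4/5.
true_grades = [1, 2, 3, 4, 5]
pred_grades = [1, 3, 3, 6, 4]
print(adjacent_accuracy(true_grades, pred_grades))  # → 0.8
```

Under this definition, adjacent accuracy is always at least the exact accuracy, which is why both numbers are typically reported together for ordinal grade labels.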
Summary writing, a high-level language task that integrates reading and writing, not only assesses students' text comprehension but also fosters their abilities in language expression and restatement. Previous automatic summary-grading systems have mostly relied on "bottom-up" methods such as keyword matching or semantic overlap, which struggle to fully assess the depth of students' comprehension and their ability to restate a text; moreover, although some research exists on grading Chinese summary writing, it remains scarce compared with English, leaving a research gap. With the development of Large Language Models (LLMs) and their breakthroughs in semantic understanding and generation, new opportunities have emerged for automatic summary grading and feedback. Accordingly, this study explores, in a top-down manner, the potential of combining LLMs with reading-summary scoring rubrics for grading and giving feedback on students' reading summaries. Specifically, with the privacy of instructional data in mind, the study uses Meta-Llama-3.1-70B to generate computer summaries and to automatically score and give feedback on students' reading summaries according to an expert-developed rubric covering five dimensions: comprehension and accuracy, organization and structure, conciseness, language use and grammar, and restatement ability. The results show that Meta-Llama-3.1-70B can provide concrete, clear, and immediate feedback, not only identifying key concepts missing from a summary but also suggesting revisions to structure and grammatical errors, helping students quickly grasp how to improve their summaries. However, the feedback leans toward surface-level language and structural adjustments, and limitations remain in assessing higher-level language abilities such as expression, rhetorical variety, and restatement. Overall, LLMs can serve as formative-assessment and instructional-support tools that improve scoring efficiency, but they must be combined with teachers' professional judgment and feedback to supply deeper conceptual and strategic writing guidance and to foster the development of students' summary-writing ability.

2024

Automated speaking assessment (ASA) typically involves automatic speech recognition (ASR) and hand-crafted feature extraction from the ASR transcript of a learner’s speech. Recently, self-supervised learning (SSL) has shown stellar performance compared to traditional methods. However, SSL-based ASA systems face at least three data-related challenges: limited annotated data, an uneven distribution of learner proficiency levels, and non-uniform score intervals between different CEFR proficiency levels. To address these challenges, we explore the use of two novel modeling strategies: metric-based classification and loss re-weighting, leveraging distinct SSL-based embedding features. Extensive experimental results on the ICNALE benchmark dataset suggest that our approach can outperform existing strong baselines by a sizable margin, achieving a significant improvement of more than 10% in CEFR prediction accuracy.
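One common form of the loss re-weighting mentioned above is inverse-frequency class weighting, which counteracts an uneven distribution of proficiency levels by making rare classes contribute more to the loss. A minimal sketch, assuming this standard scheme (the paper does not specify its exact weighting formula; the labels and normalization below are illustrative):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency, normalized so that a
    perfectly balanced label set would give every class weight 1.0."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Toy CEFR label distribution skewed toward B1: the minority levels A2 and
# B2 receive weights above 1, the majority level B1 below 1.
labels = ["A2"] * 2 + ["B1"] * 6 + ["B2"] * 2
print(inverse_frequency_weights(labels))
```

Such per-class weights are typically passed to the training loss (e.g., a weighted cross-entropy) so that misclassifying an under-represented proficiency level is penalized more heavily.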

2022

Due to the surge in global demand for English as a second language (ESL), the development of automated methods for grading speaking proficiency has gained considerable attention. This paper presents a computerized regime for grading the spontaneous spoken language of ESL learners. Based on a speech corpus of ESL learners recently collected in Taiwan, we first extract multi-view features (e.g., pronunciation, fluency, and prosody features) from either automatic speech recognition (ASR) transcriptions or audio signals. These extracted features are, in turn, fed into a tree-based classifier to produce a new set of indicative features as the input of the automated assessment system, viz. the grader. Finally, we use different machine learning models to predict each ESL learner’s speaking proficiency and map the result to the corresponding CEFR level. The experimental results and analysis conducted on the speech corpus of ESL learners in Taiwan show that our approach holds great potential for use in automated speaking assessment, while offering predictions more reliable than those of human experts.
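The final step described above maps a grader's output onto a CEFR level. A minimal sketch of such a mapping, assuming the grader emits a continuous score that is bucketed by thresholds (the threshold values and band cutoffs are hypothetical, not taken from the paper):

```python
def to_cefr(score):
    """Map a continuous grader score (0-100) to a CEFR band.
    Thresholds are illustrative placeholders, not the paper's calibration."""
    bands = [(80, "C1"), (65, "B2"), (50, "B1"), (35, "A2")]
    for threshold, level in bands:
        if score >= threshold:
            return level
    return "A1"  # anything below the lowest threshold

print(to_cefr(70))  # → "B2"
```

In practice such thresholds would be calibrated against human-rated data rather than fixed by hand, since CEFR score intervals are not uniform.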
Feature analysis of Chinese characters plays a prominent role in “character-based” education. However, there is an urgent need for a text analysis system that processes the difficulty of the components composing characters, based primarily on Chinese learners’ performance. To meet this need, this research provides such a system by adopting a data-driven approach. Based on Chen et al.’s (2011) Chinese Orthography Database, this research has designed and developed a system: Character Difficulty - Research on Multi-features (CD-ROM). This system provides three functions: (1) analyzing a text and reporting its difficulty with respect to Chinese characters; (2) decomposing characters into components and calculating the frequency of components in the analyzed text; and (3) providing component-deriving characters based on the analyzed text, along with downloadable images usable as teaching materials. With these functions highlighting multi-level features of characters, this system has the potential to benefit Chinese character instruction, Chinese orthographic learning, and Chinese natural language processing.
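Function (2) above decomposes characters into components and counts component frequencies over a text. A minimal sketch of that counting step, assuming a character-to-components mapping like the one the real system draws from Chen et al.'s Chinese Orthography Database (the tiny `DECOMP` table here is an illustrative stand-in):

```python
from collections import Counter

# Illustrative decomposition table; the actual system uses Chen et al.'s
# (2011) Chinese Orthography Database, which is far more complete.
DECOMP = {
    "好": ["女", "子"],
    "媽": ["女", "馬"],
    "字": ["宀", "子"],
}

def component_frequencies(text):
    """Count how often each component appears across the characters of a text.
    Characters missing from the table contribute nothing."""
    counts = Counter()
    for ch in text:
        counts.update(DECOMP.get(ch, []))
    return counts

print(component_frequencies("好字好"))  # 女 appears 2x, 子 3x, 宀 1x
```

Aggregating these counts over a whole text is what lets the system surface which components a learner will encounter most often, and hence which component-deriving character sets are worth teaching together.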