2024
FuxiTranyu: A Multilingual Large Language Model Trained with Balanced Data
Haoran Sun | Renren Jin | Shaoyang Xu | Leiyu Pan | Supryadi | Menglong Cui | Jiangcun Du | Yikun Lei | Lei Yang | Ling Shi | Juesi Xiao | Shaolin Zhu | Deyi Xiong
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Large language models (LLMs) have demonstrated prowess in a wide range of tasks. However, many LLMs exhibit significant performance discrepancies between high- and low-resource languages. To mitigate this challenge, we present FuxiTranyu, an open-source multilingual LLM designed to meet the research community's need for balanced and high-performing multilingual capabilities. The base model, FuxiTranyu-8B, features 8 billion parameters and is trained from scratch on meticulously balanced multilingual data comprising 600 billion tokens that cover 43 natural languages and 16 programming languages. We also develop two instruction-tuned models: FuxiTranyu-8B-SFT, fine-tuned on a diverse multilingual instruction dataset, and FuxiTranyu-8B-DPO, further refined with DPO on a preference dataset for better alignment. Extensive experiments on a wide range of multilingual benchmarks demonstrate the competitive performance of FuxiTranyu against existing multilingual LLMs, e.g., BLOOM-7B, PolyLM-13B, and Mistral-7B-Instruct. Both neuron and representation interpretability analyses reveal that FuxiTranyu learns consistent multilingual representations across languages. To promote further research into multilingual LLMs, we release both the base and instruction-tuned FuxiTranyu models together with 58 pre-training checkpoints on HuggingFace and GitHub.
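For reference, a minimal sketch of the standard DPO objective used in preference tuning of this kind (Rafailov et al., 2023); the tensor names and the beta value are illustrative assumptions, not details from the FuxiTranyu paper:

```python
# Minimal sketch of the standard DPO loss (Rafailov et al., 2023);
# tensor names and beta are illustrative, not from the FuxiTranyu paper.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each input is a (batch,) tensor of summed token log-probs for a response."""
    # Log-ratio of policy to frozen reference model for the preferred
    # and dispreferred responses of each preference pair.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between the two implicit rewards.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()
```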
2023
ConKI: Contrastive Knowledge Injection for Multimodal Sentiment Analysis
Yakun Yu | Mingjun Zhao | Shi-ang Qi | Feiran Sun | Baoxun Wang | Weidong Guo | Xiaoli Wang | Lei Yang | Di Niu
Findings of the Association for Computational Linguistics: ACL 2023
Multimodal sentiment analysis leverages multimodal signals to detect the sentiment of a speaker. Previous approaches concentrate on multimodal fusion and representation learning based on general knowledge obtained from pretrained models, neglecting the effect of domain-specific knowledge. In this paper, we propose Contrastive Knowledge Injection (ConKI) for multimodal sentiment analysis, in which domain-specific knowledge representations for each modality are learned together with general knowledge representations via adapter-based knowledge injection. In addition, ConKI performs a hierarchical contrastive learning procedure between knowledge types within each modality, across modalities within each sample, and across samples, to facilitate effective learning of the proposed representations and hence improve multimodal sentiment predictions. Experiments on three popular multimodal sentiment analysis benchmarks show that ConKI outperforms all prior methods on a variety of performance metrics.
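As a rough illustration of the kind of contrastive term such a hierarchical procedure builds on, here is a generic InfoNCE loss over paired representations; ConKI's actual three-level scheme and adapter-based injection are more involved, and all names below are illustrative:

```python
# Generic InfoNCE-style contrastive loss over paired representations;
# a building block only, not ConKI's full hierarchical procedure.
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """anchors, positives: (batch, dim); row i of each forms a positive pair,
    and all other rows in the batch serve as in-batch negatives."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature                      # (batch, batch) similarities
    targets = torch.arange(a.size(0), device=logits.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)
```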
2020
面向垂直领域的阅读理解数据增强方法 (A Data Augmentation Method for Reading Comprehension in Vertical Domains)
Zhengwei Lv (吕政伟) | Lei Yang (杨雷) | Zhizhong Shi (石智中) | Xiao Liang (梁霄) | Tao Lei (雷涛) | Duoxing Liu (刘多星)
Proceedings of the 19th Chinese National Conference on Computational Linguistics
Reading comprehension question answering (QA) systems apply natural language processing techniques such as semantic understanding to analyze unstructured documents in response to an input question and generate an answer; they have high research and application value. In vertical-domain applications, annotating reading comprehension QA data is expensive and user questions are phrased in complex and diverse ways, which leaves reading comprehension QA systems with low accuracy and poor robustness. To address this problem, this paper proposes a data augmentation method for vertical-domain reading comprehension QA data: based on real user questions, it constructs reading comprehension training data, which both lowers annotation cost and increases the diversity of the training data, improving the accuracy and robustness of the model. We validate the method on data from the automotive domain, and the results show that it effectively improves both the accuracy and the robustness of vertical-domain reading comprehension models.
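The abstract does not spell out the construction procedure; as a loose illustration, the sketch below merely assembles SQuAD-style training records from real user questions, assuming each question comes with a source document and an extractable answer string (all names are hypothetical):

```python
# Minimal sketch: build SQuAD-style training records from real user
# questions, assuming (question, document, answer) triples where the
# answer string can be located in the document. Illustrative only; not
# the paper's actual augmentation procedure.
from typing import Optional

def build_record(question: str, document: str, answer: str) -> Optional[dict]:
    start = document.find(answer)
    if start == -1:            # answer not extractable from this document
        return None
    return {
        "context": document,
        "question": question,
        "answers": {"text": [answer], "answer_start": [start]},
    }

# Hypothetical automotive-domain triple, matching the paper's domain.
triples = [
    ("这款车的油耗是多少?", "官方数据显示该车综合油耗为6.5L/100km。", "6.5L/100km"),
]
records = []
for q, doc, ans in triples:
    rec = build_record(q, doc, ans)
    if rec is not None:
        records.append(rec)
```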
2019
AUTOHOME-ORCA at SemEval-2019 Task 8: Application of BERT for Fact-Checking in Community Forums
Zhengwei Lv | Duoxing Liu | Haifeng Sun | Xiao Liang | Tao Lei | Zhizhong Shi | Feng Zhu | Lei Yang
Proceedings of the 13th International Workshop on Semantic Evaluation
Fact checking is an important task for maintaining high-quality posts and improving user experience in Community Question Answering forums. SemEval-2019 Task 8 therefore aims to identify factual questions (subtask A) and to detect true factual information in the corresponding answers (subtask B). To address this task, we propose a system based on the BERT model enriched with question meta-information. For subtask A, the outputs of a fine-tuned BERT classification model are combined with a question-length feature to boost performance. For subtask B, the predictions of several BERT variants encoding the meta-information are combined into an ensemble model. Our system achieved competitive results, with an accuracy of 0.82 on subtask A and 0.83 on subtask B. The experimental results validate the effectiveness of our system.
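As a loose illustration of the subtask-A combination described above, the sketch below stacks a classifier's class probabilities with a question-length feature and fits a simple meta-classifier; the paper's exact combination scheme is not specified in the abstract, and all names and data below are hypothetical:

```python
# Minimal sketch: combine fine-tuned classifier outputs with a
# question-length feature via a simple stacked meta-classifier.
# Illustrative only; not the paper's exact combination scheme.
import numpy as np
from sklearn.linear_model import LogisticRegression

def combine_features(bert_probs: np.ndarray, questions: list) -> np.ndarray:
    """bert_probs: (n, n_classes) class probabilities from a fine-tuned model."""
    lengths = np.array([[len(q.split())] for q in questions], dtype=float)
    lengths /= lengths.max()                     # crude length normalization
    return np.hstack([bert_probs, lengths])      # (n, n_classes + 1) features

# Hypothetical arrays standing in for real model outputs and gold labels.
X = combine_features(np.array([[0.9, 0.1], [0.3, 0.7]]),
                     ["Is this the 2015 model?", "Which oil should I use?"])
y = np.array([1, 0])
meta_clf = LogisticRegression().fit(X, y)
```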