Jiarong He
2024
OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination Detection with Weakly Supervised Data
Ze Chen | Chengcheng Wei | Songtan Fang | Jiarong He | Max Gao
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
This paper describes a unified system for hallucination detection in LLMs, which won second prize in the model-agnostic track of SemEval-2024 Task 6 and also achieved considerable results in the model-aware track. The task is to detect hallucinations produced by LLMs on three different text-generation tasks without labeled training data. We use prompt engineering and few-shot learning to verify the performance of different LLMs on the validation data. We then select the better-performing LLMs to generate high-quality weakly supervised training data, retaining only examples that are consistent both across different LLMs and across different sampling parameters of the best LLM. Finally, we fine-tune different LLMs on the constructed training data and find that a relatively small LLM can achieve a competitive level of performance in hallucination detection, compared with large LLMs and prompt-based approaches using GPT-4.
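The dual consistency criterion described in the abstract can be sketched as a simple filter: keep an example only when several LLMs agree on its label and the best LLM gives the same label under every sampling setting. This is a minimal illustration, not the paper's implementation; the function name, data layout, and label strings are all hypothetical.

```python
def consistency_filter(examples, model_votes, best_model_samples):
    """Select weakly supervised training examples.

    model_votes: per-example labels predicted by several different LLMs.
    best_model_samples: per-example labels from the best LLM under
    different sampling parameters (e.g. temperature, top-p).
    An example is kept only if both sets of labels are unanimous and agree.
    """
    kept = []
    for ex in examples:
        votes = model_votes[ex]
        samples = best_model_samples[ex]
        if len(set(votes)) == 1 and len(set(samples)) == 1 and votes[0] == samples[0]:
            kept.append((ex, votes[0]))
    return kept
```

Examples rejected by either consistency check are simply discarded, trading training-set size for label quality.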
2022
OPDAI at SemEval-2022 Task 11: A hybrid approach for Chinese NER using outside Wikipedia knowledge
Ze Chen | Kangxu Wang | Jiewen Zheng | Zijian Cai | Jiarong He | Jin Gao
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
This article describes the OPDAI submission to SemEval-2022 Task 11 on Chinese complex NER. First, we explore the performance of model-based approaches and their ensemble, finding that fine-tuning the pre-trained Chinese RoBERTa-wwm model with word semantic representation and contextual gazetteer representation performs best among single models. However, the model-based approach performs poorly on the test data because of low-context and unseen-entity cases. We therefore extend our system into two stages: (1) generating entity candidates with a neural model, soft templates, and a Wikipedia lexicon; (2) predicting the final entities with a feature-based ranking model. In the evaluation, our best submission achieves an F1 score of 0.7954, the third-best score in the Chinese sub-track.
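The two-stage pipeline in the abstract — candidate generation followed by ranking — can be sketched as below. This is a toy sketch under stated assumptions: spans are (start, end) character offsets, the model and template outputs are given as span sets, and the scoring function is an arbitrary placeholder, none of which comes from the paper.

```python
def generate_candidates(text, model_spans, template_spans, lexicon):
    """Stage 1: union candidate spans from a neural model, soft templates,
    and exact matches against a Wikipedia-derived lexicon."""
    cands = set(model_spans) | set(template_spans)
    for entry in lexicon:
        idx = text.find(entry)
        if idx != -1:
            cands.add((idx, idx + len(entry)))
    return cands

def rank_candidates(cands, score_fn, threshold=0.5):
    """Stage 2: keep candidates whose feature-based score clears a threshold,
    best-scored first."""
    scored = [(c, score_fn(c)) for c in cands]
    return [c for c, s in sorted(scored, key=lambda p: -p[1]) if s >= threshold]
```

The lexicon stage is what lets the system recover unseen entities that the neural model misses in low-context sentences.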
Using Deep Mixture-of-Experts to Detect Word Meaning Shift for TempoWiC
Ze Chen | Kangxu Wang | Zijian Cai | Jiewen Zheng | Jiarong He | Max Gao | Jason Zhang
Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP)
This paper describes the dma submission to the TempoWiC task, which achieves a macro-F1 score of 77.05% and attains first place in the task. We first explore the impact of different pre-trained language models. We then adopt data cleaning, data augmentation, and adversarial training to improve generalization and robustness. For further improvement, we integrate POS information and word semantic representation using a Mixture-of-Experts (MoE) approach. The experimental results show that MoE mitigates the feature-overuse issue and combines the context, POS, and word semantic features well. Finally, we use a model ensemble for the final prediction, a method that has proven effective in many prior works.
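The MoE feature combination mentioned above amounts to a gated weighted sum over expert outputs (here, context, POS, and word-semantic feature vectors). The sketch below assumes fixed gate logits and plain Python lists purely for illustration; the paper's actual gating network and feature dimensions are not specified here.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def moe_combine(expert_outputs, gate_logits):
    """Combine per-expert feature vectors (e.g. context, POS, word-semantic)
    into one vector via softmax gate weights."""
    w = softmax(gate_logits)
    dim = len(expert_outputs[0])
    return [sum(w[i] * expert_outputs[i][d] for i in range(len(expert_outputs)))
            for d in range(dim)]
```

Because the gate can down-weight an expert per input, no single feature dominates, which is one plausible reading of how MoE addresses the feature-overuse issue.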
Co-authors
- Ze Chen 3
- Max Gao 2
- Kangxu Wang 2
- Jiewen Zheng 2
- Zijian Cai 2