Jing Ye


2024

MapGuide: A Simple yet Effective Method to Reconstruct Continuous Language from Brain Activities
Xinpei Zhao | Jingyuan Sun | Shaonan Wang | Jing Ye | Xiaohan Zhang | Chengqing Zong
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Decoding continuous language from brain activity is a formidable yet promising field of research. It is particularly significant for helping people with speech disabilities communicate through brain signals. The field addresses the complex task of mapping brain signals to text. The previous best attempt reverse-engineered this process indirectly: it first learned to encode brain activity from text and then guided text generation by aligning with predicted brain responses. In contrast, we propose a simple yet effective method that guides text reconstruction by directly comparing candidate text embeddings with the text embeddings predicted from brain activities. Comprehensive experiments reveal that our method significantly outperforms the current state-of-the-art model, with average improvements of 77% and 54% on BLEU and METEOR scores. We further validate the proposed modules through detailed ablation studies and case analyses, and highlight a critical correlation: the more precisely we map brain activities to text embeddings, the better the text reconstruction results. This insight can simplify the task of reconstructing language from brain activities for future work, emphasizing the importance of improving brain-to-text-embedding mapping techniques.
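
A minimal, hypothetical sketch of the general idea (not the authors' pipeline): re-rank candidate continuations by how closely their text embeddings match the embedding predicted from brain activity. The embedding function and candidate list below are illustrative stand-ins.

    import numpy as np

    def cosine(a, b):
        # Cosine similarity with a small epsilon for numerical safety.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def rerank_by_predicted_embedding(predicted_emb, candidates, embed_fn):
        """Score each candidate text by similarity to the embedding mapped
        from brain activity and return the candidates best-first."""
        scored = [(cosine(predicted_emb, embed_fn(c)), c) for c in candidates]
        return [c for _, c in sorted(scored, key=lambda t: t[0], reverse=True)]

    # Toy usage: random vectors stand in for a real text encoder and for the
    # regression model that maps brain activity to a text embedding.
    rng = np.random.default_rng(0)
    fake_embed = lambda text: rng.standard_normal(16)
    ranked = rerank_by_predicted_embedding(rng.standard_normal(16),
                                           ["the dog ran", "a quiet night"],
                                           fake_embed)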

CoCA: Fusing Position Embedding with Collinear Constrained Attention in Transformers for Long Context Window Extending
Shiyi Zhu | Jing Ye | Wei Jiang | Siqiao Xue | Qi Zhang | Yifan Wu | Jianguo Li
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Self-attention and position embedding are two crucial modules in transformer-based Large Language Models (LLMs). However, the potential relationship between them is far from well studied, especially for extending long context windows. In fact, anomalous behaviors that hinder long context extrapolation exist between Rotary Position Embedding (RoPE) and vanilla self-attention: incorrect initial angles between Q and K can cause misestimation when modeling the rotary position embedding of the closest tokens. To address this issue, we propose the Collinear Constrained Attention mechanism, namely CoCA. Specifically, we enforce a collinear constraint between Q and K to seamlessly integrate RoPE and self-attention. While adding only minimal computational and spatial complexity, this integration significantly enhances long context window extrapolation ability. We provide an optimized implementation, making it a drop-in replacement for any existing transformer-based model. Extensive experiments demonstrate that CoCA excels at extending context windows. A CoCA-based GPT model, trained with a context length of 512, can extend the context window up to 32K (60×) without any fine-tuning. Additionally, incorporating CoCA into LLaMA-7B achieves extrapolation up to 32K with a training length of only 2K. Our code is publicly available at: https://github.com/codefuse-ai/Collinear-Constrained-Attention
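
A minimal sketch of rotary position embedding with a toy collinear constraint, included for intuition only; the scaling scheme below is an assumption for illustration and not the paper's exact CoCA formulation (see the linked repository for that).

    import numpy as np

    def rope(x, pos, base=10000.0):
        """Apply rotary position embedding to vector x at position `pos`
        (GPT-NeoX-style pairing of dimension i with i + d/2)."""
        half = x.shape[-1] // 2
        angles = pos * base ** (-np.arange(half) / half)   # per-plane rotation angles
        x1, x2 = x[..., :half], x[..., half:]
        return np.concatenate([x1 * np.cos(angles) - x2 * np.sin(angles),
                               x1 * np.sin(angles) + x2 * np.cos(angles)], axis=-1)

    # Toy collinearity: build K from Q using one non-negative scale per 2-D
    # rotary plane, so the pre-rotation angle between matched Q/K pairs is zero.
    rng = np.random.default_rng(0)
    q = rng.standard_normal(8)
    s = np.abs(rng.standard_normal(4))                     # non-negative per-plane scales
    k = np.concatenate([s, s]) * q
    logit = rope(q, pos=3) @ rope(k, pos=7)                # depends on relative position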

2023

INFORM : Information eNtropy based multi-step reasoning FOR large language Models
Chuyue Zhou | Wangjie You | Juntao Li | Jing Ye | Kehai Chen | Min Zhang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Large language models (LLMs) have demonstrated exceptional performance in reasoning tasks with dedicated Chain-of-Thought (CoT) prompts. Further enhancing CoT prompts with exquisite exemplars can significantly improve reasoning performance. However, the effectiveness of CoT prompts may fluctuate dramatically with different choices of in-context examples. Additionally, manual construction of rationale steps can be time-consuming, presenting challenges for the widespread adoption of CoT prompting. In this work, we propose a novel approach that introduces information entropy (IE) as a criterion for CoT prompt selection. We extend this criterion to the CoT generation and inference stages, automatically generating CoT prompts with higher information entropy scores and adaptively determining the number of samples. These three stages together form our proposed information-entropy-based multi-step reasoning for large language models, named INFORM. Our experiments across seven reasoning benchmarks with two language models (GPT-3.5-Turbo and text-davinci-003) demonstrate the superiority of INFORM in both performance and efficiency.
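
An illustrative sketch of the underlying idea (not the INFORM implementation): estimate the information entropy of answers sampled from an LLM for a question and use it as a selection criterion. The `sample_answers` callable is a hypothetical stand-in for repeated CoT sampling from the model.

    import math
    from collections import Counter

    def answer_entropy(answers):
        """Shannon entropy (in bits) of the empirical answer distribution."""
        counts = Counter(answers)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def select_exemplars_by_entropy(questions, sample_answers, k=4):
        """Rank candidate exemplar questions by the entropy of their sampled
        answers and keep the top-k (higher entropy scores are preferred here)."""
        ranked = sorted(questions,
                        key=lambda q: answer_entropy(sample_answers(q)),
                        reverse=True)
        return ranked[:k]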

2020

汉语否定焦点识别研究:数据集与基线系统(Research on Chinese Negative Focus Identification: Dataset and Baseline)
Jiaxuan Sheng (盛佳璇) | Bowei Zou (邹博伟) | Longxiang Shen (沈龙骧) | Jing Ye (叶静) | Yu Hong (洪宇)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

Natural language text contains a great deal of negated meaning, and negative focus identification, a finer-grained form of negation analysis, has recently begun to attract attention from natural language processing researchers. The task aims to identify the text span in a sentence that is modified and emphasized by a negation word, and it matters for downstream NLP tasks such as sentiment analysis and opinion mining. Compared with English, research on Chinese negative focus identification has progressed slowly, mainly because no Chinese dataset has been available to provide training and test data for models. To address this problem, this paper annotates negative focus on the Chinese negation and uncertainty corpus, offers a preliminary exploration of how negative focus is realized in Chinese, and constructs a dataset of 5,762 samples. We also propose a baseline system based on a neural network model to serve as a reference for subsequent research.