Jiahui Gao
2023
DetGPT: Detect What You Need via Reasoning
Renjie Pi
|
Jiahui Gao
|
Shizhe Diao
|
Rui Pan
|
Hanze Dong
|
Jipeng Zhang
|
Lewei Yao
|
Jianhua Han
|
Hang Xu
|
Lingpeng Kong
|
Tong Zhang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
In recent years, the field of computer vision has seen significant advancements thanks to the development of large language models (LLMs). These models have enabled more effective and sophisticated interactions between humans and machines, paving the way for novel techniques that blur the lines between human and machine intelligence. In this paper, we introduce a new paradigm for object detection that we call reasoning-based object detection. Unlike conventional object detection methods that rely on specific object names, our approach enables users to interact with the system using natural language instructions, allowing for a higher level of interactivity. Our proposed method, called DetGPT, leverages state-of-the-art multi-modal models and open-vocabulary object detectors to perform reasoning within the context of the user’s instructions and the visual scene. This enables DetGPT to automatically locate the object of interest based on the user’s expressed desires, even if the object is not explicitly mentioned. For instance, if a user expresses a desire for a cold beverage, DetGPT can analyze the image, identify a fridge, and use its knowledge of typical fridge contents to locate the beverage. This flexibility makes our system applicable across a wide range of fields, from robotics and automation to autonomous driving. Overall, our proposed paradigm and DetGPT demonstrate the potential for more sophisticated and intuitive interactions between humans and machines. We hope that our proposed paradigm and approach will provide inspiration to the community and open the door to more interactive and versatile object detection systems.
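A minimal sketch of the two-stage pipeline the abstract describes: a multi-modal model reasons over the user's instruction and the image to propose relevant object names, and an open-vocabulary detector localizes them. The function names below (multimodal_reason, open_vocab_detect) are hypothetical placeholders, not the actual DetGPT API.

```python
from typing import List, Tuple

def multimodal_reason(image_path: str, instruction: str) -> List[str]:
    """Placeholder: a multi-modal LLM reasons over the image and the user's
    instruction and returns the names of relevant objects
    (e.g., "I want a cold beverage" -> ["fridge"])."""
    raise NotImplementedError("swap in a real multi-modal model here")

def open_vocab_detect(image_path: str,
                      object_names: List[str]) -> List[Tuple[str, Tuple[int, int, int, int]]]:
    """Placeholder: an open-vocabulary detector returns (label, bounding box)
    pairs for the object names proposed by the reasoning step."""
    raise NotImplementedError("swap in a real open-vocabulary detector here")

def detect_what_you_need(image_path: str, instruction: str):
    # Step 1: reason about the instruction in the context of the visual scene.
    targets = multimodal_reason(image_path, instruction)
    # Step 2: localize the inferred objects with an open-vocabulary detector.
    return open_vocab_detect(image_path, targets)
```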
2022
ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback
Jiacheng Ye
|
Jiahui Gao
|
Zhiyong Wu
|
Jiangtao Feng
|
Tao Yu
|
Lingpeng Kong
Findings of the Association for Computational Linguistics: EMNLP 2022
Recently, dataset-generation-based zero-shot learning has shown promising results by training a task-specific model with a dataset synthesized from large pre-trained language models (PLMs). The final task-specific model often achieves comparable or even better performance than the PLMs under the zero-shot setting, with orders of magnitude fewer parameters. However, synthetic datasets have their drawbacks: they have long suffered from low quality (e.g., low informativeness and redundancy). This explains why massive synthetic data does not lead to better performance, as we would expect with human-labeled data. To improve the quality of dataset synthesis, we propose ProGen, a progressive zero-shot dataset generation framework that leverages feedback from the task-specific model to guide the generation of new training data via in-context examples. Extensive experiments on five text classification datasets demonstrate the effectiveness of the proposed approach. We also show that ProGen achieves on-par or superior performance with only 1% of the synthetic dataset size when compared to baseline methods without in-context feedback.
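A minimal sketch of a progressive generate-train-feedback loop in the spirit of ProGen: each round generates synthetic data conditioned on feedback examples, retrains the task model, and lets that model select the in-context examples for the next round. The helpers generate_with_plm, train_task_model, and select_feedback are hypothetical placeholders, not the authors' code.

```python
def generate_with_plm(feedback_examples, n_samples):
    """Placeholder: prompt a PLM with in-context feedback examples and
    return n_samples synthetic (text, label) pairs."""
    raise NotImplementedError

def train_task_model(dataset):
    """Placeholder: fit a small task-specific model on the synthetic dataset."""
    raise NotImplementedError

def select_feedback(model, dataset, k):
    """Placeholder: use the task model's signal (e.g., its confidence) to
    pick the k most useful synthetic examples as in-context feedback."""
    raise NotImplementedError

def progen_loop(rounds=3, per_round=1000, k_feedback=8):
    dataset, feedback, model = [], [], None
    for _ in range(rounds):
        # Generate new data, conditioning the PLM on feedback examples in-context.
        dataset += generate_with_plm(feedback, per_round)
        # Retrain the task model on the accumulated synthetic data.
        model = train_task_model(dataset)
        # The task model's feedback guides the next generation round.
        feedback = select_feedback(model, dataset, k_feedback)
    return model, dataset
```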
ZeroGen: Efficient Zero-shot Learning via Dataset Generation
Jiacheng Ye
|
Jiahui Gao
|
Qintong Li
|
Hang Xu
|
Jiangtao Feng
|
Zhiyong Wu
|
Tao Yu
|
Lingpeng Kong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
There is a growing interest in dataset generation recently due to the superior generative capacity of large pre-trained language models (PLMs). In this paper, we study a flexible and efficient zero-shot learning method, ZeroGen. Given a zero-shot task, we first generate a dataset from scratch using PLMs in an unsupervised manner. Then, we train a tiny task model (e.g., LSTM) under the supervision of the synthesized dataset. This approach allows highly efficient inference, as the final task model has orders of magnitude fewer parameters compared to PLMs (e.g., GPT2-XL). Apart from being annotation-free and efficient, we argue that ZeroGen can also provide useful insights from the perspective of data-free model-agnostic knowledge distillation and unreferenced text generation evaluation. Experiments and analysis on different NLP tasks, namely text classification, question answering, and natural language inference, show the effectiveness of ZeroGen.
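A minimal sketch of the ZeroGen-style two-stage recipe: synthesize a labeled dataset from a PLM without any human annotation, then train a tiny task model on it so that inference no longer needs the PLM. The prompt template and helper names below are illustrative assumptions, not the released code.

```python
def zero_shot_dataset(plm_generate, labels, prompt_template, n_per_label):
    """Stage 1: synthesize a labeled dataset from scratch with a PLM.
    plm_generate(prompt) is assumed to return one generated text."""
    data = []
    for label in labels:
        for _ in range(n_per_label):
            text = plm_generate(prompt_template.format(label=label))
            data.append((text, label))
    return data

def zerogen(plm_generate, train_tiny_model):
    # Example: a sentiment task, prompting the PLM to write one review per label.
    dataset = zero_shot_dataset(
        plm_generate,
        labels=["positive", "negative"],
        prompt_template="The movie review in {label} sentiment is:",
        n_per_label=10000,
    )
    # Stage 2: train a tiny task model (e.g., an LSTM classifier) on the
    # synthetic data; inference then uses only this small model, not the PLM.
    return train_tiny_model(dataset)
```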