Has It All Been Solved? Open NLP Research Questions Not Solved by Large Language Models
Oana Ignat | Zhijing Jin | Artem Abzaliev | Laura Biester | Santiago Castro | Naihao Deng | Xinyi Gao | Aylin Ece Gunal | Jacky He | Ashkan Kazemi | Muhammad Khalifa | Namho Koh | Andrew Lee | Siyang Liu | Do June Min | Shinka Mori | Joan C. Nwatu | Veronica Perez-Rosas | Siqi Shen | Zekun Wang | Winston Wu | Rada Mihalcea
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Recent progress in large language models (LLMs) has enabled the deployment of many generative NLP applications. At the same time, it has also led to a misleading public discourse that “it’s all been solved.” Not surprisingly, this has, in turn, made many NLP researchers – especially those at the beginning of their careers – worry about what NLP research area they should focus on. Has it all been solved, or what remaining questions can we work on regardless of LLMs? To address this question, this paper compiles a set of NLP research directions that remain rich for exploration. We identify 14 research areas encompassing 45 research directions that require new research and are not directly solvable by LLMs. While we identify many research areas, many others exist; we do not cover areas currently addressed by LLMs but where LLMs lag behind in performance, or those focused on LLM development. We welcome suggestions for other research directions to include: https://bit.ly/nlp-era-llm.