Jungin Kim
2025
CodeComplex: Dataset for Worst-Case Time Complexity Prediction
SeungYeop Baik | Joonghyuk Hahn | Jungin Kim | Aditi | Mingi Jeon | Yo-Sub Han | Sang-Ki Ko
Findings of the Association for Computational Linguistics: EMNLP 2025
The reasoning ability of large language models (LLMs) is crucial, especially in complex decision-making tasks. One significant task to show LLMs' reasoning capability is code time complexity prediction, which involves various intricate factors such as the input range of variables and conditional loops. Current benchmarks fall short of providing a rigorous assessment due to limited data, language constraints, and insufficient labeling. They do not consider time complexity based on input representation and merely evaluate whether predictions fall into the same class, lacking a measure of how close incorrect predictions are to the correct ones. To address these limitations, we introduce CodeComplex, the first robust and extensive dataset designed to evaluate LLMs' reasoning abilities in predicting code time complexity. CodeComplex comprises 4,900 Java codes and an equivalent number of Python codes, overcoming language and labeling constraints, carefully annotated with complexity labels based on input characteristics by a panel of algorithmic experts. Additionally, we propose specialized evaluation metrics for the reasoning of complexity prediction tasks, offering a more precise and reliable assessment of LLMs' reasoning capabilities. We release our dataset and baseline models publicly to encourage the relevant (NLP, SE, and PL) communities to utilize and participate in this research. Our code and data are available at https://github.com/sybaik1/CodeComplex.
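The abstract notes that existing benchmarks only check whether a prediction lands in the correct class, with no notion of how close a wrong prediction is. As a minimal sketch of that idea only, assuming a fixed ordinal scale of common complexity classes, the Python snippet below scores a prediction by its distance from the true class; the class list and scoring formula are illustrative assumptions, not the evaluation metric defined in CodeComplex.

```python
# Illustrative sketch only: an ordinal "closeness" score for complexity
# predictions. The class ordering and the scoring formula below are
# assumptions for demonstration, not the metric defined in CodeComplex.

# Assumed ordering of complexity classes from fastest to slowest growth.
COMPLEXITY_ORDER = [
    "O(1)", "O(log n)", "O(n)", "O(n log n)", "O(n^2)", "O(n^3)", "O(2^n)",
]
RANK = {label: i for i, label in enumerate(COMPLEXITY_ORDER)}


def closeness_score(predicted: str, actual: str) -> float:
    """Return 1.0 for an exact match, decreasing toward 0.0 as the
    predicted class moves further from the actual class on the scale."""
    distance = abs(RANK[predicted] - RANK[actual])
    return 1.0 - distance / (len(COMPLEXITY_ORDER) - 1)


if __name__ == "__main__":
    # An off-by-one prediction (O(n) vs. O(n log n)) scores higher than a
    # prediction that is several classes away (O(1) vs. O(2^n)).
    print(closeness_score("O(n)", "O(n log n)"))   # ~0.83
    print(closeness_score("O(1)", "O(2^n)"))       # 0.0
```

Hierarchy-aware scoring of this kind rewards near-miss predictions, which a plain accuracy measure would treat the same as predictions that are off by several classes.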
TCProF: Time-Complexity Prediction SSL Framework
Joonghyuk Hahn | Hyeseon Ahn | Jungin Kim | Soohan Lim | Yo-Sub Han
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
2024
SharedCon: Implicit Hate Speech Detection using Shared Semantics
Hyeseon Ahn | Youngwook Kim | Jungin Kim | Yo-Sub Han
Findings of the Association for Computational Linguistics: ACL 2024
The ever-growing presence of hate speech on social network services and other online platforms not only fuels online harassment but also presents a growing challenge for hate speech detection. As this task is akin to binary classification, one of the promising approaches for hate speech detection is the utilization of contrastive learning. Recent studies suggest that classifying hateful posts in just a binary manner may not adequately address the nuanced task of detecting implicit hate speech. This challenge is largely due to the subtle nature and context dependency of such pejorative remarks. Previous studies proposed a modified contrastive learning approach equipped with additional aids such as human-written implications or machine-generated augmented data for better implicit hate speech detection. While such additional data can potentially enhance overall performance, it risks overfitting and is costly and time-consuming to obtain. These drawbacks motivate us to design a methodology that does not depend on human-written or machine-generated augmented data for training. We propose a straightforward, yet effective, clustering-based contrastive learning approach that leverages the shared semantics among the data.
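As a rough sketch of the clustering-based idea described above, and not the SharedCon implementation itself, the snippet below assigns pseudo-labels to a batch of post embeddings with k-means and applies a supervised-contrastive-style loss that pulls together posts sharing a cluster; the encoder, cluster count, temperature, and loss form are all assumptions for illustration.

```python
# Illustrative sketch of clustering-based contrastive learning: posts whose
# embeddings fall into the same k-means cluster are treated as positives.
# The cluster count, temperature, and loss form are assumptions for
# demonstration, not the SharedCon implementation.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def cluster_contrastive_loss(embeddings: torch.Tensor,
                             n_clusters: int = 8,
                             temperature: float = 0.1) -> torch.Tensor:
    """Supervised-contrastive-style loss with k-means ids as pseudo-labels."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        embeddings.detach().cpu().numpy()
    )
    labels = torch.as_tensor(labels, device=embeddings.device)

    z = F.normalize(embeddings, dim=1)
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = (z @ z.T / temperature).masked_fill(self_mask, float("-inf"))

    # Pairs sharing a cluster (excluding self-pairs) act as positives.
    positives = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask).float()

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)  # avoid 0 * (-inf) = nan

    pos_counts = positives.sum(dim=1)
    per_anchor = -(positives * log_prob).sum(dim=1) / pos_counts.clamp(min=1.0)
    return per_anchor[pos_counts > 0].mean()


if __name__ == "__main__":
    # Stand-in for encoder outputs of a batch of posts.
    batch = torch.randn(64, 128, requires_grad=True)
    print(cluster_contrastive_loss(batch).item())
```

Because the positives come from clusters over the data itself, no human-written implications or machine-generated augmentations are required, which mirrors the motivation stated in the abstract.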
Co-authors
- Yo-Sub Han 3
- Hyeseon Ahn 2
- Joonghyuk Hahn 2
- Aditi 1
- SeungYeop Baik 1