Bulent Yener


2022

SPOCK at FinCausal 2022: Causal Information Extraction Using Span-Based and Sequence Tagging Models
Anik Saha | Jian Ni | Oktie Hassanzadeh | Alex Gittens | Kavitha Srinivas | Bulent Yener
Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022

Causal information extraction is an important task in natural language processing, particularly in the finance domain. In this work, we develop several information extraction models using pre-trained transformer-based language models to identify cause and effect text spans in financial documents. We use the FinCausal 2021 and 2022 data sets to train span-based and sequence tagging models. Our ensemble of sequence tagging models based on the RoBERTa-Large pre-trained language model achieves an F1 score of 94.70 with an Exact Match score of 85.85, obtaining 1st place in the FinCausal 2022 competition.
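
The sketch below is not the authors' released code; it only illustrates the general setup the abstract describes, i.e. casting cause/effect span extraction as token-level sequence tagging with a RoBERTa-Large token-classification head. The BIO label set, the example sentence, and the untrained classification head are all assumptions; the actual system is fine-tuned on the FinCausal data and ensembled.

```python
# Minimal sketch (assumed, not the SPOCK system) of BIO sequence tagging for
# cause/effect spans with a RoBERTa-Large token-classification head.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed tag set: outside, cause span, effect span (BIO scheme).
LABELS = ["O", "B-CAUSE", "I-CAUSE", "B-EFFECT", "I-EFFECT"]

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-large", num_labels=len(LABELS)
)  # head is randomly initialized here; it would be fine-tuned on FinCausal

text = "Profits fell sharply because raw material costs rose."  # illustrative sentence
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1).squeeze(0).tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze(0))
for tok, pid in zip(tokens, pred_ids):
    print(f"{tok:>12}  {LABELS[pid]}")       # per-token BIO predictions
```

Contiguous B-/I- tokens of the same type would then be merged into cause and effect spans; an ensemble, as in the paper, would combine predictions from several such fine-tuned models.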

2015

Context-aware Entity Morph Decoding
Boliang Zhang | Hongzhao Huang | Xiaoman Pan | Sujian Li | Chin-Yew Lin | Heng Ji | Kevin Knight | Zhen Wen | Yizhou Sun | Jiawei Han | Bulent Yener
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

Be Appropriate and Funny: Automatic Entity Morph Encoding
Boliang Zhang | Hongzhao Huang | Xiaoman Pan | Heng Ji | Kevin Knight | Zhen Wen | Yizhou Sun | Jiawei Han | Bulent Yener
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)