Hua Cheng


2023

MDACE: MIMIC Documents Annotated with Code Evidence
Hua Cheng | Rana Jafari | April Russell | Russell Klopfer | Edmond Lu | Benjamin Striner | Matthew Gormley
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We introduce a dataset for evidence/rationale extraction on an extreme multi-label classification task over long medical documents. One such task is Computer-Assisted Coding (CAC), which has improved significantly in recent years thanks to advances in machine learning technologies. Yet simply predicting a set of final codes for a patient encounter is insufficient, as CAC systems are required to provide supporting textual evidence to justify the billing codes. A model able to produce accurate and reliable supporting evidence for each code would be a tremendous benefit. However, a human-annotated code evidence corpus is extremely difficult to create because it requires specialized knowledge. In this paper, we introduce MDACE, the first publicly available code evidence dataset, built on a subset of the MIMIC-III clinical records. The dataset, annotated by professional medical coders, consists of 302 Inpatient charts with 3,934 evidence spans and 52 Profee charts with 5,563 evidence spans. We implemented several evidence extraction methods based on the EffectiveCAN model (Liu et al., 2021) to establish baseline performance on this dataset. MDACE can be used to evaluate code evidence extraction methods for CAC systems, as well as the accuracy and interpretability of deep learning models for multi-label classification. We believe that the release of MDACE will greatly improve the understanding and application of deep learning technologies for medical coding and document classification.
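To make the span-level evaluation concrete, the following is a minimal Python sketch, not from the paper, of scoring predicted evidence spans against gold annotations with character-overlap precision, recall, and F1. The half-open (start, end) span format and the example offsets are illustrative assumptions; the released MDACE annotation format may differ.

    # Hypothetical span-matching metric for code evidence extraction.
    # Spans are half-open (start, end) character offsets into a chart.

    def overlap(a, b):
        """Characters shared by two half-open (start, end) spans."""
        return max(0, min(a[1], b[1]) - max(a[0], b[0]))

    def span_prf(gold, pred):
        """Character-overlap precision, recall, and F1 between span lists."""
        matched_pred = sum(max((overlap(p, g) for g in gold), default=0) for p in pred)
        matched_gold = sum(max((overlap(g, p) for p in pred), default=0) for g in gold)
        pred_len = sum(e - s for s, e in pred) or 1
        gold_len = sum(e - s for s, e in gold) or 1
        precision = matched_pred / pred_len
        recall = matched_gold / gold_len
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    # Example: gold evidence spans for one ICD code vs. model predictions.
    gold = [(120, 158), (402, 431)]
    pred = [(118, 160), (500, 520)]
    print(span_prf(gold, pred))  # partial credit for the overlapping span

An exact-match variant would instead require identical offsets; character overlap gives partial credit for near-misses, which matters for long clinical notes.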

2021

Effective Convolutional Attention Network for Multi-label Clinical Document Classification
Yang Liu | Hua Cheng | Russell Klopfer | Matthew R. Gormley | Thomas Schaaf
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Multi-label document classification (MLDC) problems can be challenging, especially for long documents with a large label set and a long-tail distribution over labels. In this paper, we present an effective convolutional attention network for the MLDC problem with a focus on medical code prediction from clinical documents. Our innovations are three-fold: (1) we utilize a deep convolution-based encoder with squeeze-and-excitation networks and residual networks to aggregate information across the document and learn meaningful document representations that cover different ranges of text; (2) we explore multi-layer and sum-pooling attention to extract the most informative features from these multi-scale representations; (3) we combine binary cross entropy loss and focal loss to improve performance on rare labels. We focus our evaluation on MIMIC-III, a widely used dataset in the medical domain. Our models outperform prior work on medical coding and achieve new state-of-the-art results on multiple metrics. We also demonstrate the language-independent nature of our approach by applying it to two non-English datasets, where our model outperforms the prior best model and a multilingual Transformer model by a substantial margin.
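As an illustration of innovation (3), below is a minimal PyTorch sketch of combining binary cross-entropy with focal loss for multi-label targets. The gamma and alpha values and the equal weighting of the two terms are illustrative assumptions, not the paper's reported settings.

    import torch
    import torch.nn.functional as F

    def bce_focal_loss(logits, targets, gamma=2.0, alpha=0.5):
        """Per-label BCE combined with focal loss for multi-label outputs."""
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p = torch.sigmoid(logits)
        # p_t is the probability the model assigns to the true outcome per label.
        p_t = p * targets + (1 - p) * (1 - targets)
        focal = (1 - p_t) ** gamma * bce  # down-weights easy, confident labels
        return (alpha * bce + (1 - alpha) * focal).mean()

    # Example: a batch of 2 documents scored against 5 labels.
    logits = torch.randn(2, 5)
    targets = torch.randint(0, 2, (2, 5)).float()
    print(bce_focal_loss(logits, targets))

The focal term shrinks the loss on labels the model already predicts confidently, shifting gradient mass toward the rare labels in the long tail.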

2020

Posterior Calibrated Training on Sentence Classification Tasks
Taehee Jung | Dongyeop Kang | Hua Cheng | Lucas Mentch | Thomas Schaaf
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Most classification models work by first predicting a posterior probability distribution over all classes and then selecting the class with the largest estimated probability. In many settings, however, the quality of the posterior probability itself (e.g., a 65% chance of having diabetes) gives more reliable information than the final predicted class alone. When these methods are shown to be poorly calibrated, most fixes to date have relied on posterior calibration, which rescales the predicted probabilities but often has little impact on final classifications. Here we propose an end-to-end training procedure called posterior calibrated (PosCal) training that directly optimizes the task objective while minimizing the difference between the predicted and empirical posterior probabilities. We show that PosCal not only helps reduce the calibration error but also improves task performance by penalizing drops in performance on both objectives. PosCal achieves about a 2.5% task performance gain and a 16.1% calibration error reduction on GLUE (Wang et al., 2018) compared to the baseline. On xSLUE (Kang and Hovy, 2019), it achieves comparable task performance with a 13.2% calibration error reduction, though it does not outperform the two-stage calibration baseline. PosCal training is easily extendable to any type of classification task as a form of regularization term. Moreover, PosCal incrementally tracks the statistics needed for the calibration objective during training, making efficient use of large training sets.
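The following is a minimal PyTorch sketch of the idea, assuming a binned calibration penalty added to the task loss; the binning scheme, the squared-gap penalty, and the lam weight are illustrative assumptions rather than the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def poscal_style_loss(logits, labels, num_bins=10, lam=1.0):
        """Cross-entropy plus a calibration penalty in the spirit of PosCal.

        Predicted confidences are binned; each bin's mean confidence is
        compared to its empirical accuracy, and the squared gap, weighted
        by bin mass, is added to the task loss.
        """
        ce = F.cross_entropy(logits, labels)
        probs = torch.softmax(logits, dim=-1)
        conf, pred = probs.max(dim=-1)
        correct = (pred == labels).float()
        penalty = logits.new_zeros(())
        edges = torch.linspace(0.0, 1.0, num_bins + 1)
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (conf > lo) & (conf <= hi)
            if mask.any():
                gap = conf[mask].mean() - correct[mask].mean()
                penalty = penalty + mask.float().mean() * gap ** 2
        return ce + lam * penalty

    # Example: 8 sentences, 3 classes; gradients flow through both terms.
    logits = torch.randn(8, 3, requires_grad=True)
    labels = torch.randint(0, 3, (8,))
    poscal_style_loss(logits, labels).backward()

The per-batch penalty above keeps the sketch self-contained; as the abstract notes, PosCal itself tracks the needed statistics incrementally during training rather than recomputing them from scratch.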

2006

Exploring Semantic Constraints for Document Retrieval
Hua Cheng | Yan Qu | Jesse Montgomery | David A. Evans
Proceedings of the Workshop on How Can Computational Linguistics Improve Information Retrieval?

2005

A Flexible Conversational Dialog System for MP3 Player
Fuliang Weng | Lawrence Cavedon | Badri Raghunathan | Danilo Mirkovic | Ben Bei | Heather Pon-Barry | Harry Bratt | Hua Cheng | Hauke Schmidt | Rohit Mishra | Brian Lathrop | Qi Zhang | Tobias Scheideck | Kui Xu | Tess Hand-Bender | Stanley Peters | Liz Shriberg | Carsten Bergmann
Proceedings of HLT/EMNLP 2005 Interactive Demonstrations

2002

Automatic Semantic Grouping in a Spoken Language User Interface Toolkit
Hassan Alam | Hua Cheng | Rachmat Hartono | Aman Kumar | Paul Llido | Crystal Nakatsu | Huy Nguyen | Fuad Rahman | Yuliya Tarnikova | Timotius Tjahjadi | Che Wilcox
COLING 2002: The 19th International Conference on Computational Linguistics

Extending a Broad-Coverage Parser for a General NLP Toolkit
Hassan Alam | Hua Cheng | Rachmat Hartono | Aman Kumar | Paul Llido | Crystal Nakatsu | Fuad Rahman | Yuliya Tarnikova | Timotius Tjahjadi | Che Wilcox
COLING 2002: The 19th International Conference on Computational Linguistics

2001

Corpus-based NP Modifier Generation
Hua Cheng | Massimo Poesio | Renate Henschel | Chris Mellish
Second Meeting of the North American Chapter of the Association for Computational Linguistics

2000

Pronominalization revisited
Renate Henschel | Hua Cheng | Massimo Poesio
COLING 2000 Volume 1: The 18th International Conference on Computational Linguistics

Experimenting with the Interaction between Aggregation and Text Structuring
Hua Cheng
Proceedings of the ANLP-NAACL 2000 Student Research Workshop

An Empirical Analysis of Constructing Non-restrictive NP Modifiers to Express Semantic Relations
Hua Cheng | Chris Mellish
INLG’2000 Proceedings of the First International Conference on Natural Language Generation

Capturing the Interaction between Aggregation and Text Planning in Two Generation Systems
Hua Cheng | Chris Mellish
INLG’2000 Proceedings of the First International Conference on Natural Language Generation

1998

Embedding New Information into Referring Expressions
Hua Cheng
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

Integrating Referring and Informing in NP Planning
Michael O’Donnell | Hua Cheng | Janet Hitzeman
The Computational Treatment of Nominals
