Housam Khalifa Bashier
2026
Reason2Decide: Rationale-Driven Multi-Task Learning
H M Quamran Hasan | Housam Khalifa Bashier | Jiayi Dai | Mi-Young Kim | Randy Goebel
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Despite the wide adoption of Large Language Models (LLMs), clinical decision support systems face a critical challenge: achieving high predictive accuracy while generating explanations aligned with those predictions. Current approaches suffer from exposure bias, leading to misaligned explanations. We propose Reason2Decide, a two-stage training framework that addresses key challenges in self-rationalization, including exposure bias and task separation. In Stage-1, the model is trained on rationale generation; in Stage-2, we jointly train on label prediction and rationale generation, applying scheduled sampling to gradually transition from conditioning on gold labels to conditioning on model predictions. We evaluate Reason2Decide on three medical datasets, including a proprietary triage dataset and public biomedical QA datasets. Across model sizes, Reason2Decide outperforms other fine-tuned baselines and some zero-shot LLMs in prediction (F1) and rationale fidelity (BERTScore, BLEU, LLM-as-a-Judge). In triage, Reason2Decide is robust to the rationale source, whether LLM-generated, nurse-authored, or nurse-post-processed. In our experiments, Reason2Decide outperforms other fine-tuned variants even when using only LLM-generated rationales in Stage-1, indicating that LLM-generated rationales are suitable for pretraining and reduce reliance on human annotations. Remarkably, Reason2Decide achieves these gains with models 40x smaller than contemporary foundation models, making clinical reasoning more accessible for resource-constrained deployments while still providing explainable decision support.
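As a rough illustration of the two-stage recipe described in the abstract, the Python sketch below shows one way the scheduled-sampling transition from gold labels to model predictions could be wired into a training loop. The helper methods train_rationale_step, train_joint_step, and predict_label, as well as the linear decay schedule, are hypothetical placeholders and not the paper's actual implementation.

import random

def scheduled_sampling_prob(step: int, total_steps: int) -> float:
    """Probability of conditioning on the gold label; decays linearly to 0 (assumed schedule)."""
    return max(0.0, 1.0 - step / total_steps)

def train_reason2decide(model, stage1_data, stage2_data, total_steps: int):
    # Stage-1: train on rationale generation only (the abstract notes that
    # LLM-generated rationales suffice for this stage).
    for example in stage1_data:
        model.train_rationale_step(example.input, example.rationale)

    # Stage-2: joint training on label prediction and rationale generation,
    # with scheduled sampling from gold labels toward model predictions.
    for step, example in enumerate(stage2_data):
        p_gold = scheduled_sampling_prob(step, total_steps)
        if random.random() < p_gold:
            conditioning_label = example.gold_label                  # early steps: gold label
        else:
            conditioning_label = model.predict_label(example.input)  # later steps: own prediction
        model.train_joint_step(example.input, conditioning_label,
                               example.gold_label, example.rationale)

Early in Stage-2 the model mostly conditions on gold labels; by the end it conditions on its own predictions, which is how scheduled sampling is meant to mitigate exposure bias.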
2021
DISK-CSV: Distilling Interpretable Semantic Knowledge with a Class Semantic Vector
Housam Khalifa Bashier | Mi-Young Kim | Randy Goebel
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Neural networks (NN) applied to natural language processing (NLP) are becoming deeper and more complex, making them increasingly difficult to understand and interpret. Even in applications of limited scope on fixed data, these complex “black-boxes” create substantial challenges for debugging, understanding, and generalization. But rapid development in this field has now led to more straightforward and interpretable models. We propose a new technique (DISK-CSV) to distill knowledge concurrently from any neural network architecture for text classification, captured as a lightweight interpretable/explainable classifier. Across multiple datasets, our approach achieves better performance than the target black-box. In addition, our approach provides better explanations than existing techniques.
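The abstract does not spell out the distillation objective, but the general idea of distilling a black-box text classifier into a lightweight, inspectable student can be sketched as follows. The bag-of-words student, the KL-divergence loss, and the temperature are illustrative assumptions and not the DISK-CSV formulation itself.

import torch
import torch.nn.functional as F

def distill_step(student: torch.nn.Linear,
                 teacher_probs: torch.Tensor,   # (batch, num_classes) soft targets from the black-box
                 bow_features: torch.Tensor,    # (batch, vocab_size) bag-of-words inputs
                 optimizer: torch.optim.Optimizer,
                 temperature: float = 2.0) -> float:
    """One distillation step for a linear student mimicking a black-box teacher."""
    student_logits = student(bow_features)
    # KL divergence between the teacher distribution and the student's softened distribution.
    loss = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                    teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In such a student, each row of student.weight (num_classes x vocab_size) can be read directly: the largest entries in row c indicate which words the distilled classifier associates with class c, loosely analogous to a per-class semantic vector.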
2020
RANCC: Rationalizing Neural Networks via Concept Clustering
Housam Khalifa Bashier | Mi-Young Kim | Randy Goebel
Proceedings of the 28th International Conference on Computational Linguistics
We propose a new self-explainable model for Natural Language Processing (NLP) text classification tasks. Our approach constructs explanations concurrently with the formulation of classification predictions. To do so, we extract a rationale from the text, then use that rationale to predict a concept of interest as the model's final output. We provide three types of explanations: 1) rationale extraction, 2) a measure of feature importance, and 3) clustering of concepts. In addition, we show how our model can be compressed without applying complicated compression techniques. We experimentally demonstrate our explainability approach on a number of well-known text classification datasets.
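A generic extract-then-predict classifier in the spirit described above can be sketched as below. The module layout, the hard 0.5 threshold, and the straight-through trick are illustrative assumptions rather than the exact RANCC architecture.

import torch
import torch.nn as nn

class RationaleClassifier(nn.Module):
    """Selects a token-level rationale, then predicts only from the selected tokens."""

    def __init__(self, vocab_size: int, embed_dim: int, num_classes: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.selector = nn.Linear(embed_dim, 1)        # scores each token for inclusion
        self.predictor = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids: torch.Tensor):
        emb = self.embed(token_ids)                           # (batch, seq, dim)
        keep_prob = torch.sigmoid(self.selector(emb))         # (batch, seq, 1)
        rationale_mask = (keep_prob > 0.5).float()            # hard token selection
        # Straight-through estimator so gradients still reach the selector.
        mask = rationale_mask + keep_prob - keep_prob.detach()
        pooled = (emb * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        logits = self.predictor(pooled)                       # concept prediction
        return logits, rationale_mask.squeeze(-1)             # prediction plus rationale explanation

The returned mask is the rationale-style explanation: the predictor only sees the selected tokens, so the selection directly accounts for the final prediction.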