Katherine Axford
2023
Logic-driven Indirect Supervision: An Application to Crisis Counseling
Mattia Medina Grespan | Meghan Broadbent | Xinyao Zhang | Katherine Axford | Brent Kious | Zac Imel | Vivek Srikumar
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Ensuring the effectiveness of text-based crisis counseling requires observing ongoing conversations and providing feedback, both labor-intensive tasks. Automatic analysis of conversations—at the full chat and utterance levels—may help support counselors and provide better care. While some session-level training data (e.g., ratings of patient risk) is often available from counselors, labeling utterances requires expensive post hoc annotation. Yet the latter not only provides insights about conversation dynamics, but can also support quality assurance efforts for counselors. In this paper, we examine whether inexpensive—and potentially noisy—session-level annotation can help improve utterance labeling. To this end, we propose a logic-based indirect supervision approach that exploits declaratively stated structural dependencies between the two levels of annotation to improve utterance modeling. We show that adding these rules yields an improvement of 3.5% F-score over a strong multi-task baseline for utterance-level predictions. We demonstrate via ablation studies how indirect supervision via logic rules also improves the consistency and robustness of the system.
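To illustrate the general idea of logic-driven indirect supervision (not the paper's exact formulation), a declarative rule tying the session-level label to utterance-level labels can be relaxed into a differentiable penalty and added to the training loss. The sketch below assumes a hypothetical rule ("a high-risk session implies at least one risk-related utterance"), a product-style relaxation of the existential, and PyTorch; the function and variable names are illustrative.

```python
import torch

# Minimal sketch (assumed formulation, not the paper's): relax the rule
# "session is high-risk -> EXISTS a risk-related utterance" into a soft,
# differentiable penalty over model probabilities.

def rule_penalty(session_risk_prob, utterance_risk_probs):
    """Soft violation score for the implication above.

    session_risk_prob: scalar tensor, P(session is high-risk)
    utterance_risk_probs: tensor of shape (num_utterances,), per-utterance P(risk)
    """
    # Product-style relaxation of the existential:
    # P(at least one risk utterance) = 1 - prod_i (1 - p_i)
    exists_risk = 1.0 - torch.prod(1.0 - utterance_risk_probs)
    # Relax the implication A -> B as max(0, P(A) - P(B)); zero when satisfied.
    return torch.clamp(session_risk_prob - exists_risk, min=0.0)

# The penalty would be weighted and added to the usual supervised losses, e.g.:
# loss = utterance_loss + session_loss + lambda_rule * rule_penalty(p_session, p_utts)
```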
2022
Psychotherapy is Not One Thing: Simultaneous Modeling of Different Therapeutic Approaches
Maitrey Mehta | Derek Caperton | Katherine Axford | Lauren Weitzman | David Atkins | Vivek Srikumar | Zac Imel
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
There are many different forms of psychotherapy. Itemized inventories of psychotherapeutic interventions provide a mechanism for evaluating the quality of care received by clients and for conducting research on how psychotherapy helps. However, such evaluations are slow, expensive, and rarely used outside of well-funded research studies. Natural language processing research has progressed to the point where such tasks can be automated. Yet NLP work in this area has been restricted to evaluating a single approach to treatment, even though prior research indicates therapists use a wide variety of interventions with their clients, often in the same session. In this paper, we frame this scenario as a multi-label classification task and develop a group of models aimed at predicting a wide variety of therapist orientations at the talk-turn level. Our models achieve a macro F1 score of 0.5, with per-class F1 ranging from 0.36 to 0.67. We present analyses that offer insights into the capability of such models to capture psychotherapy approaches and that may complement human judgment.
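A minimal sketch of the multi-label framing described above, under assumed details (a fixed set of orientation labels, talk-turn embeddings from some upstream text encoder, and a sigmoid/binary cross-entropy objective); this is illustrative and not the authors' architecture.

```python
import torch
import torch.nn as nn

NUM_ORIENTATIONS = 6  # hypothetical number of therapeutic-orientation labels

class TalkTurnClassifier(nn.Module):
    """One independent sigmoid score per orientation for each talk-turn."""

    def __init__(self, encoder_dim=768, num_labels=NUM_ORIENTATIONS):
        super().__init__()
        # Assumes talk-turn embeddings are produced by some upstream encoder.
        self.head = nn.Linear(encoder_dim, num_labels)

    def forward(self, turn_embeddings):
        # (batch, encoder_dim) -> (batch, num_labels) logits
        return self.head(turn_embeddings)

# Training step with multi-label targets in {0, 1} (placeholder data):
model = TalkTurnClassifier()
criterion = nn.BCEWithLogitsLoss()
embeddings = torch.randn(4, 768)
targets = torch.randint(0, 2, (4, NUM_ORIENTATIONS)).float()
loss = criterion(model(embeddings), targets)
loss.backward()
```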