Abstract:
This case explores the ethical, operational, and strategic challenges facing a nonprofit hospital as it pilots an AI-assisted messaging tool designed to help clinicians manage patient communications more efficiently. Riverside Medical Center, under financial strain and coping with staff burnout, has seen promising reductions in message response times and clinician workload. However, the tool raises complex concerns about accuracy, liability, oversight, bias, and patient trust, especially following a near-miss incident involving an AI-generated medical response. Dr. Rachel Evans, the hospital’s Chief Medical Information Officer, must weigh three options: expanding the use of AI across clinical operations, pausing the program to reassess safety and risk, or continuing with stronger human oversight. The case invites students to grapple with questions of technological innovation in healthcare, responsible AI deployment, competing stakeholder values, and leadership under uncertainty. The events, characters, and entities depicted in this case are fictitious; however, the industry dilemmas and proposed technological solutions are based on real issues.
Learning Objectives:
After reading the case and participating in a class discussion, students will be able to:
- Identify and understand the benefits and risks of implementing an AI intervention in a high-stakes setting.
- Weigh those benefits and risks and grapple with the tensions surrounding a workplace-based AI intervention.
- Appreciate the benefits and challenges of placing a “human in the loop” of an AI-assisted workflow.