When clinical judgment and AI predictions point in different directions, what happens next?

This event examines instances where artificial intelligence and human clinical judgment disagree, and why these moments of “human–AI dissonance” may become one of the most consequential frontiers in modern medicine.

Through case-based perspectives from radiology, surgery, and nursing, the program explores how clinicians are responding to AI-driven recommendations in real clinical environments. Legal, regulatory, and ethical experts will also address emerging questions of documentation, liability, patient transparency, and shared decision-making. The conversation will look ahead to emerging tools such as digital twins and next-generation clinical infrastructures, exploring how disagreement, accountability, and trust may be intentionally designed into the future of AI-enabled care.

This event is part of the Frontiers in Data Science and AI initiative at the Data Science Institute, Columbia University.



Event Details & Agenda

Monday, February 9, 2026 (9:00 AM – 5:00 PM ET)
In-Person & Zoom Option

Location: Columbia University Irving Medical Center
Address: 50 Haven Avenue, New York, NY 10032

· · ─ · ─ · ·

9:00 AM – 9:15 AM: Welcome and overview (15 min)

9:15 AM – 9:45 AM: What is AI and how is it being used for clinical decision-making? (30 min)

  • Beth Percha, Chief Data and Analytics Officer at NewYork-Presbyterian Hospital; and Adjunct Assistant Professor, Biomedical Informatics, Vagelos College of Physicians and Surgeons, Columbia University

9:45 AM – 10:15 AM: What is human-AI dissonance and why is it important? (30 min)

  • Charles E. Binkley, Director of AI Ethics and Quality, Hackensack Meridian Health; Associate Professor of Surgery, Hackensack Meridian School of Medicine; and Lecturer in Bioethics, School of Professional Studies, Columbia University

10:15 AM – 10:30 AM: Break (15 min)

10:30 AM – 11:30 AM: How should human-AI dissonance be addressed clinically? (60 min)

  • Radiology: Florence Doo, Assistant Professor, Diagnostic Radiology & Nuclear Medicine, University of Maryland School of Medicine; Director of Innovation, University of Maryland Medical Intelligent Imaging (UM2ii) Center; and Faculty, University of Maryland-Institute for Health Computing (UM-IHC) (20 min)
  • Surgery: Gabriel Brat, Assistant Professor of Surgery, Beth Israel Deaconess Medical Center; and Assistant Professor of Biomedical Informatics, Harvard Medical School (20 min)
  • Nursing: Sarah Collins Rossetti, Associate Professor of Biomedical Informatics and Nursing, Columbia University Irving Medical Center (20 min)

11:30 AM – 12:00 PM: Q&A (30 min)

12:00 PM – 1:00 PM: Lunch (60 min)

1:00 PM – 1:45 PM: How should human-AI dissonance be addressed legally? (45 min)

  • Nicholson Price, Professor of Law, University of Michigan

1:45 PM – 2:15 PM: Q&A (30 min)

2:15 PM – 3:00 PM: How should human-AI dissonance be addressed ethically? (45 min)

  • Nancy Berlinger, Senior Research Scholar, The Hastings Center for Bioethics
  • Charles E. Binkley, Director of AI Ethics and Quality, Hackensack Meridian Health; Associate Professor of Surgery, Hackensack Meridian School of Medicine; and Lecturer in Bioethics, School of Professional Studies, Columbia University

3:00 PM – 3:30 PM: Q&A (30 min)

3:30 PM – 4:00 PM: Summary of key points and conclusion (30 min)

4:00 PM – 5:00 PM: Networking reception (in-person only, 60 min)


Speaker Details

Speakers are listed in order of appearance:

Host & DSI Frontiers Awardee: Robert Klitzman
Professor of Psychiatry (in Sociomedical Sciences), Columbia University Irving Medical Center; Program Director, Bioethics Program, School of Professional Studies, Columbia University

Beth Percha
Chief Data and Analytics Officer at NewYork-Presbyterian Hospital; and Adjunct Assistant Professor, Biomedical Informatics, Vagelos College of Physicians and Surgeons, Columbia University

A Survey of AI in Clinical Decision Making

Abstract: Artificial intelligence (AI) is increasingly embedded in the operations of large health systems, yet AI is often poorly defined and confused with traditional analytics or automation. This talk provides a practical introduction to AI in the context of a complex health care environment. We will begin with a clear working definition of “AI,” review the major classes of AI technologies seen at a large health system, and discuss several common forms of AI in health care, including predictive risk models, image- and signal-based algorithms, and emerging generative AI tools. We will conclude with a brief overview of AI governance, highlighting how a clear and lightweight AI governance process can preserve patient safety, transparency, and regulatory compliance while enabling innovation.

Co-Host: Charles E. Binkley
Director of AI Ethics and Quality, Hackensack Meridian Health; Associate Professor of Surgery, Hackensack Meridian School of Medicine; and Lecturer in Bioethics, School of Professional Studies, Columbia University

Understanding the Clinician–AI Collaboration: Judgment, Dissonance, and Patient-Centered Care

Abstract: Disagreements between clinicians and AI decision-support systems are often treated as technical failures to be resolved through improved accuracy or explainability. This talk argues that such disagreements are an inevitable and revealing feature of clinician–AI collaboration. When clinical judgment and AI recommendations diverge, the central questions are not simply who is right, but how judgment, responsibility, and risk are navigated in service of patient-centered care.

This talk examines how disagreement is shaped by decision complexity and the asymmetric consequences of accepting or rejecting AI, including patient harm, professional liability, and moral regret. It challenges the assumption that disagreement is purely epistemic and highlights the moral and relational dimensions of clinical decision-making that AI cannot resolve.

By reframing clinician–AI disagreement away from adversarial narratives toward goal-directed collaboration, this talk sets the conceptual foundation for the clinical, legal, and ethical analyses that follow throughout the conference.

Florence Doo
Assistant Professor, Diagnostic Radiology & Nuclear Medicine, University of Maryland School of Medicine; Director of Innovation, University of Maryland Medical Intelligent Imaging (UM2ii) Center; and Faculty, University of Maryland-Institute for Health Computing (UM-IHC)

AI Is Not the Human Eye – and What That Means for Radiology Patient Care

Abstract: Radiology accounts for the largest share of FDA-cleared clinical AI tools. Dr. Doo, a board-certified radiologist, and others in her specialty have been using AI tools clinically since at least 2019. However, even after years of experience, radiologists still face moments of dissonance – when AI and human judgment diverge – and those moments matter for patients. Although radiologists have had a front-row seat to AI’s promise and limitations, the hard questions remain: What happens when AI and radiologists don’t agree? How do we keep patients at the center when technology changes the way we see? This session reflects on lessons learned, illustrated through real-world case examples, and invites discussion on what these insights mean for human-AI trust, transparency, and patient care moving forward.

Gabriel Brat
Assistant Professor of Surgery, Beth Israel Deaconess Medical Center; and Assistant Professor of Biomedical Informatics, Harvard Medical School

Abstract Coming Soon

Sarah Collins Rossetti
Associate Professor of Biomedical Informatics and Nursing, Columbia University Irving Medical Center

When and Why AI and Nursing Practice May Diverge

Abstract: Many nursing practice decisions are not fully captured in electronic health records. AI models developed without consideration of the data that inform nursing decisions, and of the decisions themselves, risk misalignment with nursing practice. Using two clinical scenarios, insulin management and patient deterioration, this talk will provide an overview of the types of AI–practice dissonance that can arise in nursing and approaches to model development that minimize those gaps.

Nicholson Price
Professor of Law, University of Michigan

Legal Considerations around Clinician/AI Divergence

Abstract: AI use in patient care happens against a backdrop of several different legal regimes. Law governs what AI tools make it to market, what information needs to be disclosed to patients about those tools and their use, and who is responsible when things go wrong and patients are injured. How does the law conceptualize clinician/AI divergence? In many ways, the answers aren’t fully known yet, but existing doctrine offers guidelines going forward. This presentation will provide a high-level overview of the changing and uncertain legal landscape, touching on regulation, liability, disclosure, and documentation. The session will consider not only what the law is, but also what it should be to best facilitate safe, effective care.

Nancy Berlinger
Senior Research Scholar, The Hastings Center for Bioethics

How Should Human-AI Dissonance Be Addressed Ethically?

Overview: How should health care practitioners approach the use of predictive AI tools in reducing clinical uncertainty? This interactive session will provide an overview of emerging ethical questions in patient care, drawing on health care ethics and AI ethics, with suggestions for clinical teaching and learning.