Beyond presumed autonomy: AI-assisted patient preference predictors and the personalised living will
Ricardo Diaz Milian,1 Anirban Bhattacharyya2

1Critical Care Medicine, Mayo Clinic Jacksonville Campus, Jacksonville, Florida, USA
2Critical Care, Mayo Clinic in Florida, Jacksonville, Florida, USA

Correspondence to Dr Ricardo Diaz Milian; rdiaz@alumni.mcw.edu

Annoni’s critique of personalised patient preference predictors (P4) highlights a fundamental flaw in their current design: they fail to meaningfully respect patient autonomy.1 His argument that P4 risks reducing decision-making to the presumed preferences of incapacitated individuals underscores the need for a better approach. To address this, we introduce the concept of the Personalised Patient Preference Predictor-Assisted Living Will (P4-LW): a mechanism that allows individuals, while still capacitated, to formally consent to the use of a P4 and subsequently validate that their living will aligns with their personal values and evolving preferences. Unlike static living wills or surrogate decision-makers, the P4-LW has the potential to provide clinicians with real-time, patient-specific guidance whose use the patient has proactively endorsed. In this commentary, we extend the discussion by examining the ethical and practical implications of this model, arguing that it offers a more nuanced and ethically sound alternative to both conventional surrogate decision-making and the previously proposed P4-aided process.

The transfer of autonomy from an incapacitated patient to a P4 raises ethical concerns, but its acceptability depends on how these tools are implemented. A complete and unchecked transfer is likely inappropriate and would constitute what we have previously termed ‘AI paternalism’.2 Artificial intelligence (AI) paternalism, …

Footnotes

  • Contributors Both authors contributed to the writing and formulation of the ideas of this article. GPT-4 was used minimally for feedback regarding editing and writing style.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.
