In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’1, we aimed to ignite a critical discussion of why and how to design artificial intelligence (AI) systems that assist clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalism, (2) worries about attacks on our humanity stemming from the use of AI and (3) the possibility of designing AI systems for more relevant use cases than the one we consider in our work. We shall focus on these themes, leaving aside some other interesting suggestions, given the limited space available for our response.
Diaz Milian and Bhattacharyya discuss the risks of AI paternalism, highlighting how the use of an AI to predict goal of care preferences may incentivise clinicians and surrogates to shift the burden of decision making to the machine.2 In particular, ‘[i]f the AI-generated response is given priority above the other agents’2 and the AI makes decisions autonomously, then ‘AI paternalism’ would emerge.2 Against this possibility, the authors suggest four safeguards to be implemented across the AI life-cycle.2 These are procedures that may improve the trustworthiness of the AI and human control over it.3
We agree that it is important …
Footnotes
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.