Abstract
Informed consent in surgical settings requires not only the accurate communication of medical information but also the establishment of trust through empathic engagement. The use of large language models (LLMs) offers a novel opportunity to enhance the informed consent process by combining advanced information retrieval capabilities with simulated emotional responsiveness. However, the ethical implications of simulated empathy raise concerns about patient autonomy, trust and transparency. This paper examines the challenges of surgical informed consent, the potential benefits and limitations of digital tools such as LLMs and the ethical implications of simulated empathy. We distinguish between active empathy, which carries the risk of creating a misleading illusion of emotional connection, and passive empathy, which focuses on recognising and signalling patient distress cues, such as fear or uncertainty, rather than attempting to simulate genuine empathy. We argue that LLMs should be limited to the latter, recognising and signalling patient distress cues and alerting healthcare providers to patient anxiety. This approach preserves the authenticity of human empathy while leveraging the analytical strengths of LLMs to assist surgeons in addressing patient concerns. This paper highlights how LLMs can ethically enhance the informed consent process without undermining the relational integrity essential to patient-centred care. By maintaining transparency and respecting the irreplaceable role of human empathy, LLMs can serve as valuable tools to support, rather than replace, the relational trust essential to informed consent.
- Ethics
- Ethics, Medical
- Informed Consent
- Health Personnel
- Personal Autonomy
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Introduction
The term ‘informed consent’ describes the voluntary consent of a competent patient or research participant who receives comprehensive disclosures, fully understands them and, on this basis, agrees to treatment or participation. In the 1950s and 1960s, judicial rulings in some medical fields, including surgery, gradually formalised the duty to disclose certain information to obtain informed consent for both medical practice and research.1 Initially applied in clinical research, the principle of informed consent has expanded to encompass all aspects of diagnostic and therapeutic care, additionally safeguarding patient autonomy.2
Grounded in both moral and legal obligations, this practice is reinforced by cases like Montgomery versus Lanarkshire Health Board in the UK, which emphasises full disclosure of material risks to patients, and by legal frameworks such as Germany’s 2013 Patient Rights Act, which bolster patient autonomy through required consent documentation.3 4 In surgical settings, the disclosure of information typically encompasses the nature of the procedure, associated risks, alternative treatments and the potential outcomes, allowing patients to make informed choices regarding their care. However, the practical implementation of informed consent in surgical contexts is often hindered by challenges, including the trade-off between the time and effort required from physicians and the benefit gained by patients—a dynamic that may, in fact, represent the core difficulty. Other issues, such as patient misunderstandings, dissatisfaction and disputes over the completeness of the information provided, further complicate the process.5–7 Advances in technology, including electronic consent forms, offer new ways to enhance the consent process, though their effectiveness remains under scrutiny.8 9
Preliminary studies suggest that large language models (LLMs), such as GPT-4 or LLaMA, hold the potential to make medical information more accessible to patients by simplifying medical jargon. Ali et al. found that LLMs can enhance the accessibility of medical information; however, concerns persist regarding potential oversimplification, underscoring the need for physician oversight to maintain accuracy.10 LLMs have also been shown to produce more comprehensive consent forms, addressing gaps in traditional documentation, though they still exhibit errors and fail to meet specific legal standards for certain procedures, such as in anaesthesiology.11 12 Stroop et al. noted that ChatGPT’s output, while generally accurate, missed key details found in traditional consent forms, highlighting the ethical responsibility of clinicians in using artificial intelligence (AI) tools for informed consent.13
Beyond informational accuracy, the interpersonal and emotional dimensions of informed consent play a vital role in achieving its ethical goals. A central concern is whether LLMs can contribute to these aspects by simulating empathy within the consent process. In theory, LLMs could adopt empathetic tones, potentially bridging emotional gaps in patient-provider communication and easing patient anxieties. However, this raises the question of whether simulated empathy aligns with the legal framework of informed consent, since emotional dimensions are not part of the explicit legal requirements. They are nevertheless essential components of the ethical foundation of medical communication. While simulated empathy might enhance the patient experience where time or staff resources are limited, it could also jeopardise objectivity and transparency and may need to be deliberately excluded from legally sensitive processes. The notion of simulated empathy further raises ethical questions about its authenticity and the impact on patient trust, as LLMs cannot genuinely understand or share patient emotions.14 AI systems like ELIZA demonstrated as early as the 1960s how humans can anthropomorphise even simple rule-based programmes, but modern LLMs present a more complex challenge.15 Unlike ELIZA, which followed predefined scripts, LLMs generate contextually adaptive responses that can convincingly mimic empathy. This raises new ethical concerns in medical contexts, where patients may misinterpret AI-generated emotional expressions as genuine, potentially influencing their decision-making in unintended ways.
This research article examines these ethical implications, focusing on the ability—or inherent limitations—of LLMs to convey empathy within surgical informed consent. We explore how simulated empathy might affect trust, authenticity, transparency and patient understanding, centring our analysis on common elective procedures (such as cholecystectomy or joint replacement) involving competent adult patients. Consequently, this paper advocates for a specific, ethically grounded approach in which LLMs are designed to recognise and signal distress cues—rather than simulate empathy—allowing them to support surgeons in addressing patient concerns without compromising the authenticity of human empathy. Complexities particular to emergency, diagnostic and paediatric care remain outside the scope of this discussion.
Challenges of surgical informed consent
To adequately assess how far LLMs might support surgical informed consent, a closer look at the emotional and procedural challenges of current clinical practice is needed. Research indicates that many patients struggle to comprehend the information provided during the consent process, particularly for complex procedures and risks, with vulnerable populations facing additional barriers due to language, literacy and education gaps.16–18 Many patients retain a suboptimal understanding of surgical risks and alternatives, even immediately after the consent process, limiting their ability to make informed decisions.19–21
Consent forms often aggravate these issues by presenting lengthy, complex and jargon-heavy information that exceeds the average patient’s comprehension. They are frequently designed with a ‘one-size-fits-all’ approach, failing to address specific procedures or to balance thoroughness with cognitive load. This leaves many patients overwhelmed, similar to the way software end-user license agreements are often ignored due to their excessive detail.10 22 The emotional and relational dimensions of consent pose further critical challenges in surgery, which are closely tied to healthcare structures and the training of healthcare professionals:
Limited physician competencies in informed consent
A critical issue is the entrustment of consent procedures to junior doctors, particularly in high-demand hospital environments. While this practice may address resource constraints, such delegation may negatively affect the quality of consent procedures. Studies suggest that junior doctors often lack the clinical expertise and experience to provide comprehensive explanations and address patients’ emotional concerns, leading to a more superficial consent process.14 23 Audits in colorectal surgery have revealed significant inconsistencies in consent documentation, such as incorrect patient details and a low rate of patients receiving copies of their consent forms, which underscore the shortcomings of delegating consent responsibilities to junior doctors.24 This concern is further compounded by physician burnout, which is linked to emotional exhaustion and diminished engagement in patient interactions.25
Beyond delegation and resource constraints, physicians also tend to overestimate patient comprehension, frequently omitting critical discussions about alternatives, risks and benefits.26 27 In some cases, patients feel pressured to decide quickly or assume they should inherently understand medical terminology, resulting in decisions made without fully grasping the implications. Documentation practices often fall short of accurately capturing the discussions between patients and physicians.28 These gaps raise ethical concerns about whether consent is truly informed and autonomous.
Gaps in empathy training
Empathy is a critical component of informed consent, yet its integration into medical training remains inconsistent. While there is a growing consensus on the need for empathy training in surgical residencies, implementation varies across programmes. Some curricula incorporate didactic sessions, role-playing, simulations and mentorship to enhance empathy skills.29 However, these efforts often compete with the biomedical focus of medical education, where the hidden curriculum tends to prioritise technical knowledge over social-emotional learning.30 As a result, many trainees struggle to balance clinical efficiency with meaningful engagement with patients they have only just met, leading to a gradual decline in empathy throughout medical training.31
Furthermore, empathy in medical practice is recognised as a form of emotional labour, requiring continuous emotional engagement from physicians.32 This emotional demand, combined with time pressures, cognitive load and gaps in procedural knowledge, can diminish a physician’s ability to provide consistent, empathetic interactions.33 Studies have identified specific communication behaviours that enhance patient trust, such as sitting during discussions, acknowledging patient emotions and responding to non-verbal cues.34 However, systemic barriers, including time constraints and the delegation of consent discussions to junior doctors, often prevent these best practices from being consistently applied.
Balancing standardisation and personalisation in emotional support
A key challenge in surgical informed consent is the inherent variability in how practitioners provide emotional support. While some clinicians may naturally adopt empathetic, patient-centred approaches, others may focus more on the technical and legal requirements of consent. This variability mirrors systemic inconsistencies observed in preoperative risk communication, where the level of emotional support received often depends on individual physician style, training and time constraints rather than patient need.35
Striking a balance between consistency and personalisation is crucial. While standardised guidance on effective communication behaviours, such as acknowledging patient emotions, maintaining eye contact and checking patient comprehension, can help establish a baseline of support, true patient-centred care requires adaptability. Physicians must be able to tailor their communication style to individual patient needs, ensuring that the consent discussion remains both ethically sound and emotionally responsive.
Information accuracy, legal standards and patient trust in surgical consent
A key factor in building patient trust and emotional security during informed consent is the accuracy and consistency of the information disclosed. However, significant variability exists in surgical consent discussions, particularly regarding what information should be disclosed and how it is presented. Unlike research consent, where standards exist for risk disclosure, surgical informed consent lacks universally agreed-upon disclosure guidelines, leading to inconsistencies in practice.
One of the most debated issues is the definition of material risk, the legal threshold for information disclosure. There is no universally accepted standard, and different jurisdictions interpret material risk differently.36 This lack of clarity creates discrepancies in the informed consent process, where some surgeons disclose extensive details, while others provide only minimal information based on their subjective judgement.
This variation can have significant effects on patient trust, as two patients undergoing the same procedure may receive differing levels of risk disclosure, depending on the practitioner’s approach. The absence of clear legal and professional consensus on disclosure requirements not only affects informed decision-making but also introduces emotional uncertainty, as patients may question whether they are receiving complete and relevant information.
LLMs and simulated empathy
As demonstrated above, the informed consent process is inherently relational, requiring both accurate information and an empathic connection to reassure patients, promote trust and respect autonomy. Patients facing surgical decisions often experience significant anxiety, making it crucial for surgeons to not only convey information but also address these emotional responses. A purely factual approach without empathic engagement may fall short of achieving truly informed consent by neglecting the emotional and relational dimensions essential to effective medical communication.
Through techniques such as emotional RAG (Retrieval Augmented Generation), LLMs demonstrate a unique capacity to retrieve information using both semantic and emotional cues, including the ability to understand and use implicit emotional context.37 38 This capability marks an advance beyond traditional information retrieval systems and rule-based avatars such as ELIZA.15 It enables LLMs to adjust dynamically to the complexities of patient enquiries while maintaining a balance between accuracy and responsiveness.
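To make the idea of emotionally informed retrieval more concrete, the following minimal sketch illustrates one possible way such a component could combine semantic similarity with a detected emotional cue when ranking consent-related text snippets. The function names, emotion tags and weighting are purely illustrative assumptions, not a description of any existing system.

```python
# Illustrative sketch only: ranking consent-information snippets by a mix of
# semantic similarity and a match with the emotion detected in the patient's
# question. All names, tags and weights are hypothetical placeholders.
from dataclasses import dataclass
import numpy as np

@dataclass
class ConsentSnippet:
    text: str
    embedding: np.ndarray  # semantic vector from any embedding model
    emotion_tag: str       # eg, 'fear', 'uncertainty', 'neutral'

def retrieve(query_vec: np.ndarray, query_emotion: str,
             corpus: list[ConsentSnippet], k: int = 3,
             emotion_weight: float = 0.3) -> list[ConsentSnippet]:
    """Return the k snippets with the best combined semantic and emotional fit."""
    def score(snippet: ConsentSnippet) -> float:
        semantic = float(np.dot(query_vec, snippet.embedding) /
                         (np.linalg.norm(query_vec) * np.linalg.norm(snippet.embedding)))
        emotional = 1.0 if snippet.emotion_tag == query_emotion else 0.0
        return (1 - emotion_weight) * semantic + emotion_weight * emotional
    return sorted(corpus, key=score, reverse=True)[:k]
```

In a real system, the emotion tag would have to come from a validated emotion-detection component, and the retrieved snippets would then be supplied to the LLM as grounding context, as in standard RAG pipelines.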
While avatars have been successfully used in collaborative virtual environments for tasks like medical decision-making, their rule-based, preprogrammed nature limits their adaptability and depth in relational interactions, such as those required in surgical informed consent.39 LLMs, however, could potentially bridge this gap by offering a bidirectional exchange that not only addresses the patient’s need for clear, empathetic and personalised information but also supports healthcare providers. Providers often experience emotional stress and cognitive overload in the consent process. LLMs can help reduce administrative burdens and ensure consistent, supportive interactions, allowing clinicians to focus on patient care. In surgical informed consent, LLMs could supplement human interactions by personalising consent discussions, clarifying medical jargon and identifying patient concerns—all while enhancing emotional engagement through tailored responses.
The concept of ‘relevance’ further illustrates why LLMs are particularly suited for this role. In the context of informed consent, relevance extends beyond topical accuracy to include cognitive relevance (filling knowledge gaps), situational relevance (addressing the immediate context) and affective relevance (responding to emotional cues).40 41 These dimensions highlight the dual nature of relevance in informed consent: both the patient and the healthcare provider have evolving informational needs and emotional considerations. LLMs are potentially capable of addressing this complexity by integrating relevance into their responses, building trust and promoting engagement without overwhelming the user. These dimensions also emphasise that relevance is not solely an objective measure but also deeply influenced by user context, emotions and individual circumstances. However, balancing these subjective factors with the informational accuracy required by ethical and legal standards remains a challenge for LLM-based systems.
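One way to make these dimensions operational in a system design is to keep them as separate, inspectable scores that are only aggregated for ranking candidate explanations. The sketch below is an illustration under that assumption; the weights and example values are placeholders and any real weighting would require empirical and clinical validation.

```python
# Illustrative sketch only: representing cognitive, situational and affective
# relevance as explicit scores. Weights and values are assumptions.
from dataclasses import dataclass

@dataclass
class RelevanceProfile:
    cognitive: float    # does the passage fill the patient's knowledge gap? (0-1)
    situational: float  # does it fit the immediate decision context? (0-1)
    affective: float    # does it respond to the detected emotional cue? (0-1)

    def combined(self, weights=(0.5, 0.3, 0.2)) -> float:
        """Weighted aggregate used to rank candidate explanations."""
        w_cog, w_sit, w_aff = weights
        return w_cog * self.cognitive + w_sit * self.situational + w_aff * self.affective

# Example: a risk explanation that fills a knowledge gap and matches the
# patient's anxiety, but is only partly specific to their situation.
profile = RelevanceProfile(cognitive=0.9, situational=0.6, affective=0.8)
print(round(profile.combined(), 2))  # 0.79
```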
As the use of LLMs expands in healthcare, questions emerge about their potential to simulate empathy in surgical informed consent settings. While human-led consent discussions are ideal, studies indicate that clinicians' empathy may decrease with professional detachment over time, posing a challenge to consistently empathic patient interactions.42 43 Recent research (some of it available only as preprints) suggests that LLMs, such as ChatGPT-3.5, can exhibit cognitive empathy—recognising and responding to patient cues with supportive language, without the deeper emotional engagement known as affective empathy.44–46 Cognitive empathy refers to understanding and responding to others’ emotions intellectually, whereas affective empathy involves actually sharing and feeling those emotions. For instance, in a study comparing ChatGPT’s responses to physician interactions, LLM responses were rated as more empathetic than human responses in 72.85% of cases.45 However, patients generally prefer human responses even when satisfied with LLM-generated empathetic replies, indicating that genuine empathy is deeply valued in clinical contexts.47 While LLMs can simulate empathy, this simulation raises ethical concerns about authenticity, transparency and patient understanding.
When considering the deployment of LLMs in emotionally sensitive scenarios such as surgical consent, certain overarching ethical concerns arise. First, the authenticity of interactions is challenged when LLMs simulate empathy without genuine understanding, potentially leading patients to misinterpret the AI’s responses. Second, the balance between providing emotional reassurance and respecting patient autonomy must be carefully managed to avoid undue influence on decision-making. Lastly, the inherent limitations of LLMs—such as their inability to truly comprehend fear, pain, or vulnerability—necessitate clear boundaries for their ethical use in healthcare. To explore ethical boundaries in greater depth, the following discussion evaluates two distinct scenarios: Scenario A, where the patient is unaware of the AI’s simulated nature, and Scenario B, where the patient is fully aware of interacting with an artificial system.
Scenario A: patient not aware of the simulation
Research shows that people may struggle to distinguish between LLM-generated and human-generated responses, raising concerns about transparency when LLMs simulate empathy.48 When patients are unaware that they are communicating with an LLM, simulated empathy may be perceived as genuine concern. This misinterpretation can create misplaced trust, undermining patient autonomy by influencing decisions through a false sense of emotional support: patients might consent to surgery based on what they believe to be authentic human understanding, and with a skewed perception of the support they are receiving, only to feel deceived once the AI’s true nature becomes apparent to them. Beyond the general erosion of trust, this raises deeper ethical concerns:
First, there is a critical ethical distinction between actively deceiving patients (eg, falsely claiming an LLM is human) and omitting information (eg, allowing patients to assume they are interacting with a human). Active deception constitutes a direct falsehood, whereas omission, while less overt, still breaches the duty of transparency central to informed consent. Both practices exploit patient assumptions, but active deception may compound harm by introducing intentional dishonesty into the physician-patient relationship. Some might argue that deception could be justified if it improves outcomes (eg, where patients would only accept care from ‘humans’). However, prioritising short-term gains risks normalising dishonesty, eroding systemic trust and invalidating consent. Deception in informed consent undermines legal requirements for diagnostics and treatment and is therefore unacceptable in clinical practice.
Failure to appropriately inform patients that they are interacting with an LLM undermines informed consent, as patients cannot fully weigh the implications of algorithmic involvement in their care. Even if patients feel emotionally supported, misleading them denies their right to understand the nature of their interaction. Human empathy carries moral weight rooted in shared vulnerability, while LLM simulations reduce it to transactional mimicry. Ethically, the ends (comfort) do not absolve the means (deception), as patients’ agency depends on transparency.
Therefore, consent obtained under false pretences cannot be truly informed. Appropriate information about LLM use preserves autonomy, ensures patients contextualise empathy appropriately and safeguards the ethical foundation of healthcare. There might be scenarios, however, in which patients are informed about the artificiality of their communication partner but still misperceive it as a truly empathetic counterpart (as was the case with ELIZA). Patients might then feel emotionally supported on false premises. When dealing with competent patients, it must be decided on a case-by-case basis whether physicians not only have a duty to fully inform patients about the artificial agent but are also obliged to correct such false beliefs when the patient clearly benefits from communicating with the LLM. If patients lack full competency, there are good reasons not to use artificial systems in the informed consent process (whether to inform the patient or their legal representative), as there is a high risk that such patients will misperceive the artificial agent as human.
Scenario B: patient aware of the simulation
When patients know they are interacting with an LLM, ethical concerns shift towards the implications of substituting human empathy with algorithmic simulacra. Transparency resolves deception but does not eliminate risks tied to care standards and relational authenticity. While human clinicians may use standardised rapport-building techniques (eg, verbal affirmations and empathetic framing), these practices are grounded in the capacity for genuine emotional engagement, even if imperfectly expressed. LLMs, in contrast, simulate empathy without consciousness or intent—a distinction underscored by patients’ tendency to rate AI-generated empathy as less authentic than human responses, even when equally effective.49 This raises critical questions about care equity and patient expectations. If LLMs are used for consent interactions in some clinical contexts but not others, patients may perceive artificial empathy as a lower standard of care, exacerbating disparities if marginalised groups disproportionately encounter automated systems. Furthermore, while some patients may accept LLM support if it meets their needs, others may reject it as a dehumanising substitute for authentic connection, particularly in high-stakes decisions like surgery.
The ethical significance of empathy’s ‘authenticity’ hinges on its moral foundation: human empathy, even when inconsistently felt, arises from shared vulnerability and ethical commitment to care. LLMs lack this capacity, reducing empathy to a functional tool. This distinction matters not because humans always ‘feel’ empathy, but because the potential for reciprocal understanding is central to trust in medicine. Transparent use of LLMs may still support decision-making, but positioning them as replacements for human empathy risks reframing care as transactional, prioritising efficiency over the irreplaceable value of human connection.
How to responsibly use simulated empathy in the context of informed consent
Given that LLMs can only simulate rather than experience emotions, the question arises: does it matter which type of emotion is referred to during the consent process? Simulated empathy, which mimics an understanding of and shared experience with the patient’s emotions, risks creating an illusion of emotional connection that is ethically questionable. This illusion may lead patients to perceive the interaction as more relationally meaningful than it truly is, thereby influencing their trust and emotional reliance on the system. Such influence may subtly compromise their autonomy during decision-making.
To address these ethical concerns, it is helpful to differentiate between active and passive forms of emotional engagement. Active empathy involves the LLM directly simulating an emotional connection, which can blur the boundaries between humans and machines and create a false sense of relational trust. In contrast, a passive empathy approach—focusing on distress recognition—appears, at first glance, more ethically sound: it limits LLMs to recognising and signalling patient distress without simulating emotions. However, developing LLMs for distress recognition demands careful ethical scrutiny. The lack of scientific consensus regarding the accuracy of emotion recognition technologies, the sensitivity of emotional data and the potential for harm in high-stakes settings underscores the need for rigorous oversight and a critical evaluation of the societal values involved.50
Distress recognition could be seen as a solution to the ethical dilemmas of both scenarios mentioned above, in enabling LLMs to detect and signal patient distress without simulating empathy. To balance efficiency and ethics, LLMs should prioritise detecting actionable distress over merely addressing mild uncertainty. For instance, when patients express mild uncertainty or ambivalence, such as saying ‘I need more time to think’ or ‘I’m torn but leaning toward surgery’, the LLM can offer tailored clarifications like ‘Would you like me to review the risks again?’ However, when a patient makes an actionable distress statement such as ‘I’m panicking about complications’, this should immediately trigger a notification to a physician for timely human intervention. If LLMs are programmed to respond to mild concerns with informational support first (eg, repeating key facts and simplifying jargon), clinicians need to intervene only when patients exhibit persistent or escalating distress. This tiered approach, informed by advances in emotion detection, may minimise disruptions while ensuring human empathy is reserved for high-stakes scenarios.51
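As a purely illustrative sketch of how such tiered logic could be structured, the following fragment separates mild uncertainty from actionable distress and routes only the latter to a clinician. The keyword lists and routing labels are hypothetical placeholders; a deployed system would rely on validated emotion detection and clinical oversight rather than simple keyword matching.

```python
# Illustrative sketch of the tiered distress-handling logic described above.
# Keyword lists and responses are placeholders, not a clinical classifier.
from enum import Enum

class DistressLevel(Enum):
    NONE = 0
    MILD = 1        # uncertainty or ambivalence -> informational support
    ACTIONABLE = 2  # acute anxiety or panic -> notify the physician

MILD_CUES = ("not sure", "need more time", "torn", "unsure")
ACTIONABLE_CUES = ("panicking", "terrified", "can't cope")

def classify_distress(utterance: str) -> DistressLevel:
    text = utterance.lower()
    if any(cue in text for cue in ACTIONABLE_CUES):
        return DistressLevel.ACTIONABLE
    if any(cue in text for cue in MILD_CUES):
        return DistressLevel.MILD
    return DistressLevel.NONE

def handle_turn(utterance: str) -> str:
    level = classify_distress(utterance)
    if level is DistressLevel.ACTIONABLE:
        # Escalate: flag the conversation for timely human intervention.
        return "NOTIFY_PHYSICIAN: patient expresses acute distress"
    if level is DistressLevel.MILD:
        # Informational support only: restate key facts, offer clarification.
        return "OFFER_CLARIFICATION: 'Would you like me to review the risks again?'"
    return "CONTINUE: proceed with standard consent information"

print(handle_turn("I'm panicking about complications"))  # escalates to physician
print(handle_turn("I need more time to think"))          # informational support only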
A further question arises: Is the efficiency offered by LLMs, even if they are restricted to providing information and recognising distress, worth the potential loss of the human emotional presence that surgeons bring to consent conversations? In many healthcare settings, efficiency is critical. LLMs can potentially offer clear, personalised information consistently and can reduce the cognitive and administrative burden on healthcare professionals. However, there is a distinct trade-off when it comes to the loss of in-person care and real human emotions. Surgical informed consent is not merely a technical or legal formality—it is an emotional interaction where trust, reassurance and understanding play a crucial role in the patient’s decision-making process. While LLMs can expedite the delivery of necessary information, they cannot replicate the emotional depth that human providers can bring to these conversations. This creates a potential ethical tension: Does the pursuit of efficiency justify the risk of depersonalisation? In contexts requiring emotional sensitivity and understanding, the absence of human empathy may undermine patient trust and lead to poorer outcomes. Thus, while LLMs may enhance procedural efficiency, there remains an ethical imperative to consider whether this gain comes at too high a cost in terms of diminished human connection.
Conclusion
We have explored the ethical implications of using LLMs in surgical informed consent, arguing that LLMs should be limited to recognising and signalling patient distress rather than simulating empathy. By maintaining transparency and preserving the authenticity of human emotional engagement, LLMs can enhance communication without compromising patient autonomy or trust.
A promising avenue for future research is applying this approach to informed consent in clinical studies, where LLMs could enhance patients’ understanding and decision-making about participation, while also considering emotional factors.52 53 However, a significant technical challenge remains: it may be difficult to prevent LLMs from exhibiting empathy-like responses due to their reliance on statistical patterns in human language. For example, phrases such as ‘I’m sorry to hear that’ are commonly produced without genuine emotional intent, complicating efforts to strictly limit emotional expression.
Moreover, patient preferences for human versus AI-driven healthcare interactions vary widely. While some patients may value the emotional depth of human care, others may prefer AI if it proves effective at providing clear, accessible information. This underscores the importance of respecting individual choices and avoiding generalised assumptions about patient needs—an area worthy of further exploration.
Ultimately, the critical question remains: does the fact that something comes from a machine inherently diminish its value? If LLMs can improve outcomes and support patients effectively, many may accept their role in care, while others will prioritise human interaction. These debates highlight the need for ongoing research and ethical reflection, ensuring that technological advancements align with patient-centred care.
Data availability statement
No data are available.
Ethics statements
Patient consent for publication
Ethics approval
Not applicable.
References
Footnotes
X @KacprowskiTim, @frank_ursin
Contributors PR, SS and FU conceptualised the paper. PR wrote the original drafts. SS, FU, WTB and TK reviewed and edited multiple versions of the manuscript. PR is the guarantor. OpenAI GPT-4 was employed in adherence to ethical guidelines for academic research, assisting in generic parts of idea development, which was then critically revised by all authors.
Funding This study was funded by the Deutsche Forschungsgemeinschaft (Project Number 525059925) and the Lower Saxony Ministry of Science and Culture (MWK) (11-76251-1251/2023 ZN 4109).
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.