Artificial Intelligence and Informed Consent
Laura M. Cascella, MA, CPHRM
In healthcare, the concept of informed consent is generally straightforward. A patient is informed about a proposed test, treatment, or procedure; its benefits and risks; and any alternative options. With this knowledge, the patient decides to either consent or not consent to the recommended plan. In reality, though, informed consent is a more complex process that involves nondelegable duties and varies in scope based on the type of test, treatment, or procedure involved.
When technology is introduced into the mix — particularly advanced technology like artificial intelligence (AI) — the informed consent process becomes even more complicated. When should providers tell patients that they are using AI technologies for diagnostic and treatment purposes? How much information about the technology should they disclose? What are the best ways to explain the complexities of AI in understandable ways? Can patients ever truly understand the role of AI in their care and treatment?
An article in Digital Health sums up the predicament that AI poses to informed consent:
On the one hand, consulting an AI application can be understood similarly to consulting a colleague or a book, which does not need any special mention. On the other hand, AI applications might introduce new types of errors or biases, which could harm the patients in an unexpected fashion or reduce the trust held between the patient and the doctor.1
Unfortunately, AI’s rapid momentum has eclipsed the ability of regulators, leaders, and professional experts to implement laws, standards, guidelines, and best practices that address some of these issues. An article in the Georgetown Law Journal notes that “This is not unusual when a new technology emerges—we struggle with whether it can be assimilated into existing doctrines or whether it requires something new.”2
As a result of this uncertainty, healthcare providers should stay vigilant for ongoing developments related to their legal and ethical responsibilities for disclosing information about AI during informed consent discussions. Likewise, healthcare organizations should maintain diligence in crafting policies that align with evolving schools of thought and regulatory changes.
Beyond legal and ethical frameworks, healthcare providers also must contend with how the media and popular culture might shape their views and their patients’ views of AI. An article in the AMA Journal of Ethics notes that “When an AI device is used, the presentation of information can be complicated by possible patient and physician fears, overconfidence, or confusion.”3 A possible precedent occurred with the emergence of robotic surgery: in some instances, vigorous direct-to-consumer advertising and marketing overstated benefits, overpromised results, or failed to define specific risks, leading to inflated patient perceptions and unrealistic expectations of the technology.4
It is not difficult to see how the media might also shape AI’s image, with persistent stories about how AI technologies will affect daily life in either fantastic or catastrophic ways. In a media-obsessed society, this inundation of information can strongly influence patients’ perceptions, potentially leading to idealistic or pessimistic views of AI. In turn, these perceptions might play a significant role in patient decision-making when information about AI is disclosed during the informed consent process.
If healthcare providers do plan to incorporate information about AI into informed consent discussions, they should start with self-awareness and education about the technology. The authors of the aforementioned AMA Journal of Ethics article explain that “for an informed consent process to proceed appropriately, it requires physicians to be sufficiently knowledgeable to explain to patients how an AI device works.”5
Although acquiring extensive knowledge of AI coding, programming, and functioning is an unrealistic expectation for practitioners, they should be able to offer patients clear, high-level information, such as:
- A general explanation of how the AI program or system works and its predictive accuracy
- Their personal experience using the AI program or system
- The risks versus potential benefits of the AI technology and alternative options
- The human versus machine roles and responsibilities in diagnosis, treatment, and procedures
- Any safeguards that have been put in place, such as cross-checking results between clinicians and AI programs
- Potential risks related to data privacy and confidentiality
Taking the time to provide these additional details during the informed consent process and to answer any questions can help ensure that patients have the appropriate information to make informed decisions about their treatment. Following the informed consent process, providers should document these discussions in patients’ health records and include copies of any related consent forms.
In time, healthcare organizations and providers can use their experiences with AI, lessons learned during AI implementation, evolving federal and state regulations and professional standards, and emerging case law to develop informed consent processes that are ethically and legally sound.
For more information about informed consent, see MedPro’s guideline Risk Management Strategies for Informed Consent. To learn more about managing AI risks, see Risk Tips: Artificial Intelligence.
Endnotes
1 Park, H. J. (2024). Patient perspectives on informed consent for medical AI: A web-based experiment. Digital Health, 10, 20552076241247938. doi:10.1177/20552076241247938
2 Cohen, G. (2020). Informed consent and medical artificial intelligence: What to tell the patient? Georgetown Law Journal, 108(6), 1425–1469. Retrieved from www.law.georgetown.edu/georgetown-law-journal/in-print/volume-108/volume-108-issue-6-june-2020/informed-consent-and-medical-artificial-intelligence-what-to-tell-the-patient/
3 Schiff, D., & Borenstein, J. (2019, February). How should clinicians communicate with patients about the roles of artificially intelligent team members? AMA Journal of Ethics, 21(2), E119-197. doi: 10.1001/amajethics.2019.138
4 Langreth, R. (2013, October 8). Robot surgery damaging patients rises with marketing. Bloomberg News. Retrieved from www.bloomberg.com/news/2013-10-08/robot-surgery-damaging-patients-rises-with-marketing.html
5 Schiff & Borenstein, How should clinicians communicate with patients about the roles of artificially intelligent team members?