
Artificial Intelligence Risks: Patient Expectations

Laura M. Cascella, MA, CPHRM

At the heart of many healthcare innovations are patients and the goal of improving the quality of their care and experience. This is perhaps nowhere more evident than with artificial intelligence (AI), which offers vast potential for improving patient outcomes through advances in population health management, risk identification and stratification, diagnosis, and treatment. Yet even with this promise, questions arise about how patients will interact with and react to these new technologies, as well as how these advances will change the provider–patient relationship.

A look at other technologies reveals some insights and possible concerns. Electronic health records, for example, are known to create communication problems. When clinicians focus on entering data into the computer and looking at the screen, patients can feel ignored, dismissed, or disrespected. These issues can depersonalize the patient experience and erode the provider–patient relationship, a concern that also applies to AI as automation takes on more roles and responsibilities.

Electronic portals offer another instructive example of how patients interact with technology. Some patients prefer the convenience of portals and establishing an electronic connection with their healthcare providers, while others hesitate to use the technology or reject it outright. Fears about privacy and security, lack of technological savvy, and personal preferences for human interaction can all influence how patients perceive a technology's value and whether they adopt it.

The scope and complexity of AI might lead patients to form both negative and positive misconceptions about its functions and capabilities. Some patients might have unfounded fears, while others have overambitious expectations. As a result of these emotions or beliefs, patients might reject helpful technology or place false hope in less-than-perfect systems.

Addressing and managing patients’ expectations of AI will require open communication, ongoing education, and thorough informed consent processes. While explaining AI to patients at a granular level might be overwhelming and confusing, healthcare providers should be able to discuss the benefits, risks, and realistic capabilities of the technology so patients can make informed decisions about their care.

Additionally, as healthcare organizations implement AI tools and systems, they should prioritize building patient trust, communicating how they are choosing AI applications that have proven benefits for patients, and reinforcing strong provider–patient relationships.1

To learn more about other challenges and risks associated with AI, see MedPro’s article Artificial Intelligence in Healthcare: Challenges and Risks.

Endnote


1 Nong, P., & Ji, M. (2025). Expectations of healthcare AI and the role of trust: Understanding patient views on how AI will impact cost, access, and patient-provider relationships. Journal of the American Medical Informatics Association, 32(5), 795–799. https://doi.org/10.1093/jamia/ocaf031