Artificial Intelligence Risks: Privacy and Security

Laura M. Cascella, MA, CPHRM

Artificial intelligence (AI), much like other types of health information technology, raises concerns about data privacy and security — particularly in an era in which cyberattacks are rampant and patients’ protected health information (PHI) is highly valuable to identity thieves and cybercriminals.

The healthcare industry faces a growing challenge: securing ever-increasing volumes of sensitive and confidential digital information while adhering to federal and state privacy and security regulations. AI intensifies this challenge because of its dichotomous nature: it requires massive quantities and diverse types of data, yet those large data stores remain vulnerable to breaches.

The momentum of AI development further complicates matters because current privacy and security regulations and standards often do not account for AI capabilities. For example, an AMA Journal of Ethics article explains that some methods for de-identifying data are ineffective “in the context of large, complex data sets when machine learning algorithms can re-identify a record from as few as 3 data points.”1 The authors also note that AI algorithms can be susceptible to cyberattacks, which could pose threats to patient safety and data integrity.
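To make the re-identification risk concrete, the following is a minimal Python sketch (the data set, column names, and values are invented for illustration, not drawn from the cited article) showing how a handful of quasi-identifiers can single out a record even after names and IDs have been stripped:

```python
# Hypothetical illustration: even in a "de-identified" data set, records that
# are unique on a few quasi-identifiers can be re-identified by linking them
# to an outside source (voter rolls, social media, breach dumps, etc.).
import pandas as pd

# A toy de-identified data set; in practice this would be thousands of rows.
records = pd.DataFrame({
    "zip_code":   ["46032", "46032", "46033", "46033", "46032"],
    "birth_year": [1960, 1975, 1960, 1960, 1960],
    "sex":        ["F", "M", "F", "F", "M"],
})

# Count how many records share each combination of quasi-identifiers.
group_sizes = records.groupby(["zip_code", "birth_year", "sex"]).size()

# Any combination that maps to exactly one record is a re-identification risk.
unique_combos = group_sizes[group_sizes == 1]
print(f"{len(unique_combos)} of {len(records)} records are unique "
      "on just three attributes")
```

In this toy example, three of five records are unique on just three attributes, which mirrors the concern quoted above about machine learning re-identifying a record from as few as three data points.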

Experts also have raised ethical questions about privacy and security in an AI-enabled healthcare environment, such as how to define boundaries between research and commercial use of patient data and how to determine who owns the intellectual property of data-driven AI algorithms.2

Protecting patient privacy and securing digital data will continue to be a fundamental risk issue as AI becomes more mainstream in healthcare. Thus, it will be incumbent on healthcare leaders, AI developers, policymakers, data scientists, and other experts to (a) identify vulnerabilities and consider innovative and proactive strategies to address them, and (b) advocate for replacing outdated regulations with privacy and security laws that are “consistent and flexible to account for innovations in AI and [machine learning].”3

In the meantime, healthcare organizations should assess risks and perform due diligence when selecting AI vendors, and they should ensure that business associate agreements are in place with any technology vendors or companies that use or disclose PHI. Additionally, the Cloud Security Alliance recommends that healthcare organizations using AI devices enforce strong access control with multifactor authentication and incorporate anomaly detection into endpoint security to identify unusual activity.4
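As a simple illustration of the anomaly-detection recommendation, the sketch below flags user accounts whose daily PHI access volume deviates sharply from their own historical baseline. The log format, account names, and z-score threshold are illustrative assumptions, not part of the Cloud Security Alliance guidance:

```python
# Hypothetical sketch: flag accounts whose daily PHI access count deviates
# sharply from their historical baseline. Data and threshold are invented.
import statistics

# Daily counts of PHI records accessed per user (toy seven-day history).
access_history = {
    "clinician_a": [42, 38, 45, 40, 44, 39, 41],
    "clinician_b": [12, 15, 11, 14, 13, 12, 310],  # last day is a spike
}

Z_THRESHOLD = 3.0  # flag anything more than 3 standard deviations from the mean

for user, counts in access_history.items():
    baseline, today = counts[:-1], counts[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # avoid division by zero
    z_score = (today - mean) / stdev
    if abs(z_score) > Z_THRESHOLD:
        print(f"ALERT: {user} accessed {today} records today "
              f"(baseline ~{mean:.0f}; z = {z_score:.1f})")
```

A production system would draw on real endpoint and audit-log telemetry and more robust statistics, but the underlying idea is the same: establish a baseline of normal behavior and surface sharp deviations for human review.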

To learn more about other challenges and risks associated with AI, see MedPro’s article Artificial Intelligence in Healthcare: Challenges and Risks.

Endnotes


1 Crigger, E., & Khoury, C. (2019, February). Making policy on augmented intelligence in health care. AMA Journal of Ethics, 21(2), E188-191. doi: 10.1001/amajethics.2019.188

2 Hearing on Artificial Intelligence: Societal and Ethical Implications: Hearings Before the Committee on Science, Space, and Technology, House of Representatives (2019) (Statement of Georgia Tourassi). Retrieved from www.congress.gov/116/meeting/house/109688/witnesses/HHRG-116-SY00-Wstate-TourassiG-20190626.pdf

3 Angle, J. (2022). Artificial intelligence in healthcare. Cloud Security Alliance. Retrieved from https://cloudsecurityalliance.org/artifacts/artificial-intelligence-in-healthcare/

4 Ibid.