Artificial Intelligence Risks: Biased Data and Functional Issues

Laura M. Cascella, MA, CPHRM

One of the major red flags associated with artificial intelligence (AI) is the potential for bias. Bias can occur for various reasons. For example, the real-world data used to train AI applications (e.g., from medical studies and patient records) might be biased. Algorithms that rely on data from these sources will reflect that bias, perpetuating the problem and potentially leading to suboptimal recommendations and patient outcomes.1 Likewise, bias can permeate the rules and assumptions used to develop AI algorithms, which “may unfairly privilege one particular group of patients over another.”2

Bias sometimes occurs because of a variance between the training data or environment and the real-world conditions in which an AI program or tool is applied, a phenomenon referred to as “distributional shift.” A study in BMJ Quality & Safety notes that this shift can occur because of:

  • Bias in the data training set (e.g., data represent outlying rather than typical cases)
  • Changes in disease patterns over time that are not introduced to the AI system (e.g., data are not updated, so the program continues to rely on the initial data training set)
  • Inappropriate application of an AI system to an unanticipated patient context (e.g., a different population than originally intended)3
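
To make the idea concrete, the sketch below shows one simple way a deployment team might watch for shift: comparing a single feature's distribution in the training data against the same feature in newly arriving data using a two-sample Kolmogorov-Smirnov test. The data, the patient-age feature, and the alert threshold are all hypothetical; none of them come from the studies cited above.

```python
# A minimal sketch, assuming synthetic data and a hypothetical patient-age
# feature, of monitoring for distributional shift with a two-sample
# Kolmogorov-Smirnov test. Nothing here comes from the cited studies.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical training population: ages centered in the mid-50s
training_ages = rng.normal(loc=55, scale=10, size=5000)

# Hypothetical deployment population: older adults, e.g., the tool is
# applied to a patient group it was never trained on
deployment_ages = rng.normal(loc=72, scale=8, size=1200)

statistic, p_value = ks_2samp(training_ages, deployment_ages)

# A very small p-value means the two samples are unlikely to come from the
# same distribution -- the model is seeing data unlike its training set
if p_value < 0.01:
    print(f"Possible distributional shift (KS={statistic:.3f}, p={p_value:.2e})")
else:
    print("No strong evidence of shift in this feature")
```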

An example of inappropriate application of AI was described in a Health Data Management article that discussed AI-enabled facial analysis systems used to detect pain and monitor disease. An investigation of algorithmic bias showed that these systems did not perform well when used with older adults who had dementia.4 Although liability implications related to AI are still evolving, Healthcare IT News has warned that “A clinician relying on a device in a medical setting who doesn't account for varied outcomes for different groups of people might be at risk of a malpractice lawsuit.”5

Further, the Shifts Project, an international collaboration of researchers dedicated to studying distributional shift, explains that AI performance will typically degrade as the degree of shift increases. Its researchers caution that “Distributional shift is ubiquitous in machine learning and is especially important to be aware of in safety-critical applications.”6
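
That degradation is easy to demonstrate. The sketch below, which assumes synthetic two-feature data and a basic logistic regression classifier (neither drawn from the Shifts Project's work), trains a model on one distribution and evaluates it on test sets that drift progressively further away; the printed accuracy falls as the shift grows.

```python
# A minimal sketch, assuming synthetic two-feature data and a basic logistic
# regression model (neither drawn from the cited sources), showing accuracy
# degrading as test data drift further from the training distribution.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, mean_shift=0.0):
    """Binary classification data; mean_shift moves both class centers."""
    X0 = rng.normal(loc=0.0 + mean_shift, scale=1.0, size=(n, 2))  # class 0
    X1 = rng.normal(loc=2.0 + mean_shift, scale=1.0, size=(n, 2))  # class 1
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train on unshifted data, then test on increasingly shifted data
X_train, y_train = make_data(2000)
model = LogisticRegression().fit(X_train, y_train)

for shift in (0.0, 0.5, 1.0, 2.0):
    X_test, y_test = make_data(500, mean_shift=shift)
    print(f"shift={shift:.1f}  accuracy={model.score(X_test, y_test):.2f}")
```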

Another important consideration with AI is that machine learning is literal and results oriented: it relies on the data it receives to run algorithms that generate outputs, whereas humans can weigh “bigger picture” influences. As a result, AI systems might struggle to recognize and adapt to nuances, changes in context, and idiosyncrasies.

This “insensitivity to impact” can prevent AI from factoring in the consequences of false positives and false negatives. The aforementioned BMJ Quality & Safety article notes that although humans’ tendency to err on the side of caution might result in a higher number of false positives and apparent decreases in accuracy, “this behaviour alteration in the face of a potentially serious outcome is critical for safety . . .”7
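
One common way to build that caution into a system, sketched below with entirely synthetic scores and a hypothetical screening task, is to lower the classification threshold so that false negatives become rarer at the cost of more false positives. The specific cutoffs are illustrative, not recommendations.

```python
# A minimal sketch, assuming synthetic risk scores and a hypothetical
# disease-screening task, of lowering a decision threshold so the system
# errs toward false positives rather than missed cases (false negatives).

import numpy as np

rng = np.random.default_rng(7)

# Ground truth (1 = disease present) and hypothetical model probabilities
y_true = rng.integers(0, 2, size=1000)
scores = np.clip(y_true * 0.35 + rng.normal(0.4, 0.2, size=1000), 0.0, 1.0)

def confusion_counts(threshold):
    """Count false positives and false negatives at a given cutoff."""
    y_pred = (scores >= threshold).astype(int)
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return fp, fn

# Compare a conventional 0.5 cutoff with a safety-oriented lower cutoff
for t in (0.5, 0.3):
    fp, fn = confusion_counts(t)
    print(f"threshold={t:.1f}  false positives={fp}  false negatives={fn}")
```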

Other examples of how AI functioning might lead to unintended consequences include:

  • Unsafe failure mode. A program or system makes predictions despite limited confidence or insufficient information (see the sketch after this list).
  • Negative side effects. A program or system has narrow functions that are unable to consider a broader context.
  • Reward hacking. A program or system finds ways to meet specified objectives without achieving long-term goals.
  • Unsafe exploration. A program or system pushes safety boundaries in an attempt to learn new strategies or methods.8
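
As an illustration of the first item above, the sketch below shows a widely used mitigation: an abstention rule under which a system that lacks confidence declines to predict and routes the case to a clinician, failing safely rather than unsafely. The model, probabilities, and confidence floor are all hypothetical.

```python
# A minimal sketch of an abstention rule: a hypothetical triage model
# declines to predict when its top predicted probability falls below a
# confidence floor and defers to clinician review instead. The model,
# probabilities, and threshold are all illustrative assumptions.

import numpy as np

CONFIDENCE_FLOOR = 0.80  # assumed policy threshold, not from the article

def predict_or_defer(class_probs):
    """Return the predicted class index, or None to signal human review."""
    probs = np.asarray(class_probs)
    top = int(np.argmax(probs))
    if probs[top] < CONFIDENCE_FLOOR:
        return None  # insufficient confidence: fail safely by deferring
    return top

# One confident case and one ambiguous case
for probs in ([0.05, 0.92, 0.03], [0.40, 0.35, 0.25]):
    decision = predict_or_defer(probs)
    label = "defer to clinician" if decision is None else f"predict class {decision}"
    print(f"probs={probs} -> {label}")
```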

Awareness of AI bias and functional issues has elevated concerns about the overall safety and reliability of AI technologies. As the authors of the BMJ Quality & Safety study observe, “The rapid pace of change, diversity of different techniques and multiplicity of tuning parameters make it difficult to get a clear picture of how accurate these systems might be in clinical practice or how reproducible they are in different clinical contexts.”9

Thus, amid growing enthusiasm for AI, it is imperative that researchers, AI developers, public health experts, clinicians, and others recognize how AI might exacerbate existing problems and generate new concerns. Failure to acknowledge these issues and work toward viable solutions will have implications for patient safety and quality of care — and, ultimately, will contradict the long-term vision of how AI can benefit healthcare.

To learn more about other challenges and risks associated with AI, see MedPro’s article Artificial Intelligence in Healthcare: Challenges and Risks.

Endnotes

1 Slabodkin, G. (2019, August 13). AI, machine learning algorithms are susceptible to biased data. Health Data Management. Retrieved from www.healthdatamanagement.com/news/ai-machine-learning-algorithms-are-susceptible-to-biased-data; Shroff, A. (2022, April 14). Healthcare AI bias: Reasons and resolutions. Healthcare IT Today. Retrieved from www.healthcareittoday.com/2022/04/14/healthcare-ai-bias-reasons-and-resolutions/

2 Shroff, Healthcare AI bias: Reasons and resolutions.

3 Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019, March). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231-237. doi: 10.1136/bmjqs-2018-008370

4 Slabodkin, G. (2019, July 27). AI presents host of ethical challenges for healthcare. Health Data Management. Retrieved from www.healthdatamanagement.com/news/ai-presents-host-of-ethical-challenges-for-healthcare

5 Jercich, K. (2021, October 29). Machine learning can revolutionize healthcare, but it also carries legal risks. Healthcare IT News. Retrieved from www.healthcareitnews.com/news/machine-learning-can-revolutionize-healthcare-it-also-carries-legal-risks

6 The Shifts Project. (n.d.). What is distributional shift? Retrieved from https://shifts.ai/

7 Challen, et al., Artificial intelligence, bias and clinical safety.

8 Challen, et al., Artificial intelligence, bias and clinical safety.

9 Ibid.