Artificial Intelligence Risks: Black-Box Reasoning

Laura M. Cascella, MA, CPHRM

Artificial intelligence (AI) systems and programs use data analytics and algorithms to perform functions that typically would require human intelligence and reasoning. Some types of AI are programmed to follow specific rules and logic to produce targeted outputs. In these cases, individuals can understand the reasoning behind a system’s conclusions or recommendations by examining its rules and code.
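
As a simple illustration of this type of transparent logic, the following hypothetical Python sketch classifies a blood pressure reading using explicit, human-written rules (the thresholds are illustrative, not clinical guidance); every output can be traced back to the rule that produced it.

    # Hypothetical example: a rule-based classifier whose logic is fully
    # inspectable. The thresholds are illustrative only; every output can be
    # traced to the explicit rule that produced it.

    def flag_blood_pressure(systolic: int, diastolic: int) -> str:
        """Return a category label along with the rule that triggered it."""
        if systolic >= 180 or diastolic >= 120:
            return "crisis (rule: systolic >= 180 or diastolic >= 120)"
        if systolic >= 140 or diastolic >= 90:
            return "stage 2 (rule: systolic >= 140 or diastolic >= 90)"
        if systolic >= 130 or diastolic >= 80:
            return "stage 1 (rule: systolic >= 130 or diastolic >= 80)"
        return "normal or elevated (no rule triggered)"

    print(flag_blood_pressure(150, 95))  # "stage 2 ..."; the rule that fired is stated in the output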

However, many of today’s cutting-edge AI technologies — particularly deep learning and machine learning systems that offer great promise for transforming healthcare — have more opaque reasoning. Referred to as “black-box reasoning” or “black-box decision-making,” this type of functioning makes it difficult or impossible to determine how the AI system produces results. Rather than being programmed to follow commands, black-box programs learn through observation and experience and then create their own algorithms based on training data and desired outputs.1
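
By contrast, the hypothetical sketch below trains a small neural network on synthetic data using the scikit-learn library. The model derives its own mapping from inputs to outputs, and that mapping lives in arrays of learned weights that reveal little about why any individual case was classified the way it was.

    # Hypothetical example: a small neural network that learns its own mapping
    # from synthetic training data and desired outputs. Its "reasoning" is
    # stored in arrays of learned weights, which do not explain why any
    # particular case was classified as it was. Assumes scikit-learn is installed.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 20))                     # 500 cases, 20 input features
    y_train = (X_train[:, :5].sum(axis=1) > 0).astype(int)   # underlying pattern the model must discover

    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    model.fit(X_train, y_train)                              # the model derives its own decision logic

    new_case = rng.normal(size=(1, 20))
    print("prediction:", model.predict(new_case)[0])
    print("learned weight matrices:", [w.shape for w in model.coefs_])  # numbers, not explanations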

Over time, if more data are introduced, AI can continue to adjust its reasoning and decision-making. The benefit of evolving AI is increased accuracy; however, by “becoming more autonomous with each improvement, the algorithms by which the technology operates become less intelligible to users and even the developers who originally programmed the technology.”2
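
The hypothetical sketch below uses scikit-learn's incremental learning interface to show, in simplified form, how a model can continue adjusting as new batches of data arrive; after several updates, the current decision logic is again just an array of numbers.

    # Hypothetical example: a model that keeps updating as new batches of data
    # arrive, using scikit-learn's partial_fit interface. Each update reshapes
    # the learned weights in ways that are hard to trace after the fact.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(42)
    model = SGDClassifier(random_state=42)

    # Initial training batch; classes must be declared on the first partial_fit call.
    X0 = rng.normal(size=(200, 8))
    y0 = (X0[:, 0] > 0).astype(int)
    model.partial_fit(X0, y0, classes=np.array([0, 1]))

    # Later batches continue to adjust the model as the underlying pattern drifts.
    for _ in range(5):
        X_new = rng.normal(size=(50, 8))
        y_new = (X_new[:, 0] + 0.3 * X_new[:, 1] > 0).astype(int)
        model.partial_fit(X_new, y_new)

    print("current weights:", np.round(model.coef_, 2))  # the updated logic is just an array of numbers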

Although black-box AI offers compelling capabilities and the potential for significant advances in many areas, including disease detection and precision treatment, it also presents serious concerns. Lack of transparency about “how output is derived from input”3 can erode healthcare providers’ confidence in AI systems and create barriers to acceptance and adoption.

One important factor to note, though, is that opaque reasoning might not always generate these concerns. In some situations, how an AI program produces results might be perplexing but not troubling. For example, an article in the AMA Journal of Ethics notes that if an image analysis system can detect cancer with 100 percent accuracy, knowing how it does so is not critical because it is solving a problem that has a “black or white” answer — either cancer is detected or it is not — and the system is doing it with more accuracy than a human.4

However, not all decision-making results in indisputable conclusions. In certain situations, providers need an understanding of how the technology works and the level of confidence behind its conclusions (a simple illustration of surfacing that confidence follows the list below). This might be the case, for example, when AI produces results:

  • With less than 100 percent accuracy
  • Based on unknown but potentially biased algorithms
  • That require weighing various factors to determine an optimal course of care
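
One way to support that need, sketched below under assumed conditions (synthetic data, an arbitrary 90 percent cutoff, and standard scikit-learn calls), is to surface the model's confidence alongside its output so that low-confidence suggestions are flagged for clinician review rather than presented as bare answers.

    # Hypothetical example: reporting a model's confidence alongside its output
    # so that low-confidence suggestions are flagged for clinician review.
    # Data are synthetic, the 90 percent cutoff is arbitrary, and predict_proba
    # is standard scikit-learn API.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(300, 10))
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
    model = LogisticRegression().fit(X_train, y_train)

    new_case = rng.normal(size=(1, 10))
    probabilities = model.predict_proba(new_case)[0]
    label = int(probabilities.argmax())
    confidence = float(probabilities[label])

    if confidence >= 0.90:  # illustrative cutoff, not a clinical standard
        print(f"Suggested finding: class {label} (confidence {confidence:.0%})")
    else:
        print(f"Low-confidence suggestion: class {label} ({confidence:.0%}); flag for clinician review")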

A lack of important information about how the AI system or program works might make it difficult for providers to judge the quality and reliability of the results and might undermine their own clinical reasoning and decision-making.

Black-box AI also creates concerns related to liability. In many cases, the technology is emerging and evolving more quickly than standards of care and best practices, leaving healthcare providers with a level of uncertainty about using AI in clinical practice. Additionally, if AI’s functioning is unknown and unpredictable, questions arise about who the legal system can and should hold responsible when errors occur that result in patient harm.

The authors of an article about tort liability doctrines and AI also point to the technology’s increasing autonomy as an impending legal challenge. As machines continue to learn and adapt, “fewer parties (i.e., clinicians, health care organizations, and AI designers) actually have control over it, and legal standards founded on agency, control, and foreseeability collapse . . .”5 The authors explain that the number of people involved in the development of AI systems and programs also can make it difficult to assign responsibility for malfunctions or errors, particularly when these technologies are built over many years with input from various experts.

Beyond professional and product liability, black-box AI systems also can lead to other legal and ethical dilemmas, such as veiled bias, cybersecurity vulnerabilities, privacy issues, and more.

These numerous concerns highlight the obstacles that black-box AI systems present as well as the shortcomings of current legal principles in addressing potential AI liability. Much more work remains in relation to defining and implementing transparency requirements, determining system reliability, and building confidence in AI technology. Further, the complexity of AI points to the need for evolving standards that address malpractice and negligence, vicarious liability, and product liability in AI-enabled healthcare delivery.

To learn more about other challenges and risks associated with AI, see MedPro’s article Artificial Intelligence in Healthcare: Challenges and Risks.

Endnotes


1 Knight, W. (2017, April 11). The dark secret at the heart of AI. MIT Technology Review. Retrieved from www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

2 Sullivan, H. R., & Schweikart, S. J. (2019, February). Are current tort liability doctrines adequate for addressing injury caused by AI? AMA Journal of Ethics, 21(2), E160-166. doi: 10.1001/amajethics.2019.160

3 Anderson, M., & Anderson, S. L. (2019, February). How should AI be developed, validated, and implemented in patient care? AMA Journal of Ethics, 21(2), E125-130. doi: 10.1001/amajethics.2019.125

4 Ibid.

5 Sullivan & Schweikart, Are current tort liability doctrines adequate for addressing injury caused by AI?

6 Kosinski, M. (2024, October 29). What is black box artificial intelligence (AI)? IBM. Retrieved from www.ibm.com/think/topics/black-box-ai