
Laura M. Cascella, MA, CPHRM
In healthcare, the concept of informed consent is generally straightforward. A patient is informed about a proposed test, treatment, or procedure; its benefits and risks; and any alternative options. With this knowledge, the patient decides whether to consent to the recommended plan. In reality, though, informed consent is a more complex process that involves nondelegable duties and varies in scope based on the type of test, treatment, or procedure involved.
Read more 
Laura M. Cascella, MA, CPHRM
When envisioning the future of healthcare, artificial intelligence (AI) is a preeminent part of the picture. Stories about AI applications and their widespread potential to revolutionize medical practice and patient care trend daily in the media. Yet, akin to the promises of electronic health records in the early 21st century, the excitement surrounding AI has sometimes led to an idealistic view of its capabilities while marginalizing technological and operational challenges as well as safety and ethical concerns.2
Read more 
Laura M. Cascella, MA, CPHRM
One of the major red flags associated with artificial intelligence (AI) is the potential for bias. Bias can occur for various reasons. For example, the real-world data used to train AI applications (e.g., from medical studies and patient records) might be biased. Algorithms that rely on data from these sources will reflect that bias, perpetuating the problem and potentially leading to suboptimal recommendations and patient outcomes.1 Likewise, bias can permeate the rules and assumptions used to develop AI algorithms, which “may unfairly privilege one particular group of patients over another.”2
Read more 
Laura M. Cascella, MA, CPHRM
Artificial intelligence (AI) systems and programs use data analytics and algorithms to perform functions that typically would require human intelligence and reasoning. Some types of AI are programmed to follow specific rules and logic to produce targeted outputs. In these cases, individuals can understand the reasoning behind a system’s conclusions or recommendations by examining its programming and coding.
Read more 
Laura M. Cascella, MA, CPHRM
The concept of bias in relation to artificial intelligence (AI) usually is discussed in terms of biased data and algorithms, which pose significant ethical and safety issues. However, another type of bias also raises concern. Automation bias occurs when “clinicians accept the guidance of an automated system and cease searching for confirmatory evidence . . . perhaps transferring responsibility for decision-making onto the machine . . .”1 Similarly, clinicians who use generally reliable technology systems might become complacent and miss potential errors, particularly if they are pressed for time or carrying a heavy workload.
Read more
Laura M. Cascella, MA, CPHRM
Artificial intelligence (AI), much like other types of health information technology, raises concerns about data privacy and security — particularly in an era in which cyberattacks are rampant and patients’ protected health information (PHI) is highly valuable to identity thieves and cybercriminals.
Read more
Laura M. Cascella, MA, CPHRM
At the heart of many healthcare innovations are patients and the drive to improve the quality of their care and experience. Perhaps nowhere is this truer than with artificial intelligence (AI), which offers vast potential for improving patient outcomes through advances in population health management, risk identification and stratification, diagnosis, and treatment. Yet even with this promise, questions arise about how patients will interact with and react to these new technologies as well as how these advances will change the provider–patient relationship.
Read more
Laura M. Cascella, MA, CPHRM
Training and education are imperative in many facets of healthcare — from mastering clinical systems, to improving technical skills, to understanding regulations and professional standards. Technology often presents unique training challenges because of the ways in which it disrupts existing workflow patterns, alters clinical practice, and introduces both predictable and unforeseen issues.
Read more