
Laura M. Cascella, MA, CPHRM
Violence is an indisputable issue in healthcare, and the media is rife with reports of violent acts occurring in various healthcare settings. When thinking about violence in healthcare, stories in which patients or their families are the perpetrators often come to mind. In some instances, disgruntled or mentally unstable employees act as the aggressors. Violence prevention programs often focus on these aspects but may overlook another crucial source of violence — domestic violence (including intimate partner violence).
Laura M. Cascella, MA, CPHRM
Maternal health has long been an Achilles heel in the U.S. healthcare system, with the United States having the highest rate of maternal deaths of any high-income nation.1 Although recent data show some improvements, maternal morbidity and mortality are still significant issues, particularly for Black women.2
Laura M. Cascella, MA, CPHRM
In healthcare, the concept of informed consent is generally straightforward. A patient is informed about a proposed test, treatment, or procedure; its benefits and risks; and any alternative options. With this knowledge, the patient decides to either consent or not consent to the recommended plan. In reality, though, informed consent is a more complex process that involves nondelegable duties and varies in scope based on the type of test, treatment, or procedure involved.
Laura M. Cascella, MA, CPHRM
When envisioning the future of healthcare, artificial intelligence (AI) is a preeminent part of the picture. Stories about AI applications and their widespread potential to revolutionize medical practice and patient care trend in the media daily. Yet, akin to the promises of electronic health records in the early 21st century, the excitement surrounding AI has sometimes led to an idealistic view of its capabilities while marginalizing technological and operational challenges as well as safety and ethical concerns.2
Laura M. Cascella, MA, CPHRM
One of the major red flags associated with artificial intelligence (AI) is the potential for bias. Bias can occur for various reasons. For example, the real-world data used to train AI applications (e.g., from medical studies and patient records) might be biased. Algorithms that rely on data from these sources will reflect that bias, perpetuating the problem and potentially leading to suboptimal recommendations and patient outcomes.1 Likewise, bias can permeate the rules and assumptions used to develop AI algorithms, which “may unfairly privilege one particular group of patients over another.”2
Laura M. Cascella, MA, CPHRM
Artificial intelligence (AI) systems and programs use data analytics and algorithms to perform functions that typically would require human intelligence and reasoning. Some types of AI are programmed to follow specific rules and logic to produce targeted outputs. In these cases, individuals can understand the reasoning behind a system’s conclusions or recommendations by examining its programming and coding.
Laura M. Cascella, MA, CPHRM
The concept of bias in relation to artificial intelligence (AI) usually is discussed in terms of biased data and algorithms, which pose significant ethical and safety issues. However, another type of bias also raises concern. Automation bias occurs when “clinicians accept the guidance of an automated system and cease searching for confirmatory evidence . . . perhaps transferring responsibility for decision-making onto the machine . . .”1 Similarly, clinicians who use generally reliable technology systems might become complacent and miss potential errors, particularly if they are pressed for time or carrying a heavy workload.
Laura M. Cascella, MA, CPHRM
Artificial intelligence (AI), much like other types of health information technology, raises concerns about data privacy and security — particularly in an era in which cyberattacks are rampant and patients’ protected health information (PHI) is highly valuable to identity thieves and cybercriminals.