As an RN with over 29 years of experience—including over 2 decades in emergency departments (EDs)—I’m used to working in chaotic clinical environments, and I rely on rigorous routines and battle-tested protocols to ensure my patients get the best possible care. That makes me a creature of habit and, I’ll admit, an utter skeptic when new ideas, methods, or technologies threaten to disrupt the status quo.
Artificial intelligence (AI) is threatening to upend just about every aspect of healthcare, but, somewhat unexpectedly, I’ve found myself becoming a passionate advocate for the use of AI in frontline clinical practice. In fact, I’m a staunch believer that nurses have a vital responsibility not only to adopt AI tools, but to help shape their evolution and to participate in the decision-making process as these new technologies integrate into and transform our workplaces.
I know I’m not an obvious advocate for AI, but that’s really my point. If a change-averse, late-adopting skeptic like me can learn to love AI, then anybody can. Let me explain how I went from skeptic to AI booster. In 2015, a multi-disciplinary group from Johns Hopkins (including a biomedical engineer, a data scientist, and a nurse informaticist) approached me about using a recently developed AI “tool” to offer clinical decision support to triage nurses in my ED.
My initial response, predictably, was to say no. I didn’t understand why they wanted to get involved in triage, I didn’t trust their motives, and I didn’t think their idea would work. Our system wasn’t perfect, but it was good enough. I also knew my way around the system; when things weren’t going well, I could find workarounds to advocate for my patients on a case-by-case basis.
Fortunately, the group persisted, and I relented—and was quickly surprised by the effectiveness of this AI triage tool. It turned out that my beloved system and well-intentioned workarounds could be improved upon. Within a week, I saw that the tool was a big step forward. We were making clear, data-driven, and evidence-based triage decisions, rather than relying on the whims and hunches of individual clinicians.
Over the following months, patient care nurses helped the developers align the tool with our team’s real-world needs. Amazingly, we found that disposition decisions were being made an average of 35 minutes faster, and that, on average, high-risk patients were receiving care more than an hour sooner. We also found that while we directed more patients to the lower-acuity area, hospitalizations from that area decreased. In 2018, I presented a poster on our findings at the national Emergency Nurses Association conference. In 2019, I included our results in a speaker session called “Blowing up triage.”
Of course, I understand why many nurses remain wary of new technologies. There’s so much conflicting information out there, and it’s hard to know what you can trust. Sometimes we don’t trust our managers, or we feel we weren’t consulted on a change. We’re also frequently fearful about changes to roles that bring us professional fulfillment. Part of my own initial resistance to this change, after all, was that I’d spent many years as a triage nurse preceptor and took pride in doing my job well.
As nurses, though, we need to understand that although our concerns about new technologies may be well-founded, there’s also a risk to ignoring the big changes coming down the pipeline. My biggest fear is that if we refuse to engage with and educate ourselves about AI technologies, we’ll wind up having to live with solutions that miss the mark because they were designed without input from frontline teams.
When I overcame my fears—of new things, of being replaced, of innovative technology, and of the unknown—I found I could approach the new tool with an open mind and simply evaluate the evidence. We are, after all, an evidence-based practice. I found that AI didn’t erode my own clinical authority; I was still the one making the final decisions. But it did give me the tools I needed to make better decisions and to ensure that both high- and low-risk patients received efficient and effective care.
Over time, I stopped seeing AI as a threat and started seeing it as an opportunity. By approaching this new technology with an open mind, nurses have a chance to shape the way it evolves. After all, patient care nurses are the largest group of healthcare workers in the United States, which means they’re uniquely positioned to guide developers, institutional leaders, and educators. Their knowledge base, education, and clinical expertise must be considered and used to guide the development and deployment of AI in healthcare.
That’s especially important because not all AI tools will be effective or beneficial. Real risks are associated with AI, ranging from algorithmic bias to lack of transparency; even the best tools will require frontline team supervision and scrutiny. Some decisions should never be entrusted to AI models, and AI will never be able to do many things—from comforting a patient to drawing on the experience, insight, and intuition of veteran healthcare professionals.
If we, as nurses, want safe, effective AI tools that augment our abilities without eroding our authority, we need to step up now and make sure we’re part of the conversation as these technologies are developed. If we snub AI altogether, we risk being handed tools that complicate rather than enhance our ability to care for our patients. It’s easy to be a skeptic, but as nurses, we have a duty to actively work to ensure that AI enhances both our profession and our ability to provide care. Our patients, and the next generation of nurses, are depending on us. We’re an integral part of the health delivery team; we can’t afford to be left behind.
Sophia Henry, MS, RN, is a Clinical Consultant for Beckman Coulter in Pikesville, MD.