Today’s intelligent EHR systems have saved clinicians countless hours, streamlined workflows, and improved the accuracy of documentation, all vast improvements over paper-based record-keeping.
But one persistent challenge with EHRs is that clinicians must work within the boundaries of the system. The teams designing the software work hard to replicate clinical workflows and deliver the best possible user experience, but at the end of the day, clinicians must conform to the rules of the EHR. Control over how clinicians use the system and complete their documentation rests with the teams that build the platforms.
Future AI healthcare technology, specifically large language models (LLMs), can reverse that relationship, letting clinicians dictate and determine how they work rather than having to conform to a pre-built process that may not suit them.
The need to address ongoing challenges
Staffing and retention challenges are not new in post-acute care and will not be resolved any time soon. Government and regulatory bodies are taking some steps, and other organizations are trying to help by increasing funding for nursing education and other initiatives. But in the long run, the only way to solve this challenge is to help those who are in the workforce right now work more efficiently.
Here’s one example: Currently, clinicians spend about 1½ hours every day in their EHR, navigating to different pages, digging up documentation and typing in their own notes. That’s 20% of every shift. Across the post-acute space, that’s 800,000 hours every single day spent navigating and working in the EHR rather than spending time with patients and residents. And because EHRs attempt to replicate the full complexity of clinical workflows and documentation, the systems themselves are inevitably complex. So in this environment, how do we solve for efficiency?
Using LLMs to boost efficiency
In existing EHRs, navigation is a linear path: click a link, get to a section, find a specific page or field. LLMs take search to the next level, moving the user exactly where they need to be within the EHR without having to memorize the steps it takes to get there. All the clinicians need to know is how to provide care.
The behavior LLMs enable is a question-and-answer architecture, which lets clinicians express what they need in natural language. A user can say (or type), “I am entering a progress note for Patient X,” and the system uses an LLM to convert that language into tasks the EHR can understand. This is a completely new interface, and it could be as revolutionary as the touch screens we use today, which replaced navigating with a mouse and scrolling with up and down buttons.
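As a rough illustration of that conversion step, the sketch below maps an utterance to a structured command an EHR could dispatch. Everything here is hypothetical: the command schema is invented for illustration, and a toy rule-based parser stands in for the LLM so the example runs offline.

```python
import json
import re

# Hypothetical command schema: the model's job is to turn a free-text
# utterance into a structured action the EHR can execute. A small
# rule-based parser stands in for the LLM in this sketch.
def utterance_to_command(utterance: str) -> dict:
    m = re.search(r"progress note for (.+)", utterance, re.IGNORECASE)
    if m:
        return {"action": "open_note", "note_type": "progress",
                "patient": m.group(1).strip()}
    m = re.search(r"(?:show|pull up) vitals for (.+)", utterance, re.IGNORECASE)
    if m:
        return {"action": "show_vitals", "patient": m.group(1).strip()}
    # Anything the parser can't map is surfaced back for clarification.
    return {"action": "clarify", "utterance": utterance}

print(json.dumps(utterance_to_command("I am entering a progress note for Patient X")))
# → {"action": "open_note", "note_type": "progress", "patient": "Patient X"}
```

In a production system, the regular expressions would be replaced by an LLM call that returns this same kind of structured output, so the EHR only ever has to act on well-formed commands.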
This new way of interacting with an EHR also solves two significant issues. First, users no longer have to learn the EHR; clinicians can simply say what they need, without even touching a screen. Second, it allows immediate documentation at the bedside, so clinicians will not need to double-document by scribbling notes on paper during the shift and then typing them into the system later. LLMs will make it possible to document in real time, streamlining clinicians’ work and helping reduce errors.
Language is complex, even though nearly all of us have mastered it. LLMs are designed to handle that complexity, generating summaries and turning free-form input into data that both humans and computers can understand and act on. That’s the power of these language models. If you’ve used ChatGPT, typing a series of prompts to generate content, you’ve seen how powerful this technology can be. The same underlying technology can be applied to voice input. That can move the EHR away from the computer and into the room, so clinicians won’t have to step away from the care they’re providing patients and residents.
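To make the summarization idea concrete, here is a minimal sketch of how a transcribed bedside dictation might be packaged into a prompt for an LLM. The function name and the note sections are illustrative assumptions, and no real model is called:

```python
# Illustrative only: builds the prompt text a documentation system might
# send to an LLM to turn raw transcribed speech into a structured note.
def build_note_prompt(transcript: str) -> str:
    return (
        "You are a clinical documentation assistant. Rewrite the dictation "
        "below as a concise progress note with Subjective, Objective, "
        "Assessment, and Plan sections.\n\n"
        "Dictation:\n" + transcript
    )

prompt = build_note_prompt(
    "Resident ate most of breakfast, walked to the day room without "
    "assistance, no complaints of pain."
)
print(prompt)
```

The heavy lifting (speech-to-text, then the model’s rewrite) happens outside this snippet; the point is that the clinician’s spoken words, not menu navigation, become the system’s input.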
Change is already starting
Bits and pieces of this technology already exist in post-acute care and home health. Verbal documentation and note-taking tools that translate speech into raw text are available in some products. Search engines like Google are becoming more advanced, letting you state where you want to go rather than having to know how to get there.
The good news is that the shift to this hands-free world will be incremental, rather than a jarring or difficult transition. In small steps, it will start to feel natural to talk to the EHR rather than sitting at a computer to complete tasks. This new functionality should feel seamless, with minor changes in workflow that won’t require relearning the system. It will mean no longer needing to memorize how to navigate the EHR and instead letting users simply say where they want to go and what task they want to accomplish. And all of this should add time back to a clinician’s day, so they don’t need to spend 1½ hours just getting documentation into the EHR.
Meeting challenges with technology
Using the power of LLMs to translate natural language into computer commands has the potential to reshape the way clinicians work, making it easier to document care, gather information from disparate systems and summarize data into succinct, actionable reports. At MatrixCare, we’re working every day to bring these powerful tools closer to reality, so clinicians can shift their attention away from remembering how to navigate their EHR and instead focus on caring for patients and residents.