It wasn’t long ago that bringing up artificial intelligence in conversations about home health care was nearly taboo.
An extremely personal and hands-on industry was largely skeptical of the emerging technology, concerned there would be a push to replace hands-on workers.
Since then, however, home health leaders have come around to AI, whether it be of the generative or machine learning variety. Meanwhile, those types of AI have continued to advance.
There are plenty of use cases for AI in home-based care, but providers are still in the early innings of adoption.
As AI proliferates, it will be important for providers to pick their spots. In the near term, there will be places where it can help and places where it will not, WellSky CEO Bill Miller told Home Health Care News.
Even though AI adoption in home-based care has picked up of late, the technology is still far from an all-encompassing savior.
“Having lived through all sorts of technology trends over the years, I’ve learned to modulate a little bit about how excited to get about things,” Miller said.
Based in Overland Park, Kansas, WellSky is a health care technology company with thousands of partners, including hospitals and a wide range of post-acute providers such as home health agencies.
Miller cited IBM Watson as one technological development that promised a lot but failed to fully deliver. Unveiled in 2010, IBM Watson was a data analytics system that leveraged natural language processing.
Recent AI developments show more promise than that, Miller believes. But “caution” and “responsibility” will be central to WellSky’s AI strategy going forward, particularly around all things clinical.
“It holds a lot of promise that we’re excited about,” Miller said. “As a company that has a lot to gain or lose on these technology trends, we’re going to embrace them. And, on the other side of things, I think we’re having the appropriate caution and responsibility that should be in tow for any new technology platform that our house sells.”
Three buckets
Broadly, Miller and WellSky consider the future of AI – particularly as it relates to home-based care – in three buckets.
The first is around internal innovation. With AI, WellSky can develop code more quickly and more efficiently. With that in mind, WellSky has purchased licenses for all of its roughly 900 engineers to use AI tools in their work every day.
“There is no doubt that AI tools can speed up the rate at which WellSky can develop code internally and innovate,” Miller said. “AI tools can help what our engineers do internally, in terms of building things out faster, and testing them faster. All these internal things that maybe a client would never really see. From an internal perspective, it is meaningful work. It makes us faster and more efficient. That bucket, I don’t think we can mash the pedal down harder to the ground than we are.”
Then, on the provider use case side, there’s another area where WellSky is mashing the pedal down: documentation.
Along with recruiting and retention initiatives, easing the documentation burden has been one of the primary focuses among home health agencies when it comes to AI.
Home health leaders believe the industry offers a lot to health care workers, including flexible scheduling and general autonomy. One of the downsides, however, is the amount of time these workers spend on documentation.
AI can significantly reduce that burden.
“Particularly in home health, the OASIS form. Why can’t that be automated?” Miller said. “Why can’t dragging all the meds and med history out of someone’s file be summarized and pulled together? We’re already doing a ton of the work there, and the amount of time we’re saving clinicians and caregivers, that’s meaningful work, too.”
Pinnacle Home Care CEO Shane Donaldson recently told HHCN that he believes the ability to harness AI in home health documentation will be a game changer – so much so that it will largely relieve the industry of downward payment pressures.
“We now have technologies that can automate much of our documentation. Much of the scheduling can be automated,” Donaldson said. “Through the use of these technologies, I think we’re in a great place for the future.”
But then comes the third bucket, the one that needs to be approached with responsibility and caution: the clinical bucket.
This is also the area that concerned most providers years ago when it came to AI. With a limited view of what the technology could help with, leaders worried about the idea of replacing trained health care professionals.
“This is where you’ll hear me – and our company in general – speak about responsibility and caution,” Miller said. “And that’s in this concept of using AI to do diagnostic things, to deliver care, to insert that technology in the place of the human. Clinical decision support is something that the industry has used for years. Doing it with AI has some incredible promise, but we also see examples of bias entering some of these models. So, we always want to keep a human in a loop. This is where we think about responsibility, every day, and where we want things to be ironclad.”
The reality is that doctors, home health aides, nurses, caregivers and other health care professionals make mistakes all the time. But, at least for now, it’s harder for patients and families to accept the mistake of a machine or AI than it is to accept the mistake of a human.
AI can help reduce human mistakes in many instances. At the same time, non-human mistakes give consumers the impression that the provider is not putting the patient first, exercising caution or acting responsibly.
As AI makes its way into all forms of health care, providers have to walk the line between innovation and responsibility. A non-innovative approach could leave providers behind, while an irresponsible approach could erode the trust between them and their patients.