HHCN Continuum: Embracing Technology to Recruit and Retain Caregivers

This article is sponsored by AlayaCare. This article is based on a discussion with Sarah Khalid, Product Manager at AlayaCare and Guillaume Vergnolle, Sr. Data Scientist at AlayaCare. This discussion took place on December 7, 2023 during the Continuum Conference. The article below has been edited for length and clarity.

Sarah Khalid: I’m the product manager on the Labs team, so we focus more on the research and development, and the next suite of innovative features that are going to hit the platform, so typically AI, machine learning, these innovative next-generation technologies. We test a lot with those and get them into production for our clients.

Guillaume Vergnolle: I’m a senior data scientist in AlayaCare, so basically, I’m working on the implementation and how to create those AI models that can eventually help in some of the healthcare problems such as caregiver retention, or patient re-hospitalizations, or trying to extract important information from the clinical documentation among others.

Home Health Care News: Artificial intelligence has obviously been around for a while, but there’s been an explosion in terms of the popularity of it over the last few years. People are talking about it more, especially with ChatGPT and other language models. People are really wondering how it’s going to affect them, how it’s going to affect their business, how it’s going to affect their industry. I’m curious how you think it’s going to affect the whole health industry specifically.

Khalid: Yes, absolutely. Just to take it back and make this industry-specific: we know that caregiver retention is a major issue within our industry. Referencing Home Care Pulse's recent reports, turnover rates average 64% for caregivers and nurses in the field and roughly 40% for back-office staff, including coordinators and intake. This is a real problem that plagues our industry, of course.

Additionally, one in four caregivers is likely to leave within the first 30 days of being hired. What we've uncovered in some of the data we've been looking into is that this is often due to scheduling-related issues: not being scheduled for preferred hours or preferred care within the first few months of working. There's a lot of data and a lot of interesting insights that we can extract from our industry today.

When we think about how to combine some of that with AI and a lot of the generative models that are now being used and thought of, we can get really creative with some of the ways that we can combine those powerful technologies with our data. There’s a lot of creative different use cases. I think more and more, we’re going to start to see this being seamlessly integrated in the entire retention problem. I’m excited to talk more about how we also envision that.

Vergnolle: Yes. I guess we’ll be seeing AI being used more and more in many different industries, healthcare among others. It will also take a bit of time, because we know that AI models can be super good at very complex tasks, but sometimes can perform poorly on very simple tasks. We’ve seen the emergence of models like ChatGPT and others that are being better at tasks that they have never been trained on.

At the end of the day, models will only be as good as the data they're fed, so the data is really central to how you design your models. We've seen some very simple limitations. For instance, a model that can predict whether nurses are about to quit their jobs may perform really poorly when applied to PSWs, just because it wouldn't be looking at the right data points. We're ramping up. We're trying to find how to properly apply AI in this sector and all the steps that will lead us there.

HHCN: It sounds like a good place to start. It’s starting to collect data for businesses so they have that applicable data that can be fed to the AI model eventually.

Vergnolle: Definitely. I often describe data as clay. It's your base material for building a solution, which could be a vase, for instance, if I keep with the clay metaphor. One thing you need is the right expertise in order to come up with those solutions. When it comes to retention problems, make sure that you're actually collecting the right data to measure when your employees leave, and for what reasons. Capturing that data should really be the base when it comes to building solutions to figure this out.

Khalid: Just to add on to that, so now we’re starting to see that a lot of the ways that companies are storing their data, these databases are now being seen as knowledge bases that are eventually going to feed into the model. We can do a lot of interesting tasks on top of that knowledge base. Now, there’s the recent release of a lot of these Retrieval Augmented Generation models.

What we can do is ask a series of questions on top of that data and pull out interesting insights that we can combine with the retention practices we have today. Absolutely, to stress it again: the data is still the most important thing when we're using AI as a tool to solve these problems.
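As an illustration of the retrieval step behind the Retrieval Augmented Generation approach Khalid mentions, here is a deliberately minimal sketch in Python. The knowledge-base entries are hypothetical, and a simple bag-of-words cosine similarity stands in for the trained embedding model a real pipeline would use; the retrieved entries would then be passed to a language model as context.

```python
import math
import re
from collections import Counter


def vectorize(text):
    """Bag-of-words term counts over lowercase word tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


def retrieve(question, knowledge_base, k=2):
    """Return the k entries most similar to the question.

    A real RAG pipeline embeds text with a trained model and hands the
    retrieved entries to an LLM as context; this sketch stops at retrieval.
    """
    q = vectorize(question)
    ranked = sorted(knowledge_base, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]


# Hypothetical knowledge-base entries for illustration only.
kb = [
    "Caregiver A requested weekend shifts only and prefers pediatric visits.",
    "Caregiver B resigned citing unpredictable scheduling.",
    "Intake volume rose 12 percent in the last quarter.",
]
print(retrieve("Why did the caregiver resign over scheduling issues?", kb, k=1))
```

The point of the sketch is the shape of the workflow: stored operational records become a searchable knowledge base that questions are answered against, rather than raw tables no one queries.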

HHCN: What are some of those retention use cases that you’ve seen so far?

Vergnolle: What comes to mind first is identification: how we can identify which employees at your company are dissatisfied, or who is about to leave. There are very simple ways to go about that. Surveying should be the first solution that comes to mind. Then we can think about how to continuously measure that dissatisfaction over time, because surveys are just point-in-time; we can only run them every couple of months.

If we’re collecting the right data points, it can be like the number of hours that your employees have received. We can compare it against the number of hours in their contract or compare it to their availability. There’s a lot of data points that you can try to measure to make sure that your employees are having the service, like the volume of service they’re asking for.

HHCN: Yes, schedule volatility has been one of the biggest reasons owners and C-suite executives have told me that caregivers and home health aides are quitting. Particularly of late, even the best of the best workers have been burnt out because their schedules are so unpredictable from one week to the next, so they have no real assurance that their personal lives won't be thrown a wrench week by week. I'm curious, what challenges have you seen owners and C-suite executives face when they're trying to implement AI?

Khalid: The thing that comes up time and time again, as we touched on, is how the data is being collected, just to keep stressing that point. Particularly when we're tackling some of these retention problems, and just in the nature of this conference, Continuum, the data is stored in so many different places. We actually see a lot of owners storing their data maybe in Google Docs, maybe still in Excel, across a bunch of different sources. The challenge is not just collecting the data, but making sure the data being collected is of quality.

Actually, a lot of these large language models are quite forgiving of some data quality issues if they have enough context. They're forgiving of spelling errors and mistakes that traditionally have been really difficult to manage with more classical machine learning approaches. Still, we need to make sure all of that data is harmonized, and that when we're speaking to the different stakeholders collecting and entering the data, everyone agrees that this is the central source of truth. That matters when we're talking about, for example, feeding data into models and predicting whether someone is satisfied or at risk of churn.

If we’re collecting reasons such as termination reasons, a lot of the time these will be unstructured. They live in, again, a bunch of different places. We all need to come together and come up with more formal data contracts when we really start to work on these problems.

Vergnolle: Yes. I guess the two main challenges I've faced on my journey as a senior data scientist are, one, the data pre-processing. It's good to have the right data captured, but then you need to make the data speak. You need to curate your data into the right shape, and that takes most of your time. I was surprised, because at university, when you're doing your master's degree in artificial intelligence, you mostly work on models: how to properly train them, how to run the evaluation. Coming into the professional world, I was surprised how much time pre-processing takes just to get the data in great shape. That's one of them.

The second big challenge I've faced is definitely getting a model into production, because you need to validate it, you need a pipeline in place, but you also need monitoring just to make sure your model keeps producing good results, that your results make sense. So many things can evolve. For instance, COVID-19 had a huge impact on all the models that were in place, just because all of a sudden the data distribution changed and new data points were coming in. We need to adapt. We need to measure how our models are performing before being deployed, but also after. That's also super important.
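The monitoring Vergnolle describes can start very simply. The sketch below flags when incoming data has drifted far from the data a model was trained on; a production system would use a proper statistical test (population stability index, Kolmogorov-Smirnov), and the visit counts here are invented for illustration:

```python
import statistics


def drift_score(baseline, current):
    """Crude drift check: shift in mean, in units of baseline std dev.

    Even this catches the kind of sudden distribution change COVID-19
    caused, where a deployed model should be flagged for re-validation.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mu) / sigma if sigma else float("inf")


# Hypothetical weekly visit counts before and after a disruption.
before = [100, 102, 98, 101, 99]
after = [60, 55, 62, 58, 61]
score = drift_score(before, after)
print(f"drift score: {score:.1f}")
assert score > 3, "distribution shifted: re-validate the model"
```

The key design point is that the check runs after deployment, continuously, rather than only during the initial evaluation.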

HHCN: Yes. If you have a thesis of where AI could help in your business and you go to implement it, what’s that process like over time? How do you make sure that once you implement it, it’s actually going to be able to be drawn out widely across the organization, and ultimately work and save you on your bottom line or whatever?

Vergnolle: One thing that really worked for us is being really close to the end users who will actually use the system you're designing, because in the end, they will be the ones using it day-to-day and adding it to their workflows. Adoption is very important, and to get a good adoption rate, you need good explainability and also trust. You need to build trust in your model.

If you can derive good evaluation metrics, you can say, "My model works X percent of the time." If you have access to historical data, you can run your model on it and see how it would have performed had it been applied at the time. That can help build trust. Then trust also comes with time, of course. You can offer trial periods where users get used to the predictions and how to interact with your solutions. That can really help.
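Running a model over historical records to earn that "works X percent of the time" claim is a backtest. A minimal sketch, with an invented rule-of-thumb model and made-up records standing in for a real churn model and real history:

```python
def backtest(model, history):
    """Run a churn model over labeled historical records.

    Returns accuracy: the fraction of past cases the model would have
    called correctly had it been deployed at the time.
    """
    correct = sum(1 for features, left in history if model(features) == left)
    return correct / len(history)


# Hypothetical rule of thumb: predict churn when the caregiver received
# under 60% of contracted hours. Illustrative only, not a real model.
model = lambda f: f["hours_ratio"] < 0.6
history = [
    ({"hours_ratio": 0.45}, True),
    ({"hours_ratio": 0.90}, False),
    ({"hours_ratio": 0.55}, True),
    ({"hours_ratio": 0.70}, True),  # the rule misses this departure
]
print(f"historical accuracy: {backtest(model, history):.0%}")
```

A number like this, computed on an agency's own past data rather than a vendor benchmark, is usually what it takes for coordinators to start trusting predictions.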

For the explainability part, I'd say that most of the time AI is seen as a black box. Sometimes it is, to be honest. If we go back to the retention problem, it is not enough to say that an employee is at risk of leaving their job. You also need to be able to give the reasons why. That helps build the trust that eventually drives adoption. That's, I would say, the highway to making sure your product actually helps users in their workflows down the line.

Khalid: Yes, just to stress that: at the individual user level, it's very important to build that trust, and when we bubble it up to the agency level and the teams we're working with, it's even more important to build collective trust. Making sure that the workflows we're designing are not foreign to their current practices is really key. It's going to help a lot with adoption. That's typically how we co-innovate with our users, and I think it's probably the best method to make sure these systems are adopted well and that trust is built in.

Then I think just having these conversations, again, at more of a collective level, because this is a tool, this is a technology that isn’t going anywhere. It’s going to continue to adapt and change rapidly. I think just having these open forum conversations about what other people are doing with AI and how they’re building these systems, these are all ways that we can start to build collective trust with the tool being used and being adopted.

HHCN: Yes. In regards to trust, I've heard, for instance, that sometimes ChatGPT gives information to someone who shouldn't have it, or makes up answers. How do you grapple with those sorts of things that are still embedded in AI as we know it if you're implementing this into your business?

Khalid: Yes, it’s a real challenge. For us, I think, and for a lot of companies that are creating these AI systems, it’s still of utmost importance to stress that this is not meant to be a replacement tool, but it’s meant to be decision support. Throughout the entire process, when we’re training, when we’re building, when we’re thinking up how AI can be used, we are considering the process of the human being in the loop. Ensuring that we’re collecting the correct reasoning for how we’re able to deduct certain predictions, how we’re able to come up with certain responses, that’s of utmost importance.

I think that there’s a lot of fear generally when we talk about AI as it’s going to be replacing a lot of us humans and the tasks that we do day-to-day. The truth is, again, it’s meant to be more of a decision support tool. In addition to that, the industry that we’re in, the data is extremely sensitive. When we begin to experiment with a lot of these models, these large language models, even as simple as running an API call with, say, an openAI and playing with a ChatGPT. This is something that actually can result in data leaks, and breaches, and actually giving over your data to an external company.

I think we all need to be very informed when we're experimenting with these models about some of the risks, including personal health information being leaked. There are a number of different things to consider.

Vergnolle: I guess to add to that, it's even more important because we're in the healthcare sector. Down the line, the predictions you're making with those models could affect a patient's health, and eventually their life. Having the proper security layers is even more important in this sector. You need to adapt your strategies. For instance, most of the proprietary models out there, like the ChatGPTs and so on, have been trained on web data. They've been scraping Wikipedia pages and many others.

You need to make sure that you can actually apply those models to the application domain you're working in. That may also require you to build your own guardrails. A guardrail is something that OpenAI and all those model providers use to prevent the models from behaving in certain ways. You can give it a try yourself: if you ask ChatGPT how to hot-wire a car, it should say, "Hey, sorry, but I'm not allowed to answer that question." In our field, we also need to build our own guardrails. How do we prevent those models, those chatbots, from going into certain areas?
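A first-layer guardrail for a domain-specific chatbot can be as simple as a pattern check before the message ever reaches the model. The blocked topics and refusal text below are hypothetical, and real guardrails are usually a separate trained classifier or a provider-side feature; this only illustrates the shape:

```python
import re

# Hypothetical out-of-scope topics for a care-documentation assistant:
# medical advice and dosing questions get routed to a clinician.
BLOCKED_PATTERNS = [
    r"\bdiagnos(e|is)\b",
    r"\bdosage\b",
    r"\bprescri(be|ption)\b",
]


def guardrail(user_message):
    """Return a refusal string for out-of-scope requests, None otherwise.

    A keyword pass like this is only a first layer; anything it lets
    through would still go to the model with its own safeguards.
    """
    lowered = user_message.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return "I can't help with that. Please contact a clinician."
    return None


print(guardrail("What dosage should I give this patient?"))
print(guardrail("Summarize today's visit notes."))
```

The design choice mirrors what Vergnolle describes: the provider's general-purpose guardrails don't know what is out of scope for a home-care workflow, so the domain rules have to be built and owned by the agency.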

HHCN: If I'm a home health care executive considering AI and implementing it, how do I get started? Are certain organizations too small? What do you need in order to get started?

Vergnolle: I would say first, being data-driven is central. If you want to start any AI project, the good news is there are plenty of AI models out there that you can use directly. Though I would not start from which model to use. I would first encourage you to identify your problems and ask yourself which of them can be solved using AI.

I feel like AI is often seen as a solution, whereas it should be seen as a tool. You can look for yourself: there are articles about coffee machines augmented with AI, and it's questionable whether AI is properly used in that context. Whenever you're starting a project, ask yourself what the right tool to solve this problem would be. AI could be part of the answer, but also make sure to consider which tool would be optimal.

HHCN: Oh, for instance, it can tell when a worker might be about to leave. The onus is still on the owner to make sure that they don’t leave. It’s part of the process, it’s not the entire thing.

Vergnolle: That’s it. For instance, if you don’t even have your hands on the data on what the turnover metrics are, maybe this is where I would start before actually envisioning adding AI on top of that.

HHCN: In terms of when you guys are working with home-based care providers, what are the biggest pain points that you’ve seen so far? What are some of the questions they have most of the time? Are there many themes that have come up?

Vergnolle: It’s one of the areas of challenges that we had, is generalization among others, just because each care, each agency or each provider, may have different ways, different workflows, and how to make solutions applicable to different markets. That’s one of the challenges that we’ve been facing. You need to make sure that you’re, in a way, flexible to allow for different markets to use your solutions, but also accurate in each of those. That’s one of the challenges that we’ve been facing.

Khalid: I’d also say, just to stress on Guillaume’s point of generalization, when we’re thinking about the different stakeholders that are going to be interacting with the systems, we really need to ensure that they are considered and thought of day zero. When we’re thinking about those processes, oftentimes, maybe when an owner is excited to just dive right into AI as a solution, as Guillaume stressed, it’s a tool. It is not necessarily going to be the answer for some of these problems. Being able to have that thorough setting up of what is the objective involving the stakeholders, because ultimately, what we’re trying to predict in the end is some of their reasoning processes for how they come to the conclusion that, actually, this person is at risk of churn.

With a lot of these models and the frameworks that have been released, which are really interesting, you can begin to see how the model reasons at each and every step: how it's using the data, how it's interacting with a set of different APIs, and how it comes to the conclusion, "This is why I believe this nurse is at risk of churn."

We still need the stakeholders throughout that entire validation process to make sure we have that trust. So for owners, I would say: don't get too excited about how to start using LLMs right now; think about it from a more holistic perspective.

HHCN: I also imagine the owner bringing in a caregiver or a home health aide may be beneficial, because you have their perspective as you're building the model and you're understanding why they might churn.

Khalid: Completely, yes. Another use case that actually feeds into some of those churn problems: when caregivers and nurses are out in the field, as part of their day-to-day tasks they're doing a lot of documentation. They're doing a lot of note-taking. We can actually use another set of LLMs and machine learning models to pick up on different patterns and sets of behaviors in those notes.

Maybe we can pick up on sentiment when they are leaving those notes as well. That gives us a clue, too, into how they're feeling and how that feeds into retention. It's an indirect way to understand their overall sentiment. Direct interviews are always completely valuable as well, and something we need to consider when we're building these systems.
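As a toy illustration of scoring sentiment in visit notes, here is a tiny lexicon-based sketch. The word lists and example notes are invented; a production system would use a trained sentiment model rather than a word list, and would aggregate scores per caregiver over time:

```python
# Tiny illustrative lexicons; a real system would use a trained model.
NEGATIVE = {"exhausted", "overwhelmed", "frustrated", "rushed"}
POSITIVE = {"rewarding", "improved", "grateful", "stable"}


def note_sentiment(note):
    """Net count of positive minus negative words in a visit note."""
    tokens = [t.strip(".,!") for t in note.lower().split()]
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)


# Hypothetical caregiver visit notes.
notes = [
    "Client stable, visit was rewarding.",
    "Felt rushed and exhausted again today.",
]
print([note_sentiment(n) for n in notes])  # [2, -2]
```

A persistent downward trend in a caregiver's note sentiment, tracked alongside scheduling signals, is the kind of indirect retention indicator the discussion describes.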

HHCN: Is there anything you're excited about that could be applied in the future but may not be possible now, whether in the home health industry or in business at large?

Vergnolle: I’m personally very excited. We’ve seen those models coming up that are mostly around chats. We’re seeing more and more models coming out that use multi-modalities, so not only text, but also pictures, videos, sound even. Can you imagine having almost an assistant that could pull you for the whole day where you can monitor your patient’s health like vitals, for instance.

Even with pictures, you could look at a wound and say whether it has evolved, whether it's actually getting better. As a caregiver, for instance, imagine just having that device and saying out loud all the care documentation you want to capture. At the end of the day, it gives you a nice paragraph summarizing what you've done that day. I see a lot of applications that mix all those different modalities. In the future, they could make great products.

Khalid: I'm actually quite excited about a lot of the creativity we can evoke with these models. You're starting to see a lot of creative use cases that we can implement in our businesses. From more of an outside perspective, I personally love to write, and I like to engage large language models in more of a Socratic dialogue to dig deeper into different avenues of creativity. I think that's something really interesting that we could also implement in the future with some of these agents.

Right now, it’s an interesting time because there’s a lot of focus on the knowledge that is available on artificial intelligence. I do think in the future, businesses, society at large is going to start to adapt to more artificial wisdom, in a way. How can we ensure that the systems that we’re building out are in sound principles? We’re not just focused on this age of information and getting overwhelmed there.

I think, just due to where we are right now in the ML industry and the home care industry, we're going to start to think about these more perennial, larger existential questions, and build our systems in a way that touches on those principles very comprehensively.

To learn more about how AlayaCare can help your organization ensure operations are consistent across multiple locations with real-time information updates for key stakeholders, visit https://www.alayacare.com/.
