As AI tools proliferate in post-acute care, more attention is being paid to risks related to data privacy and security. As your organization begins to evaluate these tools, it’s important to understand some basics about how they gather, use and store data, and how you can evaluate whether they meet regulatory requirements such as those included in HIPAA.
There are two main aspects to consider: how the organization handles data privacy and how data moves through the models themselves. Both involve a mix of regulatory and technology considerations, and there are also ethical and data bias questions that should be addressed as these tools are developed. I’ll outline some best practices for working with vendors who develop AI tools, to help ensure your patient data stays safe and secure.
Existing regulatory policies
In post-acute care, data is mainly regulated by HIPAA, which has thorough, time-tested rules about what constitutes protected health information, who has access to data, and when data can be shared. These regulations apply whether data is stored on-premises or in the cloud.
HIPAA defines more than a dozen data elements that are considered identifiable—that is, information that can be tied back to a specific person. Those pieces of data need a higher level of encryption to help ensure privacy. But through its Safe Harbor rule, HIPAA also identifies the specific elements that can be removed so the remaining data is safe to use for purposes such as medical research.
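As a rough illustration, de-identification amounts to stripping those Safe Harbor identifier fields from each record before the data is pooled for research or training. The field names and values below are hypothetical, not drawn from any real system or from MatrixCare’s data model.

```python
# A minimal sketch of Safe Harbor-style de-identification before data is pooled
# for research or model training. Field names and values are hypothetical.

# A few of the identifier categories named by HIPAA's Safe Harbor rule
# (illustrative only -- this is not the full list).
IDENTIFIER_FIELDS = {
    "name", "street_address", "phone", "email",
    "ssn", "medical_record_number", "date_of_birth",
}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {key: value for key, value in record.items() if key not in IDENTIFIER_FIELDS}

resident = {
    "name": "Jane Doe",
    "date_of_birth": "1941-06-02",
    "medical_record_number": "MRN-00123",
    "mobility_score": 2,
    "medication_count": 9,
    "fell_within_30_days": True,
}

print(de_identify(resident))
# {'mobility_score': 2, 'medication_count': 9, 'fell_within_30_days': True}
```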
This de-identified data is what should be used to train artificial intelligence models and algorithms. Once the body of data is large enough, the model trains on it, learning patterns it can use to complete a task. A good example that we’ve talked about before is predicting resident falls. When you have a large body of data about patients and their outcomes, including histories that contain falls, you can feed that information into an AI model and it will identify the patterns of events that tend to occur leading up to a fall.
By sifting through millions of records to find the combinations of different factors that can lead to a fall, AI builds what we commonly refer to as a “model.” And once it learns the pattern, it no longer stores the data it sifted through. This is important because there’s a common misconception that AI tools hold on to everyone’s data, putting it at risk for being compromised.
But that’s not how AI works. Once it learns from a de-identified set of information, it does not retain that information. Once the tool has been tested and validated, a clinical team can feed it new information about a specific patient. Then, using the patterns it has learned, the model calculates a score for that patient. That score is what provides value and insight to the end user.
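To make the train-then-score workflow concrete, here is a minimal sketch that uses a generic logistic regression as a stand-in for a fall-risk model. The features, numbers, and library choice (scikit-learn) are illustrative assumptions, not a description of any particular vendor’s product; the point is that the fitted model keeps only learned weights, not the historical records it trained on.

```python
# A minimal sketch of the train-then-score workflow described above.
# Features, data, and model choice are illustrative, not any vendor's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

# De-identified historical records: [mobility_score, medication_count, prior_falls]
X_train = np.array([
    [2, 9, 1],
    [4, 3, 0],
    [1, 7, 2],
    [5, 2, 0],
    [2, 8, 1],
    [4, 4, 0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = fall occurred, 0 = no fall

model = LogisticRegression()
model.fit(X_train, y_train)  # after fitting, the model holds weights, not records

# Later, a clinician enters one new resident's current information...
new_resident = np.array([[3, 6, 1]])
fall_risk = model.predict_proba(new_resident)[0, 1]
print(f"Estimated fall risk: {fall_risk:.0%}")  # the score surfaced to the care team
```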
In addition, most concerns about the auditability of AI-generated data are unfounded. AI tools typically run separately from your main EHR software, which means AI-generated scores or recommendations are not considered clinical documentation unless a caregiver validates them.
Potential data bias
The patterns that AI tools recognize raise other considerations, including the need for these tools to be non-discriminatory and safe. Large regulatory bodies, including the FDA and HHS, are doing due diligence to make sure AI is being applied to appropriate data so that anyone using AI tools can get a safe, non-discriminatory result. For now, the path to achieving that is to promote transparency. This means the AI system or algorithm has to explain how it got to its answer from the data it was given, so the process can be checked to ensure the result is safe to use.
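One simple form of that transparency, assuming a model like the fall-risk sketch above, is showing how much each input pushed the score up or down, so reviewers can confirm the result rests on clinically sensible factors. This is an illustrative check, not a regulatory standard or any vendor’s specific method.

```python
# A minimal transparency check on an illustrative fall-risk model:
# inspect which inputs push the predicted risk up or down.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["mobility_score", "medication_count", "prior_falls"]
X = np.array([[2, 9, 1], [4, 3, 0], [1, 7, 2], [5, 2, 0], [2, 8, 1], [4, 4, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Each coefficient shows the direction and strength of a factor's influence,
# giving reviewers a way to ask whether the pattern is clinically sensible
# or potentially discriminatory.
for name, coefficient in zip(feature_names, model.coef_[0]):
    print(f"{name:>18}: {coefficient:+.2f}")
```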
There’s another aspect of bias in data, which also touches on ethics. An AI model in itself can be non-discriminatory. But most of the data a model works with is captured in one way or another by a person. That can inherently introduce some degree of bias, because patterns that already exist in society are reflected in the data and can be reinforced by the model.
In the future, data bias will likely be reduced because there will be less human-entered data and more data added automatically to EHRs from wearable devices, video and voice recordings. But for now, the people who develop AI systems need to recognize that certain aspects of the data can produce discriminatory results.
What to consider when adopting AI tools
There’s still some hesitation in post-acute care about using AI technology, especially when it comes to data privacy and security. But the journey to safe adoption of these tools begins with awareness: how the technology works, what de-identified data is safe to share, how privacy is protected.
It’s important to remember these tools can help make providers more effective and improve quality of life for a lot of patients. Being aware of both the risks and benefits and asking the right questions when you’re considering new technology are places to start. Find out whether data has been sourced ethically and biases have been explored or disclosed. Understand the existing regulations for protecting data. Ask whether the vendor developing the tool is compliant with the FDA’s clinical decision support guidelines, as well as HIPAA guidelines for data storage and sharing.
Every organization has its own level of risk tolerance in terms of data sharing. But in general, regulatory bodies are doing a good job in mitigating those risks and holding healthcare organizations accountable. Keep up with new developments and ask the right questions to help your organization use these powerful tools most effectively.
Request a demo today for a closer look at MatrixCare.
Daniel Zhu comes from a diverse background of clinical experience and technology entrepreneurship. Having spent five years in clinical and clinical research roles with the Alberta health system and the University of Toronto health network, he has turned his medical expertise to architecting and building technology solutions that optimize healthcare practice. At various organizations, he has led engineering and product teams, and he founded his own natural language processing start-up in the clinical research space. Stepping away from the start-up world, Daniel spent time as a data consultant for large corporations such as Ford, Co-op, and RBI. Re-entering the health technology field, Daniel recently joined the ResMed and MatrixCare team to lead the productization of AI and machine learning capabilities.