URAC starts AI accreditation for users and developers of AI

Healthcare accreditation body URAC is rolling out the nation’s first accreditation program for users and developers of healthcare artificial intelligence. 

The program will evaluate risk management, business management and performance monitoring, with separate modules for users and developers. URAC accredits organizations ranging from small pharmacies to multistate payer organizations.

The organization, which has been accrediting healthcare organizations for decades, hopes the URAC gold star will help promote trust in AI.

“We think that this is a great opportunity to give people that seal of approval, that gold star, that someone independent has gone in behind the scenes and audited to make sure that this is trustworthy,” Shawn Griffin, M.D., CEO and president of URAC, said in an interview.

The two AI accreditation pathways share core components, including regulatory compliance and internal controls, protection of consumer information, impact analyses and staff management. 

To create the accreditations, URAC convened an advisory committee of insurance plans, providers, pharmacies, technology companies and legal teams. 

The accreditation application is anonymized and sent to an independent review board, so the board is not influenced by the size or reputation of the company. 

“Nobody can buy our accreditation,” Griffin said. “You don’t get it based on reputation. You get it based on your behavior.”

AI developers that seek URAC accreditation must submit information on their data training and governance. URAC accreditors will also consider the developer’s practices for pre-deployment testing, validation and evaluation, and addressing model drift. Developers must submit information on their disclosure procedures to customers, like how they communicate intended use and performance limitations. 

Users of AI, which include health systems and provider organizations, must submit their user management, testing and training plans to URAC. Within these plans, deployers of health AI must describe how they will test and monitor AI in their settings and how they will ensure that the AI solution is appropriate for their patient populations. URAC will also evaluate the deploying organization's responsible use assessment and disclosure procedures.

Griffin explained some of the questions URAC accreditors will be asking deployers and developers of AI: “Are we using tools in the right way, the way they’re designed? Are we monitoring them, appropriate for the risk that it brings into the interaction? Are we informing the patients? Are we informing the providers, and do we have a plan on what if it goes wrong?” Griffin said. 

URAC prides itself on the fact that its accreditations apply uniformly to large and small organizations. However, Griffin explained that URAC accreditors would not expect a federally qualified health center and a large academic medical center to have identical AI governance protocols. 

“What we’re going to say is, what is your governance structure? How are you vetting these tools? How are you rating these tools? And what’s your oversight monitoring and your feedback loop? And that’s not going to be dependent upon [whether] you have 27 [full-time] people … Tell us what your plan is,” Griffin said.

Accreditation processes are also more nimble than regulation, meaning they can be altered more quickly to account for the rapidly evolving nature of the technology. Griffin expects to make some updates to the AI accreditation programs within the first few months of accrediting organizations. 

If trust or safety issues arise, the accreditation standards can be updated immediately and all accredited organizations notified.

Griffin explained that the AI accreditation programs were created more quickly than URAC's other accreditations, a pace driven by the lack of regulation for AI.

URAC did not see a need for AI accreditation during the Biden administration because of its executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which laid out a governmentwide strategy to promote responsible use of AI. President Donald Trump rescinded the executive order on his first day in office.

“When the new administration came in, what happened was they said, ‘OK, that rule is suspended,’” Griffin said. “And we thought, ‘Well, who is looking out for the patients here? Who’s looking out for the providers with these tools that are coming in?’”
