CHAI partners open shop to validate algorithms with BeeKeeper AI

The Coalition for Health AI (CHAI) has certified its first partnership for AI model validation, after a 16-month effort to get a nationwide network of AI assurance labs up and running. 

The first assurance service provider will be run through BeeKeeper AI’s secure multi-party collaboration software with datasets from the New York City-based Icahn School of Medicine at Mount Sinai and the Atlanta-based Morehouse School of Medicine. 

While the assurance resource partnership will test how well a model performs on a nationally representative dataset, it won’t be able to provide local validation for health systems. 

The first available use case for BeeKeeper, Mount Sinai and Morehouse is for chronic heart failure (CHF). AI model developers who have built algorithms for CHF will be able to test the performance of their models on datasets curated by Mount Sinai and Morehouse. 

Both academic medical centers have a wide variety of patient data that prepares them to test AI algorithms. Morehouse has a patient dataset that is diverse along racial, ethnic and socioeconomic lines, and it has a particular interest in understanding how chronic heart failure is affected by socioeconomic conditions.

With its product EscrowAI, BeeKeeper will act as a secure intermediary, bringing together the proprietary model and the de-identified patient datasets to validate the model’s performance. A CHAI spokesperson said CHAI will not mandate that results be made public; the vendor can decide whether and when to publish results on CHAI’s registry.

No models are currently being tested under the new collaboration, and none are yet in the pipeline. “The trick is always to see … who’s got a model that is ready to be tested?” BeeKeeper CEO Michael Blum said in an interview. “It’s the startup world, right? A lot of people say they’ve got something and it’s ready to go, but it’s really earlier than that, or they’ve got something that they’re starting to sell, and this could create waves for them if they did another validation.”

CHAI touts that independent evaluation of AI models will accelerate time to market for model developers. 

The announcement of the first assurance service provider also gives the industry a first look at the pricing model for CHAI-certified AI validation. Blum could not provide exact figures because pricing will be tailored to each developer based on their model’s unique requirements.

However, he said the cost for the model developer will be based on what data it needs access to, how long it needs access to the data and how complex the model is. 

“If it’s a relatively straightforward model, looking at one single element, and they can complete a validation run within a week to a couple weeks, it’s not particularly expensive,” Blum said. “But if they’re looking at getting fairly complex data, they want, say, EHR data and imaging data and some genomic data and EHR notes, and they have an algorithm that needs GPU acceleration to do the imaging analysis, then you get complex projects.” 

BeeKeeper is already working to validate AI for use cases beyond chronic heart failure, including AI that handles administrative tasks such as revenue cycle management. Blum said he hopes to support validation for dozens of use cases within three to six months. 

“One of the beauties of this announcement is Morehouse and Mount Sinai have already curated this master data set,” Blum said. “Then the model developer will say, ‘I need these elements of it’ and they will curate from the master dataset that already exists, exactly the dataset that the algorithm developer needs … From our perspective, most of it is completely automated, so going through the process, and our platform is a SaaS platform that supports the collaboration between the model owner and the data steward, and they just collaborate right on the platform, and all goes well. They could be done in a week.”

CHAI expects to announce more certified assurance service providers in short order, its CEO Brian Anderson said in an interview. 

CHAI has moved away from using the term “quality assurance lab,” though in principle, the concepts are similar.

“The idea of having a term like ‘quality assurance lab’ didn’t seem to capture the kinds of resources that it would be offering to a development community,” Anderson said. “Naming conventions aren’t perfect, but this is our best attempt to emphasize that these labs or these resource providers are supporting both the developer community on model training as well as the validation or testing community on the other aspects of the AI model’s lifecycle.”

Moving forward, the AI assurance providers will likely be partnerships between private companies and health systems. 

“It seems to be the one that is going to be most inclusive of lower resource health systems, meaning, if the technology and the technical expertise and the infrastructure is all centralized, you can have a platform-like approach with commonly agreed upon APIs,” Anderson said.

He added, “We like that approach because I think it’s going to be more inclusive of rural health systems [and] big health systems.”

In fact, CHAI will likely leverage partnerships that already exist, like in the case of BeeKeeper, Mount Sinai and Morehouse. 

Blum said BeeKeeper has partnered with Mount Sinai for many years and recently announced a partnership with Morehouse, separate from the CHAI assurance resource certification. Mount Sinai has also been an investor in BeeKeeper AI since its Series A fundraising round, Blum said. 

CHAI’s assurance resource certification framework includes conflict of interest disclosures and agreements. Additionally, becoming certified requires platforms to have secure infrastructure, access to competent staff and technical capabilities, and regulatory-grade data.

Blum said it took a few months to undergo review by CHAI and become certified.

“We think it’s a super important piece of AI in the healthcare landscape that there are these independent evaluations that it’s very hard for a care delivery organization to do these by themselves, if not impossible to do it by themselves,” Blum said. “So having an independent group where the model developers can go is super important to the ecosystem.”

As the CHAI assurance service providers are conceived now, they will not be able to do any local validation for health systems. If a health system wants a prospective vendor’s model tested on its own data, it will have to find another way to validate locally. 

CHAI aims to have AI validation providers spread throughout the country, in partnership with roughly 30 health systems. In its certification pipeline, Anderson said there is geographic representation for the Upper Midwest, East Coast and Southeast. It’s currently missing representation in the Mountain West, Pacific Northwest and Southwest, Anderson said.
