The Coalition for Health AI has released its draft framework for responsible development and deployment of artificial intelligence in healthcare.
The framework – consisting of a standards guide and a series of checklists – was developed over more than two years, according to CHAI. The group says it addresses an urgent need for consensus standards and practical guidance to ensure that AI in healthcare benefits all populations, including underserved and under-represented communities.
It is now open for a 60-day public review and comment period.
WHY IT MATTERS
CHAI, which launched in December 2021, previously released a Blueprint for Trustworthy AI in April 2023 as a consensus-based effort among experts from leading academic medical centers, regional health systems, patient advocates, federal agencies and other healthcare and technology stakeholders.
CHAI said in its announcement Wednesday that a new guide combines principles from the Blueprint with guidance from federal agencies while the checklists provide actionable steps for applying assurance standards in day-to-day operational processes.
Functionally, the Assurance Standards Guide outlines industry-agreed standards for AI deployment in healthcare, while the Assurance Reporting Checklists could help organizations identify use cases, develop healthcare AI products and then deploy and monitor them.
The principles underlying the design of these documents align with the National Academy of Medicine’s AI Code of Conduct, the White House Blueprint for an AI Bill of Rights, several frameworks from the National Institute of Standards and Technology, as well as the Cybersecurity Framework from the Department of Health and Human Services Administration for Strategic Preparedness and Response, according to CHAI.
Dr. Brian Anderson, CHAI’s chief executive officer, highlighted the importance of the public review and comment period to help ensure effective, useful, safe, secure, fair and equitable AI.
“This step will demonstrate that a consensus-based approach across the health ecosystem can both support innovation in healthcare and build trust that AI can serve all of us,” he said in a statement.
The guide would provide a common language and understanding of the health AI life cycle, and explore best practices for designing, developing and deploying AI within healthcare workflows. The draft checklists, meanwhile, would assist independent review of health AI solutions throughout their life cycle to ensure they are effective, valid, secure and minimize bias.
The framework presents six use cases to demonstrate considerations and best practices:
Predictive EHR risk (pediatric asthma exacerbation)
Imaging diagnostic (mammography)
Generative AI (EHR query and extraction)
Claims-based outpatient (care management)
Clinical operations and administration (prior authorization with medical coding)
Genomics (precision oncology with genomic markers)
Public reporting of the results of applying the checklists would ensure transparency, CHAI noted.
The coalition’s editorial board reviewed the guide and checklists, which were presented in May at a public forum at Stanford University.
One CHAI participant, Ysabel Duron, founder and executive director of the Latina Cancer Institute, said in a statement that the collaboration and engagement of diverse and multi-sector patient voices are needed to provide “a safeguard against bias, discrimination and unintended harmful results.”
“AI could be a powerful tool in overcoming barriers and bridging the gap in healthcare access for Latino patients and medical professionals, but it also could do harm if we are not at the table,” she said in CHAI’s announcement.
THE LARGER TREND
After the House Energy and Commerce Health Subcommittee first raised the subject last month at a hearing on the U.S. Food and Drug Administration’s regulation of medical devices and other biologics, more lawmakers are now asking FDA and the Centers for Medicare & Medicaid Services questions about their use and oversight of healthcare AI.
The Hill reported Tuesday that more than 50 lawmakers in the House and Senate called for increased oversight of artificial intelligence in Medicare Advantage coverage decisions, while STAT reported it had obtained a letter from Republicans criticizing FDA’s partnership with CHAI.
Dr. Mariannette Jane Miller-Meeks, R-Iowa, asked the FDA at the May 22 hearing if it would outsource AI certification to CHAI, a group she said was not diverse and showed “clear signs of attempt at regulatory capture.”
“It does not pass the smell test,” she said.
Dr. Jeff Shuren, director of the Center for Devices and Radiological Health, told Miller-Meeks the CDRH engages with CHAI and other AI industry coalitions as a federal liaison, and does not engage the organization for application reviews.
“We’ve told CHAI, too, that they need to have more representation in the medtech side,” Shuren added.
ON THE RECORD
“Shared ways to quantify the usefulness of AI algorithms will help ensure we can realize the full potential of AI for patients and health systems,” Dr. Nigam Shah, a CHAI co-founder and board member and chief data scientist for Stanford Healthcare, said in a statement. “The Guide represents the collective consensus of our 2,500-strong CHAI community including patient advocates, clinicians and technologists.”
Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.
The HIMSS AI in Healthcare Forum is scheduled to take place September 5-6 in Boston.