Regulatory agencies across the globe had their hands full in 2020 and 2021 responding to the COVID-19 pandemic, but that is only one reason regulatory frameworks for artificial intelligence are lagging. Several regulatory proposals have been floated, and at least one hints at a need for regulatory harmonization, a requirement that seems certain to add yet more drag to a process that is already years behind the technology.
A number of regulatory and enforcement issues have cropped up over the past two years, including a more rigorous approach to mergers and acquisitions by the U.S. Federal Trade Commission. The question of Medicare coverage for FDA breakthrough devices also roiled the waters for device makers, although a legislative fix appears to be in the works. Another vital development is the nomination of Robert Califf to reassume the job of FDA commissioner, a nomination that enjoyed bipartisan support from the Senate committee of jurisdiction in a Dec. 14, 2021, hearing. …
Bradley Merrill Thompson, a regulatory attorney with Epstein Becker & Green P.C., told BioWorld that regulators face two barriers to keeping up with novel technologies. One is that regulators cannot anticipate everything, and the second is that regulators do not always have the capacity to deal with an impending technology until after it has arrived, even when they do see it coming.
However, Thompson said, “I think what you see going on in many instances is regulators not wanting to jump the gun before they understand the technology,” an approach he said is a very useful exercise of self-restraint. “Regulators worry that if they act precipitously, they will chill what might be a promising new development,” he said, adding that they may deliberately hold back “until the facts become clearer. We should applaud that.”
While these agencies may be stuck between the need to harmonize and the competitive economic ambitions of their parent governments, national cultures also vary with regard to the tension between innovation and risk. “As a result, while we tried to harmonize – particularly in such areas as inspections so that operationally we can achieve efficiencies – regulators also know full well that their constituents have different views from constituents in other countries,” Thompson remarked.
One of the key issues with AI and ML algorithms, particularly the latter, is that of change control. The FDA has said repeatedly that it is working on a draft guidance for this consideration, but has yet to produce one. Thompson said the absence of a guidance is a source of drag on applications but is not an insuperable barrier.
“FDA is working with individual companies to come up with individualized approaches to change control,” he said, adding that this approach allows the agency and industry to test out a few ideas about the concept. “It's actually not a bad thing that we experiment with different approaches driven by different applicants before FDA settles on a singular approach that it wants to embed in guidance. Embedding approaches in guidance too soon tends to freeze regulatory creativity,” he explained.
A final guidance for clinical decision support systems has been conspicuous by its absence, given the FDA’s repeated promises to produce such a document, although Thompson noted that a draft had emerged shortly before the onset of the COVID-19 pandemic. He said the FDA “struggles with guidance generally, because they are under user fee agreements that require them to move the freight, which means less staff time available to work on guidance.”
The guidance development process is no marvel of administrative ease either, Thompson indicated, stating, “I saw a slide once that had something like 30 or 40 different gates that a guidance document had to go through in its journey toward publication” in draft form. The complexities inherent to software do not make the task any easier for products such as clinical decision support, all of which makes the agency wary of publishing a guidance it is not confident is well grounded in the technology.
The FDA has never had enough employees to cover the vast range of technologies it must deal with, including the kind of talent that can handle applications involving AI and ML, and Thompson said that might not change anytime soon.
“I really don't think FDA will ever be in a position to buy the talent it needs,” he said, an all-too-familiar predicament for companies and government agencies that need chemists and engineers. “That said, I do think the current strategy is not working and they need to revisit it. For one thing, the government sometimes cannot seem to get out of its own way when it comes to hiring. That's a broader observation than just hiring for AI,” he noted.
Thompson said his regulatory wish list for 2022 includes a guidance on change control for AI and ML, and draft guidances on algorithm explainability and bias would be helpful as well. “I also think that it would be very helpful to go deeper and further into good machine learning practices,” he said, adding that a conversation about postmarket data collection and monitoring will have to begin at some point in the not-too-distant future. However, he predicted the software pre-cert program might not resurface in the near term.
Some of the entities involved in the pre-cert pilot may have believed that the program did not sufficiently insulate the developer from a reversion to traditional regulatory mechanisms, “while others are concerned that there isn't enough benefit. I think it sort of got bogged down as they started to think through the details,” he said.