Bradley Merrill Thompson, Strategic Advisor with EBG Advisors and Member of the Firm at Epstein Becker Green, was quoted in Axios, in “Growth of AI in Mental Health Raises Fears of Its Ability to Run Wild,” by Sabrina Moreno.
Following is an excerpt:
The rise of AI in mental health care has providers and researchers increasingly concerned over whether glitchy algorithms, privacy gaps and other perils could outweigh the technology's promise and lead to dangerous patient outcomes.
Why it matters: As the Pew Research Center recently found, there's widespread skepticism over whether using AI to diagnose and treat conditions will complicate a worsening mental health crisis.
- Mental health apps are also proliferating so quickly that regulators are hard-pressed to keep up.
- The American Psychiatric Association estimates there are more than 10,000 mental health apps circulating on app stores. Nearly all are unapproved.
What's happening: AI-enabled chatbots like Wysa and FDA-approved apps are helping ease a shortage of mental health and substance use counselors.
- The technology is being deployed to analyze patient conversations and sift through text messages to make recommendations based on what we tell doctors.
- It's also predicting opioid addiction risk, detecting mental health disorders like depression and could soon design drugs to treat opioid use disorder.
Driving the news: The fear is now concentrated around whether the technology is beginning to cross a line and make clinical decisions, and what the Food and Drug Administration is doing to prevent safety risks to patients.
- Koko, a mental health nonprofit, recently used ChatGPT as a mental health counselor for about 4,000 people who weren't aware the answers were generated by AI, sparking criticism from ethicists.
- Other people are turning to ChatGPT as a personal therapist despite warnings from the platform saying it's not intended to be used for treatment.
Catch up quick: The FDA has been updating app and software guidance to manufacturers every few years since 2013 and launched a digital health center in 2020 to help evaluate and monitor AI in health care.
- Early in the pandemic, the agency relaxed some premarket requirements for mobile apps that treat psychiatric conditions, to ease the burden on the rest of the health system.
- But its process for reviewing updates to digital health products is still slow, a top official acknowledged last fall.
- A September FDA report found the agency's current framework for regulating medical devices is not equipped to handle "the speed of change sometimes necessary to provide reasonable assurance of safety and effectiveness of rapidly evolving devices."
That's incentivized some digital health companies to skirt costly and time-consuming regulatory hurdles such as supplying clinical evidence — which can take years — to support the app's safety and efficacy for approval, said Bradley Thompson, a lawyer at Epstein Becker Green specializing in FDA enforcement and AI.
- And despite the guidance, "the FDA has really done almost nothing in the area of enforcement in this space," Thompson told Axios.
- "It's like the problem is so big, they don't even know how to get started on it and they don’t even know what they should be doing."
- That's left the task of determining whether a mental health app is safe and effective largely up to users and online reviews.
Related reading:
Bradley Merrill Thompson Quoted in Globe Live Media, in “Artificial Intelligence Applied to Mental Health: Lack of Control Can Have Serious Consequences,” by Melissa Galbraith.