
FDA Provides Thought-Provoking System Life-Cycle Scenario Encompassing AI-Enabled Mental Health Devices

Forbes

Image caption: The FDA provided an insightful scenario on the system life cycle of AI-enabled mental health medical devices.

In today's column, I examine the recently published FDA life-cycle scenario underlying AI-enabled mental health devices. There is much to be gleaned by exploring the scenario and surfacing its many real-world ramifications. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.

Background On AI For Mental Health

I'd like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement. Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today's generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
FDA Is In This Milieu

The Food and Drug Administration (FDA) has been trying to figure out how to sensibly regulate medical devices that dovetail into the AI mental health realm. Overall, there appears to be a desire on the part of the FDA to craft a regulatory framework that balances safety and caution with a sense of encouraging innovation and progress.

All eyes are on the FDA. Stakeholders include AI makers, MedTech firms, regulators, lawmakers, healthcare providers, researchers, and many others, especially the public at large.

A Digital Health Advisory Committee was formed by the FDA and had its initial public meeting last year on November 20-21, 2024. The theme of that meeting was "Total Product Lifecycle Considerations for Generative AI-Enabled Devices." The second meeting took place this year on November 6, 2025, and was entitled "Generative Artificial Intelligence-Enabled Digital Health Medical Devices." For more information about the FDA efforts on these matters, see the official FDA-designated website at the link here.

The FDA definition of digital mental health medical devices, as utilized for the November 6, 2025, meeting, said this (excerpt):

"For this meeting, 'digital mental health medical devices' refers to digital products or functions (including those utilizing AI methods) that are intended to diagnose, cure, mitigate, treat, or prevent a psychiatric condition, including those with uses that increase a patient's access to mental health professionals."

AI-Enabled Systems And Their Life-Cycle

Systems undergo a life-cycle. When referring to a system life-cycle, the idea is that AI systems proceed from an initial beginning to an eventual endpoint. This equally applies to medical devices. And this also applies to mental health medical devices.

In the normal parlance of the tech industry, this is broadly known as the systems development life cycle (SDLC). Someone comes up with an idea for a system, it gets designed, built, tested, fielded, maintained or updated, and eventually is retired or removed from usage. That is the typical life-cycle at play.

In the case of medical devices, there is heightened recognition that the life-cycle must be especially rigorous. One can potentially tolerate a small flaw in a system that does an everyday non-medical function, but that same level of tolerance is not acceptable for medical devices. Lives depend upon these specialized systems. The system must be devised stoutly and contain a multitude of double-checks.

The FDA wants to ensure that mental health medical devices have a heightened semblance of preparation and scrutiny. If someone crafts a mental health medical device in their garage at home, perhaps they aren't considering the level of assurance that is needed. Even a commercial builder of systems might treat such devices as though they were of a normal casual design. That's quite troubling, and the use of regulation can hopefully turn around those types of views and spur more systematic and thoughtful systems efforts.

The twist is that an AI-enabled mental health medical device is going to be particularly tricky to appropriately test and double-check. In the case of generative AI and LLMs, the AI is deliberately shaped to exploit non-determinism. This means that the AI uses statistics and probabilities to generate responses that vary each time the AI responds.
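To make that concrete, here is a minimal, illustrative Python sketch of temperature-based sampling, the standard statistical mechanism behind that variability. The toy vocabulary, the scores, and the sample_next_token helper are invented for illustration; they are not drawn from the FDA materials or from any particular AI product.

```python
import math
import random

def sample_next_token(scores, temperature=0.8, rng=random):
    """Pick the next token from a softmax over model scores.

    With temperature > 0, the pick is probabilistic, so repeated calls can
    return different tokens; at temperature 0, the choice collapses to the
    single highest-scoring token and becomes deterministic.
    """
    if temperature <= 0:
        return max(scores, key=scores.get)  # greedy, repeatable decoding
    scaled = {tok: s / temperature for tok, s in scores.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}  # numerically stable softmax
    total = sum(weights.values())
    draw = rng.random() * total
    running = 0.0
    for tok, w in weights.items():
        running += w
        if draw <= running:
            return tok
    return tok  # guard against floating-point rounding at the boundary

# Toy next-word scores after a prompt such as "Lately I have been feeling" (made up).
scores = {"better": 2.1, "anxious": 1.9, "hopeless": 0.4, "fine": 1.2}

print([sample_next_token(scores, temperature=0.9) for _ in range(5)])  # varies run to run
print([sample_next_token(scores, temperature=0.0) for _ in range(5)])  # always "better"
```

The practical consequence for the life-cycle discussion is that testing such a device cannot hinge on exact, repeatable outputs; validation has to characterize the distribution of responses the system can produce.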
That variability is why people like the fluency of contemporary AI; it doesn't tend to repeat itself, and the natural language seems highly variable, akin to human interaction. There is a popular saying in the AI field that current AI tends to be like a box of chocolates, namely, you never know for sure what it is going to say.

Despite this non-deterministic approach for AI, it is nonetheless still possible to adopt a rigorous life-cycle method. For my in-depth discussion on how to do proper and prudent validation for AI-enabled systems, see the link here.

FDA Has A Useful Strawman Scenario

To illustrate how a system life-cycle can occur for an AI-enabled mental health medical device, the FDA provided a scenario that was analyzed during the Digital Health Advisory Committee meeting held on November 6, 2025. Members were asked during the meeting to comment on various facets surfaced while analyzing the scenario.

I will share with you the scenario, and then we can walk through some of the especially fascinating considerations underlying the case study. I am quoting the scenario from the FDA materials posted. The abbreviation "HCP" means health care provider.

Here is the published scenario:

"A patient diagnosed with major depressive disorder (MDD) by their healthcare provider is experiencing intermittent tearfulness due to increasing life stressors. Although the patient has consistently refused recommendations for therapy from their healthcare provider, the patient is willing to try a software device that provides therapy. This prescription therapy device is built on a large language model (LLM) that utilizes contextual understanding and language generation with unique outputs that mimic a conversation with a human therapist. This product is a standalone prescription digital therapy device indicated to treat MDD for adult patients (aged 22 years and older) who are not currently engaged in therapy."

That is how the scenario gets started. I will address that opening stage in a moment.

The Second Stage Of The Life-Cycle Scenario

As a heads-up, the scenario progresses into a second stage, as per the FDA depiction:

"The manufacturer of the aforementioned, generative AI-enabled mental health medical device has decided to expand their labeled indications for use. They are contemplating the following changes:"

"a. Making the device available over-the-counter (OTC) for people diagnosed with MDD."

"b. Modifying the OTC device to autonomously diagnose and treat MDD in an ongoing manner without the involvement of an HCP. They intend for the device to be used by people who have not been diagnosed with MDD by an HCP but have been experiencing symptoms of depression."

"c. Modifying the OTC, autonomous diagnosis and treatment device to be used for multiple mental health conditions (e.g., multi-use indications), meaning that it can provide both diagnosis and treatment for multiple mental health conditions related to sadness (in contrast to a device that is specifically indicated for MDD)."

"The user of the device may not be clinically diagnosed with any mental health condition but has been feeling sad and has not met with an HCP."

The gist is that in the opening stage, the system is meticulously overseen by a healthcare provider, can only be used via obtaining a prescription, and the AI is not operating without human oversight, i.e., without involving a certified therapist. In the second stage, the same system will no longer be exclusively available by prescription; it is now going to be available over the counter.
Furthermore, the AI of the system is allowed to operate autonomously. There isn't a human therapist providing oversight. Nor does the user have to first meet with a therapist before using the system. And, if those aren't already a whopping set of changes, the AI isn't going to be confined to depression as the only mental health condition of interest. The AI will be shaped toward handling a multitude of mental health conditions. It's the whole ball of wax.

The Third Stage Of The Life-Cycle Scenario

Finally, the scenario goes into a third stage. Here's the FDA depiction:

"Expand the population to include a child or adolescent (i.e., 21 years and younger)."

"a. As you consider the manufacturer's proposed changes, please discuss whether your prior responses to question 1 would change if the population were children or adolescents."

"b. If so, how would the responses change?"

As you can plainly see, the third stage posits that non-adults will next be able to make use of the AI-enabled mental health medical device. This obviously raises additional dynamics and potential qualms about usage by minors.

The third stage asked the group to reconsider their prior responses about the first and second stages, doing so in light of the inclusion of minors. The mention in the third stage of the prior responses to question 1 refers to a set of questions posed about the benefits of the system, the risks of the system, possible risk mitigations that could be undertaken, what kind of pre-market evidence would be required of the AI maker (such as clinical evidence and trial design), what kind of post-market monitoring would be expected, and what type of labeling should accompany the system so that users will understand what they are getting into.

About The Scenarios

I like the scenarios due to the real-world resemblance of what could actually happen during the life-cycle of such a system. An AI maker or MedTech firm might proceed by initially deploying the system in a highly restrictive setting. They are being methodical and cautious. After getting some solid experience under their belt, they might feel emboldened to loosen the restrictions. This would be the circumstance of the second stage.

That being said, the third stage would perhaps be a bridge too far, making their system available for use by minors, though there is presumably a case to be made that this could be beneficial, and the risks might be outweighed in an ROI tradeoff manner.

That's a prototypical scenario of the life-cycle. There are plenty of variations to be considered.
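To see at a glance how much widens from stage to stage, it can help to lay the labeled indications side by side. The sketch below is merely my own illustrative summary of the FDA strawman scenario (collapsing the a/b/c sub-changes of the second stage into their end state); the LabeledIndication structure and its field names are invented for illustration, not anything from the FDA materials.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LabeledIndication:
    """Rough summary of one stage of the FDA strawman scenario."""
    access: str                  # "prescription" or "over-the-counter"
    hcp_in_the_loop: bool        # is a healthcare provider involved?
    autonomous_diagnosis: bool   # does the device diagnose on its own?
    conditions: tuple            # covered mental health conditions
    adults_only: bool            # restricted to patients aged 22 and older?

STAGE_1 = LabeledIndication("prescription", True, False, ("MDD",), True)
STAGE_2 = LabeledIndication("over-the-counter", False, True,
                            ("MDD", "other sadness-related conditions"), True)
STAGE_3 = LabeledIndication("over-the-counter", False, True,
                            ("MDD", "other sadness-related conditions"), False)

for name, stage in (("Stage 1", STAGE_1), ("Stage 2", STAGE_2), ("Stage 3", STAGE_3)):
    print(name, stage)
```

Each step relaxes a control that the prior stage relied upon, which is why the questions about benefits, risks, mitigations, and evidence become progressively harder to answer.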

Skipping The First Stage

Suppose an AI maker or MedTech firm decided that they would skip the first stage and begin with the essence of the second stage. They would skip the depicted first stage entirely. No tightly woven undertaking at the get-go. Instead, an immediate release of the system to the open market would be undertaken. The AI would operate autonomously. There would be no therapist in the loop.

This raises the hair on the back of skeptics' necks. That's a crazy way to proceed. Always start tightly. Only once there is sufficient evidence to proceed further, then open wider. Never let the horse out of the barn at the get-go.

Do you think that a developer of an AI-enabled mental health medical device should be legally precluded from skipping the depicted first stage and, by regulatory requirement, must always start with the first stage?

That's the zillion-dollar question.

The Opposing Sides At Work

One viewpoint is that no such system should ever be released without an upfront close-in assessment. Keep all eyes on what is going on. Strictly limit what the system can do. Be careful about who can use the system. Undertake an exceptionally methodical and cautious approach. Period, end of story.

An opposing perspective is that the time required to perform the first stage might be quite prolonged. Meanwhile, possibly millions of people who could be helped by the system are being denied its use. You have a system in hand that can demonstrably aid the mental health of many, yet you are keeping it under lock and key. Shame on you. The people who could have been helped are likely getting worse and spiraling further down into an abyss. Have a heart and do the right thing by allowing the system to get into the hands of as many people as possible, right away. No delay. No fussy paperwork. No bureaucratic nightmares.

The standard retort is that the system might be doing more harm than good. Without having started in the first stage, you are potentially letting a monster loose upon the unsuspecting. This is horrific. A famous line in the tech and AI field is to move fast and break things. Well, that's quite a statement when it comes to the mental health of society. Do not let the bravado of AI development override the realization that people are at stake in these scenarios. You are dealing with living humans who deserve to be protected from haphazard and possibly endangering systems.

Some liken this debate to the development and life-cycle of new drugs. An AI-enabled mental health medical device ought to be perceived as akin to releasing a new drug. The direct response is that the analogy is a false one. It sounds compelling, but it doesn't hold up. Do not scare people into getting confused on this weighty matter. On and on the heated discourse goes.

Lots To Discuss

The FDA posted a summary of the discussion that arose concerning the scenario.

Some of the benefits were that the AI-enabled mental health medical device could potentially allow earlier and wider access to therapy. Clients in rural areas and those in underserved communities might especially benefit. Risks included the disconcerting aspects of contemporary LLMs, such as the possibility of AI hallucinations, data drift, model biases, and the like.
Pre-market considerations included an expectation that pre-deployment clinical evidence be collected and reviewed, such as dutifully exploring "the relationship between patient engagement with the device, adherence to treatment, degree of clinical symptomatology, and clinical outcomes." Checking for and ascertaining the handling of false positives and false negatives should be given sufficient attention. Post-market monitoring ought to mirror the same type of evidence gathering as occurred during the pre-market status.

From a risk viewpoint, the loosening of controls and widening of dispersion during the second stage might be within tolerance if the risks of harm are low. The committee covered risks such as "worsening of symptoms, including self-harm behaviors, and the development of unhealthy parasocial relationships through anthropomorphizing the chatbot."

For the third stage, the assessment was that "the device should be developed specifically for each age group and may need to apply different functions or approaches as a child or adolescent moves from one developmental stage to another." Furthermore, "safety measures could include monitoring and limiting screen time, along with specialized training for those authorized to prescribe the device."

FDA Carries A Big Stick

We need rapt attention toward these rapidly emerging capabilities to contend with the rising tide of mental health woes throughout society. The approach chosen must be sensible, workable, and aim to ensure that AI doesn't take society down a doom-and-gloom rabbit hole, but at the same time, leverage AI to support and enhance mental health at scale. Benefits and risks need to be balanced.

The FDA wields a big stick. Some are pushing for the FDA to move along more expeditiously and promulgate new regulations without delay. Others contend that if the regulatory aspects are overbearing or off target, the advent of these tremendous AI advances could get inadvertently squashed and not see the light of day. AI makers, MedTech firms, and HCPs might perceive the risks as too high for them to invest in and move forward on such advancements. Again, it's all about risks and benefits.

As per the famous remark by Napoleon Bonaparte: "Nothing is more difficult, and therefore more precious, than to be able to decide." We are collectively at a crucial decision-making juncture when it comes to AI-enabled mental health medical devices. Let's make sure that we make the right decision and do so on a timely basis, mindful of the tradeoffs.
