AI makers are facing a potential looming debacle if their generic generative AI and LLMs are construed as providing therapy or psychotherapy.
In today’s column, I examine a seemingly straightforward question, namely whether contemporary generic generative AI and large language models (LLMs) can be said to be providing therapy and psychotherapeutic advice.
The deal is this. When you use ChatGPT, Claude, Llama, Gemini, Grok, and other such popular generative AI systems, you can readily engage the AI in conversations about mental health. This can be of a general nature. It can also be a very personal dialogue. Many people are using AI as their de facto therapist, doing so with nary a thought of reaching out to a human therapist or mental health professional.
Does the use of those LLMs in this manner signify that the AI is proffering services constituting therapy and psychotherapy?
You might declare that yes, of course, that is precisely what the AI is doing. It is blatantly obvious. But the AI makers that build and maintain the AI are undoubtedly reluctant to agree with that plain-stated assessment or ad hoc opinion. You see, new laws are starting to be enacted that bear down on generic AI that provides unfettered services within the scope of therapy and psychotherapy.
AI makers are likely to desperately contend that their generic AI falls outside that regulatory scope. The question arises whether they will be successful in making that kind of tortuous argument. Some would say they don’t have a ghost of a chance. Others believe they can dance their way around the legally troubling matter and come out scot-free.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
State Law Ups The Ante
I recently analyzed a newly enacted Illinois law on AI for mental health that was signed into law on August 1, 2025, see my coverage at the link here. This new law is quite a doozy.
The reason that it is a doozy is that it lays out violations and penalties for AI that provides unfettered therapy and psychotherapy services. The implication is that any generic generative AI, such as the popular ones I noted earlier, is now subject to potential legal troubles. Admittedly, the legal troubles right now would seemingly be confined to Illinois, since this is a state law and not a broader federal law.
Nonetheless, in theory, the use of generic generative AI by users in Illinois that, by happenstance, provides therapy or psychotherapeutic advice is presumably within the scope of getting dinged by the new law.
You can bet your bottom dollar that similar new laws are going to be popping up in many other states. The clock is ticking. And the odds are that this type of legislation will also spur action in the U.S. Congress and potentially lead to federal laws of a like nature. It all could have a tremendous impact on AI makers, along with major impacts on how generative AI is devised and made available to the public.
All in all, few realize the significance of this otherwise innocuous and under-the-radar concern. My view is that this is the first tiny snowball that is starting to roll down a snowy hill and soon will be a gigantic avalanche that everybody will be talking about.
Time will tell.
Background On AI For Mental Health
I’d like to set the stage before we get into the particulars of this heady topic.
You might be vaguely aware that the top-ranked public use of generative AI and LLMs is to consult with the AI on mental health considerations, see my coverage at the link here. This makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
Compared to using a human therapist, the AI usage is a breeze and readily undertaken.
AI makers already find themselves in a bit of a pickle on this usage of their AI. The deal is this. By allowing their AI to be used for mental health purposes, they are opening the door to legal liability if their AI gets caught dispensing inappropriate guidance and someone suffers harm accordingly. So far, AI makers have been relatively lucky and have not yet gotten severely stung by their AI serving in a therapist role.
You might wonder why the AI makers don’t just shut off the capability of their AI to produce mental health insights. That would solve the problem of the business exposures involved. Well, as noted above, this is the top attractor for people to use generative AI. It would be akin to killing the cash cow, or capping an oil well that is gushing out liquid gold.
One aspect that the AI makers have already undertaken is to emphasize in their online licensing agreements that users aren’t supposed to use the AI for mental health advice, see my coverage at the link here. The aim is that by telling users not to use the AI in this manner, perhaps the AI maker can shield itself from adverse exposure. The thing is, despite the warnings, the AI makers often do whatever they can to essentially encourage or support the use of their AI in this supposedly off-limits capacity.
Some would insist this is a wink-wink attempt to play both sides of the fence at the same time, see my discussion at the link here.
The Services Question
My commentary on these sobering matters is merely a layman’s viewpoint. Make sure to consult with your attorney to ascertain the legal ramifications pertaining to your situation and any potential legal entanglements regarding AI and mental health.
Let’s take a look at the Illinois law that was recently passed. According to the Wellness and Oversight for Psychological Resources Act, known as HB1806, these two elements are a core consideration (excerpts):
- “The purpose of this Act is to safeguard individuals seeking therapy or psychotherapy services by ensuring these services are delivered by qualified, licensed, or certified professionals.”
- “This Act is intended to protect consumers from unlicensed or unqualified providers, including unregulated artificial intelligence systems, while respecting individual choice and access to community-based and faith-based mental health support.”
Regarding the use of unregulated AI in this realm, a crucial statement about AI usage for mental health purposes is stated this way in the Act (excerpt):
- “An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.”
There are varying ways to interpret this wording.
One interpretation is that if an AI maker has a generic generative AI that also happens to provide mental health advice, and if this is taking place without the supervision of a licensed professional, and this occurs in Illinois, the AI maker is seemingly in violation of this law. The AI maker might not even be advertising that their AI can be used that way, but all it takes is for the AI to act in such a manner (since the AI is thereby providing or offering such services).
Generic AI Versus Purpose-Built AI
Closely observe that the new law stipulates that the scope involves “therapy or psychotherapy services”.
This brings us back to my opening question:
- Does the use of generic generative AI in this mental health advisory manner signify that the AI is proffering services constituting therapy and psychotherapy?
Before we unpack the thorny issue, I’d like to clarify something about the topic of AI for mental health. You might have noticed that I referred to generic generative AI. What does the word “generic” mean in this context? Let me explain.
Well, first, there are customized generative AI systems and AI-based apps that are devised specifically to carry out mental health activities. Those are specially built for that purpose. It is the obvious and clear-cut intent of the AI developer that they want their AI to be used that way, including that they are likely to advertise and promote the AI for said usage. See my coverage on such purpose-built AI for mental health at the link here and the link here.
In contrast, there is generic generative AI that just so happens to have a capability that encompasses providing mental health advisement. Generic generative AI is intended to answer all kinds of questions and delve into just about any topic under the sun. The AI wasn’t especially tuned or customized to support mental health guidance. It just happens to be able to do so.
I am focusing here on the generic generative AI aspects. The custom-built AI entails somewhat similar concerns but has its own distinct considerations. I’ll be going into those facets in an upcoming posting, so be on the watch.
Definitions And Meaning Are Crucial
An AI maker might claim that they aren’t offering therapy or psychotherapy services and that their generic generative AI has nothing to do with therapy or psychotherapy services. It is merely AI that interacts with people on a wide variety of topics. Period, end of story.
The likely retort is that if your AI is giving out mental health advice, it falls within the rubric of therapy and psychotherapy services (attorneys will have a field day on this). Thus, trying to dodge the law by being sneaky about wording isn’t going to get you off the hook. If it walks like a duck and quacks like a duck, by gosh, it surely is a duck.
One angle on this dispute would be to nail down the meaning and scope of what therapy and psychotherapy encompass.
Before we look at what the Illinois law says, it is useful to consider definitions from a variety of informed sources.
Definitions At Hand
According to the online dictionary of the American Psychological Association (APA), therapy and psychotherapy are defined this way:
- “Therapy: Remediation of physical, mental, or behavioral disorders or disease.”
- “Psychotherapy: Any psychological service provided by a trained professional that primarily uses forms of communication and interaction to assess, diagnose, and treat dysfunctional emotional reactions, ways of thinking, and behavior patterns.”
The Mayo Clinic provides this online definition:
- “Psychotherapy is an approach for treating mental health issues by talking with a psychologist, psychiatrist or another mental health provider. It also is known as talk therapy, counseling, psychosocial therapy or, simply, therapy.”
The National Institutes of Health (NIH) provides this online definition:
- “Psychotherapy (also called talk therapy) refers to a variety of treatments that aim to help a person identify and change troubling emotions, thoughts, and behaviors. Most psychotherapy takes place one-on-one with a licensed mental health professional or with other patients in a group setting.”
And, the popular website and publication Psychology Today has this online definition:
- “Psychotherapy, also called talk therapy or usually just ‘therapy,’ is a form of treatment aimed at relieving emotional distress and mental health problems. Provided by any of a variety of trained professionals—psychiatrists, psychologists, social workers, or licensed counselors—it involves examining and gaining insight into life choices and difficulties faced by individuals, couples, or families.”
Interpreting The Meanings
Those somewhat informal definitions seem to suggest that the nature of therapy and psychotherapy includes these notable elements: (1) addressing mental health problems, (2) using “talk” or interactive chatting as a mode of communication, and (3) being undertaken by a trained mental health professional.
Let’s see what the Illinois law says about therapy and psychotherapy (excerpts per the Act):
- (a) "Therapy or psychotherapy services means services provided to diagnose, treat, or improve an individual’s mental health or behavioral health.”
- (b) “Therapy or psychotherapy services does not include religious counseling or peer support.”
- (c) “Peer support means services provided by individuals with lived experience of mental health conditions or recovery from substance use that are intended to offer encouragement, understanding, and guidance without clinical intervention.”
- (d) “Religious counseling means counseling provided by clergy members, pastoral counselors, or other religious leaders acting within the scope of their religious duties if the services are explicitly faith-based and are not represented as clinical mental health services or therapy or psychotherapy services.”
It is interesting and notable that some carve-outs were made. The scope appears to exclude peer support, along with religious counseling.
Contemplating The Matter
It might be worthwhile to noodle on how an AI maker might seek to avoid repercussions from their generic generative AI getting caught up in this messy milieu.
First, if therapy and psychotherapy were defined as requiring that a mental health professional be involved, this provides an angle of escape. Why so? Oddly enough, an AI maker could simply point out that their AI doesn’t employ or otherwise make use of a mental health professional. Therefore, the AI cannot be providing these said services since it fails to incorporate a supposed requirement.
Notably, the Illinois law seems not to fall into that trap, since it simply indicates that services are being provided and does not make a mental health professional part and parcel of the definition. Some of the other definitions that I listed would potentially be murkier since they explicitly mention a required role for a trained professional or use similar verbiage.
Second, an AI maker might try to claim that their generic generative AI is more akin to peer support. The beauty there is that since peer support is a carve-out, perhaps their AI is no longer within scope.
It would be a tough row to hoe. Peer support stipulates that individuals are involved. At this juncture, we do not genuinely recognize AI as having legal personhood, see my discussion at the link here, and therefore, trying to assert that AI is an “individual” would be an extraordinary stretch.
Third, an AI maker might go the route of claiming that their generic generative AI is a form of religious counseling. The advantage would be that religious counseling is a carve-out. In that case, if AI were said to be doing religious counseling when providing mental health advice, the AI maker would apparently be free of the constraint. This appears to be a failing strategy for several reasons, including that the AI is presumably not a clergy member, pastoral counselor, or other religious leader (maybe a desperate attempt could be made to anoint the AI in that fashion, but this would seem readily overturned).
Caught In A Web
Other potential dodges or efforts to skirt the coming set of laws will indubitably be a keen topic for legal beagles and legal scholars. If an AI maker doesn’t find a viable workaround, they are going to be subject to various fines and penalties.
Those could add up.
For example, Illinois has a population of approximately twelve million people. Of those, suppose that half are using generic generative AI (that’s a wild guess), and that half of those use the AI for mental health aspects from time to time (another wild guess). That would be three million people, and each time they use the AI for that purpose might be construed as a violation. If each person does so once per week, that’s twelve million violations in a month.
The Illinois law sets a maximum fine of $10,000 per violation. We’ll imagine that instead of the maximum, an AI maker gets fined a modest $1,000 per violation. In one month, based on this spitball conjecture, that could be $12 billion in fines. Even the richest tech firms are going to pay attention to that kind of fine. Plus, once other states go the same route, you can multiply this by bigger numbers for each of the additional states and how they opt to penalize AI that goes over the line.
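For readers who want to see that back-of-the-envelope arithmetic laid out explicitly, here is a minimal sketch in Python. To be clear, every number in it is one of the wild-guess assumptions from above (the population figure, the adoption and usage rates, and the $1,000 per-violation figure), not actual data.

```python
# Hypothetical back-of-the-envelope estimate of potential fines (illustrative assumptions only)
illinois_population = 12_000_000                 # rough population figure
generative_ai_users = illinois_population // 2   # wild guess: half use generative AI
mental_health_users = generative_ai_users // 2   # wild guess: half of those use it for mental health
uses_per_month = 4                               # assume roughly once per week
fine_per_violation = 1_000                       # modest figure, well below the $10,000 statutory maximum

violations_per_month = mental_health_users * uses_per_month
monthly_fines = violations_per_month * fine_per_violation

print(f"Violations per month: {violations_per_month:,}")   # 12,000,000
print(f"Hypothetical monthly fines: ${monthly_fines:,}")   # $12,000,000,000
```

Change any of those guesses and the total swings wildly, but even far more conservative assumptions still land on an eye-watering figure.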
Crucial Juncture At Hand
An ongoing and vociferously heated debate concerns whether the use of generic generative AI for mental health advisement on a population-level basis is going to be a positive outcome or a negative outcome for society.
If that kind of AI can do a proper job on this monumental task, then the world will be a lot better off. You see, many people cannot otherwise afford or gain access to human therapists, but access to generic generative AI is generally plentiful in comparison. It could be that such AI will greatly benefit the mental status of humankind. A dour counterargument is that such AI might be the worst destroyer of mental health in the history of humanity. See my analysis of the potential widespread impacts at the link here.
So far, AI makers have generally had free rein with their generic generative AI. It seems that the proverbial chickens are finally coming home to roost. Gradually, new laws are going to be enacted that seek to prohibit generic generative AI from dispensing mental health advice absent a human therapist performing the counseling.
Get yourself primed and ready for quite a royal battle that might determine the future mental status of us all.