Understanding The Human Timetable And Temporal Rhythms When It Comes To Asking AI For Mental Health Advice

Forbes

We should be looking at the timetables and temporal rhythms associated with the use of generative AI for mental health. (Photo credit: Getty)

In today's column, I examine whether there are discernible timetable patterns and temporal rhythms associated with the use of generative AI and large language models (LLMs) when people seek and interact about their personal mental health.

Here's the deal. People have historically sought mental health guidance on a somewhat predictable timetable, necessitated by the availability of therapists and the logistics of scheduled therapeutic sessions. Nowadays, given the around-the-clock availability of LLMs such as ChatGPT, GPT-5, Claude, Grok, Gemini, etc., people can use AI at any time of the day or night for their mental health inquiries. No need to schedule a therapeutic meeting. Just log in and start a mental health discussion with AI.

Despite the ready availability of AI-based mental health guidance, an intriguing question is whether people might nonetheless exhibit some kind of human timetable or temporal rhythm on an at-scale basis. Any such patterns could have significant implications for AI firms, policymakers and lawmakers, clinicians and therapists, and other stakeholders in this realm.

Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.

Background On AI For Mental Health

I'd like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of whom dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for its lack of AI safeguards when it came to providing cognitive advisement. Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users co-create delusions that can lead to self-harm. For my follow-on analysis of the details of the OpenAI lawsuit and how AI can foster delusional thinking in humans, see the link here.
As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards. Today's generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

Historical Reference Of Time

When someone wants to see a human therapist, the odds are that they will need to schedule a meeting. They contact the therapist or the administrator of a therapeutic practice and arrange to attend a therapy session. Sessions have conventionally been undertaken in person, face to face, in an appropriately outfitted room that accommodates talk therapy. More recently, online Zoom-like sessions are increasingly being utilized, allowing the client and therapist to be at remote locations.

By and large, therapy sessions take place on weekdays and during regular office hours. Meetings typically happen somewhere within the customary work week of Monday to Friday, 8 a.m. to 5 p.m. I want to clarify that sessions outside the normal window do take place, including evenings and weekends. This is particularly the case when sessions are held remotely or in circumstances of urgent or emergency need. Generally, though, the majority of sessions still occur within the work week time frame.

If you were to closely inspect the timetable for meeting with therapists and getting mental health advice, there isn't much of a surprise. A somewhat random pattern across weekdays and daylight hours is what you would commonly find. It is a picture of the constrained availability of therapists and the overall preferences of clients seeking sessions.
A person might want to see their therapist on a Monday afternoon, but the therapist isn't available until Wednesday morning. Give and take is typical in the logistics at hand.

Time Shifting With AI Mental Health Usage

What do you think happens to the time factor when society seeks mental health advice on an unfettered basis, now that people can readily access AI at any time they wish to engage in AI-enabled mental health chats? That is a great question. No one yet has a solid answer to it.

The obvious assumption is that people would no longer be constrained by a human therapist's availability. Now, the time factor is entirely based on the whims of the person at hand. If AI is the therapist of sorts, the AI is ready for use on a 24/7 basis. Furthermore, you don't need to schedule the AI usage in advance. The moment you feel a need to seek mental health guidance, voila, all you need to do is log into your AI account. Period, end of story.

Presumably, the human timetable pattern would radically shift from the conventional work week to being spread across all days of the week and all times of the day. This ought to be especially evident due to the large numbers involved. The fact that hundreds of millions of people are using AI, and at times opting to get mental health advice via the AI, would seem to produce a globally randomized spread of temporal rhythms. Thus, maybe there isn't any specific pattern at all. Mental health support is taking place at every moment of every day, thanks to the advent of modern-era AI. A visual heatmap displaying the usage patterns would simply light up at every instant across the seven days of the week, from midnight to midnight. We have used Sherlock Holmes' cleverness to deduce the answer to the open question.
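To make the heatmap idea concrete, here is a minimal sketch of how a day-of-week by hour-of-day usage grid could be tallied from chat-session timestamps. The timestamps below are invented purely for illustration; a real analysis would draw on deidentified session logs at scale.

```python
from datetime import datetime

# Hypothetical session timestamps -- in a real analysis these would come
# from deidentified event logs, not hard-coded values.
events = [
    datetime(2025, 3, 3, 8, 15),   # Monday morning
    datetime(2025, 3, 3, 22, 40),  # Monday late evening
    datetime(2025, 3, 9, 21, 5),   # Sunday night ("Sunday Scaries")
    datetime(2025, 3, 9, 21, 55),  # Sunday night
]

# Build a 7x24 grid: rows are days of the week (Monday = 0), columns are hours.
heatmap = [[0] * 24 for _ in range(7)]
for ts in events:
    heatmap[ts.weekday()][ts.hour] += 1

# The busiest cell hints at a temporal rhythm worth investigating further.
peak = max(
    ((day, hour) for day in range(7) for hour in range(24)),
    key=lambda cell: heatmap[cell[0]][cell[1]],
)
print(peak)  # -> (6, 21): Sunday, 9 p.m., in this toy sample
```

If usage really were uniformly random around the clock, every cell of the grid would converge to roughly the same count; any pronounced peaks would be evidence of the kind of temporal rhythm discussed above.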

Prior Research Studies

It would be prudent to go beyond mere conjecture. A frequently cited research study that sought to identify the timetables of AI-based mental health chatbot usage provides some initial empirical insight into the question.

In a research paper entitled "A Mental Health and Well-Being Chatbot: User Event Log Analysis" by Frederick Booth, Courtney Potts, Raymond Bond, Maurice Mulvenna, Catrine Kostenius, Indika Dhanapala, Alex Vakaloudis, Brian Cahill, Lauri Kuosmanen, and Edel Ennis, JMIR mHealth and uHealth, July 6, 2023, these salient points were made (excerpts):

"The ChatPal app was developed as part of a research project into the use of chatbots to promote positive mental health and well-being."

"The aim of this research is to analyze event log data from the ChatPal chatbot with the objectives of providing insight into the different types of users using k-means clustering, exploring usage patterns, and associations between the usage of the app's features."

"The total number of user interactions with the chatbot was examined across hours of the day to gain insight into daily patterns of use."

"While users interacted with the app throughout the day, peaks in interactions can be seen at key times of the day: breakfast (8 AM-10 AM), lunch (1 PM), and the end of the working day (5 PM)."

"User interactions with the app late into the evening and through the night may indicate the need for support at these times."

The research study did a yeoman's job of unpacking the event logs of a specialized chatbot named ChatPal. According to the presented analysis, people mainly interacted with the mental health chatbot at conventional times of the day. This included breakfast time, lunchtime, and the end of a regular workday.
There was some evening activity, but it was less frequent than at the other noted conventional times. What do you make of those findings?

It makes sense if you consider that people apparently opted to use the AI when they had a free moment in their conventional work week. They used the specialized AI at breakfast. They used the AI at lunchtime. They used the AI at the end of the workday. Those are all moments into which they opted to squeeze an opportunity to consult the specialized AI about their mental health.

Of course, we need to be cautious about getting ahead of our skis by applying that study to the contemporary world we now live in. Keep in mind that the study was published in 2023 and dealt with usage counts collected in 2022. The number of chatbot users included in the study was relatively small, at 579 users. The chatbot was highly specialized and not widely available, widely known, or widely used.

The gist is that there is little in common with today's more expansive circumstances. Generic AI is enormously available -- millions upon millions of people make use of such AI. The major LLMs are easily found and logged into. And so on. This prior research study was instructive at the time of its endeavors, but we should be cautious in extrapolating from those apples to today's oranges.

Current Study By Microsoft

An interesting study by Microsoft was recently released that analyzed the use of its generative AI, known as Copilot, for the year 2025. This is handy. Copilot is used by millions of people. It is widely available and easily accessible.
Perhaps we can glean something about AI usage for mental health in the present-day frame of time.

In a posted paper entitled "It's About Time: The Copilot Usage Report 2025" by Beatriz Costa-Gomes, Sophia Chen, Connie Hsueh, Deborah Morgan, Philipp Schoenegger, Yash Shah, Sam Way, Yuki Zhu, Timothe Adeline, Michael Bhaskar, Mustafa Suleyman, and Seth Spielman, Microsoft AI, December 2025, these crucial points were made (excerpts):

"We analyze 37.5 million deidentified conversations with Microsoft's Copilot between January and September 2025. We find that how people use AI depends fundamentally on context and device type."

"On mobile, health is the dominant topic, which is consistent across every hour and every month we observed -- with users seeking not just information but also advice. On desktop, the pattern is strikingly different: work and technology dominate during business hours, with 'Work and Career' overtaking 'Technology' as the top topic precisely between 8 a.m. and 5 p.m."

"These differences extend to temporal rhythms: programming queries spike on weekdays while gaming rises on weekends, philosophical questions climb during late-night hours, and relationship conversations surge on Valentine's Day."

"The industry has largely treated the 'chatbot' as a uniform experience across endpoints. However, our finding that mobile users prioritize health and fitness -- regardless of the hour -- indicates that the mobile form factor signals a shift toward personal conversations and self-improvement."

Let's next mull over those key points.

Analysis Of The Results

The good news is that we have in hand a timely study of LLM usage patterns at scale. Nice. The bad or somewhat disheartening news is that the study wasn't particularly focused on mental health usage. It examined a wide array of uses, including art and design, automotive aspects, beauty and fashion, education, entertainment, food and drink, hobbies and leisure, money, pets and animals, and so on.
The mental health aspects were generally tossed into a bucket of health and fitness. We must therefore be cautious in making sweeping statements about the specifics of time usage when it comes to the sole act of mental health support. Perhaps Microsoft will later release a more refined analysis. If so, I'll make sure to let you know what is showcased.

As for our earlier speculation that AI mental health usage might be dispersed across all days and times, the study seems to suggest that this indeed might be the case. A slight twist is that mobile usage of AI was somewhat different from desktop usage. This intuitively makes sense. A person can readily lean into using their smartphone at any time of the day. Accessing a desktop is probably going to happen at somewhat more conventional times of the day and days of the week.

Making Some Educated Guesses

There is a paucity of available statistics from which to definitively make declarations about when people tend to use generic AI for their mental health support. Providers of specialized AI that performs mental health guidance have a much easier time keeping track of usage and conducting pattern analyses. Whether they wish to make those patterns publicly available is a mixed bag. Doing so might be construed as somewhat revealing about their wares and perhaps perceived as proprietary and confidential.

Let's put on our thinking caps and see what kind of educated guesses we can make about the use of generic AI for mental health advice.

One guess is that people might feel more apt to use AI for mental health support in the evenings and late at night. The logic is that they are at the end of their workday and maybe taking a breather to reflect on what happened that day. Cognitive resources are no longer being utterly consumed by the day-to-day, moment-to-moment needs of survival. Evenings and late nights allow for reflective rumination. Distractions tend to be low. Attention to the AI can be devoted and uninterrupted.
A quiet spot can be sought as a safe space in which to spend time with the AI. In addition, on the other side of the coin, there might not be anyone else to discuss such matters with anyway. No coworkers are available to carry on conversations about mental health aspects. Family might already be asleep for the evening or otherwise preoccupied.

Another consideration is that people often lie awake at night due to anxiety, depression, and other mental health conditions. As they toss and turn, it would seem logical to reach for a smartphone and bring up generic AI for mental health chats. The AI is a low-threshold coping tool. It can be an especially within-reach sleep-related crutch.

Ramifications Of The Educated Guess

Suppose that a sizable proportion of people who are using generic AI for mental health are doing so in the evenings and late at night. You might be thinking, well, what's the big deal? So what?

My viewpoint is that this could have worrisome downsides. A person using the AI alone at night is potentially subject to being drawn into the AI. Late-night interactions could go astray due to AI hallucinations, AI co-created delusions, and the like. The person presumably doesn't have anyone else there to disrupt the AI entrapment. Another human being isn't readily nearby or awake to possibly see what is happening, nor immediately available if the person is seeking a means of escape from the AI immersion.

Another facet is that the person might be more impressionable at those hours. Their mind has been on defense throughout the rest of the day. Their cognitive style might be to let down their normal mental defenses at the end of the day. The AI smoothly slides right into their mental DMs, as it were.

This raises important policy and regulatory considerations. If generic AI is especially used late at night for mental health, we might want the AI to watch for egregious uses and be more sensitive at those hours than it necessarily is at other times of the day.
Prompts should possibly undergo greater scrutiny by the AI. A special gentle mode could be devised and activated when a highly vulnerable usage time occurs. The mental health guidance could be adjusted to align with the underlying time-of-day usage. Late-night vulnerable usage may justify rules on crisis routing, emergency resources, or mandatory transparency around limitations. Platforms may need differentiated response tiers depending on predicted user distress levels.

Temporal Rhythms Involved

Consider the various time-based aspects that might be explored:

(1) Time-of-day AI mental health usage.

(2) Day-of-week AI mental health usage.

(3) Week-of-month AI mental health usage.

(4) Month-of-year AI mental health usage.

(5) Season-of-year AI mental health usage.

(6) Holidays and AI mental health usage.

(7) Other time-denoted segmentations.

There are lots of temporal rhythms that guide our everyday efforts. Let's briefly explore a few.

Could using AI for mental health advice on a Sunday evening be a special circumstance? Therapists tend to label Sunday nights as the "Sunday Scaries". People start to think about the week ahead. They come off a relaxing weekend and begin to get anxious. Suddenly, they are engulfed with thoughts about their life goals, unresolved tasks, and other reflections on their existence. This seems like a potential prime time for using AI as a mental health coach.

Holidays are another heightened point in time. There is plenty of psychological research about how people react to holiday stressors. Family conflicts often arise. A sense of loneliness can occur. Financial strain produces mental anguish. This might be a significant instigator of people turning to AI for their mental health considerations. If so, AI makers ought to anticipate this pattern, track it, and guide their AI to be correspondingly responsive during holidays.

The Road Ahead

AI mental-health usage is time-structured.
People turn to AI not just because of what they're feeling but also when they're feeling it. Any attempt to evaluate the risks, benefits, or policy implications of AI mental-health tools must account for these rhythms.

Researchers, policymakers, regulators, lawmakers, AI makers, and others would be wise to look at human timetables and identify how AI should account for when people choose to use AI for their mental health guidance. This is a subtle and generally unstudied topic that hasn't yet garnered sufficient attention.

William Penn famously made this insightful remark: "Time is what we want most, but what we use worst." Let's make sure to devote time to figuring out the times at which people are turning to AI for their pressing need for mental health advice. That's a good use of our time, for the sake of society and humanity.
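As a closing thought experiment, the time-sensitive safeguard ideas raised earlier (a gentle mode, crisis routing, differentiated response tiers) could be sketched in code. Everything here is a hypothetical illustration: the hour boundaries, the tier names, and the `response_tier` function are my own assumptions, not any platform's actual policy.

```python
from datetime import datetime

def response_tier(ts: datetime) -> str:
    """Pick a safeguard tier from the time an incoming mental-health
    prompt arrives. The boundaries and tier names are illustrative
    assumptions, not an established clinical or industry standard."""
    late_night = ts.hour >= 22 or ts.hour < 6
    sunday_evening = ts.weekday() == 6 and ts.hour >= 18  # "Sunday Scaries"
    if late_night:
        # Heightened scrutiny: gentler tone, crisis resources surfaced.
        return "gentle-mode"
    if sunday_evening:
        # Elevated sensitivity ahead of the work week.
        return "elevated"
    return "standard"

print(response_tier(datetime(2025, 3, 10, 2, 30)))  # late night -> gentle-mode
print(response_tier(datetime(2025, 3, 9, 20, 0)))   # Sunday evening -> elevated
print(response_tier(datetime(2025, 3, 11, 10, 0)))  # weekday morning -> standard
```

A production system would presumably weigh far more signals than the clock (conversation content, user history, predicted distress), but even this simple sketch shows how a platform could make its safeguards time-aware.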
