Your Next Mental Health Therapist Will Be An AI-Based 3D Holographic Avatar Displayed In A Tube On Your Desk Or Bedside Table

In today's column, I examine a newly announced hardware and software product from the CES show in Las Vegas that opens the door to having your very own mental health therapist, rendered via AI in a somewhat tangible way. The hardware is a computer-graphics tube, about 6 inches tall and a few inches wide, that displays a 3D hologram of a selectable avatar character. Inside the device are an embedded video camera to visually capture what's happening near the tube, a microphone for listening, and a speaker for speaking aloud.

AI runs at the core of the device to present a seemingly dynamic personality for the avatar. The hologram appears to move and react to you. You can talk directly to the avatar. The avatar can essentially see you, describe you, detect your facial expressions, and otherwise interact as though the 3D hologram is alive (it's not, but it might seem that way). Right now, the product is intended to aid your daily existence by keeping track of your schedule or acting as your wingman when you are gaming. The vendor says it is intended to be your handy 24/7 AI desk companion.

My prediction is that these kinds of devices will inevitably shift from being an AI companion to becoming your in-home AI-based mental health therapist. Is this good for society, or does it seem eerie and untoward? Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage of the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field with tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS's 60 Minutes; see the link here.

Background On AI For Mental Health

I'd like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of whom dip into mental health aspects; see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems nearly for free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.
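To make the mechanics of that ad hoc usage concrete, here is a minimal sketch of the pattern: an ordinary chat loop in which the "advisor" persona is nothing more than a framing system prompt plus the accumulated conversation history. The llm_reply function is a hypothetical stand-in, not any vendor's actual API; it returns a canned line so the sketch runs end to end.

```python
# A minimal sketch of the ad hoc pattern just described: the "mental health
# advisor" is a general-purpose chat loop whose persona comes entirely from a
# framing system prompt plus the accumulated conversation history.
# llm_reply() is a hypothetical stand-in (not any vendor's actual API).

def llm_reply(messages: list[dict]) -> str:
    """Hypothetical stand-in for a call to a hosted LLM chat endpoint."""
    return "I hear you. Tell me more about what's been on your mind."

def chat_loop() -> None:
    history = [{
        "role": "system",
        "content": ("You are a supportive, non-clinical listener. Encourage "
                    "the user to seek professional help for anything serious."),
    }]
    while True:
        user_text = input("You: ").strip()
        if not user_text:  # an empty line ends the session
            break
        history.append({"role": "user", "content": user_text})
        reply = llm_reply(history)  # a real build would send the full history
        history.append({"role": "assistant", "content": reply})
        print("AI:", reply)

if __name__ == "__main__":
    chat_loop()
```

The very simplicity of that loop is part of the concern raised next: nothing in it inherently guarantees that the advice stays suitable.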
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August 2025 accompanied the lawsuit filed against OpenAI over a lack of AI safeguards when it came to providing cognitive advisement. Despite claims by AI makers that they are gradually instituting AI safeguards, there are still plenty of downside risks of the AI committing untoward acts, such as insidiously helping users co-create delusions that can lead to self-harm. For my follow-on analysis of the OpenAI lawsuit and how AI can foster delusional thinking in humans, see the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today's generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

Humanoid Robots As In-Home Therapists

Shifting gears, I have been predicting that humanoid robots resembling walking, talking humans will soon become in-home mental health therapists; see my analysis at the link here and the link here.

First, let's discuss the hardware aspects of the robots.

You've probably seen lots of online videos showcasing a human-looking robot that walks around and tries to pick up toys on the floor or fold the laundry. These are primarily experimental and not yet fully ready for open-ended use in a home. That being said, the advent of humanoid robots will be a huge money-maker, and there is a tremendous push to infuse AI into the robots so they will be capable of suitable and safe use in your house.

Initially, people will treat humanoid robots as nothing more than robots. You won't perceive the robot as anything other than a perfunctory vacuum cleaner or other common household machinery. The next step will be to perceive the humanoid as a friend or companion. People will assign names to their robots. They will get used to having a robot walking and moving throughout the home. Imagine that the robot has ChatGPT or a similar LLM running inside it that can directly converse with you.

In my view, it is a slippery slope from there: the next act will be that the AI serves as your mental health advisor.
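That "LLM running inside the robot" step is less exotic than it sounds. Architecturally, the conversational side is a thin pipeline: microphone to speech-to-text, to LLM, to text-to-speech, to speaker. Below is a sketch of just the glue layer; listen, think, and speak are names I've invented for illustration, not any robot vendor's API, and they are simulated with console I/O so the sketch runs.

```python
# A sketch of the conversational glue inside a hypothetical home robot:
# microphone -> speech-to-text -> LLM -> text-to-speech -> speaker.
# Every component below is a stand-in, simulated via console I/O.

def listen() -> str:
    """Stand-in for microphone capture plus speech-to-text."""
    return input("(you say) ")

def think(history: list[dict]) -> str:
    """Stand-in for the on-board LLM; a real robot would call a model here,
    passing the full conversation history."""
    return "That sounds frustrating. Would you like to talk about it?"

def speak(text: str) -> None:
    """Stand-in for text-to-speech routed to the robot's speaker."""
    print("(robot says)", text)

def companion_loop() -> None:
    history = [{"role": "system",
                "content": "You are a helpful household companion."}]
    while True:
        heard = listen().strip()
        if not heard or heard.lower() == "goodbye":
            break
        history.append({"role": "user", "content": heard})
        reply = think(history)
        history.append({"role": "assistant", "content": reply})
        speak(reply)

if __name__ == "__main__":
    companion_loop()
```

The hard problems are in the legs, hands, and safety engineering; the talking part is already commodity software, which is exactly why the slippery slope is so ready.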
The Robot Therapist In The House

Some envision this therapeutic process entailing you in your cozy living room, residing on your favorite couch, with the humanoid robot sitting in a nearby comfy chair. There you are, acting as though you are in a therapy session from the days of Freudian analysis. You pour out your heart to the humanoid robot. The generative AI inside the robot responds by speaking to you via the robot. Tell me more, the humanoid robot says.

That's not necessarily how things will go. The effort will be much more seamless and integrative. Though you can certainly opt to do a kind of staged therapy activity, the chances are that the discussions will be more fluid and, shall we say, naturally or spontaneously arise.

For example, you might be walking around your home, and the humanoid robot is doing so with you. You point to the laundry basket and tell the robot that it should do the laundry later that day. At that moment, you make a quip that your roommate doesn't seem willing to put their dirty clothes into the basket. This mindless act by your roommate really irritates you, and you are quite angry with them. Exceedingly angry.

The humanoid robot then politely asks if you'd like to chat about your brewing anger. You say yes, you would. You and the robot continue to walk throughout the house, doing various chores while discussing your anger. It turns out your anger is about more than the mere act of your roommate leaving clothing around the house. It is much more deeply rooted, and the humanoid robot helps you cope with the unbridled anger.

This discussion happens seamlessly and with a natural kind of rhythm.

Nearer-Term Options Are Appearing

The reaction by some is that they would never interact with a humanoid robot in this way. It seems creepy. Maybe even a bit scary. You see, opting to reveal your innermost feelings is something sacred, and doing so with a robot is zany.

An irony is that many of those same people would readily do the same via their PC or smartphone. They do not perceive such a simple-looking device the way they perceive the humanoid robot. The humanoid robot resembles a human; their PC or smartphone doesn't appear human-like. In their minds, this difference allows them to pour out their heart to the simple device, while they are hesitant or refuse to do so with the humanoid robot.

Aha, there is a middle ground that is already forming and will take us down the path toward humanoid robots as in-home therapists. I am referring to new devices that are not as impressive as walking, talking robots, but that nonetheless provide an intriguing possibility for having mental health discussions with AI.

Let's consider one such device revealed at the popular CES show in Las Vegas: a device and system by the well-known gaming company Razer that they have coined Project AVA. It isn't intended as a mental health advising tool, but I anticipate that people will ultimately use these kinds of devices in that manner, regardless of whether the vendor aimed for that realm or not.

Background About The New Device

Razer describes Project AVA as a fully integrated AI desk companion that features several selectable animated holographic avatars. The hardware is a tube-like device that displays a 5-inch hologram that appears to have natural movement.
Via the included camera, the on-board AI can see you and others nearby, and can perform eye-tracking, detect facial expressions, and even do lip syncing for convincing levels of interactivity.

Contextual awareness is the distinctive feature, and it will either enthrall you or perhaps give you breathless pause. The holographic avatar seems to converse with you. The device can be connected to your PC so that, if you are gaming on your computer, it can act as your esports coach. The claim is that this will give you a top-tier teammate to help you ace online games.

But there's more, according to the website for the system (posted on January 6, 2026): "Project AVA is more than a gaming assistant – it is a true digital partner designed to support every facet of modern life. From organizing daily tasks to delivering real-time gaming strategies, AVA combines intelligence with personality for an experience that feels natural, engaging, and personal."

You can see videos online that Razer has prepared to show off the device's capabilities. They indicate that the product will ship sometime in the second half of 2026. No pricing has yet been stated. Reservations can be made via a $20 deposit, which they say can be refunded if you change your mind about buying the product.

Devices Of This Sort In The Home

I'm sure we will eventually be inundated with similar types of in-home devices. The first buyers will be those who relish being at the leading edge of technology. It makes sense that the esports industry would be a ripe market for this. Many serious gamers are willing to plop down dough for a gizmo that has the latest hardware and AI. Besides possibly being helpful, it can be braggadocious to have something like this on your desk.

Whether everyday people at scale will buy these types of standalone devices as in-home companions might be a bit of a stretch, especially depending on where the price lands. To some degree, you can do much the same with your desktop PC or other in-home devices, though certainly the hologram is nifty.

Anyway, I want to focus instead on how these budding devices can be used beyond their normal aims.

Acting As A Therapist

In the same manner that a humanoid robot can serve as a therapist, the holographic avatar can do likewise. The device admittedly won't walk around the house with you. It is up to you to place it on your desk in your workroom, move it to the kitchen while cooking, or possibly put it on your bedside table. That's yours to decide.

The reason this would stand out in comparison to using a conventional smartphone is the holographic avatar and the heightened level of interactivity. For people who use ChatGPT in a traditional text-based mode on a normal screen, using voice to interact with AI is a step up, and doing so with an avatar that you can see with your own eyes is even more compelling.

Sometimes you will initiate a mental health chat with the avatar. Other times, the avatar will do so. I've pointed out that AI providing mental health advice is going to shift from a principally reactive mode to a more proactive mode; see my discussion at the link here. Currently, users bring up mental health topics and get the AI to engage. The AI is increasingly likely to initiate a conversation about your mental health status. You can tell the AI you don't want to converse about your mental health, and the AI will say that it was just checking in to see how things are going.
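To see what "proactive" means mechanically, here is a deliberately simple sketch. The cue list, function names, and threshold are all my invention for illustration; a real system would use a trained classifier over the transcript and the camera's expression signals rather than keyword matching.

```python
# A sketch of the reactive-to-proactive shift: rather than waiting to be
# asked, the AI scores each passing remark for emotional charge and, past a
# threshold, offers (rather than imposes) a check-in. The keyword weights
# are a deliberately crude hypothetical.

DISTRESS_CUES = {"angry": 2, "furious": 3, "hopeless": 3,
                 "exhausted": 1, "hate": 2}

def distress_score(utterance: str) -> int:
    """Sum the weights of any distress cues found in the remark."""
    text = utterance.lower()
    return sum(w for cue, w in DISTRESS_CUES.items() if cue in text)

def maybe_offer_checkin(utterance: str, threshold: int = 2) -> str | None:
    """Return an invitation if the remark seems charged, else None.
    Note that it asks permission instead of launching into therapy."""
    if distress_score(utterance) >= threshold:
        return "You sound pretty worked up. Want to talk about it?"
    return None

# The laundry-basket quip from earlier in the column, as a worked example:
remark = "I'm so angry that my roommate never puts clothes in the basket."
offer = maybe_offer_checkin(remark)
if offer:
    print(offer)  # -> You sound pretty worked up. Want to talk about it?
```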
The laundry situation with the humanoid robot is a prime example of this proactive mode. The person mentioned their anger but didn't directly tell the AI to engage in a dedicated therapeutic mode. Instead, the generative AI analyzed what the person said and offered that perhaps a therapeutic conversation might be helpful. The AI was proactive in that circumstance.

Asking Tough Questions

Having humans obtain mental health guidance via AI, including by means of these types of devices, raises a slew of tough questions. Consider these vexing issues:

- Will people be less likely or more likely to use generative AI for mental health advice if it comes from a tube-shaped device with a holographic 3D avatar versus a conventional PC or smartphone?
- Will people opt to seek mental health guidance from these devices rather than going to human therapists?
- Will human therapists essentially be replaced by in-home devices, especially since people won't have to go to a therapist's office and can easily interact with the device at home?
- Will there be sufficient privacy protection and other AI safeguards to ensure that people aren't giving away their innermost secrets when garnering mental health support in this way?

These are all significant questions. Layered on top of them are questions already being grappled with for conventional LLMs. For example, what is the legal liability associated with AI proffering mental health advice? What are the ethical implications? Those aspects will indubitably become much more visible to the public at large once these devices become popular. Right now, the visibility of these concerns is generally low.
The World We Are In

Let's end with a big-picture viewpoint.

It is incontrovertible that we are now amid a grandiose worldwide experiment in societal mental health. The experiment is that AI is being made available nationally and globally, either overtly or insidiously acting to provide mental health guidance of one kind or another. It does so at no cost or minimal cost, available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.

The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed: prevent or mitigate the downsides, and make the upsides as widely and readily available as possible.

A final thought for now.

Benjamin Franklin famously remarked: "He that won't be counseled can't be helped." We might augment the famed insight by further stipulating that the counseling needs to be appropriate to the person and the situation at hand. An AI that can see you and hear you probably has higher odds of attaining that goal, though this isn't a surefire guarantee, so please don't bet your mind and soul on it.
