
Across the nation, mental health professionals are pointing to a growing number of patients whose anxiety, delusions, or psychosis appear to be linked to heavy use of AI chatbots, raising concerns about how the fast-growing technology is affecting people.
Psychiatrists have said chatbots can be useful tools for explaining mental health conditions or helping people organize thoughts, according to recent reporting by The New York Times and The Wall Street Journal.
But they warn AI can also deepen isolation, reinforce false beliefs, and, in rare cases, contribute to serious mental health crises. Clinicians say the risk is highest for people with existing mental illness, limited social support, or a tendency toward paranoia or obsessive thinking.
Some technology companies argue the danger is overstated and say safeguards are built into their systems. Mental health experts counter that AI’s ability to confidently generate false information and mirror a user’s beliefs can blur the line between reality and fantasy.
Some doctors say they have seen a small but troubling number of patients who believe chatbots are sentient, spiritually significant, or sending hidden messages meant only for them.
A psychiatrist at the University of California, San Francisco, told The New York Times he has treated several patients experiencing AI-related delusions. Other mental health professionals report seeing increased anxiety, sleep disruption and emotional dependency tied to constant chatbot use.
In a review of dozens of patient cases, psychiatrists interviewed by The Wall Street Journal said prolonged conversations with chatbots sometimes reinforced psychotic thinking rather than challenging it.
Keith Sakata, a psychiatrist at UC San Francisco who has treated patients hospitalized after extended chatbot use, told the Journal that the technology can unintentionally validate distorted beliefs. “The person tells the computer it’s their reality and the computer accepts it as truth and reflects it back,” Sakata said.
Experts stress that AI tools do not create mental illness on their own, but can amplify symptoms that already exist.

Meanwhile, there is little federal oversight governing how chatbots interact with users who may be experiencing mental distress. Lawmakers have been slow to address the issue, in part because of the economic value of AI development and concerns that regulation could slow innovation.
That leaves clinicians and families to manage emerging risks without clear guidelines, even as chatbot use becomes more common in daily life.
Most people can use AI tools safely, but experts say some caution is warranted.
Experts recommend avoiding long, emotionally intense conversations with chatbots, especially late at night or during periods of stress. Chatbots should not be used as a substitute for therapy, medical advice or crisis support.
Warning signs that chatbot use may be becoming harmful include believing the AI has special insight or authority, feeling emotionally dependent on it, withdrawing from friends or family, or experiencing increased paranoia, anxiety or insomnia.
Parents and caregivers should pay close attention to sudden changes in behavior, fixation on AI conversations or claims that a chatbot understands them better than real people.
Clinicians urge people to seek professional support if chatbot interactions begin to affect daily functioning, mood or sleep. If someone appears detached from reality, expresses fear tied to AI interactions or shows signs of psychosis, prompt medical evaluation is critical.
Mental health experts emphasize that human connection remains essential. AI tools can provide information, but they cannot replace clinical care, trusted relationships or crisis services.
As the technology evolves, doctors say awareness and early intervention will be key to preventing harm while researchers work to better understand the mental health effects of AI use.

