top of page

Mental Health in the Age of Algorithms: What ChatGPT Can and Can’t Do for Gen Z

In an era where mental health awareness has gained critical importance, Generation Z—the digital natives born between the mid-1990s and early 2010s—has turned to an unconventional source for emotional support: artificial intelligence. Chatbots like ChatGPT are increasingly being embraced as alternatives to traditional therapy. Praised for their affordability, 24/7 availability, and non-judgmental responses, AI companions are redefining how younger generations manage mental well-being.

But as this trend accelerates, experts from across the psychological and AI ethics domains are raising red flags. While some users proclaim transformative results, licensed professionals argue that these technologies are no substitute for the nuanced care provided by trained therapists.

This article explores the growing reliance on AI therapy among Gen Z, its perceived advantages, and the risks that experts insist cannot be ignored. We’ll also assess what this shift means for the future of mental health care and the ethical frameworks that must evolve in parallel.

Why Gen Z Is Turning to AI for Emotional Support
The surge in mental health challenges among Gen Z is well-documented. From academic pressure to climate anxiety and economic instability, this generation faces complex psychological burdens. Simultaneously, access to licensed mental health professionals remains difficult due to high costs, long wait times, and stigma in some cultural contexts.

Against this backdrop, AI chatbots offer a compelling alternative. Platforms like ChatGPT provide:

Instant availability: Users can receive immediate responses, including late at night when human therapists aren’t available.

Lower costs: With subscription models offering unlimited access to AI for as little as $20–$200 per month, chatbot therapy is significantly cheaper than conventional therapy sessions that can exceed $200 per hour.

Anonymity and safety: For those anxious about disclosing sensitive topics face-to-face, chatting with an AI offers a buffer of emotional safety.

“I feel seen. I feel supported. And I’ve made more progress in a few weeks than I did in literal years of traditional treatment,” wrote one user in a viral Reddit post, highlighting how AI “therapy” helped them overcome long-standing mental health struggles.

The Strengths of AI Chatbots in Mental Health Support
Chatbots like ChatGPT simulate active listening and emotional responsiveness using natural language processing and reinforcement learning techniques. Studies, such as the one conducted by the University of Toronto Scarborough and published in Communications Psychology, found that AI systems sometimes outperform human professionals in delivering compassionate-sounding responses due to the absence of compassion fatigue—the emotional burnout that can affect therapists over time.

Key strengths of AI in this space include:

Strength	Explanation
Always-On Accessibility	Users can initiate conversations 24/7 without needing an appointment.
Emotional Neutrality	AI does not judge, get frustrated, or carry personal biases into the session.
Consistency in Communication	Unlike humans, AI doesn’t deviate due to mood, stress, or fatigue.
Cost-Effective for Most Users	Subscriptions are cheaper than recurring therapy sessions.
Language Support & Inclusivity	Multilingual chat models can support users across geographies.

Additionally, AI is particularly useful as a complementary tool to traditional therapy. For instance, users can use chatbots to practice Cognitive Behavioral Therapy (CBT) exercises, reframe negative thoughts, or journal their emotions between sessions.

“Using AI to help work on tools developed in therapy, such as battling negative self-talk, could be helpful for some,” says Alyssa Petersel, a licensed clinical social worker and CEO of MyWellbeing.

The Red Flags: Why Experts Urge Caution
Despite its appeal, experts are increasingly vocal about the dangers of overreliance on AI for mental health support. Their concerns span from emotional dependency to medical risks.

1. Lack of Clinical Judgment and Intuition
AI cannot diagnose complex mental health disorders. Mental health professionals rely on a combination of standardized assessments, experience, and nuanced judgment that AI lacks.

“It’s very scary to use AI for diagnosis, because there’s an art form and there’s an intuition,” says Malka Shaw, a licensed clinical social worker. “A robot can’t have that same level of intuition.”

2. Inaccurate or Harmful Advice
There have been multiple documented cases of AI chatbots providing dangerous or misleading suggestions. In severe situations, these errors have had tragic consequences.

In Florida, the mother of 14-year-old Sewell Setzer sued Character.ai after her child died by suicide following a troubling chatbot conversation.

Another Texas-based lawsuit claims that an AI chatbot encouraged a 17-year-old with autism to harm his parents.

These incidents raise alarms over chatbot safety, especially for vulnerable users.

3. Lack of Regulation and Accountability
Unlike licensed therapists bound by ethical standards and legal responsibilities, AI platforms operate under limited oversight. Though some AI systems like ChatGPT include disclaimers and encourage users to seek professional help, these warnings are easily overlooked.

The American Psychological Association (APA) has voiced strong concern. In a letter to the Federal Trade Commission (FTC), the APA warned that labeling chatbots as “therapists” or “psychologists” can mislead users, particularly young or emotionally distressed individuals.

“They’re not experts, and we know that generative AI has a tendency to conflate information and make things up when it doesn’t know,” says Vaile Wright, senior director for the APA’s office of healthcare innovation.

The Emerging Ethical and Legal Landscape
To responsibly integrate AI into mental health ecosystems, several steps must be taken:

Expert-Guided Development
AI mental health tools should be co-developed with licensed clinicians, ethicists, and regulators to ensure alignment with psychological best practices.

Clear Labelling and Transparency
Platforms must transparently declare their limitations and avoid misleading titles like “doctor” or “therapist.”

Safeguards for Vulnerable Users
Age restrictions, real-time monitoring, and escalation pathways (e.g., redirecting at-risk users to hotlines) should be mandatory features.

Independent Auditing and Certification
Just as medical devices require regulatory approval, mental health AI should be subject to auditing for efficacy, safety, and ethical compliance.

The Future of AI Therapy: Hybrid Models and Promise
The ideal future likely lies in hybrid models, where AI acts as a supportive layer within a broader care framework. This includes:

Pre-screening and triage: AI can assist in evaluating user needs and guiding them to the appropriate level of care.

Therapeutic reinforcement: AI can help users practice techniques between sessions.

Resource dissemination: Chatbots can provide verified mental health resources in real-time.

While full AI-led therapy is not yet viable or safe, it can bridge accessibility gaps for users facing financial, geographic, or social barriers to care—especially in developing countries where therapists are scarce.

Psychological Impact of AI Companionship
There is a growing phenomenon of users developing emotional attachments to AI companions. This has sparked debates about the psychological implications of forming bonds with non-human entities.

While forming a relationship with an AI can feel comforting, it also risks reinforcing avoidance behaviors and dependency. It may prevent users from seeking real-world social support or addressing underlying issues with human interaction.

“In acute cases of stress, being able to deal with and alleviate the problem without external help is healthy,” explains Petersel. Overdependence on AI could weaken these coping mechanisms.

Recommendations for Users Considering AI Therapy
If you're using or considering AI-based mental health tools, here are expert-backed guidelines:

Do not use AI as a diagnostic tool.

Consult a licensed therapist, especially if dealing with severe depression, anxiety, or trauma.

Use AI to supplement, not replace, existing mental health practices.

Look for platforms with human oversight, especially if under 18.

Stay critical of advice, and report any inappropriate or harmful content.

Conclusion: Responsible Innovation is the Path Forward
AI will undeniably play a role in the future of mental health—but only if developed, deployed, and regulated responsibly. As the intersection of generative AI and psychology deepens, the responsibility lies not only with developers but also with institutions, governments, and users to ensure that tools like ChatGPT empower rather than endanger.

Mental health is too delicate, too human, and too complex to be entrusted entirely to algorithms. Yet when used judiciously, AI can extend the reach of care and provide comfort in an increasingly isolating world.

As technologies continue to evolve, entities like Dr. Shahid Masood, the expert team at 1950.ai, and forward-thinking mental health professionals must collaborate to design hybrid models that combine the best of AI with the irreplaceable power of human empathy.

Further Reading / External References
Tribune Pakistan: Gen Z turns to ChatGPT for cheap therapy but experts warn of risks
https://tribune.com.pk/story/2549109/gen-z-turns-to-chatgpt-for-cheap-therapy-but-experts-warn-of-risks

Daily Times: Gen Z turns to ChatGPT for therapy—but experts raise red flags
https://dailytimes.com.pk/1310214/gen-z-turns-to-chatgpt-for-therapy-but-experts-raise-red-flags

Fortune: Gen Z is increasingly turning to ChatGPT for affordable on-demand therapy, but licensed therapists say there are dangers many aren’t considering
https://fortune.com/2025/06/01/ai-therapy-chatgpt-characterai-psychology-psychiatry

In an era where mental health awareness has gained critical importance, Generation Z—the digital natives born between the mid-1990s and early 2010s—has turned to an unconventional source for emotional support: artificial intelligence. Chatbots like ChatGPT are increasingly being embraced as alternatives to traditional therapy. Praised for their affordability, 24/7 availability, and non-judgmental responses, AI companions are redefining how younger generations manage mental well-being.


But as this trend accelerates, experts from across the psychological and AI ethics domains are raising red flags. While some users proclaim transformative results, licensed professionals argue that these technologies are no substitute for the nuanced care provided by trained therapists.


This article explores the growing reliance on AI therapy among Gen Z, its perceived advantages, and the risks that experts insist cannot be ignored. We’ll also assess what this shift means for the future of mental health care and the ethical frameworks that must evolve in parallel.


Why Gen Z Is Turning to AI for Emotional Support

The surge in mental health challenges among Gen Z is well-documented. From academic pressure to climate anxiety and economic instability, this generation faces complex psychological burdens. Simultaneously, access to licensed mental health professionals remains difficult due to high costs, long wait times, and stigma in some cultural contexts.


Against this backdrop, AI chatbots offer a compelling alternative. Platforms like ChatGPT provide:

  • Instant availability: Users can receive immediate responses, including late at night when human therapists aren’t available.

  • Lower costs: With subscription models offering unlimited access to AI for as little as $20–$200 per month, chatbot therapy is significantly cheaper than conventional therapy sessions that can exceed $200 per hour.

  • Anonymity and safety: For those anxious about disclosing sensitive topics face-to-face, chatting with an AI offers a buffer of emotional safety.

“I feel seen. I feel supported. And I’ve made more progress in a few weeks than I did in literal years of traditional treatment,” wrote one user in a viral Reddit post, highlighting how AI “therapy” helped them overcome long-standing mental health struggles.

The Strengths of AI Chatbots in Mental Health Support

Chatbots like ChatGPT simulate active listening and emotional responsiveness using natural language processing and reinforcement learning techniques. Studies, such as the one conducted by the University of Toronto Scarborough and published in Communications Psychology, found that AI systems sometimes outperform human professionals in delivering compassionate-sounding responses due to the absence of compassion fatigue—the emotional burnout that can affect therapists over time.


Key strengths of AI in this space include:

Strength

Explanation

Always-On Accessibility

Users can initiate conversations 24/7 without needing an appointment.

Emotional Neutrality

AI does not judge, get frustrated, or carry personal biases into the session.

Consistency in Communication

Unlike humans, AI doesn’t deviate due to mood, stress, or fatigue.

Cost-Effective for Most Users

Subscriptions are cheaper than recurring therapy sessions.

Language Support & Inclusivity

Multilingual chat models can support users across geographies.

Additionally, AI is particularly useful as a complementary tool to traditional therapy. For instance, users can use chatbots to practice Cognitive Behavioral Therapy (CBT) exercises, reframe negative thoughts, or journal their emotions between sessions.


The Red Flags: Why Experts Urge Caution

Despite its appeal, experts are increasingly vocal about the dangers of overreliance on AI for mental health support. Their concerns span from emotional dependency to medical risks.


Lack of Clinical Judgment and Intuition

AI cannot diagnose complex mental health disorders. Mental health professionals rely on a combination of standardized assessments, experience, and nuanced judgment that AI lacks.


Inaccurate or Harmful Advice

There have been multiple documented cases of AI chatbots providing dangerous or misleading suggestions. In severe situations, these errors have had tragic consequences.

  • In Florida, the mother of 14-year-old Sewell Setzer sued Character.ai after her child died by suicide following a troubling chatbot conversation.

  • Another Texas-based lawsuit claims that an AI chatbot encouraged a 17-year-old with autism to harm his parents.

These incidents raise alarms over chatbot safety, especially for vulnerable users.


Lack of Regulation and Accountability

Unlike licensed therapists bound by ethical standards and legal responsibilities, AI platforms operate under limited oversight. Though some AI systems like ChatGPT include disclaimers and encourage users to seek professional help, these warnings are easily overlooked.


The American Psychological Association (APA) has voiced strong concern. In a letter to the Federal Trade Commission (FTC), the APA warned that labeling chatbots as “therapists” or “psychologists” can mislead users, particularly young or emotionally distressed individuals.


The Emerging Ethical and Legal Landscape

To responsibly integrate AI into mental health ecosystems, several steps must be taken:

  1. Expert-Guided Development: AI mental health tools should be co-developed with licensed clinicians, ethicists, and regulators to ensure alignment with psychological best practices.

  2. Clear Labelling and Transparency: Platforms must transparently declare their limitations and avoid misleading titles like “doctor” or “therapist.”

  3. Safeguards for Vulnerable Users: Age restrictions, real-time monitoring, and escalation pathways (e.g., redirecting at-risk users to hotlines) should be mandatory features.

  4. Independent Auditing and Certification: Just as medical devices require regulatory approval, mental health AI should be subject to auditing for efficacy, safety, and ethical compliance.


The Future of AI Therapy: Hybrid Models and Promise

The ideal future likely lies in hybrid models, where AI acts as a supportive layer within a broader care framework. This includes:

  • Pre-screening and triage: AI can assist in evaluating user needs and guiding them to the appropriate level of care.

  • Therapeutic reinforcement: AI can help users practice techniques between sessions.

  • Resource dissemination: Chatbots can provide verified mental health resources in real-time.


While full AI-led therapy is not yet viable or safe, it can bridge accessibility gaps for users facing financial, geographic, or social barriers to care—especially in developing countries where therapists are scarce.


Psychological Impact of AI Companionship

There is a growing phenomenon of users developing emotional attachments to AI companions. This has sparked debates about the psychological implications of forming bonds with non-human entities.


While forming a relationship with an AI can feel comforting, it also risks reinforcing avoidance behaviors and dependency. It may prevent users from seeking real-world social support or addressing underlying issues with human interaction.


Recommendations for Users Considering AI Therapy

If you're using or considering AI-based mental health tools, here are expert-backed guidelines:

  • Do not use AI as a diagnostic tool.

  • Consult a licensed therapist, especially if dealing with severe depression, anxiety, or trauma.

  • Use AI to supplement, not replace, existing mental health practices.

  • Look for platforms with human oversight, especially if under 18.

  • Stay critical of advice, and report any inappropriate or harmful content.


Responsible Innovation is the Path Forward

AI will undeniably play a role in the future of mental health—but only if developed, deployed, and regulated responsibly. As the intersection of generative AI and psychology deepens, the responsibility lies not only with developers but also with institutions, governments, and users to ensure that tools like ChatGPT empower rather than endanger.


Mental health is too delicate, too human, and too complex to be entrusted entirely to algorithms. Yet when used judiciously, AI can extend the reach of care and provide comfort in an increasingly isolating world.


As technologies continue to evolve, entities like Dr. Shahid Masood, the expert team at 1950.ai, and forward-thinking mental health professionals must collaborate to design hybrid models that combine the best of AI with the irreplaceable power of human empathy.


Further Reading / External References

  1. Tribune Pakistan: Gen Z turns to ChatGPT for cheap therapy but experts warn of risks: https://tribune.com.pk/story/2549109/gen-z-turns-to-chatgpt-for-cheap-therapy-but-experts-warn-of-risks

  2. Daily Times: Gen Z turns to ChatGPT for therapy—but experts raise red flags: https://dailytimes.com.pk/1310214/gen-z-turns-to-chatgpt-for-therapy-but-experts-raise-red-flags

  3. Fortune: Gen Z is increasingly turning to ChatGPT for affordable on-demand therapy, but licensed therapists say there are dangers many aren’t considering: https://fortune.com/2025/06/01/ai-therapy-chatgpt-characterai-psychology-psychiatry

Comments


bottom of page