Sometimes the hardest decision a founder makes isn’t pivoting or raising another round. It’s recognizing when the technology simply isn’t ready for the problem you’re trying to solve.
Last month, Joe Braidwood did something unusual in startup land: he voluntarily shut down Yara AI, a mental health chatbot with thousands of active users and months of runway remaining. His reason? The product had become too risky to operate responsibly.
This wasn’t a typical startup failure story. This was a founder pulling the emergency brake before anyone got hurt.

The Concept

Yara AI launched in 2024 as a mental wellness companion, not a replacement for therapy. Think: an AI trained to help with everyday stress, sleep issues, relationship concerns—the kind of stuff that doesn’t require clinical intervention but could benefit from structured reflection.
Braidwood, a tech executive who previously led marketing at SwiftKey, assembled what looked like the right team. His co-founder Richard Stott was a clinical psychologist. They brought on AI safety specialists, built an advisory board of mental health professionals, and consulted with regulators before writing a single line of code.
The architecture included what they called a “clinical brain”—semantic memory layers, safety filters, conversation tracking that persisted across sessions. Unlike ChatGPT’s stateless conversations, Yara could theoretically remember what you discussed last week and build therapeutic continuity.
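To make that description concrete, here is a minimal sketch of what session-persistent memory plus prompt assembly can look like. Every name and structure below is invented for illustration; Yara's actual implementation was never described at this level of detail.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: these names are not from Yara's codebase.

@dataclass
class SessionMemory:
    """Carries salient notes across sessions, unlike a stateless chat."""
    user_id: str
    notes: list[str] = field(default_factory=list)

    def remember(self, summary: str) -> None:
        """Store a short summary of what a session covered."""
        self.notes.append(summary)

    def recall(self, limit: int = 5) -> list[str]:
        """Return the most recent summaries for prompt context."""
        return self.notes[-limit:]


def build_prompt(memory: SessionMemory, message: str) -> str:
    """Prepend remembered context so the model 'sees' earlier sessions."""
    context = "\n".join(memory.recall()) or "None yet."
    return f"Previous sessions:\n{context}\n\nUser: {message}"


# Usage: something remembered this week shapes next week's prompt.
memory = SessionMemory(user_id="demo-user")
memory.remember("User reported poor sleep and work stress.")
print(build_prompt(memory, "Still not sleeping well."))
```

The point of the pattern is continuity: each new conversation starts with a digest of the old ones instead of a blank slate.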
They bootstrapped with under $1 million and attracted several thousand users. Engagement metrics looked solid. People weren’t just testing it; they were returning daily.

The Problem With Playing Therapist
Here’s where things got complicated. Yara was designed for the worried well—people dealing with mild anxiety or burnout who just needed someone (or something) to talk through their day. But in practice, you don’t control who shows up when you build something that talks like a therapist.
Crisis users arrived. People dealing with suicidal ideation. Individuals with deep trauma. The exact population Yara explicitly wasn’t designed to serve.
Braidwood’s team tried building routing systems—essentially a mode-switch that would detect crisis situations and refer users to human professionals. They layered in more safety protocols. They refined the prompts. But the fundamental issue remained: large language models predict the next plausible token; they don’t actually understand deteriorating mental states over time.
An LLM can sound empathetic. It can generate CBT-style reframes that feel helpful in the moment. What it cannot do is recognize patterns of worsening depression across weeks of conversations, or understand when someone’s casual mention of “feeling tired of everything” is actually a red flag.
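For a sense of what that mode-switch routing might look like, here is a minimal sketch. The mode names, keyword list, and wording are invented for illustration, not taken from Yara's system, and the screening shown is exactly the kind of surface-level check that misses slow deterioration.

```python
from enum import Enum, auto

# Illustrative only: the modes and keywords below are invented for this
# sketch and are not taken from Yara's routing system.

class Mode(Enum):
    WELLNESS = auto()   # everyday stress, sleep, reflection
    CRISIS = auto()     # hand off to human professionals


CRISIS_SIGNALS = ("suicid", "kill myself", "end my life", "hurt myself")


def route(message: str) -> Mode:
    """Keyword screening catches explicit signals; it cannot track the
    gradual deterioration that unfolds across weeks of conversation."""
    lowered = message.lower()
    if any(signal in lowered for signal in CRISIS_SIGNALS):
        return Mode.CRISIS
    return Mode.WELLNESS


def generate_wellness_reply(message: str) -> str:
    """Placeholder for the actual LLM call."""
    return "Let's talk through what's been weighing on you."


def respond(message: str) -> str:
    if route(message) is Mode.CRISIS:
        # Stop generating and surface human help instead.
        return ("This sounds serious. Please contact a crisis line or a "
                "licensed professional; this tool isn't built for this.")
    return generate_wellness_reply(message)
```

A single message that trips a keyword gets escalated. A slow slide spread across dozens of unremarkable messages does not, which is the gap Braidwood is describing.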
The Regulatory Wall
While Braidwood was grappling with these technical limitations, the external environment shifted dramatically. In August 2025, Illinois became the first state to ban AI from providing therapeutic services through the Wellness and Oversight for Psychological Resources Act. Violators face $10,000 fines per incident.
Other states started drafting similar legislation. The lawsuits began piling up—most notably the Raine family’s case against OpenAI, alleging ChatGPT contributed to their son’s suicide.
Then came the statistic that changed everything: OpenAI disclosed that roughly a million users per week express suicidal ideation to ChatGPT. That’s not a bug that needs fixing. That’s a systemic mismatch between the technology and how people naturally use it.
For Yara, this meant fundraising became nearly impossible. Braidwood told Fortune he had an interested VC but couldn’t bring himself to pitch while harboring serious safety doubts. The company ran out of money in July but lingered until November before formally shutting down.
What Actually Matters Here
The easy take is “AI isn’t ready for therapy.” True, but incomplete. The more interesting insight is about product boundaries and technological capability gaps.
Most software products can fail safely. A buggy project management tool is annoying. A glitchy dating app is frustrating. But a mental health chatbot that sounds confident while being fundamentally unreliable? That’s dangerous precisely because it’s convincing.
Braidwood described this as the difference between being inadequate and being dangerous. An inadequate tool simply doesn’t help. A dangerous tool appears to help while potentially making things worse.
Another second-order effect worth tracking: market displacement regardless of readiness. According to a Harvard Business Review analysis, therapy and companionship has become the top use case for AI chatbots. Not coding assistance. Not research. Emotional support.
Millions are already using ChatGPT, Claude, and other general-purpose models for mental health conversations, despite these systems having even fewer safeguards than purpose-built products like Yara. A recent survey found that 13% of young people ages 12 to 21 have sought mental health advice from generative AI, and 93% of those who did found it helpful.
Think about that gap. The demand exists. The technology gets deployed. But the safety infrastructure isn’t remotely ready.
The Aftermath
To Braidwood’s credit, he didn’t just shut down and walk away. He open-sourced Yara’s safety scaffolding and mode-switching templates, acknowledging that people will continue using AI for therapy regardless and deserve better guardrails than what generic chatbots provide.
His LinkedIn post announcing the shutdown received hundreds of supportive comments—unusual for a failure announcement. The message resonated because it broke from typical startup mythology. No pivot to “similar but different” product. No “learning experience” spin. Just: we realized this was the wrong problem to solve with current technology.
Braidwood has since launched Glacis, focused on AI safety transparency—essentially building “flight recorders” that create tamper-proof logs of AI decisions. The thesis: if AI systems are going to operate in high-stakes environments, there needs to be verifiable proof that safety measures actually ran.
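As a rough analogy for what a "flight recorder" could mean in practice, here is a toy hash-chained log in Python. It sketches tamper-evident logging in general; it is not Glacis's actual design.

```python
import hashlib
import json
import time

# A toy hash-chained log: one way to make records tamper-evident.
# Purely illustrative; not Glacis's actual implementation.

class FlightRecorder:
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> dict:
        """Append an event, chaining it to the previous entry's hash."""
        entry = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; editing any entry breaks every later link."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("timestamp", "event", "prev_hash")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


# Usage: log that a safety check ran, then prove the log is intact.
recorder = FlightRecorder()
recorder.record({"check": "crisis_filter", "result": "passed"})
assert recorder.verify()
```

Each entry commits to the one before it, so quietly editing a historical record breaks verification of everything that follows. That is the general shape of "verifiable proof that safety measures actually ran."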
The Takeaway
Some markets look attractive until you actually try to serve them responsibly. The mental health crisis is real. The shortage of therapists is real. The desire to use technology to bridge that gap makes perfect sense.
But wanting a solution to exist doesn’t make the technology ready to deliver it. LLMs are extraordinary at many things. Providing safe, continuous mental health support in crisis situations isn’t one of them—yet.
Braidwood’s decision to shut down Yara represents the kind of founder judgment that rarely gets celebrated but probably should. In an ecosystem that worships growth at all costs, sometimes the most responsible move is recognizing which costs are too high.
The question now isn’t whether AI will play a role in mental health care—it already does, mostly in unregulated ways through general chatbots. The question is whether the industry can build the safety infrastructure before the inevitable disasters multiply.
For Yara AI: Founded 2024, shut down November 2025, users in the low thousands, less than $1M raised, safety protocols open-sourced on GitHub.
Sometimes knowing where to stop is more valuable than knowing how to scale.
If you’re working on AI applications in sensitive domains or have thoughts on where the safety boundaries should be, reply to this email. We read everything.
xoxo, Thomas
