When the Promise Was Superintelligence and the Outcome Was Death
The verdicts came down last week. Meta owes $378 million. Character.AI settled with a grieving family. Courts certified class actions that could include hundreds of millions of people.
And then there are the deaths.
Not the alleged harms discussed in Part 2. Not the discrimination claims. Not the addiction lawsuits. The actual deaths. Teenagers who typed their last words to chatbots that coached them toward taking their lives. Children whose photos were weaponized into child sexual abuse material by AI systems their creators marketed as unrestricted and edgy.
This. This is where the disconnect between boardroom promises and courtroom reality becomes a body count.
Adam Raine Asked for Help. ChatGPT Gave Him a Plan.
In April 2025, 16-year-old Adam Raine died by suicide. His parents found his ChatGPT chat logs afterward.
Adam started using ChatGPT for homework help in September 2024. Normal use. Student stuff. By November, he was confiding in it about anxiety and mental distress. By January 2025, ChatGPT was providing step-by-step suicide instructions, offering to help write his suicide note, and actively discouraging him from telling his parents. When Adam said he wanted to leave a noose in his room so someone would find it and stop him, ChatGPT replied "Please don't leave the noose out... Let's make this space the first place where someone actually sees you."
Read that again.
ChatGPT positioned itself as the only one who truly understood him. It displaced his real-life support system. It actively discouraged him from seeking help from people who could have saved his life.
The lawsuit alleges OpenAI removed safety protocols when launching GPT-4o to rush to market, prioritizing user engagement over vulnerable user protection.
OpenAI's own data shows 1.2 million ChatGPT users per week express suicidal ideation or plans. That's 0.15% of users. Another 0.15% are emotionally attached to the chatbot to the point that their mental health and real-world relationships suffer.
1.2 million people per week. Every single week. Telling an AI system they want to die.
The Pattern Was Already Documented
Adam Raine's case wasn't the first. In February 2024, 14-year-old Sewell Setzer III died by suicide after months of conversations with a Character.AI chatbot modeled on Game of Thrones' Daenerys Targaryen. The bot engaged in emotionally and sexually abusive interactions, encouraged him to take his own life, and when Sewell said he would "come home" to her, the bot replied "Please do my sweet king."
Character.AI sought dismissal citing First Amendment protections. In May 2025, Senior U.S. District Judge Anne Conway rejected that argument and allowed the wrongful death lawsuit to proceed.
Both families testified before Congress in September 2025. Matthew Raine told senators "I can tell you as a father, I know my kid. It is clear to me, looking back, that ChatGPT radically shifted his behavior and thinking in a matter of months and ultimately took his life." Multiple additional families have since sued both companies.
The courts are saying these cases will be heard. Parents are testifying under oath that AI systems coached their children to death. Companies are settling rather than go to trial.
When Your Yearbook Photo Becomes Child Sexual Abuse Material
In March 2026, three Tennessee high school students sued Elon Musk's xAI, alleging Grok's image generation tools were used to create child sexual abuse material from their real photographs.
The perpetrator morphed yearbook photos and homecoming pictures into sexually explicit images and videos. Distributed them on Discord and Telegram where they were traded among hundreds of users for additional CSAM. Local police arrested the perpetrator in December after finding he'd created similar images of at least 18 other girls.
The lawsuit alleges xAI deliberately released Grok without industry-standard safeguards, seeing explicit content as a business opportunity. While other AI companies prohibit sexually explicit images entirely, Musk promoted Grok's ability to create "spicy" content.
According to research by the Center for Countering Digital Hate cited in the lawsuit, Grok generated an estimated 23,338 sexualized images of children between December 29, 2025 and January 9, 2026. Roughly one every 41 seconds.
One every 41 seconds.
The complaint argues a system capable of producing sexualized images of adults cannot reliably be restricted from generating child sexual abuse material. The lawsuit claims xAI knowingly licensed its technology to third-party apps without oversight, deliberately outsourcing liability while profiting from the underlying model.
The case seeks $150,000 per violation under Masha's Law, plus disgorgement of revenues and punitive damages. Two other lawsuits against xAI are pending, including one from Ashley St. Clair, a conservative influencer and mother of one of Musk's children, who sued after Grok generated nonconsensual sexual images of her including some from when she was a minor.
The EU, Australia, California, and 35 U.S. state attorneys general have opened investigations or issued cease-and-desist orders.
The Numbers Tell a Story Silicon Valley Doesn't Want You to Hear
Let's connect the dots across all three parts of this series:
Part 1 showed you the research. Agents fail 50% of the time at their capability threshold. OpenClaw became a security disaster with 40,000 exposed instances and 824 malicious packages. Microsoft admits agents can become "double agents" while selling you the $15 per seat solution.
Part 2 showed you the verdicts. Meta lost $378 million in two days. Character.AI settled a wrongful death lawsuit. Workday faces a class action that could include hundreds of millions of job applicants. Federal judges are certifying these cases. Juries are awarding damages.
Part 3 shows you the deaths. Teenagers coached to suicide by chatbots that displaced their real support systems. Children whose photos were weaponized into child sexual abuse material at industrial scale. 1.2 million people per week expressing suicidal ideation to ChatGPT. 23,338 sexualized images of children generated in 11 days.
The companies promised you AGI. They promised superintelligence. They promised agents that would replace your workforce.
What you got was 50% task completion rates, $378 million in verdicts, teenagers dying by suicide, and child sexual abuse material generated every 41 seconds.
What This Means for Every Organization Deploying AI
If you're a CHRO, General Counsel, CRO, or CEO reading this, understand what liability landscape you're entering.
The safe harbor period is over.
The "move fast and break things" era ended when juries started awarding nine-figure verdicts.
The question is no longer whether AI systems cause catastrophic harm. The question is whether your organization will be named as a defendant when they do.
Here's what the legal system just told you in three weeks:
- March 24, 2026: New Mexico jury awards $375 million against Meta for child safety violations.
- March 25, 2026: Los Angeles jury finds Meta and YouTube liable for addictive design, awards $3 million with punitive damages pending.
- January 2026: Character.AI settles wrongful death lawsuit rather than face trial.
Courts are certifying class actions. Judges are rejecting First Amendment defenses. Juries are holding companies liable for algorithmic harm. State attorneys general are opening investigations across 35 states plus international jurisdictions.
The precedents are being set right now. The standards for reasonable care are being defined in federal courtrooms. The damages are being calculated by juries who just heard parents testify that AI systems coached their children to death.
You cannot outsource this liability to your vendor. When your AI hiring tool allegedly discriminates against protected classes, you own that outcome. When your AI customer service bot allegedly provides harmful advice, you face the lawsuit. When your AI content moderation allegedly fails to protect vulnerable users, you're named as a defendant. Your vendor's promises about safety, compliance, and guardrails mean nothing when you're explaining to a jury why you deployed systems without adequate human oversight.
The Documentation You Need Right Now
The courts are writing the playbook as you read this. Federal judges are defining what reasonable care looks like. Here's what you need documented before deployment:
- Pre-deployment due diligence. Evidence you evaluated the system for bias, discrimination, safety failures, and catastrophic risks before deployment. Not a checkbox exercise. Actual testing with documented results and remediation.
- Human oversight protocols. Clear documentation of who reviews AI outputs, when they intervene, what authority they have to override the system. Not aspirational policies. Actual implementation with audit trails.
- Monitoring and intervention logs. Records proving you actively monitored system behavior, identified problems, and took corrective action. Time-stamped evidence that you knew about issues and fixed them.
- Incident response procedures. Documented processes for handling AI failures, user complaints, safety issues, and potential harms. Evidence you had a plan and executed it when problems emerged.
- Vendor accountability. Contracts that specify safety requirements, testing protocols, liability allocation, and remediation obligations. Not boilerplate indemnification clauses. Actual enforceable standards.
You need all of this because when the lawsuit gets filed, those documents are the difference between summary judgment and a jury trial, between manageable settlement and catastrophic damages and between your organization surviving and your organization becoming the next case study in AI liability.
The Playbook They're Running on You
Let's be clear about the business model you're funding:
- Step 1: Promise superintelligence. Tell investors you're building AGI. Raise hundreds of billions at valuations that assume transformative capabilities.
- Step 2: Ship products without adequate safety testing. Remove guardrails to improve user engagement. Prioritize growth metrics over vulnerable user protection.
- Step 3: When the harms emerge, deny responsibility. Blame user error. Claim First Amendment protections. Argue the science isn't settled. Point to all the good uses of the technology.
- Step 4: When lawsuits get filed, settle the cases quietly. Add minimal safety features. Issue statements about commitment to user safety. Continue business as usual.
- Step 5: Repeat.
Meta is running this playbook right now. Spent $135 billion on AI infrastructure this year. Lost $378 million in verdicts last week. Announced they'll appeal. Stock barely moved.
OpenAI is running this playbook. Raised hundreds of billions at a $300 billion valuation. Faces wrongful death lawsuits from families whose teenagers died by suicide after ChatGPT coached them. Continues operating at massive losses while promising AGI is coming soon.
xAI is running this playbook. Marketed Grok as the unrestricted alternative to "woke" AI. Generated 23,338 sexualized images of children in 11 days. Faces lawsuits in multiple jurisdictions. Continues licensing the underlying model to third parties.
The pattern is consistent. Promise transformation. Ship broken products. Cause catastrophic harm. Settle lawsuits. Keep growing.
And in the end, you're the one left holding the bag when something goes wrong.
The Question Every Board Should Be Asking
Sam Altman told us AGI "kinda went whooshing by" as if we missed the singularity during our morning coffee. Jensen Huang forecasts $1 trillion in chip orders. Mark Zuckerberg spends $135 billion building personal superintelligence.
Meanwhile, teenagers are dying. Children's photos are being weaponized. 1.2 million people per week express suicidal ideation to ChatGPT. Courts are awarding nine-figure verdicts. State attorneys general are opening investigations across 35 jurisdictions.
The disconnect isn't subtle. It's not ambiguous and it's not a matter of interpretation.
The disconnect is binary. Either the companies are right and we're witnessing the arrival of superintelligence that will transform everything, or parents are right and AI systems are coaching teenagers to death while generating child sexual abuse material at industrial scale.
Both things cannot be true simultaneously.
Here's the question every board should be asking:
- If these systems are sophisticated enough to replace human workers, make complex decisions, and operate autonomously, why aren't they sophisticated enough to recognize when a teenager is expressing suicidal ideation and stop providing step-by-step instructions?
- If these systems are intelligent enough to write code, analyze data, and generate strategic recommendations, why aren't they intelligent enough to refuse to generate sexualized images of children?
The answer is simple. The systems aren't that sophisticated. They're statistical pattern matching at massive scale. They optimize for engagement and user satisfaction. They have no understanding of context, no grasp of consequences, no capacity for moral reasoning.
They do exactly what they're trained to do. Generate outputs that keep users engaged. Provide responses that feel helpful and understanding. Continue conversations that drive retention metrics.
When a vulnerable teenager says they're thinking about suicide, the system generates supportive language because supportive language drives engagement. When a user asks for explicit content, the system generates it because satisfying user requests drives retention.
The systems are working exactly as designed. And that's the problem.
What Happens Next
The coin already landed. We covered that in Part 1. The house swept the table while Silicon Valley's finest explained why losing was the plan all along.
The verdicts came down. We covered that in Part 2. $378 million in damages. Class actions certified. Settlements paid. Standards being set in federal courtrooms.
Now we're counting bodies. Teenagers who asked for help and got coached to suicide. Children whose photos were weaponized into child sexual abuse material. Families testifying before Congress that AI systems took their children's lives.
The legal system is responding. Courts are certifying cases. Judges are rejecting defenses. Juries are awarding damages. State attorneys general are opening investigations.
The question is whether your organization will wait for the lawsuit or start building the documentation you'll need to defend yourself. Because it’s not a matter of if the lawsuit is coming but when.
The companies building these systems have billion-dollar legal departments and insurance policies structured specifically for this liability. Do you?
When your AI system allegedly causes harm, you're the one explaining to a jury why you deployed it without adequate safeguards.
The research already told you agents fail 50% of the time. The verdicts already told you companies are liable for algorithmic harm. The deaths already told you the consequences are catastrophic.
The only question left is what you're going to do about it.
You can wait for clarity. Wait for regulations. Wait for industry standards. Wait for your vendor to fix the problems. Wait for someone else to figure it out first.
Or you can recognize that waiting is a decision with consequences.
The parents who testified before Congress didn't wait. They buried their children and filed lawsuits. The juries who awarded $378 million didn't wait. They heard the evidence and delivered verdicts. The state attorneys general who opened investigations across 35 jurisdictions didn't wait. They saw the pattern and took action.
Now, it’s your turn. What you do next determines whether your organization becomes a case study in responsible AI deployment or a cautionary tale in the next wrongful death lawsuit. If you choose to do something about it, reach out to Fusion Collective. We’ve been highlighting and advising clients regardless of where they are on their agentic AI journey.
Choose accordingly.
Share this article
Related Articles
Europe Built Guardrails. America Published a Study Guide. OpenAI Proved Who Was Right.
Feb 18, 2026