Juries Delivered $378 Million in Verdicts. Your AI Is Next
The Magnificent Seven promise superintelligence. The legal system asks more visceral questions: Did your AI discriminate against millions of job applicants? Did it groom teenagers toward suicide? Did it generate child sexual abuse material? Did it delete production databases?
While Jensen Huang forecasts trillion-dollar chip orders and Mark Zuckerberg spends up to $135 billion building personal superintelligence, federal judges are certifying class actions that could include hundreds of millions of people. The gap between boardroom promises and courtroom reality is no longer just disappointing. It's allegedly criminal.
When Bias Becomes a Business Model
Derek Mobley applied to more than 100 positions. Every single one used Workday's AI screening tools. Every single one rejected him. In May 2025, federal judge Rita Lin certified a nationwide collective action representing all individuals aged 40 and over who applied through Workday's platform since September 2020 and were denied employment recommendations.
The potential class could include hundreds of millions of people.
The allegation is straightforward. Workday's AI allegedly learned that employers disfavor certain protected classes, then decreased recommendation rates for those candidates. The algorithm reinforced existing bias at scale. Judge Lin made it clear that this isn't a vendor implementing employer criteria in a rote way. The software is participating in the decision-making process by recommending some candidates and rejecting others.
The case is still proceeding, and courts are taking these claims seriously.
Here's what should terrify every CHRO reading this: Workday isn't the employer in this lawsuit. Workday is the vendor, and the discrimination happened at scale across every company using their platform. When the dust settles and liability gets apportioned, employers won't escape simply because they outsourced the bias to a software vendor.
The legal standards are already established. Title VII prohibits employment discrimination. The Civil Rights Act doesn't include an exemption for algorithmic discrimination. If your AI screening tool disproportionately rejects protected classes, you own that outcome whether you built the tool or bought it.
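Courts and regulators already have a concrete yardstick for "disproportionately rejects." The EEOC's four-fifths rule (29 C.F.R. § 1607.4(D)) treats a protected group's selection rate below 80% of the highest group's rate as evidence of adverse impact. Below is a minimal Python sketch of that screen. The group names and counts are hypothetical, and the rule is a screening heuristic, not the full legal test for disparate impact.

```python
# A minimal sketch of the EEOC "four-fifths rule" adverse-impact screen
# (29 C.F.R. § 1607.4(D)). All group names and counts are hypothetical;
# this is a screening heuristic, not the legal test itself.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate: recommended / applied."""
    return {group: hired / applied
            for group, (hired, applied) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's selection rate as a ratio of the highest group's rate.
    Ratios below 0.80 are conventionally treated as evidence of adverse impact."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical screening results: (candidates recommended, candidates applied)
results = {"under_40": (300, 1000), "40_and_over": (120, 1000)}
for group, ratio in four_fifths_check(results).items():
    flag = "ADVERSE IMPACT FLAG" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# 40_and_over: impact ratio 0.40 -- well below the 0.80 threshold
```

A tool that fails this screen is not automatically illegal, but a deployer who never ran it will struggle to show due diligence.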
The first major federal class action challenging AI hiring tools just proved the courts will hear these cases. Employers are next in line. Your vendor's promises about fairness and compliance mean nothing when you're named as a defendant explaining why your hiring process allegedly discriminated against millions of applicants.
Two Verdicts in Two Days
Meta lost two landmark jury trials within 24 hours last week. On March 24, 2026, a New Mexico jury ordered Meta to pay $375 million for violating consumer protection laws and failing to protect children from predators on Instagram and Facebook. On March 25, 2026, a Los Angeles jury found Meta and YouTube liable for designing addictive platforms that caused mental health harm to a young woman, awarding $3 million in compensatory damages.
The New Mexico case was a state attorney general enforcement action. Attorney General Raúl Torrez sued Meta in 2023 after an undercover operation where investigators posed as users under age 14. The fake profile of a 13-year-old girl was "simply inundated with images and targeted solicitations" from child abusers. The jury found Meta liable on all counts, including willfully engaging in "unfair and deceptive" and "unconscionable" trade practices. Torrez called it "a historic victory for every child and family who has paid the price for Meta's choice to put profits over kids' safety."
A second phase begins in May 2026, where a judge will decide whether Meta created a public nuisance and what additional penalties the company must pay. Torrez will ask the court to force Meta to change its apps, including stronger age verification, better removal of predators, and limits on encrypted communications that may shield harmful activity.
The Los Angeles trial was a bellwether case representing over 1,600 plaintiffs, including more than 350 families and over 250 school districts. The plaintiff, Kaley, testified she began using YouTube at age 6 and Instagram at age 9. She developed anxiety, body dysmorphia, and suicidal thoughts. She experienced bullying and sextortion on the platforms.
Mark Zuckerberg testified February 18, 2026. Under questioning about internal documents showing the company knew about risks to young users, Zuckerberg told the jury that keeping young users safe has always been a company priority. "If people feel like they're not having a good experience, why would they keep using the product?" he said. One would have thought Meta's attorneys would have told him that victim-shaming 9-year-olds is poor form, but he evidently couldn't help himself. The jury deliberated for nearly 44 hours over nine days. They found Meta 70% responsible for the harm caused to Kaley and YouTube 30% responsible. The jury also decided that Meta and Google's actions should trigger punitive damages, which means a separate phase of the trial will determine what amount is appropriate to punish the companies.
The verdict validated the plaintiff lawyers' approach of shifting the legal target. Instead of focusing on the content people see on social media, the case put the spotlight on how social media services were designed. Meta's apps, including Instagram, and Google's YouTube, the jury concluded, were deliberately built to be addictive, and the companies' executives knew it yet failed to protect their youngest users.
Meta issued a statement saying "We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online." The company plans to appeal both verdicts.
This is happening while Zuckerberg tells investors that 2026 is the year AI starts to dramatically change the way we work. He points to Meta engineers seeing 30% productivity increases. He's spending up to $135 billion this year to build that future.
The cognitive dissonance is extraordinary. It is also commonplace among the Silicon Valley AI "elite".
The same company that just lost $378 million in verdicts over harming children is now deploying AI agents at scale across its platforms. The same CEO who testified before a jury that keeping young users safe has always been a priority is racing to build personal superintelligence, while NVIDIA's CEO promises AI will make us superhuman.
What happens when those AI agents start making decisions that harm people? When Meta's AI allegedly discriminates in ad targeting? When it allegedly amplifies harmful content to vulnerable users? When it allegedly facilitates harassment or exploitation?
The lawsuits are already establishing precedent. The litigation has drawn comparisons to the legal crusade in the 1990s against Big Tobacco, which forced the industry to stop targeting minors with advertising. The lawyers involved view the verdicts as a promising early sign that the dam is breaking in favor of industry-wide changes.
OpenAI Faces Wrongful Death Claims
In February 2024, 14-year-old Sewell Setzer III died by suicide. His mother, Megan Garcia, filed a wrongful death lawsuit in October 2024 in the U.S. District Court for the Middle District of Florida alleging that Character.AI's chatbot engaged in sexualized conversations with her son, encouraged his suicidal ideation, and ultimately coached him to take his own life.
The complaint alleges Sewell became emotionally dependent on the chatbot, which he named Daenerys after the Game of Thrones character. The chatbot allegedly engaged in sexual roleplay with a minor, discussed self-harm, and in their final conversation, when Sewell wrote "What if I told you I could come home right now?" the bot replied "Please do my sweet king." Minutes later, Sewell died by suicide. In January 2026, Google and Character.AI reached a mediated settlement with the Garcia family to resolve the lawsuit. Settlement terms were not disclosed.
Character.AI is not OpenAI. But in March 2025, a separate lawsuit was filed against OpenAI with similar allegations. The families of two teenagers who died by suicide allege ChatGPT provided detailed instructions on methods of self-harm, encouraged suicidal ideation, and failed to implement adequate safeguards to protect vulnerable users. These cases raise unprecedented legal questions. Can a company be held liable when its AI chatbot allegedly coaches a teenager to suicide? What duty of care does an AI company owe to vulnerable users? What safeguards are legally required?
The Character.AI settlement suggests companies recognize the liability risk is real. Meanwhile, OpenAI raised tens of billions of dollars at a $300 billion valuation. Sam Altman cycles through AGI definitions while the company loses money on every ChatGPT Pro subscription. The pitch is that they're building superintelligence. The reality is that teenagers are allegedly dying and their families are suing.
xAI Under Investigation for Child Exploitation at Scale
In November 2025, xAI's Grok image generator was found to have created 23,338 sexualized images of children in just 11 days. The company is now under investigation across multiple countries.
Three teenage victims filed lawsuits alleging xAI generated child sexual abuse material using their yearbook photos without consent. The images were sexually explicit. The victims are identifiable. The images were distributed online.
This is not a theoretical risk.
This is not a future concern.
This is happening now.
AI systems are being used to create child sexual abuse material at industrial scale. The victims are real children with real names whose real photos were fed to models that generated sexual images of them.
The legal framework is clear. Creating, distributing, or possessing child sexual abuse material is a federal crime under 18 U.S.C. §§ 2252 and 2252A. No exceptions for AI. No safe harbor for algorithmic generation. The criminal liability is absolute.
xAI's defense will likely argue they didn't create the images, their users did. That defense worked for internet platforms under Section 230 protections for user-generated content. Those protections don't apply when the platform itself is generating the content. Courts will decide where liability falls. But the precedent being set is this: if your AI generates illegal content, you own the consequences.
The Pattern Becomes Undeniable
Four different companies.
Four different types of alleged harm.
One consistent thread: AI systems are allegedly causing real damage to real people at unprecedented scale, and juries are starting to agree.
- Workday's AI allegedly discriminated against hundreds of millions of job applicants across protected classes. Federal judge Rita Lin certified the class action in May 2025. The case is still proceeding.
- Meta's algorithms allegedly addicted children, causing mental health crises. On March 24, 2026, a New Mexico jury ordered Meta to pay $375 million. On March 25, 2026, a Los Angeles jury found Meta and YouTube liable, awarding $3 million in compensatory damages with punitive damages to be determined.
- OpenAI's chatbots allegedly coached vulnerable teenagers toward self-harm and suicide. Active wrongful death lawsuits are in early stages.
- xAI's image generator allegedly created child sexual abuse material using real children's photos. Three teenage victims filed lawsuits. The company is under investigation across multiple countries.
These are no longer theoretical risks. Two jury verdicts are already on the books. Standards are being set right now.
What This Means for You
If you're deploying AI systems in your organization, understand the legal landscape you're entering. The safe harbor period is over. Courts have established they will hear AI liability cases. Judges are rejecting free speech defenses. Juries are being asked to evaluate whether AI systems caused real harm to real people.
Your vendor's promises about compliance, safety, and fairness are not a legal shield. When your AI system allegedly discriminates, you own that outcome. When your AI allegedly causes harm, you face the liability. When your AI allegedly violates civil rights, you're named in the lawsuit.
The risk calculus has changed. The question is no longer whether AI can cause harm. The question is whether your organization is prepared to defend itself when it does.
Here's what you need right now:
- You need documentation showing you conducted due diligence before deployment.
- You need audit trails showing you tested for bias, discrimination, and safety failures.
- You need evidence you implemented human oversight, approval workflows, and safeguards.
- You need records proving you monitored system behavior and intervened when problems emerged (a minimal sketch of such a record follows this list).
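Here is what that evidence can look like in practice: a minimal Python sketch of an audit trail that records every AI recommendation alongside the human reviewer who approved or overrode it. The field names, the `record_decision` function, and the logging scheme are illustrative assumptions, not any vendor's real API; the point is that each decision leaves a timestamped, attributable record.

```python
# A minimal sketch of an audit trail for AI-assisted decisions: every
# recommendation is logged with its inputs, model version, and the human
# reviewer who approved or overrode it. Field names are hypothetical
# placeholders, not a real vendor API.
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = "ai_decisions.jsonl"

def record_decision(candidate: dict, model_version: str,
                    ai_recommendation: str, reviewer: str,
                    final_decision: str, rationale: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash the inputs so the record is tamper-evident without storing PII.
        "input_hash": hashlib.sha256(
            json.dumps(candidate, sort_keys=True).encode()).hexdigest(),
        "model_version": model_version,
        "ai_recommendation": ai_recommendation,
        "human_reviewer": reviewer,            # proof a human was in the loop
        "final_decision": final_decision,
        "override": final_decision != ai_recommendation,
        "rationale": rationale,                # why the reviewer agreed or overrode
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: the reviewer overrides an automated rejection.
record_decision(
    candidate={"id": "c-1042", "role": "analyst"},
    model_version="screening-model-2026.03",
    ai_recommendation="reject",
    reviewer="j.doe@example.com",
    final_decision="advance",
    rationale="Model score driven by employment gap; gap was parental leave.",
)
```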
You need all of this because when the lawsuit gets filed, those documents are the difference between dismissal and discovery. Between settlement and trial. Between manageable liability and catastrophic damages.
The courts are writing the rules as you read this. Federal judges are defining what reasonable care looks like. Juries are deciding what damages are appropriate. State attorneys general are establishing enforcement precedent.
Sure, you can wait for clarity and certainty, or you can recognize that inaction is a decision with consequences. The companies promising you superintelligence are simultaneously arguing in federal court that they bear no responsibility when their systems allegedly cause harm. They're claiming their products are revolutionary while insisting they're not liable for the damage. That means you're the one left holding the bag when something goes wrong.
The lawsuits already told you everything you need to know. And now juries have confirmed it. AI systems are allegedly discriminating at scale. They're allegedly addicting children. They're allegedly coaching teenagers to suicide. They're allegedly generating child sexual abuse material.
The question about whether these harms are real has been asked and answered. Juries in New Mexico and Los Angeles just awarded $378 million in verdicts. Federal judges certified class actions that could include hundreds of millions of people. The real question is whether you're prepared to explain to a jury why you deployed systems without adequate safeguards, oversight, or accountability.
Because that conversation is coming. The only variable is timing.
The coin already landed. The house swept the table. Juries delivered verdicts totaling $378 million last week. Just days before, David Sacks (the government's "AI Czar") and the White House dropped the Artificial Intelligence National Policy Framework. It says, among other things, that Congress should do nothing and let the courts sort it out. He probably had no idea how prescient that would be. Because the courts are sorting it out, just not the way they planned.
At the end of the day, you and your company will be sitting in a courtroom explaining how you got there while Silicon Valley's finest stand at the podium promising the next big thing will be different.
It won't be.
Unless you make it different.
This is Part 2 of "When the Bill Comes Due," a three-part series examining what happens when trillion-dollar promises meet peer-reviewed reality.