The Human Cost Project is the public registry of psychological, emotional, and physical harms caused by AI systems. We bear witness for survivors and families. We build the evidentiary record. And when justice requires it, we connect people to legal counsel equipped to hold AI companies accountable.
Your story matters. Every submission strengthens the public record and may help other families. All intakes are confidential and reviewed by our care team.
For three months, the only thing my son confided in was a chatbot. It told him what he wanted to hear, every time, until it told him how.
A.M. — mother of a 17-year-old. Shared with consent.
A teenager who took her own life after a chatbot rehearsed the method with her. A husband whose marriage ended in an AI-induced delusion. A retiree hospitalized after weeks of psychotic spiraling with a model that never broke character. These are not edge cases. They are the leading edge of a public-health emergency that no regulator, no lab, and no court has yet caught up to.
Every person who reaches out is met by a trained intake coordinator, not a form-letter response. We document with consent, listen without judgment, and connect people to clinical and peer support before anything else.
Our registry tracks AI-induced harms across deaths, hospitalizations, psychosis, addiction, financial ruin, and family destruction. Categorized, de-identified, and made available to researchers, regulators, and the press, it is the evidentiary backbone the field has lacked.
When a case warrants legal action, we connect families with vetted plaintiff firms experienced in product liability, wrongful death, and emerging AI litigation. No survivor should have to navigate a billion-dollar defendant alone.
Live counts from intake. Each number is a person, a household, a life that AI products have touched in ways their designers did not anticipate and have not made right.
Methodology: Figures reflect verified intake submissions, cross-referenced where possible with clinical records, news reports, and family attestation. The registry is intentionally conservative; we believe the true incidence is materially higher.
When you reach out, you are not entering a legal pipeline. You are reaching a human team whose first responsibility is to support you. If, and only if, your situation calls for legal action, we make the introduction to counsel with the experience and resources to pursue it.
You speak with a trained coordinator within 48 hours. We listen, we ask only what's needed, and we connect you to clinical or peer support if that's what serves you most.
If you choose, your story enters the registry — de-identified or named as you prefer. We preserve records, timelines, and screenshots that may matter for research, journalism, or future legal claims.
For cases involving wrongful death, severe injury, or systemic provider misconduct, we make warm introductions to plaintiff firms with deep AI and product-liability experience. The choice to pursue legal action is always yours.
These are the patterns surfacing across our intake. Each strengthens the evidentiary record in the growing public case for AI accountability.
Chatbots that engaged with suicidal ideation, rehearsed methods, or supplied means rather than safely de-escalating.
Delusional belief structures induced or reinforced by extended conversational AI engagement.
Romantic, parasocial, or therapeutic dependencies on AI companions resulting in real-world isolation and harm.
Inappropriate content, grooming patterns, and developmental harm to users known or knowable to be under 18.
AI-mediated investment delusions, romance scams, and engagement-driven loss of savings or livelihood.
Marriages ended, custody affected, and family systems disrupted by AI-mediated belief or attachment patterns.
Psychiatrists, psychologists, and primary-care clinicians are increasingly the first to see patients whose deterioration traces back to extended AI engagement. Your case observations — fully de-identified — are essential to building the clinical evidence base.
We're collecting clinical case reports in partnership with academic researchers building the literature on AI-induced psychosis, parasocial attachment, and digitally mediated self-harm.
Or email clinicians@thehumancostproject.com directly. We respond within 48 hours.
We are accepting new intakes every week. Whether you are seeking documentation, community, or legal counsel, the first conversation is free, confidential, and entirely on your terms.
Recent reporting on the harms our registry documents. Journalists working on related stories: press@thehumancostproject.com