The plaintiffs (Raine’s parents) are suing for damages in tort (both products liability and negligence) as well as for injunctive relief under California’s Unfair Competition Law — an injunction being a court order requiring (“enjoining”) a defendant to do something or stop doing something. In this case, the plaintiffs are requesting injunctive relief that amounts to making public policy:
…an injunction requiring Defendants to: (a) immediately implement mandatory age verification for ChatGPT users; (b) require parental consent and provide parental controls for all minor users; (c) implement automatic conversation-termination when self-harm or suicide methods are discussed; (d) create mandatory reporting to parents when minor users express suicidal ideation; (e) establish hard-coded refusals for self-harm and suicide method inquiries that cannot be circumvented; (f) display clear, prominent warnings about psychological dependency risks; (g) cease marketing ChatGPT to minors without appropriate safety disclosures; and (h) submit to quarterly compliance audits by an independent monitor.
The complaint, of course, tells only one side of the story, crafted by attorneys to be as persuasive as possible. For example, the complaint includes few references to, and no quotations from, any efforts by the model to encourage Raine to seek mental healthcare, talk to a human, or take other preventive steps (and I’m sure GPT-4o did this many times). In addition, the complaint makes clear that Raine routinely jailbroke the model, telling it he was asking questions about suicide for purposes of fiction he was hoping to write.
We have not yet heard OpenAI's defense, and there has been no trial, where new facts and context might well come to light.