Unsurprisingly, lawyers aren’t the only ones to use AI programs (such as ChatGPT) to write portions of briefs, and thus end up filing briefs that contain AI-generated fake cases or fake quotations (cf. this federal case, and the state cases discussed here, here, and here). From an Oct. 23 opinion by Chief Judge William P. Johnson (D.N.M.) in Morgan v. Community Against Violence:
Rule 11(b) of the Federal Rules of Civil Procedure states that, for every pleading, filing, or motion submitted to the Court, an attorney or unrepresented party certifies that "it is not being presented for any improper purpose, such as to harass, cause unnecessary delay, or needlessly increase the cost of litigation," that all claims or "legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law," and that factual contentions have evidentiary support….
Plaintiff cited to several fake or nonexistent opinions. This appears to be only the second time a federal court has dealt with a pleading involving “non-existent judicial opinions with fake quotes and citations.” Quite obviously, many harms flow from such deception—including wasting the opposing party’s time and money, the Court’s time and resources, and reputational harms to the legal system (to name a few).
The foregoing should provide Plaintiff with enough constructive and cautionary guidance to allow her to proceed pro se in this case. But, her pro se status will not be tolerated by the Court as an excuse for failing to adhere to this Court’s rules; nor will the Court look kindly upon any filings that unnecessarily and mischievously clutter the docket.
Thus, Plaintiff is hereby advised that she will comply with this Court’s local rules, the Court’s Guide for Pro Se Litigants, and the Federal Rules of Civil Procedure. Any future filings with citations to nonexistent cases may result in sanctions such as the pleading being stricken, filing restrictions being imposed, or the case being dismissed. See Aimee Furness & Sam Mallick, Evaluating the Legal Ethics of a ChatGPT-Authored Motion, LAW360 (Jan. 23, 2023, 5:36 PM), https://www.law360.com/articles/1567985/evaluating-the-legal-ethics-of-a-chatgpt-authored-motion.