AI Accuracy & Hallucination Disclosure
Last updated: May 12, 2026
Critical Notice — Read Before Using AI Features
AI language models, including those powering ProSeAI, can and do produce factually incorrect, legally inaccurate, or entirely fabricated information — a phenomenon known as "hallucination." This is a known limitation of all current AI systems, including those used by OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini), and every other major AI provider. Never submit any AI-generated legal document to a court, agency, or opposing party without independent verification by a licensed attorney.
1. What Is AI Hallucination?
AI hallucination refers to the tendency of large language models (LLMs) to generate text that sounds authoritative and plausible but is factually incorrect, legally inaccurate, or entirely fabricated. Unlike a human expert who knows what they do not know, an AI model has no inherent awareness of the boundaries of its knowledge and will produce confident-sounding responses even when the underlying information is wrong.
In the legal context, hallucinations can take several dangerous forms:
- Fabricated case citations — The AI may cite cases that do not exist, or cite real cases with incorrect holdings, dates, or parties. This is the most documented form of legal AI hallucination and has resulted in attorney sanctions in multiple federal courts.
- Incorrect statutory references — The AI may cite statutes with wrong section numbers, outdated versions, or provisions that have been repealed.
- Jurisdiction errors — The AI may apply the law of one state or federal circuit to a case governed by different law.
- Procedural inaccuracies — Filing deadlines, required forms, and court procedures vary by jurisdiction and change frequently. AI-generated procedural guidance may be outdated or incorrect.
- Invented legal standards — The AI may describe legal tests, burdens of proof, or evidentiary standards that do not accurately reflect current law.
2. Documented Real-World Consequences
AI hallucination in legal contexts is not a theoretical risk — it has resulted in documented harm to real litigants and attorneys:
Mata v. Avianca, Inc. (S.D.N.Y. 2023)
Attorneys used ChatGPT to research case law and submitted a brief citing six cases that did not exist. The court imposed a $5,000 sanction and ordered the attorneys to notify their client and each judge falsely identified as the author of a fabricated opinion. The case is now the leading cautionary precedent on AI-assisted legal research.
Park v. Kim (2d Cir. 2024)
An attorney submitted a brief to the Second Circuit Court of Appeals citing a nonexistent, ChatGPT-generated case. The court decided the appeal against her client and referred the attorney to the court's Grievance Panel.
Multiple Pro Se Litigants (2023–2026)
Numerous pro se litigants have had pleadings stricken, motions denied, and cases dismissed after submitting AI-generated documents containing fabricated citations or legally incorrect arguments. Courts have increasingly issued standing orders requiring disclosure of AI use in filings.
3. What ProSeAI Does to Reduce Hallucination Risk
ProSeAI has implemented multiple technical and procedural safeguards to reduce (but not eliminate) hallucination risk:
CourtListener Integration
Case citations generated by the AI are cross-referenced against the CourtListener federal case law database. Citations that cannot be verified are flagged with a warning.
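As a rough illustration of how such a check can work, the sketch below posts document text to CourtListener's citation-lookup endpoint and collects any citation that does not resolve to a known opinion. The endpoint path, response fields, and the flag_unverified_citations helper are assumptions made for illustration, not ProSeAI's actual implementation.

```python
import requests

# CourtListener's citation lookup endpoint (v3 REST API); the exact
# response shape handled below is an assumption for illustration.
LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def flag_unverified_citations(document_text: str, api_token: str) -> list[str]:
    """Return every citation in document_text that CourtListener cannot match."""
    resp = requests.post(
        LOOKUP_URL,
        headers={"Authorization": f"Token {api_token}"},
        data={"text": document_text},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed shape: one entry per citation detected in the text, with an
    # HTTP-style ``status`` (200 = resolved to a real opinion cluster).
    return [
        entry["citation"]
        for entry in resp.json()
        if entry.get("status") != 200 or not entry.get("clusters")
    ]
```

Anything such a check returns would be surfaced to the user with a warning. Note the limits of the technique: an empty list means every detected citation matched a real opinion, not that the AI's characterization of those opinions is correct.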
Jurisdiction-Specific Prompting
The AI system prompts are configured to acknowledge jurisdictional limitations and instruct the model to disclose when it is uncertain about jurisdiction-specific rules.
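A minimal sketch of what such a prompt configuration might look like follows; the wording and the build_system_prompt helper are hypothetical, not the platform's actual prompts.

```python
SYSTEM_PROMPT_TEMPLATE = """\
You are a legal-information assistant. The user's matter is governed by
{jurisdiction} law. Procedural rules, deadlines, and substantive law vary
by jurisdiction. If you are not confident that a statement is true in
{jurisdiction}, say so explicitly instead of generalizing from another
state or circuit. Never invent citations; label any citation you cannot
verify as UNVERIFIED."""

def build_system_prompt(jurisdiction: str) -> str:
    # Hypothetical: the jurisdiction (e.g., "Ohio" or "the Ninth Circuit")
    # is collected during intake and interpolated into every session.
    return SYSTEM_PROMPT_TEMPLATE.format(jurisdiction=jurisdiction)
```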
Mandatory Output Disclaimers
Every AI-generated document includes a footer disclaimer identifying it as AI-generated and requiring attorney verification before use in any legal proceeding.
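Conceptually, this safeguard is an unconditional post-processing step applied after generation. A sketch follows, with illustrative wording rather than the platform's actual disclaimer text.

```python
AI_DISCLAIMER = (
    "----\n"
    "NOTICE: This document was generated by AI and has NOT been verified "
    "by a licensed attorney. Do not file it with any court, agency, or "
    "opposing party until an attorney admitted in your jurisdiction has "
    "confirmed every citation and statement of law."
)

def finalize_document(generated_text: str) -> str:
    # Appending here, after generation completes, means no code path can
    # hand an AI-generated document to the user without the disclaimer.
    return f"{generated_text.rstrip()}\n\n{AI_DISCLAIMER}\n"
```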
High-Risk Query Blocking
Queries involving immigration law and criminal defense — the two highest-risk categories for AI hallucination harm — are blocked from AI processing and redirected to qualified legal resources.
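In outline, the gate runs before any model call: classify the query's topic, and if it falls in a blocked category, return a referral instead of an answer. The sketch below is hypothetical; the classifier, category labels, and referral messages stand in for whatever the platform actually uses.

```python
from typing import Callable, Optional

# Hypothetical referral messages; the platform's real resources may differ.
BLOCKED_TOPICS = {
    "immigration": "Referred to an accredited immigration legal services directory.",
    "criminal_defense": "Referred to the public defender's office for your county.",
}

def screen_query(query: str, classify_topic: Callable[[str], str]) -> Optional[str]:
    """Return a referral message if the query is in a blocked category.

    ``classify_topic`` stands in for the platform's classifier, whether
    keyword rules or a dedicated model; it returns a topic label.
    """
    topic = classify_topic(query)
    return BLOCKED_TOPICS.get(topic)  # None means the query may proceed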
Attorney Review Log
The platform maintains a database of user-reported attorney reviews, allowing users to document when AI-generated content has been independently verified.
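The log is essentially a per-document audit record. A hypothetical schema, for illustration only:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AttorneyReviewRecord:
    """One user-reported attorney review of an AI-generated document.

    Field names are illustrative, not the platform's actual schema.
    """
    document_id: str       # which AI output was reviewed
    reviewed_at: datetime  # when the user logged the review
    attorney_name: str     # as reported by the user
    jurisdiction: str      # where the attorney is admitted to practice
    outcome: str           # e.g., "verified", "corrections required"
    notes: str = ""        # optional detail about what was changed
```

Because entries are user-reported, the log documents that a review was claimed to occur; it does not certify the reviewer's credentials or the quality of the review.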
4. What ProSeAI Cannot Guarantee
- The accuracy, completeness, or currency of any AI-generated legal information
- That any cited case, statute, or regulation actually exists or says what the AI claims
- That AI-generated documents comply with the procedural rules of any specific court
- That AI-generated legal arguments are legally sound or will be accepted by any court
- That the AI's knowledge reflects the current state of the law in any jurisdiction
5. Required Verification Steps Before Any Legal Filing
Before submitting any AI-generated document to a court, agency, or opposing party, you must:
- Verify every cited case exists and says what the AI claims by searching CourtListener, Google Scholar, or Westlaw/LexisNexis
- Verify every cited statute or regulation is current and accurately quoted
- Confirm all filing deadlines, required forms, and procedures with the specific court's local rules
- Have the document reviewed by a licensed attorney admitted to practice in the relevant jurisdiction
- Disclose AI use in your filing if required by the court's standing orders or local rules
6. Reporting Inaccuracies
If you identify an AI-generated inaccuracy, hallucinated citation, or legally incorrect statement produced by ProSeAI, please report it immediately to [email protected]. Your reports help improve the platform and protect other users. Include the specific output, the correct information, and a source for the correct information if available.