CCI Manitoba Chapter News

March 14, 2025 - CCI Manitoba News

Beware of artificial intelligence hallucinations

On Feb. 18, 2025, CBC News reported that a B.C. couple referenced non-existent, AI-generated court rulings in a condo dispute, a tribunal found. The couple were seeking to resolve a dispute with their condo corporation (called a strata in B.C.) and had made a submission to the Civil Resolution Tribunal. Unfortunately, the couple relied on artificial intelligence (A.I.) to help them prepare, and the A.I. created some fake legal precedents. These fakes are often referred to as hallucinations.

Since this item dealt with A.I. hallucinations, I thought I would go to a trusted source for an opinion, in this case OpenAI's ChatGPT 3.5 (https://chat.openai.com), to see if this was a type of fraud.

"An AI hallucination—where an AI generates false or misleading information—can be considered fraud if certain legal conditions are met. However, in most cases, an AI hallucination alone is not automatically fraud.

When Could an AI Hallucination Be Considered Fraud?

Fraud typically requires intent to deceive and resulting harm (financial or otherwise). If AI-generated false information is knowingly used in a deceptive way, it could amount to fraud under various laws.

1. If There Is Intent to Deceive

• If a person or company knowingly uses AI-generated false information to mislead others for personal or financial gain, it could be fraudulent misrepresentation.

• Example: A company uses an AI chatbot to generate fake legal citations and submits them in court, knowing they are false.

2. If It Causes Harm

• If an AI hallucination leads to real-world financial, reputational, or legal harm, it could support a fraud claim.

• Example: A bank uses AI to approve loans, and the AI falsely classifies high-risk applicants as low-risk. If the bank knows about the error but continues, it could be fraud.

When Is It Not Fraud?

No intent to deceive: If an AI hallucination occurs by accident and is corrected once discovered, it is likely not fraud.

No material harm: If the false information does not cause real damage, it may not qualify legally as fraud.

AI autonomy: If the AI acts independently and no human knowingly exploits the falsehood, the responsibility might be unclear.

Legal Precedents & Emerging Cases

• Lawyers have been sanctioned for submitting fake AI-generated legal cases in U.S. courts (Mata v. Avianca case).

• Companies using AI-generated financial reports or job application screenings with errors may face legal action for misrepresentation or negligence.

Final Verdict

Unintentional AI hallucinations are usually not fraud.

Knowingly using or ignoring false AI-generated data can be fraud.

Legal liability depends on whether a person or entity intentionally misuses AI hallucinations."

 

So was it fraud in this case? Nothing in the article suggests the couple knew about the hallucinations, so it probably was not fraud. In any case, March is Fraud Prevention Month. Don't become a victim — be informed. If you are using A.I.-generated information, be sure to double-check it against credible sources, especially if you are pursuing legal action.


Thank You to Our 2024-2025 Sponsors


© 2025 CCI Manitoba Chapter
P.O. Box 48067 Lakewood PO, Winnipeg, MB R2J 4A3
Tel. 204-794-1134 Email ccimanitoba@cci.ca