REGULATION

Canadian Privacy Commissioners Find OpenAI Violated PIPEDA and Provincial Privacy Laws

Priya Sharma · May 7, 2026 · 4 min read
Engine Score 8/10 — Important

Canadian privacy commissioners find OpenAI violated federal and provincial privacy laws

  • Canadian Privacy Commissioner Philippe Dufresne and provincial commissioners in Alberta, Quebec, and British Columbia concluded on May 6, 2026 that OpenAI was “not compliant with” Canadian federal and provincial privacy laws in training its AI models.
  • Cited violations include collection of vast personal information without adequate safeguards, failure to obtain consent, and lack of access/correction/deletion mechanisms for users.
  • OpenAI has retired earlier models that violated Canadian privacy regulation and now uses a filtering tool to detect and mask personal information in training data.
  • OpenAI committed to add a notice to the signed-out version of ChatGPT within 3 months, and to make its data export tools easier to use and lock down retired datasets within 6 months.

What Happened

The Privacy Commissioner of Canada, Philippe Dufresne, found that OpenAI was “not compliant with” Canadian federal and provincial privacy laws in training its AI models, Engadget reported on May 6, 2026. Dufresne and his counterparts in Alberta, Quebec, and British Columbia conducted a joint investigation. Cited statutes include Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), which governs business handling of personal information.

Why It Matters

This is the first formal multi-jurisdictional finding by Canadian privacy authorities that a major frontier AI lab violated privacy law. The decision sits alongside parallel investigations in the EU (under GDPR), Italy’s Garante actions against ChatGPT in 2023, and ongoing scrutiny in other privacy-strict jurisdictions. Canada’s investigation began in 2023 but accelerated after OpenAI’s connection to the February 2026 Tumbler Ridge mass shooting — where OpenAI had reportedly flagged the alleged shooter’s account in 2025 for warnings of real-world violence but failed to escalate to Canadian law enforcement. The combined privacy and safety failures gave Canadian regulators concrete leverage that produced enforceable commitments.

Technical Details

The commissioners identified four primary privacy failures. First, OpenAI “gathered vast amounts of personal information without adequate safeguards” to prevent that information from being used to train its models. Second, the company failed to obtain consent for collecting and using that personal information. Third, ChatGPT users had no way to access, correct, or delete their data. Fourth, OpenAI did little to acknowledge that some ChatGPT responses could be inaccurate.

One specific gap: warnings inside ChatGPT note that interactions may be used for training, but the third-party data OpenAI has purchased or scraped also contains personal details that the people involved are likely unaware of. The commissioners found that these scraped datasets failed to meet PIPEDA’s consent and notice requirements.

The Privacy Commissioner said OpenAI was “open and responsive to the investigation” and has already committed to several ChatGPT changes:

  • Retired earlier models that violated Canadian privacy regulation
  • Now uses a filtering tool to detect and mask personal information (such as names or phone numbers) in publicly accessible internet data and licensed datasets used to train its models
  • Within 3 months: add a notice to the signed-out version of ChatGPT explaining that chats can be used for training and sensitive information shouldn’t be shared
  • Within 6 months: make data export tools easier to understand and use, and better explain how users can challenge the accuracy of information ChatGPT provides; confirm strong protection for future retired datasets so they can’t be used for active development; test protective measures for minor relatives of public figures who are not themselves public figures, ensuring models deny requests to share their name or date of birth
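OpenAI has not published how its filtering tool works, so the details here are assumptions. But the general technique it describes, detecting personal identifiers in raw text and replacing them with placeholders before the text reaches a training pipeline, can be illustrated with a toy sketch. Production systems typically combine trained named-entity recognizers with pattern rules; this minimal example covers only two easy pattern-based types (emails and North American phone numbers):

```python
import re

# Toy patterns for two common PII types. A real filtering pipeline would
# use trained NER models and cover far more categories (names, addresses,
# government IDs, etc.); these two regexes are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII spans with type placeholders before training use."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or 604-555-0137."
print(mask_pii(sample))
# prints: Contact Jane at [EMAIL] or [PHONE].
```

Note that the bare name “Jane” passes through untouched: free-text names are exactly the kind of identifier that needs statistical models rather than regexes, which is part of why the commissioners’ six-month verification window matters.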

Who’s Affected

OpenAI faces operational obligations to implement the commitments within the 3-6 month timelines. The company also faces a precedent that other privacy regulators globally — particularly UK ICO, Australian OAIC, and Brazilian ANPD — may cite when scrutinizing similar issues. Anthropic, Google, Meta, and other AI labs face an empirical anchor for what Canadian privacy law requires of training-data practices, with implications for any model deployed in Canada. Canadian users gain new rights to challenge ChatGPT response accuracy and to access/correct/delete their data within 6 months. Privacy advocates gain a high-profile precedent for AI training-data accountability. Minor relatives of public figures gain explicit protections against models sharing their identifying information.

What’s Next

OpenAI must implement the 3-month signed-out notice change and the 6-month data-export and dataset-protection commitments. The Canadian commissioners will likely publish a follow-up validating compliance. Watch for whether other Canadian regulators or international counterparts cite this decision when issuing comparable directives to other AI labs. The Tumbler Ridge connection, OpenAI’s failure to escalate a flagged account to law enforcement, may also produce separate safety-focused regulatory actions distinct from the privacy track.
