- The European Commission, European Parliament, and Council of the EU agreed on May 7, 2026 to simplified rules under the “Digital Omnibus on AI” — pushing most high-risk AI Act provisions back significantly.
- High-risk AI rules in biometrics, critical infrastructure, education, and migration are delayed to December 2027; AI in products like “lifts” and toys to August 2028.
- Article 50 labeling obligations — deepfakes and certain AI-generated text — still kick in August 2, 2026; the text-labeling rule applies only to fully automated content with no human review.
- Sexually explicit non-consensual AI content (including “nudification” apps) is now explicitly banned.
- SMEs with up to 750 employees and €150M revenue get reduced registration/documentation requirements and better access to regulatory sandboxes.
What Happened
The European Commission, European Parliament, and Council of the EU agreed on simplified rules for artificial intelligence on May 7, 2026, based on the “Digital Omnibus on AI” legislative package, which bundles several amendments to the existing AI Act. The Commission is calling the result “innovation-friendly,” but critics, including IT lawyer Joerg Heidrich (Heise legal counsel), note that pushing back the high-risk deadlines was inevitable: the original August 2026 timeline was practically impossible to meet.
Why It Matters
The EU AI Act has been the world’s most comprehensive AI regulatory framework since its passage in 2024. The Digital Omnibus delay reverses the regulatory trajectory at a critical moment: most major US frontier-AI labs were preparing for August 2026 high-risk compliance, and the new delay provides 16-24 additional months for both regulators and industry to align on practical implementation. The ban on sexually explicit non-consensual AI content (specifically including “nudification” apps) is the most concrete enforcement provision moving forward, while the SME relief provisions extend a clear competitive advantage to European AI startups. The deal also signals that EU regulatory complexity is being rationalized rather than expanded — a policy direction U.S. and Asian regulators are watching closely.
Technical Details
The Digital Omnibus delays specific high-risk AI categories on the following timelines:
- December 2027: AI systems in biometrics, critical infrastructure, education, and migration
- August 2028: AI in products like “lifts” (elevators) or toys
- August 2, 2026 (unchanged): Article 50 labeling obligations for deepfakes and certain AI-generated text
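The revised timelines above can be captured as a simple schedule lookup. This is only an illustrative sketch: the category keys are hypothetical, and where the source gives only a month, day 1 is used as a placeholder.

```python
from datetime import date

# Revised compliance schedule as described above. Category keys are
# hypothetical; day 1 is a placeholder where the source gives only a month.
AI_ACT_DEADLINES = {
    "biometrics": date(2027, 12, 1),
    "critical_infrastructure": date(2027, 12, 1),
    "education": date(2027, 12, 1),
    "migration": date(2027, 12, 1),
    "regulated_products": date(2028, 8, 1),   # e.g. lifts, toys
    "article_50_labeling": date(2026, 8, 2),  # unchanged
}

def months_until(deadline: date, start: date) -> int:
    """Whole calendar months between start and deadline (rough, ignores days)."""
    return (deadline.year - start.year) * 12 + (deadline.month - start.month)

# Extra runway relative to the original August 2026 deadline:
print(months_until(AI_ACT_DEADLINES["biometrics"], date(2026, 8, 2)))          # 16
print(months_until(AI_ACT_DEADLINES["regulated_products"], date(2026, 8, 2)))  # 24
```

The 16- and 24-month gaps match the "16-24 additional months" of runway noted above.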
The Article 50 text-labeling rule has a critical narrowing: it applies only to fully automated content that no human has reviewed or edited. Heidrich notes the real-world impact will likely be limited because most published AI-generated text passes through some human review. Companies will still need to label deepfakes prominently regardless of editing.
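The narrowed rule can be sketched as a small decision function. This is one illustrative reading of the obligations as described above, not legal advice; the `ContentItem` fields and function name are hypothetical.

```python
from dataclasses import dataclass

# Illustrative reading of the narrowed Article 50 rule, not legal advice.
# The ContentItem fields and the label_required name are hypothetical.
@dataclass
class ContentItem:
    is_deepfake: bool           # synthetic media depicting real people or events
    is_ai_generated_text: bool
    human_reviewed: bool        # a human reviewed or edited the output before publication

def label_required(item: ContentItem) -> bool:
    """True if an Article 50 label is needed under the rules as described."""
    if item.is_deepfake:
        return True             # deepfakes must be labeled regardless of editing
    if item.is_ai_generated_text and not item.human_reviewed:
        return True             # text labeling applies only to fully automated output
    return False

# Human-edited AI text escapes the labeling duty; unreviewed text does not:
print(label_required(ContentItem(False, True, True)))   # False
print(label_required(ContentItem(False, True, False)))  # True
```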
The SME relief track: small and medium-sized enterprises with up to 750 employees and €150 million in revenue get reduced registration and documentation requirements, plus better access to regulatory sandboxes — test environments where companies can try out AI under real-world conditions. The 750-employee/€150M threshold is notably more generous than the standard EU SME definition (250 employees / €50M), capturing a much larger range of European AI startups.
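How much wider the relief band is than the standard SME definition can be seen in a quick sketch. The numbers come from the thresholds above, but the actual legal test may involve additional criteria (e.g. balance-sheet totals), and all names here are illustrative.

```python
# Hedged sketch comparing the Digital Omnibus relief threshold described above
# with the standard EU SME definition. The real legal test may involve further
# criteria (e.g. balance-sheet totals); names are illustrative only.
OMNIBUS_RELIEF = {"max_employees": 750, "max_revenue_eur": 150_000_000}
STANDARD_EU_SME = {"max_employees": 250, "max_revenue_eur": 50_000_000}

def qualifies(employees: int, revenue_eur: int, threshold: dict) -> bool:
    return (employees <= threshold["max_employees"]
            and revenue_eur <= threshold["max_revenue_eur"])

# A 400-person, €90M company misses the standard SME definition
# but falls inside the Omnibus relief band:
print(qualifies(400, 90_000_000, STANDARD_EU_SME))  # False
print(qualifies(400, 90_000_000, OMNIBUS_RELIEF))   # True
```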
The new explicit ban: AI systems that generate sexually explicit content without consent — including “nudification” apps — are now explicitly prohibited. This addresses a category that grew rapidly through 2024-2026 and has driven multiple high-profile prosecutions in EU member states.
Procedural status: the proposal was introduced in November 2025 as part of the EU’s simplification agenda. Parliament and the Council still need to formally sign off on the agreement. The stated goal is boosting Europe’s competitiveness while maintaining citizen protection.
Who’s Affected
- European AI startups under the 750-employee / €150M threshold gain meaningful operational relief and a competitive advantage over non-EU rivals that don’t qualify.
- Major US frontier-AI labs (OpenAI, Anthropic, Google DeepMind, Microsoft) gain time to prepare high-risk compliance properly.
- AI companies in biometrics (e.g., NEC, Idemia), critical infrastructure (Siemens, Schneider Electric AI products), education (Khan Academy, Duolingo, etc.), and migration (border-control vendors) gain extended runway.
- “Nudification” app operators face an explicit ban and must exit the EU market.
Heidrich frames the delay as practically necessary, which suggests compliance teams broadly welcome the change despite the signal it sends about regulatory direction.
What’s Next
Formal sign-off by Parliament and the Council is the immediate procedural gate; since both bodies have already negotiated the Digital Omnibus, approval is expected to be a formality. The Commission’s forthcoming guidance on implementing the delayed provisions is the next material follow-on. Watch for U.S. policy reactions: the White House is reportedly considering a parallel federal AI review framework (covered earlier this week), and the EU’s relative softening of timelines may shift U.S. comparative-analysis arguments. The August 2, 2026 Article 50 deadline remains the next hard compliance moment for AI deployers in EU markets, particularly for products generating deepfakes or fully automated text.