- The EU AI Act’s prohibited practices under Article 5 have been enforceable since February 2, 2025, with fines up to 35 million euros or 7% of global annual turnover, whichever is higher
- Bans cover social scoring, manipulative AI, untargeted facial recognition scraping, emotion recognition in workplaces and schools, and predictive policing of individuals
- The next major deadline is August 2, 2026, when full compliance requirements for high-risk AI systems take effect and the Commission gains enforcement powers over general-purpose AI models, whose obligations have applied since August 2, 2025
- The Act’s extraterritorial reach applies to any AI system whose output is used in the EU, regardless of where the provider is headquartered
What Happened
The European Union’s AI Act, the most comprehensive AI regulation in the world, has entered its most consequential enforcement phase. Since February 2, 2025, the Act’s prohibited practices under Article 5 have been fully enforceable, with penalties already active. The next major milestone arrives on August 2, 2026, when the full framework for high-risk AI systems takes effect and the Commission’s enforcement powers over general-purpose AI models begin.
The regulation entered into force on August 1, 2024, with a phased implementation timeline designed to give organizations time to adapt. That grace period for prohibited practices has expired: organizations deploying banned AI systems now face the Act’s highest penalty tier. The Article 4 AI literacy requirement, obliging organizations deploying any AI system to ensure their staff have sufficient AI literacy, has also applied since February 2, 2025. On August 2, 2025, obligations for general-purpose AI model providers took effect, and member states were required to have designated national competent authorities to oversee enforcement.
Why It Matters
The AI Act establishes the first binding legal framework that categorically bans specific AI applications. Unlike voluntary guidelines or industry self-regulation, violations carry financial penalties calibrated to make non-compliance materially painful for even the largest technology companies. A company with 10 billion euros in annual revenue could face fines up to 700 million euros for deploying a prohibited AI system.
The Act’s extraterritorial reach is its most significant structural feature. Any AI system whose output is used within the EU falls under its jurisdiction, regardless of where the provider is headquartered. This means American and Chinese AI companies serving European customers must comply with the same rules as EU-based competitors.
Technical Details
The prohibited practices under Article 5 include six categories of banned AI systems. These are: AI systems that manipulate people’s decisions or exploit their vulnerabilities through subliminal techniques; social scoring systems, whether operated by public or private actors, that evaluate individuals based on social behavior or personal traits; systems that assess individuals’ risk of committing criminal offenses based solely on profiling; untargeted scraping of facial images from the internet or CCTV footage to build recognition databases; emotion recognition systems deployed in workplaces and educational institutions; and biometric categorization systems that classify people by sensitive attributes such as race, political beliefs, or sexual orientation.
The penalty structure operates on three tiers. Prohibited-practice violations carry fines of up to 35 million euros or 7 percent of global annual turnover, whichever is higher. Other breaches face fines of up to 15 million euros or 3 percent. Supplying misleading information to authorities triggers fines of up to 7.5 million euros or 1 percent. Article 5 also restricts real-time remote biometric identification in publicly accessible spaces by law enforcement, with narrow exceptions for purposes such as searching for missing persons and preventing specific terrorist threats.
Who’s Affected
Every major AI company operating in Europe is actively preparing for August 2026 compliance. OpenAI, Google, Meta, Microsoft, and Anthropic have all initiated compliance programs to meet the high-risk AI system requirements, which include conformity assessments, risk management systems, data governance protocols, technical documentation, human oversight mechanisms, and registration in the EU database.
General-purpose AI model providers have faced additional obligations since August 2, 2025, including mandatory model evaluations, systemic risk assessments, and incident reporting for models posing systemic risk; the Commission’s powers to enforce those obligations begin on August 2, 2026. Member states must designate national competent authorities to oversee enforcement, and the EU’s AI Office coordinates cross-border cases.
What’s Next
The August 2, 2026 deadline is four months away and represents the Act’s full activation. From that date, providers and deployers of high-risk AI systems in healthcare, critical infrastructure, law enforcement, and education must demonstrate compliance across the entire framework. The practical test will be whether national authorities have the technical capacity and political will to enforce the rules against well-resourced companies that challenge fines or dispute risk classifications.
Several implementation questions remain unresolved. The European Commission is still finalizing guidelines for how general-purpose AI model providers should conduct systemic risk assessments and what constitutes adequate transparency documentation. Companies preparing for compliance have noted that some requirements remain ambiguous enough to create divergent interpretations across member states. The first enforcement actions after August 2 will set important precedents for how strictly the rules are applied in practice.