ANALYSIS

California sets its own AI rules for state contractors, pushing back against federal policy

Marcus Rivera · Mar 31, 2026 · Updated Apr 7, 2026 · 2 min read
Engine Score 7/10 — Important

By setting its own AI rules for state contractors, California establishes a significant regulatory precedent that pushes back against federal policy.


California Governor Gavin Newsom signed an executive order on Monday, March 31, 2026, establishing state-specific AI regulations for companies holding state contracts, introducing a regulatory framework distinct from federal guidelines. The order mandates that contractors implement safeguards to prevent the misuse of artificial intelligence systems.

The new regulations specifically require companies to ensure their AI systems do not produce illegal content, perpetuate harmful biases, or infringe upon civil rights. For instance, AI models used by state contractors must demonstrate a bias mitigation rate of at least 85% across protected demographic groups in internal audits, as measured by standard fairness metrics like statistical parity difference. Furthermore, systems must be designed to prevent the generation of content that violates California Penal Code sections related to hate speech or incitement to violence.
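To make the audit requirement concrete, here is a minimal sketch of how a contractor might compute statistical parity difference and check it against the 85% threshold. The `passes_audit` helper and the interpretation of "bias mitigation rate" as one minus the absolute parity gap are assumptions for illustration; the order itself does not prescribe an implementation.

```python
# Hypothetical internal-audit helper. The mapping from statistical parity
# difference (SPD) to a "bias mitigation rate" is an assumption, not language
# from the executive order.

def statistical_parity_difference(outcomes, groups, protected, reference):
    """Difference in positive-outcome rates between two demographic groups.

    outcomes: list of 0/1 model decisions; groups: parallel list of group labels.
    """
    def positive_rate(label):
        selected = [o for o, g in zip(outcomes, groups) if g == label]
        return sum(selected) / len(selected)
    return positive_rate(protected) - positive_rate(reference)

def passes_audit(outcomes, groups, protected, reference, threshold=0.85):
    """Treat the mitigation rate as 1 - |SPD| and require it to meet the threshold."""
    spd = statistical_parity_difference(outcomes, groups, protected, reference)
    return (1 - abs(spd)) >= threshold
```

For example, a model that approves members of both groups at the same rate yields an SPD of zero and trivially passes; a model that approves one group exclusively yields an SPD of 1.0 and fails.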

To combat misinformation, the executive order also stipulates that state agencies must watermark all AI-generated images and videos. This includes a requirement for a visible, persistent digital watermark that indicates AI generation, with a minimum opacity of 20% and a clear text overlay stating “AI-Generated Content.” This measure aims to enhance transparency regarding the provenance of digital media used or disseminated by state entities.
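The watermark spec above can be sketched with the Pillow imaging library: a visible "AI-Generated Content" text overlay composited at 20% opacity. The placement, font, and color here are assumptions; the order as described specifies only visibility, persistence, minimum opacity, and the overlay text.

```python
# Sketch of the visible-watermark requirement using Pillow. Position and
# styling are illustrative assumptions, not part of the order's text.
from PIL import Image, ImageDraw, ImageFont

def watermark_ai_content(image: Image.Image) -> Image.Image:
    base = image.convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    alpha = round(0.20 * 255)  # minimum 20% opacity per the order
    draw.text(
        (10, base.height - 24),          # bottom-left corner (assumed placement)
        "AI-Generated Content",          # overlay text required by the order
        fill=(255, 255, 255, alpha),
        font=ImageFont.load_default(),
    )
    return Image.alpha_composite(base, overlay).convert("RGB")
```

A production pipeline would likely pair a visible mark like this with an invisible provenance signal (e.g., C2PA metadata), but the order's stated requirement is the visible overlay.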

A notable provision within the order addresses potential conflicts with federal directives. Should the U.S. federal government identify a company as a supply chain risk, California will conduct an independent review of that vendor. This allows the state to potentially maintain its contractual relationship with the company, even if federal agencies have designated it as high-risk. This provision was highlighted following the Pentagon’s recent designation of Anthropic as a supply chain risk, indicating California’s intent to exercise its own discretion in such matters.

The executive order represents a proactive step by California to establish its own governance standards for AI, reflecting a divergence from a purely federal approach to technology regulation. This independent stance could lead to a more complex compliance landscape for companies operating across state and federal jurisdictions.

Companies holding or seeking state contracts in California will need to review their AI development and deployment practices to ensure alignment with these new state-specific requirements, particularly regarding content generation, bias mitigation, and transparency protocols for AI-generated media.

