California Governor Gavin Newsom signed an executive order on Monday, March 31, 2026, establishing state-specific AI regulations for companies holding state contracts, creating a regulatory framework distinct from federal guidelines. The executive order mandates that contractors implement safeguards against AI misuse, aiming to prevent the generation of illegal content, the reinforcement of harmful biases, and civil rights violations.
The new regulations specifically require companies to ensure their AI systems do not produce content deemed illegal under California law. Furthermore, contractors must demonstrate active measures to mitigate algorithmic bias, ensuring that AI applications do not perpetuate or amplify existing societal inequalities. This includes a focus on preventing discrimination in areas such as housing, employment, and public services, where AI models could inadvertently lead to disparate outcomes.
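The order does not prescribe how contractors should demonstrate bias mitigation, but one common screening heuristic for disparate outcomes is the "four-fifths rule": compare selection rates across demographic groups and flag a model when the lowest rate falls below 80% of the highest. The sketch below is purely illustrative; the data, group labels, and 0.8 threshold are assumptions, not requirements from the executive order.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.
    The four-fifths heuristic flags ratios below 0.8."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (demographic group, model approved?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # flag if below 0.80
```

A real compliance audit would go well beyond a single ratio (confidence intervals, intersectional groups, outcome severity), but a check of this shape is a plausible first gate in an automated review pipeline.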
To combat misinformation, the executive order stipulates that state agencies utilizing AI-generated images and videos must apply watermarks to clearly identify synthetic media. This measure is intended to enhance transparency and help the public distinguish between authentic and AI-created content. The technical implementation of these watermarks will likely involve digital signatures or embedded metadata, adhering to standards that ensure their persistence and verifiability.
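One way such a disclosure could be made verifiable, consistent with the embedded-metadata approach described above, is to bind a "AI-generated" label to a hash of the media bytes and sign the pair. This is a minimal sketch, not the state's mandated mechanism; the key name, label fields, and use of HMAC are all assumptions (a production system would more likely use public-key signatures and a provenance standard such as C2PA).

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the issuing agency (assumption, not from the order).
AGENCY_KEY = b"example-agency-secret"

def label_synthetic_media(media_bytes: bytes, generator: str) -> dict:
    """Produce a provenance record for AI-generated media: a content hash,
    a disclosure label, and an HMAC signature binding them together."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    record = {"label": "AI-generated", "generator": generator, "sha256": digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(AGENCY_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(media_bytes: bytes, record: dict) -> bool:
    """Check that the media matches the record and the signature is authentic."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("sha256") != hashlib.sha256(media_bytes).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(AGENCY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

image = b"\x89PNG...synthetic image bytes..."
rec = label_synthetic_media(image, generator="example-model")
print(verify_label(image, rec))        # True
print(verify_label(b"tampered", rec))  # False
```

The design choice here is that the label travels with (or alongside) the file and any alteration of the pixels invalidates it, which is the persistence-and-verifiability property the order appears to call for.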
A notable provision within the order addresses potential conflicts with federal directives on supply chain risks. If the U.S. federal government designates a company as a supply chain risk, California will conduct its own independent review of that vendor, giving the state discretion to continue working with the company even after it has been flagged federally. The provision underscores California's intent to maintain autonomy in its procurement decisions, even when federal security concerns have been raised.
This independent review process was prompted, in part, by the Pentagon’s recent designation of Anthropic as a supply chain risk. California’s executive order suggests a mechanism for the state to evaluate such designations on a case-by-case basis, potentially allowing for continued collaboration with companies deemed critical to state operations, provided they meet California’s specific security and ethical standards. The order does not specify the exact criteria for California’s independent review, but it implies a comprehensive assessment beyond federal classifications.
The executive order represents a significant step by California to establish its own governance framework for AI, reflecting a proactive stance on emerging technological challenges. Future developments will likely involve the detailed articulation of compliance metrics and auditing procedures for contractors to demonstrate adherence to these new state-level AI safeguards.