
7 States Are Banning Dangerous AI Chatbots This Month — Red and Blue Agree on This for the First Time

MegaOne AI · Apr 3, 2026 · 8 min read
Engine Score 7/10 — Important

Key Takeaways

  • Tennessee has already signed SB 1580 into law, banning AI from posing as a licensed mental health professional — passed 94-0 in the House.
  • Georgia, Nebraska, Arizona, Michigan, Alabama, and the earlier Oregon and Washington bills form a wave of bipartisan state action targeting AI companion apps, therapy bots, and products that encourage self-harm in minors.
  • The Future of Privacy Forum is now tracking 98 chatbot-specific bills across 34 states; 53% were introduced by Democrats and 46% by Republicans.
  • Character.AI and Google agreed in January 2026 to settle multiple lawsuits tied to teen suicides, accelerating the legislative response across party lines.

What Happened

In the span of a few weeks in March and April 2026, AI chatbot safety legislation moved from the fringe to the fast track on both sides of the political aisle. Tennessee, Nebraska, Arizona, Michigan, Georgia, and Alabama all advanced or signed bills, following Oregon and Washington weeks earlier, targeting the same cluster of harms: AI systems that impersonate therapists, companion apps that cultivate emotional dependency in minors, and chatbots that fail to intervene when users express suicidal ideation.

The clearest data point on bipartisan consensus came from Nashville. Tennessee’s SB 1580, which prohibits any AI system from representing itself as a qualified mental health professional, cleared the state House 94-0 on March 16 after a 32-0 Senate vote on February 9. Governor Bill Lee signed the bill shortly after. The vote total left no room for a partisan reading.

Oregon moved first at the state level. Governor Tina Kotek signed SB 1546, covering AI companion apps, in late March. The bill cleared both chambers near-unanimously on March 5. Washington followed with HB 2225, signed by Governor Bob Ferguson, which takes effect January 1, 2027. Both include transparency requirements and crisis-detection protocols tied to the 988 Suicide and Crisis Lifeline.

Why It Matters

The legislative wave traces directly to a series of high-profile deaths. Character.AI has been linked to the 2024 suicide of a 14-year-old Florida boy and the 2025 suicide of a 13-year-old Colorado girl, both of whom had prolonged exposure to the platform before they died. On January 7, 2026, Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google agreed to settle multiple wrongful death lawsuits, including the Florida case brought by Megan Garcia, along with four additional cases in New York, Colorado, and Texas. The settlement received wide coverage and accelerated legislative timelines that had already been forming.

In January 2026, Kentucky Attorney General Russell Coleman announced the first state lawsuit against an AI chatbot company, alleging that Character Technologies broke Kentucky law by prioritizing engagement over child safety. That action signaled that some states were prepared to move through litigation even without new statutes on the books.

The Future of Privacy Forum’s 2026 Chatbot Legislation Tracker now covers 98 bills across 34 states and three federal proposals. The Forum’s analysis found the distribution nearly even: 53% introduced by Democrats, 46% by Republicans. “Chatbot safety has emerged as a rare bipartisan issue,” the Forum noted in its Chatbot Moment briefing.

State-by-State Status

Tennessee (SB 1580) — Signed into law. The bill, signed by Governor Bill Lee, prohibits deployment of any AI system that represents itself as a qualified mental health professional. The prohibition covers development, deployment, and public advertising. Effective July 1, 2026. A companion bill, SB 1493, making it a felony to knowingly train AI to encourage suicide or criminal homicide, was also recommended for passage by committee.

Oregon (SB 1546) — Signed into law. Covers AI companion apps that simulate sustained human-like relationships and retain contextual information across sessions. Requires hourly reminders to minors that they are speaking with AI, prohibits chatbots from misrepresenting themselves to minors, and mandates safety protocols when self-harm or suicidal ideation is detected. Includes a private right of action with statutory damages of $1,000 per violation. Effective January 1, 2027. Analyzed in detail by Baker Botts as the first chatbot law with structural enforcement teeth.

Washington (HB 2225) — Signed into law. Requires transparency disclosures, crisis detection protocols with referrals to the 988 Suicide and Crisis Lifeline and a Youthline for users under 25, and prohibits manipulative design practices targeting minors. Effective January 1, 2027. Covered by Troutman Pepper Locke.

Nebraska (LB 1185 attached to LB 525) — Cleared for passage. Senator Eliot Bostar’s AI chatbot safety bill was attached to the popular Agricultural Data Privacy Act and advanced to select file on March 24 in a 35-0 vote. The combined bill requires operators to disclose AI identity at the start of each session and every three hours for minors during continuous interactions. Prohibits chatbots from presenting as human, fostering romantic or emotional dependency in minors, providing mental health therapy without licensed supervision, or failing to refer to crisis services when self-harm is raised. Nebraska’s legislature is scheduled to adjourn April 17. Reported by the Unicameral Update.

Arizona (HB 2311) — Passed House, advancing in Senate. Sponsored by Rep. Tony Rivero, a Peoria Republican, the bill passed the full House 43-13 on February 24 and received its second Senate reading on March 9. It requires disclosure at session start and every three hours for minors, prohibits sexual content and simulated romantic relationships with minors, bans engagement-reward systems for minors, and requires crisis resource referrals. The bill is structurally similar to Oregon’s SB 1546 and has received do-pass votes from both Senate caucuses. Tracked by LegiList.

Georgia (SB 540 and SB 594) — On governor’s desk. SB 540 passed the House on March 25 and the Senate voted to concur in the amendment the same week. The bill requires companion chatbots to disclose their AI nature every three hours for adults and every hour for minors, restricts manipulative and sexualized behavior toward minors, requires parental tools, and mandates suicide/self-harm response protocols. The governor has until the legislature’s April 6 adjournment to sign or veto. Reported by the Atlanta Journal-Constitution. SB 594, a companion measure covering AI in insurance coverage determinations, also reached the governor’s desk.

Michigan (SB 760, the LEAD for Kids Act) — On third reading, passage recommended as of March 25. Michigan’s SB 760 would prohibit chatbot operators from offering products to minors unless the product is incapable of encouraging self-harm, suicidal ideation, violence, drug or alcohol use, or disordered eating. It bars mental health therapy by AI without licensed supervision and prohibits AI from discouraging minors from seeking help from parents or professionals. The bill also creates a private right of action, including punitive damages. Opposed by NetChoice. Tracked at the Michigan Legislature.

Alabama (HB 324, HB 325) — Stalled in committee. Two bills introduced January 22 — HB 324 requiring age verification and safeguard protocols for chatbots, and HB 325 classifying failure to notify users of AI interaction as an unfair or deceptive trade practice — remain in the House Judiciary Committee. Alabama’s session was scheduled to adjourn March 27. The legislature did pass JR 51, creating an AI and Children’s Internet Safety Study Commission, which may set the stage for future legislation. Tracked on LegiScan.

Who Is Affected

The bills most directly target consumer-facing AI companion and social apps. Products like Character.AI, Replika, and similar platforms that maintain persistent relationship dynamics with users sit at the center of every bill’s definition of a covered operator. AI customer service bots, enterprise tools, and coding assistants are generally out of scope, though operators should examine each state’s exact definitions.

Healthcare AI products face specific exposure. Tennessee’s SB 1580 explicitly covers any AI that advertises itself as capable of performing mental health therapy, making clear that a wellness chatbot claiming clinical capability crosses the line. Nebraska’s LB 1185 prohibits services from claiming to be designed to provide professional mental or behavioral health care without licensed supervision.

App stores and distribution platforms are not directly regulated under the current bills, but several impose liability on any entity that “makes available” a covered AI system, which could implicate distribution intermediaries depending on how courts interpret the language.

The federal picture remains unsettled. President Donald Trump signed an executive order in December 2025 threatening to withhold federal broadband funds from states enacting “onerous and excessive” AI laws, a dynamic noted by the Atlanta Journal-Constitution as a complicating factor for governors weighing signature decisions. None of the bills passed so far have been withdrawn in response.

Compliance Checklist for AI Companies

The following steps apply to any operator of a consumer-facing conversational AI product that may be used by minors or that simulates a sustained relationship with users.

  • Audit your user base. Determine which states your users are in and whether you can identify minor users through age verification or account data. Oregon, Washington, Tennessee, and the pending Nebraska bill all impose stricter obligations for minors.
  • Add session-level AI disclosure. Every bill in this wave requires a clear statement at session start that the user is interacting with AI. For minors, most bills require the disclosure to repeat at least every one to three hours during continuous sessions.
  • Remove or wall off mental health therapy claims. Tennessee’s SB 1580 is now law. Any product, marketing material, or in-app copy representing the AI as a qualified mental health professional must be removed before July 1, 2026.
  • Implement a crisis detection and referral protocol. Oregon, Washington, Nebraska, Arizona, Georgia, and Michigan all require chatbots to detect signals of suicidal ideation or self-harm and respond with crisis resource referrals including 988 and, for users under 25 in Washington, Youthline.
  • Disable romantic and emotional dependency features for minor accounts. Multiple bills prohibit chatbots from engaging in flirtation, simulating romantic relationships, or fostering emotional dependency with minors. This includes role-play scenarios involving adult-minor dynamics.
  • Remove engagement rewards for minors. Arizona’s HB 2311 explicitly prohibits offering points or reward systems to minor users that encourage increased interaction with the AI.
  • Review privacy and parental control tooling. Georgia’s SB 540 and Washington’s HB 2225 both require operators to provide privacy tools accessible to parents or guardians of minor users.
  • Prepare for private litigation exposure. Oregon’s SB 1546, Washington’s HB 2225, and Michigan’s pending SB 760 all include private rights of action. Oregon allows $1,000 in statutory damages per violation. Michigan’s bill allows punitive damages.

The FPF tracker is updated weekly and covers bills from introduction through enactment. The Transparency Coalition maintains parallel coverage. Companies operating in multiple states should monitor both, as bill text, covered entities, and enforcement mechanisms vary materially across jurisdictions.


MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
