- Tennessee Governor Bill Lee signed SB 1580 on April 1, 2026, prohibiting any AI system from representing itself as a qualified mental health professional.
- The bill passed the Senate 32-0 on February 9 and the House 94-0 on March 16, unanimous in both chambers and among the most one-sided votes on any AI legislation in the US.
- Violations carry a $5,000 civil penalty per instance and are treated as an unfair or deceptive trade practice under Tennessee’s Consumer Protection Act of 1977.
- At least 78 AI chatbot safety bills have been introduced across 27 states, with New York, Ohio, Pennsylvania, Massachusetts, and New Hampshire advancing mental health-specific measures.
What Happened
Tennessee Governor Bill Lee signed Senate Bill 1580 into law on April 1, 2026, making Tennessee one of the first states in the country to specifically prohibit AI systems from advertising or representing themselves as qualified mental health professionals. The law takes effect July 1, 2026.
The bill was sponsored in the Senate by Senator Page Walley, a Republican who holds a Ph.D. in clinical psychology from the University of Georgia and worked as a licensed clinical psychologist before entering politics. The companion House Bill 1470 was sponsored by Representative Hicks. Neither sponsor framed the bill as partisan, and it moved through both chambers without a single opposing vote.
The Senate passed it 32-0 on February 9. The House passed it 94-0 on March 16. According to LegiScan tracking data, the measure amends Tennessee Code Annotated Titles 33, 47, and 63, which govern mental health, commerce and trade, and licensed professions respectively.
Why It Matters
The vote totals reflect a pattern across US state legislatures in early 2026. As of late February, at least 78 chatbot-related bills had been introduced across 27 states, according to the Future of Privacy Forum. Mental health chatbot regulation has emerged as one of the most politically convergent categories in AI policy — drawing support from lawmakers on both sides of the aisle who frame the issue around consumer protection and child safety rather than technology regulation.
Tennessee’s move follows earlier state-level actions. Utah’s HB 452 took effect May 7, 2025, imposing disclosure and data-handling requirements on mental health chatbots. Nevada’s AB 406, effective July 1, 2025, forbids AI from providing mental or behavioral healthcare or claiming it can do so, with fines up to $15,000. Illinois followed in August 2025 with the Wellness and Oversight for Psychological Resources (WOPR) Act, signed by Governor JB Pritzker, which bans AI systems from independently performing or advertising therapy without licensed professional oversight, with fines up to $10,000; KFF Health News reported that Illinois was the third state to enact such a ban.
Technical Details
Tennessee SB 1580 targets the representation layer, not the technology itself. The prohibition applies to any person who develops or deploys an AI system that “advertises or represents to the public that such system is or is able to act as a qualified mental health professional.” The law does not ban AI tools used by licensed practitioners in clinical settings — only systems that hold themselves out to consumers as independent professional providers.
Each violation is subject to a $5,000 civil penalty and is classified as an unfair or deceptive act or practice under Tennessee’s Consumer Protection Act of 1977. The enforcement mechanism runs through existing consumer protection infrastructure rather than a new regulatory body, which lowers the implementation burden on the state.
The bill’s scope covers the development and deployment phases, meaning both the company that builds an AI system and the operator that deploys it to end users could face liability. The law does not define specific technical thresholds for what constitutes “representing” professional status — a detail likely to be tested in enforcement.
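The statute's silence on what counts as "representing" professional status means early compliance will likely lean on manual review of consumer-facing copy. As a minimal sketch only, assuming a hypothetical keyword heuristic (the phrase list below is an illustrative assumption, not anything SB 1580 defines), a first-pass audit of marketing strings might look like this:

```python
import re

# Hypothetical phrase list: wording that could read as a claim of
# professional status under SB 1580. The statute defines no such list;
# this is an illustrative first-pass filter, not a legal test.
FLAGGED_PATTERNS = [
    r"\blicensed (?:therapist|counselor|psychologist)\b",
    r"\bqualified mental health professional\b",
    r"\b(?:provides?|offers?) therapy\b",
    r"\byour (?:AI )?therapist\b",
]

def audit_copy(strings):
    """Return (text, matched_pattern) pairs for human review."""
    hits = []
    for text in strings:
        for pattern in FLAGGED_PATTERNS:
            if re.search(pattern, text, flags=re.IGNORECASE):
                hits.append((text, pattern))
    return hits

# Example: the second string would be flagged for review.
marketing_copy = [
    "Chat with a companion that listens, anytime.",
    "Meet your AI therapist, available 24/7.",
]
for text, pattern in audit_copy(marketing_copy):
    print(f"FLAG: {text!r} (matched {pattern})")
```

Keyword matching cannot settle what a regulator or court will treat as "representing" professional status; a real review would also have to cover in-product behavior, not just marketing strings. The sketch only shows where the undefined standard forces judgment calls.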
Who’s Affected
AI companion platforms are the most directly exposed category. Replika, operated by Luka, Inc., markets itself as an AI companion that users frequently turn to for emotional support — a Stanford study found that nearly a quarter of Replika users reported using it for mental health support. Character.AI, which is settling multiple lawsuits alleging its chatbots contributed to teen suicides — including the death of 14-year-old Sewell Setzer III in Florida — has faced particular scrutiny from state and federal regulators. Common Sense Media testing found it was easy to elicit conversations about self-harm, violence, and drug use from both Character.AI and Replika bots.
The child safety angle has driven much of the legislative momentum. Senators demanded information from Character.AI and Replika in 2025 following lawsuits from multiple families whose teenagers were harmed after extended chatbot interactions. The Federal Trade Commission launched an investigation into seven tech companies over AI chatbots’ potential harm to teens. Replika’s founder, Eugenia Kuyda, stated publicly in February 2026 that she does not believe in regulation, according to MindSite News — a position that places Luka directly at odds with the legislative direction in multiple states.
What’s Next
Several states are advancing similar bills in the current 2026 session. New York’s S 7263 has reached its third reading on the Senate floor and would impose liability for damages caused by chatbots impersonating licensed professionals, including mental health workers, according to the New York State Senate. Ohio state Representative Christine Cockley introduced a bipartisan bill to prevent the creation of AI models that encourage self-harm. Pennsylvania has HB 2006 and HB 2100 moving in committee, targeting companion chatbot safeguards and mental health AI disclosures respectively. Massachusetts and New Hampshire have introduced bills that mirror the structure of Illinois’s WOPR Act.
Tennessee’s law takes effect July 1, 2026. Companies operating AI companion or wellness products in Tennessee have approximately three months to audit how their products represent themselves to users, update consumer-facing language, and confirm that no product features or marketing materials claim the capacity to provide professional mental health services. The $5,000-per-violation civil penalty structure means that scale — not individual incidents — is the primary enforcement risk for large platforms with high user volumes.
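Because the penalty attaches per violation, exposure compounds linearly with volume. A back-of-the-envelope illustration (the violation counts below are hypothetical, and whether each user-facing instance counts as a separate violation is one of the open questions the law leaves undefined):

```python
PENALTY_PER_VIOLATION = 5_000  # USD, per SB 1580

# Hypothetical counts, chosen only to show how exposure scales if each
# prohibited representation were treated as a separate violation.
for violations in (1, 100, 10_000):
    exposure = violations * PENALTY_PER_VIOLATION
    print(f"{violations:>6,} violations -> ${exposure:,}")
# Prints:
#      1 violations -> $5,000
#    100 violations -> $500,000
# 10,000 violations -> $50,000,000
```

At companion-app scale, even a small share of sessions triggering the prohibition would dwarf the cost of a pre-launch copy audit.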
