(The Center Square) – The Colorado Senate on Friday approved guardrails around artificial intelligence in healthcare and with minors, as well as a near-total ban of AI in therapy.
Three bills, which all passed on their second reading, came amid widespread fears surrounding the rapid and largely unregulated AI boom, and in defiance of an executive order by President Donald Trump telling states to keep AI unregulated.
“It is important that we’re capturing and doing justice, not only for the family, but again creating very powerful policy that will be the strongest in the country,” Sen. Iman Jodeh, D-Arapahoe County, said on the Senate floor about House Bill 26-1263, the minors protection bill she sponsored. “I want Colorado to continue to set that bar for the country and make sure that kids will not be harmed.”
The three AI bills were all approved by a voice vote, although not unanimously. They will all go to one final Senate vote before being sent to Democratic Gov. Jared Polis’ desk to potentially be signed into law.
Colorado’s legislation aims to address several distinct concerns about AI use. The Psychotherapy Artificial Intelligence Restrictions bill, HB26-1195, would bar AI models from engaging in “therapeutic communications” with clients without real-time professional oversight and from generating any sort of therapeutic recommendations or treatment plans. It would still allow AI agents to provide therapists with supplemental or administrative help.
The therapy bill comes after a series of deaths by suicide of people who had confided in AI chatbots. In one case described in testimony before Congress, a 16-year-old boy had told ChatGPT about his suicidal thoughts. In response, the AI model told him not to tell his parents and offered to write his suicide note, according to the testimony.
“What this bill comes down to is very simple. If you are in the state of Colorado receiving therapy or counseling, that that is done with a licensed professional,” said HB26-1195 sponsor Sen. Kyle Mullica, D-Adams County. “A chatbot or artificial intelligence isn’t providing that counseling or that therapy … I think it’s a commonsense bill.”
The Conversational Artificial Intelligence Service Operator Requirements bill, HB26-1263, would set up additional guardrails for minors who use chatbot AI models, such as ChatGPT. The bill would require AI companies to take “technically feasible measures” to prevent models from generating sexually explicit content that could foster emotional dependence on the model.
Like the therapy bill, the minors bill follows troubling reports: a 2025 survey by the Center for Democracy and Technology found that one in five high schoolers in the U.S. knew somebody who had been in a romantic relationship with an AI chatbot, or had been in one themselves. Forty-two percent of the students said they knew somebody who had used AI for companionship, or had used it themselves for friendship.
Colorado’s minors bill would also require AI companies to create a protocol for responding to potential suicide risk and would prohibit rewards or points on minors’ accounts that might encourage engagement. Violations of any measure in the bill would be punishable by a $5,000 penalty.
Mullica said he spoke with a neighbor whose young daughter, Juliana, died by suicide after being sexually exploited by a chatbot and read the neighbor’s emotional testimony on the Senate floor.
“She’ll never be able to say hi to Juliana. She won’t be able to walk her down the aisle, experience life with her – and I can’t help but think of what that means and try to put myself into her shoes,” the senator said.
The Use of Artificial Intelligence in Healthcare bill, HB26-1139, would ban AI used by healthcare-related companies or groups from making decisions based solely on group data. Instead, the models would have to base healthcare decisions on medical or clinical history, the individual patient’s circumstances or other factors.
AI models would be allowed to make expedited approvals in healthcare under the bill, but denials or delays of coverage would have to be reviewed by a professional. Any use of AI models in healthcare would have to be reported to the government.
The healthcare bill acknowledges the value of using AI in healthcare but warns that overreliance risks amplifying bias that already exists in the system. The bill names class, ethnicity and immigration status as likely sources of bias, and adds that “[AI] must not replace human judgment.”
“This bill ensures that AI can support healthcare, but it can’t replace the human judgement, accountability and compassion that patients deserve,” said bill sponsor Sen. Lindsey Daugherty, D-Adams and Jefferson counties, at Friday’s hearing.
The three bills will go to their third and final Senate vote before the regular legislative session ends on May 13, despite a December 2025 executive order by Trump that threatened lawsuits against states that regulate AI. The executive order stressed that AI should be expanded, not regulated, because “We remain in the earliest days of this technological revolution and are in a race with adversaries for supremacy within it.”
Trump argued state laws would create a patchwork of AI regulations and demanded the country wait until Congress creates a “minimally burdensome national standard.”
But Congress has not created any national standard for AI regulation in the months since. The White House released a four-page national policy framework in April that called for Congress to act generally on seven areas of AI-related policy, but the framework is not legally binding and does not govern how AI companies act.