Lamont proposes bill to protect kids from AI ‘chatbots,’ create AI development ‘sandbox’

Gov. Ned Lamont has proposed legislation to set new safeguards for artificial intelligence while urging a regional approach to AI oversight, arguing that states should not be barred from acting in the absence of federal standards.

Senate Bill 86, part of Lamont’s 2026 legislative agenda released Thursday, would impose safety requirements on AI companion chatbots and expand state efforts to promote AI development in key industries, including insurance, finance and health services.

The proposal comes after President Donald Trump in December issued an executive order aimed at limiting states’ ability to regulate artificial intelligence.

During an interview with Hartford Business Journal in January, Lamont said he prefers national standards but believes states should not be prohibited from taking action.


“I don’t want 50 states all doing their own thing, and certainly not little Connecticut,” Lamont said, noting that the state represents just about 1% of the U.S. population. “That would discourage some new services from coming here, and it would discourage some developers from being here.”

He added that the federal government should establish some guardrails. “Not heavy-handed, but guardrails,” he said.

It makes little sense for the federal government to step aside while also blocking states from addressing risks tied to rapidly advancing AI technology, he said.

Rather than acting alone, Lamont said Connecticut is in discussions with other states, including California and Massachusetts, to explore a coordinated regional approach if federal action stalls.


“I don’t want Connecticut to do something by itself,” he said. “But let’s see what we should do as a region.”

His legislation highlights growing concerns about AI companion tools, particularly among teenagers. A 2025 survey by digital safety nonprofit Common Sense Media found that 72% of teens have used AI companions at least once. The bill cites reports from across the country of chatbots encouraging self-harm.

Under SB 86, developers of AI companion models would be required to implement protocols to recognize signs of mental health distress and refer users to crisis services when appropriate. The models would also be required to remind users at least every two hours that they are interacting with an artificial intelligence system, not a person.

Lamont said protections for children should be the starting point for any AI guardrails.


“When it comes to protecting people from the excesses of AI or social media, start with the kids,” he said.

The bill also emphasizes workforce development and economic opportunity. It would expand Connecticut’s Open Data Portal by directing state agencies to release AI-ready datasets, while maintaining existing data disclosure laws. The bill also calls for developing an AI regulatory “sandbox” to attract companies to the state.

Chris Davis, vice president of public policy for the Connecticut Business & Industry Association, said an AI regulatory sandbox would encourage innovators in the state to test and develop new products and services that “will help drive economic growth and greater productivity for residents and employers alike.”

Focusing the sandbox on major state economic sectors such as insurance, finance and health care will also “unlock opportunity for these industry clusters to not only grow but become global leaders” in adopting and applying AI, Davis said.

Lamont said AI is also increasingly being used for education and job training.

“AI will be integrated in everything,” he said. “I think it’s going to be a second language for people.”

The bill has been referred to the legislature’s General Law Committee.