OpenAI and Microsoft team up with state law enforcers on AI safety task force
By Clare Duffy, CNN
New York (CNN) — Artificial intelligence is playing a greater role in everything from homework to jobs and even romantic companionship — all without much oversight to guarantee the tech is being developed and used safely.
Now, a pair of state attorneys general are teaming up with two of the biggest companies in tech to change that.
North Carolina Attorney General Jeff Jackson, a Democrat, and Utah Attorney General Derek Brown, a Republican, announced on Monday the formation of the AI Task Force. OpenAI and Microsoft have already signed on to the effort, and the attorneys general expect other state regulators and AI companies to join, too. The group will work to develop “basic safeguards” that AI developers should implement to prevent harm to users, especially children, and to identify new risks as the technology develops.
There is no overarching federal law regulating AI, and some federal lawmakers have even sought to restrict regulation of the technology. Jackson and Brown were among the 40 attorneys general who earlier this year successfully pushed to strip an AI regulation moratorium, which could have blocked enforcement of state AI laws for a decade, from Republicans’ sweeping tax and spending cuts package. (One federal AI law did pass this year: the Take It Down Act, which specifically cracks down on non-consensual deepfake pornography.)
Concerns about AI safety risks have only escalated in recent months, amid a growing string of reports about the technology causing delusions or contributing to self-harm among users. Companies like OpenAI and Facebook-parent Meta have also been scrambling to block young people from accessing adult content.
Jackson said he’s not hopeful Congress will move quickly to regulate AI.
“They did nothing with respect to social media, nothing with respect to internet privacy, not even for kids, and they came very close to moving in the wrong direction on AI by handcuffing states from doing anything real,” Jackson told CNN in an exclusive interview ahead of the task force announcement.
Some of the leading AI companies have begun to diverge in their approaches to safety. For example, OpenAI CEO Sam Altman said last month that the company’s investments in child safety protections would enable it to “treat adults like adults,” including allowing verified adult users to engage in erotic conversations. Shortly after, Microsoft’s AI CEO Mustafa Suleyman told CNN his company would not allow sexual or romantic conversations, even for adults, and that he wanted to make “an AI you trust your kids to use” without a separate, young user experience.
“This effort reflects a shared commitment to harness the benefits of artificial intelligence while working collaboratively with stakeholders to understand and mitigate unintended consequences,” Kia Floyd, Microsoft’s general manager of state government affairs, said in a statement on joining the task force. “By partnering with state leaders and industry peers, we can promote innovation and consumer protection to ensure AI serves the public good.”
Whatever guardrails the task force develops will technically be voluntary. But the group will have another benefit: bringing states’ top law enforcement officers together to track AI developments and risks, potentially making it easier for them to take joint legal action if tech companies harm consumers. That makes this effort different from “a group of think tanks coming together” to create AI safety principles, Jackson said.
Jackson would still like Congress to pass more AI legislation. But, he said, “Congress has left a vacuum and I think it makes sense for AGs to try to fill it.”
The-CNN-Wire
™ & © 2025 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.