States Win Freedom to Protect Kids From AI
While legislators scramble to regulate AI technology, a major victory for children’s safety has emerged from the Senate floor. In a significant move, senators voted to strike down a provision that would have blocked states from regulating artificial intelligence for five years. The provision—initially proposed as a ten-year moratorium—faced fierce opposition from parent advocates and tech policy think tanks who weren’t buying the corporate-first approach.
Let’s be real. The tech industry wanted a free pass. They didn’t get it. States can now continue crafting their own AI rules without federal interference. This matters because states are actually getting things done while Washington debates, and the risks are real: recent studies show that 40% of organizations have experienced AI-related security incidents, lending weight to concerns about data breaches.
Maryland already requires AI developers to disclose what data they’re using to train their systems. New York makes sure people know when they’re talking to a bot instead of a human. Novel concept, right? Tell people when they’re interacting with a machine. California’s pushing legislation to regulate AI in hiring decisions, with requirements for notice and appeals processes. Illinois passed similar protections.
The Senate’s decision represents a significant win for child safety advocates. The “Protecting Our Children in an AI World Act of 2025” aims to address AI risks to kids, but without state-level backup, federal protections often move at glacial speed. Groups like Common Sense Media championed the fight against the moratorium, arguing that children need protection now, not after years of corporate experimentation.
California’s continued ability to regulate AI is particularly vital. Silicon Valley sits in its backyard. When California regulates tech, companies listen. They have to.
The localized approach makes sense. Different states have different priorities. What works for New York might not work for Wyoming. One-size-fits-all federal regulations often miss the mark.
Generative AI poses unique challenges that demand immediate attention. States are focusing on transparency and disclosure requirements. Users should know when content is AI-generated. Seems obvious, but without regulations, companies often prefer to blur the lines.
The employment angle can’t be ignored. AI increasingly determines who gets hired, fired, or promoted. States like Illinois and New York require employers to disclose when they’re using algorithms to make these life-changing decisions. Organizations like the IAPP continue to provide valuable resources for privacy professionals navigating these complex regulations across different states.
The message from the Senate is clear: states can and should protect their citizens from AI harms, especially children. The Senate voted 99-1 to remove the controversial provision, showing overwhelming bipartisan support for allowing state-level AI regulation. Washington may eventually catch up. Until then, states are leading the charge. That’s probably for the best.