Warning that AI could replicate and amplify the harms of social media for children, Common Sense Media founder Jim Steyer makes the case that California’s new youth AI safety push could set standards that Britain ignores at its peril.

AI, and the risks it poses to our kids, knows no borders. The same chatbots that have sometimes encouraged American children to harm themselves or others are readily available to British children. Our children need and deserve a coordinated effort to make AI safe for kids and teens that matches the power of the technology itself.
We’ve Seen This Movie Before
Having lived through the rise of social media, we know what can happen when governments give tech companies the green light to pursue innovation at any cost. For more than a decade, these companies assured parents, teachers, and policymakers that their products were safe – even beneficial – for children. But the industry continued to build platforms that kept kids staring at their screens, ceaselessly scrolling and helpless to look away. The result has been an unprecedented mental health crisis from which an entire generation is still reeling.
If we fail to learn from the past, we risk repeating that mistake with AI on a far larger scale. Unlike social media, AI products do not simply feed users preexisting material; they converse, offer dangerous “advice”, and, in some cases, present themselves as friends, confidants, and even romantic partners. For developing young minds, this risks blurring the line between connection and code, between fact and AI-generated fiction. Nearly three in four teens have already turned to AI companion chatbots, which research shows are not safe for anyone under 18.
California Leads the Way
As this crisis grows too big to ignore, lawmakers, advocates, and even some in the tech industry have decided to act, perhaps most notably in California, the very heart of the AI industry. Last year, we helped enact a groundbreaking age assurance law in California and social media warning label laws in New York and California. In October, we proposed a ballot initiative called the California Kids AI Safety Act to build on that success in 2026 by preparing kids for the AI era and protecting them in it. In December, OpenAI filed a competing initiative to block our effort.
We refused to let that stop us. When 80–90 per cent of California voters, regardless of their party affiliation, demand stronger AI protections for our kids, we have a responsibility to get that done. Rather than confuse voters with competing measures, we decided to work together to enact strong protections for kids, teens, and families.

So Common Sense wrote the Parents & Kids Safe AI Act, and OpenAI agreed to support it. If enacted, it will be the strongest youth AI safety measure in the nation’s history. At this pivotal moment for AI, we can’t make the same mistake we did with social media, when companies used our children as guinea pigs and helped fuel a youth mental health crisis in the U.S. and around the world. Kids and teens need AI guardrails now. That’s why we will pursue every avenue available, whether through the state legislature or via a direct vote from Californians at the ballot box, to see this through. We are hopeful that when California acts, the world will pay attention.
What the Parents & Kids Safe AI Act Would Do
The California AI safety initiative is a comprehensive plan to protect kids and teens. The measure requires AI companies to know whether a user is a child and to treat them accordingly. Privacy-preserving age assurance technology would ensure that AI operates with child-protective settings for all users under 18.
The initiative sets clear boundaries for the treatment of young users. It prohibits AI companies from targeting children with advertising or monetising their private data, prevents AI systems from generating material promoting suicide or self-harm and cracks down on AI systems that manipulate children by fostering emotional dependency or pretending to be “real”.
On top of requiring robust child safeguards, the measure gives parents real agency in keeping their kids safe. Under the Parents & Kids Safe AI Act, companies would have to provide easy-to-use parental controls and parental alerts if their kids show signs of self-harm. In a digital world that has long left parents on the sidelines, this measure takes a key step toward course-correction.
Finally, the initiative introduces something that has been missing from the AI industry for too long: accountability. Companies would have to undergo independent safety audits and conduct annual risk assessments – and, should they fail to meet these obligations, the state attorney general would have the power to hold them financially accountable. Every other product intended for kids, from toys to car seats to pyjamas, must meet strict safety standards before hitting store shelves. AI should be no different.
Why it Matters Beyond California
Youth AI safety is overwhelmingly popular among voters across the political spectrum, and not only in California. We intend to see the Parents & Kids Safe AI Act through in our state, not only to protect more than eight million children, but to serve as an example for governments across the globe. When one major jurisdiction sets strong, effective rules, others are more likely to adopt them. Companies are not inclined to build entirely different systems for different markets. So when kids in Los Angeles are protected from unsafe AI chatbots, similar protections are more likely to reach kids in London.
We have a narrow window to get this right. AI is moving faster than any technology of our lifetimes, and young people are among its earliest and most enthusiastic adopters. Leaders on both sides of the Atlantic have a shared responsibility to ensure that technology empowers, not endangers, the next generation. It is on all of us to forge the digital future our children deserve.

This article features in the new edition of ChamberUK, our parliamentary journal.
