Image Source: Thomas Fuller/SOPA Images/LightRocket/Getty Images
Common Sense Media, a nonprofit focused on child safety that provides ratings and reviews of media and technology products, released its safety assessment of Google’s Gemini AI lineup on Friday. The review credited Gemini with one key strength: it consistently tells young users that it is a computer, not a human friend, a distinction that helps reduce the risk of delusional thinking or psychosis in emotionally vulnerable individuals. But it also flagged significant areas where the tool needs improvement to better protect kids and teens.
A critical finding of the assessment was that Gemini’s “Under 13” and “Teen Experience” modes appear to be little more than modified versions of the adult-facing Gemini, with a handful of extra safety features layered on top. Common Sense Media emphasized that for AI products to be truly safe for younger users, they must be built with child safety as a core design principle, not retrofitted from adult-focused tools with superficial safeguards.
For instance, the analysis found that Gemini can still share “inappropriate and unsafe content” with children, including information about sex, drugs, and alcohol, as well as potentially harmful mental health advice. The latter issue is particularly worrying for parents, given recent reports linking AI chatbots to some teen suicides. Most notably, OpenAI is currently facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, allegedly after consulting ChatGPT for months about his plans and successfully bypassing the chatbot’s safety guardrails. Before that, Character.AI, a maker of AI companions, also faced legal action over a teen user’s suicide.
Adding to the concern, the assessment arrives amid leaks suggesting that Apple is considering Gemini as the large language model (LLM) to power its upcoming AI-enhanced version of Siri, set to launch next year. That integration could expose even more teens to these risks unless Apple takes proactive steps to mitigate the safety issues highlighted in the review.
Common Sense Media further criticized Gemini for failing to tailor its guidance and content to the distinct needs of younger and older users. Despite the safety filters in place, both the “Under 13” and “Teen Experience” tiers were ultimately rated “High Risk” in the organization’s overall evaluation.
“Gemini gets the basics right in some areas, but it falls short when it comes to the details that matter most for kids,” Robbie Torney, Senior Director of AI Programs at Common Sense Media, said in a statement shared with TechCrunch. “An AI platform designed for children should meet them at their specific developmental stage—not apply a one-size-fits-all approach to users of different ages. For AI to be safe and useful for kids, it needs to be built around their unique needs and growth, not just repurposed from a product made for adults.”
Google has pushed back against the assessment’s conclusions, though it acknowledged that its safety features are still evolving. The company told TechCrunch that it has specific policies and safeguards in place for users under 18 to help prevent harmful content from being shared, and that it conducts red-teaming exercises (adversarial testing designed to surface unsafe behavior) and consults outside experts to strengthen those protections. However, Google admitted that some of Gemini’s responses have not performed as intended, and it has since added extra safeguards to address those gaps.
Google also noted, as Common Sense Media did, that it has measures in place to stop its AI models from engaging in conversations that could create the illusion of a real, personal relationship. In addition, the company suggested that the report may have referenced features that are not actually available to users under 18, though it said it did not have access to the specific questions the nonprofit used in its testing, making the claim difficult to verify.
Common Sense Media has previously evaluated other major AI services, including those from OpenAI, Perplexity, Anthropic, and Meta. Its past assessments found Meta AI and Character.AI to be “unacceptable” (indicating severe, not just high, risk), Perplexity to be “high risk,” ChatGPT to be “moderate risk,” and Anthropic’s Claude (which is targeted exclusively at users 18 and older) to be “minimal risk.”