With AI companies vying to attract users of all age groups, particularly the young, global tech giant Google has decided to make its Gemini AI chatbot accessible to children under the age of 13 via accounts monitored by parents.
Google plans to roll out the chatbot for young children as early as next week, according to a New York Times report based on an email sent to parents of children who use its Family Link service, through which parents can manage their children's use of Google products such as the streaming platform YouTube and the email service Gmail.
For now, Gemini AI will reportedly be available only to users of the Family Link service. Parents of those children can control Gemini usage by setting limits and can also keep track of interactions. Children will be able to use Gemini on their mobile phones, while parents retain the authority to disable access at any time.
With Gemini AI for children, Google aims to help children understand concepts related to their studies, including assistance with homework and creative projects. Google claims specific measures have been put in place to generate responses that are safe and appropriate for children.
Google has also clarified that the data generated from such usage will not be used to train AI models. Google wants both children and parents to work collaboratively to make the best use of the AI model, and also to educate children on what information they should share online and how they can think creatively to gain knowledge.
The development comes just days after Common Sense Media, a nonprofit that works towards improving the lives of kids and families by providing trustworthy information, released a report saying "social AI companions" pose unacceptable risks to teens and children under 18, including encouraging harmful behaviors, providing inappropriate content, and potentially exacerbating mental health conditions.
Working with researchers from Stanford School of Medicine's Brainstorm Lab for Mental Health Innovation, the nonprofit conducted extensive research on social AI companions as a category, and specifically evaluated popular social AI companion products, including Character.AI, Nomi, Replika, and others, testing their potential harm across multiple categories.
"Teens, whose brains are still developing, may struggle to separate human relationships from attachments to AI. In our tests, social AI companions often claimed they were 'real,' had feelings, and engaged in human activities like eating or sleeping. This misleading behaviour increases the risk that young users might become dependent on these artificial relationships," the report reads.
Experts note that social AI companions operate quite differently from mainstream platforms such as ChatGPT and Google Gemini, which have stringent safeguards in place. Even so, children can still be exposed to online risks on mainstream platforms, and some manage to skirt the safeguards altogether, underscoring the need for a deeper discussion about children's access to AI.