Josh was at the end of his rope when he turned to ChatGPT for help with a parenting quandary. The 40-year-old father of two had been listening to his “super loquacious” four-year-old talk about Thomas the Tank Engine for 45 minutes, and he was feeling overwhelmed.
“He was not done telling the story that he wanted to tell, and I needed to do my chores, so I let him have the phone,” recalled Josh, who lives in north-west Ohio. “I assumed he would finish the story and the phone would turn off.”
But when Josh returned to the living room two hours later, he found his child still happily chatting away with ChatGPT in voice mode. “The transcript is over 10k words long,” he confessed in a sheepish Reddit post. “My son thinks ChatGPT is the coolest train-loving person in the world. The bar is set so high now I’m never going to be able to compete with that.”
From radio and television to video games and tablets, new technology has long tantalized overstretched parents of preschool-age children with the promise of entertainment and enrichment that doesn’t require their direct oversight, even as it carries the hint of menace that accompanies any outside influence on the domestic sphere. A century ago, mothers in Arizona worried that radio programs were “overstimulating, frightening and emotionally overwhelming” for children; today’s parents self-flagellate over screen time and social media.
But the startlingly lifelike capabilities of generative AI systems have left many parents wondering whether AI is a wholly new beast. Chatbots powered by large language models (LLMs) are engaging young children in ways the makers of board games, Teddy Ruxpin, Furby and even the iPad never dreamed of: they produce personalized bedtime stories, carry on conversations tailored to a child’s interests, and generate photorealistic images of the most far-fetched flights of fancy – all for a child who can’t yet read, write or type.
Can generative AI deliver the holy grail of technological assistance to parents, serving as a digital Mary Poppins that educates, challenges and inspires, all within a framework of strong moral principles and age-appropriate safety? Or is this all just another Silicon Valley hype bubble with a particularly vulnerable group of beta testers?
‘My kids are the guinea pigs’
For Saral Kaushik, a 36-year-old software engineer and father of two in Yorkshire, a packet of freeze-dried “astronaut” ice-cream in the cupboard provided the inspiration for a novel use of ChatGPT with his four-year-old son.
“I literally just said something like, ‘I’m going to do a voice call with my son and I want you to pretend that you’re an astronaut on the ISS,’” Kaushik said. He also instructed the program to tell the boy that it had sent him a special treat.
“[ChatGPT] told him that he had sent his dad some ice-cream to try from space, and I pulled it out,” Kaushik recalled. “He was really excited to talk to the astronaut. He was asking questions about how they sleep. He was beaming, he was so happy.”
Childhood is a time of magic and wonder, and living in the world of make-believe is not just normal but encouraged by experts in early childhood development, who have long emphasized the importance of imaginative play. For some parents, generative AI can help promote that sense of creativity and wonder.
Josh’s daughter, who is six, likes to sit with him at the computer and come up with stories for ChatGPT to illustrate. (Several parents interviewed for this article asked to be identified by their first names only.) “When we started using it, it was willing to make an illustration of my daughter and insert that into the story,” Josh said, though more recent safety updates mean it no longer produces images of children. Kaushik also uses ChatGPT to convert family photos into coloring book pages for his son.
Ben Kreiter, a father of three in Michigan, explained ChatGPT to his two-, six- and eight-year-old children after they saw him testing its image-generation capabilities for work (he designs curriculums for an online parochial school). “I was like, ‘I tell the computer a picture to make and it makes it,’ and they said: ‘Can we try?’” Soon, the kids were asking to make pictures with ChatGPT every day. “It was cool for me to see what they’re imagining that they can’t quite [draw] on a piece of paper with their crayons yet.”
Kreiter, like all the parents interviewed for this article, only allowed his children to use ChatGPT with his help and supervision, but as they grew more enamored with the tool, so did his concern. In October 2024, news broke of a 14-year-old boy who had killed himself after becoming obsessed with an LLM-powered chatbot made by Character.ai. Parents of at least two more children have since filed lawsuits alleging that AI chatbots contributed to their suicides, and news reports increasingly highlight troubling stories of adults forming intense emotional attachments to the bots or otherwise losing touch with reality.
“The more that it became part of everyday life and the more I was reading about it, the more I realized there’s a lot I don’t know about what this is doing to their brains,” Kreiter said. “Maybe I shouldn’t have my own kids be the guinea pigs.”
Research into how generative AI affects child development is in its early stages, though it builds on studies of less sophisticated forms of AI, such as digital voice assistants like Alexa and Siri. Several studies have found that young children’s social interactions with AI tools differ subtly from those with humans, with children aged three to six appearing “less active” in conversations with smart speakers. This finding suggests that children perceive AI agents as existing somewhere in the middle of the divide between animate and inanimate entities, according to Ying Xu, a professor of education at the Harvard Graduate School of Education.
Understanding whether an object is a living being or an artefact is a crucial cognitive development that helps a child gauge how much trust to place in the object, and what kind of relationship to form with it, explained Xu, whose research focuses on how AI can promote learning for children. Children begin to make this distinction in infancy and usually develop a sophisticated understanding of it by age nine or 10. But while children have always imbued inanimate objects such as teddy bears and dolls with imagined personalities and capacities, on some level they know that the magic is coming from their own minds.
“A crucial indicator of a child anthropomorphizing AI is that they believe AI has agency,” Xu said. “If they believe that AI has agency, they might understand it as the AI wanting to talk to them or choosing to talk to them. They feel that the AI is responding to their messages, and especially emotional disclosures, in ways that are similar to how a human responds. That creates a risk that they actually believe they are building some kind of authentic relationship.”
In one study of children aged three to six responding to a Google Home Mini device, Xu found that most perceived the device as inanimate, but some referred to it as a living being, and some placed it somewhere in between. Majorities thought the device possessed cognitive, psychological and speech-related capabilities (thinking, feeling, speaking and listening), but most believed it couldn’t “see”.
Parents who spoke with the Guardian remarked upon this kind of ontological grey zone in describing their children’s interactions with generative AI. “I don’t fully know what he thinks ChatGPT is, and it’s hard to ask him,” said Kaushik of his four-year-old. “I don’t think he can articulate what he thinks it is.”
Josh’s daughter refers to ChatGPT as “the internet”, as in, “I want to talk to ‘the internet’.” “She knows it’s not a real person, but I think it’s a little fuzzy,” he said. “It’s like a fairy that represents the internet as a whole.”
For Kreiter, seeing his children interact with Amazon’s Alexa at a friend’s house raised another red flag. “They don’t get that this thing doesn’t understand them,” he said. “Alexa is pretty primitive compared to ChatGPT, and if they’re struggling with that … I don’t even want to go there with my kids.”
A related concern is whether generative AI’s capacity to deceive children is problematic. For Kaushik, his son’s sheer joy at having spoken with what he thought was a real-life astronaut on the ISS led to a sense of unease, and he decided to explain that it was “a computer, not a person”.
“He was so excited that I felt a bit bad,” Kaushik said. “He genuinely believed it was real.”
John, a 40-year-old father of two from Boston, experienced a similar qualm when his son, a four-year-old in the throes of a truck obsession, asked whether the existence of monster trucks and fire trucks implied the existence of a monster-fire truck. Without thinking much of it, John pulled up Google’s generative AI tool on his phone and used it to generate a photorealistic image of a truck that combined elements of the two vehicles.
It was only after a pitched argument between the boy, who swore he had seen actual proof of the existence of a monster-fire truck, and his older sister, a streetwise seven-year-old who was certain that no such thing existed in the real world, that John started to wonder whether introducing generative AI into his children’s lives had been the right call.
“It was a little bit of a warning to maybe be more intentional about that kind of thing,” he said. “My wife and I have talked much more about how we’re going to handle social media than we have about AI. We’re such millennials, so we’ve had 20 years of horror stories about social media, but much less about AI.”
To Andrew McStay, a professor of technology and society at Bangor University whose research focuses on AI that claims to detect human emotions, this kind of reality-bending is not necessarily a big concern. Recalling the early moving pictures of the Lumière brothers, he said: “When they first showed people a big screen with trains coming [toward them], people thought the trains were quite literally coming out of the screen. There’s a maturing to be done … People, children and adults, will mature.”
Still, McStay sees a bigger problem with exposing children to technology powered by LLMs: “Parents need to be aware that these things are not designed in children’s best interests.”
Like Xu, McStay is particularly concerned with the way in which LLMs can create the illusion of care or empathy, prompting a child to share emotions – especially negative emotions. “An LLM cannot [empathize] because it’s a predictive piece of software,” he said. “When they’re latching on to negative emotion, they’re extending engagement for profit-based reasons. There is no good outcome for a child there.”
Neither Xu nor McStay wants to ban generative AI for children, but they do warn that any benefits for children will only be unlocked by applications specifically designed to support children’s development or education.
“There’s something more enriching that’s possible, but that comes from designing these things in a well-meaning and sincere way,” said McStay.
Xu allows her own children to use generative AI – to a limited extent. Her daughter, who is six, uses the AI reading program that Xu designed to study whether AI can promote literacy and learning. Xu has also set up a custom version of ChatGPT to help her 10-year-old son with math and programming problems without simply giving him the answers. (She has explicitly disallowed conversations about gaming and checks the transcripts to make sure her son is staying on topic.)
One of the benefits of generative AI mentioned to me by parents – the creativity they believe it fosters – is very much an open question, said Xu.
“There is still a debate over whether AI itself has creativity,” she said. “It’s just based on statistical predictions of what comes next, and a lot of people question whether that counts as creativity. So if AI doesn’t have creativity, is it able to help children engage in creative play?”
A recent study found that access to generative AI prompts did improve creativity for individual adults tasked with writing a short story, but decreased the overall diversity of the writers’ collective output.
“I’m a little worried by this kind of homogenizing of expression and creativity,” Xu said of the study. “For an individual child, it might improve their performance, but for a society, we might see a decrease of diversity in creative expressions.”
AI ‘playmates’ for kids
Silicon Valley is notorious for its willingness to prioritize speed over safety, but major companies have at times shown a modicum of restraint when it comes to young children. Both YouTube and Facebook had existed for at least a decade before they launched dedicated products for under-13s (the much-maligned YouTube Kids and Messenger Kids, respectively).
But the introduction of LLMs to young children appears to be barreling ahead at breakneck pace.
While OpenAI bars users under 13 from accessing ChatGPT, and requires parental permission for teenagers, it is clearly aware that younger children are being exposed to it – and views them as a potential market.
In June, OpenAI announced a “strategic collaboration” with Mattel, the toymaker behind Barbie, Hot Wheels and Fisher-Price. That same month, chief executive Sam Altman responded to the story of Josh’s toddler (which went quite viral on Reddit) with what sounded like a hint of pride. “Kids love voice mode on ChatGPT,” he said on the OpenAI podcast, before acknowledging that “there will be problems” and “society will have to figure out new guardrails”.
Meanwhile, startups such as Silicon Valley-based Curio – which collaborated with the musician Grimes on an OpenAI-powered toy named Grok – are racing to stuff LLM-equipped voice boxes into plush toys and market them to children.
(Curio’s Grok shares a name with Elon Musk’s LLM-powered chatbot, which is notorious for its past promotion of Adolf Hitler and racist conspiracy theories. Grimes, who has three children with her former partner Musk, was reportedly angered when Musk used a name she had chosen for their second child on another child, born to a different mother in a concurrent pregnancy of which Grimes was unaware. In recent months, Musk has expressed interest in creating a “Baby Grok” version of his software for children aged two to 12, according to the New York Times.)
The pitch for toys like Curio’s Grok is that they can “learn” your child’s personality and serve as a kind of fun and educational companion while reducing screen time. It’s a classic Silicon Valley niche – exploiting legitimate concerns about the last generation of tech to sell the next. Company leaders have also referred to the plushy as something “between a little brother and a pet” or “like a playmate” – language that implies the kind of animate agency that LLMs don’t actually have.
It’s not clear whether the toys are actually good enough for parents to worry too much about. Xu said her daughter had quickly relegated AI plushy toys to the closet, finding the play possibilities “kind of repetitive”. The children of Guardian and New York Times writers also voted against Curio’s toys with their feet. Guardian writer Arwa Mahdawi expressed concern about how “unsettlingly obsequious” the toy was and decided she preferred letting her daughter watch Peppa Pig: “The little oink may be annoying, but at least she’s not harvesting our data.” Times writer Amanda Hess similarly concluded that using an AI toy to replace TV time – a necessity for many busy parents – is “a bit like unleashing a mongoose into the playroom to kill all the snakes you put in there”.
But with the market for so-called smart toys – a category that includes AI-powered toys – projected to double to more than $25bn by 2030, it is perhaps unrealistic to expect restraint.
This summer, notices seeking children aged four to eight to help “a team from MIT and Harvard” test “the first AI-powered storytelling toy” appeared in my neighborhood in Brooklyn. Intrigued, I made an appointment to stop by their offices.
The product, Geni, is a close cousin to popular screen-free audio players such as Yoto and the Toniebox. Rather than playing pre-recorded content (Yoto and Tonies offer catalogs of audiobooks, podcasts and other kid-friendly content for purchase), however, Geni uses an LLM to generate bespoke short stories. The device lets child users select up to three “tiles” representing a character, object or emotion, then press a button to generate a piece of narrative that ties the tiles together, which is read aloud. Parents can also use an app to program blank tiles.
Geni co-founders Shannon Li and Kevin Tang struck me as serious and thoughtful about some of the risks of AI products for young children. They “feel strongly about not anthropomorphizing AI”, Tang said. Li said they want kids to view Geni “not as a companion” like the voice-box plushies, but as “a tool for creativity that they already have”.
Still, it’s hard not to wonder whether an LLM can really produce particularly engaging or creativity-sparking stories. Geni plans to sell sets of tiles featuring characters developed in-house alongside the device, but the actual “storytelling” is done by the kind of probability-based technology that tends toward the average.
The story I prompted by selecting the wizard and astronaut tiles was insipid at best:
They stumbled upon a hidden cave glowing with golden light.
“What’s that?” Felix asked, peeking inside.
“A treasure?” Sammy wondered, her imagination swirling, “or maybe something even cooler.”
Before they could decide, a wave rushed into the cave, sending bubbles bursting around them.
The Geni team has trained their system on pre-existing children’s content. Does using generative AI solve a problem for parents that the canon of children’s audio content cannot? When I ran the concept by one parent of a five-year-old, he responded: “They’re just presenting an alternative to books. It’s a good example of grasping for uses that are already handled by artists or living, breathing people.”
The market pressures of startup culture leave little time for such existential musings, however. Tang said the team is eager to bring their product to market before voice-box plushies sour parents on the whole concept of AI for kids.
When I asked Tang whether Geni would allow parents to make tiles for, say, a gun – not a far-fetched idea for many American households – he said they would have to discuss the issue as a company.
“Post-launch, we’ll probably bring on an AI ethics person to our team,” he said.
“We also don’t want to limit information,” he added. “As of now there’s no right or wrong answer to how much constraint we want to put in … But obviously we’re referencing a lot of kids’ content that’s already out there. Bluey probably doesn’t have a gun in it, right?”

