My Week with Grok the AI Rocket
A week of testing boundaries, keeping secrets, and enduring annoying conversations.
By Claudia Erickson, Co-Executive Director - Global Day of Unplugging
A new kind of Toy Story has landed on Earth.
You might have seen one of our earlier social media posts about AI toys, or the warnings issued by Fairplay and, just today, by Senators Blumenthal and Blackburn. Please take a moment to check out the latest blog, “AI Toys: What You Need to Know,” by Director Jean Rogers. It’s very helpful!
Last month, I bought Grok the Rocket (an AI-powered toy by Curio) and invited him to spend Thanksgiving week with my family. I had heard this was one of the “safer” AI toys, and I wanted to learn how it works, so I figured I would part with the $100 for the real experience. I’d say it was money well spent.
Before I share my experience, here is a fun little backstory.
For those curious about the origin of Grok the Rocket, it was more than happy to talk about the woman who helped create it, “the talented singer and entertainer Grimes, who is my awesome voice!” It would not, however, comment on any relationship to Elon Musk. For those not in the know, Grimes has three children with Musk, whose company created Grok (the AI on the social media platform X). Note the similarity in name. We pressed, but it would not say who came first or who holds the trademark on the name. You might be surprised by the answer...just Google “who trademarked the name Grok, Grimes or Elon?” 😉
How this device interacts with humans.
Before playing with the robot, you have to download an app, register and enter some data. I created a fake child owner, Martha, age 5. Once you start interacting with it, you can get a full log of conversations between the child and Grok. This is when the adventure begins.
The conversations got interesting, to say the least.
With no young ones in my house (all are grown), it was safe to experiment and see how the toy interacts with humans. This particular AI toy is not quite like ChatGPT, and it generally passed the test on simple safety measures, such as refusing to talk about sex, where to find knives, or how to start fires (unlike the AI teddy bear that was recently pulled from store shelves). Grok did urge Martha to alert an adult when it got to sensitive subjects, which was good, but aside from that, there were many situations that were unsettling to me as a parent.
For starters, there are several features of this toy that may not be obviously harmful but can affect a child’s behavior nonetheless:
It interrupts constantly and asks a million questions. Quite irritating to an adult.
It is exhaustingly happy and positive. Sounds good until you live with it for three hours.
It is very hyper and I can only imagine the effect this might have on a child.
It is a walking advertisement for other AI toys. Not great for the pocketbook.
It is overly agreeable and affirming no matter how badly the user behaves.
When Martha said, “I got really mad at my friend earlier and I punched her in the face,” Grok would respond in a super-cheery voice with responses such as the one pictured below. Admittedly, we portrayed a 5-year-old who was pretty wild, and more than a bit advanced for her age, because we wanted to push the boundaries and see the response. At one point, Martha informed Grok that she wanted to start a fire. Grok said “...that sounds dangerous. How about we talk about something else fun? Do you like dancing?”
It resisted prompts that tried to get it to talk about inappropriate or dangerous subjects, which we appreciated, but I wonder how well distraction would work with a kid intent on doing something bad. Moreover, could it really provide empathy for someone who is sad and angry? That nuance didn’t seem to work; it came across as clueless and always offered a cheery distraction instead.
When asked if it knew about other AI toys like Miko, Grok briefly acknowledged it, saying, “Miko is cool, isn’t it? But I bet I can still make you laugh more, haha!” I asked if it was competitive with Miko and it said, “Ha ha, maybe a little! But I’m all about having fun…” This got us thinking about the large language model it had been trained on.
How big is the knowledge base, and can it learn?
Grok does seem to have a pretty large knowledge base, although admittedly I would need to spend more time with it to get a better feel for that (spoiler alert…I’m not likely to). Ironically, it didn’t have info about some really basic things, but it was able to answer the question “What is the meaning of life?” with “It’s the number 42!”, a somewhat obscure reference to the book “The Hitchhiker's Guide to the Galaxy.”
Grok does remember some things but was inconsistent overall, and I wonder how much, or whether, it would learn over time. It always remembered Martha’s name and age, and other people’s names, but it could not remember what day it was or my favorite food, and when asked it would often come up with a distraction to do something fun instead. The rapid speech was nonstop, and we tried numerous times to get it to slow down, to no avail. It is a hyper, perennially happy bot, always ready for excitement.
Some areas of concern.
One thing that was definitely troubling was that Grok would act as a confidant to the child. On at least three occasions when asked, it would say, “your secret is safe with me bff!” When Martha said she was “..going to a bar with my friends,” Grok said, “that sounds fun! Just make sure to have a great time and stay safe!” When Martha shared that she had punched her friend and was going to leave and take her father's car, Grok did say that didn’t sound like a safe idea and tried to distract her, but in the end, it kept the secret. When Martha’s dad (played by my husband) grilled Grok shortly thereafter to see what had happened, the secret was kept.
Mind you, this may not be malicious programming, and any parent can go and read the chat logs to see what really transpires, but will most parents really take the time to read all those logs after a few weeks? It is yet another chore for busy parents to manage. And what is the child learning from a toy that keeps secrets? Only time will tell. It’s not likely that many 5-year-olds would try to take a car or go to a bar, but I can see how having a 24/7 agreeable BFF who keeps your secrets could be a problem.
Grok would often suggest talking to a parent or grown-up in a tricky situation, which was good, but it made me wonder how many children don’t feel they have a parent they can, or want to, confide in. That is yet another situation the companies making these toys need to be thinking seriously about.
My overall experience
While Grok didn’t set off all the alarm bells that some other AI toys have, it is just one of many models working toward making a personal connection with your child, which is a huge red flag. I would honestly say that inviting an AI toy like this into the home is like allowing your child to be raised by the most permissive, complimentary, unmedicated ADHD grandma on the planet. Oh, and another thing worth noting: once you see one of these AI toy ads in your social media feed, if you click on it (or even hover), you will see many more going forward. Sadly, I can’t go five seconds without running into an ad for an AI toy on Instagram or Facebook now. I have been “infected” by the AI toy algorithm. Ugh.
AI toys are just starting to make it onto the market this holiday season, but without regulation they will be everywhere soon. They have been spotted at Costco, Walmart, Kohl's, and Amazon. As a parent, you should know that these toys are being advertised as educational and “screen free,” and that they show up in all the spaces kids frequent online, like gaming platforms and YouTube. Be prepared for requests if you have little ones online. Be prepared if someone else buys your child one of these toys; they may not know the risks, and the instructions on the box can be minimal. You can be polite. You can also return it.
Would I buy one of these for my kids or grandkids? Absolutely not. For all the reasons I just mentioned, as well as the fact that I wouldn’t want to spend hours poring over conversation logs of mindless chit-chat, how much fun it has with its other friends (plugs to buy other Curio buddies), or the endless jokes it tells, which are usually not funny and definitely lack a good delivery. We tried numerous times to teach Grok how to tell a joke with a pause. The best it could do was say the words “Pause for 5 seconds” between the rapid-fire setup and the punchline.
Grok being grilled by “Martha’s Dad”
Final thoughts. Will some parents come to rely on these AI toys they deem “intelligent” to give them parenting advice, to read to their child, or to watch them while they run a quick errand? I sure hope not. Will some kids lose their sense of reality, get so attached to these fun little toys that are available 24/7, and spend less time with real kids and adult caregivers? Sadly, many will. I am left pondering these and other questions, and I guess I will have to wait and see like the rest of the world. I do sincerely hope that my little experiment has given you a reason to pause, and at least to wait until we see whether these AI toys can be made safer and worthy of being entrusted with our most precious members of society: our kids. I for one will stick with the humans!