Grok hallucinates about Grok!
This is interesting to me. I have a daily Grok task running to research new trends in technology and marketing from the past 24 hours.
The results are often good, but today Grok went off the rails talking about the launch of Grok 3.5 on 12 September 2025 (spoiler: there was no such launch).
A #techethics thread about bots and #GenAI manipulation.
Why is this interesting?
Grok ran the task, clearly knows the current date and successfully found a range of sources (25 web sources, 27 X posts).
The sources, though, are unrelated to any version of Grok (let alone Grok 3.5).
The current Grok version is Grok 4. As best I can tell, a limited number of users had beta access to Grok 3.5 in May 2025, but it was never widely released; a modified version shipped instead as Grok 4.
Why has this happened?
The first X source identified by Grok is on the topic of bot manipulation.
The rest appear quite random. Some more do cover bots, but there are also general tech stories, and a number of scammy-looking crypto tokens feature too.
The web page links cover tech trends, mostly relating to AI, with terms like “influence”, “bias”, and “trust” featuring. Several academic papers are included. Most sources are from 2025, but not recent.
I presume Grok set out to talk about manipulation, but the retrieved content has instead manipulated its response (a form of prompt injection).
Why does this matter?
First, this is my regular warning to always fact-check GenAI output. To me, this is obviously off: Grok did not release CreativeSpark, nor add real-time sentiment analysis features yesterday, yet the text reads convincingly and cites industry publications.
Second, this matters because it is evidence of how bots and misinformation on social media are now also manipulating the output of GenAI tools.
I ask my daily Grok task to also output sample social media posts. I don’t post them as is, but there are plenty of pipelines that do. The attached image is that sample text, quickly mocked up with ChatGPT as an X post. This would be fake news.
The other risk here is that of a manipulative vicious cycle.
Bots are already infiltrating social media.
Now, those bots are also able to influence the output of GenAI tools that pull in recent data to help with accuracy.
That information is taken as factual, then reposted by hand or circulated automatically, further feeding the misinformation.
Data literacy and the ability to recognise inconsistencies and fake news are so important.
Beyond that, as world events regularly show, there really are risks when false information is allowed to circulate. The case study I’ve shared here may be trivial, but many others are not.
The dangers of social media bots and AI manipulation are real.