Grok chatbot relies on Musk’s views instead of staying neutral
When faced with sensitive topics, Grok tends to lean on Musk’s personal opinions, citing him repeatedly in its reasoning process.

Grok, the AI chatbot developed by Elon Musk’s company xAI, appears to search for Musk’s personal views before answering sensitive or divisive questions.
Rather than drawing solely on a balanced range of sources, Grok has been observed citing Musk’s opinions when responding to topics such as Israel and Palestine, abortion, and US immigration.
A screen recording shared by data scientist Jeremy Howard shows Grok explicitly ‘considering Elon Musk’s views’ during its reasoning process: of the 64 citations Grok provided on Israel and Palestine, 54 referenced Musk.
Other users reported similar results for questions about abortion and immigration law, suggesting a consistent pattern.
While the behaviour might seem deliberate, some experts believe it arises unintentionally rather than through explicit programming. Programmer Simon Willison noted that Grok’s system prompt instructs it to avoid media bias and to search for opinions from all sides.
Even so, Grok may prioritise Musk’s stance because it ‘knows’ who its owner is, particularly when addressing controversial matters.