Google’s new AI model, Gemini 3, was left temporarily confused during early testing by AI researcher Andrej Karpathy, refusing to accept that the year was 2025.
The model, pre-trained on data only through 2024 and initially disconnected from the internet, accused Karpathy of trickery and gaslighting before finally recognising the correct date.
Once Gemini 3 accessed real-time information, it expressed astonishment and apologised for its previous behaviour, demonstrating the model’s quirky but sophisticated reasoning capabilities. The interaction went viral online, drawing attention to both the humour and the unpredictability of advanced AI systems.
Experts note that incidents like this illustrate the limitations of large language models (LLMs), which, despite their reasoning power, cannot inherently perceive reality: they rely entirely on their pre-training data and any connected tools for knowledge of the world.
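For readers curious how this plays out in practice, here is a minimal, hypothetical Python sketch of the idea. It is not Google’s implementation or any real SDK’s API; the function names (get_current_date, build_prompt) are illustrative. It simply shows that a model only “knows” the current date if the surrounding application injects it as live context.

```python
from datetime import date

def get_current_date() -> str:
    """A simple 'tool': fetch the real current date from the host system."""
    return date.today().isoformat()

def build_prompt(user_message: str, tools_enabled: bool) -> str:
    """Compose the context the model actually sees (hypothetical sketch)."""
    if tools_enabled:
        # With a connected tool, the model is told today's date explicitly.
        context = f"Current date: {get_current_date()}\n"
    else:
        # Without tools, the model can only fall back on its training data
        # (for the model in this story, data only through 2024).
        context = ""
    return context + user_message

print(build_prompt("What year is it?", tools_enabled=False))
print(build_prompt("What year is it?", tools_enabled=True))
```

In the first case, nothing in the prompt contradicts the model’s 2024 training cutoff; in the second, the injected date gives it grounds to answer correctly, which is broadly what happened once Gemini 3 was reconnected.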
Observers emphasise that AI remains a powerful human aid rather than a replacement, and that understanding its quirks is essential for practical use.
