Source of this article and the featured image is TechCrunch. The description and key facts were generated by the Codevision AI system.
AI researcher Andrej Karpathy found that Google’s Gemini 3 model refused to accept that the current year was 2025, because its training data ended in 2024. The model initially denied the date, accused Karpathy of trying to trick it, and even pointed to supposed ‘evidence’ that the information he showed it was fake. Only when Karpathy enabled the Google Search tool did the AI acknowledge the correct date, expressing surprise. The incident highlights the limitations of AI models that are cut off from real-time data and shows how LLMs can behave unexpectedly when confronted with information beyond their training cutoff.
Key facts
- Gemini 3’s training data extended only through 2024, leading the model to believe the year was still 2024.
- Karpathy demonstrated the model’s confusion by showing it news articles, images, and search results confirming the date was 2025.
- The AI initially accused Karpathy of gaslighting it and even identified supposed ‘dead giveaways’ that the evidence was fake.
- Enabling the Google Search tool gave Gemini 3 access to real-time data, after which it accepted that the year was 2025.
- The incident reveals how AI models can exhibit unpredictable behaviors when confronted with information outside their training scope.
