AI chatbots misidentify images they created
Experts warn language-trained models lack reliable visual analysis for image verification.
Growing numbers of online users are turning to AI chatbots to verify suspicious images, yet many of these tools fail to detect fakes they themselves generated. AFP found several cases in Asia where AI systems labelled fabricated photos as authentic, including a viral image of former Philippine lawmaker Elizaldy Co.
The failures highlight a lack of genuine visual analysis in current models. Most are trained primarily on language patterns, resulting in inconsistent judgements even when dealing with images produced by the same generative systems.
Investigations also uncovered similar misidentifications during unrest in Pakistan-administered Kashmir, where AI models wrongly validated synthetic protest images. A Columbia University review reinforced the trend: seven leading systems could not verify any of the ten authentic news photos tested.
Specialists argue that AI may assist professional fact-checkers but cannot replace them. They emphasise that human verification remains essential as AI-generated content becomes increasingly lifelike and continues to circulate widely across social media platforms.
