Hidden vulnerabilities in ChatGPT search tool uncovered

Hidden text on websites can influence ChatGPT responses, raising concerns about the reliability of AI-powered search.

OpenAI’s ChatGPT search tool faces risks of manipulation via hidden content, leading to biased or harmful outputs.

OpenAI’s ChatGPT search tool is under scrutiny after a Guardian investigation revealed vulnerabilities to manipulation and malicious content. Hidden text on websites can alter AI responses, raising concerns over the tool’s reliability. The search feature, currently available to premium users, could misrepresent products or services by summarising planted positive content, even when negative reviews exist.

Cybersecurity researcher Jacob Larsen warned that the AI system in its current form might enable deceptive practices. Tests revealed how hidden prompts on webpages influence ChatGPT to deliver biased reviews. The same mechanism could be exploited to distribute malicious code, as highlighted in a recent cryptocurrency scam where the tool inadvertently shared credential-stealing instructions.

Experts emphasised that while combining search with AI models like ChatGPT offers potential, it also increases risks. Karsten Nohl, a scientist at SR Labs, likened such AI tools to a ‘co-pilot’ requiring oversight. Misjudgments by the technology could amplify risks, particularly as it lacks the ability to critically evaluate sources.

OpenAI acknowledges the possibility of errors, cautioning users to verify information. However, broader implications, such as how these vulnerabilities could impact website practices, remain unclear. Hidden text, while traditionally penalised by search engines like Google, may find new life in manipulating AI-based tools, posing challenges for OpenAI in securing the system.