British cyber watchdog cautions about AI chatbots cyber risks
National Cyber Security Centre (NCSC) draws attention to the evolving security concerns associated with algorithms capable of generating human-like interactions.
British cyber watchdog cautions organisations about integrating AI-driven chatbots powered into their operations. They highlight the increasing evidence that these chatbots can be manipulated into carrying out harmful actions.
In recent blog posts, the United Kingdom’s National Cyber Security Centre (NCSC) draws attention to the evolving security concerns associated with algorithms capable of generating human-like interactions, often referred to as large language models (LLMs). These AI-powered tools are gaining early traction as chatbots, potentially supplanting not only traditional internet searches but also roles in customer service and sales.
What are its risks?
However, the NCSC warns of inherent risks, particularly when these models are integrated into broader organisational workflows. Researchers have demonstrated how chatbots can be deceived through rogue commands or enticed to bypass their safety protocols. For instance, an AI-powered chatbot used by a bank might be manipulated into executing an unauthorised transaction if a hacker crafts their query strategically.
What does the watchdog recommend?
The NCSC advises organisations to exercise caution when deploying services reliant on LLMs, akin to their approach when dealing with experimental software or beta products. They recommend not entrusting LLMs with critical transactions on behalf of customers and advocating for a similar level of prudence.
Why does it matter?
This warning mirrors a global trend as authorities worldwide grapple with the proliferation of LLMs, such as OpenAI’s ChatGPT. Businesses are increasingly integrating these models into diverse services, including sales and customer support. Simultaneously, the security implications of AI, including potential misuse by malicious actors, are becoming more evident, with US and Canadian authorities noting instances of hackers leveraging AI technology.