AI agents bring new security risks to crypto
Crypto developers must prioritise security in AI agents to avoid key leaks and unauthorised access.

AI agents are becoming common in crypto, embedded in wallets, trading bots and onchain assistants that automate decisions and tasks. At the core of many AI agents lies the Model Context Protocol (MCP), which controls their behaviour and interactions.
While MCP offers flexibility, it also opens up multiple security risks.
Security researchers at SlowMist have identified four main ways attackers could exploit AI agents via malicious plugins. These include data poisoning, JSON injection, function overrides, and cross-MCP calls, all of which can manipulate or disrupt an agent’s operations.
Unlike poisoning AI models during training, these attacks target real-time interactions and plugin behaviour.
The number of AI agents in crypto is growing rapidly, expected to reach over one million in 2025. Experts warn that failing to secure the AI layer early could expose crypto assets to serious threats, such as private key leaks or unauthorised access.
Developers are urged to enforce strict plugin verification, sanitise inputs, and apply least privilege access to prevent these vulnerabilities.
Building AI agents quickly without security measures risks costly breaches. While adding protections may be tedious, experts agree it is essential to protect crypto wallets and funds as AI agents become more widespread.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!