Mindgard secures funding to tackle AI vulnerabilities
New funding supports Mindgard’s plans for global expansion and continued innovation in AI security solutions.

AI poses both opportunities and risks for businesses, creating a demand for specialised AI security solutions. Mindgard, a British university spinoff, is addressing these challenges with innovative approaches to safeguard companies against vulnerabilities like prompt injection and adversarial attacks.
Founded by Professor Peter Garraghan, Mindgard employs Dynamic Application Security Testing for AI (DAST-AI), a system designed to detect vulnerabilities during runtime. Its automated red-teaming simulations leverage an extensive threat library to test the resilience of AI systems, including image classifiers. However, this cutting-edge technology stems from Garraghan’s academic expertise in AI security, reinforced by ongoing collaborations with Lancaster University.
Recent developments have bolstered Mindgard’s growth. A new $8 million funding round, led by Boston-based .406 Ventures, will support team expansion, product development, and entry into the US market. Despite its global aspirations, the company plans to retain its R&D and engineering operations in London.
With a lean team of 15 aiming to grow modestly, Mindgard’s focus remains on creating a safer AI landscape. The platform serves a diverse clientele, from enterprises and penetration testers to AI startups keen on showcasing their risk prevention capabilities. Garraghan envisions a future where AI adoption is both secure and trusted.