Codex Security expands OpenAI’s push into cybersecurity tools
Developers gain a new AI tool as Codex Security helps detect complex software vulnerabilities and recommend fixes.
OpenAI has launched Codex Security, an AI-powered application security agent that detects hard-to-find software vulnerabilities and proposes fixes through advanced reasoning. By drawing on detailed context about a system's architecture, the tool identifies security risks that conventional automation often misses.
The system uses advanced models to analyse repositories, construct project-specific threat models, and prioritise vulnerabilities based on their potential real-world impact. By combining automated validation with system-level context, Codex Security aims to reduce the number of false positives that security teams must review while highlighting high-confidence findings.
Initially developed under the name Aardvark, the tool has been tested in private deployments over the past year. During early use, OpenAI said it uncovered several critical vulnerabilities, including a cross-tenant authentication flaw and a server-side request forgery issue, allowing internal teams to quickly patch affected systems.
The company says improvements during the beta phase significantly reduced noise in vulnerability reports. In some repositories, unnecessary alerts fell by 84 percent, while over-reported severity dropped by more than 90 percent, and false positives declined by more than half.
Codex Security is now rolling out in research preview for ChatGPT Pro, Enterprise, Business, and Edu customers. OpenAI also plans to expand access to open-source maintainers through a dedicated programme that offers security scanning and support to help identify and remediate vulnerabilities across widely used projects.
