Microsoft expands software security lifecycle for AI-driven platforms

New AI safeguards by Microsoft combine research, policy, and engineering to protect non-deterministic systems from emerging cyberthreats.

Microsoft logo representing the company’s AI security and secure development initiatives

AI is widening the cyber risk landscape and forcing security teams to rethink established safeguards. Microsoft has updated its Secure Development Lifecycle to address AI-specific threats across design, deployment and monitoring.

The updated approach reflects how AI can blur trust boundaries by combining data, tools, APIs and agents in one workflow. New attack paths include prompts, plugins, retrieved content and model updates, raising risks such as prompt injection and data poisoning.

Microsoft says policy alone cannot manage non-deterministic systems and fast iteration cycles. Guidance now centres on practical engineering patterns, tight feedback loops and cross-team collaboration between research, governance and development.

Its SDL for AI is organised around six pillars: threat research, adaptive policy, shared standards, workforce enablement, cross-functional collaboration and continuous improvement. Microsoft says the aim is to embed security into every stage of AI development.

The company also highlights new safeguards, including AI-specific threat modelling, observability, memory protections and stronger identity controls for agent workflows. Microsoft says more detailed guidance will follow in the coming months.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!