The AI Governance Alliance of the World Economic Forum unveiled the Presidio AI Framework

The framework is designed to promote the responsible use and regulation of AI, particularly focusing on generative AI models

The AI Governance Alliance of the World Economic Forum (WEF) unveiled the ‘Presidio AI Framework’ as part of its AI governance initiative. This framework, presented in the first of three briefing papers, addresses critical challenges associated with the development and deployment of generative AI models. The overarching aim is to introduce risk mitigation strategies throughout the entire life cycle of these models, covering creation, adaptation, and eventual retirement.

One of the primary concerns highlighted in the ‘Presidio AI Framework’ is fragmentation. According to the briefing paper, the absence of a comprehensive view spanning the entire life cycle of generative AI models, from initial design through deployment and use, leads to fragmented perceptions of the associated risks.

Vague definitions pose another challenge identified by the Alliance. The paper points to persistent ambiguity and a lack of common understanding around safety, specific risks (such as traceability), and safety measures (such as red teaming) at the forefront of model development.

The framework also addresses the ambiguity surrounding guardrails. It calls for clarity on the phases at which guardrails or other risk mitigation strategies should be implemented, and notes that the effectiveness, applicability, and limitations of these guardrails require further research.

Addressing model access, the ‘Presidio AI Framework’ recognizes the tension between open access, which drives innovation, and the reduced effectiveness of guardrails that greater openness can bring. The Alliance advocates a graded approach in which AI models are made accessible at varying levels, ranging from fully closed to fully open-sourced.
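
The briefing paper describes this spectrum only at its endpoints, but the graded approach can be pictured as an ordered set of access tiers. The Python sketch below is purely illustrative: only the two endpoints (fully closed, fully open-sourced) come from the framework, and the intermediate tier names are assumptions added for the example.

```python
from enum import IntEnum


class ModelAccessTier(IntEnum):
    """Illustrative tiers on the closed-to-open release spectrum.

    Only the endpoints come from the briefing paper; the intermediate
    tiers are hypothetical examples, not part of the framework.
    """
    FULLY_CLOSED = 0        # model kept entirely internal
    HOSTED_API = 1          # hypothetical: query-only access via a gated API
    RESEARCH_ACCESS = 2     # hypothetical: weights shared with vetted researchers
    FULLY_OPEN_SOURCED = 3  # weights, code, and documentation publicly released


def more_open_than(a: ModelAccessTier, b: ModelAccessTier) -> bool:
    """Compare two tiers on the closed-to-open spectrum."""
    return a > b


if __name__ == "__main__":
    # A fully open-sourced release sits further along the spectrum
    # than a hosted API, so fewer guardrails remain enforceable.
    print(more_open_than(ModelAccessTier.FULLY_OPEN_SOURCED,
                         ModelAccessTier.HOSTED_API))  # True
```

The ordering is the point of the sketch: the further a release moves toward the open end, the fewer guardrails the original developer can still enforce, which is the trade-off the framework highlights.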

However, amidst these challenges, the Presidio AI Framework identifies opportunities for progress. Standardization is proposed as a means to link technical aspects of design, development, and release with corresponding risks and mitigations, fostering shared terminology and best practices. The framework also emphasizes the importance of stakeholder trust and empowerment, advocating for clarity on expected risk mitigation strategies and accountability for implementation.

The framework outlines the roles of key actors in the AI model ecosystem, emphasizing a streamlined approach to risk regulation and information transfer. The primary actors are AI model creators, responsible for end-to-end design and development; AI model adapters, who tailor models for specific tasks; AI model users, who interact directly with models; and AI application users, who interact indirectly through applications or APIs.
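
To make the hand-offs between these roles easier to see, the taxonomy can be written down as a small data structure. The sketch below is only an illustration: the role names and responsibilities paraphrase the briefing paper, while the structure itself is an assumption, not something the framework prescribes.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EcosystemActor:
    """One of the key actor roles named in the briefing paper."""
    role: str
    responsibility: str


# Responsibilities paraphrase the framework's descriptions; the list is
# just a convenient way to keep the taxonomy in one place.
PRESIDIO_ACTORS = [
    EcosystemActor("AI model creator",
                   "end-to-end design and development of the base model"),
    EcosystemActor("AI model adapter",
                   "tailoring an existing model for specific tasks"),
    EcosystemActor("AI model user",
                   "interacting directly with a model"),
    EcosystemActor("AI application user",
                   "interacting indirectly through applications or APIs"),
]

if __name__ == "__main__":
    for actor in PRESIDIO_ACTORS:
        print(f"{actor.role}: {actor.responsibility}")
```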

Moreover, the ‘Presidio AI Framework’ introduces three main elements: an expanded AI life cycle, covering the full life cycle of AI models; expanded risk guardrails, emphasizing the need for clear and effective risk mitigation strategies; and a shift-left methodology, advocating the early integration of risk considerations into the development process.
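
As a rough way to picture the shift-left idea, the sketch below attaches risk checks to the earliest life-cycle phases rather than bolting them on just before deployment. Both the phase names and the check names are hypothetical examples; the framework does not prescribe any specific set.

```python
# Minimal sketch of a "shift-left" view of the life cycle: most risk
# checks are attached to early phases. Phase and check names are
# hypothetical and only illustrate the idea.

LIFECYCLE_PHASES = ["design", "training", "adaptation", "deployment", "retirement"]

# Hypothetical guardrail checks keyed by the phase where they first apply.
RISK_CHECKS = {
    "design":     ["intended-use review", "data-provenance audit"],
    "training":   ["red-team evaluation"],
    "adaptation": ["fine-tuning policy review"],
    "deployment": ["usage monitoring"],
    "retirement": ["decommissioning plan"],
}


def checks_through(phase: str) -> list[str]:
    """All checks that should have run at or before `phase`."""
    idx = LIFECYCLE_PHASES.index(phase)
    return [check
            for p in LIFECYCLE_PHASES[:idx + 1]
            for check in RISK_CHECKS.get(p, [])]


if __name__ == "__main__":
    # Shifting left means most checks are already complete by deployment.
    print(checks_through("deployment"))
```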

Read more: Medianama