Geneva Cyber Week to bring diplomacy, cyber policy, and AI security debates together

The United Nations Institute for Disarmament Research and the Swiss Federal Department of Foreign Affairs will co-host Geneva Cyber Week from 4 to 8 May 2026, bringing policymakers, diplomats, technical experts, industry leaders, academics, and civil society representatives to venues across Geneva and online for a week of discussions on cyber stability, resilience, governance, digitalisation, and the security implications of emerging technologies, including AI.

Returning after its inaugural edition, the event is being positioned as a response to a more fragile cyber and geopolitical environment. Held under the theme ‘Advancing Global Cooperation in Cyberspace’, Geneva Cyber Week 2026 comes at a moment of mounting cyber insecurity, intensifying geopolitical tension, and rapid technological change, with organisers framing the gathering as a space for more practical cooperation across diplomatic, technical, operational, and policy communities.

“Cybersecurity is no longer a niche technical issue; it is a strategic policy challenge with implications for international peace, economic stability and public trust. At a moment of growing fragmentation and accelerating technological change, Geneva Cyber Week brings together the communities that need to be in the room — diplomatic, technical, operational and policy — to move from shared concern to practical cooperation,” said Dr Giacomo Persi Paoli, Head of Security and Technology Programme at UNIDIR.

The programme will feature nearly 90 events and reinforce Geneva’s role as a centre for cyber diplomacy, international cooperation, and digital governance. Scheduled sessions include UNIDIR’s Cyber Stability Conference, Peak Incident Response organised by the Swiss CSIRT Forum, Digital International Geneva, the World Economic Forum Annual Meeting on Cybersecurity, and a Council of Europe session titled ‘Artificial Intelligence, Cybercrime and Electronic Evidence: Risks, Opportunities, and Global Cooperation’.

The week will also include partner-led panels, workshops, simulations, exhibitions, and networking events to connect specialist communities that do not always work in the same room. That broader structure reflects an effort to treat cyber issues not only as a technical or security matter but also as a governance, trust-building, and international-coordination challenge.

“At a time when digital threats know no borders, fostering inclusive discussions is essential to building trust, advancing common norms, and promoting a secure and open cyberspace for all. International Geneva provides an unparalleled multilateral environment to address these cybersecurity challenges collectively. Geneva Cyber Week’s diverse programme embodies this collaborative spirit,” said Marina Wyss Ross, Deputy Head of International Security Division and Chief of Section for Arms Control, Disarmament and Cybersecurity at the Swiss FDFA.

Across the city, Geneva will also mark the week visually, including flags on the Mont Blanc Bridge and special illumination of the Jet d’Eau on Monday evening. But beyond the symbolism, the event’s significance lies in how it seeks to bring cyber diplomacy, incident response, governance debates, and emerging technology risks into the same international conversation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Japan approves APPI amendment bill on personal data, AI training, and fines

Japan’s Cabinet has approved a bill to amend the Act on the Protection of Personal Information, or APPI, for submission to parliament.

The proposed amendments combine stricter enforcement with regulatory easing. They would introduce an administrative fine system, strengthen protections for children’s data and certain biometric data, and allow broader use of personal data for AI training. The bill would also ease some data-breach notification requirements.

Digital Minister of Japan, Hisashi Matsumoto, said enabling the use of sensitive personal data without consent is important for developing domestic AI models. He said the bill seeks to balance that objective with stronger protections for children’s data and facial-recognition data, as well as the introduction of administrative fines.

The fine mechanism would be introduced in a limited form. Provisions to impose fines for large-scale data breaches resulting from inadequate security measures were removed. Instead, the bill would target improper acquisition or use of personal data, unlawful provision of data to third parties, and misuse of sensitive data beyond stated statistical purposes, including transfers to third parties.

According to the proposal, fines would apply in large-scale cases involving more than 1,000 affected individuals, with amounts linked to profits derived from unlawful data handling. During drafting, the Personal Information Protection Commission also dropped plans to introduce consumer class actions for legal redress, while saying it would continue studying the issue.

The Personal Information Protection Commission is seeking passage during the current parliamentary session. The proposal follows a lengthy amendment process, during which earlier plans faced opposition from business and technology groups.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches child safety framework to address AI risks

OpenAI has introduced a new framework to address the risks of AI-enabled child abuse and strengthen protection mechanisms across digital systems.

The initiative reflects growing concern over how emerging technologies can both enable and prevent harm.

The blueprint focuses on modernising legal frameworks to address AI-generated harmful content, improving reporting and coordination among service providers, and embedding safety measures directly into AI systems.

These measures aim to enhance early detection and prevent misuse at scale.

Developed in collaboration with organisations such as the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, the framework promotes shared standards across industry and public authorities.

It emphasises coordinated responses and stronger accountability mechanisms.

The approach combines technical safeguards, human oversight, and legal enforcement, aiming to improve response speed and reduce risks before harm occurs.

Ultimately, the initiative highlights the need for continuous adaptation as AI capabilities evolve and reshape online safety challenges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Consultation opens on measuring AI energy consumption and emissions in the EU

The European Commission has launched a targeted consultation on measuring the energy consumption and emissions of AI models and systems as part of a broader study on energy-efficient, low-emission AI in the European Union. The consultation seeks stakeholder input on the energy consumption and energy efficiency of general-purpose AI models.

Responses will help refine the study and contribute to a measurement framework for the AI Act’s energy-related objectives, while also supporting the design of a potential AI energy and emissions label, according to the consultation page.

The consultation targets companies ranging from start-ups and small and medium-sized enterprises to large enterprises, as well as other organisations that develop and deploy general-purpose AI models or AI systems, alongside their component and service suppliers.

Background information published by the Commission states that the EU AI Act includes provisions on energy consumption and transparency. Providers of general-purpose AI models are required to document the known or estimated energy consumption of their models as part of their technical documentation obligations under Annex XI of the AI Act.

Input is also sought on the accessibility of data needed to assess energy consumption during both training and inference, as well as on the suitability of different AI performance indicators. The Commission says its goal is to develop a robust, industry-informed framework for measuring AI energy consumption and efficiency.

Registered participants will receive an anonymous online questionnaire, and the AI Office will publish a summary of the results based on aggregated data. Respondents will not be directly quoted.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU advances AI copyright safeguards through GPAI taskforce discussions

The European Commission has convened the second meeting of the Signatory Taskforce under the General-Purpose AI (GPAI) Code of Practice, focusing on copyright protection in AI systems.

The discussion brought together signatories to exchange early implementation practices and technical approaches.

Participants examined methods to reduce copyright risks in AI-generated outputs, highlighting measures applied across the model’s lifecycle, including data selection, training, and deployment.

Emphasis was placed on combining technical safeguards with organisational processes to improve transparency and effectiveness.

One approach presented involved training models on licensed content alongside attribution systems to identify similarities between generated outputs and source material. Such a method aims to support fair remuneration and strengthen accountability within AI development.

The meeting also addressed mechanisms for handling complaints from rights holders, with participants discussing procedures for accessible and timely responses.

The exchange forms part of ongoing EU efforts to refine governance standards for AI systems and copyright compliance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Government Digital Service and DSIT publish Digital and Data Benefits framework

The Department for Science, Innovation & Technology (DSIT) and the Government Digital Service have published the ‘Digital and Data Benefits framework’, a policy paper that provides evidence and analytical methodologies for use in business cases and other associated products for digital and data projects across government. The document says it should be used alongside HM Treasury’s Green Book.

The framework covers AI, service transformation, data, capability, technology, cyber, and interoperability. It says its scope is the articulation and monetisation of digital and data benefits only, and that it is not stand-alone business-case guidance.

In the AI section, the framework states that recent Government Digital Service analysis found £6.3 billion in potential annual savings across the Civil Service, including £1.1 billion in potential cost reductions and £5.2 billion in productivity gains. It says the analysis used a large language model to review 200,000 Civil Service job descriptions, identify more than 1.5 million individual job tasks, and score each task for its potential for augmentation or automation by current AI tools.

The framework also states that a Government Digital Service trial involving 20,000 civil servants using Microsoft Copilot found average time savings of 26 minutes per day. It says more than 70% of users in the trial cohort spent less time searching for information and performing mundane tasks, and more time on higher-value tasks, innovation, or public service impact.

Beyond AI, the document sets out appraisal approaches for service transformation, data, capability, technology, cyber, and interoperability. It also states that sensitivity analysis is essential and that benefits identified in one theme should not be double-counted in other areas.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Singapore to update cybersecurity standards and vendor obligations amid AI-enabled threats

Singapore’s Ministry of Digital Development and Information said the government will review and update cybersecurity standards and obligations as part of its response to evolving cyber threats, including AI-enabled attacks.

In a written parliamentary reply, the ministry said Singapore’s position as a major financial hub and digital economy makes it an attractive target for malicious actors. It added that the Cyber Security Agency of Singapore regularly updates the public on cybersecurity threats through SingCERT advisories and the Singapore Cyber Landscape publication.

The ministry said critical systems are already subject to higher cybersecurity standards and obligations under the Cybersecurity Act. It also said the government has invested in capability development, citing initiatives such as the Cybersecurity Development Programme and national exercises including Exercise Cyber Star.

As the threat evolves, so must the response, the ministry said. It stated that the Cyber Security Agency of Singapore will review and update cybersecurity standards and obligations to strengthen security controls, and that the government will help owners of critical systems better detect threats, including those from advanced threat actors and AI-enabled threats, through proprietary threat detection systems.

For government systems, the ministry said GovTech has internal guidelines to safeguard systems that hold sensitive data and provide important government services. It added that GovTech will introduce more stringent cybersecurity and data protection obligations for government vendors, including requiring vendors that manage critical systems and sensitive government data to meet Cyber Trust Mark requirements.

The reply also pointed to measures for businesses and consumers. It said the Cyber Security Agency of Singapore has rolled out initiatives, including its CISO-as-a-Service programme for small and medium enterprises, while mandatory cybersecurity requirements for gateway devices such as home routers have already been introduced.

The ministry added that standards for home routers will be raised further and that Singapore will explore introducing similar standards for IP cameras.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Commission consultation closes on draft AI Act procedure rules

The European Commission is closing its consultation on a draft implementing regulation on detailed arrangements for certain proceedings under the AI Act.

The draft states that it lays down detailed arrangements and conditions for the evaluation of general-purpose AI models under Article 92, including procedures for involving independent experts and selecting them. It also lays down detailed arrangements and procedural safeguards for proceedings in view of the possible adoption of decisions under Article 101 of Regulation (EU) 2024/1689.

Under Article 2, a European Commission decision requesting access to a general-purpose AI model would have to specify the technical means, components, and conditions by which the provider must provide that access. The draft states that access may include APIs, internal access, source code, model weights, access to the infrastructure used to host the model, access to inspect and modify system state, and all levels of access granted to the provider’s own employees.

The draft also states that the European Commission may require a provider to disable and remove logging measures that could track or record the Commission’s access, to the extent necessary to ensure the integrity and confidentiality of the evaluation process. Providers from whom access is requested would have to provide it in a timely and effective manner.

Regarding independent experts, the draft states that the European Commission must take into account factors such as shared ownership, governance, management, personnel, resources, and contractual relationships when assessing independence. It also states that appointed experts must remain independent throughout their appointment and that the confidentiality, integrity, and availability of sensitive information must be protected.

For proceedings that may lead to fines, the draft states that the European Commission may initiate proceedings against relevant conduct by providers of general-purpose AI models. It also states that the Commission may, by decision, order interim measures on grounds of urgency due to a risk of serious damage to health, safety requirements, or other grounds relating to the public interest covered by Regulation (EU) 2024/1689, including preventing a general-purpose AI model from being made available on the market, based on a prima facie finding of an infringement.

Procedural safeguards include written observations on preliminary findings, with a time limit of at least 14 days set by the European Commission, and rules governing access to the file. The draft states that the addressee may obtain access to documents mentioned in the preliminary findings, subject to redactions protecting business secrets and other confidential information, while broader access may be granted under terms of disclosure set by the Commission.

The annex sets format and length requirements for written observations submitted under Article 7. It states that observations must be submitted in a format that allows electronic processing, digitisation, and character recognition, and sets requirements for page format, font, spacing, margins, and numbering. Written observations must not exceed 50 pages, while annexes do not count towards that limit if they have a purely evidential and instrumental function and are proportionate in number and length.

The draft also lays down limitation periods for the imposition and enforcement of penalties, rules on the beginning and setting of time periods, and provisions on the transmission and receipt of information. It states that documents transmitted by digital means must use at least one qualified electronic signature and that, for real-time or near real-time information shared through APIs or equivalent means, the European Commission will define the methods and duration of that sharing.

The regulation states that it would enter into force on the twentieth day following its publication in the Official Journal of the European Union.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Corning and Meta start construction on North Carolina AI cable facility

Corning Incorporated and Meta Platforms have begun construction on a major expansion of Corning’s optical cable manufacturing facility in Hickory, North Carolina. The project will support advanced AI data centres using US-developed technology.

The initiative is part of a multiyear, up to $6 billion agreement between the two companies to accelerate the deployment of high-performance data centres. Under the agreement, Corning will supply Meta with new optical fibre, cable, and connectivity solutions.

Meta will act as the anchor customer for the Hickory expansion, which will produce optical cable critical for AI infrastructure. The expansion is expected to strengthen domestic manufacturing and create additional skilled jobs in North Carolina.

Corning currently employs more than 5,000 people in the state and plans to increase its workforce by 15 to 20 percent. Executives emphasised the partnership’s role in advancing US innovation and supporting the next generation of AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The implementation of the EU AI Act with a focus on general-purpose AI models

Transition from legislation to implementation

The European Union has entered a new phase in the governance of AI, moving from the legislative adoption of the Artificial Intelligence Act (AI Act) towards its practical implementation. This phase places particular emphasis on the obligations of providers of general-purpose AI (GPAI) models, reflecting the increasing role of such systems in the broader digital ecosystem.

The AI Act, adopted in 2024, establishes a comprehensive legal framework for AI within the EU. It introduces a risk-based approach that classifies AI systems into categories ranging from minimal risk to unacceptable risk, with corresponding regulatory requirements.

According to the official text of the regulation, the framework is designed to ensure that AI systems placed on the market in the Union are ‘safe and respect existing law on fundamental rights and Union values.’

While earlier discussions around the Act focused on its legislative negotiation and scope, the current phase centres on how its provisions will be applied in practice.

General-purpose AI models within the AI Act

A key element of this implementation phase concerns general-purpose AI models. These models, which can be integrated into a wide range of downstream applications, occupy a distinct position within the regulatory framework.

The AI Act defines general-purpose AI models as systems that can be used across multiple tasks and contexts and may ‘serve a variety of purposes, both for direct use and for integration into other AI systems.’

That positioning reflects the broad applicability of these models, particularly in areas such as natural language processing, content generation, and data analysis.

The Act also recognises that the widespread deployment of such models may have implications beyond individual use cases, particularly when integrated into high-risk systems.

Obligations for providers of GPAI models

The European Commission, together with the European AI Office, has begun outlining expectations for compliance with provisions related to general-purpose AI.

According to official EU materials, providers of GPAI models are required to ensure that technical documentation is drawn up and kept up to date.


The regulation specifies that providers should ‘draw up and keep up-to-date technical documentation of the model,’ ensuring that relevant information is accessible for compliance and oversight purposes. In addition, transparency obligations require providers to make certain information available to downstream deployers.

These obligations are intended to support the responsible integration of GPAI models into other systems.

Distinction between GPAI and systemic-risk models

The AI Act introduces a distinction between general-purpose AI models and those considered to pose systemic risk.

Models that meet specific criteria, such as scale, capability, or deployment level, may be classified as having a systemic impact.

For such models, additional obligations apply, including requirements related to evaluation, risk mitigation, and reporting. The European Commission has indicated that further guidance will clarify how systemic risk thresholds are determined, including through delegated acts and technical standards.

Role of the European AI Office in implementation

The European AI Office, established within the European Commission, plays a central role in supporting the implementation of the AI Act.

Its responsibilities include contributing to the consistent application of the regulation, coordinating with national authorities, and supporting the development of methodologies for compliance.

Image: European AI Office (source: digital-strategy.ec.europa.eu/en/policies/ai-office)

According to the European Commission, the AI Office is tasked with ‘ensuring the coherent implementation of the AI Act across the Union.’ The Office is also expected to contribute to the development of benchmarks, testing frameworks, and guidance documents that support both regulators and providers.

Phased implementation timeline

The implementation of the AI Act is structured as a phased process, with different provisions becoming applicable over time.

That phased approach allows stakeholders to adapt to the regulatory requirements while enabling authorities to establish enforcement mechanisms.

Provisions related to general-purpose AI models are among the earlier elements to be operationalised, reflecting their central role in the current AI landscape.

The European Commission has indicated that additional implementing acts and guidance documents will be issued as part of this process.

Coordination with national authorities

While the European AI Office plays a coordinating role at the EU level, enforcement remains the responsibility of national authorities within member states.

The AI Act establishes mechanisms for cooperation and information-sharing to support a harmonised approach across the European Union.

National authorities are expected to work closely with the AI Office and the European Commission to oversee compliance and address emerging challenges.

Stakeholder engagement and technical guidance

The implementation phase also involves engagement with a range of stakeholders, including industry actors, civil society organisations, and technical experts.

The European Commission has also initiated consultations and workshops to gather input on practical aspects of implementation, such as documentation standards and risk assessment methodologies.

This process supports the development of operational guidance applicable across sectors and use cases.

Interaction with the EU digital regulatory framework

The AI Act forms part of a broader EU digital policy framework that includes instruments such as the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), and the Digital Markets Act (DMA).

These frameworks address different aspects of the digital ecosystem, including data protection, platform governance, and market competition.

The relationship between the AI Act and these instruments is expected to be clarified further during implementation.

International context: OECD and UN approaches

The governance of general-purpose AI models is also being addressed at the international level.

The OECD AI Principles state that AI systems should be ‘robust, secure and safe throughout their entire lifecycle,’ and emphasise accountability for their functioning.


At the UN level, the Global Digital Compact process addresses issues related to transparency, accountability, and oversight of digital technologies, including AI.

These initiatives provide non-binding guidance, in contrast to the legally binding framework established by the EU AI Act.

Ongoing development of technical standards

The development of technical standards is an important component of the implementation process.

The European Commission has indicated that it will work with standardisation organisations to develop specifications related to documentation, evaluation, and risk management.

These standards are expected to support the practical application of the AI Act’s provisions.

From regulatory framework to regulatory practice

The current phase of the EU AI Act marks a transition from legislative design to regulatory practice.

For providers of general-purpose AI models, this involves preparing to meet obligations related to documentation, transparency, and risk management. For regulators, the focus is on ensuring consistent application of the rules across member states, supported by coordination mechanisms and guidance from the AI Office.

The implementation process is expected to evolve as further guidance is issued.

Conclusion

The European Union’s AI Act is entering its implementation phase, with a particular focus on general-purpose AI models.

That phase involves translating the regulation’s legal provisions into operational requirements, supported by guidance from the European Commission and the AI Office.

The development of technical standards, coordination mechanisms, and compliance frameworks will play a central role in this process. As implementation progresses, further clarification is expected through additional guidance and regulatory measures, contributing to the operationalisation of the EU’s approach to AI governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!