Geneva Cyber Week to bring diplomacy, cyber policy, and AI security debates together

The United Nations Institute for Disarmament Research and the Swiss Federal Department of Foreign Affairs will co-host Geneva Cyber Week from 4 to 8 May 2026, bringing policymakers, diplomats, technical experts, industry leaders, academics, and civil society representatives to venues across Geneva and online for a week of discussions on cyber stability, resilience, governance, digitalisation, and the security implications of emerging technologies, including AI.

Returning after its inaugural edition, the event is being positioned as a response to a more fragile cyber and geopolitical environment. Held under the theme ‘Advancing Global Cooperation in Cyberspace’, Geneva Cyber Week 2026 comes at a moment of mounting cyber insecurity, intensifying geopolitical tension, and rapid technological change, with organisers framing the gathering as a space for more practical cooperation across diplomatic, technical, operational, and policy communities.

“Cybersecurity is no longer a niche technical issue; it is a strategic policy challenge with implications for international peace, economic stability and public trust. At a moment of growing fragmentation and accelerating technological change, Geneva Cyber Week brings together the communities that need to be in the room — diplomatic, technical, operational and policy — to move from shared concern to practical cooperation,” said Dr Giacomo Persi Paoli, Head of Security and Technology Programme at UNIDIR.

The programme will feature nearly 90 events and reinforce Geneva’s role as a centre for cyber diplomacy, international cooperation, and digital governance. Scheduled sessions include UNIDIR’s Cyber Stability Conference, Peak Incident Response organised by the Swiss CSIRT Forum, Digital International Geneva, the World Economic Forum Annual Meeting on Cybersecurity, and a Council of Europe session titled ‘Artificial Intelligence, Cybercrime and Electronic Evidence: Risks, Opportunities, and Global Cooperation’.

The week will also include partner-led panels, workshops, simulations, exhibitions, and networking events to connect specialist communities that do not always work in the same room. That broader structure reflects an effort to treat cyber issues not only as a technical or security matter but also as a governance, trust-building, and international-coordination challenge.

“At a time when digital threats know no borders, fostering inclusive discussions is essential to building trust, advancing common norms, and promoting a secure and open cyberspace for all. International Geneva provides an unparalleled multilateral environment to address these cybersecurity challenges collectively. Geneva Cyber Week’s diverse programme embodies this collaborative spirit,” said Marina Wyss Ross, Deputy Head of International Security Division and Chief of Section for Arms Control, Disarmament and Cybersecurity at the Swiss FDFA.

Across the city, Geneva will also mark the week visually, including flags on the Mont Blanc Bridge and special illumination of the Jet d’Eau on Monday evening. But beyond the symbolism, the event’s significance lies in how it seeks to bring cyber diplomacy, incident response, governance debates, and emerging technology risks into the same international conversation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

French data protection authority sets out 2026 GDPR and AI guidance agenda

The French data protection authority, the Commission nationale de l’informatique et des libertés (CNIL), has outlined the main guidance, consultations, and resources it plans to publish in 2026 to support compliance with the General Data Protection Regulation and certain provisions of the AI Act.

According to the CNIL, the programme is intended to help public and private sector actors prepare for upcoming consultations and anticipate regulatory developments. It says the programme is indicative and may evolve in response to current events.

The CNIL says it will begin work on ‘multi-property’ consent, covering the conditions for obtaining a single consent across several sites or media, particularly where they belong to the same group. It also says it will finalise work on the use of AI in the workplace and in health, including bias risks and safeguards to protect the rights of employees and patients.

The authority also plans to work on transcription and automated analysis tools used in call centres and videoconferencing software, operational content for data protection officers, and clarification of how the GDPR applies to non-anonymous AI models.

In the health sector, it says it will update research reference methodologies, publish its position on how people should be informed when data are reused for research, and issue a consolidated document on the electronic patient record.

On security, the CNIL says it will continue publishing recommendations to improve personal data security, publish the final updated version of its recommendation on remote electronic voting systems, and open public consultations on recommendations covering the security of personal data exchanges, remote identity verification, and endpoint detection and response services. It also says it will publish a recommendation on web filtering gateways.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK government reviews regulatory options for enterprise connected devices

The UK government has said it will update and streamline its proposed code of practice for enterprise connected device security and assess further policy options, including regulation, certification, and other assurance mechanisms, following its call for views on the subject.

The response, published by the Department for Science, Innovation and Technology, says enterprise-connected devices are often critical to business operations but can lack adequate security measures. It also states that the UK government’s call for views showed strong support for intervention to improve the cybersecurity of such devices, with 95% of respondents agreeing that the government should do more.

According to the response, 76% of respondents agreed or strongly agreed that the risks posed by enterprise-connected devices are sufficiently distinct from those of other connected devices to warrant an independent code of practice.

The UK government also reports that 78% agreed or strongly agreed with creating new legislation imposing obligations on manufacturers, while 71% agreed or strongly agreed with creating a new global standard based on the code of practice.

The UK government says it will ask manufacturers to use the National Cyber Security Centre’s existing device security principles while this work continues. It also says it will finalise the security principles, make them modular within the broader set of secure-by-design codes of practice, and explore the feasibility of a certification scheme for manufacturers.

The response also states that the UK government will assess options for regulatory measures, following feedback that it needs to go beyond voluntary adoption and include some form of assurance or enforcement mechanism. It adds that the government will review whether the scope of this work should be expanded beyond enterprise-connected devices as part of its broader analysis of technology security.

The document says the UK government will seek to align this work, where possible and necessary, with international developments, including European Union standards processes under the Cyber Resilience Act. It also notes repeated calls from respondents for implementation guides and clearer alignment with existing legislation and standards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Oracle expands AI options for US government agencies

The US government is set to gain expanded AI capabilities through new infrastructure and model deployment options in Oracle Cloud.

These developments aim to improve agencies’ ability to manage critical tasks, from situational awareness to cybersecurity, while maintaining strict security and compliance standards.

High-performance GPUs and AI models will support faster, more reliable inference and training, helping agencies respond more effectively to public needs.

The focus is on enabling secure deployment in environments with sensitive data and complex regulatory requirements, ensuring AI use aligns with public interest and safety.

The expansion builds on existing government AI frameworks, offering capabilities for retrieval-augmented generation, secure inference, and operational analytics.

By integrating AI in a controlled, compliant environment, US agencies can improve efficiency, decision-making, and public service delivery without compromising security.

Ultimately, these advancements by Oracle aim to ensure that government AI adoption benefits citizens directly, supporting transparency, accountability, and effective public administration in high-stakes contexts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Malwarebytes highlights Microsoft findings on WhatsApp attachments used in Windows attacks

Malwarebytes has reported on findings from Microsoft researchers about a campaign that uses WhatsApp attachments to trick Windows users into launching a malicious script that grants attackers remote access to the machine.

According to the Malwarebytes report, Microsoft researchers said the attack does not rely on a software flaw in WhatsApp itself. Instead, it depends on social engineering. Victims receive what appears to be a harmless attachment through WhatsApp, but the file is actually a .vbs script that Windows can execute.

Once opened, the script copies built-in Windows tools into a hidden folder and renames them to appear less suspicious. Microsoft’s analysis, as cited by Malwarebytes, says legitimate system tools are then abused to download additional malware, using a living-off-the-land approach that avoids introducing obvious malicious binaries.

The infection chain is also designed to blend in with normal activity. Further scripts are fetched from mainstream cloud providers, making network traffic appear to be accessing services such as AWS, Tencent Cloud, or Backblaze rather than a clearly suspicious server.

Attempts to gain administrator privileges are part of the process as well. The malware reportedly attempts to alter User Account Control behaviour and registry settings to make system-level changes more quietly and remain active after a reboot.

At the final stage, an unsigned MSI installer deploys remote-access software and other payloads, allowing the attacker to maintain access to the compromised device and its data.

Malwarebytes also highlighted practical safety steps for home users and small businesses, including avoiding unsolicited attachments, enabling file extensions in Windows Explorer so misleading filenames are easier to spot, using up-to-date anti-malware tools, downloading software only from official vendor sites, and treating unexpected UAC prompts or sudden system changes as warning signs. Keeping Windows and other applications updated also remains important.
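The advice about enabling file extensions targets a common trick in campaigns like this one: a script named to look like a document (for example, a file ending in `.pdf.vbs` that appears as a PDF when Windows hides known extensions). As an illustrative sketch, not part of the Malwarebytes guidance, a short Python check can flag filenames where a document-like extension conceals an executable script type:

```python
# Flag filenames that disguise an executable script behind a
# document-like extension, e.g. "invoice.pdf.vbs", which Windows
# may display as "invoice.pdf" when known extensions are hidden.

RISKY_EXTENSIONS = {".vbs", ".js", ".jse", ".wsf", ".bat", ".cmd", ".scr", ".ps1"}
DOCUMENT_EXTENSIONS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".jpg", ".png", ".txt"}

def looks_deceptive(filename: str) -> bool:
    """Return True if the filename ends in a risky script extension
    immediately preceded by a document-like extension."""
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        # Fewer than two extensions: no double-extension trick possible.
        return False
    _, inner, outer = parts
    return "." + outer in RISKY_EXTENSIONS and "." + inner in DOCUMENT_EXTENSIONS

print(looks_deceptive("holiday_photo.jpg.vbs"))  # True
print(looks_deceptive("report.pdf"))             # False
```

The extension lists here are assumptions for illustration; real mail and endpoint filters maintain far broader blocklists and inspect file content, not just names.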

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack on Hasbro exposes vulnerabilities in large enterprise systems

Hasbro has confirmed a cyberattack that disrupted internal systems and may take several weeks to resolve. The company detected unauthorised access on 28 March and responded by shutting down parts of its infrastructure to contain the incident.

Operations continue under contingency measures, allowing order processing and product distribution despite system disruptions.

However, ongoing security efforts suggest the threat may not yet be fully contained, and external cybersecurity specialists have been engaged to support the investigation.

The company has not disclosed the nature of the attack, and it remains unclear whether data has been exfiltrated. Public statements indicate that the full scope and impact of the breach are still under assessment, with uncertainty over potential financial or operational consequences.

The incident reflects a broader trend of cyberattacks targeting large corporations to disrupt operations and extract value.

Previous cases, including disruptions at Jaguar Land Rover, highlight the potential for prolonged economic impact and the increasing importance of resilience in corporate cybersecurity strategies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EPO accelerates digital patent shift with paperless system by 2027

The European Patent Office (EPO) is accelerating its transition towards a fully digital patent system, with plans to implement a paperless patent-granting process by 2027.

Discussions at the latest eSACEPO meeting highlighted steady progress and broad stakeholder support for modernising patent workflows.

Electronic filing and communication are set to become the default, with paper-based processes limited to exceptional cases. The shift aims to improve efficiency and accessibility, supported by legal adjustments and the gradual introduction of structured data formats to enhance processing accuracy.

Digital tools continue to evolve, with the MyEPO platform expanding its functionality through interface upgrades, self-service features and new capabilities such as colour drawing submissions.

The rollout of DOCX filing, alongside optional PDF backups, reflects a cautious approach designed to balance innovation with reliability.

AI is increasingly integrated into patent examination processes, supporting tasks such as search and documentation.

However, the EPO maintains a human-centric model, ensuring that decision-making authority remains with patent examiners while AI enhances productivity and consistency.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

California challenges federal approach with new AI rules

The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.

An executive order signed by Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and protecting fundamental rights.

The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.

It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.

Federal guidance has discouraged state-level intervention, framing such efforts as obstacles to technological leadership.

The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.

California's initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare data breach raises concerns over cloud security

A cybersecurity incident involving CareCloud has exposed vulnerabilities in the protection of sensitive medical information, following unauthorised access to patient records stored within its systems.

The breach was detected on 16 March and allowed attackers to access electronic health records for several hours, raising concerns about potential data exposure.

The company has stated that the intrusion was contained on the same day, with systems restored and an external investigation launched.

However, uncertainty remains about whether any data were extracted and the scale of the potential impact, particularly given the company’s role in supporting tens of thousands of healthcare providers and millions of patients.

The incident reflects broader structural risks within digital healthcare infrastructure, where centralised storage of highly sensitive data increases the potential impact of cyberattacks.

Cloud environments, including services provided by Amazon Web Services, are increasingly integral to such systems, amplifying both efficiency and exposure.

The breach follows a pattern of escalating cyber threats targeting healthcare data, driven by its high value in criminal markets.

As investigations continue, the case underscores the need for stronger data protection measures, enhanced monitoring systems and more robust regulatory oversight to safeguard patient information.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!