Uganda to host Digital Government Africa 2026 summit

Uganda has announced that it will host the 2026 Digital Government Africa conference, presenting the event as a platform for continental dialogue on digital transformation, public service modernisation, and government innovation.

The announcement was made at a press conference in Kampala by the Ministry of ICT and National Guidance, the National Information Technology Authority of Uganda, and representatives of African Brains Global.

According to the organisers, the summit will bring together ministers, regulators, cybersecurity experts, cloud and data centre providers, digital finance institutions, investors, innovators, and development partners from across Africa and beyond. The event is scheduled to take place in Kampala from 6 to 8 October 2026.

Uganda’s Minister of ICT and National Guidance, Chris Baryomunsi, said the conference reflects growing confidence in the country’s digital transformation efforts and offers an opportunity to showcase how ICT is shaping service delivery and national development. The government linked the summit to Uganda’s wider Digital Transformation Roadmap, which focuses on digital infrastructure, e-government services, cybersecurity resilience, digital skills, and innovation.

Officials also pointed to Uganda’s expanding digital infrastructure. According to the ministry, the National Backbone Infrastructure now exceeds 5,000 kilometres of fibre-optic cable, connecting government institutions, districts, and urban centres, while more than 1,500 government sites use high-speed internet to support systems such as financial management, e-procurement, and online tax services.

The government also cited broader indicators of digital growth, including more than 44.3 million active mobile connections, expanding internet access through 4G and emerging 5G trials, and an ICT sector contributing more than 9% to GDP. Officials said hosting the summit should strengthen engagement between policymakers and innovators and raise Uganda’s profile as an ICT investment destination.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity and AI safety in focus at European Parliament discussion

Members of the European Parliament’s Committee on the Internal Market and Consumer Protection are set to discuss the safety of AI systems that could pose serious security risks.

According to the event description, the discussion will examine how existing EU legislation applies in practice, particularly the AI Act and the Cybersecurity Act. It will focus on how advanced AI systems are developed and managed when they may present security risks, and on how companies are implementing the EU rules and the challenges they face.

Experts from ENISA, the European Union Agency for Cybersecurity, and the European Commission are expected to take part. They will explain how the relevant legal and regulatory frameworks operate in practice across the EU, including the rules governing AI systems.

The discussion also comes as the European Commission has proposed changes to the Cybersecurity Act. In the European Parliament, the Committee on Industry, Research and Energy is leading work on the file, while IMCO is contributing an opinion focused on internal market and consumer protection aspects.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UAE launches national AI security lab for certification and cyber resilience

The UAE Cyber Security Council, Cisco and Open Innovation AI have launched the UAE’s National AI Test and Validation Lab, creating a national platform designed to assess the security, safety and trustworthiness of AI systems.

Hosted in Abu Dhabi, the facility will evaluate AI models, autonomous agents and applications before deployment across government and private sector environments. The initiative forms part of the UAE’s wider strategy to strengthen sovereign AI capabilities and reinforce cybersecurity protections as AI adoption accelerates across critical infrastructure and public services.

According to UAE Cyber Security Council Head Dr Mohamed Al Kuwaiti, the laboratory aims to ensure AI systems deployed across the country remain aligned with national cybersecurity policies and trusted governance standards.

The facility will conduct assessments covering model robustness, prompt injection threats, jailbreak vulnerabilities, privacy risks, data leakage, supply chain integrity and autonomous agent behaviour.

Systems meeting the required standards will receive a national certification mark intended to provide assurance for regulators, businesses and citizens. Evaluations will also measure compliance against international frameworks, including ISO 42001, MITRE ATLAS, NIST AI RMF and OWASP standards for large language models and AI agents.

The lab combines Cisco AI-ready infrastructure powered by NVIDIA GPUs with Open Innovation AI orchestration and automated security testing platforms.

UAE authorities expect the centre to scale to analysing tens of thousands of AI agents annually, supporting sectors including finance, healthcare, telecommunications, energy and critical national infrastructure as the country expands its adoption of agentic AI technologies.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Cyberattack disrupts systems across San Diego Community College District

San Diego Community College District reported a cyberattack affecting more than 90,000 students, prompting the shutdown of several systems across its campuses. District officials described the incident as a ‘failed attack’, saying that no data appears to have been compromised.

As a precaution, internet access and key systems, including websites, email, web-based phones, and student registration platforms, were taken offline. The disruption began over the weekend and affected San Diego Miramar College, San Diego Mesa College, San Diego City College, and continuing education campuses across the district.

Classes continued in some locations without internet access, while services such as cafeterias and bookstores were closed. Students also reported relying on personal hotspots and facing difficulties accessing online course materials and classes.

District officials said the cyberattack may have originated from a sophisticated overseas operation. No ransom demand had been reported at the time of publication, and it remained unclear when all systems would be fully restored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australia and Japan expand cooperation on AI, supply chains and resilience

Australia and Japan have issued a joint declaration on economic security cooperation, stating that economic and technological resilience are central to national security and setting out a broad agenda for closer bilateral coordination across supply chains, critical technologies, and Indo-Pacific connectivity.

The declaration states that economic resilience is foundational to both countries’ security and that the framework is intended to strengthen strategic autonomy, indispensability, and regional resilience.

Furthermore, the declaration commits the two governments to closer policy alignment through existing bilateral mechanisms and to consultation on economic security contingencies linked to geopolitical tensions, economic coercion, and major market disruptions.

A major focus is on supply chain security in strategically significant sectors. Australia and Japan reaffirmed their partnership on minerals, energy, food, and industrial goods, while expressing concern over economic coercion, harmful overcapacity, and export restrictions, particularly in critical minerals.

The declaration also highlights cooperation on critical minerals projects, domestic smelting and metals processing, and coordination among government-backed finance institutions to support investment and supply chain resilience.

The text also emphasises critical and emerging technologies. Australia and Japan say they will deepen cooperation on research security and integrity, while promoting trusted collaboration between governments, national laboratories, industry, and academia in areas including AI, data centres, quantum, biotechnology, space, undersea cables, and telecommunications. The declaration also links advanced technologies to defence industry cooperation and supply chain collaboration.

In the Indo-Pacific, the two countries say they will work together to foster a safe, secure, and trustworthy AI and digital ecosystem, including through the Hiroshima AI Process and cooperation on digital infrastructure such as telecommunications, undersea cables, data centres, and all-photonics networks. The declaration also commits them to stronger coordination on secure undersea cables, describing them as vital regional infrastructure.

More broadly, Australia and Japan reaffirm support for a rules-based international economic order centred on the World Trade Organization, while also backing further work through the The Comprehensive and Progressive Agreement for Trans-Pacific Partnership, the Asia-Pacific Economic Cooperation, the Quad, the Asia Zero Emission Community, and other regional initiatives.

The declaration presents economic security cooperation not only as a bilateral priority but as part of a wider effort to strengthen resilience, secure connectivity, and trusted technology governance across the Indo-Pacific.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum report calls for shift towards cyber resilience amid global threats

A World Economic Forum report states that the growing complexity of global cyber threats requires a shift from traditional cybersecurity approaches towards a broader model of cyber resilience.

The report notes that with nearly 70% of the global population online, digital infrastructure underpins critical sectors including healthcare, finance and public services. While interconnected systems deliver significant benefits, they also create cascading risks that can spread rapidly across borders and industries.

Recent cyber incidents have demonstrated how local breaches can escalate into global disruptions, exposing vulnerabilities in highly interconnected systems, the report notes. At the same time, the rise of state-linked cyber activity and large-scale cybercrime adds further complexity to the threat landscape.

The report by the WEF highlights fragmentation as a major barrier to effective response. Differences in political priorities, regulatory frameworks and technical capabilities create gaps that attackers can exploit, while limiting the ability of governments and organisations to coordinate effectively.

Emerging technologies such as AI and quantum computing are expanding both capabilities and risks, the report states.

The WEF report calls for a more coordinated global approach, including implementation of international norms, stronger capacity-building efforts and enhanced cooperation between governments, industry and civil society.

Why does it matter?

The WEF report is important because it reframes cyber threats as systemic, cross-border risks instead of isolated incidents, showing that fragmented regulation, uneven capabilities and weak cooperation can allow a single breach to cascade across critical infrastructure, economies and public services. Emerging technologies like AI are accelerating both the scale and sophistication of attacks, making coordinated international resilience a necessary condition for maintaining stability.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Peacebuilding and AI in focus at UNSSC webinar series

The United Nations System Staff College has highlighted growing interest across the UN and the wider peacebuilding community in how artificial intelligence is shaping conflict prevention, arguing that the technology can support peace efforts but cannot replace human judgement, diplomacy, and oversight.

The reflection draws on a three-part webinar series launched by UNSSC to examine AI governance, field use, and ethical risks in peacebuilding. According to the text, one message ran across all three discussions: AI may offer real value for conflict prevention, but its role should remain supportive rather than substitutive.

The piece argues that AI is already being used across the UN peace and security pillar and should be introduced only where it improves effectiveness, such as by handling repetitive tasks and allowing staff to focus on analysis, leadership, and political judgement. It also stresses that principles long associated with peacebuilding, including trust and ‘do no harm’, should apply across the full AI stack, from data and infrastructure to model design and deployment.

Examples cited from the webinar series include the use of augmented intelligence in early warning systems, where machine learning is combined with human contextual knowledge, and an AI-enabled WhatsApp chatbot used in Yemen to broaden participation in mediation, particularly among women and young people. The text presents these cases as evidence that AI can extend the reach of peacebuilding tools without replacing practitioners.

The final part of the reflection focuses on governance and ethics. It argues that while ethical AI principles are widely discussed, they need to be translated into practical, context-specific safeguards, especially in conflict settings. It also notes that risks differ across use cases such as early warning, social media monitoring, and mediation support, and says meaningful governance requires input from diplomats, researchers, mediators, and the private sector.

UNSSC says the webinar series drew between 300 and 500 registrants per session, which it presents as evidence of strong demand for more targeted learning on AI and peacebuilding. The college argues that its role should extend beyond convening discussion to turning those debates into practical knowledge for UN practitioners working at the intersection of AI and conflict prevention.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft reports large-scale phishing campaign targeting organisations across sectors

Microsoft has disclosed a phishing campaign aimed at stealing credentials from more than 35,000 users across 26 countries. The attack, detected in April 2026, targeted over 13,000 organisations, with a heavy concentration in healthcare, financial services, professional services, and technology sectors.

Microsoft said the campaign used email templates designed to mimic internal corporate communications, often framed as code of conduct or compliance-related notices.

Attackers created a sense of urgency through time-sensitive prompts and attached PDFs that redirected victims to credential-harvesting pages hosted on attacker-controlled infrastructure, Microsoft added.

The attack chain included multiple verification steps, such as CAPTCHA screens and intermediate landing pages intended to bypass automated defences and increase legitimacy.

Ultimately, victims were directed to fake sign-in portals using adversary-in-the-middle techniques, enabling real-time capture of credentials and authentication tokens, including multi-factor authentication bypass.

The disclosure comes amid a wider surge in phishing activity, with Microsoft reporting billions of attempts and a rapid rise in QR code-based attacks and CAPTCHA-gated phishing flows.

Why does it matter? 

The campaign shows phishing evolving into highly convincing, enterprise-style attacks that are harder to detect and increasingly scalable. By bypassing both human judgment and security controls like multi-factor authentication, it significantly raises the risk of large-scale account compromise.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum report highlights growing role of AI in cybersecurity operations

A World Economic Forum white paper (Empowering Defenders: AI for Cybersecurity), developed with KPMG, states that AI is becoming a core capability for modern cybersecurity. The report notes that attackers are using AI to increase speed, scale and sophistication, while defenders are also adopting AI to improve detection, response and resilience.

The report describes how AI is being used across the cybersecurity lifecycle, from cyber governance and risk identification to threat detection, incident response and recovery. Case studies from major organisations highlight applications in phishing detection, vulnerability management, malware analysis, threat intelligence and automated security reviews.

WEF report also states that effective adoption depends on more than technology investment. Organisations need executive support, reliable data, skilled teams, mature infrastructure and clear governance before deploying AI in critical security operations.

The report also highlights the rise of agentic AI, where autonomous systems can detect, coordinate and respond to threats with limited human intervention. It adds that while these systems could help defenders act faster, they may also introduce risks related to accountability, unintended behaviour and over-reliance on automation.

Why does it matter?

The central message of the report is that AI can strengthen cyber defence only when paired with human judgement, structured pilots, continuous monitoring and clear safeguards. Without these foundations, organisations risk creating fragile systems instead of resilient ones.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Cybercrime communities face skills gap despite rise of AI tools

A major study by researchers from the universities of Cambridge, Edinburgh, and Strathclyde, published by the Centre for Emerging Technology and Security at the Alan Turing Institute, suggests cybercriminals are still struggling to use AI effectively in their operations despite widespread attention around tools such as ChatGPT.

Researchers analysed more than 100 million posts from underground and dark web forums to assess how AI is being adopted within cybercrime communities.

The research, carried out by the universities of Edinburgh, Strathclyde, and Cambridge using the CrimeBB database, found that most offenders lack the technical skills and resources needed to integrate AI into criminal activity. Rather than lowering barriers to entry, AI tools benefit already skilled actors far more than inexperienced ones.

The analysis shows AI is used most successfully in already highly automated areas, such as social media bots linked to harassment and fraud, as well as in efforts to mask patterns that cybersecurity systems might otherwise detect. While experimentation is increasing, the researchers found little sign that AI is delivering a broad or transformative boost to overall cybercriminal capability. Mainstream chatbot guardrails were also found to be limiting harmful use in practice.

The researchers argue that the more immediate concern for industry is not dramatic AI-enabled innovation among cybercriminals, but insecure adoption of AI within legitimate organisations. They point to risks from poorly secured agentic AI systems and from AI-generated ‘vibecoded’ software being deployed without adequate safeguards.

Why does it matter?

The findings challenge a common assumption that generative AI is already giving cybercriminals a major operational advantage. Instead, the more immediate and scalable risk may come from companies deploying insecure AI systems faster than they can secure them. That shifts attention away from worst-case speculation about criminal innovation and towards a more practical cyber policy question: whether organisations are introducing new AI-enabled vulnerabilities into mainstream digital infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!