NCSC issues new guidance for EU cybersecurity rules

The National Cyber Security Centre (NCSC) has published new guidance to assist organisations in meeting the upcoming EU Network and Information Security Directive (NIS2) requirements.

Ireland missed the October 2024 deadline but is expected to adopt the directive soon.

NIS2 broadens the scope of covered sectors and introduces stricter cybersecurity obligations, including heavier fines and legal consequences for non-compliance. The directive aims to improve security across supply chains in both the public and private sectors.

To help businesses comply, the NCSC unveiled Risk Management Measures. It also launched Cyber Fundamentals, a practical framework designed for organisations of varying sizes and risk levels.

Joseph Stephens, NCSC’s Director of Resilience, noted the challenge of broad application and praised cooperation with Belgium and Romania on a solution for the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare blocks the largest DDoS attack in internet history

Cloudflare has blocked what it describes as the largest distributed denial-of-service (DDoS) attack ever recorded after nearly 38 terabytes of data were unleashed in just 45 seconds.

The onslaught generated a peak traffic rate of 7.3 terabits per second and targeted nearly 22,000 destination ports on a single IP address managed by an undisclosed hosting provider.
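As a rough consistency check of the reported figures (a sketch only: it assumes 'nearly 38 terabytes' means about 37.4 decimal terabytes and a 45-second window, as reported above):

```python
# Back-of-the-envelope check: does ~37.4 TB in 45 s square with a 7.3 Tbps peak?
# Assumptions: decimal terabytes (10**12 bytes), figures as reported above.
total_bytes = 37.4e12   # ~"nearly 38 terabytes"
duration_s = 45         # attack window in seconds

avg_tbps = total_bytes * 8 / duration_s / 1e12  # terabits per second
print(f"average rate ≈ {avg_tbps:.1f} Tbps")    # sits below the 7.3 Tbps peak, as expected
```

The average works out to roughly 6.6 Tbps, comfortably consistent with a 7.3 Tbps peak.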

Instead of relying on a mix of tactics, the attackers primarily used UDP packet floods, which accounted for almost all of the attack traffic. A small fraction employed outdated diagnostic tools and methods such as reflection and amplification to intensify the network overload.

These techniques exploit services that automatically answer incoming requests: by spoofing the victim’s address on small queries, attackers cause much larger replies to be reflected at the target, multiplying the volume of traffic.
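To put rough numbers on the reflection/amplification mechanism (the amplification factors below are commonly cited approximations for these protocols, not measurements from this attack):

```python
# Why reflection/amplification multiplies traffic: a small spoofed request
# elicits a far larger reply, which is delivered to the victim instead of the
# sender. Factors are commonly cited approximations, not from this incident.
request_bytes = 60
amplifiers = {"DNS (open resolver)": 28, "NTP (monlist)": 556}

for proto, factor in amplifiers.items():
    reflected = request_bytes * factor
    print(f"{proto}: {request_bytes} B request -> ~{reflected} B at the victim")
```

A 60-byte query reflected at 556x, for instance, lands as roughly 33 KB at the target, which is why a modest botnet can generate terabit-scale floods.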

Originating from 161 countries, the attack saw nearly half its traffic come from IPs in Brazil and Vietnam, with the remainder traced to Taiwan, China, Indonesia, and the US.

Despite appearing globally orchestrated, most traffic came from compromised devices—often everyday items infected with malware and turned into bots without their owners’ knowledge.

To manage the unprecedented data surge, Cloudflare used a decentralised approach. Traffic was rerouted to data centres close to its origin, while advanced detection systems identified and blocked harmful packets without disturbing legitimate data flows.

The incident highlights the scale of modern cyberattacks and the growing sophistication of defences needed to stop them.


Banks and tech firms create open-source AI standards

A group of leading banks and technology firms has joined forces to create standardised open-source controls for AI within the financial sector.

The initiative, led by the Fintech Open Source Foundation (FINOS), includes financial institutions such as Citi, BMO, RBC, and Morgan Stanley, working alongside major cloud providers like Microsoft, Google Cloud, and Amazon Web Services.

Known as the Common Controls for AI Services project, the effort seeks to build neutral, industry-wide standards for AI use in financial services.

The framework will be tailored to regulatory environments, offering peer-reviewed governance models and live validation tools to support real-time compliance. It extends FINOS’s earlier Common Cloud Controls framework, which originated with contributions from Citi.

Gabriele Columbro, Executive Director of FINOS, described the moment as critical for AI in finance. He emphasised the role of open source in encouraging early collaboration between financial firms and third-party providers on shared security and compliance goals.

Instead of isolated standards, the project promotes unified approaches that reduce fragmentation across regulated markets.

The project remains open for further contributions from financial organisations, AI vendors, regulators, and technology companies.

As part of the Linux Foundation, FINOS provides a neutral space for competitors to co-develop tools that enhance the safety, transparency, and efficiency of AI adoption in finance.


EU and Australia to begin negotiations on security and defence partnership

Brussels and Canberra are set to begin negotiations on a Security and Defence Partnership (SDP). The announcement follows a meeting between European Commission President Ursula von der Leyen, European Council President António Costa, and Australian Prime Minister Anthony Albanese.

The proposed SDP aims to establish a formal framework for cooperation in a range of security-related areas.

These include defence industry collaboration, counter-terrorism and cyber threats, maritime security, non-proliferation and disarmament, space security, economic security, and responses to hybrid threats.

SDPs are non-binding agreements facilitating enhanced political and operational cooperation between the EU and external partners. They do not include provisions for military deployment.

The European Union maintains SDPs with seven other countries: Albania, Japan, Moldova, North Macedonia, Norway, South Korea, and the United Kingdom. The forthcoming negotiations with Australia would expand this network, potentially increasing coordination on global and regional security issues.


South Korea’s SK Group and AWS team up on AI infrastructure

South Korean conglomerate SK Group has joined forces with Amazon Web Services (AWS) to invest 7 trillion won (approximately $5.1 billion) in building a large-scale AI data centre in Ulsan, South Korea. The project aims to bolster the country’s AI infrastructure over the next 15 years.

According to South Korea’s Ministry of Science and ICT, construction of the facility will begin in September 2025, and it is expected to become fully operational by early 2029. Once complete, the Ulsan centre will have a power capacity exceeding 100 megawatts. AWS will contribute $4 billion to the project.

SK Group stated on Sunday that the data centre will support Korea’s AI ambitions by integrating high-speed networks, advanced semiconductors, and efficient energy systems. In a LinkedIn post, SK Group chairman Chey Tae-won said the company is ‘uniquely positioned’ to drive AI innovation.

Chey highlighted the role of several SK affiliates in the project, including SK Hynix for high-bandwidth memory, SK Telecom and SK Broadband for network operations, and SK Gas and SK Multi Utility for infrastructure and energy.

The initiative is part of SK Group’s broader commitment to AI investment. In 2023, the company pledged to invest 82 trillion won by 2026 in HBM chip development, data centres, and AI-powered services.

The group has also backed AI startups such as Perplexity, Twelve Labs, and Korean LLM developer Upstage. Its chip unit, Sapeon, merged with rival Rebellions last year, creating a company valued at 1.3 trillion won.

Other major Korean players are also ramping up AI efforts. Tech giant Kakao recently announced plans to invest 600 billion won in an AI data centre and partnered with OpenAI to incorporate ChatGPT technology into its services.

The tech industry in South Korea continues to race towards AI dominance, with domestic firms making substantial investments to secure future leadership in AI infrastructure and applications.


Lawmakers at IGF 2025 call for global digital safeguards

At the Internet Governance Forum (IGF) 2025 in Norway, a high‑level parliamentary roundtable convened global lawmakers to tackle the pressing challenge of digital threats to democracy. Led by moderator Nikolis Smith, the discussion included Martin Chungong, Secretary‑General of the Inter‑Parliamentary Union (via video), and lawmakers from Norway, Kenya, California, Barbados, and Tajikistan. The central concern was how AI, disinformation, deepfakes, and digital inequality jeopardise truth, electoral integrity, and public trust.

Grunde Almeland, Member of the Norwegian Parliament, warned: ‘Truth is becoming less relevant … it’s hard and harder to pierce [confirmation‑bias] bubbles with factual debate and … facts.’ He championed strong, independent media, noting Norway’s success as ‘number one on the press freedom index’ due to its editorial independence and extensive public funding. Almeland emphasised that legislation exists, but practical implementation and international coordination are key.

Kenyan Senator Catherine Mumma described a comprehensive legal framework—including cybercrime, data protection, and media acts—but admitted gaps in tackling misinformation. ‘We don’t have a law that specifically addresses misinformation and disinformation,’ she said, adding that social‑media rumours ‘[sometimes escalate] to violence’ especially around elections. Mumma called for balanced regulation that safeguards innovation, human rights, and investment in digital infrastructure and inclusion.

California Assembly Member Rebecca Bauer‑Kahn outlined her state’s trailblazing privacy and AI regulations. She highlighted a new law mandating watermarking of AI‑generated content and requiring political‑advert disclosures, although these face legal challenges as potentially ‘forced speech.’ Bauer‑Kahn stressed the need for ‘technology for good,’ including funding universities to develop watermarking and authentication tools—like Adobe’s system for verifying official content—emphasising that visual transparency restores trust.

Barbados MP Marsha Caddle recounted a recent deepfake falsely attributed to her prime minister, saying it risked ‘put[ting] at risk … global engagement.’ She promoted democratic literacy and transparency, explaining that parliamentary meetings are broadcast live to encourage public trust. She also praised local tech platforms such as Zindi in Africa, saying they foster home‑grown solutions to combat disinformation.

Tajikistan MP Zafar Alizoda highlighted regional disparities in data protections, noting that while EU citizens benefit from GDPR, users in Central Asia remain vulnerable. He urged platforms to adopt uniform global privacy standards: ‘Global platforms … must improve their policies for all users, regardless of the country of the user.’

Several participants—including John K.J. Kiarie, MP from Kenya—raised the crucial issue of ‘technological dumping,’ whereby wealthy nations and tech giants export harmful practices to vulnerable regions. Kiarie warned: ‘My people will be condemned to digital plantations… just like … slave trade.’ The consensus called for global digital governance treaties akin to nuclear or climate accords, alongside enforceable codes of conduct for Big Tech.

Despite challenges—such as balancing child protection, privacy, and platform regulation—parliamentarians reaffirmed shared goals: strengthening independent media, implementing watermarking and authentication technologies, increasing public literacy, ensuring equitable data protections, and fostering global cooperation. As Grunde Almeland put it: ‘We need to find spaces where we work together internationally… to find this common ground, a common set of rules.’ Their unified message: safeguarding democracy in the digital age demands national resilience and collective global action.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

FC Barcelona documents leaked in ransomware breach

A recent cyberattack on French insurer SMABTP’s Spanish subsidiary, Asefa, has led to the leak of over 200GB of sensitive data, including documents related to FC Barcelona.

The ransomware group Qilin has claimed responsibility for the breach, highlighting the growing threat posed by such actors. With high-profile victims now in the spotlight, the reputational damage could be substantial for Asefa and its clients.

The incident comes amid growing concern among UK small and medium-sized enterprises (SMEs) about cyber threats. According to GlobalData’s UK SME Insurance Survey 2025, more than a quarter of SMEs have been influenced by media reports of cyberattacks when purchasing cyber insurance.

Meanwhile, nearly one in five cited a competitor’s victimisation as a motivating factor.

Over 300 organisations have fallen victim to Qilin in the past year alone, reflecting the broader rise of AI-enabled cybercrime.

AI allows cybercriminals to refine their methods, making attacks more effective and challenging to detect. As a result, companies are increasingly recognising the importance of robust cybersecurity measures.

With threats escalating, there is an urgent call for insurers to offer more tailored cyber coverage and proactive services. The breach involving FC Barcelona is a stark reminder that no organisation is immune and that better risk assessment and resilience planning are now business essentials.


Generative AI and the continued importance of cybersecurity fundamentals

The introduction of generative AI (GenAI) is influencing developments in cybersecurity across industries.

AI-powered tools are being integrated into systems such as endpoint detection and response (EDR) platforms and security operations centres (SOCs), while threat actors are reportedly exploring ways to use GenAI to automate known attack methods.

While GenAI presents new capabilities, common cybersecurity vulnerabilities remain a primary concern. Issues such as outdated patching, misconfigured cloud environments, and limited incident response readiness are still linked to most breaches.

Cybersecurity researchers have noted that GenAI is often used to scale familiar techniques rather than create new attack methods.

Social engineering, privilege escalation, and reconnaissance remain core tactics, with GenAI accelerating their execution. There are also indications that some GenAI systems can be manipulated to reveal sensitive data, particularly when not properly secured or configured.

Security experts recommend maintaining strong foundational practices such as access control, patch management, and configuration audits. These measures remain critical, regardless of the integration of advanced AI tools.
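As a hypothetical illustration of what an automated configuration audit checks for (the setting names and the 30-day threshold below are invented for the example, not drawn from any specific tool or standard):

```python
# Hypothetical configuration audit: flag settings of the kind that recur in
# breach post-mortems. Keys and the 30-day patch threshold are illustrative
# inventions, not any real product's schema.
def audit(config: dict) -> list[str]:
    findings = []
    if config.get("cloud_bucket_public"):
        findings.append("storage bucket is publicly readable")
    if config.get("admin_mfa") is False:
        findings.append("admin accounts lack MFA")
    if config.get("days_since_last_patch", 0) > 30:
        findings.append("patch cycle exceeds 30 days")
    return findings

issues = audit({"cloud_bucket_public": True, "admin_mfa": False,
                "days_since_last_patch": 45})
for issue in issues:
    print("FINDING:", issue)
```

Running such checks continuously, rather than ad hoc, is one way the foundational practices described above stay enforced as environments change.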

Some organisations may prioritise tool deployment over training, but research suggests that incident response skills are more effective when developed through practical exercises. Traditional awareness programmes may not sufficiently prepare personnel for real-time decision-making.

Some companies implement cyber drills that simulate attacks under realistic conditions to address this. These exercises can help teams practise protocols, identify weaknesses in workflows, and evaluate how systems perform under pressure. Such drills are designed to complement, not replace, other security measures.

Although GenAI is expected to continue shaping the threat landscape, current evidence suggests that most breaches stem from preventable issues. Ongoing training, configuration management, and response planning efforts remain central to organisational resilience.


Researchers gain control of Tesla charger through firmware downgrade

Tesla’s popular Wall Connector home EV charger was compromised at the January 2025 Pwn2Own Automotive competition, revealing how attackers could gain full control via the charging cable.

The Tesla Wall Connector Gen 3, a widely deployed residential AC charger delivering up to 22 kW, was exploited through a novel attack that used the physical charging connector as the main entry point.

The vulnerability allowed researchers to execute arbitrary code, potentially giving access to private networks in homes, hotels, or businesses.

Researchers from Synacktiv discovered that Tesla vehicles can update the Wall Connector’s firmware via the charging cable using a proprietary, undocumented protocol.

By simulating a Tesla car and exploiting Single-Wire CAN (SWCAN) communications over the Control Pilot line, the team downgraded the firmware to an older version with exposed debug features.

Using a custom USB-CAN adapter and a Raspberry Pi to emulate vehicle behaviour, they accessed the device’s setup Wi-Fi credentials and triggered a buffer overflow in the debug shell, ultimately gaining remote code execution.

The demonstration ended with a visual cue — the charger’s LED blinking — but the broader implication is access to internal networks and potential lateral movement across connected systems.

Tesla has since addressed the vulnerability by introducing anti-downgrade measures in newer firmware versions. The Pwn2Own event remains instrumental in exposing critical flaws in automotive and EV infrastructure, pushing manufacturers toward stronger security.


SoftBank plans $1 trillion AI and robotics park in Arizona

SoftBank founder Masayoshi Son is planning what could become his most audacious venture yet: a $1 trillion AI and robotics industrial park in Arizona.

Dubbed ‘Project Crystal Land’, the initiative aims to recreate a high-tech manufacturing hub reminiscent of China’s Shenzhen, focused on AI-powered robots and next-gen automation.

Son is courting global tech giants — including Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung — to join the vision, though none have formally committed.

The plan hinges on support from federal and state governments, with SoftBank already discussing possible tax breaks with US officials, including Commerce Secretary Howard Lutnick.

While TSMC is already investing $165 billion in Arizona facilities, sources suggest Son’s project has not altered the chipmaker’s current roadmap. SoftBank hopes to attract semiconductor and AI hardware leaders to power the park’s infrastructure.

Son has also approached SoftBank Vision Fund portfolio companies to participate, including robotics startup Agile Robots.

The park may serve as a production hub for emerging tech firms, complementing SoftBank’s broader investments, such as a potential $30 billion stake in OpenAI, a $6.5 billion acquisition of Ampere Computing, and funding for Stargate, a global data centre venture with OpenAI, Oracle, and MGX.

While the vision is still early, Project Crystal Land could radically shift US high-tech manufacturing. Son’s strategy relies heavily on project-based financing, allowing extensive infrastructure builds with minimal upfront capital.

As SoftBank eyes long-term AI growth and increased investor confidence, whether this futuristic park will become a reality or prove another of Son’s high-stakes dreams remains to be seen.
