Microsoft replaces the blue screen of death with a sleek black version in Windows 11

Microsoft has officially removed the infamous Blue Screen of Death (BSOD) from Windows 11 and replaced it with a sleeker, black version.

As part of update KB5062660, the Black Screen of Death now appears briefly (around two seconds) before a restart, showing only a short error message without the sad face or QR code that became symbolic of Windows crashes.

The update, which brings systems to Build 26100.4770, is optional and must be installed manually through Windows Update or the Microsoft Update Catalog.

It is available for both x64 and arm64 platforms. Microsoft plans to roll out the update more broadly in August 2025 as part of its Windows 11 24H2 feature preview.

In addition to the screen change, the update introduces ‘Recall’ for EU users, a tool designed to operate locally and allow users to block or turn off tracking across apps and websites. The feature aims to comply with European privacy rules while enhancing user control.

Also included is Quick Machine Recovery, which can identify and fix system-wide failures using the Windows Recovery Environment. If a device becomes unbootable, it can download a repair patch automatically to restore functionality instead of requiring manual intervention.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Network failure hits EE, BT and affects other UK providers

Thousands of EE and BT customers across the UK encountered widespread network failures on 24 July, primarily affecting voice services.

The outage, lasting over 24 hours, disrupted mobile and landline calls. Over 2,600 EE users reported issues on Downdetector at the peak, around 2:15 p.m. BST. Despite repair efforts, residual outages were still being logged the following day.

Although Vodafone and Three initially confirmed their networks were stable, users who had recently switched carriers or ported numbers from EE experienced failures when making or receiving calls, suggesting cross-network routing issues stemming from EE’s technical fault.

Emergency services were briefly impacted, with some users unable to reach 999, though voice functionality has since resumed. BT and EE apologised and said they were working urgently to restore reliable service.

Given statutory obligations around service resilience, Ofcom has opened inquiries into the scale and causes of the outage. MVNOs using EE infrastructure, such as 1pMobile, also reported customer disruptions.


Big companies grapple with AI’s legal, security, and reputational threats

A recent Quartz investigation reveals that concerns over AI are increasingly overshadowing corporate enthusiasm, especially among Fortune 500 companies.

More than 69% now reference generative AI in their annual reports as a risk factor, while only about 30% highlight its benefits, a dramatic shift toward caution in corporate discourse.

These risks range from cybersecurity threats, such as AI-generated phishing, model poisoning, and adversarial attacks, to operational and reputational dangers stemming from opaque AI decision-making, including hallucinations and biased outputs.

Privacy exposure, legal liability, task misalignment, and the overpromising of AI capabilities (so-called ‘AI washing’) compound corporate exposure, particularly for boards and senior leadership facing directors’ and officers’ liability risks.

Other structural risks include vendor lock-in, disproportionate market influence by dominant AI providers, and supply chain dependencies that constrain flexibility and resilience.

Notably, even cybersecurity experts warn of emerging threats from AI agents, autonomous systems capable of executing actions that complicate legal accountability and oversight.

Companies are advised to adopt comprehensive AI risk-management strategies to navigate this evolving landscape.

Essential elements include establishing formal governance frameworks, conducting bias and privacy audits, documenting risk assessments, ensuring human-in-the-loop oversight, revising vendor contracts, and embedding AI ethics into policy and training, particularly at the board level.


Starlink suffers widespread outage from a rare software failure

Starlink’s widespread outage began around 3 p.m. EDT on Thursday and was attributed to a failure in the company’s core internal software services. The incident affected one of the most resilient satellite systems globally, sparking speculation over whether a botched update or a cyberattack may have been responsible.

Starlink, which serves more than six million users across 140 countries, saw service gradually return after two and a half hours.

Executives from SpaceX, including CEO Elon Musk and Vice President of Starlink Engineering Michael Nicolls, apologised publicly and promised to address the root cause to avoid further interruptions. Experts described it as Starlink’s most extended and severe outage since becoming a major provider.

As SpaceX continues upgrading the network to support greater speed and bandwidth, some experts warned that such technical failures may become more visible. Starlink has rapidly expanded with over 8,000 satellites in orbit and new services like direct-to-cell text messaging in partnership with T-Mobile.

Questions remain over whether Thursday’s failure affected military services like Starshield, which supports high-value US defence contracts.


Microsoft hacking campaign expands into ransomware attacks

A state-aligned cyber-espionage campaign exploiting Microsoft server software vulnerabilities has escalated to ransomware deployment, according to a Microsoft blog post published late Wednesday.

The group, dubbed ‘Storm-2603’ by Microsoft, is now using the SharePoint vulnerability to spread ransomware that can lock down systems and demand digital payments. This shift suggests a move from espionage to broader disruption.

According to Eye Security, a cybersecurity firm from the Netherlands, the number of known victims has surged from 100 to over 400, and the true figure is likely much higher.

‘There are many more, because not all attack vectors have left artefacts that we could scan for,’ said Eye Security’s chief hacker, Vaisha Bernard.

One confirmed victim is the US National Institutes of Health, which isolated affected servers as a precaution. Reports also indicate that the Department of Homeland Security and several other agencies have been impacted.

The breach stems from an incomplete fix to a vulnerability in Microsoft’s SharePoint software. Both Microsoft and Google parent Alphabet have linked the activity to Chinese hackers, a claim Beijing denies.


US lawmaker proposes to train young Americans in AI for cyberwarfare

In a Washington Post opinion piece, Rep. Elise Stefanik and Stephen Prince, CEO of TFG Asset Management, argue that the United States is already engaged in a new form of warfare — cyberwarfare — waged by adversaries like China, Russia, and Iran using tools such as malware, phishing, and zero-day exploits. They assert that the US is not adequately prepared to defend against these threats due to a significant shortage of cyber talent, especially within the military and government.

To address this gap, the authors propose the creation of the United States Advanced Technology Academy (USATA) — a tuition-free, government-supported institution that would train a new generation of Americans in cybersecurity, AI, and quantum computing. Modelled after military academies, USATA would be located in upstate New York and require a five-year public service commitment from graduates.

The goal is to rapidly develop a pipeline of skilled cyber defenders, close the Pentagon’s estimated 30,000-person cyber personnel shortfall, and maintain US leadership in strategic technologies. Stefanik and Prince argue that while investing in AI tools and infrastructure is essential, equally critical is the cultivation of human expertise to operate, secure, and ethically deploy these tools. They position USATA not just as an educational institution but as a national security imperative.

The article places the academy within a broader effort to outpace rivals like China, which is also actively investing in STEM education and tech capacity. The authors call on the President to establish USATA via executive order or bipartisan congressional support, framing it as a decisive and forward-looking response to 21st-century threats.


Meta tells Australia AI needs real user data to work

Meta, the parent company of Facebook, Instagram, and WhatsApp, has urged the Australian government to harmonise privacy regulations with international standards, warning that stricter local laws could hamper AI development. The comments came in Meta’s submission to the Productivity Commission’s review on harnessing digital technology, published this week.

Australia is undergoing its most significant privacy reform in decades. The Privacy and Other Legislation Amendment Bill 2024, passed in November and given royal assent in December, introduces stricter rules around handling personal and sensitive data. The rules are expected to take effect throughout 2024 and 2025.

Meta maintains that generative AI systems depend on access to large, diverse datasets and cannot rely on synthetic data alone. In its submission, the company argued that publicly available information, like legislative texts, fails to reflect the cultural and conversational richness found on its platforms.

Meta said its platforms capture the ways Australians express themselves, making them essential to training models that can understand local culture, slang, and online behaviour. It added that restricting access to such data would make AI systems less meaningful and effective.

The company has faced growing scrutiny over its data practices. In 2024, it confirmed using Australian Facebook data to train AI models, although users in the EU have the option to opt out—an option not extended to Australian users.

Pushback from regulators in Europe forced Meta to delay its plans for AI training in the EU and UK, though it resumed these efforts in 2025.

Australia’s Office of the Australian Information Commissioner has issued guidance on AI development and commercial deployment, highlighting growing concerns about transparency and accountability. Meta argues that diverging national rules create conflicting obligations, which could reduce the efficiency of building safe and age-appropriate digital products.

Critics claim Meta is prioritising profit over privacy, and insist that any use of personal data for AI should be based on informed consent and clearly demonstrated benefits. The regulatory debate is intensifying at a time when Australia’s outdated privacy laws are being modernised to protect users in the AI age.

The Productivity Commission’s review will shape how the country balances innovation with safeguards. As a key market for Meta, Australia’s decisions could influence regulatory thinking in other jurisdictions confronting similar challenges.


EU and Japan deepen AI cooperation under new digital pact

In May 2025, the European Union and Japan formally reaffirmed their long-standing EU‑Japan Digital Partnership during the third Digital Partnership Council in Tokyo. Delegations agreed to deepen collaboration in pivotal digital technologies, most notably artificial intelligence, quantum computing, 5G/6G networks, semiconductors, cloud, and cybersecurity.

A joint statement committed to signing an administrative agreement on AI, aligned with principles from the Hiroshima AI Process. Shared initiatives include a €4 million EU-supported quantum R&D project named Q‑NEKO and the 6G MIRAI‑HARMONY research effort.

Both parties pledged to enhance data governance, digital identity interoperability, regulatory coordination across platforms, and secure connectivity via submarine cables and Arctic routes. The accord builds on the Strategic Partnership Agreement activated in January 2025, reinforcing their mutual platform for rules-based, value-driven digital and innovation cooperation.


AI energy demand accelerates while clean power lags

Data centres are driving a sharp rise in electricity consumption, putting mounting pressure on power infrastructure that is already struggling to keep pace.

The rapid expansion of AI has led technology companies to invest heavily in AI-ready infrastructure, but the energy demands of these systems are outstripping available grid capacity.

The International Energy Agency projects that electricity use by data centres will more than double globally by 2030, reaching levels equivalent to the current consumption of Japan.

In the United States, data centres are expected to use 580 TWh annually by 2028, about 12% of national consumption. AI-specific data centres will be responsible for much of this increase.

Despite this growth, clean energy deployment is lagging. Around two terawatts of projects remain stuck in interconnection queues, delaying the shift to sustainable power. The result is a paradox: firms pursuing carbon-free goals by 2035 now rely on gas and nuclear to power their expanding AI operations.

In response, tech companies and utilities are adopting short-term strategies to relieve grid pressure. Microsoft and Amazon are sourcing energy from nuclear plants, while Meta will rely on new gas-fired generation.

Data centre developers like CloudBurst are securing dedicated fuel supplies to ensure local power generation, bypassing grid limitations. Some utilities are introducing technologies to speed up grid upgrades, such as AI-driven efficiency tools and contracts that encourage flexible demand.

Behind-the-meter solutions—like microgrids, batteries and fuel cells—are also gaining traction. AEP’s 1-GW deal with Bloom Energy would mark the US’s largest fuel cell deployment.

Meanwhile, longer-term efforts aim to scale up nuclear, geothermal and even fusion energy. Google has partnered with Commonwealth Fusion Systems to source power by the early 2030s, while Fervo Energy is advancing geothermal projects.

National Grid and other providers are investing in modern transmission technologies to support clean generation. Cooling technology for data centre chips is another area of focus. Programmes like ARPA-E’s COOLERCHIPS are exploring ways to reduce energy intensity.

At the same time, outdated regulatory processes are slowing progress. Developers face unclear connection timelines and steep fees, sometimes pushing them toward off-grid alternatives.

The path forward will depend on how quickly industry and regulators can align. Without faster deployment of clean power and regulatory reform, the systems designed to power AI could become the bottleneck that stalls its growth.


UK proposes mandatory ransomware reporting and seeks to ban payments by public sector

The UK government has unveiled a new proposal to strengthen its response to ransomware threats by requiring victims to report breaches, enabling law enforcement to disrupt cybercriminal operations more effectively.

Published by the Home Office as part of an ongoing policy consultation, the proposal outlines key measures:

  • Mandatory breach reporting to equip law enforcement with actionable intelligence for identifying and disrupting ransomware groups.
  • A ban on ransom payments by public sector and critical infrastructure entities.
  • A notification requirement for other organisations intending to pay a ransom, allowing the government to assess and respond accordingly.

According to the proposal, these steps would help the UK government carry out ‘targeted disruptions’ in response to evolving ransomware threats, while also improving support for victims.

Cybersecurity experts have largely welcomed the initiative. Allan Liska of Recorded Future noted the plan reflects a growing recognition that many ransomware actors are within reach of law enforcement. Arda Büyükkaya of EclecticIQ praised the effort to formalise response protocols, viewing the proposed payment ban and proactive enforcement as meaningful deterrents.

This announcement follows a consultation process that began in January 2025. While the proposals signal a significant policy shift, they have not yet been enacted into law. The potential ban on ransom payments remains particularly contentious, with critics warning that, in some cases—such as hospital systems—paying a ransom may be the only option to restore essential services quickly.

The UK’s proposal follows similar international efforts, including Australia’s recent mandate for victims to disclose ransom payments, though Australia has stopped short of banning them outright.
