Microsoft replaces the blue screen of death with a sleek black version in Windows 11

Microsoft has officially removed the infamous Blue Screen of Death (BSOD) from Windows 11 and replaced it with a sleeker, black version.

As part of update KB5062660, the Black Screen of Death now appears briefly (around two seconds) before a restart, showing only a short error message, without the sad face or QR code that became symbolic of Windows crashes.

The update, which brings systems to Build 26100.4770, is optional and must be installed manually through Windows Update or the Microsoft Update Catalog.

It is available for both x64 and arm64 platforms. Microsoft plans to roll out the update more broadly in August 2025 as part of its Windows 11 24H2 feature preview.

In addition to the screen change, the update introduces ‘Recall’ for EU users, a tool that processes data locally and lets users block or turn off its tracking of specific apps and websites. The feature aims to comply with European privacy rules while enhancing user control.

Also included is Quick Machine Recovery, which can identify and fix system-wide failures using the Windows Recovery Environment. If a device becomes unbootable, it can download a repair patch automatically to restore functionality instead of requiring manual intervention.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Network failure hits EE, BT and affects other UK providers

Thousands of EE and BT customers across the UK encountered widespread network failures on 24 July, primarily affecting voice services.

The outage, lasting over 24 hours, disrupted mobile and landline calls. More than 2,600 EE users reported issues on Downdetector at the peak, around 2:15 p.m. BST. Despite repair efforts, residual outages were still being logged the following day.

Although Vodafone and Three initially confirmed their networks were stable, users who had recently switched carriers or ported numbers from EE experienced failures when making or receiving calls, suggesting that cross-network call routing was affected by EE’s technical fault.

Emergency services were briefly impacted; some users could not reach 999, though voice functionality later resumed. BT and EE apologised and said they were working urgently to restore reliable service.

Given statutory obligations around service resilience, Ofcom has opened inquiries into the scale and causes of the outage. MVNOs using EE infrastructure, such as 1pMobile, also reported customer disruptions.

Meta boosts teen safety as it removes hundreds of thousands of harmful accounts

Meta has rolled out new safety tools to protect teenagers on Instagram and Facebook, including alerts about suspicious messages and a one-tap option to block or report harmful accounts.

The company said it is increasing efforts to prevent inappropriate contact from adults and has removed over 635,000 accounts that sexualised or targeted children under 13.

Of those accounts, 135,000 were caught posting sexualised comments, while another 500,000 were flagged for inappropriate interactions.

Meta said teen users blocked over one million accounts and reported another million after receiving in-app warnings encouraging them to stay cautious in private messages.

The company also uses AI to detect users lying about their age on Instagram. If flagged, those accounts are automatically converted to teen accounts with stronger privacy settings and messaging restrictions. Since 2024, all teen accounts have been set to private by default.

Meta’s move comes as it faces mounting legal pressure from dozens of US states accusing the company of contributing to the youth mental health crisis by designing addictive features on Instagram and Facebook. Critics argue that more must be done to ensure safety instead of relying on user action alone.

Crypto hacks soar in 2025 as security gaps widen

According to Hacken’s latest research, the crypto sector has already recorded more than $3.1 billion in losses in the first half of 2025. That figure exceeds the total for the whole of 2024, driven mainly by access control flaws, phishing, and AI-driven exploits.

Access control remains the most significant weakness, responsible for almost 60% of recorded losses. The most severe breach was the Bybit attack, where North Korean hackers exploited a wallet signer vulnerability to steal $1.46 billion.

Other incidents include UPCX’s $70 million loss, a manipulated price oracle exploit on KiloEx, and insider fraud involving the Roar staking contract.

Phishing and social engineering continue to evolve, accounting for nearly $600 million in stolen funds. One victim reportedly lost $330 million in Bitcoin, while fake Coinbase support calls drained over $100 million from user wallets.

Meanwhile, AI-related hacks have exploded in volume, increasing by more than 1,000% compared to last year. Most of these incidents stem from insecure APIs and flaws in large language model integrations.

Experts warn that smarter attackers and Web3’s fragmented security practices demand a stronger approach. Hacken advises combining blockchain standards with off-chain protections and better training to stay ahead of threats.

Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI systems procured for government services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash against an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how they are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200 million defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Big companies grapple with AI’s legal, security, and reputational threats

A recent Quartz investigation reveals that concerns over AI are increasingly overshadowing corporate enthusiasm, especially among Fortune 500 companies.

Some 69% now reference generative AI in their annual reports as a risk factor, while only about 30% highlight its benefits, a dramatic shift toward caution in corporate discourse.

These risks range from cybersecurity threats, such as AI-generated phishing, model poisoning, and adversarial attacks, to operational and reputational dangers stemming from opaque AI decision-making, including hallucinations and biased outputs.

Privacy exposure, legal liability, task misalignment, and the overpromising of AI capabilities (so-called ‘AI washing’) compound corporate exposure, particularly for boards and senior leadership facing directors’ and officers’ liability risks.

Other structural risks include vendor lock-in, disproportionate market influence by dominant AI providers, and supply chain dependencies that constrain flexibility and resilience.

Notably, even cybersecurity experts warn of emerging threats from AI agents, autonomous systems capable of executing actions that complicate legal accountability and oversight.

Companies are advised to adopt comprehensive AI risk-management strategies to navigate this evolving landscape.

Essential elements include establishing formal governance frameworks, conducting bias and privacy audits, documenting risk assessments, ensuring human-in-the-loop oversight, revising vendor contracts, and embedding AI ethics into policy and training, particularly at the board level.

Starlink suffers widespread outage from a rare software failure

The disruption began around 3 p.m. EDT on Thursday and was attributed to a failure in Starlink’s core internal software services. The issue affected one of the most resilient satellite systems globally, sparking speculation over whether a botched update or a cyberattack may have been responsible.

Starlink, which serves more than six million users across 140 countries, saw service gradually return after two and a half hours.

Executives from SpaceX, including CEO Elon Musk and Vice President of Starlink Engineering Michael Nicolls, apologised publicly and promised to address the root cause to avoid further interruptions. Experts described it as Starlink’s most extended and severe outage since becoming a major provider.

As SpaceX continues upgrading the network to support greater speed and bandwidth, some experts warned that such technical failures may become more visible. Starlink has rapidly expanded with over 8,000 satellites in orbit and new services like direct-to-cell text messaging in partnership with T-Mobile.

Questions remain over whether Thursday’s failure affected military services like Starshield, which supports high-value US defence contracts.

VPN interest surges in the UK as users bypass porn site age checks

Online searches for VPNs skyrocketed in the UK following the introduction of new age verification rules on adult websites such as PornHub, YouPorn and RedTube.

Under the Online Safety Act, these platforms must confirm that visitors are over 18 using facial recognition, photo ID or credit card details.

Data from Google Trends showed that searches for ‘VPN’ jumped by over 700 percent on Friday morning, suggesting many users were attempting to sidestep the restrictions by masking their location. VPN services let users make a device appear to be in another country, placing it beyond the reach of local rules.

Critics argue that the measures are both ineffective and risky. Aylo, the company behind PornHub, called the checks ‘haphazard and dangerous’, warning they put users’ privacy at risk.

Legal experts also doubt the system’s impact, saying it fails to block access to dark web content or unregulated forums.

Aylo proposed that age verification should occur on users’ devices instead of websites storing sensitive information. The company stated it is open to working with governments, civil groups and tech firms to develop a safer, device-based system that protects privacy while enforcing age limits.

Microsoft hacking campaign expands into ransomware attacks

A state-aligned cyber-espionage campaign exploiting Microsoft server software vulnerabilities has escalated to ransomware deployment, according to a Microsoft blog post published late Wednesday.

The group, dubbed ‘Storm-2603’ by Microsoft, is now using the SharePoint vulnerability to spread ransomware that can lock down systems and demand digital payments. This shift suggests a move from espionage to broader disruption.

According to Eye Security, a cybersecurity firm from the Netherlands, the number of known victims has surged from 100 to over 400, and the true figure is likely much higher.

‘There are many more, because not all attack vectors have left artefacts that we could scan for,’ said Eye Security’s chief hacker, Vaisha Bernard.

One confirmed victim is the US National Institutes of Health, which isolated affected servers as a precaution. Reports also indicate that the Department of Homeland Security and several other agencies have been impacted.

The breach stems from an incomplete fix to a vulnerability in Microsoft’s SharePoint software. Both Microsoft and Google owner Alphabet have linked the activity to Chinese hackers, a claim Beijing denies.

US lawmaker proposes to train young Americans in AI for cyberwarfare

In a Washington Post opinion piece, Rep. Elise Stefanik and Stephen Prince, CEO of TFG Asset Management, argue that the United States is already engaged in a new form of warfare — cyberwarfare — waged by adversaries like China, Russia, and Iran using tools such as malware, phishing, and zero-day exploits. They assert that the US is not adequately prepared to defend against these threats due to a significant shortage of cyber talent, especially within the military and government.

To address this gap, the authors propose the creation of the United States Advanced Technology Academy (USATA) — a tuition-free, government-supported institution that would train a new generation of Americans in cybersecurity, AI, and quantum computing. Modelled after military academies, USATA would be located in upstate New York and require a five-year public service commitment from graduates.

The goal is to rapidly develop a pipeline of skilled cyber defenders, close the Pentagon’s estimated 30,000-person cyber personnel shortfall, and maintain US leadership in strategic technologies. Stefanik and Prince argue that while investing in AI tools and infrastructure is essential, equally critical is the cultivation of human expertise to operate, secure, and ethically deploy these tools. They position USATA not just as an educational institution but as a national security imperative.

The article places the academy within a broader effort to outpace rivals like China, which is also actively investing in STEM education and tech capacity. The authors call on the President to establish USATA via executive order or bipartisan congressional support, framing it as a decisive and forward-looking response to 21st-century threats.
