Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI systems sold to the government.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows a backlash against an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how those outputs are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200M defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Big companies grapple with AI’s legal, security, and reputational threats

A recent Quartz investigation reveals that concerns over AI are increasingly overshadowing corporate enthusiasm, especially among Fortune 500 companies.

More than 69% now reference generative AI in their annual reports as a risk factor, while only about 30% highlight its benefits, a dramatic shift toward caution in corporate discourse.

These risks range from cybersecurity threats, such as AI-generated phishing, model poisoning, and adversarial attacks, to operational and reputational dangers stemming from opaque AI decision-making, including hallucinations and biased outputs.

Privacy exposure, legal liability, task misalignment, and the overpromising of AI capabilities (so-called ‘AI washing’) compound the dangers, particularly for boards and senior leadership facing directors’ and officers’ liability risks.

Other structural risks include vendor lock-in, disproportionate market influence by dominant AI providers, and supply chain dependencies that constrain flexibility and resilience.

Notably, even cybersecurity experts warn of emerging threats from AI agents, autonomous systems capable of executing actions that complicate legal accountability and oversight.

Companies are advised to adopt comprehensive AI risk-management strategies to navigate this evolving landscape.

Essential elements include establishing formal governance frameworks, conducting bias and privacy audits, documenting risk assessments, ensuring human-in-the-loop oversight, revising vendor contracts, and embedding AI ethics into policy and training, particularly at the board level.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Starlink suffers widespread outage from a rare software failure

Starlink’s network went down on Thursday, with the disruption beginning around 3 p.m. EDT and attributed to a failure in the company’s core internal software services. The issue affected one of the most resilient satellite systems globally, sparking speculation over whether a botched update or a cyberattack may have been responsible.

Starlink, which serves more than six million users across 140 countries, saw service gradually return after two and a half hours.

Executives from SpaceX, including CEO Elon Musk and Vice President of Starlink Engineering Michael Nicolls, apologised publicly and promised to address the root cause to avoid further interruptions. Experts described it as Starlink’s longest and most severe outage since it became a major provider.

As SpaceX continues upgrading the network to support greater speed and bandwidth, some experts warned that such technical failures may become more visible. Starlink has rapidly expanded with over 8,000 satellites in orbit and new services like direct-to-cell text messaging in partnership with T-Mobile.

Questions remain over whether Thursday’s failure affected military services like Starshield, which supports high-value US defence contracts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

VPN interest surges in the UK as users bypass porn site age checks

Online searches for VPNs skyrocketed in the UK following the introduction of new age verification rules on adult websites such as PornHub, YouPorn and RedTube.

Under the Online Safety Act, these platforms must confirm that visitors are over 18 using facial recognition, photo ID or credit card details.

Data from Google Trends showed that searches for ‘VPN’ jumped by over 700 percent on Friday morning, suggesting many users were attempting to sidestep the restrictions by masking their location. VPN services route traffic through servers abroad, making a device appear to be in another country and thus outside the reach of local regulations.

Critics argue that the measures are both ineffective and risky. Aylo, the company behind PornHub, called the checks ‘haphazard and dangerous’, warning they put users’ privacy at risk.

Legal experts also doubt the system’s impact, saying it fails to block access to dark web content or unregulated forums.

Aylo proposed that age verification should occur on users’ devices instead of websites storing sensitive information. The company stated it is open to working with governments, civil groups and tech firms to develop a safer, device-based system that protects privacy while enforcing age limits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft hacking campaign expands into ransomware attacks

A state-aligned cyber-espionage campaign exploiting Microsoft server software vulnerabilities has escalated to ransomware deployment, according to a Microsoft blog post published late Wednesday.

The group, dubbed ‘Storm-2603’ by Microsoft, is now using a SharePoint vulnerability to spread ransomware that can lock down systems and demand digital payments. The shift suggests a move from espionage to broader disruption.

According to Eye Security, a cybersecurity firm from the Netherlands, the number of known victims has surged from 100 to over 400, and the true figure is likely much higher.

‘There are many more, because not all attack vectors have left artefacts that we could scan for,’ said Eye Security’s chief hacker, Vaisha Bernard.

One confirmed victim is the US National Institutes of Health, which isolated affected servers as a precaution. Reports also indicate that the Department of Homeland Security and several other agencies have been impacted.

The breach stems from an incomplete fix to Microsoft’s SharePoint software vulnerability. Both Microsoft and Google-owner Alphabet have linked the activity to Chinese hackers—a claim Beijing denies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US lawmaker proposes to train young Americans in AI for cyberwarfare

In a Washington Post opinion piece, Rep. Elise Stefanik and Stephen Prince, CEO of TFG Asset Management, argue that the United States is already engaged in a new form of warfare — cyberwarfare — waged by adversaries like China, Russia, and Iran using tools such as malware, phishing, and zero-day exploits. They assert that the US is not adequately prepared to defend against these threats due to a significant shortage of cyber talent, especially within the military and government.

To address this gap, the authors propose the creation of the United States Advanced Technology Academy (USATA) — a tuition-free, government-supported institution that would train a new generation of Americans in cybersecurity, AI, and quantum computing. Modelled after military academies, USATA would be located in upstate New York and require a five-year public service commitment from graduates.

The goal is to rapidly develop a pipeline of skilled cyber defenders, close the Pentagon’s estimated 30,000-person cyber personnel shortfall, and maintain US leadership in strategic technologies. Stefanik and Prince argue that while investing in AI tools and infrastructure is essential, equally critical is the cultivation of human expertise to operate, secure, and ethically deploy these tools. They position USATA not just as an educational institution but as a national security imperative.

The article places the academy within a broader effort to outpace rivals like China, which is also actively investing in STEM education and tech capacity. The authors call on the President to establish USATA via executive order or bipartisan congressional support, framing it as a decisive and forward-looking response to 21st-century threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta tells Australia AI needs real user data to work

Meta, the parent company of Facebook, Instagram, and WhatsApp, has urged the Australian government to harmonise privacy regulations with international standards, warning that stricter local laws could hamper AI development. The comments came in Meta’s submission to the Productivity Commission’s review on harnessing digital technology, published this week.

Australia is undergoing its most significant privacy reform in decades. The Privacy and Other Legislation Amendment Bill 2024, passed in November and given royal assent in December, introduces stricter rules around handling personal and sensitive data. The rules are expected to take effect in stages from late 2024 through 2025.

Meta maintains that generative AI systems depend on access to large, diverse datasets and cannot rely on synthetic data alone. In its submission, the company argued that publicly available information, like legislative texts, fails to reflect the cultural and conversational richness found on its platforms.

Meta said its platforms capture the ways Australians express themselves, making them essential to training models that can understand local culture, slang, and online behaviour. It added that restricting access to such data would make AI systems less meaningful and effective.

The company has faced growing scrutiny over its data practices. In 2024, it confirmed using Australian Facebook data to train AI models, although users in the EU have the option to opt out—an option not extended to Australian users.

Pushback from regulators in Europe forced Meta to delay its plans for AI training in the EU and UK, though it resumed these efforts in 2025.

The Office of the Australian Information Commissioner has issued guidance on AI development and commercial deployment, highlighting growing concerns about transparency and accountability. Meta argues that diverging national rules create conflicting obligations, which could reduce the efficiency of building safe and age-appropriate digital products.

Critics claim Meta is prioritising profit over privacy, and insist that any use of personal data for AI should be based on informed consent and clearly demonstrated benefits. The regulatory debate is intensifying at a time when Australia’s outdated privacy laws are being modernised to protect users in the AI age.

The Productivity Commission’s review will shape how the country balances innovation with safeguards. As a key market for Meta, Australia’s decisions could influence regulatory thinking in other jurisdictions confronting similar challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU and Japan deepen AI cooperation under new digital pact

In May 2025, the European Union and Japan formally reaffirmed their long-standing EU‑Japan Digital Partnership during the third Digital Partnership Council in Tokyo. Delegations agreed to deepen collaboration in pivotal digital technologies, most notably artificial intelligence, quantum computing, 5G/6G networks, semiconductors, cloud, and cybersecurity.

In a joint statement, the two sides committed to signing an administrative agreement on AI, aligned with principles from the Hiroshima AI Process. Shared initiatives include a €4 million EU-supported quantum R&D project named Q‑NEKO and the 6G MIRAI‑HARMONY research effort.

Both parties pledged to enhance data governance, digital identity interoperability, regulatory coordination across platforms, and secure connectivity via submarine cables and Arctic routes. The accord builds on the Strategic Partnership Agreement activated in January 2025, reinforcing their mutual platform for rules-based, value-driven digital and innovation cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI energy demand accelerates while clean power lags

Data centres are driving a sharp rise in electricity consumption, putting mounting pressure on power infrastructure that is already struggling to keep pace.

The rapid expansion of AI has led technology companies to invest heavily in AI-ready infrastructure, but the energy demands of these systems are outstripping available grid capacity.

The International Energy Agency projects that electricity use by data centres will more than double globally by 2030, reaching levels equivalent to the current consumption of Japan.

In the United States, data centres are expected to use 580 TWh annually by 2028, about 12% of national consumption. AI-specific data centres will be responsible for much of this increase.
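Those two figures hang together: a back-of-the-envelope check (our arithmetic, not from the source) puts the implied total US consumption at 580 TWh ÷ 0.12 ≈ 4,800 TWh a year, broadly consistent with today’s roughly 4,000 TWh plus the projected growth.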

Despite this growth, clean energy deployment is lagging. Around two terawatts of projects remain stuck in interconnection queues, delaying the shift to sustainable power. The result is a paradox: firms pursuing carbon-free goals by 2035 now rely on gas and nuclear to power their expanding AI operations.

In response, tech companies and utilities are adopting short-term strategies to relieve grid pressure. Microsoft and Amazon are sourcing energy from nuclear plants, while Meta will rely on new gas-fired generation.

Data centre developers like CloudBurst are securing dedicated fuel supplies to ensure local power generation, bypassing grid limitations. Some utilities are introducing technologies to speed up grid upgrades, such as AI-driven efficiency tools and contracts that encourage flexible demand.

Behind-the-meter solutions—like microgrids, batteries and fuel cells—are also gaining traction. AEP’s 1-GW deal with Bloom Energy would mark the US’s largest fuel cell deployment.

Meanwhile, longer-term efforts aim to scale up nuclear, geothermal and even fusion energy. Google has partnered with Commonwealth Fusion Systems to source power by the early 2030s, while Fervo Energy is advancing geothermal projects.

National Grid and other providers are investing in modern transmission technologies to support clean generation. Cooling technology for data centre chips is another area of focus. Programmes like ARPA-E’s COOLERCHIPS are exploring ways to reduce energy intensity.

At the same time, outdated regulatory processes are slowing progress. Developers face unclear connection timelines and steep fees, sometimes pushing them toward off-grid alternatives.

The path forward will depend on how quickly industry and regulators can align. Without faster deployment of clean power and regulatory reform, the systems designed to power AI could become the bottleneck that stalls its growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK proposes mandatory ransomware reporting and seeks to ban payments by public sector

The UK government has unveiled a new proposal to strengthen its response to ransomware threats by requiring victims to report breaches, enabling law enforcement to disrupt cybercriminal operations more effectively.

Published by the Home Office as part of an ongoing policy consultation, the proposal outlines key measures:

  • Mandatory breach reporting to equip law enforcement with actionable intelligence for identifying and disrupting ransomware groups.
  • A ban on ransom payments by public sector and critical infrastructure entities.
  • A notification requirement for other organisations intending to pay a ransom, allowing the government to assess and respond accordingly.

According to the proposal, these steps would help the UK government carry out ‘targeted disruptions’ in response to evolving ransomware threats, while also improving support for victims.

Cybersecurity experts have largely welcomed the initiative. Allan Liska of Recorded Future noted the plan reflects a growing recognition that many ransomware actors are within reach of law enforcement. Arda Büyükkaya of EclecticIQ praised the effort to formalise response protocols, viewing the proposed payment ban and proactive enforcement as meaningful deterrents.

This announcement follows a consultation process that began in January 2025. While the proposals signal a significant policy shift, they have not yet been enacted into law. The potential ban on ransom payments remains particularly contentious, with critics warning that, in some cases—such as hospital systems—paying a ransom may be the only option to restore essential services quickly.

The UK’s proposal follows similar international efforts, including Australia’s recent mandate for victims to disclose ransom payments, though Australia has stopped short of banning them outright.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!