Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI systems used in government services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move drags tech companies into a political culture war and could undermine years of work on reducing AI bias and designing fair, inclusive models.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash against an AI image tool from Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate over embedded bias.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only requires transparency about how outputs are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200 million defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Democratising inheritance: AI tool handles estate administration

Lauren Kolodny, an early backer of Chime who earned a spot on the Forbes Midas List, is leading a $20 million Series A round for Alix, a San Francisco-based startup using AI to revolutionise estate settlement. Founder Alexandra Mysoor conceived the idea after spending nearly 1,000 hours over 18 months managing a friend’s family estate, highlighting a widespread, emotionally taxing administrative gap.

Using AI agents, Alix automates tedious elements of the estate process, including scanning documents, extracting data, pre-populating legal forms, and liaising with financial institutions. This contrasts sharply with the traditional, costly probate system. The startup’s pricing model charges around 1% of estate value, translating to approximately $9,000–$12,000 for smaller estates.
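For readers checking the maths, the quoted fees follow directly from the 1% rate; they imply that ‘smaller estates’ here means roughly $900,000 to $1.2 million in value (a back-of-envelope reading of the figures above, not Alix’s published pricing):

0.01 × $900,000 = $9,000 and 0.01 × $1,200,000 = $12,000.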

Kolodny sees Alix as part of a new wave of startups harnessing AI to democratise services once accessible only to high-net-worth individuals. As trillions of dollars transfer to millennials and Gen Z in the coming decades, Alix aims to simplify one of the most complex and emotionally fraught administrative tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google’s AI Overviews reach 2 billion users monthly, reshaping the web’s future

Google’s AI Overviews, the generative summaries placed above traditional search results, now serve over 2 billion users monthly, a sharp rise from 1.5 billion just last quarter.

First previewed in May 2023 as part of Google’s Search Generative Experience and made widely available in the US by mid-2024, the feature has rapidly expanded to more than 200 countries and 40 languages.

The widespread use of AI Overviews transforms how people search and who benefits. Google reports that the feature boosts engagement by over 10% for queries where it appears.

However, a study by Pew Research shows clicks on search results drop significantly when AI Overviews are shown, with just 8% of users clicking any link, and only 1% clicking within the overview itself.

While Google claims AI Overviews monetise at the same rate as regular search, publishers are left out unless users click through, which they rarely do.

Google has started testing ads within the summaries and is reportedly negotiating licensing deals with select publishers, hinting at a possible revenue-sharing shift. Meanwhile, regulators in the US and EU are scrutinising whether the feature violates antitrust laws or misuses content.

Industry experts warn of a looming ‘Google Zero’ future — a web where search traffic dries up and AI-generated answers dominate.

As visibility in search becomes more about entity recognition than page ranking, publishers and marketers must rethink how they maintain relevance in an increasingly post-click environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

VPN interest surges in the UK as users bypass porn site age checks

Online searches for VPNs skyrocketed in the UK following the introduction of new age verification rules on adult websites such as PornHub, YouPorn and RedTube.

Under the Online Safety Act, these platforms must confirm that visitors are over 18 using facial recognition, photo ID or credit card details.

Data from Google Trends showed that searches for ‘VPN’ jumped by over 700 percent on Friday morning, suggesting many users were trying to sidestep the restrictions by masking their location. VPN services let users make their connection appear to originate in another country, placing them beyond the reach of local rules.

Critics argue that the measures are both ineffective and risky. Aylo, the company behind PornHub, called the checks ‘haphazard and dangerous’, warning they put users’ privacy at risk.

Legal experts also doubt the system’s impact, saying it fails to block access to dark web content or unregulated forums.

Aylo proposed that age verification should occur on users’ devices instead of websites storing sensitive information. The company stated it is open to working with governments, civil groups and tech firms to develop a safer, device-based system that protects privacy while enforcing age limits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US lawmaker proposes to train young Americans in AI for cyberwarfare

In a Washington Post opinion piece, Rep. Elise Stefanik and Stephen Prince, CEO of TFG Asset Management, argue that the United States is already engaged in a new form of warfare — cyberwarfare — waged by adversaries like China, Russia, and Iran using tools such as malware, phishing, and zero-day exploits. They assert that the US is not adequately prepared to defend against these threats due to a significant shortage of cyber talent, especially within the military and government.

To address this gap, the authors propose the creation of the United States Advanced Technology Academy (USATA) — a tuition-free, government-supported institution that would train a new generation of Americans in cybersecurity, AI, and quantum computing. Modelled after military academies, USATA would be located in upstate New York and require a five-year public service commitment from graduates.

The goal is to rapidly develop a pipeline of skilled cyber defenders, close the Pentagon’s estimated 30,000-person cyber personnel shortfall, and maintain US leadership in strategic technologies. Stefanik and Prince argue that while investing in AI tools and infrastructure is essential, equally critical is the cultivation of human expertise to operate, secure, and ethically deploy these tools. They position USATA not just as an educational institution but as a national security imperative.

The article places the academy within a broader effort to outpace rivals like China, which is also actively investing in STEM education and tech capacity. The authors call on the President to establish USATA via executive order or bipartisan congressional support, framing it as a decisive and forward-looking response to 21st-century threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta tells Australia AI needs real user data to work

Meta, the parent company of Facebook, Instagram, and WhatsApp, has urged the Australian government to harmonise privacy regulations with international standards, warning that stricter local laws could hamper AI development. The comments came in Meta’s submission to the Productivity Commission’s review on harnessing digital technology, published this week.

Australia is undergoing its most significant privacy reform in decades. The Privacy and Other Legislation Amendment Bill 2024, passed in November and given royal assent in December, introduces stricter rules around handling personal and sensitive data, with provisions taking effect in stages over the following two years.

Meta maintains that generative AI systems depend on access to large, diverse datasets and cannot rely on synthetic data alone. In its submission, the company argued that publicly available information, like legislative texts, fails to reflect the cultural and conversational richness found on its platforms.

Meta said its platforms capture the ways Australians express themselves, making them essential to training models that can understand local culture, slang, and online behaviour. It added that restricting access to such data would make AI systems less meaningful and effective.

The company has faced growing scrutiny over its data practices. In 2024, it confirmed using Australian Facebook data to train AI models, although users in the EU have the option to opt out—an option not extended to Australian users.

Pushback from regulators in Europe forced Meta to delay its plans for AI training in the EU and UK, though it resumed these efforts in 2025.

Australia’s Office of the Australian Information Commissioner has issued guidance on AI development and commercial deployment, highlighting growing concerns about transparency and accountability. Meta argues that diverging national rules create conflicting obligations, which could reduce the efficiency of building safe and age-appropriate digital products.

Critics claim Meta is prioritising profit over privacy, and insist that any use of personal data for AI should be based on informed consent and clearly demonstrated benefits. The regulatory debate is intensifying at a time when Australia’s outdated privacy laws are being modernised to protect users in the AI age.

The Productivity Commission’s review will shape how the country balances innovation with safeguards. As a key market for Meta, Australia’s decisions could influence regulatory thinking in other jurisdictions confronting similar challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK proposes mandatory ransomware reporting and seeks to ban payments by public sector

The UK government has unveiled a new proposal to strengthen its response to ransomware threats by requiring victims to report breaches, enabling law enforcement to disrupt cybercriminal operations more effectively.

Published by the Home Office as part of an ongoing policy consultation, the proposal outlines key measures:

  • Mandatory breach reporting to equip law enforcement with actionable intelligence for identifying and disrupting ransomware groups.
  • A ban on ransom payments by public sector and critical infrastructure entities.
  • A notification requirement for other organisations intending to pay a ransom, allowing the government to assess and respond accordingly.

According to the proposal, these steps would help the UK government carry out ‘targeted disruptions’ in response to evolving ransomware threats, while also improving support for victims.

Cybersecurity experts have largely welcomed the initiative. Allan Liska of Recorded Future noted the plan reflects a growing recognition that many ransomware actors are within reach of law enforcement. Arda Büyükkaya of EclecticIQ praised the effort to formalise response protocols, viewing the proposed payment ban and proactive enforcement as meaningful deterrents.

This announcement follows a consultation process that began in January 2025. While the proposals signal a significant policy shift, they have not yet been enacted into law. The potential ban on ransom payments remains particularly contentious, with critics warning that, in some cases—such as hospital systems—paying a ransom may be the only option to restore essential services quickly.

The UK’s proposal follows similar international efforts, including Australia’s recent mandate for victims to disclose ransom payments, though Australia has stopped short of banning them outright.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump AI strategy targets China and cuts red tape

The Trump administration has revealed a sweeping new AI strategy to cement US dominance in the global AI race, particularly against China.

The 25-page ‘America’s AI Action Plan’ proposes 90 policy initiatives, including building new data centres nationwide, easing regulations, and expanding exports of AI tools to international allies.

White House officials stated the plan will boost AI development by scrapping federal rules seen as restrictive and speeding up construction permits for data infrastructure.

A key element involves monitoring Chinese AI models for alignment with Communist Party narratives, while promoting ‘ideologically neutral’ systems within the US. Critics argue the approach undermines efforts to reduce bias and favours politically motivated AI regulation.

The action plan also supports increased access to federal land for AI-related construction and seeks to reverse key environmental protections. Analysts have raised concerns over energy consumption and rising emissions linked to AI data centres.

While the White House claims AI will complement jobs rather than replace them, recent mass layoffs at Indeed and Salesforce suggest otherwise.

Despite the controversy, the announcement drew optimism from investors. AI stocks saw mixed trading, with NVIDIA, Palantir and Oracle gaining, while Alphabet slipped slightly. Analysts described the move as a ‘watershed moment’ for US tech, signalling an aggressive stance in the global AI arms race.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Autonomous vehicles fuel surge in 5G adoption

The global 5G automotive market is expected to grow sharply from $2.58 billion in 2024 to $31.18 billion by 2034, fuelled by the rapid adoption of connected and self-driving vehicles.

A compound annual growth rate of over 28% reflects the strong momentum behind the transition to smarter mobility and safer road networks.
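That figure can be sanity-checked against the market estimates above using the standard compound annual growth rate formula (a back-of-envelope calculation, not taken from the underlying market report):

CAGR = (31.18 / 2.58)^(1/10) − 1 ≈ 0.283, i.e. about 28.3% per year over the decade from 2024 to 2034.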

Vehicle-to-everything communication is predicted to lead adoption, as it allows vehicles to exchange real-time data with other cars, infrastructure and even pedestrians.

In-car entertainment systems are also growing fast, with consumers demanding smoother connectivity and on-the-go access to apps and media.

Autonomous driving, advanced driver-assistance features and real-time navigation all benefit from 5G’s low latency and high-speed capabilities. Automakers such as BMW have already begun integrating 5G into electric models to support automated functions.

Meanwhile, the US government has pledged $1.5 billion to build smart transport networks that rely on 5G-powered communication.

North America remains ahead due to early 5G rollouts and strong manufacturing bases, but Asia Pacific is catching up fast through smart city investment and infrastructure development.

Regulatory barriers and patchy rural coverage continue to pose challenges, particularly in regions with strict data privacy laws or limited 5G networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spotify under fire for AI-generated songs on memorial artist pages

Spotify is facing criticism after AI-generated songs were uploaded to the pages of deceased artists without consent from estates or rights holders.

The latest case involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.

Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’

He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.

‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.

However, other similar uploads have already emerged. The same company behind the Foley upload, Syntax Error, was linked to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning artist Guy Clark, who died in 2016.

Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.

Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.

Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.

Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!