Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official US attempt to shape the political behaviour of AI systems used in government services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash to an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how outputs are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200M defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Big companies grapple with AI’s legal, security, and reputational threats

A recent Quartz investigation reveals that concerns over AI are increasingly overshadowing corporate enthusiasm, especially among Fortune 500 companies.

More than 69% now cite generative AI as a risk factor in their annual reports, while only about 30% highlight its benefits, a dramatic shift toward caution in corporate discourse.

These risks range from cybersecurity threats, such as AI-generated phishing, model poisoning, and adversarial attacks, to operational and reputational dangers stemming from opaque AI decision-making, including hallucinations and biased outputs.

Privacy exposure, legal liability, task misalignment, and overpromising AI capabilities, so-called ‘AI washing’, compound corporate exposure, particularly for boards and senior leadership facing directors’ and officers’ liability risks.

Other structural risks include vendor lock-in, disproportionate market influence by dominant AI providers, and supply chain dependencies that constrain flexibility and resilience.

Notably, even cybersecurity experts warn of emerging threats from AI agents, autonomous systems capable of executing actions that complicate legal accountability and oversight.

Companies are advised to adopt comprehensive AI risk-management strategies to navigate this evolving landscape.

Essential elements include establishing formal governance frameworks, conducting bias and privacy audits, documenting risk assessments, ensuring human-in-the-loop oversight, revising vendor contracts, and embedding AI ethics into policy and training, particularly at the board level.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nigeria opens door to stablecoin businesses

Nigeria’s top markets regulator is open to stablecoin ventures to revive the digital asset sector after last year’s Binance crackdown. At the Nigeria Stablecoin Summit, SEC Director-General Emomotimi Agama highlighted support for firms following evolving regulations.

He envisions Nigeria becoming a stablecoin hub in the global south, powering cross-border trade across Africa within five years.

The SEC has already onboarded stablecoin-focused companies through its regulatory sandbox, reflecting a broader strategy to lead digital innovation. Agama acknowledged stablecoins as vital to the crypto ecosystem but warned of national security risks, underscoring the need for careful regulation.

His comments come following the high-profile arrest and release of Binance executive Tigran Gambaryan amid Nigeria’s previous crypto crackdown.

While the new stance suggests a regulatory easing, experts caution that rebuilding trust with global firms will take time. Industry leaders stress the need for clear legal frameworks, reliable fiat access, and consistent enforcement to attract investment and restore liquidity.

Nigeria’s path to becoming a stablecoin hub hinges on sustained policy stability and meaningful re-engagement with crypto players.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Democratising inheritance: AI tool handles estate administration

Lauren Kolodny, an early backer of Chime who earned a spot on the Forbes Midas list, is leading a $20 million Series A funding round into Alix, a San Francisco-based startup using AI to revolutionise estate settlement. Founder Alexandra Mysoor conceived the idea after spending nearly 1,000 hours over 18 months managing a friend’s family estate, highlighting a widespread, emotionally taxing administrative gap.

Using AI agents, Alix automates tedious elements of the estate process, including scanning documents, extracting data, pre-populating legal forms, and liaising with financial institutions. This contrasts sharply with the traditional, costly probate system. The startup’s pricing model charges around 1% of estate value, translating to approximately $9,000–$12,000 for smaller estates.

Kolodny sees Alix as part of a new wave of startups harnessing AI to democratise services once accessible only to high-net-worth individuals. As trillions of dollars transfer to millennials and Gen Z in the coming decades, Alix aims to simplify one of the most complex and emotionally fraught administrative tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Starlink suffers widespread outage from a rare software failure

Starlink’s disruption began around 3 p.m. EDT on Thursday and was attributed to a failure in the company’s core internal software services. The issue affected one of the most resilient satellite systems globally, sparking speculation over whether a botched update or a cyberattack may have been responsible.

Starlink, which serves more than six million users across 140 countries, saw service gradually return after two and a half hours.

Executives from SpaceX, including CEO Elon Musk and Vice President of Starlink Engineering Michael Nicolls, apologised publicly and promised to address the root cause to avoid further interruptions. Experts described it as Starlink’s longest and most severe outage since it became a major provider.

As SpaceX continues upgrading the network to support greater speed and bandwidth, some experts warned that such technical failures may become more visible. Starlink has rapidly expanded with over 8,000 satellites in orbit and new services like direct-to-cell text messaging in partnership with T-Mobile.

Questions remain over whether Thursday’s failure affected military services like Starshield, which supports high-value US defence contracts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google’s AI Overviews reach 2 billion users monthly, reshaping the web’s future

Google’s AI Overviews, the generative summaries placed above traditional search results, now serve over 2 billion users monthly, a sharp rise from 1.5 billion just last quarter.

First previewed in May 2023 as the Search Generative Experience and made widely available in the US by mid-2024, the feature has rapidly expanded across more than 200 countries and 40 languages.

The widespread use of AI Overviews transforms how people search and who benefits. Google reports that the feature boosts engagement by over 10% for queries where it appears.

However, a study by Pew Research shows clicks on search results drop significantly when AI Overviews are shown, with just 8% of users clicking any link, and only 1% clicking within the overview itself.

While Google claims AI Overviews monetise at the same rate as regular search, publishers are left out unless users click through, which they rarely do.

Google has started testing ads within the summaries and is reportedly negotiating licensing deals with select publishers, hinting at a possible revenue-sharing shift. Meanwhile, regulators in the US and EU are scrutinising whether the feature violates antitrust laws or misuses content.

Industry experts warn of a looming ‘Google Zero’ future — a web where search traffic dries up and AI-generated answers dominate.

As visibility in search becomes more about entity recognition than page ranking, publishers and marketers must rethink how they maintain relevance in an increasingly post-click environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

VPN interest surges in the UK as users bypass porn site age checks

Online searches for VPNs skyrocketed in the UK following the introduction of new age verification rules on adult websites such as PornHub, YouPorn and RedTube.

Under the Online Safety Act, these platforms must confirm that visitors are over 18 using facial recognition, photo ID or credit card details.

Data from Google Trends showed that searches for ‘VPN’ jumped by over 700 percent on Friday morning, suggesting many users are attempting to sidestep the restrictions by masking their location. VPN services make a device appear to be in another country, placing users outside the reach of local rules.

Critics argue that the measures are both ineffective and risky. Aylo, the company behind PornHub, called the checks ‘haphazard and dangerous’, warning they put users’ privacy at risk.

Legal experts also doubt the system’s impact, saying it fails to block access to dark web content or unregulated forums.

Aylo proposed that age verification should occur on users’ devices instead of websites storing sensitive information. The company stated it is open to working with governments, civil groups and tech firms to develop a safer, device-based system that protects privacy while enforcing age limits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon exit highlights deepening AI divide between US and China

Amazon’s quiet wind-down of its Shanghai AI lab underscores a broader shift in global research dynamics, as escalating tensions between the US and China reshape how tech giants operate across borders.

Instead of expanding innovation hubs in China, major American firms are increasingly dismantling them.

The AWS lab, once central to Amazon’s AI research, produced tools said to have generated nearly $1bn in revenue and over 100 academic papers.

Yet its dissolution reflects a growing push from Washington to curb China’s access to cutting-edge technology, including restrictions on advanced chips and cloud services.

As IBM and Microsoft have also scaled back operations or relocated talent away from mainland China, a pattern is emerging: strategic retreat. Rather than risking compliance issues or regulatory scrutiny, US tech companies are choosing to restructure globally and reduce local presence in China altogether.

With Amazon already having exited its Chinese ebook and ecommerce markets, the shuttering of its AI lab signals more than a single closure — it reflects a retreat from joint innovation and a widening technological divide that may shape the future of AI competition.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

5G traffic surges under growing AI usage

AI-driven applications are reshaping mobile data norms, and 5G networks are feeling the pressure. Analysts warn that uplink demand generated by tools like virtual assistants and AR platforms could exceed current 5G uplink capacity by around 2027. Traditional networks are built to handle heavier downlink traffic, leaving them strained as AI flows increase in the opposite direction.

At the same time, artificial intelligence is playing a constructive role by helping optimise these strained networks. AI techniques, such as predictive traffic forecasting, dynamic spectrum allocation, beamforming, and energy management, are improving efficiency and reducing operational costs. Networks are becoming smarter in detecting congestion and self-adjusting to maintain performance.

Industry discussions point to 5G‑Advanced, also known as 5.5G, as a key evolution that embeds AI and machine learning into network architecture. These upgrades promise higher uplink speeds, tighter latency control, and built‑in intelligence for optimisation and automation. Edge computing is set to play a central role by bringing AI decision‑making closer to users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta tells Australia AI needs real user data to work

Meta, the parent company of Facebook, Instagram, and WhatsApp, has urged the Australian government to harmonise privacy regulations with international standards, warning that stricter local laws could hamper AI development. The comments came in Meta’s submission to the Productivity Commission’s review on harnessing digital technology, published this week.

Australia is undergoing its most significant privacy reform in decades. The Privacy and Other Legislation Amendment Bill 2024, passed in November and given royal assent in December, introduces stricter rules for handling personal and sensitive data, with provisions phased in through 2025.

Meta maintains that generative AI systems depend on access to large, diverse datasets and cannot rely on synthetic data alone. In its submission, the company argued that publicly available information, like legislative texts, fails to reflect the cultural and conversational richness found on its platforms.

Meta said its platforms capture the ways Australians express themselves, making them essential to training models that can understand local culture, slang, and online behaviour. It added that restricting access to such data would make AI systems less meaningful and effective.

The company has faced growing scrutiny over its data practices. In 2024, it confirmed using Australian Facebook data to train AI models, although users in the EU have the option to opt out—an option not extended to Australian users.

Pushback from regulators in Europe forced Meta to delay its plans for AI training in the EU and UK, though it resumed these efforts in 2025.

Australia’s Office of the Australian Information Commissioner has issued guidance on AI development and commercial deployment, highlighting growing concerns about transparency and accountability. Meta argues that diverging national rules create conflicting obligations, which could reduce the efficiency of building safe and age-appropriate digital products.

Critics claim Meta is prioritising profit over privacy, and insist that any use of personal data for AI should be based on informed consent and clearly demonstrated benefits. The regulatory debate is intensifying at a time when Australia’s outdated privacy laws are being modernised to protect users in the AI age.

The Productivity Commission’s review will shape how the country balances innovation with safeguards. As a key market for Meta, Australia’s decisions could influence regulatory thinking in other jurisdictions confronting similar challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!