Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of the AI services it procures.

It places new emphasis on ensuring that publicly funded models reflect so-called ‘American values’ and avoid content tied to diversity, equity and inclusion (DEI) frameworks.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash against an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but simply calls for transparency about how they are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200M defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Starlink suffers widespread outage from a rare software failure

The disruption began around 3 p.m. EDT and was attributed to a failure in Starlink’s core internal software services. The issue affected one of the most resilient satellite systems globally, sparking speculation over whether a botched update or a cyberattack may have been responsible.

Starlink, which serves more than six million users across 140 countries, saw service gradually return after two and a half hours.

Executives from SpaceX, including CEO Elon Musk and Vice President of Starlink Engineering Michael Nicolls, apologised publicly and promised to address the root cause to avoid further interruptions. Experts described it as Starlink’s longest and most severe outage since it became a major provider.

As SpaceX continues upgrading the network to support greater speed and bandwidth, some experts warned that such technical failures may become more visible. Starlink has rapidly expanded with over 8,000 satellites in orbit and new services like direct-to-cell text messaging in partnership with T-Mobile.

Questions remain over whether Thursday’s failure affected military services like Starshield, which supports high-value US defence contracts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek and others gain traction in US and EU

A recent survey has found that most US and EU users are open to using Chinese large language models, even amid ongoing political and cybersecurity scrutiny.

According to the report, 71 percent of respondents in the US and 87 percent in the EU would consider adopting models developed in China.

The findings highlight increasing international curiosity about the capabilities of Chinese AI firms such as DeepSeek, which have recently attracted global attention.

While the technology is gaining credibility, many Western users remain cautious about data privacy and infrastructure control.

More than half of those surveyed said they would use Chinese AI models only if they were hosted outside China. This suggests that while trust in the models’ performance is growing, concerns over data governance remain a significant barrier to adoption.

The results come amid heightened global competition in the AI race, with Chinese developers rapidly advancing to challenge US-based leaders. DeepSeek and similar firms now face the challenge of balancing global outreach with geopolitical limitations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google’s AI Overviews reach 2 billion users monthly, reshaping the web’s future

Google’s AI Overviews, the generative summaries placed above traditional search results, now serve over 2 billion users monthly, a sharp rise from 1.5 billion just last quarter.

First launched in May 2023 and widely available in the US by mid-2024, the feature has rapidly expanded across more than 200 countries and 40 languages.

The widespread use of AI Overviews is transforming how people search, and who benefits from it. Google reports that the feature boosts engagement by over 10% for queries where it appears.

However, a study by Pew Research shows clicks on search results drop significantly when AI Overviews are shown, with just 8% of users clicking any link, and only 1% clicking within the overview itself.

While Google claims AI Overviews monetise at the same rate as regular search, publishers are left out unless users click through, which they rarely do.

Google has started testing ads within the summaries and is reportedly negotiating licensing deals with select publishers, hinting at a possible revenue-sharing shift. Meanwhile, regulators in the US and EU are scrutinising whether the feature violates antitrust laws or misuses content.

Industry experts warn of a looming ‘Google Zero’ future — a web where search traffic dries up and AI-generated answers dominate.

As visibility in search becomes more about entity recognition than page ranking, publishers and marketers must rethink how they maintain relevance in an increasingly post-click environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon exit highlights deepening AI divide between US and China

Amazon’s quiet wind-down of its Shanghai AI lab underscores a broader shift in global research dynamics, as escalating tensions between the US and China reshape how tech giants operate across borders.

Instead of expanding innovation hubs in China, major American firms are increasingly dismantling them.

The AWS lab, once central to Amazon’s AI research, produced tools said to have generated nearly $1bn in revenue, along with more than 100 academic papers.

Yet its dissolution reflects a growing push from Washington to curb China’s access to cutting-edge technology, including restrictions on advanced chips and cloud services.

With IBM and Microsoft also scaling back operations or relocating talent away from mainland China, a pattern is emerging: strategic retreat. Rather than risking compliance issues or regulatory scrutiny, US tech companies are choosing to restructure globally and reduce their local presence in China altogether.

With Amazon already having exited its Chinese ebook and ecommerce markets, the shuttering of its AI lab signals more than a single closure — it reflects a retreat from joint innovation and a widening technological divide that may shape the future of AI competition.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube Shorts brings image-to-video AI tool

Google has rolled out new AI features for YouTube Shorts, including an image-to-video tool powered by its Veo 2 model. The update lets users convert still images into six-second animated clips, such as turning a static group photo into a dynamic scene.

Creators can also experiment with immersive AI effects that stylise selfies or simple drawings into themed short videos. These features aim to enhance creative expression and are currently available in the US, Canada, Australia and New Zealand, with global rollout expected later this year.

A new AI Playground hub has also been launched to house all generative tools, including video effects and inspiration prompts. Users can find the hub by tapping the Shorts camera’s ‘create’ button and then the sparkle icon in the top corner.

Google plans to introduce even more advanced tools with the upcoming Veo 3 model, which will support synchronised audio generation. The company is positioning YouTube Shorts as a key platform for AI-driven creativity in the video content space.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI music tools arrive for YouTube creators

YouTube is trialling two new features to improve user engagement and content creation. One enhances comment readability, while the other helps creators produce music using AI for Shorts.

A new threaded layout is being tested to organise comment replies under the original post, allowing clearer and more focused conversations. Currently, this feature is limited to a small group of Premium users on mobile.

YouTube is also expanding Dream Track, an AI-powered tool that creates 30-second music clips from simple text prompts. Creators can generate sounds matching moods like ‘chill piano melody’ or ‘energetic pop beat’, with the option to include AI-generated vocals styled after popular artists.

Both features are available only in the US during the testing phase, with no set date for international release. YouTube’s gradual updates reflect a shift toward more intuitive user experiences and creative flexibility on the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon closes AI research lab in Shanghai as global focus shifts

Amazon is shutting down its AI research lab in Shanghai, marking another step in its gradual withdrawal from China. The move comes amid continuing US–China trade tensions and a broader trend of American tech companies reassessing their presence in the country.

The company said the decision was part of a global streamlining effort rather than a response to AI concerns.

A spokesperson for AWS said the company had reviewed its organisational priorities and decided to cut some roles across certain teams. The exact number of job losses has not been confirmed.

Before Amazon’s confirmation, one of the lab’s senior researchers noted on WeChat that the Shanghai site was the final overseas AWS AI research lab and attributed its closure to shifts in US–China strategy.

The team had built a successful open-source graph neural network framework known as DGL, which reportedly brought in nearly $1 billion in revenue for Amazon’s e-commerce arm.

Amazon has been reducing its footprint in China for several years. It closed its domestic online marketplace in 2019, halted Kindle sales in 2022, and recently laid off AWS staff in the US.

Other tech giants including IBM and Microsoft have also shut down China-based research units this year, while some Chinese AI firms are now relocating operations abroad instead of remaining in a volatile domestic environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US researchers expose watermark flaws

A team at the University of Maryland found that adversarial attacks can easily strip the marks applied by most watermarking technologies designed to label AI‑generated images. Their study reveals that even visible watermarks fail to indicate content provenance reliably.

The US researchers tested low‑perturbation invisible watermarks and more robust visible ones, demonstrating that adversaries can easily remove or forge marks. Lead author Soheil Feizi noted the technology is far from foolproof, warning that ‘we broke all of them’.
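The study’s own attacks are more sophisticated, but the fragility of low-perturbation marks can be illustrated with a small Python sketch. The example below is purely hypothetical: it embeds a naive least-significant-bit watermark (not any scheme tested by the researchers) and shows how a routine transformation such as coarse re-quantisation, similar to what lossy compression does, erases the mark.

import numpy as np

rng = np.random.default_rng(0)

# A toy 64x64 greyscale 'image' and a one-bit-per-pixel watermark pattern.
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)

# Embed the mark by overwriting each pixel's least significant bit.
watermarked = (image & 0xFE) | watermark
assert np.array_equal(watermarked & 1, watermark)  # mark reads back cleanly

# 'Attack': coarsely re-quantise pixel values, as lossy compression or mild
# filtering would, which wipes out the low-order bits carrying the mark.
attacked = (watermarked // 4) * 4
recovered = attacked & 1

match_rate = np.mean(recovered == watermark)
print(f"Watermark bits recovered after re-quantisation: {match_rate:.0%}")
# Roughly 50% - no better than guessing, so the mark is effectively gone.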

Despite these concerns, experts argue that watermarking can still be helpful in a broader detection strategy. UC Berkeley professor Hany Farid said robust watermarking is ‘part of the solution’ when combined with other forensic methods.

Tech giants and researchers continue to develop watermarking tools like Google DeepMind’s SynthID, though such systems are not considered infallible. The consensus emerging from recent tests is that watermarking alone cannot be relied upon to counter deepfake threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US agencies warn of rising Interlock ransomware threat targeting healthcare sector

US federal authorities have issued a joint warning over a spike in ransomware attacks by the Interlock group, which has been targeting healthcare and public services across North America and Europe.

The alert was released by the FBI, CISA, HHS and MS-ISAC, following a surge in activity throughout June.

Interlock operates as a ransomware-as-a-service scheme and first emerged in September 2024. The group uses double extortion techniques, not only encrypting files but also stealing sensitive data and threatening to leak it unless a ransom is paid.

High-profile victims include DaVita, Kettering Health and Texas Tech University Health Sciences Center.

Rather than relying on traditional methods alone, Interlock often uses compromised legitimate websites to trigger drive-by downloads.

The malicious software is disguised as familiar tools like Google Chrome or Microsoft Edge installers. Remote access trojans are then used to gain entry, maintain persistence using PowerShell, and escalate access using credential stealers and keyloggers.

Authorities recommend several countermeasures, such as installing DNS filtering tools, using web firewalls, applying regular software updates, and enforcing strong access controls.

They also advise organisations to train staff in recognising phishing attempts and to ensure backups are encrypted, secure and kept off-site instead of stored within the main network.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!