Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers such as Google and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash against an AI image tool released by Google, which depicted racially diverse figures when asked to generate images of the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how those outputs are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200 million defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta tells Australia AI needs real user data to work

Meta, the parent company of Facebook, Instagram, and WhatsApp, has urged the Australian government to harmonise privacy regulations with international standards, warning that stricter local laws could hamper AI development. The comments came in Meta’s submission to the Productivity Commission’s review on harnessing digital technology, published this week.

Australia is undergoing its most significant privacy reform in decades. The Privacy and Other Legislation Amendment Bill 2024, passed in November and given royal assent in December, introduces stricter rules around handling personal and sensitive data. The rules are expected to take effect in stages through 2025 and 2026.

Meta maintains that generative AI systems depend on access to large, diverse datasets and cannot rely on synthetic data alone. In its submission, the company argued that publicly available information, like legislative texts, fails to reflect the cultural and conversational richness found on its platforms.

Meta said its platforms capture the ways Australians express themselves, making them essential to training models that can understand local culture, slang, and online behaviour. It added that restricting access to such data would make AI systems less meaningful and effective.

The company has faced growing scrutiny over its data practices. In 2024, it confirmed using Australian Facebook data to train AI models, although users in the EU have the option to opt out—an option not extended to Australian users.

Pushback from regulators in Europe forced Meta to delay its plans for AI training in the EU and UK, though it resumed these efforts in 2025.

The Office of the Australian Information Commissioner has issued guidance on AI development and commercial deployment, highlighting growing concerns about transparency and accountability. Meta argues that diverging national rules create conflicting obligations, which could reduce the efficiency of building safe and age-appropriate digital products.

Critics claim Meta is prioritising profit over privacy, and insist that any use of personal data for AI should be based on informed consent and clearly demonstrated benefits. The regulatory debate is intensifying at a time when Australia’s outdated privacy laws are being modernised to protect users in the AI age.

The Productivity Commission’s review will shape how the country balances innovation with safeguards. As Australia is a key market for Meta, its decisions could influence regulatory thinking in other jurisdictions confronting similar challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spotify under fire for AI-generated songs on memorial artist pages

Spotify is facing criticism after AI-generated songs were uploaded to the pages of deceased artists without consent from estates or rights holders.

The latest case involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.

Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’

He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.

‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.

However, other similar uploads have already emerged. The same entity, Syntax Error, which was listed on the Foley track, was also linked to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning artist Guy Clark, who died in 2016.

Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.

Earlier this year, a band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all of its vocals and instrumentals were AI-generated.

Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.

Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Library cuts across Massachusetts deepen digital divide

Massachusetts libraries face sweeping service reductions as federal funding cuts threaten critical educational and digital access programmes. Local and major libraries are bracing for the loss of key resources including summer reading initiatives, online research tools, and English language classes.

The Massachusetts Board of Library Commissioners (MBLC) said it has already lost access to 30 of 34 databases it once offered. Resources such as newspaper archives, literacy support for the blind and incarcerated, and citizenship classes have also been cancelled due to a $3.6 million shortfall.

Communities unable to replace federal grants with local funds will be disproportionately affected. With over 800 library applications for mobile internet hot spots now frozen, officials warn that students and jobseekers may lose vital lifelines to online learning, healthcare and employment.

The cuts are part of broader efforts by the Trump administration to shrink federal institutions, targeting what it deems anti-American programming. Legislators and library leaders say the result will widen the digital divide and undercut libraries’ role as essential pillars of equitable access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China deploys new malware tool for border phone searches

Chinese authorities reportedly use a powerful new malware tool called Massistant to extract data from seized Android phones. Developed by Xiamen Meiya Pico, the tool enables police to access messages, photos, locations, and app data once they have physical access to a device.

Cybersecurity firm Lookout revealed that Massistant operates via a desktop-connected tower, requiring unlocked devices but no advanced hacking techniques. Researchers said affected users include Chinese citizens and international travellers whose phones may be searched at borders.

The malware leaves traces on compromised phones, allowing users to detect and remove it afterwards, though by that point the authorities already have the data. Posts on Chinese forums show growing complaints from users who discovered the malware on their phones after police interactions.

Massistant is seen as the successor to an older tool, MSSocket, and Meiya Pico now controls roughly 40% of China’s digital forensics market. US authorities previously sanctioned the firm over the Chinese government’s use of its surveillance technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Parliamentarians step up as key players in shaping the digital future

At the 2025 WSIS+20 High-Level Event in Geneva, lawmakers from Egypt, Uruguay, Tanzania, and Thailand united to call for a transformative shift in how parliaments approach digital governance. Hosted by ITU and the IPU, the session emphasised that legislators are no longer passive observers but essential drivers of digital policy.

While digital innovation presents opportunities for growth and inclusion, it also brings serious challenges, chief among them the digital divide, online harms, and the risks posed by AI.

Speakers underscored a shared urgency to ensure digital policies are people-centred and grounded in human rights. Egypt’s Amira Saber spotlighted her country’s leap toward AI regulation and its rapid expansion of connectivity, but also expressed concerns over online censorship and inequality.

Uruguay’s Rodrigo Goñi warned that traditional, reactive policymaking won’t suffice in the fast-paced digital age, proposing a new paradigm of ‘political intelligence.’ Thailand’s Senator Nophadol In-na praised national digital progress but warned of growing gaps between urban and rural communities. Meanwhile, Tanzania’s Neema Lugangira pushed for more capacity-building, especially for female lawmakers, and direct dialogue between legislators and big tech companies.

Across the board, there was strong consensus – parliamentarians must be empowered with digital literacy and AI tools to legislate effectively. Both ITU and IPU committed to ramping up support through training, partnerships, and initiatives like the AI Skills Coalition. They also pledged to help parliaments engage directly with tech leaders and tackle issues such as online abuse, misinformation, and accessibility, particularly in the Global South.

The discussion ended with cautious optimism. While challenges are formidable, the collaborative spirit and concrete proposals laid out in Geneva point toward a digital future where democratic values and inclusivity remain central. As the December WSIS+20 review approaches, these commitments could start a new era in global digital governance, led not by technocrats alone but by informed, engaged, and forward-thinking parliamentarians.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Grok AI chatbot suspended in Turkey following court order

A Turkish court has issued a nationwide ban on Grok, the AI chatbot developed by Elon Musk’s company xAI, after the chatbot reportedly generated content insulting President Recep Tayyip Erdoğan.

The ruling, delivered on Wednesday by a criminal court in Ankara, instructed Turkey’s telecommunications authority to block access to the chatbot across the country. The decision came after public filings under Turkey’s internet law prompted a judicial review.

Grok, which is integrated into the X platform (formerly Twitter), recently rolled out an update to make the system more open and responsive. The update has sparked broader global discussions about the challenges of moderating AI-generated content in diverse regulatory environments.

In a brief statement, X acknowledged the situation and confirmed that appropriate content moderation measures had been implemented in response. The ban places Turkey among many countries examining the role of generative AI tools and the standards that govern their deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO pushes for digital trust at WSIS+20

At the WSIS+20 High-Level Event in Geneva, UNESCO convened a timely session exploring how to strengthen global information ecosystems through responsible platform governance and smart technology use. The discussion, titled ‘Towards a Resilient Information Ecosystem’, brought together international regulators, academics, civil society leaders, and tech industry representatives to assess digital media’s role in shaping public discourse, especially in times of crisis.

UNESCO’s Assistant Director General Tawfik Jelassi emphasised the organisation’s longstanding mission to build peace through knowledge sharing, warning that digital platforms now risk becoming breeding grounds for misinformation, hate speech, and division. To counter this, he highlighted UNESCO’s ‘Internet for Trust’ initiative, which produced governance guidelines informed by over 10,000 global contributions.

Speakers called for a shift from viewing misinformation as an isolated problem to understanding the broader digital communication ecosystem, especially during crises such as wars or natural disasters. Professor Ingrid Volkmer stressed that global monopolies like Starlink, Amazon Web Services, and OpenAI dominate critical communication infrastructure, often without sufficient oversight.

She urged a paradigm shift that treats crisis communication as an interconnected system requiring tailored regulation and risk assessments. Frédéric Bokobza of Arcom, France’s digital regulator, outlined the European Digital Services Act’s role in enhancing transparency and accountability, noting the importance of establishing direct cooperation with platforms, particularly during elections.

The panel also spotlighted ways to empower users. Google’s Nadja Blagojevic showcased initiatives like SynthID watermarking for AI-generated content and media literacy programs such as ‘Be Internet Awesome,’ which aim to build digital critical thinking skills across age groups.

Meanwhile, Maria Paz Canales from Global Partners Digital offered a civil society perspective, sharing how AI tools protect protestors’ identities, preserve historical memory, and amplify marginalised voices, even amid funding challenges. She also called for regulatory models distinguishing between traditional commercial media and true public interest journalism, particularly in underrepresented regions like Latin America.

The session concluded with a strong call for international collaboration among regulators and platforms, affirming that information should be treated as a public good. Participants underscored the need for inclusive, multistakeholder governance and sustainable support for independent media to protect democratic values in an increasingly digital world.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Rights before risks: Rethinking quantum innovation at WSIS+20

At the WSIS+20 High-Level Event in Geneva, a powerful call was made to ensure the development of quantum technologies remains rooted in human rights and inclusive governance. A UNESCO-led session titled ‘Human Rights-Centred Global Governance of Quantum Technologies’ presented key findings from a new issue brief co-authored with Sciences Po and the European University Institute.

It outlined major risks—such as quantum’s dual-use nature threatening encryption, a widening technological divide, and severe gender imbalances in the field—and urged immediate global action to build safeguards before quantum capabilities mature.

UNESCO’s Guilherme Canela emphasised that innovation and human rights are not mutually exclusive but fundamentally interlinked, warning against a ‘false dichotomy’ between the two. Lead author Shamira Ahmed highlighted the need for proactive frameworks to ensure quantum benefits are equitably distributed and not used to deepen global inequalities or erode rights.

With 79% of quantum firms lacking female leadership and a mere 1 in 54 job applicants being women, the gender gap was called ‘staggering.’ Ahmed proposed infrastructure investment, policy reforms, capacity development, and leveraging the UN’s International Year of Quantum to accelerate global discussions.

Panellists echoed the urgency. Constance Bommelaer de Leusse from Sciences Po advocated for embedding multistakeholder participation into governance processes and warned of a looming ‘quantum arms race.’ Professor Pieter Vermaas of Delft University urged moving from talk to international collaboration, suggesting the creation of global quantum research centres.

Journalist Elodie Vialle raised alarms about quantum’s potential to supercharge surveillance, endangering press freedom and digital privacy, and underscored the need to close the cultural gap between technologists and civil society.

Overall, the session championed a future where quantum technology is developed transparently, governed globally, and serves as a digital public good, bridging divides rather than deepening them. Speakers agreed that the time to act is now, before today’s opportunities become tomorrow’s crises.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Preserving languages in a digital world: A call for inclusive action

At the WSIS+20 High-Level Event in Geneva, UNESCO convened a powerful session on the critical need to protect multilingualism in the digital age. With over 8,000 languages spoken globally but fewer than 120 represented online, the panel warned of a growing digital divide that excludes billions and marginalises thousands of cultures.

Dr Tawfik Jelassi of UNESCO painted a vivid metaphor of the internet as a vast library where most languages have no books on the shelves, calling for urgent action to safeguard humanity’s linguistic and cultural diversity.

Speakers underscored that bridging this divide goes beyond creating language tools—it requires systemic change rooted in policy, education, and community empowerment. Guilherme Canela of UNESCO highlighted ongoing initiatives like the 2003 Recommendation on Multilingualism and the UN Decade of Indigenous Languages, which has already inspired 15 national action plans.

Panellists like Valts Ernstreits and Sofiya Zahova emphasised community-led efforts, citing examples from Latvia, Iceland, and Sámi institutions that show how native speakers and local institutions must lead digital inclusion efforts.

Africa’s case brought the urgency into sharp focus. David Waweru noted that despite hosting a third of the world’s languages, less than 0.1% of websites feature African language content. Yet, promising efforts like the African Storybook project and AI language models show how local storytelling and education can thrive in digital spaces.

Elena Plexida of ICANN revealed that only 26% of email servers accept non-Latin addresses, a stark reminder of the structural barriers to full digital participation.

The session concluded with a strong call for multistakeholder collaboration. Governments, tech companies, indigenous communities, and civil society must work together to make multilingualism the default, not the exception, in digital spaces. As Jelassi put it, ensuring every language has a place online is not just a technical challenge but a matter of cultural survival and digital justice.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.