Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI systems used in government services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash to an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how outputs are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200M defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Women see AI as more harmful across life settings

Women are showing more scepticism than men when it comes to AI, particularly regarding its ethics, fairness and transparency.

A national study from Georgetown University, Boston University and the University of Vermont found that women were more concerned about AI’s risks in decision-making. Concerns were especially prominent around AI tools used in the workplace, such as hiring platforms and performance review systems.

Bias may be introduced when such tools rely on historical data, which often underrepresents women and other marginalised groups. The study also found that gender influenced compliance with workplace rules surrounding AI use, especially in restrictive environments.

When AI use was banned, women were more likely than men to follow the rules. When tools were explicitly permitted, usage jumped, with over 80% of both women and men reporting that they used them.

Women were generally more wary of AI’s impact across all areas of life — not just in the professional sphere. From personal settings to public life, survey respondents who identified as women consistently viewed AI as more harmful than beneficial.

The study, conducted via Qualtrics in August 2023, surveyed a representative US sample. Participants were 45 years old on average, and just over half identified as women, spanning a range of educational and professional backgrounds.

The research comes amid wider concerns in the AI field about ethics and accountability, often led by women researchers. High-profile cases include Google’s dismissal of Timnit Gebru and later Margaret Mitchell, both of whom raised ethical concerns about large language models.

The study’s authors concluded that building public trust in AI may require clearer policies and greater transparency in how systems are designed. They also highlighted the importance of increasing diversity among those developing AI tools to ensure more inclusive outcomes.

WSIS+20: Inclusive ICT policies urged to close global digital divide

At the WSIS+20 High-Level Event in Geneva, Dr Hakikur Rahman and Dr Ranojit Kumar Dutta presented a sobering picture of global digital inequality, revealing that more than 2.6 billion people remain offline. Their session, marking two decades of the World Summit on the Information Society (WSIS), emphasised that affordability, poor infrastructure, and a lack of digital literacy continue to block access, especially for marginalised communities.

The speakers proposed a structured three-pillar framework — inclusion, ethics, and sustainability — to ensure that no one is left behind in the digital age.

The inclusion pillar advocated for universal connectivity through affordable broadband, multilingual content, and skills-building programs, citing India’s Digital India and Kenya’s Community Networks as examples of success. On ethics, they called for policies grounded in human rights, data privacy, and transparent AI governance, pointing to the EU’s AI Act and UNESCO guidelines as benchmarks.

The sustainability pillar highlighted the importance of energy-efficient infrastructure, proper e-waste management, and fair public-private collaboration, showcasing Rwanda’s green ICT strategy and Estonia’s e-residency program.

Dr Dutta presented detailed data from Bangladesh, showing stark urban-rural and gender-based gaps in internet access and digital literacy. While urban broadband penetration has soared, rural and female participation lags behind.

Encouraging trends, such as rising female enrollment in ICT education and the doubling of ICT sector employment since 2022, were tempered by low data protection awareness and a dire e-waste recycling rate of only 3%.

The session concluded with a call for coordinated global and regional action, embedding ethics and inclusion in every digital policy. The speakers urged stakeholders to bridge divides in connectivity, opportunity, access, and environmental responsibility, ensuring digital progress uplifts all communities.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Parliamentarians step up as key players in shaping the digital future

At the 2025 WSIS+20 High-Level Event in Geneva, lawmakers from Egypt, Uruguay, Tanzania, and Thailand united to call for a transformative shift in how parliaments approach digital governance. Hosted by ITU and the IPU, the session emphasised that legislators are no longer passive observers but essential drivers of digital policy.

While digital innovation presents opportunities for growth and inclusion, it also brings serious challenges, chief among them the digital divide, online harms, and the risks posed by AI.

Speakers underscored a shared urgency to ensure digital policies are people-centred and grounded in human rights. Egypt’s Amira Saber spotlighted her country’s leap toward AI regulation and its rapid expansion of connectivity, but also expressed concerns over online censorship and inequality.

Uruguay’s Rodrigo Goñi warned that traditional, reactive policymaking won’t suffice in the fast-paced digital age, proposing a new paradigm of ‘political intelligence.’ Thailand’s Senator Nophadol In-na praised national digital progress but warned of growing gaps between urban and rural communities. Meanwhile, Tanzania’s Neema Lugangira pushed for more capacity-building, especially for female lawmakers, and direct dialogue between legislators and big tech companies.

Across the board, there was strong consensus – parliamentarians must be empowered with digital literacy and AI tools to legislate effectively. Both ITU and IPU committed to ramping up support through training, partnerships, and initiatives like the AI Skills Coalition. They also pledged to help parliaments engage directly with tech leaders and tackle issues such as online abuse, misinformation, and accessibility, particularly in the Global South.

The discussion ended with cautious optimism. While challenges are formidable, the collaborative spirit and concrete proposals laid out in Geneva point toward a digital future where democratic values and inclusivity remain central. As the December WSIS+20 review approaches, these commitments could start a new era in global digital governance, led not by technocrats alone but by informed, engaged, and forward-thinking parliamentarians.

Rights before risks: Rethinking quantum innovation at WSIS+20

At the WSIS+20 High-Level Event in Geneva, a powerful call was made to ensure the development of quantum technologies remains rooted in human rights and inclusive governance. A UNESCO-led session titled ‘Human Rights-Centred Global Governance of Quantum Technologies’ presented key findings from a new issue brief co-authored with Sciences Po and the European University Institute.

It outlined major risks—such as quantum’s dual-use nature threatening encryption, a widening technological divide, and severe gender imbalances in the field—and urged immediate global action to build safeguards before quantum capabilities mature.

UNESCO’s Guilherme Canela emphasised that innovation and human rights are not mutually exclusive but fundamentally interlinked, warning against a ‘false dichotomy’ between the two. Lead author Shamira Ahmed highlighted the need for proactive frameworks to ensure quantum benefits are equitably distributed and not used to deepen global inequalities or erode rights.

With 79% of quantum firms lacking female leadership and a mere 1 in 54 job applicants being women, the gender gap was called ‘staggering.’ Ahmed proposed infrastructure investment, policy reforms, capacity development, and leveraging the UN’s International Year of Quantum to accelerate global discussions.

Panellists echoed the urgency. Constance Bommelaer de Leusse from Sciences Po advocated for embedding multistakeholder participation into governance processes and warned of a looming ‘quantum arms race.’ Professor Pieter Vermaas of Delft University urged moving from talk to international collaboration, suggesting the creation of global quantum research centres.

Journalist Elodie Vialle raised alarms about quantum’s potential to supercharge surveillance, endangering press freedom and digital privacy, and underscored the need to close the cultural gap between technologists and civil society.

Overall, the session championed a future where quantum technology is developed transparently, governed globally, and serves as a digital public good, bridging divides rather than deepening them. Speakers agreed that the time to act is now, before today’s opportunities become tomorrow’s crises.

Women researchers showcase accessibility breakthroughs at WSIS

At the WSIS+20 High-Level Event 2025 in Geneva, the session titled ‘Media and Education for All: Bridging Female Academic Leaders and Society towards Impactful Results’ spotlighted how female academic experts are applying AI to make media and education more inclusive and accessible. Organised by the AXS-CAT network at Universitat Autònoma de Barcelona and moderated by Dr Anita Lamprecht from Diplo, the session showcased a range of innovative projects that translate university research into real-world impact.

One highlight was the ENACT project, presented by Professor Ana Matamala, which develops simplified news content to serve audiences such as migrants, people with intellectual disabilities, and language learners. While 13 European organisations already offer some easy-to-understand content, challenges remain in maintaining journalistic integrity while ensuring accessibility.

Meanwhile, Professor Pilar Orero unveiled three AI-driven projects: Mosaic, a searchable public broadcaster archive hub; Alfie, which tackles AI bias in media; and a climate change initiative focused on making scientific data more comprehensible to the public. Several education-centred projects also took the stage.

Dr Estella Oncins introduced the Inclusivity project, which uses virtual reality to engage neurodiverse students and promote inclusive teaching methods. Dr Mireia Farrus presented Scribal, a real-time AI-powered transcription and translation tool for university lectures, tailored to support Catalan language users and students with hearing impairments.

Additionally, Dr Mar Gutierrez Colon shared two accessibility tools: a gamified reading app for children in Kenya and an English language test adapted for students with special educational needs. During the Q&A, discussions turned to the challenges of teaching fast-evolving technologies like AI, especially given the scarcity of qualified educators.

The speakers emphasised that digital accessibility is not just a technical concern but a matter of educational justice, advocating for stronger collaboration between academia and industry to ensure inclusive learning opportunities for all.

Digital rights under threat: Global majority communities call for inclusive solutions at IGF 2025

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a pivotal session hosted by Oxfam’s RECIPE Project shed light on the escalating digital rights challenges facing communities across the Global majority. Representatives from Vietnam, Bolivia, Cambodia, Somalia, and Palestine presented sobering findings based on research with over 1,000 respondents across nine countries.

Despite the diversity of regions, speakers echoed similar concerns: digital literacy is dangerously low, access to safe and inclusive online spaces remains unequal, and legal protections for digital rights are often absent or underdeveloped.

The human cost of digital inequality was made clear from Bolivia to Palestine. In Bolivia, over three-quarters of respondents had experienced digital security incidents, and many reported targeted violence linked to their roles as human rights defenders.

In Somalia, where internet penetration is high, only a fraction understands how to protect their personal data. Palestine, meanwhile, faces systematic digital discrimination, marked by unequal infrastructure access and advanced surveillance technologies used against its population, exacerbated by ongoing occupation and political instability.

Yet amidst these challenges, the forum underscored a strong sense of resilience and innovation. Civil society organisations from Cambodia and Bolivia showcased bottom-up approaches, such as peer-led digital security training and feminist digital safety networks, which help communities protect themselves and influence policy.

Vietnam emphasised the need for genuine participation in policymaking, rather than formalistic consultations, as a path to more equitable digital governance. The session concluded with a shared call to action: digital governance must prioritise human rights and meaningful participation from the ground up.

Speakers and audience members highlighted the urgent need for multistakeholder cooperation—spanning civil society, government, and the tech industry—to counter misinformation and protect freedom of expression, especially in the face of expanding surveillance and online harm. As one participant from Zambia noted, digital safety must not come at the expense of digital freedom; the two must evolve together.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Cybercrime in Africa: Turning research into justice and action

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and policymakers gathered to confront the escalating issue of cybercrime across Africa, marked by the launch of the research report ‘Access to Justice in the Digital Age: Empowering Victims of Cybercrime in Africa’, co-organised by UNICRI and ALT Advisory.

Based on experiences in South Africa, Namibia, Sierra Leone, and Uganda, the study highlights a troubling rise in cybercrime, much of which remains invisible due to widespread underreporting, institutional weaknesses, and outdated or absent legal frameworks. The report’s author, Tina Power, underscored the need to recognise cybercrime not merely as a technical challenge, but as a profound justice issue.

One of the central concerns raised was the gendered nature of many cybercrimes. Victims—especially women and LGBTQI+ individuals—face severe societal stigma and are often met with disbelief or indifference when reporting crimes such as revenge porn, cyberstalking, or online harassment.

Sandra Aceng from the Women of Uganda Network detailed how cultural taboos, digital illiteracy, and unsympathetic police responses prevent victims from seeking justice. Without adequate legal tools or trained officers, victims are left exposed, compounding trauma and enabling perpetrators.

Law enforcement officials, such as Zambia’s Michael Ilishebo, described various operational challenges, including limited forensic capabilities, the complexity of crimes facilitated by AI and encryption, and the lack of cross-border legal cooperation. Only a few African nations are party to key international instruments like the Budapest Convention, complicating efforts to address cybercrime that often spans multiple jurisdictions.

Ilishebo also highlighted how social media platforms frequently ignore law enforcement requests, citing global guidelines that don’t reflect African legal realities. To counter these systemic challenges, speakers advocated for a robust, victim-centred response built on strong laws, sustained training for justice-sector actors, and improved collaboration between governments, civil society, and tech companies.

Nigerian Senator Shuaib Afolabi Salisu called for a unified African stance to pressure big tech into respecting the continent’s legal systems. The session ended with a consensus – the road to justice in Africa’s digital age must be paved with coordinated action, inclusive legislation, and empowered victims.

AI and the future of work: Global forum highlights risks, promise, and urgent choices

At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathered for a high-level session exploring how AI is transforming the world of work. While the tone was broadly optimistic, participants wrestled with difficult questions about equity, regulation, and the ethics of data use.

AI’s capacity to enhance productivity, reshape industries, and bring solutions to health, education, and agriculture was celebrated, but sharp divides emerged over how to govern and share its benefits. Concrete examples showcased AI’s positive impact. Norway’s government highlighted AI’s role in green energy and public sector efficiency, while Lesotho’s minister shared how AI helps detect tuberculosis and support smallholder farmers through localised apps.

AI addresses systemic shortfalls in healthcare by reducing documentation burdens and enabling earlier diagnosis. Corporate representatives from Meta and OpenAI showcased tools that personalise education, assist the visually impaired, and democratise advanced technology through open-source platforms.

Joseph Gordon-Levitt at IGF 2025

Yet, concerns about fairness and data rights loomed large. Actor and entrepreneur Joseph Gordon-Levitt delivered a pointed critique of tech companies using creative work to train AI without consent or compensation.

He called for economic systems that reward human contributions, warning that failing to do so risks eroding creative and financial incentives. This argument underscored broader concerns about job displacement, automation, and the growing digital divide, especially among women and marginalised communities.

Debates also exposed philosophical rifts between regulatory approaches. While the US emphasised minimal interference to spur innovation, the European Commission and Norway called for risk-based regulation and international cooperation to ensure trust and equity. Speakers agreed on the need for inclusive governance frameworks and education systems that foster critical thinking, resist de-skilling, and prepare workers for an AI-augmented economy.

The session made clear that the future of work in the AI era depends on today’s collective choices that must centre people, fairness, and global solidarity.

AI governance efforts centre on human rights

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a key session spotlighted the launch of the Freedom Online Coalition’s (FOC) updated Joint Statement on Artificial Intelligence and Human Rights. Backed by 21 countries and counting, the statement outlines a vision for human-centric AI governance rooted in international human rights law.

Representatives from governments, civil society, and the tech industry—most notably the Netherlands, Germany, Ghana, Estonia, and Microsoft—gathered to emphasise the urgent need for a collective, multistakeholder approach to tackle the real and present risks AI poses to rights such as privacy, freedom of expression, and democratic participation.

Ambassador Ernst Noorman of the Netherlands warned that human rights and security must be viewed as interconnected, stressing that unregulated AI use can destabilise societies rather than protect them. His remarks echoed the Netherlands’ own hard lessons from biased welfare algorithms.

Other panellists, including Germany’s Cyber Ambassador Maria Adebahr, underlined how AI is being weaponised for transnational repression and emphasised Germany’s commitment by doubling funding for the FOC. Ghana’s cybersecurity chief, Divine Salese Agbeti, added that AI misuse is not exclusive to governments—citizens, too, have exploited the technology for manipulation and deception.

From the private sector, Microsoft’s Dr Erika Moret showcased the company’s multi-layered approach to embedding human rights in AI, from ethical design and impact assessments to rejecting high-risk applications like facial recognition in authoritarian contexts. She stressed the company’s alignment with UN guiding principles and the need for transparency, fairness, and inclusivity.

The discussion also highlighted binding global frameworks like the EU AI Act and the Council of Europe’s Framework Convention, calling for their widespread adoption as vital tools in managing AI’s global impact. The session concluded with a shared call to action: governments must use regulatory tools and procurement power to enforce human rights standards in AI, while the private sector and civil society must push for accountability and inclusion.

The FOC’s statement remains open for new endorsements, standing as a foundational text in the ongoing effort to align the future of AI with the fundamental rights of all people.
