Government AI investment grows while public trust falters

Rising investment in AI is reshaping public services worldwide, yet citizen satisfaction remains uneven. Research across 14 countries shows that nearly 45% of residents believe digital government services still require improvement.

Employee confidence is also weakening, with empowerment falling from 87% three years ago to 73% today. Only 35% of public bodies provide structured upskilling for AI-enabled roles, limiting workforce readiness.

Trust remains a growing concern for public authorities adopting AI. Only 47% of residents say they believe their government will use AI responsibly, exposing a persistent credibility gap.

The study highlights an ‘experience paradox’, in which the automation of legacy systems outpaces meaningful service redesign. Leading nations such as the UAE, Saudi Arabia and Singapore rank highly for proactive AI strategies, but researchers argue that leadership vision and structural reform, not funding alone, determine long-term credibility.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Half of xAI’s founding team has now left the company

Departures from Elon Musk’s AI startup xAI have reached a symbolic milestone, with two more co-founders announcing exits within days of each other. Yuhuai Tony Wu and Jimmy Ba both confirmed their decisions publicly, marking a turning point for the company’s leadership.

Losses now total six out of the original 12 founding members, signalling significant turnover in less than three years. Several prominent researchers had already moved on to competitors, launched new ventures, or stepped away for personal reasons.

The timing coincides with major developments, including SpaceX’s acquisition of xAI and preparations for a potential public listing. Financial opportunities and intense demand for AI expertise are encouraging senior talent to pursue independent projects or new roles.

Challenges surrounding the Grok chatbot, including technical issues and controversy over its harmful content, have added internal pressure. Growing competition from OpenAI and Anthropic means retaining skilled researchers will be vital to sustaining investor confidence and future growth.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Russia signals no immediate Google ban as Android dependence remains critical

Officials in Russia have confirmed that no plans are underway to restrict access to Google, despite recent public debate about the possibility of a technical block. Anton Gorelkin, a senior lawmaker, said regulators clarified that such a step is not being considered.

Concerns centre on the impact a ban would have on devices running Android, which are used by a significant share of smartphone owners in the country.

A block on Google would disrupt essential digital services instead of encouraging the company to resolve ongoing legal disputes involving unpaid fines.

Gorelkin noted that court proceedings abroad are still in progress, meaning enforcement options remain open. He added that any future move to reduce reliance on Google services should follow a gradual pathway supported by domestic technological development rather than abrupt restrictions.

The comments follow earlier statements from another lawmaker, Andrey Svintsov, who acknowledged that blocking Google in Russia is technically feasible but unnecessary.

Officials now appear focused on creating conditions that would allow local digital platforms to grow without destabilising existing infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hybrid AI could reshape robotics and defence

Investors and researchers are increasingly arguing that the future of AI lies beyond large language models. In London and across Europe, startups are developing so-called world models designed to simulate physical reality rather than simply predict text.

Unlike LLMs, which rely on static datasets, world models aim to build internal representations of cause and effect. Advocates say these systems are better suited to autonomous vehicles, robotics, defence and industrial simulation.

London-based Stanhope AI is among the companies pursuing this approach, claiming its systems learn by inference and continuously update their internal maps. The company is reportedly working with European governments and aerospace firms on AI drone applications.

Supporters argue that safety and explainability must be embedded from the outset, particularly under frameworks such as the EU AI Act. Investors suggest that hybrid systems combining LLMs with physics-aware models could unlock large commercial markets across Europe.
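The distinction between predicting text and modelling cause and effect can be illustrated with a deliberately tiny sketch: instead of learning token statistics from a static corpus, a world model fits a transition rule from observed trajectories and then simulates forward. Everything below is illustrative (real world models are vastly richer), assuming only a one-dimensional linear dynamic:

```python
def fit_dynamics(states):
    """Least-squares estimate of a in x[t+1] ~= a * x[t]: a toy 'world
    model' that recovers a cause-and-effect rule from an observed
    trajectory rather than from a static text dataset."""
    num = sum(states[t] * states[t + 1] for t in range(len(states) - 1))
    den = sum(states[t] ** 2 for t in range(len(states) - 1))
    return num / den

def rollout(a, x0, steps):
    """Simulate the learned dynamics forward: the model predicts
    future states instead of retrieving memorised patterns."""
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1])
    return xs

# A decaying trajectory generated by the rule x[t+1] = 0.5 * x[t].
trajectory = [8.0, 4.0, 2.0, 1.0]
a_hat = fit_dynamics(trajectory)      # recovers 0.5
predicted = rollout(a_hat, 8.0, 3)    # reproduces the trajectory
```

The point of the sketch is the interface, not the maths: once the transition rule is learned, the same model answers counterfactual questions (what happens from a new starting state?) that a purely text-trained predictor has no mechanism for.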

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces tension over potential ban on AI ‘pornification’

Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.

Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.

Centre and liberal groups take a different position by promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.

They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.

Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.

Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.

The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.

A clear stance from the Parliament is still pending, rather than an assured path toward agreement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers tackle LLM regression with on-policy training

Researchers at MIT, the Improbable AI Lab and ETH Zurich have proposed a fine-tuning method to address catastrophic forgetting in large language models. The issue often causes models to lose earlier skills when trained on new tasks.

The technique, called self-distillation fine-tuning, allows a model to act as both teacher and student during training. In experiments run in Cambridge and Zurich, the approach preserved prior capabilities while improving accuracy on new tasks.

Enterprise teams often manage separate model variants to prevent regression, increasing operational complexity. The researchers argue that their method could reduce fragmentation and support continual learning within a single production model.

However, the method requires around 2.5 times more computing power than standard supervised fine-tuning. Analysts note that real-world deployment will depend on governance controls, training costs and suitability for regulated industries.
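The core idea behind a teacher-student anchoring objective can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the alpha weighting and the toy logits are all hypothetical. The new-task loss is combined with a KL term that keeps the student's output distribution close to a frozen copy of the pre-update model (the "teacher"), which is what discourages forgetting:

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def distillation_loss(student_logits, teacher_logits, target_idx, alpha=0.5):
    """Illustrative combined objective: cross-entropy on the new task
    plus a KL anchor toward the frozen teacher (the pre-update model).
    The anchor term penalises drift from previously learned behaviour."""
    p_s = softmax(student_logits)
    p_t = softmax(teacher_logits)
    new_task = -math.log(p_s[target_idx])   # learn the new task
    anchor = kl(p_t, p_s)                   # stay near old behaviour
    return alpha * new_task + (1 - alpha) * anchor
```

In this toy form the trade-off is explicit: a student whose distribution matches the teacher pays no anchor penalty, while one that drifts pays a positive KL cost, so gradient updates must balance new-task accuracy against preserving old behaviour.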

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Latam-GPT signals new AI ambition in Latin America

Chile has introduced Latam-GPT to strengthen Latin America’s presence in global AI.

The project, developed by the National Centre for Artificial Intelligence with support across South America, aims to correct long-standing biases by training systems on the region’s own data instead of material drawn mainly from the US or Europe.

President Gabriel Boric said the model will help maintain cultural identity and allow the region to take a more active role in technological development.

Latam-GPT is not designed as a conversational tool but rather as a vast dataset that serves as the foundation for future applications. More than eight terabytes of information have been collected, mainly in Spanish and Portuguese, with plans to add indigenous languages as the project expands.

The first version was trained on Amazon Web Services, while future work will run on a new supercomputer at the University of Tarapacá, supported by millions of dollars in regional funding.

The model reflects growing interest among countries outside the major AI hubs of the US, China and Europe in developing their own technology instead of relying on foreign systems.

Researchers in Chile argue that global models often include Latin American data in tiny proportions, which can limit accurate representation. Despite questions about resources and scale, supporters believe Latam-GPT can deliver practical benefits tailored to local needs.

Early adoption is already underway, with the Chilean firm Digevo preparing customer service tools based on the model.

These systems will operate in regional languages and recognise local expressions, offering a more natural experience than products trained on data from other parts of the world.

Developers say the approach could reduce bias and promote more inclusive AI across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European ombudsman opens probe into AI use in EU funding reviews

A formal inquiry has been opened into how AI is used in the evaluation of EU funding proposals, marking the first investigation of its kind at the institutional level.

European Ombudsman Teresa Anjinho initiated the probe following allegations that external experts relied on AI systems when assessing applications.

Concerns emerged when a Polish company failed to secure support from the European Innovation Council Accelerator programme, having submitted its bid before the November 2023 deadline. The complainant alleged that third-party AI use compromised fairness and influenced the assessment outcome.

Requests have been made for clearer governance standards, including explicit disclosure when AI systems are used in proposal reviews. Fears also emerged that sensitive commercial data could be exposed through external AI platforms.

Although the Ombudsman found no grounds to reopen the individual case, she launched a systemic probe into AI transparency and safeguards. Document inspections are scheduled through March, followed by institutional meetings in April to determine whether regulatory or procedural changes are warranted.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI safety leader quits Anthropic with global risk warning

A prominent AI safety researcher has resigned from Anthropic, issuing a stark warning about global technological and societal risks. Mrinank Sharma announced his departure in a public letter, citing concerns spanning AI development, bioweapons, and broader geopolitical instability.

Sharma led AI safeguards research, including model alignment, bioterrorism risks, and human-AI behavioural dynamics. Despite praising his tenure, he said ethical tensions and pressures hindered the pursuit of long-term safety priorities.

His exit comes amid wider turbulence across the AI sector. Another researcher recently left OpenAI, raising concerns over the integration of advertising into chatbot environments and the psychological implications of increasingly human-like AI interactions.

Anthropic, founded by former OpenAI staff, balances commercial AI deployment with safety and risk mitigation. Sharma plans to return to the UK to study poetry, stepping back from AI research amid global uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Young voices seek critical approach to AI in classrooms

In Houston, more than 200 students from across the US gathered to discuss the future of AI in schools. The event, organised by the Close Up Foundation and Stanford University’s Deliberative Democracy Lab, brought together participants from 39 schools in 19 states.

Students debated whether AI tools such as ChatGPT and Gemini support or undermine learning. Many argued that schools are introducing powerful systems before pupils develop core critical thinking skills.

Participants did not call for a total ban or full embrace of AI. Instead, they urged schools to delay exposure for younger pupils and introduce clearer classroom policies that distinguish between support and substitution.

After returning to Honolulu, a student from ʻIolani School said Hawaiʻi schools should involve students directly in AI policy decisions, arguing that structured dialogue can help schools balance innovation with cognitive development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!