Hybrid AI could reshape robotics and defence

Investors and researchers are increasingly arguing that the future of AI lies beyond large language models. In London and across Europe, startups are developing so-called world models designed to simulate physical reality rather than simply predict text.

Unlike LLMs, which rely on static datasets, world models aim to build internal representations of cause and effect. Advocates say these systems are better suited to autonomous vehicles, robotics, defence and industrial simulation.

London-based Stanhope AI is among the companies pursuing this approach, claiming its systems learn by inference and continuously update their internal maps. The company is reportedly working with European governments and aerospace firms on AI drone applications.
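The core idea of continuously updating an internal map from prediction errors can be sketched in toy form. The state, dynamics, and gain below are illustrative assumptions for a one-dimensional example, not Stanhope AI's actual method:

```python
def update_world_model(belief, predicted_delta, observation, gain=0.3):
    """One step of a toy predictive world model.

    The agent predicts the next state from its internal model, observes
    reality, and corrects its belief in proportion to the prediction error,
    rather than fitting a static dataset.
    """
    prediction = belief + predicted_delta   # what the internal model expects
    error = observation - prediction        # surprise / prediction error
    return prediction + gain * error        # updated internal map

# A drone's belief about its position converges toward noisy observations
# of an object moving roughly one unit per step.
belief = 0.0
for obs in [1.2, 2.1, 3.0, 4.1]:
    belief = update_world_model(belief, predicted_delta=1.0, observation=obs)
```

The point of the sketch is the loop: the model's own prediction, not a training corpus, is what each new observation is compared against.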

Supporters argue that safety and explainability must be embedded from the outset, particularly under frameworks such as the EU AI Act. Investors suggest that hybrid systems combining LLMs with physics-aware models could unlock large commercial markets across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces tension over potential ban on AI ‘pornification’

Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.

Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.

Centre and liberal groups take a different position, promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.

They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.

Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.

Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.

The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.

A clear stance from the Parliament is still pending, and an agreed path forward is far from assured.

Researchers tackle LLM regression with on-policy training

Researchers at MIT, the Improbable AI Lab and ETH Zurich have proposed a fine-tuning method to address catastrophic forgetting in large language models. The issue often causes models to lose earlier skills when they are trained on new tasks.

The technique, called self-distillation fine-tuning, allows a model to act as both teacher and student during training. In the researchers' experiments, the approach preserved prior capabilities while improving accuracy on new tasks.
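The teacher-and-student idea can be reduced to a toy objective: a frozen copy of the pre-update model anchors prior behaviour via a distillation term, while the student fits the new-task label. The function names, the loss weighting, and the reduction of distillation to a simple KL term are illustrative assumptions, not the published algorithm:

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sdft_loss(student_logits, teacher_logits, target, alpha=0.5):
    """Toy self-distillation objective: blend the new-task cross-entropy
    with a KL term that keeps the student close to the frozen teacher
    (the model's own pre-update copy)."""
    p_student = softmax(student_logits)
    p_teacher = softmax(teacher_logits)
    task_loss = -math.log(p_student[target])       # new-task cross-entropy
    distill = sum(t * math.log(t / s)              # KL(teacher || student)
                  for t, s in zip(p_teacher, p_student) if t > 0)
    return alpha * task_loss + (1 - alpha) * distill
```

When the student's outputs drift from the teacher's, the distillation term grows, which is the mechanism that discourages forgetting of earlier skills.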

Enterprise teams often maintain separate model variants to prevent regression, increasing operational complexity. The researchers argue that their method could reduce this fragmentation and support continual learning within a single production model.

However, the method requires around 2.5 times more computing power than standard supervised fine-tuning. Analysts note that real-world deployment will depend on governance controls, training costs and suitability for regulated industries.

Latam-GPT signals new AI ambition in Latin America

Chile has introduced Latam-GPT to strengthen Latin America’s presence in global AI.

The project, developed by the National Centre for Artificial Intelligence with support across South America, aims to correct long-standing biases by training systems on the region’s own data instead of material drawn mainly from the US or Europe.

President Gabriel Boric said the model will help maintain cultural identity and allow the region to take a more active role in technological development.

Latam-GPT is not designed as a conversational tool but rather as a vast dataset that serves as the foundation for future applications. More than eight terabytes of information have been collected, mainly in Spanish and Portuguese, with plans to add indigenous languages as the project expands.

The first version has been trained on Amazon Web Services, while future work will run on a new supercomputer at the University of Tarapacá, supported by millions of dollars in regional funding.

The model reflects growing interest among countries outside the major AI hubs of the US, China and Europe in developing their own technology instead of relying on foreign systems.

Researchers in Chile argue that global models often include Latin American data in tiny proportions, which can limit accurate representation. Despite questions about resources and scale, supporters believe Latam-GPT can deliver practical benefits tailored to local needs.

Early adoption is already underway, with the Chilean firm Digevo preparing customer service tools based on the model.

These systems will operate in regional languages and recognise local expressions, offering a more natural experience than products trained on data from other parts of the world.

Developers say the approach could reduce bias and promote more inclusive AI across the continent.

European ombudsman opens probe into AI use in EU funding reviews

A formal inquiry has been opened into how AI is used in the evaluation of EU funding proposals, marking the first investigation of its kind at the institutional level.

European Ombudsman Teresa Anjinho initiated the probe following allegations that external experts relied on AI systems when assessing applications.

Concerns emerged after a Polish company that submitted its bid before the November 2023 deadline failed to secure support from the European Innovation Council Accelerator programme. The complainant alleged that third-party AI use compromised fairness and influenced the assessment outcome.

Requests have been made for clearer governance standards, including explicit disclosure when AI systems are used in proposal reviews. Fears also emerged that sensitive commercial data could be exposed through external AI platforms.

Although no grounds were found to reopen the individual case, a systemic probe into AI transparency and safeguards was launched. Document inspections are scheduled through March, followed by institutional meetings in April to determine whether regulatory or procedural changes are warranted.

AI safety leader quits Anthropic with global risk warning

A prominent AI safety researcher has resigned from Anthropic, issuing a stark warning about global technological and societal risks. Mrinank Sharma announced his departure in a public letter, citing concerns spanning AI development, bioweapons, and broader geopolitical instability.

Sharma led AI safeguards research, including model alignment, bioterrorism risks, and human-AI behavioural dynamics. Despite praising his tenure, he said ethical tensions and pressures hindered the pursuit of long-term safety priorities.

His exit comes amid wider turbulence across the AI sector. Another researcher recently left OpenAI, raising concerns over the integration of advertising into chatbot environments and the psychological implications of increasingly human-like AI interactions.

Anthropic, founded by former OpenAI staff, balances commercial AI deployment with safety and risk mitigation. Sharma plans to return to the UK to study poetry, stepping back from AI research amid global uncertainty.

Young voices seek critical approach to AI in classrooms

In Houston, more than 200 students from across the US gathered to discuss the future of AI in schools. The event, organised by the Close Up Foundation and Stanford University’s Deliberative Democracy Lab, brought together participants from 39 schools in 19 states.

Students debated whether AI tools such as ChatGPT and Gemini support or undermine learning. Many argued that schools are introducing powerful systems before pupils develop core critical thinking skills.

Participants did not call for a total ban or full embrace of AI. Instead, they urged schools to delay exposure for younger pupils and introduce clearer classroom policies that distinguish between support and substitution.

After returning to Honolulu, a student from ʻIolani School said Hawaiʻi schools should involve students directly in AI policy decisions. In Honolulu and beyond, he argued that structured dialogue can help schools balance innovation with cognitive development.

Next-gen AI infrastructure boosted by Samsung HBM4

Samsung Electronics has commenced mass production and commercial shipments of its next-generation HBM4 memory, marking the first industry deployment of the advanced high-bandwidth solution.

The launch strengthens the company’s position in AI infrastructure hardware as demand for accelerated computing intensifies.

Built on sixth-generation 10nm-class DRAM and a 4nm logic base die, HBM4 delivers transfer speeds of 11.7 Gbps per pin, with performance scalable to 13 Gbps. Bandwidth per stack has surged, reducing data bottlenecks as AI models and processing demands grow.
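Per-stack bandwidth follows from pin speed multiplied by interface width. Assuming the 2048-bit per-stack interface that JEDEC specifies for HBM4, the quoted figures work out as follows; this is back-of-the-envelope peak arithmetic, not Samsung's published stack bandwidth:

```python
def stack_bandwidth_gbs(pin_speed_gbps, interface_width_bits=2048):
    """Peak per-stack bandwidth in GB/s: per-pin speed times interface
    width, converted from bits to bytes."""
    return pin_speed_gbps * interface_width_bits / 8

base = stack_bandwidth_gbs(11.7)   # roughly 2995 GB/s, i.e. about 3 TB/s
peak = stack_bandwidth_gbs(13.0)   # roughly 3328 GB/s at the scaled speed
```

The jump over HBM3-class parts comes mostly from the doubled interface width, which is why bandwidth per stack rises faster than the per-pin speed alone suggests.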

Engineering upgrades extend beyond raw speed. Enhanced stacking architecture, low-power design integration, and thermal optimisation have improved energy efficiency and heat dissipation, supporting large-scale data centre deployments and sustained GPU workloads.

Production scale-up is already in motion, backed by expanded manufacturing capacity and industry partnerships. Samsung expects HBM revenue growth to accelerate into 2026, with next-generation variants and custom configurations scheduled for future release cycles.

ChatGPT starts limited advertising rollout in the US

OpenAI has begun rolling out advertising inside ChatGPT, marking a shift for a service that has largely operated without traditional ads since its launch in 2022.

OpenAI said it is testing ads for logged-in Free and Go users in the United States, while paid tiers remain ad-free. The company said the test aims to fund broader access to advanced AI tools.

Ads appear outside ChatGPT responses and are clearly labelled as sponsored content, with no influence on answers. Placement is based on broad topics, with restrictions around sensitive areas such as health or politics.

Free users can opt out of ads by upgrading to a paid plan or by accepting fewer daily free messages in exchange for an ad-free experience. Users who allow ads can also opt out of ad personalisation, prevent past chats from being used for ad selection, and delete all ad-related history and data.

The rollout follows months of speculation after screenshots suggested that ads appeared in ChatGPT responses, which OpenAI described as suggestions. Rivals, including Anthropic, have contrasted their approach, promoting Claude as free from in-chat advertising.

New AI system forecasts mobility after joint replacement

AI is being deployed to forecast how well patients regain mobility after hip replacement surgery, offering new precision in orthopaedic recovery planning.

Researchers at the Karlsruhe Institute of Technology developed a model capable of analysing complex gait biomechanics to assess post-operative walking outcomes.

Hip osteoarthritis remains one of the leading drivers of joint replacement procedures, with around 200,000 artificial hips implanted in Germany in 2024 alone. Recovery varies widely, driving research into tools predicting post-surgery mobility and pain relief.

Movement data collected before and after operations were analysed using AI as part of a joint project with the Universitätsmedizin Frankfurt.

The system examined biomechanical indicators, including joint angles and loading patterns, enabling researchers to classify patients into three distinct gait recovery groups.
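Group assignment of this kind can be sketched as a nearest-centroid classifier over biomechanical features. The feature choices, centroid values, and group labels below are invented for illustration and are not taken from the KIT study:

```python
def classify_gait(features, centroids):
    """Assign a patient's feature vector to the nearest recovery-group
    centroid by squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda group: dist2(features, centroids[group]))

# Hypothetical centroids: (peak hip flexion angle in degrees,
# load asymmetry between legs in %).
centroids = {
    "near-normal": (30.0, 5.0),
    "moderate": (22.0, 12.0),
    "needs-rehab": (14.0, 20.0),
}
group = classify_gait((28.5, 6.0), centroids)
```

In practice the published model would learn such groupings from the pre- and post-operative movement data rather than from hand-set centroids, but the output — a discrete recovery group per patient — has the same shape.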

Results show the model can predict who regains near-normal walking and who needs intensive rehabilitation. Researchers say the framework could guide personalised therapy and expand to other joints and musculoskeletal disorders.
