EU decision regulates researcher access to data under the DSA

A document released by the Republican-led House Judiciary Committee revived claims that the EU's digital rules amount to censorship. The document concerns a €120 million fine against X under the Digital Services Act and was framed as a ‘secret censorship ruling’, despite the DSA's own publication requirements.

The document provides insight into how the European Commission interprets Article 40 of the DSA, which governs researcher access to platform data. The rule requires very large online platforms to grant qualified researchers access to publicly accessible data needed to study systemic risks in the EU.

Investigators found that X failed to comply with Article 40(12), in force since 2023, which covers access to publicly accessible data. The Commission said X applied restrictive eligibility rules, delayed reviews, imposed tight quotas, and blocked independent researcher access, including via scraping.

The decision confirms platforms cannot price access to restrict research, deny access based on affiliation or location, or ban scraping by contract. The European Commission also rejected X’s narrow reading of ‘systemic risk’, allowing broader research contexts.

The ruling also highlights weak internal processes and limited staffing for handling access requests. X must submit an action plan by mid-April 2026, with the decision expected to shape future enforcement of researcher access across major platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI anxiety strains the modern workforce

Mounting anxiety is reshaping the modern workplace as AI alters job expectations and career paths. Pew Research indicates more than a third of employees believe AI could harm their prospects, fuelling tension across teams.

Younger workers feel particular strain, with 92% of Gen Z saying it is vital to speak openly about mental health at work. Communicators and managers must now deliver reassurance while coping with their own pressure.

Leadership expert Anna Liotta points to generational intelligence as a practical way to reduce friction and improve trust. She highlights how tailored communication can reduce misunderstanding and conflict.

Her latest research connects neuroscience, including the role of the vagus nerve, with practical workplace strategies. By combining emotional regulation with thoughtful messaging, she suggests that organisations can calm anxiety and build more resilient teams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hybrid AI could reshape robotics and defence

Investors and researchers are increasingly arguing that the future of AI lies beyond large language models. In London and across Europe, startups are developing so-called world models designed to simulate physical reality rather than simply predict text.

Unlike LLMs, which rely on static datasets, world models aim to build internal representations of cause and effect. Advocates say these systems are better suited to autonomous vehicles, robotics, defence and industrial simulation.
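The distinction can be made concrete with a toy example: a world model maintains an internal state and predicts how actions change it, rather than predicting the next item in a text sequence. The sketch below uses made-up one-dimensional dynamics purely for illustration; it is not based on any specific company's system.

```python
# Toy "world model": an internal state (position, velocity) is rolled
# forward under actions, capturing cause and effect rather than text.
def step(state, action, dt=0.1):
    # Illustrative 1-D physics: the action is an acceleration command.
    pos, vel = state
    vel = vel + action * dt
    pos = pos + vel * dt
    return (pos, vel)

def rollout(state, actions):
    # Simulate forward: predict the consequences of a plan before acting.
    trajectory = [state]
    for a in actions:
        state = step(state, a)
        trajectory.append(state)
    return trajectory

# Constant acceleration for five steps; the model predicts where we end up.
traj = rollout((0.0, 0.0), [1.0] * 5)
```

A planner can score candidate action sequences against such rollouts, which is the kind of capability advocates say text-only prediction lacks.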

London-based Stanhope AI is among the companies pursuing this approach, claiming its systems learn by inference and continuously update their internal maps. The company is reportedly working with European governments and aerospace firms on AI drone applications.

Supporters argue that safety and explainability must be embedded from the outset, particularly under frameworks such as the EU AI Act. Investors suggest that hybrid systems combining LLMs with physics aware models could unlock large commercial markets across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers tackle LLM regression with on-policy training

Researchers at MIT, the Improbable AI Lab and ETH Zurich have proposed a fine-tuning method to address catastrophic forgetting in large language models. The issue often causes models to lose earlier skills when trained on new tasks.

The technique, called self-distillation fine-tuning, allows a model to act as both teacher and student during training. In experiments in Cambridge and Zurich, the approach preserved prior capabilities while improving accuracy on new tasks.
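The teacher-and-student idea can be sketched as a distillation-style objective: a frozen copy of the base model anchors the updated model, so new-task learning is penalised for drifting away from prior behaviour. The toy below illustrates such a KL-anchored loss on a single prediction; it is a generic sketch, not the authors' actual algorithm, and all function names are illustrative.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, target_idx):
    # Standard new-task loss: negative log-probability of the target.
    return -math.log(probs[target_idx])

def kl_divergence(p, q):
    # KL(p || q): how far the student q has drifted from the teacher p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def distillation_loss(student_logits, teacher_logits, target_idx, lam=1.0):
    # Combined objective: learn the new task while staying close to
    # the frozen teacher's distribution (the anti-forgetting anchor).
    student = softmax(student_logits)
    teacher = softmax(teacher_logits)
    return cross_entropy(student, target_idx) + lam * kl_divergence(teacher, student)

# Same distributions: only the task loss remains (KL term is zero).
close = distillation_loss([2.0, 0.5, 0.1], [2.0, 0.5, 0.1], target_idx=0)
# Student drifted from teacher: the anchor term raises the loss.
drifted = distillation_loss([2.0, 0.5, 0.1], [0.1, 2.0, 0.5], target_idx=0)
```

In a real training loop the teacher outputs would come from the frozen base model over the student's own generations, which is what makes the method on-policy.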

Enterprise teams often maintain separate model variants to prevent regression, increasing operational complexity. The researchers argue that their method could reduce this fragmentation and support continual learning within a single production model.

However, the method requires around 2.5 times more computing power than standard supervised fine-tuning. Analysts note that real-world deployment will depend on governance controls, training costs and suitability for regulated industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Latam-GPT signals new AI ambition in Latin America

Chile has introduced Latam-GPT to strengthen Latin America’s presence in global AI.

The project, developed by the National Centre for Artificial Intelligence with support across South America, aims to correct long-standing biases by training systems on the region’s own data instead of material drawn mainly from the US or Europe.

President Gabriel Boric said the model will help maintain cultural identity and allow the region to take a more active role in technological development.

Latam-GPT is not designed as a consumer chatbot but as a foundation for future applications, built on a vast regional dataset. More than eight terabytes of information have been collected, mainly in Spanish and Portuguese, with plans to add indigenous languages as the project expands.

The first version was trained on Amazon Web Services, while future work will run on a new supercomputer at the University of Tarapacá, supported by millions of dollars in regional funding.

The model reflects growing interest among countries outside the major AI hubs of the US, China and Europe in developing their own technology instead of relying on foreign systems.

Researchers in Chile argue that global models often include Latin American data in tiny proportions, which can limit accurate representation. Despite questions about resources and scale, supporters believe Latam-GPT can deliver practical benefits tailored to local needs.

Early adoption is already underway, with the Chilean firm Digevo preparing customer service tools based on the model.

These systems will operate in regional languages and recognise local expressions, offering a more natural experience than products trained on data from other parts of the world.

Developers say the approach could reduce bias and promote more inclusive AI across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Young voices seek critical approach to AI in classrooms

In Houston, more than 200 students from across the US gathered to discuss the future of AI in schools. The event, organised by the Close Up Foundation and Stanford University’s Deliberative Democracy Lab, brought together participants from 39 schools in 19 states.

Students debated whether AI tools such as ChatGPT and Gemini support or undermine learning. Many argued that schools are introducing powerful systems before pupils develop core critical thinking skills.

Participants did not call for a total ban or full embrace of AI. Instead, they urged schools to delay exposure for younger pupils and introduce clearer classroom policies that distinguish between support and substitution.

After returning to Honolulu, a student from ʻIolani School said Hawaiʻi schools should involve students directly in AI policy decisions, arguing that structured dialogue can help schools balance innovation with cognitive development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Next-gen AI infrastructure boosted by Samsung HBM4

Samsung Electronics has commenced mass production and commercial shipments of its next-generation HBM4 memory, marking the first industry deployment of the advanced high-bandwidth solution.

The launch strengthens the company’s position in AI infrastructure hardware as demand for accelerated computing intensifies.

Built on sixth-generation 10nm-class DRAM and a 4nm logic base die, HBM4 delivers transfer speeds of 11.7Gbps, with performance scalable to 13Gbps. Bandwidth per stack has surged, reducing data bottlenecks as AI models and processing demands grow.
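For scale, the quoted per-pin speed translates into per-stack bandwidth once multiplied across the interface width. The arithmetic below assumes the 2048-bit per-stack interface defined for HBM4; Samsung's own figures may be counted differently.

```python
# Back-of-the-envelope bandwidth per HBM4 stack.
# Assumption: 2048 data pins per stack (HBM4 interface width).
BUS_WIDTH_BITS = 2048

def stack_bandwidth_tb_s(pin_rate_gbps, bus_width_bits=BUS_WIDTH_BITS):
    # pins * per-pin rate gives bits/s; divide by 8 for bytes, 1e12 for TB.
    bits_per_second = pin_rate_gbps * 1e9 * bus_width_bits
    return bits_per_second / 8 / 1e12

launch = stack_bandwidth_tb_s(11.7)   # ≈ 3.0 TB/s at the quoted 11.7 Gbps
scaled = stack_bandwidth_tb_s(13.0)   # ≈ 3.3 TB/s at the 13 Gbps ceiling
```

Under that assumption, a single stack moves roughly three terabytes per second, which is why per-stack bandwidth, not just pin speed, is the headline figure for AI workloads.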

Engineering upgrades extend beyond raw speed. Enhanced stacking architecture, low-power design integration, and thermal optimisation have improved energy efficiency and heat dissipation, supporting large-scale data centre deployments and sustained GPU workloads.

Production scale-up is already in motion, backed by expanded manufacturing capacity and industry partnerships. Samsung expects HBM revenue growth to accelerate into 2026, with next-generation variants and custom configurations scheduled for future release cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia recasts Vision 2030 with new priorities

Saudi Arabia is steering the new phase of Vision 2030 toward technology, digital infrastructure and advanced industry instead of relying on large urban construction schemes.

Officials highlight the need to support sectors that can accelerate innovation, strengthen data capabilities and expand the kingdom’s role in global tech development.

The move aligns with ongoing efforts to diversify the economy and build long-term competitiveness in areas such as smart manufacturing, logistics technology and clean energy systems.

Recent adjustments involve scaling back or rescheduling some giga projects so that investment can be channelled toward initiatives with strong digital and technological potential.

Elements of the NEOM programme have been revised, while funding attention is shifting to areas that enable automation, renewable technologies and high-value services.

Saudi Arabia aims to position Riyadh as a regional hub for research, emerging technologies and advanced industries. Officials stress that Vision 2030 remains active, yet its next stage will focus on projects that can accelerate technological adoption and strengthen economic resilience.

The Public Investment Fund continues to guide investment toward ecosystems that support innovation, including clean energy, digital infrastructure and international technology partnerships.

The approach reflects earlier recommendations to align economic planning with evolving skills, future labour market needs and opportunities in fast-growing sectors.

Analysts note that the revised direction prioritises sustainable growth by expanding the kingdom’s participation in global technological development instead of relying mainly on construction-driven momentum.

Social and regulatory reforms connected to digital transformation also remain part of the Vision 2030 agenda. Investments in training, digital literacy and workforce development are intended to ensure that young people can participate fully in the technology sectors the kingdom is prioritising.

With such a shift, the government seeks to balance long-term economic diversification with practical technological goals that reinforce innovation and strengthen the country’s competitive position.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI pushes schools to rethink learning priorities

Students speaking at a major education technology conference said AI has revealed weaknesses in traditional learning. They argued that a heavy focus on memorisation is becoming less relevant in a world where digital tools provide instant answers.

AI helps learners summarise information and understand complex subjects more easily. Improved access to such tools has made studying more efficient and, in some cases, more engaging.

Teachers have responded by restricting technology use and returning to handwritten assignments. These measures aim to protect academic integrity but have created mixed reactions among students.

Participants supported guided AI use instead of banning it completely. Communication, collaboration and presentation skills were seen as more valuable and less vulnerable to AI shortcuts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global coalition demands ban on AI-nudification tools over child-safety fears

More than 100 organisations have urged governments to outlaw AI-nudification tools after a surge in non-consensual digital images.

Groups such as Amnesty International, the European Commission, and Interpol argue that the technology now fuels harmful practices that undermine human dignity and child safety. Their concerns intensified after the Grok nudification scandal, where users created sexualised images from ordinary photographs.

Campaigners warn that the tools often target women and children instead of staying within any claimed adult-only environment. Millions of manipulated images have circulated across social platforms, with many linked to blackmail, coercion and child sexual abuse material.

Experts say the trauma caused by these AI images is no less serious because the abuse occurs online.

Organisations within the coalition maintain that tech companies already possess the ability to detect and block such material but have failed to apply essential safeguards.

They want developers and platforms to be held accountable and believe that strict prohibitions are now necessary to prevent further exploitation. Advocates argue that meaningful action is overdue and that protection of users must take precedence over commercial interests.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!