EU decision regulates researcher access to data under the DSA

A document released by the Republican-led House Judiciary Committee revived claims that the EU digital rules amount to censorship. The document concerns a €120 million fine against X under the Digital Services Act and was framed as a ‘secret censorship ruling’, despite publication requirements.

The document provides insight into how the European Commission interprets Article 40 of the DSA, which governs researcher access to platform data. The rule requires very large online platforms to grant qualified researchers access to publicly accessible data needed to study systemic risks in the EU.

Investigators found that X failed to comply with Article 40(12), in force since 2023 and covering public data access. The Commission said X applied restrictive eligibility rules, delayed reviews, imposed tight quotas, and blocked independent researcher access, including scraping.

The decision confirms platforms cannot price access to restrict research, deny access based on affiliation or location, or ban scraping by contract. The European Commission also rejected X’s narrow reading of ‘systemic risk’, allowing broader research contexts.

The ruling also highlights weak internal processes and limited staffing for handling access requests. X must submit an action plan by mid-April 2026, with the decision expected to shape future enforcement of researcher access across major platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI governance becomes urgent for mortgage lenders

Mortgage lenders face growing pressure to govern AI as regulatory uncertainty persists across the United States. States and federal authorities continue to contest oversight, but accountability for how AI is used in underwriting, servicing, marketing, and fraud detection already rests with lenders.

Effective AI risk management requires more than policy statements. Mortgage lenders need operational governance that inventories AI tools, documents training data, and assigns accountability for outcomes, including bias monitoring and escalation when AI affects borrower eligibility, pricing, or disclosures.

Vendor risk has become a central exposure. Many technology contracts predate AI scrutiny and lack provisions on audit rights, explainability, and data controls, leaving lenders responsible when third-party models fail regulatory tests or transparency expectations.

Leading US mortgage lenders are using staged deployments, starting with lower-risk use cases such as document processing and fraud detection, while maintaining human oversight for high-impact decisions. Incremental rollouts generate performance and fairness evidence that regulators increasingly expect.

Regulatory pressure is rising as states advance AI rules and federal authorities signal the development of national standards. Even as boundaries are debated, lenders remain accountable, making early governance and disciplined scaling essential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Russia signals no immediate Google ban as Android dependence remains critical

Officials in Russia have confirmed that no plans are underway to restrict access to Google, despite recent public debate about the possibility of a technical block. Anton Gorelkin, a senior lawmaker, said regulators clarified that such a step is not being considered.

Concerns centre on the impact a ban would have on devices running Android, which are used by a significant share of smartphone owners in the country.

A block on Google would disrupt essential digital services rather than encourage the company to resolve ongoing legal disputes over unpaid fines.

Gorelkin noted that court proceedings abroad are still in progress, meaning enforcement options remain open. He added that any future move to reduce reliance on Google services should follow a gradual pathway supported by domestic technological development rather than abrupt restrictions.

The comments follow earlier statements from another lawmaker, Andrey Svintsov, who acknowledged that blocking Google in Russia is technically feasible but unnecessary.

Officials now appear focused on creating conditions that would allow local digital platforms to grow without destabilising existing infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hybrid AI could reshape robotics and defence

Investors and researchers are increasingly arguing that the future of AI lies beyond large language models. In London and across Europe, startups are developing so-called world models designed to simulate physical reality rather than simply predict text.

Unlike LLMs, which rely on static datasets, world models aim to build internal representations of cause and effect. Advocates say these systems are better suited to autonomous vehicles, robotics, defence and industrial simulation.

London-based Stanhope AI is among the companies pursuing this approach, claiming its systems learn by inference and continuously update their internal maps. The company is reportedly working with European governments and aerospace firms on AI drone applications.

Supporters argue that safety and explainability must be embedded from the outset, particularly under frameworks such as the EU AI Act. Investors suggest that hybrid systems combining LLMs with physics aware models could unlock large commercial markets across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces tension over potential ban on AI ‘pornification’

Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.

Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.

Centre and liberal groups take a different position by promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.

They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.

Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.

Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.

The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.

A clear stance from the Parliament is still pending, rather than an assured path toward agreement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Latam-GPT signals new AI ambition in Latin America

Chile has introduced Latam-GPT to strengthen Latin America’s presence in global AI.

The project, developed by the National Centre for Artificial Intelligence with support across South America, aims to correct long-standing biases by training systems on the region’s own data instead of material drawn mainly from the US or Europe.

President Gabriel Boric said the model will help maintain cultural identity and allow the region to take a more active role in technological development.

Latam-GPT is designed not as a conversational tool but as a foundation on which future applications can be built. More than eight terabytes of information have been collected, mainly in Spanish and Portuguese, with plans to add indigenous languages as the project expands.

The first version was trained on Amazon Web Services, while future work will run on a new supercomputer at the University of Tarapacá, supported by millions of dollars in regional funding.

The model reflects growing interest among countries outside the major AI hubs of the US, China and Europe in developing their own technology instead of relying on foreign systems.

Researchers in Chile argue that global models often include Latin American data in tiny proportions, which can limit accurate representation. Despite questions about resources and scale, supporters believe Latam-GPT can deliver practical benefits tailored to local needs.

Early adoption is already underway, with the Chilean firm Digevo preparing customer service tools based on the model.

These systems will operate in regional languages and recognise local expressions, offering a more natural experience than products trained on data from other parts of the world.

Developers say the approach could reduce bias and promote more inclusive AI across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT starts limited advertising rollout in the US

OpenAI has begun rolling out advertising inside ChatGPT, marking a shift for a service that has largely operated without traditional ads since its launch in 2022.

OpenAI said it is testing ads for logged-in Free and Go users in the United States, while paid tiers remain ad-free. The company said the test aims to fund broader access to advanced AI tools.

Ads appear outside ChatGPT responses and are clearly labelled as sponsored content, with no influence on answers. Placement is based on broad topics, with restrictions around sensitive areas such as health or politics.

Free users can opt out of ads by upgrading to a paid plan or by accepting fewer daily free messages in exchange for an ad-free experience. Users who allow ads can also opt out of ad personalisation, prevent past chats from being used for ad selection, and delete all ad-related history and data.

The rollout follows months of speculation after screenshots suggested that ads appeared in ChatGPT responses, which OpenAI described as suggestions. Rivals, including Anthropic, have contrasted their approach, promoting Claude as free from in-chat advertising.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia recasts Vision 2030 with new priorities

Saudi Arabia is steering the next phase of Vision 2030 toward technology, digital infrastructure and advanced industry instead of relying on large urban construction schemes.

Officials highlight the need to support sectors that can accelerate innovation, strengthen data capabilities and expand the kingdom’s role in global tech development.

The move aligns with ongoing efforts to diversify the economy and build long-term competitiveness in areas such as smart manufacturing, logistics technology and clean energy systems.

Recent adjustments involve scaling back or rescheduling some giga projects so that investment can be channelled toward initiatives with strong digital and technological potential.

Elements of the NEOM programme have been revised, while funding attention is shifting to areas that enable automation, renewable technologies and high-value services.

Saudi Arabia aims to position Riyadh as a regional hub for research, emerging technologies and advanced industries. Officials stress that Vision 2030 remains active, yet its next stage will focus on projects that can accelerate technological adoption and strengthen economic resilience.

The Public Investment Fund continues to guide investment toward ecosystems that support innovation, including clean energy, digital infrastructure and international technology partnerships.

The approach reflects earlier recommendations to align economic planning with evolving skills, future labour market needs and opportunities in fast-growing sectors.

Analysts note that the revised direction prioritises sustainable growth by expanding the kingdom’s participation in global technological development instead of relying mainly on construction-driven momentum.

Social and regulatory reforms connected to digital transformation also remain part of the Vision 2030 agenda. Investments in training, digital literacy and workforce development are intended to ensure that young people can participate fully in the technology sectors the kingdom is prioritising.

With such a shift, the government seeks to balance long-term economic diversification with practical technological goals that reinforce innovation and strengthen the country’s competitive position.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI model achieves accurate detection of placenta accreta spectrum in high-risk pregnancies

A new AI model has shown strong potential for detecting placenta accreta spectrum, a dangerous condition that often goes undiagnosed during pregnancy.

Researchers presented the findings at the annual meeting of the Society for Maternal-Fetal Medicine, highlighting that traditional screening identifies only about half of all cases.

Placenta accreta spectrum arises when the placenta attaches abnormally to the uterine wall, often after previous surgical procedures such as caesarean delivery.

The condition can trigger severe haemorrhage, organ failure, and death, yet many pregnancies with elevated risk receive inconclusive or incorrect assessments through standard ultrasound examinations.

The study involved a retrospective review by specialists at the Baylor College of Medicine, who analysed 2D obstetric ultrasound images from 113 high-risk pregnancies managed at the Texas Children’s Hospital between 2018 and 2025.

The AI system detected every confirmed case of placenta accreta spectrum, produced two false positives, and generated no false negatives.

Researchers believe such technology could significantly improve early identification and clinical preparation.

They argue that AI screening, when used in addition to current methods, may reduce maternal complications and support safer outcomes for patients facing this increasingly common condition.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Enterprise AI adoption stalls despite heavy investment

AI has moved from experimentation to expectation, yet many enterprise AI rollouts continue to stall. Boards demand returns, leaders approve tools and governance, but day-to-day workarounds spread, risk grows, and promised value fails to materialise.

The problem rarely lies with the technology itself. Adoption breaks down when AI is treated as an IT deployment rather than an internal product, leaving employees with approved tools but no clear value proposition, limited capacity, and governance that prioritises control over learning.

A global B2B services firm experienced this pattern during an eight-month enterprise AI rollout across commercial teams. Usage dashboards showed activity, but approved platforms failed to align with actual workflows, leading teams to comply superficially or rely on external tools under delivery pressure.

The experience exposed what some leaders describe as the ‘mandate trap’, where adoption is ordered from the top while usability problems fall to middle managers. Hesitation reflected workflow friction and risk rather than resistance, revealing an internal product–market fit issue.

Progress followed when leaders paused broad deployment and refocused on outcomes, workflow redesign, and protected learning time. Narrow pilots and employee-led enterprise AI testing helped scale only tools that reduced friction and earned trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!