AIOLIA framework translates AI principles into system design

An EU-funded project, AIOLIA, is examining how Europe’s approach to trustworthy AI can be applied in practice. Principles such as transparency and accountability are embedded in the AI Act’s binding rules, but turning those principles into design choices remains difficult.

The project focuses on closing that gap by analysing how AI ethics is applied in real systems. Its work supports the implementation of AI Act requirements beyond the legal text, and the lessons learned are translated into practical training.

Project coordinator Alexei Grinbaum argues that ethical principles vary widely by context: engineers are expected to follow them, but their implications differ from system to system, so bridging the gap requires concrete examples.

AIOLIA analyses ten use cases across multiple domains involving professionals and citizens, examining how organisations operationalise ethics under regulatory and organisational constraints. The findings highlight transferable practices rather than prescribing a single model.

Training is central to the initiative, particularly for EU ethics evaluators and researchers working under the AI Act framework. As AI becomes more persuasive, risks around manipulation grow, and AIOLIA aims to align ethical language with the daily decisions of those who build and evaluate these systems.

Florida moves ahead with new AI Bill of Rights

Florida lawmakers are preparing a sweeping AI Bill of Rights as political debates intensify. Senator Tom Leek introduced a proposal to provide residents with clearer safeguards while regulating how firms utilise advanced systems across the state.

The plan outlines parental control over minors’ interactions with AI and requires disclosure when people engage with automated systems. It also sets boundaries on political advertising created with AI and restricts state contracts with suppliers linked to countries of concern.

Governor Ron DeSantis maintains Florida can advance its agenda despite federal attempts to curb state-level AI rules. He argues the state has the authority to defend consumers while managing the rising costs of new data centre developments.

Democratic lawmakers have raised concerns about young users forming harmful online bonds with AI companions, prompting calls for stronger protections. The legislation now forms part of a broader clash over online safety, privacy rights and fast-growing AI industries.

Cyber incident hits France’s postal and banking networks

France’s national postal service, La Poste, suffered a cyber incident days before Christmas that disrupted websites, mobile applications and parts of its delivery network.

The organisation confirmed a distributed denial of service attack temporarily knocked key digital systems offline, slowing parcel distribution during the busiest period of the year.

The disruption also affected La Banque Postale, with customers reporting limited access to online banking and mobile services. Card payments in stores, ATM withdrawals, and authenticated online payments continued to function, easing concerns over wider financial instability.

La Poste stated there was no evidence of customer data exposure, although several post offices in France operated at reduced capacity. Staff were deployed to restore services while maintaining in-person banking and postal transactions where possible.

The incident added to growing anxiety over digital resilience in critical public services, particularly following a separate data breach disclosed at France’s Interior Ministry last week. Authorities have yet to identify those responsible for the attack on La Poste.

Nvidia seeks China market access as US eases AI chip restrictions

The US tech giant NVIDIA has largely remained shut out of China’s market for advanced AI chips, as US export controls have restricted sales due to national security concerns.

High-performance processors such as the H100 and H200 were barred, forcing NVIDIA to develop downgraded alternatives for Chinese customers instead of selling its flagship products.

A shift in policy emerged after President Donald Trump announced that H200 chip sales to China could proceed following a licensing review and a proposed 25% fee. The decision reopened a limited pathway for exporting advanced US AI hardware, subject to regulatory approval in both Washington and Beijing.
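For scale, the arithmetic of such a fee is simple. The sketch below uses a purely hypothetical shipment value, not a figure reported in connection with the policy.

```python
# Illustrative arithmetic only: the shipment value below is hypothetical,
# not a figure reported in connection with the policy.
FEE_RATE = 0.25  # proposed 25% fee on approved H200 sales to China

def fee_on_sales(sales_value_usd: float, fee_rate: float = FEE_RATE) -> float:
    """Return the fee owed on a given sales value at the proposed rate."""
    return sales_value_usd * fee_rate

hypothetical_sales = 1_000_000_000  # a purely hypothetical $1bn in shipments
print(f"Fee on ${hypothetical_sales:,}: ${fee_on_sales(hypothetical_sales):,.0f}")
```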

If authorised, the H200 shipments would represent the most powerful US-made AI chips permitted in China since restrictions were introduced. The move could help NVIDIA monetise existing H200 inventory while easing pressure on its China business as it transitions towards newer Blackwell chips.

Strategically, the decision may slow China’s push for AI chip self-sufficiency, as domestic alternatives still lag behind NVIDIA’s technology.

At the same time, the policy highlights a transactional approach to export controls, raising uncertainty over long-term US efforts to contain China’s technological rise.

How Microsoft is teaching AI to understand biological systems

Medicine still relies largely on population averages, even though genetic and cellular differences shape how diseases develop and respond to treatment.

Researchers at Microsoft argue that AI could transform healthcare by learning the language of biology and enabling truly personalised medicine instead of one-size-fits-all therapies.

Ava Amini, principal researcher at Microsoft Research, explains that AI can detect biological patterns at a scale impossible for human analysis.

Single cancer biopsies can generate tens of millions of data points, allowing AI models to identify meaningful signals and support precision treatment strategies tailored to individual patients.
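As a rough back-of-the-envelope sketch (assuming, purely for illustration, that the biopsy is profiled with single-cell sequencing, which the article does not specify), the scale follows from multiplying cells by measured genes:

```python
# Back-of-the-envelope scale estimate; the profiling method and the counts
# below are assumptions for illustration, not figures from the article.
cells_per_biopsy = 2_000   # cells captured in a typical single-cell assay (assumed)
genes_measured = 20_000    # roughly the number of human protein-coding genes
data_points = cells_per_biopsy * genes_measured
print(f"{data_points:,} expression values per biopsy")  # 40,000,000 -> tens of millions
```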

Building on decades of biological coding systems, Microsoft has developed generative models such as EvoDiff and the Dayhoff Atlas to design new proteins using biological language.
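The ‘biological language’ here treats a protein as a string over the 20 standard amino acids. The toy sketch below is not EvoDiff or any Microsoft API; it only illustrates the idea of generating a sequence one token at a time, which trained models do by sampling from learned, context-dependent distributions rather than uniformly at random.

```python
import random

# Toy illustration of the "protein as language" idea: a sequence over the
# 20 standard amino acids, generated one residue (token) at a time.
# This is NOT EvoDiff or any Microsoft API; trained generative models sample
# from learned, context-dependent distributions rather than uniformly.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sample_sequence(length: int, rng: random.Random) -> str:
    """Draw a protein-like string of the requested length."""
    return "".join(rng.choice(AMINO_ACIDS) for _ in range(length))

rng = random.Random(0)
print(sample_sequence(60, rng))  # a random 60-residue sequence, for illustration
```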

Lab testing has shown a marked improvement in functional success, demonstrating that AI-driven protein design is moving beyond theory into real-world application.

Challenges remain in modelling entire human cells, where current AI systems still predict averages rather than biological diversity. Microsoft researchers continue to pursue integrated experimental and computational approaches, aiming to bring precision oncology closer to everyday clinical practice.
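One way to see why predicting averages is limiting: a model fitted to a mixed cell population by minimising squared error converges to the population mean, which may describe no real cell. A minimal sketch with made-up numbers:

```python
import statistics

# Hypothetical drug responses of two cell subpopulations (made-up numbers,
# purely to illustrate the "averages vs diversity" point).
resistant_cells = [0.10, 0.20, 0.15, 0.10]   # barely respond
sensitive_cells = [0.90, 0.85, 0.95, 0.90]   # respond strongly

population = resistant_cells + sensitive_cells
mean_response = statistics.mean(population)

print(f"Population-average prediction: {mean_response:.2f}")
# ~0.52: a value that describes neither subpopulation, which is why models
# that predict averages can miss clinically relevant heterogeneity.
```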

xAI could reach AGI by 2026 as the AI race intensifies

Elon Musk has told xAI employees that the next two to three years will determine whether the company survives and emerges as a leading force in artificial general intelligence.

Speaking during a company-wide meeting, Musk argued that endurance during such a period could position xAI at the forefront of the AGI race.

Musk suggested that xAI could achieve AGI as early as 2026, pointing to rapid advances in the Grok model family. He has previously offered shifting timelines for AGI development, underscoring both technological momentum and persistent uncertainty surrounding the field.

The remarks come as competition across the AI sector intensifies, with OpenAI accelerating model releases and Google unveiling new iterations of its Gemini system. Against larger incumbents, xAI is positioning itself as a challenger focused on speed, scale and aggressive execution.

Central to that strategy is the Colossus project, which has already deployed around 200,000 GPUs and plans to expand to one million.

Musk also highlighted operational synergies with Tesla and SpaceX, while floating longer-term concepts such as space-based data centres, reinforcing xAI’s ambition to differentiate through scale and unconventional infrastructure.

EU moves to extend child abuse detection rules

The European Commission has proposed extending the Interim Regulation that allows online service providers to voluntarily detect and report child sexual abuse, avoiding a legal gap once the current rules expire.

These measures would preserve existing safeguards while negotiations on permanent legislation continue.

The Interim Regulation enables providers of certain communication services to identify and remove child sexual abuse material under a temporary exemption from e-Privacy rules.

Without an extension beyond April 2026, voluntary detection would have to stop, making it easier for offenders to share illegal material and groom children online.

According to the Commission, proactive reporting by platforms has played a critical role for more than fifteen years in identifying abuse and supporting criminal investigations. Extending the interim framework until April 2028 is intended to maintain these protections until long-term EU rules are agreed.

The proposal now moves to the European Parliament and the Council, with the Commission urging swift agreement to ensure continued protection for children across the Union.

Bank of England governor warns AI could displace jobs at Industrial Revolution scale

Bank of England Governor Andrew Bailey said the widespread adoption of AI is likely to displace workers from existing roles, drawing parallels with the labour disruption caused by the Industrial Revolution.

He emphasised that while AI can boost productivity and economic growth, the UK must invest in training and education to help workers transition into jobs that are AI-enabled.

Bailey expressed particular concern about the impact on younger and inexperienced workers, warning that AI may reduce entry-level opportunities in sectors such as law, accountancy and administration. He noted that firms may hire fewer junior staff as AI systems replace routine data and document analysis.

Despite these risks, Bailey described AI as a potential driver of future UK growth, although he cautioned that productivity gains may take time to materialise.

He also stated that the Bank of England is experimenting with AI internally while monitoring concerns about a potential AI market bubble and the risks of a sharp valuation correction.

Digital fraud declines in Russia after rollout of Cyberbez measures

Russia has reported a sharp decline in cyber fraud following the introduction of new regulatory measures in 2025. Officials say legislative action targeting telephone and online scams has begun to deliver measurable results.

State Secretary and Deputy Minister of Digital Development Ivan Lebedev told the State Duma that crimes covered by the first package of reforms, known as ‘Cyberbez 1.0’, have fallen by 40%, according to confirmed statistics.

Earlier this year, Lebedev said Russia records roughly 677,000 cases of phone and online fraud annually, with incidents rising by more than 35% since 2022, highlighting the scale of the challenge faced by authorities.

In April, President Vladimir Putin signed a law introducing a range of countermeasures, including a state information system to combat fraud, limits on unsolicited marketing calls, stricter SIM card issuance rules, and new compliance obligations for banks.

Further steps are now under discussion. Officials say a second package is being prepared, while a third set of initiatives was announced in December as Russia continues to strengthen its digital security framework.

Competing visions of AGI emerge at Google DeepMind and Microsoft

Two former DeepMind co-founders now leading rival AI labs have outlined sharply different visions for how artificial general intelligence (AGI) should be developed, highlighting a growing strategic divide at the top of the industry.

Google DeepMind chief executive Demis Hassabis has framed AGI as a scientific tool for tackling foundational challenges such as fusion energy, advanced materials, and fundamental physics, while noting that current models still lack consistent reasoning across tasks.

Hassabis has pointed to weaknesses such as so-called ‘jagged intelligence’, where systems perform well on complex benchmarks but fail simple tasks. DeepMind is investing in physics-based evaluations and AlphaZero-inspired research to enable genuine knowledge discovery rather than data replication.

Microsoft AI chief executive Mustafa Suleyman has taken a more product-led stance, framing AGI as an economic force rather than a scientific milestone. He has rejected the idea of a race, instead prioritising controllable and reliable AI agents that operate under human oversight.

Suleyman has argued that governance, not raw capability, is the central challenge. He has emphasised containment, liability frameworks, and certified agents, reflecting wider tensions between rapid deployment and long-term scientific ambition as AI systems grow more influential.
