European ombudsman opens probe into AI use in EU funding reviews

A formal inquiry has been opened into how AI is used in the evaluation of EU funding proposals, marking the first investigation of its kind at the institutional level.

European Ombudsman Teresa Anjinho initiated the probe following allegations that external experts relied on AI systems when assessing applications.

Concerns emerged after a Polish company that submitted its bid before the November 2023 deadline failed to secure support from the European Innovation Council Accelerator programme. The complainant alleged that third-party AI use compromised fairness and influenced the assessment outcome.

Requests have been made for clearer governance standards, including explicit disclosure when AI systems are used in proposal reviews. Fears also emerged that sensitive commercial data could be exposed through external AI platforms.

Although no grounds were found to reopen the individual case, a systemic probe into AI transparency and safeguards was launched. Document inspections are scheduled through March, followed by institutional meetings in April to determine whether regulatory or procedural changes are warranted.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI safety leader quits Anthropic with global risk warning

A prominent AI safety researcher has resigned from Anthropic, issuing a stark warning about global technological and societal risks. Mrinank Sharma announced his departure in a public letter, citing concerns spanning AI development, bioweapons, and broader geopolitical instability.

Sharma led AI safeguards research, including model alignment, bioterrorism risks, and human-AI behavioural dynamics. Despite praising his tenure, he said ethical tensions and pressures hindered the pursuit of long-term safety priorities.

His exit comes amid wider turbulence across the AI sector. Another researcher recently left OpenAI, raising concerns over the integration of advertising into chatbot environments and the psychological implications of increasingly human-like AI interactions.

Anthropic, founded by former OpenAI staff, balances commercial AI deployment with safety and risk mitigation. Sharma plans to return to the UK to study poetry, stepping back from AI research amid global uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Young voices seek critical approach to AI in classrooms

In Houston, more than 200 students from across the US gathered to discuss the future of AI in schools. The event, organised by the Close Up Foundation and Stanford University’s Deliberative Democracy Lab, brought together participants from 39 schools in 19 states.

Students debated whether AI tools such as ChatGPT and Gemini support or undermine learning. Many argued that schools are introducing powerful systems before pupils develop core critical thinking skills.

Participants did not call for a total ban or full embrace of AI. Instead, they urged schools to delay exposure for younger pupils and introduce clearer classroom policies that distinguish between support and substitution.

After returning to Honolulu, a student from ʻIolani School said Hawaiʻi schools should involve students directly in AI policy decisions. He argued that structured dialogue, in Honolulu and beyond, can help schools balance innovation with cognitive development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Illicit trafficking payments rise across blockchain channels

Cryptocurrency flows linked to suspected human trafficking services surged sharply in 2025, with transaction volumes rising 85% year-on-year, according to new blockchain analysis.

Investigators say the financial activity reflects the rapid expansion of digitally enabled exploitation networks operating across borders.

Growth is linked to Southeast Asia-based illicit networks, including scam compounds, gambling platforms, and laundering groups operating via encrypted messaging channels.

Analysts identified multiple trafficking service categories, each with distinct transaction structures and payment preferences.

Stablecoins became the dominant payment method, especially for escort networks, thanks to their price stability and ease of conversion. Larger transfers and structured pricing models indicate increasingly professionalised operations supported by organised financial infrastructure.

Despite the scale of the activity, blockchain transparency continues to provide enforcement advantages. Transaction tracing has aided investigations, shutdowns, and arrests, strengthening digital forensics in combating trafficking-linked financial crime.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers blanket crypto ban targeting Russia

European Union officials are weighing a sweeping prohibition on cryptocurrency transactions involving Russia, signalling a more rigid sanctions posture against alternative financial networks.

Policymakers argue that the rapid emergence of replacement crypto service providers has undermined existing restrictions.

Internal European Commission discussions indicate concern that digital assets are facilitating trade flows supporting Russia’s war economy. Authorities say platform-specific sanctions are ineffective, as new entities quickly replicate restricted services.

Proposals under review extend beyond private crypto platforms. Measures could include sanctions on additional Russian banks, restrictions linked to the digital ruble, and scrutiny of payments infrastructure tied to sanctioned trade channels.

Consensus remains uncertain, with some member states warning that a blanket ban could shift activity to non-European markets. Parallel trade controls targeting dual-use exports to Kyrgyzstan are also being considered as part of broader anti-circumvention efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia recasts Vision 2030 with new priorities

Saudi Arabia is steering the next phase of Vision 2030 toward technology, digital infrastructure and advanced industry, rather than relying on large urban construction schemes.

Officials highlight the need to support sectors that can accelerate innovation, strengthen data capabilities and expand the kingdom’s role in global tech development.

The move aligns with ongoing efforts to diversify the economy and build long-term competitiveness in areas such as smart manufacturing, logistics technology and clean energy systems.

Recent adjustments involve scaling back or rescheduling some giga projects so that investment can be channelled toward initiatives with strong digital and technological potential.

Elements of the NEOM programme have been revised, while funding attention is shifting to areas that enable automation, renewable technologies and high-value services.

Saudi Arabia aims to position Riyadh as a regional hub for research, emerging technologies and advanced industries. Officials stress that Vision 2030 remains active, yet its next stage will focus on projects that can accelerate technological adoption and strengthen economic resilience.

The Public Investment Fund continues to guide investment toward ecosystems that support innovation, including clean energy, digital infrastructure and international technology partnerships.

The approach reflects earlier recommendations to align economic planning with evolving skills, future labour market needs and opportunities in fast-growing sectors.

Analysts note that the revised direction prioritises sustainable growth by expanding the kingdom’s participation in global technological development instead of relying mainly on construction-driven momentum.

Social and regulatory reforms connected to digital transformation also remain part of the Vision 2030 agenda. Investments in training, digital literacy and workforce development are intended to ensure that young people can participate fully in the technology sectors the kingdom is prioritising.

With such a shift, the government seeks to balance long-term economic diversification with practical technological goals that reinforce innovation and strengthen the country’s competitive position.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI system forecasts mobility after joint replacement

AI is being deployed to forecast how well patients regain mobility after hip replacement surgery, offering new precision in orthopaedic recovery planning.

Researchers at the Karlsruhe Institute of Technology developed a model capable of analysing complex gait biomechanics to assess post-operative walking outcomes.

Hip osteoarthritis remains one of the leading drivers of joint replacement procedures, with around 200,000 artificial hips implanted in Germany in 2024 alone. Recovery varies widely, driving research into tools predicting post-surgery mobility and pain relief.

Movement data collected before and after operations were analysed using AI as part of a joint project with Universitätsmedizin Frankfurt.

The system examined biomechanical indicators, including joint angles and loading patterns, enabling researchers to classify patients into three distinct gait recovery groups.

Results show the model can predict who regains near-normal walking and who needs intensive rehabilitation. Researchers say the framework could guide personalised therapy and expand to other joints and musculoskeletal disorders.
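Purely as an illustration of the kind of pipeline described above, the minimal sketch below shows how pre-operative gait features such as joint angles and loading patterns might feed a classifier that sorts patients into recovery groups. The feature values, labels and model choice here are hypothetical; the article does not describe the Karlsruhe model's actual architecture or training data.

```python
# Illustrative sketch only: synthetic gait features and labels,
# not the actual Karlsruhe Institute of Technology model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: hypothetical pre-operative gait features for one patient
# (e.g. mean hip flexion angle, peak joint loading, stride symmetry).
X = rng.normal(size=(300, 3))

# Three hypothetical recovery groups:
# 0 = near-normal walking, 1 = moderate recovery, 2 = needs intensive rehab.
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predict which recovery group a new patient's gait profile falls into.
print(model.predict(X_test[:5]))
print("accuracy on held-out synthetic data:", model.score(X_test, y_test))
```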

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI model achieves accurate detection of placenta accreta spectrum in high-risk pregnancies

A new AI model has shown strong potential for detecting placenta accreta spectrum, a dangerous condition that often goes undiagnosed during pregnancy.

Researchers presented the findings at the annual meeting of the Society for Maternal-Fetal Medicine, highlighting that traditional screening identifies only about half of all cases.

Placenta accreta spectrum arises when the placenta attaches abnormally to the uterine wall, often after previous surgical procedures such as caesarean delivery.

The condition can trigger severe haemorrhage, organ failure, and death, yet many pregnancies with elevated risk receive inconclusive or incorrect assessments through standard ultrasound examinations.

The study involved a retrospective review by specialists at Baylor College of Medicine, who analysed 2D obstetric ultrasound images from 113 high-risk pregnancies managed at Texas Children’s Hospital between 2018 and 2025.

The AI system detected every confirmed case of placenta accreta spectrum, produced two false positives, and generated no false negatives.
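To spell out the screening arithmetic: with zero false negatives, sensitivity is 100% by definition, while specificity depends on how many of the 113 pregnancies were truly negative, a figure the findings reported here do not give. The short sketch below uses an assumed case split purely to show the calculation.

```python
# Hypothetical worked example: the number of confirmed PAS cases among the
# 113 pregnancies is not reported, so true_positives = 40 is an assumption.
total = 113
true_positives = 40          # assumed count of confirmed cases, all detected
false_negatives = 0          # reported: no missed cases
false_positives = 2          # reported: two false alarms
true_negatives = total - true_positives - false_positives  # 71 under this assumption

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity: {sensitivity:.1%}")  # 100.0% regardless of the assumed split
print(f"specificity: {specificity:.1%}")  # ~97.3% under the assumed split
```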

Researchers believe such technology could significantly improve early identification and clinical preparation.

They argue that AI screening, when used in addition to current methods, may reduce maternal complications and support safer outcomes for patients facing this increasingly common condition.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Enterprise AI adoption stalls despite heavy investment

AI has moved from experimentation to expectation, yet many enterprise AI rollouts continue to stall. Boards demand returns, leaders approve tools and governance, but day-to-day workarounds spread, risk grows, and promised value fails to materialise.

The problem rarely lies with the technology itself. Adoption breaks down when AI is treated as an IT deployment rather than an internal product, leaving employees with approved tools but no clear value proposition, limited capacity, and governance that prioritises control over learning.

A global B2B services firm experienced this pattern during an eight-month enterprise AI rollout across commercial teams. Usage dashboards showed activity, but approved platforms failed to align with actual workflows, leading teams to comply superficially or rely on external tools under delivery pressure.

The experience exposed what some leaders describe as the ‘mandate trap’, where adoption is ordered from the top while usability problems fall to middle managers. Hesitation reflected workflow friction and risk rather than resistance, revealing an internal product–market fit issue.

Progress followed when leaders paused broad deployment and refocused on outcomes, workflow redesign, and protected learning time. Narrow pilots and employee-led enterprise AI testing helped scale only tools that reduced friction and earned trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LegalOn launches agentic AI for in-house legal teams

LegalOn Technologies has introduced five agentic AI tools aimed at transforming in-house legal operations. The company says the agents complete specialised contract and workflow tasks in seconds within its secure platform.

Unlike conventional AI assistants that respond to prompts, the new system is designed to plan and execute multi-step workflows independently, tailoring outputs to each organisation’s templates and standards while keeping lawyers informed of every action.

The suite includes tools for generating playbooks, processing legal intake requests and translating contracts across dozens of languages. Additional agents triage high-volume agreements and produce review-ready drafts from clause libraries and deal inputs.

Founded by two corporate lawyers in Japan, LegalOn now operates across Asia, Europe and North America. Backed by $200m in funding, it serves more than 8,000 clients globally, including Fortune 500 companies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!