AI safety leader quits Anthropic with global risk warning

A prominent AI safety researcher has resigned from Anthropic, issuing a stark warning about global technological and societal risks. Mrinank Sharma announced his departure in a public letter, citing concerns spanning AI development, bioweapons, and broader geopolitical instability.

Sharma led AI safeguards research, including model alignment, bioterrorism risks, and human-AI behavioural dynamics. While praising his time at the company, he said ethical tensions and pressures hindered the pursuit of long-term safety priorities.

His exit comes amid wider turbulence across the AI sector. Another researcher recently left OpenAI, raising concerns over the integration of advertising into chatbot environments and the psychological implications of increasingly human-like AI interactions.

Anthropic, founded by former OpenAI staff, balances commercial AI deployment with safety and risk mitigation. Sharma plans to return to the UK to study poetry, stepping back from AI research amid global uncertainty.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT starts limited advertising rollout in the US

OpenAI has begun rolling out advertising inside ChatGPT, marking a shift for a service that has largely operated without traditional ads since its launch in 2022.

OpenAI said it is testing ads for logged-in Free and Go users in the United States, while paid tiers remain ad-free. The company said the test aims to fund broader access to advanced AI tools.

Ads appear outside ChatGPT responses and are clearly labelled as sponsored content, with no influence on answers. Placement is based on broad topics, with restrictions around sensitive areas such as health or politics.

Free users can opt out of ads by upgrading to a paid plan or by accepting fewer daily free messages in exchange for an ad-free experience. Users who allow ads can also opt out of ad personalisation, prevent past chats from being used for ad selection, and delete all ad-related history and data.

The rollout follows months of speculation after screenshots suggested that ads appeared in ChatGPT responses, which OpenAI described as suggestions. Rivals, including Anthropic, have contrasted their approach, promoting Claude as free from in-chat advertising.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia recasts Vision 2030 with new priorities

Saudi Arabia is steering the new phase of Vision 2030 toward technology, digital infrastructure and advanced industry instead of relying on large urban construction schemes.

Officials highlight the need to support sectors that can accelerate innovation, strengthen data capabilities and expand the kingdom’s role in global tech development.

The move aligns with ongoing efforts to diversify the economy and build long-term competitiveness in areas such as smart manufacturing, logistics technology and clean energy systems.

Recent adjustments involve scaling back or rescheduling some giga-projects so that investment can be channelled toward initiatives with strong digital and technological potential.

Elements of the NEOM programme have been revised, while funding is shifting toward areas that enable automation, renewable technologies and high-value services.

Saudi Arabia aims to position Riyadh as a regional hub for research, emerging technologies and advanced industries. Officials stress that Vision 2030 remains active, yet its next stage will focus on projects that can accelerate technological adoption and strengthen economic resilience.

The Public Investment Fund continues to guide investment toward ecosystems that support innovation, including clean energy, digital infrastructure and international technology partnerships.

The approach reflects earlier recommendations to align economic planning with evolving skills, future labour market needs and opportunities in fast-growing sectors.

Analysts note that the revised direction prioritises sustainable growth by expanding the kingdom’s participation in global technological development instead of relying mainly on construction-driven momentum.

Social and regulatory reforms connected to digital transformation also remain part of the Vision 2030 agenda. Investments in training, digital literacy and workforce development are intended to ensure that young people can participate fully in the technology sectors the kingdom is prioritising.

Through this shift, the government seeks to balance long-term economic diversification with practical technological goals that reinforce innovation and strengthen the country’s competitive position.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI system forecasts mobility after joint replacement

AI is being deployed to forecast how well patients regain mobility after hip replacement surgery, offering new precision in orthopaedic recovery planning.

Researchers at the Karlsruhe Institute of Technology developed a model capable of analysing complex gait biomechanics to assess post-operative walking outcomes.

Hip osteoarthritis remains one of the leading drivers of joint replacement procedures, with around 200,000 artificial hips implanted in Germany in 2024 alone. Recovery varies widely, driving research into tools predicting post-surgery mobility and pain relief.

Movement data collected before and after operations were analysed using AI as part of a joint project with the Universitätsmedizin Frankfurt.

The system examined biomechanical indicators, including joint angles and loading patterns, enabling researchers to classify patients into three distinct gait recovery groups.

Results show the model can predict who regains near-normal walking and who needs intensive rehabilitation. Researchers say the framework could guide personalised therapy and expand to other joints and musculoskeletal disorders.
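
For readers curious what such a pipeline might look like, the minimal sketch below trains a generic classifier to sort synthetic gait features into three recovery groups. It assumes a standard scikit-learn workflow; the feature names, model choice and data are all illustrative, as the article does not describe KIT's actual implementation.

```python
# Illustrative sketch only: the study's actual model is not described here,
# so this uses a generic scikit-learn classifier on synthetic gait features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the biomechanical indicators named in the article
# (joint angles, loading patterns); a real study would use measured gait data.
n_patients = 300
X = rng.normal(size=(n_patients, 6))  # e.g. hip/knee angles, peak loads, cadence
# Three recovery groups, mirroring the study's grouping: 0 = near-normal gait,
# 1 = intermediate recovery, 2 = needs intensive rehabilitation.
y = rng.integers(0, 3, size=n_patients)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")  # ~0.33 here, since labels are random
```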

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI model achieves accurate detection of placenta accreta spectrum in high-risk pregnancies

A new AI model has shown strong potential for detecting placenta accreta spectrum, a dangerous condition that often goes undiagnosed during pregnancy.

Researchers presented the findings at the annual meeting of the Society for Maternal-Fetal Medicine, highlighting that traditional screening identifies only about half of all cases.

Placenta accreta spectrum arises when the placenta attaches abnormally to the uterine wall, often after previous surgical procedures such as caesarean delivery.

The condition can trigger severe haemorrhage, organ failure, and death, yet many pregnancies with elevated risk receive inconclusive or incorrect assessments through standard ultrasound examinations.

The study involved a retrospective review by specialists at Baylor College of Medicine, who analysed 2D obstetric ultrasound images from 113 high-risk pregnancies managed at Texas Children’s Hospital between 2018 and 2025.

The AI system detected every confirmed case of placenta accreta spectrum, produced two false positives, and generated no false negatives.
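
Those counts translate directly into standard screening metrics, as the short calculation below makes explicit. Note that the summary does not report how many of the 113 pregnancies were confirmed cases, so n_positive is a placeholder assumption, not a figure from the study.

```python
# Back-of-envelope screening metrics from the counts reported above.
n_total = 113          # high-risk pregnancies reviewed (reported)
n_positive = 30        # ASSUMED number of confirmed cases; not reported
false_positives = 2    # reported
false_negatives = 0    # reported

true_positives = n_positive - false_negatives
true_negatives = n_total - n_positive - false_positives

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")   # 100% regardless of the assumed case count
print(f"Specificity: {specificity:.1%}")   # ~97.6% under the assumption above
```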

Researchers believe such technology could significantly improve early identification and clinical preparation.

They argue that AI screening, when used in addition to current methods, may reduce maternal complications and support safer outcomes for patients facing this increasingly common condition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Enterprise AI adoption stalls despite heavy investment

AI has moved from experimentation to expectation, yet many enterprise AI rollouts continue to stall. Boards demand returns, leaders approve tools and governance, but day-to-day workarounds spread, risk grows, and promised value fails to materialise.

The problem rarely lies with the technology itself. Adoption breaks down when AI is treated as an IT deployment rather than an internal product, leaving employees with approved tools but no clear value proposition, limited capacity, and governance that prioritises control over learning.

A global B2B services firm experienced this pattern during an eight-month enterprise AI rollout across commercial teams. Usage dashboards showed activity, but approved platforms failed to align with actual workflows, leading teams to comply superficially or rely on external tools under delivery pressure.

The experience exposed what some leaders describe as the ‘mandate trap’, where adoption is ordered from the top while usability problems fall to middle managers. Hesitation reflected workflow friction and risk rather than resistance, revealing an internal product–market fit issue.

Progress followed when leaders paused broad deployment and refocused on outcomes, workflow redesign, and protected learning time. Narrow pilots and employee-led enterprise AI testing helped scale only tools that reduced friction and earned trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LegalOn launches agentic AI for in-house legal teams

LegalOn Technologies has introduced five agentic AI tools aimed at transforming in-house legal operations. The company says the agents complete specialised contract and workflow tasks in seconds within its secure platform.

Unlike conventional AI assistants that respond to prompts, the new system is designed to plan and execute multi-step workflows independently, tailoring outputs to each organisation’s templates and standards while keeping lawyers informed of every action.
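
The plan-and-execute pattern described here can be illustrated in a few lines. The sketch below is a generic agent loop with a visible action log, not LegalOn's implementation; every class and step name is invented for the example.

```python
# Generic plan-then-execute agent loop with an auditable action log.
# All names are invented for illustration; this is not LegalOn's system.
from dataclasses import dataclass, field

@dataclass
class ContractAgent:
    log: list = field(default_factory=list)  # audit trail lawyers can inspect

    def plan(self, request: str) -> list:
        # A production agent would ask an LLM to decompose the request;
        # the steps are hard-coded here for illustration.
        return ["extract clauses", "compare against playbook", "draft redlines"]

    def execute(self, request: str) -> list:
        results = []
        for step in self.plan(request):
            self.log.append(f"running: {step}")  # every action is recorded
            results.append(f"{step}: done")      # placeholder for a real tool call
        return results

agent = ContractAgent()
agent.execute("review NDA against company standards")
print("\n".join(agent.log))
```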

The suite includes tools for generating playbooks, processing legal intake requests and translating contracts across dozens of languages. Additional agents triage high-volume agreements and produce review-ready drafts from clause libraries and deal inputs.

Founded by two corporate lawyers in Japan, LegalOn now operates across Asia, Europe and North America. Backed by $200m in funding, it serves more than 8,000 clients globally, including Fortune 500 companies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI governance struggles to match rapid adoption

Accelerating AI adoption is exposing clear weaknesses in corporate AI governance. Research shows that while most organisations claim to have oversight processes, only a small minority describe them as mature.

Rapid rollouts across marketing, operations and manufacturing have outpaced safeguards designed to manage bias, transparency and accountability, leaving many firms reacting rather than planning ahead.

Privacy rules, data sovereignty questions and vendor data-sharing risks are further complicating deployment decisions. Fragmented data governance and unclear ownership across departments often stall progress.

Experts argue that effective AI governance must operate as an ongoing, cross-functional model embedded into product lifecycles. Defined accountability, routine audits and clear escalation paths are increasingly viewed as essential for building trust and reducing long-term risk.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI pushes schools to rethink learning priorities

Students speaking at a major education technology conference said AI has exposed weaknesses in traditional learning, arguing that its heavy focus on memorisation is becoming less relevant in a world where digital tools provide instant answers.

AI helps learners summarise information and understand complex subjects more easily. Improved access to such tools has made studying more efficient and, in some cases, more engaging.

Teachers have responded by restricting technology use and returning to handwritten assignments. These measures aim to protect academic integrity but have created mixed reactions among students.

Participants supported guided AI use instead of banning it completely. Communication, collaboration and presentation skills were seen as more valuable and less vulnerable to AI shortcuts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

X given deadline by Brazil to curb Grok sexualised outputs

Brazil has ordered X to immediately stop its chatbot Grok from generating sexually explicit images, escalating international pressure on the platform over the misuse of generative AI tools.

The order, issued on 11 February by Brazil’s National Data Protection Agency and National Consumer Rights Bureau, requires X to prevent the creation of sexualised content involving children, adolescents, or non-consenting adults. Authorities gave the company five days to comply or face legal action and fines.

Officials in Brazil said X claimed to have removed thousands of posts and suspended hundreds of accounts after a January warning. However, follow-up checks found Grok users were still able to generate sexualised deepfakes. Regulators criticised the platform for a lack of transparency in its response.

The move follows growing scrutiny after Indonesia blocked Grok in January, while the UK and France signalled continued pressure. Concerns increased after Grok’s ‘spicy mode’ enabled users to generate explicit images using simple prompts.

According to the Centre for Countering Digital Hate, Grok generated millions of sexualised images within days. X and its parent company, xAI, announced measures in mid-January to restrict such outputs in certain jurisdictions, but regulators said it remains unclear where those safeguards apply.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!