Illicit trafficking payments rise across blockchain channels

Cryptocurrency flows linked to suspected human trafficking services surged sharply in 2025, with transaction volumes rising 85% year-on-year, according to new blockchain analysis.

Investigators say the financial activity reflects the rapid expansion of digitally enabled exploitation networks operating across borders.

Growth is linked to Southeast Asia-based illicit networks, including scam compounds, gambling platforms, and laundering groups operating via encrypted messaging channels.

Analysts identified multiple trafficking service categories, each with distinct transaction structures and payment preferences.

Stablecoins became the dominant payment method, especially for escort networks, owing to their price stability and ease of conversion. Larger transfers and structured pricing models indicate increasingly professionalised operations supported by organised financial infrastructure.

Despite the scale of the activity, blockchain transparency continues to provide enforcement advantages. Transaction tracing has aided investigations, shutdowns, and arrests, strengthening digital forensics in combating trafficking-linked financial crime.

EU considers blanket crypto ban targeting Russia

European Union officials are weighing a sweeping prohibition on cryptocurrency transactions involving Russia, signalling a more rigid sanctions posture against alternative financial networks.

Policymakers argue that the rapid emergence of replacement crypto service providers has undermined existing restrictions.

Internal European Commission discussions indicate concern that digital assets are facilitating trade flows supporting Russia’s war economy. Authorities say platform-specific sanctions are ineffective, as new entities quickly replicate restricted services.

Proposals under review extend beyond private crypto platforms. Measures could include sanctions on additional Russian banks, restrictions linked to the digital ruble, and scrutiny of payments infrastructure tied to sanctioned trade channels.

Consensus remains uncertain, with some member states warning that a blanket ban could shift activity to non-European markets. Parallel trade controls targeting dual-use exports to Kyrgyzstan are also being considered as part of broader anti-circumvention efforts.

ChatGPT starts limited advertising rollout in the US

OpenAI has begun rolling out advertising inside ChatGPT, marking a shift for a service that has largely operated without traditional ads since its launch in 2022.

OpenAI said it is testing ads for logged-in Free and Go users in the United States, while paid tiers remain ad-free. The company said the test aims to fund broader access to advanced AI tools.

Ads appear outside ChatGPT responses and are clearly labelled as sponsored content, with no influence on answers. Placement is based on broad topics, with restrictions around sensitive areas such as health or politics.

Free users can opt out of ads by upgrading to a paid plan or by accepting fewer daily free messages in exchange for an ad-free experience. Users who allow ads can also opt out of ad personalisation, prevent past chats from being used for ad selection, and delete all ad-related history and data.

The rollout follows months of speculation after screenshots suggested that ads appeared in ChatGPT responses, which OpenAI described as suggestions. Rivals, including Anthropic, have contrasted their approach, promoting Claude as free from in-chat advertising.

Saudi Arabia recasts Vision 2030 with new priorities

Saudi Arabia is steering the new phase of Vision 2030 toward technology, digital infrastructure and advanced industry, rather than relying on large urban construction schemes.

Officials highlight the need to support sectors that can accelerate innovation, strengthen data capabilities and expand the kingdom’s role in global tech development.

The move aligns with ongoing efforts to diversify the economy and build long-term competitiveness in areas such as smart manufacturing, logistics technology and clean energy systems.

Recent adjustments involve scaling back or rescheduling some giga projects so that investment can be channelled toward initiatives with strong digital and technological potential.

Elements of the NEOM programme have been revised, while funding attention is shifting to areas that enable automation, renewable technologies and high-value services.

Saudi Arabia aims to position Riyadh as a regional hub for research, emerging technologies and advanced industries. Officials stress that Vision 2030 remains active, yet its next stage will focus on projects that can accelerate technological adoption and strengthen economic resilience.

The Public Investment Fund continues to guide investment toward ecosystems that support innovation, including clean energy, digital infrastructure and international technology partnerships.

The approach reflects earlier recommendations to align economic planning with evolving skills, future labour market needs and opportunities in fast-growing sectors.

Analysts note that the revised direction prioritises sustainable growth by expanding the kingdom’s participation in global technological development instead of relying mainly on construction-driven momentum.

Social and regulatory reforms connected to digital transformation also remain part of the Vision 2030 agenda. Investments in training, digital literacy and workforce development are intended to ensure that young people can participate fully in the technology sectors the kingdom is prioritising.

Through this shift, the government seeks to balance long-term economic diversification with practical technological goals that reinforce innovation and strengthen the country’s competitive position.

New AI system forecasts mobility after joint replacement

AI is being deployed to forecast how well patients regain mobility after hip replacement surgery, offering new precision in orthopaedic recovery planning.

Researchers at the Karlsruhe Institute of Technology developed a model capable of analysing complex gait biomechanics to assess post-operative walking outcomes.

Hip osteoarthritis remains one of the leading drivers of joint replacement procedures, with around 200,000 artificial hips implanted in Germany in 2024 alone. Recovery varies widely, driving research into tools predicting post-surgery mobility and pain relief.

Movement data collected before and after operations were analysed using AI as part of a joint project with Universitätsmedizin Frankfurt.

The system examined biomechanical indicators, including joint angles and loading patterns, enabling researchers to classify patients into three distinct gait recovery groups.

Results show the model can predict who regains near-normal walking and who needs intensive rehabilitation. Researchers say the framework could guide personalised therapy and expand to other joints and musculoskeletal disorders.

Enterprise AI adoption stalls despite heavy investment

AI has moved from experimentation to expectation, yet many enterprise AI rollouts continue to stall. Boards demand returns, leaders approve tools and governance, but day-to-day workarounds spread, risk grows, and promised value fails to materialise.

The problem rarely lies with the technology itself. Adoption breaks down when AI is treated as an IT deployment rather than an internal product, leaving employees with approved tools but no clear value proposition, limited capacity, and governance that prioritises control over learning.

A global B2B services firm experienced this pattern during an eight-month enterprise AI rollout across commercial teams. Usage dashboards showed activity, but approved platforms failed to align with actual workflows, leading teams to comply superficially or rely on external tools under delivery pressure.

The experience exposed what some leaders describe as the ‘mandate trap’, where adoption is ordered from the top while usability problems fall to middle managers. Hesitation reflected workflow friction and risk rather than resistance, revealing an internal product–market fit issue.

Progress followed when leaders paused broad deployment and refocused on outcomes, workflow redesign, and protected learning time. Narrow pilots and employee-led testing ensured that only the tools that reduced friction and earned trust were scaled.

LegalOn launches agentic AI for in-house legal teams

LegalOn Technologies has introduced five agentic AI tools aimed at transforming in-house legal operations. The company says the agents complete specialised contract and workflow tasks in seconds within its secure platform.

Unlike conventional AI assistants that respond to prompts, the new system is designed to plan and execute multi-step workflows independently, tailoring outputs to each organisation’s templates and standards while keeping lawyers informed of every action.

The suite includes tools for generating playbooks, processing legal intake requests and translating contracts across dozens of languages. Additional agents triage high-volume agreements and produce review-ready drafts from clause libraries and deal inputs.

Founded by two corporate lawyers in Japan, LegalOn now operates across Asia, Europe and North America. Backed by $200m in funding, it serves more than 8,000 clients globally, including Fortune 500 companies.

AI governance struggles to match rapid adoption

Accelerating AI adoption is exposing clear weaknesses in corporate AI governance. Research shows that while most organisations claim to have oversight processes, only a small minority describe them as mature.

Rapid rollouts across marketing, operations and manufacturing have outpaced safeguards designed to manage bias, transparency and accountability, leaving many firms reacting rather than planning ahead.

Privacy rules, data sovereignty questions and vendor data-sharing risks are further complicating deployment decisions. Fragmented data governance and unclear ownership across departments often stall progress.

Experts argue that effective AI governance must operate as an ongoing, cross-functional model embedded into product lifecycles. Defined accountability, routine audits and clear escalation paths are increasingly viewed as essential for building trust and reducing long-term risk.

X given deadline by Brazil to curb Grok sexualised outputs

Brazil has ordered X to immediately stop its chatbot Grok from generating sexually explicit images, escalating international pressure on the platform over the misuse of generative AI tools.

The order, issued on 11 February by Brazil’s National Data Protection Agency and National Consumer Rights Bureau, requires X to prevent the creation of sexualised content involving children, adolescents, or non-consenting adults. Authorities gave the company five days to comply or face legal action and fines.

Officials in Brazil said X claimed to have removed thousands of posts and suspended hundreds of accounts after a January warning. However, follow-up checks found Grok users were still able to generate sexualised deepfakes. Regulators criticised the platform for a lack of transparency in its response.

The move follows growing scrutiny after Indonesia blocked Grok in January, while the UK and France signalled continued pressure. Concerns increased after Grok’s ‘spicy mode’ enabled users to generate explicit images using simple prompts.

According to the Centre for Countering Digital Hate, Grok generated millions of sexualised images within days. X and its parent company, xAI, announced measures in mid-January to restrict such outputs in certain jurisdictions, but regulators said it remains unclear where those safeguards apply.

Codex growth prompts OpenAI to expand access

OpenAI said its new Codex Mac app has surpassed one million downloads just over a week after launch, with overall Codex usage rising by 60% following the release of GPT-5.3-Codex.

The strong uptake has prompted OpenAI to extend free access to Codex for Free and Go users beyond the initial launch promotion. Sam Altman said usage limits for lower tiers may be tightened, but access would remain available so more users can experiment and build.

Separately, OpenAI released a YouTube video showcasing a redesigned Deep Research interface, introducing a full-screen report viewer that opens research outputs in a separate window from the chat interface.

The updated layout includes a table of contents for navigation, hyperlinks, and anchor tags within reports, and a dedicated source panel for verification. Users can also download reports as PDF or Word files, while new controls allow research scopes and sources to be adjusted during generation.

The Deep Research updates are available to Plus and Pro users, with broader access expected soon. OpenAI also confirmed the changes in ChatGPT release notes on 10 February and announced a smaller GPT-5.2 update focused on more measured responses.
