Ripple transforms cross-border payments with XRP

Cross-border payments have long struggled with delays and high costs, and legacy networks such as SWIFT could be transformed by blockchain-based systems. Ripple, launched by Ripple Labs in 2012, enables faster, more transparent and cost-effective international transfers.

RippleNet, the company’s unified payment network, connects participating banks via the Interledger protocol, reducing the number of intermediaries and enabling near-instant settlement. XRP, Ripple’s digital token, acts as a bridge currency to provide liquidity, though transactions can occur without it.
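The bridge-currency role described above can be illustrated with a minimal sketch. The function and exchange rates below are hypothetical illustration values, not Ripple's actual API or live market data; the sketch simply assumes a source-to-XRP-to-destination conversion:

```python
# Minimal sketch of XRP acting as a bridge currency between two fiat
# corridors. All rates here are hypothetical illustration values.

def bridge_transfer(amount_src: float,
                    src_to_xrp: float,
                    xrp_to_dst: float) -> float:
    """Convert a source-currency amount into the destination currency
    by routing through XRP: source fiat -> XRP -> destination fiat."""
    xrp_amount = amount_src * src_to_xrp   # buy XRP with the source fiat
    return xrp_amount * xrp_to_dst         # sell XRP for the destination fiat

# Example: send 1,000 USD to EUR via XRP (hypothetical rates).
usd_per_xrp = 0.50    # 1 USD buys 2 XRP
eur_per_xrp = 0.46    # 1 XRP sells for 0.46 EUR
received_eur = bridge_transfer(1000, 1 / usd_per_xrp, eur_per_xrp)
print(round(received_eur, 2))  # 920.0
```

Because both legs settle on the ledger within seconds, neither party needs to pre-fund an account in the other's currency, which is the sense in which the token provides liquidity.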

XRP boasts low fees, high scalability, and settlement times of just a few seconds.

Since its creation, Ripple has evolved from a set of separate protocols into the unified RippleNet platform, supported by the XRPL Foundation. Unlike Bitcoin, XRP was pre-mined and relies on a select group of validators, offering a different governance model and degree of centralisation.

The network also supports broader financial applications, including central bank digital currencies, DeFi, and NFTs.

Despite its potential, investing in Ripple carries risks typical of crypto assets, including volatility, lack of regulation, and complexity. Investors are advised to research thoroughly and limit high-risk exposure within a diversified portfolio.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Civil servants and AI will work together in 2050

Public administrations worldwide are facing unprecedented change as AI reshapes automation, procurement, and decision-making. Governments must stay flexible, open, and resilient, preparing for multiple futures with foresight, continuous learning, and adaptability.

During World Futures Day, experts from the SPARK-AI Alliance and representatives from governments, academia, and the private sector explored four potential scenarios for public service in 2050.

Scenarios ranged from human-centred administrations that reinforce trust, to algorithmic bureaucracies focused on oversight, agentic administrations with semi-autonomous AI actors, and data-eroded futures that require renewed governance of poor-quality data.

Key insights highlighted the growing importance of anticipatory capacity, the positioning of AI as a ‘co-worker’ rather than a replacement, and the need to safeguard public trust.

Civil servants will increasingly focus on ethical reasoning, interpretation of automated processes, and cross-disciplinary collaboration, supported by robust accountability and transparent data governance.

The SPARK-AI Alliance has launched a Working Group on the Future of Work in the Public Sector to help governments anticipate and prepare for change. Its focus will be on building resilient public administrations, evolving civil-service roles, and maintaining trust in AI-enabled governance.

UK report quantifies rapid advances in frontier AI capabilities

For the first time, the UK has published a detailed, evidence-based assessment of frontier AI capabilities. The Frontier AI Trends Report draws on two years of structured testing across areas including cybersecurity, software engineering, chemistry, and biology.

The findings show rapid progress in technical performance. Success rates on apprentice-level cyber tasks rose from under 9% in 2023 to around 50% in 2025, while models also completed expert-level cyber challenges that would previously have required a decade of experience.

Safeguards designed to limit misuse are also improving, according to the report. Red-team testing found that the time required to identify universal jailbreaks increased from minutes to several hours between model generations, representing an estimated forty-fold improvement in resistance.

The analysis highlights advances beyond cybersecurity. AI systems now complete hour-long software engineering tasks more than 40% of the time, while biology and chemistry models outperform PhD-level researchers in controlled knowledge tests and support non-experts in laboratory-style workflows.

While the report avoids policy recommendations, UK officials say it strengthens transparency around advanced AI systems. The government plans to continue investing in evaluation science through the AI Security Institute, supporting independent testing and international collaboration.

Strong AI memory demand boosts Micron outlook into 2026

Micron Technology reported record first-quarter revenue for fiscal 2026, supported by strong pricing, a favourable product mix and operating leverage. The company said tight supply conditions and robust AI-related demand are expected to continue into 2026.

The Boise-based chipmaker generated $13.64 billion in quarterly revenue, led by record sales across DRAM, NAND, high-bandwidth memory and data centres. Chief executive Sanjay Mehrotra said structural shifts are driving rising demand for advanced memory in AI workloads.

Margins expanded sharply, setting Micron apart from peers such as Broadcom and Oracle, which reported margin pressure in recent earnings. Chief financial officer Mark Murphy said gross margin is expected to rise further in the second quarter, supported by higher prices, lower costs and a favourable revenue mix.

Analysts highlighted improving fundamentals and longer-term visibility. Baird said DRAM and NAND pricing could rise sequentially as Micron finalises long-term supply agreements, while capital expenditure plans for fiscal 2026 were viewed as manageable and focused on expanding high-margin HBM capacity.

Retail sentiment also turned strongly positive following the earnings release, with Micron shares jumping around 8 per cent in after-hours trading. The stock is on track to finish the year as the best-performing semiconductor company in the S&P 500, reinforcing confidence in its AI-driven growth trajectory.

Amazon considers $10 billion investment in OpenAI

Amazon is reportedly considering a $10 billion investment in OpenAI, highlighting its growing focus on the generative AI market. The investment follows OpenAI’s October restructuring, giving it more flexibility to raise funds and form new tech partnerships.

OpenAI has recently secured major infrastructure agreements, including a $38 billion cloud computing deal with Amazon Web Services (AWS). Deals with Nvidia, AMD, and Broadcom boost OpenAI’s access to computing power for its AI development.

Amazon has invested $8 billion in Anthropic and continues developing AI hardware through AWS’s Inferentia and Trainium chips. The move into OpenAI reflects Amazon’s strategy to expand its influence across the AI sector.

OpenAI’s prior $13 billion Microsoft exclusivity has ended, enabling it to pursue new partnerships. The combination of fresh funding, cloud capacity, and hardware support positions OpenAI for continued growth in the AI industry.

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society, which contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Instacart faces FTC scrutiny over AI pricing tool

US regulators are examining Instacart’s use of AI in grocery pricing, after reports that shoppers were shown different prices for identical items. Sources told Reuters the Federal Trade Commission has opened a probe into the company’s AI-driven pricing practices.

The FTC has issued a civil investigative demand seeking information about Instacart’s Eversight tool, which allows retailers to test different prices using AI. The agency said it does not comment on ongoing investigations but has expressed concern over the reported pricing practices.

Scrutiny follows a study of 437 shoppers across four US cities, which found average price differences of 7% for the same grocery lists at the same stores. Some shoppers reportedly paid up to 23% more than others for identical items, according to the researchers.

Instacart said the pricing experiments were randomised and not based on personal data or individual behaviour. The company maintains that retailers, not Instacart, set prices on the platform, with the exception of Target, where prices are sourced externally and adjusted to cover costs.

The investigation comes amid wider regulatory focus on technology-driven pricing as living costs remain politically sensitive in the United States. Lawmakers have urged greater transparency, while the FTC continues broader inquiries into AI tools used to analyse consumer data and set prices.

Competing visions of AGI emerge at Google DeepMind and Microsoft

Two former DeepMind co-founders now leading rival AI labs have outlined sharply different visions for how artificial general intelligence (AGI) should be developed, highlighting a growing strategic divide at the top of the industry.

Google DeepMind chief executive Demis Hassabis has framed AGI as a scientific tool for tackling foundational challenges. These include fusion energy, advanced materials, and fundamental physics. He says current models still lack consistent reasoning across tasks.

Hassabis has pointed to weaknesses, such as so-called ‘jagged intelligence’. Systems can perform well on complex benchmarks but fail simple tasks. DeepMind is investing in physics-based evaluations and AlphaZero-inspired research to enable genuine knowledge discovery rather than data replication.

Microsoft AI chief executive Mustafa Suleyman has taken a more product-led stance, framing AGI as an economic force rather than a scientific milestone. He has rejected the idea of a race, instead prioritising controllable and reliable AI agents that operate under human oversight.

Suleyman has argued that governance, not raw capability, is the central challenge. He has emphasised containment, liability frameworks, and certified agents, reflecting wider tensions between rapid deployment and long-term scientific ambition as AI systems grow more influential.

Euro stablecoins pass $1 billion milestone

Euro-denominated stablecoins have surpassed $1 billion in circulating supply, according to industry data, a symbolic milestone that nonetheless remains marginal within Europe’s broader monetary system. The total represents just 0.006% of the eurozone’s estimated $15.5 trillion M2 money supply.
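As a quick sanity check on the share quoted above, the figure follows directly from the article's rounded totals (a back-of-the-envelope calculation, taking both numbers at face value):

```python
# Verify the quoted share of euro stablecoins in eurozone M2.
stablecoin_supply = 1e9      # ~$1 billion circulating supply
m2_money_supply = 15.5e12    # ~$15.5 trillion estimated eurozone M2

share_pct = stablecoin_supply / m2_money_supply * 100
print(f"{share_pct:.3f}%")   # 0.006%
```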

Issuance activity was limited during 2020 and 2021, before accelerating from late 2023 onwards. Growth has continued through 2024 and into 2025, signalling renewed interest in tokenised euro products despite their small overall footprint.

Ethereum still hosts the largest share of euro stablecoins, although issuance has expanded to other blockchain networks, including Solana, Polygon, Arbitrum, Base, Avalanche, and Stellar. The shift reflects a move toward multi-chain deployment, focusing on payments, settlement, and cross-border transfers.

Euro stablecoins remain far smaller than dollar-based tokens, which continue to dominate on-chain liquidity and settlement. The euro’s limited digital presence highlights growth potential if regulation and institutional adoption advance.

AI-generated ads face new disclosure rules in South Korea

South Korea will require advertisers to label AI-generated or AI-assisted advertising from early 2026, marking a shift in how the country governs AI in online commerce and consumer protection.

The measure responds to a sharp rise in deceptive ads using synthetic imagery and deepfakes, particularly in healthcare and financial promotions. Regulators say transparency at the point of content delivery is intended to reduce manipulation and restore consumer trust.

Authorities in South Korea acknowledge that mandatory labelling alone may not deter malicious actors, who can bypass rules through offshore hosting or rapidly changing content. Detection challenges and uneven enforcement capacity across platforms remain open concerns.

South Korea’s industry groups warn that the policy could have uneven economic effects within the country’s advertising ecosystem. Large platforms and agencies are expected to adapt quickly, while smaller firms may face higher compliance costs that slow experimentation with generative tools.

Policymakers argue the framework aligns with South Korea’s broader AI governance strategy, positioning the country between innovation-led and precautionary regulatory models as synthetic media becomes more widespread.
