Italy fines Apple €98 million over App Store competition breach

Apple has been fined €98 million by Italy’s competition authority after regulators concluded that its App Tracking Transparency framework distorted competition in the app store market.

Authorities stated that the policy strengthened Apple’s dominant position while limiting how third-party developers collect advertising data.

The investigation found that developers were required to request consent multiple times for the same data processing purposes, creating friction that disproportionately affected competitors.

Regulators in Italy argued that equivalent privacy protections could have been achieved through a single consent mechanism instead of duplicated prompts.

According to the Italian authority, the rules were imposed unilaterally across the App Store ecosystem and harmed commercial partners reliant on targeted advertising. The watchdog also questioned whether the policy was proportionate from a data protection perspective under EU law.

Apple rejected the findings and confirmed plans to appeal, stating that App Tracking Transparency prioritises user privacy over the interests of ad technology firms.

The decision follows similar penalties and warnings issued in France and Germany, reinforcing broader European scrutiny of platform governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN report highlights AI opportunities for small businesses

AI is increasingly helping entrepreneurs in developing countries launch, manage, and grow their businesses, according to a new UNCTAD report. Start-ups and small businesses are using AI for marketing, customer service, logistics, finance, and product design.

Large language models are enabling smaller firms to adopt AI quickly and affordably, but adoption remains uneven. Many entrepreneurs struggle to see AI’s business value, and limited skills and talent slow adoption, especially in smaller firms.

Experts emphasise that supportive ecosystems, clear governance, and skills development are essential for meaningful AI integration.

Access to affordable technology and finance also plays a crucial role. Open-source platforms, collaborations, and phased adoption, from off-the-shelf tools to in-house capabilities, help firms experiment, learn, and grow while managing risk.

UNCTAD’s report highlights the importance of policy frameworks to foster AI adoption, recommending that governments provide clear, practical rules, accessible infrastructure, and targeted training.

Entrepreneurship support centres in several countries are already helping firms identify use cases and build hands-on AI skills, bridging the gap between strategy and practical implementation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia seeks China market access as US eases AI chip restrictions

The US tech giant Nvidia has largely remained shut out of China’s market for advanced AI chips, as US export controls have restricted sales due to national security concerns.

High-performance processors such as the H100 and H200 were barred, forcing Nvidia to develop downgraded alternatives tailored for Chinese customers instead of flagship products.

A shift in policy emerged after President Donald Trump announced that H200 chip sales to China could proceed following a licensing review and a proposed 25% fee. The decision reopened a limited pathway for exporting advanced US AI hardware, subject to regulatory approval in both Washington and Beijing.

If authorised, the H200 shipments would represent the most powerful US-made AI chips permitted in China since restrictions were introduced. The move could help Nvidia monetise existing H200 inventory while easing pressure on its China business as it transitions towards newer Blackwell chips.

Strategically, the decision may slow China’s push for AI chip self-sufficiency, as domestic alternatives still lag behind NVIDIA’s technology.

At the same time, the policy highlights a transactional approach to export controls, raising uncertainty over long-term US efforts to contain China’s technological rise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ripple transforms cross-border payments with XRP

Cross-border payments have long been hampered by delays and high costs, and blockchain-based systems could transform established networks such as SWIFT. Ripple, launched by Ripple Labs in 2012, enables faster, more transparent, and cost-effective international transfers.

RippleNet, the company’s unified payment network, connects multiple banks via the Interledger standard, removing intermediaries and enabling near-instant settlement. XRP, Ripple’s digital token, acts as a bridge currency to provide liquidity, though transactions can occur without it.

XRP boasts low fees, high scalability, and settlement times of just a few seconds.
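As a rough illustration of those claims, the public XRP Ledger can be queried directly for its current transaction cost and the age of the latest validated ledger. The sketch below is a minimal Python example, assuming the `requests` library and the public rippled JSON-RPC endpoint at s1.ripple.com; the endpoint and response fields follow the published rippled API but may change.

```python
# Minimal sketch (illustrative, not Ripple's own tooling): query a public
# rippled JSON-RPC server for the current base transaction cost and the age
# of the latest validated ledger. Endpoint and field names are assumptions
# based on the public rippled API and may change.
import requests

RIPPLED_URL = "https://s1.ripple.com:51234/"  # public rippled JSON-RPC endpoint


def rippled_request(method: str, params: dict | None = None) -> dict:
    """Send one JSON-RPC request to a rippled server and return its result."""
    payload = {"method": method, "params": [params or {}]}
    response = requests.post(RIPPLED_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["result"]


if __name__ == "__main__":
    # The "fee" command reports transaction costs in drops (1 XRP = 1,000,000 drops).
    fee_info = rippled_request("fee")
    print("Base transaction cost (drops):", fee_info["drops"]["base_fee"])

    # "server_info" includes the latest validated ledger; ledgers typically
    # close every few seconds, which underpins the settlement-time claim.
    server_info = rippled_request("server_info")
    print("Seconds since last validated ledger:",
          server_info["info"]["validated_ledger"]["age"])
```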

Since its creation, Ripple has evolved from individual protocols to the unified RippleNet platform, supported by the XRPL Foundation. Unlike Bitcoin, XRP was pre-mined and relies on a select group of validators, giving it a different governance model and degree of centralisation.

The network also supports broader financial applications, including central bank digital currencies, DeFi, and NFTs.

Despite its potential, investing in Ripple carries risks typical of crypto assets, including volatility, lack of regulation, and complexity. Investors are advised to research thoroughly and limit high-risk exposure to ensure a diversified portfolio.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Civil servants and AI will work together in 2050

Public administrations worldwide are facing unprecedented change as AI reshapes automation, procurement, and decision-making. Governments must stay flexible, open, and resilient, preparing for multiple futures with foresight, continuous learning, and adaptability.

During World Futures Day, experts from the SPARK-AI Alliance and representatives from governments, academia, and the private sector explored four potential scenarios for public service in 2050.

Scenarios ranged from human-centred administrations that reinforce trust, to algorithmic bureaucracies focused on oversight, agentic administrations with semi-autonomous AI actors, and data-eroded futures that require renewed governance of poor-quality data.

Key insights highlighted the growing importance of anticipatory capacity, positioning AI as a ‘co-worker’ rather than a replacement, and emphasising the need to safeguard public trust.

Civil servants will increasingly focus on ethical reasoning, interpretation of automated processes, and cross-disciplinary collaboration, supported by robust accountability and transparent data governance.

The SPARK-AI Alliance has launched a Working Group on the Future of Work in the Public Sector to help governments anticipate and prepare for change. Its focus will be on building resilient public administrations, evolving civil-service roles, and maintaining trust in AI-enabled governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK report quantifies rapid advances in frontier AI capabilities

For the first time, the UK has published a detailed, evidence-based assessment of frontier AI capabilities. The Frontier AI Trends Report draws on two years of structured testing across areas including cybersecurity, software engineering, chemistry, and biology.

The findings show rapid progress in technical performance. Success rates on apprentice-level cyber tasks rose from under 9% in 2023 to around 50% in 2025, while models also completed expert-level cyber challenges previously requiring a decade of experience.

Safeguards designed to limit misuse are also improving, according to the report. Red-team testing found that the time required to identify universal jailbreaks increased from minutes to several hours between model generations, representing an estimated forty-fold improvement in resistance.

The analysis highlights advances beyond cybersecurity. AI systems now complete hour-long software engineering tasks more than 40% of the time, while biology and chemistry models outperform PhD-level researchers in controlled knowledge tests and support non-experts in laboratory-style workflows.

While the report avoids policy recommendations, UK officials say it strengthens transparency around advanced AI systems. The government plans to continue investing in evaluation science through the AI Security Institute, supporting independent testing and international collaboration.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Strong AI memory demand boosts Micron outlook into 2026

Micron Technology reported record first-quarter revenue for fiscal 2026, supported by strong pricing, a favourable product mix and operating leverage. The company said tight supply conditions and robust AI-related demand are expected to continue into 2026.

The Boise-based chipmaker generated $13.64 billion in quarterly revenue, led by record sales across DRAM, NAND, high-bandwidth memory and data centres. Chief executive Sanjay Mehrotra said structural shifts are driving rising demand for advanced memory in AI workloads.

Margins expanded sharply, setting Micron apart from peers such as Broadcom and Oracle, which reported margin pressure in recent earnings. Chief financial officer Mark Murphy said gross margin is expected to rise further in the second quarter, supported by higher prices, lower costs and a favourable revenue mix.

Analysts highlighted improving fundamentals and longer-term visibility. Baird said DRAM and NAND pricing could rise sequentially as Micron finalises long-term supply agreements, while capital expenditure plans for fiscal 2026 were viewed as manageable and focused on expanding high-margin HBM capacity.

Retail sentiment also turned strongly positive following the earnings release, with Micron shares jumping around 8 per cent in after-hours trading. Micron is on track to finish the year as the best-performing semiconductor stock in the S&P 500, reinforcing confidence in its AI-driven growth trajectory.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Amazon considers $10 billion investment in OpenAI

Amazon is reportedly considering a $10 billion investment in OpenAI, highlighting its growing focus on the generative AI market. The investment follows OpenAI’s October restructuring, giving it more flexibility to raise funds and form new tech partnerships.

OpenAI has recently secured major infrastructure agreements, including a $38 billion cloud computing deal with Amazon Web Services (AWS). Deals with Nvidia, AMD, and Broadcom boost OpenAI’s access to computing power for its AI development.

Amazon has invested $8 billion in Anthropic and continues developing AI hardware through AWS’s Inferentia and Trainium chips. The move into OpenAI reflects Amazon’s strategy to expand its influence across the AI sector.

OpenAI’s prior exclusivity with Microsoft, tied to its $13 billion investment, has ended, enabling it to pursue new partnerships. The combination of fresh funding, cloud capacity, and hardware support positions OpenAI for continued growth in the AI industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society that contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instacart faces FTC scrutiny over AI pricing tool

US regulators are examining Instacart’s use of AI in grocery pricing, after reports that shoppers were shown different prices for identical items. Sources told Reuters the Federal Trade Commission has opened a probe into the company’s AI-driven pricing practices.

The FTC has issued a civil investigative demand seeking information about Instacart’s Eversight tool, which allows retailers to test different prices using AI. The agency said it does not comment on ongoing investigations, but expressed concern over reports of alleged pricing behaviour.

Scrutiny follows a study of 437 shoppers across four US cities, which found average price differences of 7 percent for the same grocery lists at the same stores. Some shoppers reportedly paid up to 23 percent more than others for identical items, according to the researchers.

Instacart said the pricing experiments were randomised and not based on personal data or individual behaviour. The company maintains that retailers, not Instacart, set prices on the platform, with the exception of Target, where prices are sourced externally and adjusted to cover costs.
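To make the mechanism concrete, a randomised price test of the kind described typically assigns each shopping session to a price variant independently of who the shopper is. The Python sketch below is a generic illustration of such an assignment, not Instacart’s or Eversight’s actual system; the price points, 50/50 split, and hashing scheme are hypothetical.

```python
# Generic illustration of a randomised price test. This is NOT Instacart's or
# Eversight's actual system; price points, split, and hashing are hypothetical.
import hashlib

PRICE_VARIANTS = {"control": 2.99, "test": 3.29}  # hypothetical price points


def assign_variant(session_id: str) -> str:
    """Deterministically assign a session to a variant with a 50/50 split."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "test"


def displayed_price(session_id: str) -> float:
    """Price shown to this session; depends only on the random assignment."""
    return PRICE_VARIANTS[assign_variant(session_id)]


if __name__ == "__main__":
    for sid in ["session-001", "session-002", "session-003"]:
        print(sid, assign_variant(sid), displayed_price(sid))
```

Because assignment in such a test depends only on a session identifier rather than personal data, two shoppers can see different prices for the same basket, which is the pattern the cited study measured.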

The investigation comes amid wider regulatory focus on technology-driven pricing as living costs remain politically sensitive in the United States. Lawmakers have urged greater transparency, while the FTC continues broader inquiries into AI tools used to analyse consumer data and set prices.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!