Japan’s mobile competition law forces Apple to adjust iOS app payments

Apple has announced changes to how iOS apps are distributed and monetised in Japan, bringing its platform into compliance with the country’s Mobile Software Competition Act. The updates introduce new options for alternative app marketplaces and payment methods for digital goods.

Under the revised framework, developers in Japan can distribute apps outside the App Store and offer alternative payment processing alongside Apple's In-App Purchase system. Apple said the changes aim to meet legal requirements while limiting new risks linked to fraud, malware, and data misuse.

Safeguards include app notarisation, authorisation rules for alternative marketplaces, and baseline security checks for all iOS apps. Apple said the measures aim to protect users, including children, even as apps outside the App Store receive fewer protections.

Additional controls are being rolled out with iOS 26.2, including browser and search engine choice screens, new default app settings, and expanded developer APIs. Apple said it will continue engaging with Japanese regulators as the new framework takes effect.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital fraud declines in Russia after rollout of Cyberbez measures

Russia has reported a sharp decline in cyber fraud following the introduction of new regulatory measures in 2025. Officials say legislative action targeting telephone and online scams has begun to deliver measurable results.

State Secretary and Deputy Minister of Digital Development Ivan Lebedev told the State Duma that crimes covered by the first package of reforms, known as ‘Cyberbez 1.0’, have fallen by 40%, according to confirmed statistics.

Earlier this year, Lebedev said Russia records roughly 677,000 cases of phone and online fraud annually, with incidents rising by more than 35% since 2022, highlighting the scale of the challenge faced by authorities.

In April, President Vladimir Putin signed a law introducing a range of countermeasures, including a state information system to combat fraud, limits on unsolicited marketing calls, stricter SIM card issuance rules, and new compliance obligations for banks.

Further steps are now under discussion. Officials say a second package is being prepared, while a third set of initiatives was announced in December as Russia continues to strengthen its digital security framework.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK report quantifies rapid advances in frontier AI capabilities

For the first time, the UK has published a detailed, evidence-based assessment of frontier AI capabilities. The Frontier AI Trends Report draws on two years of structured testing across areas including cybersecurity, software engineering, chemistry, and biology.

The findings show rapid progress in technical performance. Success rates on apprentice-level cyber tasks rose from under 9% in 2023 to around 50% in 2025, while models also completed expert-level cyber challenges previously requiring a decade of experience.

Safeguards designed to limit misuse are also improving, according to the report. Red-team testing found that the time required to identify universal jailbreaks increased from minutes to several hours between model generations, representing an estimated forty-fold improvement in resistance.

The analysis highlights advances beyond cybersecurity. AI systems now complete hour-long software engineering tasks more than 40% of the time, while biology and chemistry models outperform PhD-level researchers in controlled knowledge tests and support non-experts in laboratory-style workflows.

While the report avoids policy recommendations, UK officials say it strengthens transparency around advanced AI systems. The government plans to continue investing in evaluation science through the AI Security Institute, supporting independent testing and international collaboration.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Strong AI memory demand boosts Micron outlook into 2026

Micron Technology reported record first-quarter revenue for fiscal 2026, supported by strong pricing, a favourable product mix and operating leverage. The company said tight supply conditions and robust AI-related demand are expected to continue into 2026.

The Boise-based chipmaker generated $13.64 billion in quarterly revenue, led by record sales across DRAM, NAND, high-bandwidth memory and data centres. Chief executive Sanjay Mehrotra said structural shifts are driving rising demand for advanced memory in AI workloads.

Margins expanded sharply, setting Micron apart from peers such as Broadcom and Oracle, which reported margin pressure in recent earnings. Chief financial officer Mark Murphy said gross margin is expected to rise further in the second quarter, supported by higher prices, lower costs and a favourable revenue mix.

Analysts highlighted improving fundamentals and longer-term visibility. Baird said DRAM and NAND pricing could rise sequentially as Micron finalises long-term supply agreements, while capital expenditure plans for fiscal 2026 were viewed as manageable and focused on expanding high-margin HBM capacity.

Retail sentiment also turned strongly positive following the earnings release, with Micron shares jumping around 8 percent in after-hours trading. The stock is on track to finish the year as the best-performing semiconductor stock in the S&P 500, reinforcing confidence in the company's AI-driven growth trajectory.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Kimwolf Android botnet linked to record-breaking DDoS attacks

Cybersecurity researchers have uncovered a rapidly expanding Android botnet known as Kimwolf, which has already compromised approximately 1.8 million devices worldwide.

The malware primarily targets smart TVs, set-top boxes, and tablets connected to residential networks, with infections concentrated in countries including Brazil, India, the US, Argentina, South Africa, and the Philippines.

Analysis by QiAnXin XLab indicates that Kimwolf demonstrates a high degree of operational resilience.

Despite multiple disruptions to its command-and-control infrastructure, the botnet has repeatedly re-emerged with enhanced capabilities, including the adoption of Ethereum Name Service to harden its communications against takedown efforts.

Researchers also identified significant similarities between Kimwolf and AISURU, one of the most powerful botnets observed in recent years. Shared source code, infrastructure, and infection scripts suggest both botnets are operated by the same threat group and have coexisted on large numbers of infected devices.

AISURU has previously drawn attention for launching record-setting distributed denial-of-service attacks, including traffic peaks approaching 30 terabits per second.

The emergence of Kimwolf alongside such activity highlights the growing scale and sophistication of botnet-driven cyber threats targeting global internet infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

PwC automates AI governance with Agent Mode

PwC, the global professional services network, has expanded its Model Edge platform with the launch of Agent Mode, an AI assistant designed to automate governance, compliance and documentation across enterprise AI model lifecycles.

The capability targets the growing administrative burden faced by organisations as AI model portfolios scale and regulatory expectations intensify.

Agent Mode allows users to describe governance tasks in natural language, instead of manually navigating workflows.

The system executes actions directly within Model Edge, generates leadership-ready documentation and supports common document and reporting formats, significantly reducing routine compliance effort.

PwC estimates weekly time savings of between 20 and 50 percent for governance and model risk teams.

Behind the interface, a secure orchestration engine interprets user intent, verifies role-based permissions and selects appropriate large language models based on task complexity. The design ensures governance guardrails remain intact while enabling faster and more consistent oversight.

PwC positions Agent Mode as a step towards fully automated, agent-driven AI governance, enabling organisations to focus expert attention on risk assessment and regulatory judgement instead of process management as enterprise AI adoption accelerates.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The limits of raw computing power in AI

As the global race for AI accelerates, a growing number of experts are questioning whether simply adding more computing power still delivers meaningful results. In a recent blog post, digital policy expert Jovan Kurbalija argues that AI development is approaching a critical plateau, where massive investments in hardware produce only marginal gains in performance.

Despite the dominance of advanced GPUs and ever-larger data centres, improvements in accuracy and reasoning among leading models are slowing, exposing what he describes as an emerging ‘AI Pareto paradox’.

According to Kurbalija, the imbalance is striking: around 80% of AI investment is currently spent on computing infrastructure, yet it accounts for only a fraction of real-world impact. As hardware becomes cheaper and more widely available, he suggests it is no longer the decisive factor.

Instead, the next phase of AI progress will depend on how effectively organisations integrate human knowledge, skills, and processes into AI systems.

That shift places people, not machines, at the centre of AI transformation. Kurbalija highlights the limits of traditional training approaches and points to new models of learning that focus on hands-on development and deep understanding of data.

Building a simple AI tool may now take minutes, but turning it into a reliable, high-precision system requires sustained human effort, from refining data to rethinking internal workflows.

Looking ahead to 2026, the message is clear. Success in AI will not be defined by who owns the most powerful chips, but by who invests most wisely in people.

As Kurbalija concludes, organisations that treat AI as a skill to be cultivated, rather than a product to be purchased, are far more likely to see lasting benefits from the technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and security trends shape the internet in 2025

Cloudflare released its sixth annual Year in Review, providing a comprehensive snapshot of global Internet trends in 2025. The report highlights rising digital reliance, AI progress, and evolving security threats across Cloudflare’s network and Radar data.

Global Internet traffic rose 19 percent year-on-year, reflecting increased use for personal and professional activities. A key trend was the move from large-scale AI training to continuous AI inference, alongside rapid growth in generative AI platforms.

Google and Meta remained the most popular services, while ChatGPT led in generative AI usage.

Cybersecurity remained a critical concern. Post-quantum encryption now protects 52 percent of Internet traffic, yet record-breaking DDoS attacks underscored rising cyber risks.

Civil society and non-profit organisations were the most targeted sectors for the first time, while government actions caused nearly half of the major Internet outages.

Connectivity varied by region, with Europe leading in speed and quality and Spain ranking highest globally. The report outlines 2025’s Internet challenges and progress, providing insights for governments, businesses, and users aiming for greater resilience and security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto theft soars in 2025 with fewer but bigger attacks

Cryptocurrency theft intensified in 2025, with total stolen funds exceeding $3.4 billion despite fewer large-scale incidents. Losses became increasingly concentrated, with a few major breaches driving most of the annual damage and widening the gap between typical hacks and extreme outliers.

North Korea remained the dominant threat actor, stealing at least $2.02 billion in digital assets during the year, a 51% increase compared with 2024.

Larger thefts were achieved through fewer operations, often relying on insider access, executive impersonation, and long-term infiltration of crypto firms rather than frequent attacks.

Laundering activity linked to North Korean actors followed a distinctive and disciplined pattern. Stolen funds moved in smaller tranches through Chinese-language laundering networks, bridges, and mixing services, usually following a structured 45-day cycle.

Individual wallet attacks surged, impacting tens of thousands of victims, while the total value stolen from personal wallets fell. Decentralised finance remained resilient, with hack losses low despite rising locked capital, indicating stronger security practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society that contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!