The limits of raw computing power in AI

As the global race for AI accelerates, a growing number of experts are questioning whether simply adding more computing power still delivers meaningful results. In a recent blog post, digital policy expert Jovan Kurbalija argues that AI development is approaching a critical plateau, where massive investments in hardware produce only marginal gains in performance.

Despite the dominance of advanced GPUs and ever-larger data centres, improvements in accuracy and reasoning among leading models are slowing, exposing what he describes as an emerging ‘AI Pareto paradox’.

According to Kurbalija, the imbalance is striking: around 80% of AI investment is currently spent on computing infrastructure, yet it accounts for only a fraction of real-world impact. As hardware becomes cheaper and more widely available, he suggests it is no longer the decisive factor.

Instead, the next phase of AI progress will depend on how effectively organisations integrate human knowledge, skills, and processes into AI systems.

That shift places people, not machines, at the centre of AI transformation. Kurbalija highlights the limits of traditional training approaches and points to new models of learning that focus on hands-on development and deep understanding of data.

Building a simple AI tool may now take minutes, but turning it into a reliable, high-precision system requires sustained human effort, from refining data to rethinking internal workflows.

Looking ahead to 2026, the message is clear. Success in AI will not be defined by who owns the most powerful chips, but by who invests most wisely in people.

As Kurbalija concludes, organisations that treat AI as a skill to be cultivated, rather than a product to be purchased, are far more likely to see lasting benefits from the technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and security trends shape the internet in 2025

Cloudflare released its sixth annual Year in Review, providing a comprehensive snapshot of global Internet trends in 2025. The report highlights rising digital reliance, AI progress, and evolving security threats across Cloudflare’s network and Radar data.

Global Internet traffic rose 19 percent year-on-year, reflecting increased use for personal and professional activities. A key trend was the move from large-scale AI training to continuous AI inference, alongside rapid growth in generative AI platforms.

Google and Meta remained the most popular services, while ChatGPT led in generative AI usage.

Cybersecurity remained a critical concern. Post-quantum encryption now protects 52 percent of Internet traffic, yet record-breaking DDoS attacks underscored rising cyber risks.

Civil society and non-profit organisations were the most targeted sectors for the first time, while government actions caused nearly half of the major Internet outages.

Connectivity varied by region, with Europe leading in speed and quality and Spain ranking highest globally. The report outlines 2025’s Internet challenges and progress, providing insights for governments, businesses, and users aiming for greater resilience and security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto theft soars in 2025 with fewer but bigger attacks

Cryptocurrency theft intensified in 2025, with total stolen funds exceeding $3.4 billion despite fewer large-scale incidents. Losses became increasingly concentrated, with a few major breaches driving most of the annual damage and widening the gap between typical hacks and extreme outliers.

North Korea remained the dominant threat actor, stealing at least $2.02 billion in digital assets during the year, a 51% increase compared with 2024.

Larger thefts were achieved through fewer operations, often relying on insider access, executive impersonation, and long-term infiltration of crypto firms rather than frequent attacks.

Laundering activity linked to North Korean actors followed a distinctive and disciplined pattern. Stolen funds moved in smaller tranches through Chinese-language laundering networks, bridges, and mixing services, usually following a structured 45-day cycle.

Individual wallet attacks surged, impacting tens of thousands of victims, while the total value stolen from personal wallets fell. Decentralised finance remained resilient, with hack losses low despite rising locked capital, indicating stronger security practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to cite cooperation with governments and civil society, a stance that contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Joule Agent workshops help organisations build practical AI agent solutions

Artificial intelligence agents, autonomous systems that perform tasks or assist decision-making, are increasingly part of digital transformation discussions, but their value depends on solving actual business problems rather than adopting technology for its own sake.

SAP’s AppHaus Joule Agent Discovery and Design workshops provide a structured, human-centred approach to help organisations discover where agentic AI can deliver real impact and design agents that collaborate effectively with humans.

The Discovery workshop focuses on identifying challenges and inefficiencies where automation can add value, guiding participants to select high-priority use cases that suit agentic solutions.

The Design workshop then brings users and business experts together to define each AI agent’s role, responsibilities and required skills. By the end of these sessions, participants have detailed plans defining tasks, workflows and instructions that can be translated into actual AI agent implementations.
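
As a purely illustrative example of how such a workshop output could be captured in a structured, hand-off-ready form (this is not SAP’s Joule Agent format; the agent name, skills and workflow steps below are invented), a plan defining role, responsibilities and required skills might look like the following sketch:

```python
# Hypothetical sketch: a Design-workshop output captured as structured data,
# ready to hand to an implementation team. The agent name, skills and
# workflow steps are invented examples, not SAP's Joule Agent format.

invoice_triage_agent = {
    "role": "Invoice triage assistant",
    "responsibilities": [
        "Classify incoming invoices by type and urgency",
        "Flag missing purchase-order references for human review",
    ],
    "required_skills": ["document extraction", "ERP lookup", "email drafting"],
    "workflow": [
        {"step": 1, "action": "extract invoice fields", "handoff_to_human": False},
        {"step": 2, "action": "match against open purchase orders", "handoff_to_human": False},
        {"step": 3, "action": "escalate unmatched invoices", "handoff_to_human": True},
    ],
}

# A quick sanity check an implementation team might run on the workshop output:
# every agent plan should define at least one human checkpoint.
assert any(step["handoff_to_human"] for step in invoice_triage_agent["workflow"])
print(f"{invoice_triage_agent['role']}: {len(invoice_triage_agent['workflow'])} workflow steps")
```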

SAP also supports these formats with self-paced learning courses and toolkits to help anyone run the workshops confidently, emphasising practical human–AI partnerships rather than technology hype.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini users can now build custom AI mini-apps with Opal

Google has expanded the availability of Opal, a no-code experimental tool from Google Labs, by integrating it directly into the Gemini web application.

This integration allows users to build AI-powered mini-apps, known as Gems, without writing any code, using natural language descriptions and a visual workflow editor inside Gemini’s interface.

Previously available only via separate Google Labs experiments, Opal now appears in the Gems manager section of the Gemini web app, where users can describe the functionality they want and have Gemini generate a customised mini-app.

These mini-apps can be reused for specific tasks and workflows and saved as part of a user’s Gem collection.

The no-code ‘vibe-coding’ approach aims to democratise AI development by enabling creators, developers and non-technical users alike to build applications that automate or augment tasks, all through intuitive language prompts and visual building blocks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI adds pinned chat feature to ChatGPT apps

US tech company OpenAI has begun rolling out a pinned chats feature in ChatGPT across web, Android and iOS, allowing users to keep selected conversations fixed at the top of their chat history for faster access.

The feature mirrors familiar behaviour from messaging platforms such as WhatsApp and Telegram, so users no longer need to scroll repeatedly through past chats.

Users can pin a conversation by selecting the three-dot menu on the web or by long-pressing on mobile devices, ensuring that essential discussions remain visible regardless of how many new chats are created.

The update follows earlier interface changes aimed at helping users explore conversation paths without losing the original discussion thread.

Alongside pinned chats, OpenAI is moving ChatGPT toward a more app-driven experience through an internal directory that allows users to connect third-party services directly within conversations.

The company says these integrations support tasks such as bookings, file handling and document creation without switching applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada drives AI growth and adoption

The Government of Canada is investing over $19 million to help 20 AI and tech businesses in southern Ontario bring new solutions to market. The funding aims to boost Canada’s global competitiveness in AI.

The Ontario Brain Institute receives $2 million to expand its Centre for Analytics, providing secure and bias-free AI tools. This initiative supports safe and responsible AI adoption across industries.

Investments are expected to create jobs and accelerate AI adoption nationwide. The Regional Artificial Intelligence Initiative builds on over $450 million in FedDev Ontario funding since 2015.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Instacart faces FTC scrutiny over AI pricing tool

US regulators are examining Instacart’s use of AI in grocery pricing, after reports that shoppers were shown different prices for identical items. Sources told Reuters the Federal Trade Commission has opened a probe into the company’s AI-driven pricing practices.

The FTC has issued a civil investigative demand seeking information about Instacart’s Eversight tool, which allows retailers to test different prices using AI. The agency said it does not comment on ongoing investigations, but expressed concern over reports of alleged pricing behaviour.

Scrutiny follows a study of 437 shoppers across four US cities, which found average price differences of 7 percent for the same grocery lists at the same stores. Some shoppers reportedly paid up to 23 percent more than others for identical items, according to the researchers.
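
As a rough illustration of how such basket-level spreads are calculated (the study’s own methodology is not detailed here), the sketch below uses invented basket totals for the same hypothetical grocery list and reports each shopper’s premium over the cheapest basket:

```python
# Illustrative sketch only, not the researchers' methodology: how basket-level
# price spreads such as an average and a maximum premium might be computed.
# All shopper labels and prices below are made up for illustration.

basket_totals = {
    "shopper_a": 98.40,   # hypothetical totals for the same grocery list
    "shopper_b": 104.75,
    "shopper_c": 112.10,
    "shopper_d": 121.03,
}

cheapest = min(basket_totals.values())

# Percent premium each shopper paid relative to the cheapest basket.
premiums = {
    name: (total - cheapest) / cheapest * 100
    for name, total in basket_totals.items()
}

average_premium = sum(premiums.values()) / len(premiums)
max_premium = max(premiums.values())

for name, pct in premiums.items():
    print(f"{name}: +{pct:.1f}% vs cheapest basket")
print(f"average premium: {average_premium:.1f}%, maximum premium: {max_premium:.1f}%")
```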

Instacart said the pricing experiments were randomised and not based on personal data or individual behaviour. The company maintains that retailers, not Instacart, set prices on the platform, with the exception of Target, where prices are sourced externally and adjusted to cover costs.

The investigation comes amid wider regulatory focus on technology-driven pricing as living costs remain politically sensitive in the United States. Lawmakers have urged greater transparency, while the FTC continues broader inquiries into AI tools used to analyse consumer data and set prices.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT expands with a new app directory from OpenAI

OpenAI has opened submissions for third-party apps inside ChatGPT, allowing developers to publish tools that extend conversations with real-world actions. Approved apps will appear in a new in-product directory, enabling users to move directly from discussion to execution.

The initiative builds on OpenAI’s earlier DevDay announcement, where it outlined how apps could add specialised context to conversations. Developers can now submit apps for review, provided they meet the company’s requirements on safety, privacy, and user experience.

ChatGPT apps are designed to support practical workflows such as ordering groceries, creating slide decks, or searching for apartments. Apps can be activated during conversations via the tools menu, by mentioning them directly, or through automated recommendations based on context and usage signals.

To support adoption, OpenAI has released developer resources including best-practice guides, open-source example apps, and a chat-native UI library. An Apps SDK, currently in beta, allows developers to build experiences that integrate directly into conversational flows.
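
For readers curious what defining such an app might involve, here is a minimal, purely hypothetical sketch of the kind of declarative tool contract a conversational app tends to expose. The app name apartment_search, its parameters and the validation helper are invented for illustration and do not reflect OpenAI’s actual Apps SDK, which the source describes only at a high level.

```python
import json

# Hypothetical sketch only: NOT OpenAI's Apps SDK. It illustrates the kind of
# declarative tool description a conversational app might expose so that an
# assistant knows when and how to invoke it mid-conversation.

apartment_search_app = {
    "name": "apartment_search",          # hypothetical app name
    "description": "Search rental listings and return matches the user can "
                   "discuss or refine inside the chat.",
    "parameters": {                       # JSON-Schema-style input contract
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "max_rent": {"type": "number"},
            "bedrooms": {"type": "integer", "minimum": 0},
        },
        "required": ["city", "max_rent"],
    },
}


def validate_call(app: dict, arguments: dict) -> None:
    """Minimal check that a model-generated call supplies the required fields."""
    missing = [
        field
        for field in app["parameters"]["required"]
        if field not in arguments
    ]
    if missing:
        raise ValueError(f"missing required argument(s): {missing}")


# Example of a call an assistant might produce from a user request.
call_args = {"city": "Lisbon", "max_rent": 1500, "bedrooms": 2}
validate_call(apartment_search_app, call_args)
print(json.dumps({"tool": apartment_search_app["name"], "arguments": call_args}, indent=2))
```

A machine-readable contract of this shape is also what would allow an assistant to recommend or invoke an app automatically from conversational context, as described above.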

During the initial rollout, monetisation is limited to external links directing users to developers’ own platforms. OpenAI said it plans to explore additional revenue models over time as the app ecosystem matures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!