Will the AI boom hold or collapse?

Global investment in AI has soared to unprecedented heights, yet the technology’s real-world adoption lags far behind the market’s feverish expectations. Despite trillions of dollars in valuations and a global AI market projected to reach nearly $5 trillion by 2033, mounting evidence suggests that companies struggle to translate AI pilots into meaningful results.

As Jovan Kurbalija argues in his recent analysis, hype has outpaced both technological limits and society’s ability to absorb rapid change, raising the question of whether the AI bubble is nearing a breaking point.

Kurbalija identifies several forces inflating the bubble, such as relentless media enthusiasm that fuels fear of missing out, diminishing returns on ever-larger computing power, and the inherent logical constraints of today’s large language models, which cannot simply be ‘scaled’ into human-level intelligence.

At the same time, organisations are slow to reorganise workflows, regulations, and skills around AI, resulting in high failure rates for corporate initiatives. A new competitive landscape, driven by ultra-low-cost open-source models such as China’s DeepSeek, further exposes the fragility of current proprietary spending and the vast discrepancies in development costs.

Looking forward, Kurbalija outlines possible futures ranging from a rational shift toward smaller, knowledge-centric AI systems to a world in which major AI firms become ‘too big to fail’, protected by government backstops reminiscent of those deployed during the 2008 financial crisis. Geopolitics may also justify massive public spending as the US and China frame AI leadership as a national security imperative.

Other scenarios include a consolidation of power among a handful of tech giants or a mild ‘AI winter’ in which investment cools and attention pivots to other frontier technologies, such as quantum computing or immersive digital environments.

Regardless of which path emerges, the defining battle ahead will centre on the open-source versus proprietary AI debate. Both Washington and Beijing are increasingly embracing open models as strategic assets, potentially reshaping global standards and forcing big tech firms to rethink their closed ecosystems.

As Kurbalija concludes, the outcome will depend less on technical breakthroughs and more on societal choices, balancing openness, competition, and security in shaping whether AI becomes a sustainable foundation of economic life or the latest digital bubble to deflate under its own weight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

€700 million crypto fraud network spanning Europe broken up

Authorities have broken up an extensive cryptocurrency fraud and money laundering network that moved over EUR 700 million, following years of international investigation.

The operation began with an investigation into a single fraudulent cryptocurrency platform and eventually uncovered an extensive network of fake investment schemes targeting thousands of victims.

Victims were drawn in by fake ads promising high returns and pressured via criminal call centres to pay more. Transferred funds were stolen and laundered across blockchains and exchanges, exposing a highly organised operation across Europe and beyond.

Police raids across Cyprus, Germany, and Spain in late October 2025 resulted in nine arrests and the seizure of millions in assets, including bank deposits, cryptocurrencies, cash, digital devices, and luxury watches.

Europol and Eurojust coordinated the cross-border operation with national authorities from France, Belgium, Germany, Spain, Malta, Cyprus, and other nations.

The second phase, executed in November, targeted the affiliate marketing infrastructure behind fraudulent online advertising, including deepfake campaigns impersonating celebrities and media outlets.

Law enforcement teams in Belgium, Bulgaria, Germany, and Israel conducted searches, dismantling key elements of the scam ecosystem. Investigations continue to track down remaining assets and dismantle the broader network.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia blocks Snapchat and FaceTime access

Russia’s state communications watchdog has intensified its campaign against major foreign platforms by blocking Snapchat and restricting FaceTime calls.

The move follows earlier reports of disrupted Apple services inside the country, although users could still connect through VPNs. Roskomnadzor accused Snapchat of enabling criminal activity and repeated earlier claims targeting Apple’s service.

The decision marks the authorities’ first formal confirmation of limits on both platforms. It comes as pressure mounts on WhatsApp, which remains Russia’s most popular messenger, with officials warning that a full block is possible.

Meta is accused of failing to meet data-localisation rules and of what the authorities describe as repeated violations linked to terrorism and fraud.

Digital rights groups argue that the technical restrictions are designed not to protect user privacy but to push citizens toward Max, a government-backed messenger that activists say grants officials sweeping access to private conversations.

These measures coincide with wider crackdowns, including the recent blocking of the Roblox gaming platform over allegations of extremist content and harmful influence on children.

The tightening of controls reflects a broader effort to regulate online communication as Russia seeks stronger oversight of digital platforms. The latest blocks add further uncertainty for millions of users who depend on familiar services and are reluctant to switch to state-supported alternatives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Porn site fined £1m for ignoring UK child safety age checks

A UK pornographic website has been fined £1m by Ofcom for failing to comply with mandatory age verification under the Online Safety Act. The company, AVS Group Ltd, did not respond to repeated contact from the regulator, prompting an additional £50,000 penalty.

The Act requires websites hosting adult content to implement ‘highly effective age assurance’ to prevent children from accessing explicit material. Ofcom has ordered the company to comply within 72 hours or face further daily fines.

Other tech platforms are also under scrutiny, with one unnamed major social media company undergoing compliance checks. Regulators warn that non-compliance will result in formal action, highlighting the growing enforcement of child safety online.

Critics argue the law must be tougher to ensure real protection, particularly for minors and women online. While age checks have reduced UK traffic to some sites, loopholes like VPNs remain a concern, and regulators are pushing for stricter adherence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Japanese high-schooler suspected of hacking net-cafe chain using AI

Authorities in Tokyo have issued an arrest warrant for a 17-year-old boy from Osaka on suspicion of orchestrating a large-scale cyberattack using artificial intelligence. The alleged target was the operator of the Kaikatsu Club internet-café chain and a related fitness-gym business; the attack may have exposed the personal data of about 7.3 million customers.

According to investigators, the suspect used a computer programme, reportedly built with help from an AI chatbot, to send unauthorised commands around 7.24 million times to the company’s servers in order to extract membership information. The teenager was previously arrested in November in connection with a separate fraud case involving credit-card misuse.

Police have charged him under Japan’s law against unauthorised computer access and for obstructing business, though so far no evidence has emerged of misuse (for example, resale or public leaks) of the stolen data.

In his statement to investigators, the suspect reportedly said he carried out the hack simply because he found it fun to probe system vulnerabilities.

This case is the latest in a growing pattern of so-called AI-enabled cyber crimes in Japan, from fraudulent subscription schemes to ransomware generation. Experts warn that generative AI is lowering the barrier to entry for complex attacks, enabling individuals with limited technical training to carry out large-scale hacking or fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google boosts Nigeria’s AI development

US tech giant Google has announced a $2.1 million Google.org commitment to support Nigeria’s AI-powered future, aiming to strengthen local talent and improve digital safety nationwide.

The initiative supports Nigeria’s National AI Strategy and its ambition to create one million digital jobs, recognising that AI could add $15 billion to the country’s economy by 2030.

The investment focuses on developing advanced AI skills among students and developers instead of limiting progress to short-term training schemes.

Google will fund programmes led by expert partners such as FATE Foundation, the African Institute for Mathematical Sciences, and the African Technology Forum.

Their work will introduce advanced AI curricula into universities and provide developers with structured, practical routes from training to building real-world products.

The commitment also expands digital safety initiatives so communities can participate securely in the digital economy.

Junior Achievement Africa will scale Google’s ‘Be Internet Awesome’ curriculum to help families understand safe online behaviour, while the CyberSafe Foundation will deliver cybersecurity training and technical assistance to public institutions, strengthening national digital resilience.

Google aims to create more opportunities like those of Nigerian learners who have already used digital skills to secure full-time careers rather than remaining excluded from the digital economy.

By combining advanced AI training with improved digital safety, the company intends to support inclusive growth and build long-term capacity across Nigeria.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SAP elevates customer support with proactive AI systems

AI has pushed customer support into a new era, where anticipation replaces reaction. SAP has built a proactive model that predicts issues, prevents failures and keeps critical systems running smoothly instead of relying on queues and manual intervention.

Major sales events, such as Cyber Week and Singles Day, demonstrated the impact of this shift, with uninterrupted service and significant growth in transaction volumes and order numbers.

Self-service now resolves most issues before they reach an engineer, as structured knowledge supports AI agents that respond instantly with a confidence level that matches human performance.

Tools such as the Auto Response Agent and Incident Solution Matching enable customers to retrieve solutions without having to search through lengthy documentation.

SAP has also helped organisations scaling AI by offering support systems tailored for early deployment.

Engineers have benefited from AI as much as customers. Routine tasks are handled automatically, allowing experts to focus on problems that demand insight instead of administration.

Language optimisation, routing suggestions, and automatic error categorisation support faster and more accurate resolutions. SAP validates every AI tool internally before release, which it views as a safeguard for responsible adoption.

The company maintains that AI will augment staff rather than replace them. Creative and analytical work becomes increasingly important as automation handles repetitive tasks, and new roles emerge in areas such as AI training and data stewardship.

SAP argues that progress relies on a balanced relationship between human judgement and machine intelligence, strengthened by partnerships that turn enterprise data into measurable outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK positions itself for leadership in the quantum computing race

Quantum computing is advancing as governments and industry pursue new frontiers beyond AI. The UK benefits from strong research traditions and skilled talent. Policymakers see early planning as vital for long-term competitiveness.

Companies across finance, energy and logistics are testing quantum methods for optimisation and modelling. Early pilots suggest that quantum techniques may offer advantages where classical approaches slow down or fail to scale. Interest in practical applications is rising across Europe.

The country also benefits from strong university spinouts and deep industrial partnerships. Joint programmes are accelerating work on molecular modelling and drug discovery. Many researchers argue that early experimentation helps build a more resilient quantum workforce.

New processors promise higher connectivity and lower error rates as the field moves closer to quantum advantage. Research teams are refining designs for future error-corrected systems. Hardware roadmaps indicate steady progress towards more reliable architectures.

Policy support will shape how quickly the UK can translate research into real-world capability. Long-term investments, open scientific collaboration and predictable regulation will be critical. Momentum suggests a decisive period for the country’s quantum ambitions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta begins removing underage users in Australia

Meta has begun removing Australian users under 16 from Facebook, Instagram and Threads ahead of a national ban taking effect on 10 December. Canberra requires major platforms to block younger users or face substantial financial penalties.

Meta says it is deleting accounts it reasonably believes belong to underage teenagers while allowing them to download their data. Authorities expect hundreds of thousands of adolescents to be affected, given Instagram’s large cohort of 13 to 15 year olds.

Regulators argue the law addresses harmful recommendation systems and exploitative content, though YouTube has warned that safety filters will weaken for unregistered viewers. The Australian communications minister has insisted platforms must strengthen their own protections.

Rights groups have challenged the law in court, claiming unjust limits on expression. Officials concede teenagers may try using fake identification or AI-altered images, yet still expect platforms to deploy strong countermeasures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber Resilience Act signals a major shift in EU product security

EU regulators are preparing to enforce the Cyber Resilience Act, setting core security requirements for digital products in the European market. The law spans software, hardware, and firmware, establishing shared expectations for secure development and maintenance.

Scope captures apps, embedded systems, and cloud-linked features. Risk classes run from default to critical, directing firms to self-assess or undergo third-party checks. Any product sold after December 2027 must align with the regulation.

Obligations apply to manufacturers, importers, distributors, and developers. Duties include secure-by-design practices, documented risk analysis, disclosure procedures, and long-term support. Firms must notify ENISA within 24 hours of active exploitation and provide follow-up reports on a strict timeline.

Compliance requires technical files covering threat assessments, update plans, and software bills of materials. High-risk categories demand third-party evaluation, while lower-risk segments may rely on internal checks. Existing certifications help, but cannot replace CRA-specific conformity work.

Non-compliance risks fines, market restrictions, and reputational damage. Organisations preparing early are urged to classify products, run gap assessments, build structured roadmaps, and align development cycles with CRA guidance. EU authorities plan to provide templates and support as firms transition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!