ESMA could gain direct supervision over crypto firms

The European Commission has proposed giving the European Securities and Markets Authority (ESMA) expanded powers to oversee crypto and broader financial markets, aiming to close the regulatory gap with the United States.

The plan would give ESMA direct supervision of crypto service providers, trading venues, and central counterparties, while boosting its role in asset management coordination. Approval from the European Parliament and the Council is still required.

Calls for stronger oversight have grown following concerns over lenient national regimes, including Malta’s crypto licensing system. France, Austria, and Italy have called for ESMA to directly oversee major crypto firms, with France threatening to block cross-border licence passporting.

Revisions to the Markets in Crypto-Assets Regulation (MiCA) are also under discussion, with proposals for stricter rules on offshore crypto activities, improved cybersecurity oversight, and tighter regulations for token offerings.

Experts warn that centralising ESMA supervision may slow innovation, especially for smaller crypto and fintech startups reliant on national regulators. ESMA would need significant resources for the expanded mandate, which could slow decision-making across the EU.

The proposal aims to boost EU capital market competitiveness and increase wealth for citizens. EU stock market capitalisation currently amounts to just 73% of the bloc’s GDP, compared with 270% in the US, highlighting the need for a more integrated regulatory framework.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Will the AI boom hold or collapse?

Global investment in AI has soared to unprecedented heights, yet the technology’s real-world adoption lags far behind the market’s feverish expectations. Despite trillions of dollars in valuations and a global AI market projected to reach nearly $5 trillion by 2033, mounting evidence suggests that companies struggle to translate AI pilots into meaningful results.

As Jovan Kurbalija argues in his recent analysis, hype has outpaced both technological limits and society’s ability to absorb rapid change, raising the question of whether the AI bubble is nearing a breaking point.

Kurbalija identifies several forces inflating the bubble, such as relentless media enthusiasm that fuels fear of missing out, diminishing returns on ever-larger computing power, and the inherent logical constraints of today’s large language models, which cannot simply be ‘scaled’ into human-level intelligence.

At the same time, organisations are slow to reorganise workflows, regulations, and skills around AI, resulting in high failure rates for corporate initiatives. A new competitive landscape, driven by ultra-low-cost open-source models such as China’s DeepSeek, further exposes the fragility of current proprietary spending and the vast discrepancies in development costs.

Looking forward, Kurbalija outlines possible futures ranging from a rational shift toward smaller, knowledge-centric AI systems to a world in which major AI firms become ‘too big to fail’, protected by government backstops reminiscent of those extended during the 2008 financial crisis. Geopolitics may also justify massive public spending as the US and China frame AI leadership as a national security imperative.

Other scenarios include a consolidation of power among a handful of tech giants or a mild ‘AI winter’ in which investment cools and attention pivots to the next frontier technologies, such as quantum computing or immersive digital environments.

Regardless of which path emerges, the defining battle ahead will centre on the open-source versus proprietary AI debate. Both Washington and Beijing are increasingly embracing open models as strategic assets, potentially reshaping global standards and forcing big tech firms to rethink their closed ecosystems.

As Kurbalija concludes, the outcome will depend less on technical breakthroughs and more on societal choices, balancing openness, competition, and security in shaping whether AI becomes a sustainable foundation of economic life or the latest digital bubble to deflate under its own weight.


OpenAI launches nationwide AI initiative in Australia

OpenAI has launched OpenAI for Australia, a nationwide initiative to unlock the economic and societal benefits of AI. The program aims to support sovereign AI infrastructure, upskill Australians, and accelerate the country’s local AI ecosystem.

CEO Sam Altman highlighted Australia’s deep technical talent and strong institutions as key factors in becoming a global leader in AI.

A significant partnership with NEXTDC will see the development of a next-generation hyperscale AI campus and large GPU supercluster at Sydney’s Eastern Creek S7 site.

The project is expected to create thousands of jobs, boost local supplier opportunities, strengthen STEM and AI skills, and provide sovereign compute capacity for critical workloads.

OpenAI will also upskill more than 1.2 million Australians in collaboration with CommBank, Coles and Wesfarmers. OpenAI Academy will provide tailored modules to give workers and small business owners practical AI skills for confident daily use.

The nationwide rollout of courses is scheduled to begin in 2026.

OpenAI is launching its first Australian start-up program with local venture capital firms Blackbird, Square Peg, and AirTree to support home-grown innovation. Start-ups will receive API credits, mentorship, workshops, and access to Founder Day to accelerate product development and scale AI solutions locally.


EU partners with EIB to support AI gigafactories

The European Commission and the European Investment Bank Group (EIB) have signed a memorandum of understanding to support the development of AI Gigafactories across the EU. The partnership aims to position Europe as a leading AI hub by accelerating financing and the construction of large-scale AI facilities.

The agreement establishes a framework to guide consortia responding to the Commission’s informal Call for Expression of Interest. EIB advisory support will help turn proposals into bankable projects for the 2026 AI Gigafactory call, with possible co-financing.

The initiative builds on InvestAI, announced in February 2025, mobilising €20 billion to support up to five AI Gigafactories. These facilities will boost Europe’s computing infrastructure, reinforce technological sovereignty, and drive innovation across the continent.

By translating Europe’s AI ambitions into concrete, large-scale projects, the Commission and the EIB aim to position the EU as a global leader in next-generation AI, while fostering investment and industrial growth.


€700 million crypto fraud network spanning Europe broken up

Authorities have broken up an extensive cryptocurrency fraud and money laundering network that moved over €700 million, following years of international investigation.

The operation began with an investigation into a single fraudulent cryptocurrency platform and eventually uncovered an extensive network of fake investment schemes targeting thousands of victims.

Victims were drawn in by fake ads promising high returns and pressured via criminal call centres to pay more. Transferred funds were stolen and laundered across blockchains and exchanges, exposing a highly organised operation across Europe and beyond.

Police raids across Cyprus, Germany, and Spain in late October 2025 resulted in nine arrests and the seizure of millions in assets, including bank deposits, cryptocurrencies, cash, digital devices, and luxury watches.

Europol and Eurojust coordinated the cross-border operation with national authorities from France, Belgium, Germany, Spain, Malta, Cyprus, and other nations.

The second phase, executed in November, targeted the affiliate marketing infrastructure behind fraudulent online advertising, including deepfake campaigns impersonating celebrities and media outlets.

Law enforcement teams in Belgium, Bulgaria, Germany, and Israel conducted searches, dismantling key elements of the scam ecosystem. Investigations continue to track down remaining assets and dismantle the broader network.


Google launches Workspace Studio for AI-powered automation

Google has made Workspace Studio generally available, allowing employees to design, manage, and share AI agents directly within Workspace. Powered by Gemini 3, these agents automate tasks ranging from simple routines to complex business workflows, all without coding.

The platform aims to save time on repetitive work, freeing employees to focus on higher-value activities.

Agents can understand context, reason through problems, and integrate with core Workspace apps such as Gmail, Drive, and Chat, as well as enterprise platforms like Asana, Jira, Mailchimp, and Salesforce.

Early adopters, including cleaning solutions leader Kärcher, have used Workspace Studio to streamline workflows, reducing planning time by up to 90% and condensing multi-step tasks into a single minute.

Workspace Studio allows users to build agents using templates or natural language prompts, making automation accessible to non-specialists. Agents can manage status reports, reminders, email triage, and critical tasks, such as legal notices or travel requests.

Teams can also easily share agents, ensuring collaboration and consistency across workflows.

The rollout to business customers will continue over the coming weeks. Users can start creating agents immediately, explore templates, use prompts for automations, and join the Gemini Alpha program to test early features and controls.


Google boosts Nigeria’s AI development

US tech giant Google has announced a $2.1 million Google.org commitment to support Nigeria’s AI-powered future, aiming to strengthen local talent and improve digital safety nationwide.

The initiative supports Nigeria’s National AI Strategy and its ambition to create one million digital jobs, recognising that AI could add $15 billion to the country’s economy by 2030.

The investment focuses on developing advanced AI skills among students and developers instead of limiting progress to short-term training schemes.

Google will fund programmes led by expert partners such as FATE Foundation, the African Institute for Mathematical Sciences, and the African Technology Forum.

Their work will introduce advanced AI curricula into universities and provide developers with structured, practical routes from training to building real-world products.

The commitment also expands digital safety initiatives so communities can participate securely in the digital economy.

Junior Achievement Africa will scale Google’s ‘Be Internet Awesome’ curriculum to help families understand safe online behaviour, while the CyberSafe Foundation will deliver cybersecurity training and technical assistance to public institutions, strengthening national digital resilience.

Google aims to replicate the success of Nigerian learners who used digital skills to secure full-time careers rather than remaining excluded from the digital economy.

By combining advanced AI training with improved digital safety, the company intends to support inclusive growth and build long-term capacity across Nigeria.


SAP elevates customer support with proactive AI systems

AI has pushed customer support into a new era, where anticipation replaces reaction. SAP has built a proactive model that predicts issues, prevents failures and keeps critical systems running smoothly instead of relying on queues and manual intervention.

Major sales events, such as Cyber Week and Singles Day, demonstrated the impact of this shift, with uninterrupted service and significant growth in transaction volumes and order numbers.

Self-service now resolves most issues before they reach an engineer, as structured knowledge supports AI agents that respond instantly with a confidence level that matches human performance.

Tools such as the Auto Response Agent and Incident Solution Matching enable customers to retrieve solutions without having to search through lengthy documentation.

SAP also helps organisations scaling AI by offering support systems tailored for early deployment.

Engineers have benefited from AI as much as customers. Routine tasks are handled automatically, allowing experts to focus on problems that demand insight instead of administration.

Language optimisation, routing suggestions, and automatic error categorisation support faster and more accurate resolutions. SAP validates every AI tool internally before release, which it views as a safeguard for responsible adoption.

The company maintains that AI will augment staff rather than replace them. Creative and analytical work becomes increasingly important as automation handles repetitive tasks, and new roles emerge in areas such as AI training and data stewardship.

SAP argues that progress relies on a balanced relationship between human judgement and machine intelligence, strengthened by partnerships that turn enterprise data into measurable outcomes.


Meta begins removing underage users in Australia

Meta has begun removing Australian users under 16 from Facebook, Instagram and Threads ahead of a national ban taking effect on 10 December. Canberra requires major platforms to block younger users or face substantial financial penalties.

Meta says it is deleting accounts it reasonably believes belong to users under 16, while allowing those users to download their data. Authorities expect hundreds of thousands of adolescents to be affected, given Instagram’s large cohort of 13 to 15 year olds.

Regulators argue the law addresses harmful recommendation systems and exploitative content, though YouTube has warned that safety filters will weaken for unregistered viewers. The Australian communications minister has insisted platforms must strengthen their own protections.

Rights groups have challenged the law in court, claiming unjust limits on expression. Officials concede teenagers may try using fake identification or AI-altered images, yet still expect platforms to deploy strong countermeasures.


Canada sets national guidelines for equitable AI

Yesterday, Canada released the CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems standard, marking the first national standard focused specifically on accessible AI.

The framework aims to ensure AI systems are inclusive, fair, and accessible from design through deployment. Its release coincides with the International Day of Persons with Disabilities, emphasising Canada’s commitment to accessibility and inclusion.

The standard guides organisations and developers in creating AI that accommodates people with disabilities, promotes fairness, prevents exclusion, and maintains accessibility throughout the AI lifecycle.

It provides practical processes for equity in AI development and encourages education on accessible AI practices.

The standard was developed by a technical committee composed largely of people with disabilities and members of equity-deserving groups, incorporating public feedback from Canadians of diverse backgrounds.

Approved by the Standards Council of Canada, CAN-ASC-6.2 meets national requirements for standards development and aligns with international best practices.

Moreover, the standard is available for free in both official languages and accessible formats, including plain language, American Sign Language and Langue des signes québécoise.

By setting clear guidelines, Canada aims to ensure AI serves all citizens equitably and strengthens workforce inclusion, societal participation, and technological fairness.

The initiative highlights Canada’s leadership in accessible technology and provides organisations with a practical tool for implementing inclusive AI systems.
