Argentina weighs letting banks offer crypto services

Argentina may soon shift its digital-asset policy as the central bank considers rules allowing banks to offer crypto trading and custody services. The proposal marks a move towards bringing a market that has largely operated through exchanges and fintech platforms into the regulated banking system.

Industry sources say approval could arrive by April 2026 if the process stays on schedule.

Crypto usage in Argentina remains far above regional averages, driven by years of inflation and strict currency controls. Many households use digital assets as a store of value, and regulated banks could provide clearer safeguards and easier access for everyday users.

Regulators are still debating sensitive issues such as custody requirements, capital treatment and which tokens banks would be permitted to handle.

The conversation continues in the shadow of the Libra meme-coin scandal, which left thousands of Argentines with steep losses and highlighted the risks of politically amplified speculation.

The overarching aim is to formalise the market without amplifying volatility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan aims to boost public AI use

Japan has drafted a new basic programme aimed at dramatically increasing public use of AI, with a target of raising utilisation from 50% to 80%. The government hopes the policy will strengthen domestic AI capabilities and reduce reliance on foreign technologies.

To support innovation, authorities plan to attract roughly ¥1 trillion in private investment, funding research, talent development and the expansion of AI businesses into emerging markets. Officials see AI as a core social infrastructure that supports both intellectual and practical functions.

The draft proposes a unified AI ecosystem where developers, chip makers and cloud providers collaborate to strengthen competitiveness and reduce Japan’s digital trade deficit. AI adoption is also expected to extend across all ministries and government agencies.

Prime Minister Sanae Takaichi has pledged to make Japan the easiest country in the world for AI development and use. The Cabinet is expected to approve the programme before the end of the year, paving the way for accelerated research and public-private investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU gains stronger ad oversight after TikTok agreement

Regulators in the EU have accepted binding commitments from TikTok aimed at improving advertising transparency under the Digital Services Act.

The agreement follows months of scrutiny and addresses concerns raised in the Commission’s preliminary findings earlier this year.

TikTok will now provide complete versions of advertisements exactly as they appear in user feeds, along with associated URLs, targeting criteria and aggregated demographic data.

Researchers will gain clearer insight into how advertisers reach users, rather than relying on partial or delayed information. The platform has also agreed to refresh its advertising repository within 24 hours.

Further improvements include new search functions and filters that make it easier for the public, civil society and regulators to examine advertising content.

These changes are intended to support efforts to detect scams, identify harmful products and analyse coordinated influence operations, especially around elections.

TikTok must implement its commitments to the EU within deadlines ranging from two to twelve months, depending on the measure.

The Commission will closely monitor compliance while continuing broader investigations into algorithmic design, protection of minors, data access and risks connected to elections and civic discourse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU ministers call for faster action on digital goals

European ministers have adopted conclusions aimed at boosting the Union’s digital competitiveness, urging quicker progress toward the 2030 Digital Decade goals.

Officials called for stronger digital skills, wider adoption of technology, and a framework that supports innovation while protecting fundamental rights. Digital sovereignty remains a central objective, framed as open, risk-based and aligned with European values.

Ministers supported simplifying digital rules for businesses, particularly SMEs and start-ups, which face complex administrative demands. A predictable legal environment, less duplication in reporting obligations and clearer rules were seen as essential for competitiveness.

Governments emphasised that simplification must not weaken data protection or other core safeguards.

Concerns over online safety and illegal content were a prominent feature in discussions on enforcing the Digital Services Act. Ministers highlighted the presence of harmful content and unsafe products on major marketplaces, calling for stronger coordination and consistent enforcement across member states.

Ensuring full compliance with EU consumer protection and product safety rules was described as a priority.

Cyber-resilience was a key focus as ministers discussed the increasing impact of cyberattacks on citizens and the economy. Calls for stronger defences grew as digital transformation accelerated, with several states sharing updates on national and cross-border initiatives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia introduces new codes to protect children online

Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.

The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.

The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.

Search services must also redirect people seeking information related to suicide, self-harm or eating disorders to professional mental health support instead of allowing harmful spirals to unfold.

eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than deliberate choices.

The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.

eSafety maintains that the priority lies in shielding children from images and videos they cannot cognitively process or forget once they have seen them.

These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.

Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU targets X for breaching the Digital Services Act

European regulators have imposed a €120 million fine on X after ruling that the platform breached transparency rules under the Digital Services Act.

The Commission concluded that the company misled users with its blue checkmark system, restricted research access and operated an inadequate advertising repository.

Officials found that paid verification on X encouraged users to believe their accounts had been authenticated when, in fact, no meaningful checks were conducted.

EU regulators argued that such practices increased exposure to scams and impersonation fraud, rather than supporting trust in online communication.

The Commission also stated that the platform’s advertising repository lacked essential information and created barriers that prevented researchers and civil society from examining potential threats.

European authorities judged that X failed to offer legitimate access to public data for eligible researchers. Terms of service blocked independent data collection, including scraping, while the company’s internal processes created further obstacles.

Regulators believe such restrictions frustrate efforts to study misinformation, influence campaigns and other systemic risks within the EU.

X must now outline the steps it will take to end the blue checkmark infringement within sixty working days and deliver a wider action plan on data access and advertising transparency within ninety days.

Failure to comply could lead to further penalties as the Commission continues its broader investigation into information manipulation and illegal content across the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Son warns of vast AI leap as SoftBank outlines future risks

SoftBank chief Masayoshi Son told South Korean President Lee Jae Myung that advanced AI could surpass humans by an extreme margin. He suggested future systems may be 10,000 times more capable than people. The remarks came during a meeting in Seoul focused on national AI ambitions.

Son compared the potential intelligence gap to the difference between humans and goldfish. He said AI might relate to humans as humans relate to pets. Lee acknowledged the vision but admitted feeling uneasy about the scale of the described change.

Son argued that superintelligent systems would not threaten humans physically, noting they lack biological needs. He framed coexistence as the likely outcome. His comments followed renewed political interest in positioning South Korea as an AI leader.

The debate turned to cultural capability when Lee asked whether AI might win the Nobel Prize in Literature. Son said such an achievement was plausible. He pointed to fast-moving advances that continue to challenge expectations about machine creativity.

Researchers say artificial superintelligence remains theoretical, but early steps toward AGI may emerge within a decade. Many expect systems to outperform humans across a wide set of tasks. Policy discussions in South Korea reflect growing urgency around AI governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NSA warns AI poses new risks for operational technology

The US National Security Agency (NSA), together with international partners including Australia’s ACSC, has issued guidance on the secure integration of AI into operational technology (OT).

The Principles for the Secure Integration of AI in OT warn that while AI can optimise critical infrastructure, it also introduces new risks for safety-critical environments. Although aimed at OT administrators, the guidance also highlights issues relevant to IT networks.

AI is increasingly deployed in sectors such as energy, water treatment, healthcare, and manufacturing to automate processes and enhance efficiency.

The NSA’s guidance, however, flags several potential threats, including adversarial prompt injection, data poisoning, AI drift, and reduced explainability, all of which can compromise safety and compliance.

Over-reliance on AI may also lead to human de-skilling, cognitive overload, and distraction, while AI hallucinations raise concerns about reliability in safety-critical settings.

Experts emphasise that AI cannot currently be trusted to make independent safety decisions in OT networks, where the margin for error is far smaller than in standard IT systems.

Sam Maesschalck, an OT engineer, noted that introducing AI without first addressing pre-existing infrastructure issues, such as insufficient data feeds or incomplete asset inventories, could undermine both security and operational efficiency.

The guidance aims to help organisations evaluate AI risks, clarify accountability, and prepare for potential misbehaviour, underlining the importance of careful planning before deploying AI in operationally critical environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LLM shortcomings highlighted by Gary Marcus during industry debate

Gary Marcus argued at Axios’ AI+ Summit that large language models (LLMs) offer utility but fall short of the transformative claims made by their developers. He framed them as, at most, groundwork for future artificial general intelligence and suggested that meaningful capability shifts lie beyond today’s systems.

Marcus said alignment challenges stem from LLMs lacking robust world models and reliable constraints. He noted that models still hallucinate despite explicit instructions to avoid errors. He described current systems as an early rehearsal rather than a route to AGI.

Concerns raised included bias, misinformation, environmental impact and implications for education. Marcus also warned about the decline of online information quality as automated content spreads. He believes structural flaws make these issues persistent.

Industry momentum remains strong despite unresolved risks. Developers continue to push forward without clear explanations for model behaviour. Investment flows remain focused on the promise of AGI, despite timelines consistently shifting.

Strategic competition adds pressure, with the United States seeking to maintain an edge over China in advanced AI. Political signals reinforce the drive toward rapid development. Marcus argued that stronger frameworks are needed before systems scale further.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ESMA could gain direct supervision over crypto firms

The European Commission has proposed giving the European Securities and Markets Authority (ESMA) expanded powers to oversee crypto and broader financial markets, aiming to close the regulatory gap with the United States.

The plan would give ESMA direct supervision of crypto service providers, trading venues, and central counterparties, while boosting its role in asset management coordination. Approval from the European Parliament and the Council is still required.

Calls for stronger oversight have grown following concerns over lenient national regimes, including Malta’s crypto licensing system. France, Austria, and Italy have called for ESMA to directly oversee major crypto firms, with France threatening to block cross-border licence passporting.

Revisions to the Markets in Crypto-Assets Regulation (MiCA) are also under discussion, with proposals for stricter rules on offshore crypto activities, improved cybersecurity oversight, and tighter regulations for token offerings.

Experts warn that centralising ESMA supervision may slow innovation, especially for smaller crypto and fintech startups reliant on national regulators. ESMA would need significant resources for the expanded mandate, which could slow decision-making across the EU.

The proposal aims to boost EU capital market competitiveness and increase wealth for citizens. The market capitalisation of EU stock exchanges currently stands at just 73% of the bloc’s GDP, compared with 270% in the US, highlighting the need for a more integrated regulatory framework.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!