Crypto gains official recognition in Argentina investor framework

Argentina’s securities regulator has officially recognised cryptocurrencies as part of an individual’s net worth when determining qualified investor status. The change is set out in CNV Resolution 1125/2026, which allows digital assets to be included in the financial threshold of roughly $479,000.
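To make the mechanics concrete, here is a minimal, purely illustrative sketch of how counting digital assets toward the net-worth threshold changes the outcome. All portfolio figures below are invented for illustration; only the roughly $479,000 threshold comes from the reporting on the resolution.

```python
# Hypothetical qualified-investor check: digital assets now count
# toward the net-worth threshold. Portfolio values are invented.

THRESHOLD_USD = 479_000  # approximate threshold reported for CNV Resolution 1125/2026

portfolio = {
    "bank_deposits": 150_000,
    "government_bonds": 120_000,
    "equities": 100_000,
    "cryptocurrencies": 80_000,  # countable under the new rule
    "stablecoins": 40_000,       # also covered as virtual assets
}

# Without digital assets, this investor would fall short of the threshold.
traditional_only = sum(
    v for k, v in portfolio.items()
    if k not in ("cryptocurrencies", "stablecoins")
)

total = sum(portfolio.values())
qualifies = total >= THRESHOLD_USD

print(f"Traditional assets only: ${traditional_only:,}")
print(f"Total including digital assets: ${total:,} -> qualified: {qualifies}")
```

In this hypothetical case, the investor holds $370,000 in traditional assets and only crosses the threshold once crypto holdings are included, which is precisely the kind of participation the measure is described as expanding.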

The measure defines virtual assets as transferable digital value, covering cryptocurrencies, tokenised assets, and stablecoins. Authorities stated that incorporating these assets reflects a broader view of financial capacity and aims to expand participation in investment markets.

A 2022 central bank ban still prevents banks from offering crypto services, though some institutions are testing blockchain-based settlement systems internally. The restriction is expected to ease as the government signals a more open stance towards digital assets.

The policy shift positions Argentina as gradually integrating crypto into its formal financial framework, with the potential to widen investor access and align regulation with evolving digital markets.

Financial systems are gradually adapting to digital assets, even in jurisdictions with strict restrictions, signalling a slow convergence between traditional banking infrastructure and blockchain-based settlement technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Corporate AI governance gaps highlighted in UNESCO report

UNESCO and the Thomson Reuters Foundation have published ‘Responsible AI in practice: 2025 global insights from the AI Company Data Initiative‘, presenting findings from what the report describes as the largest global dataset of corporate responsible AI disclosures.

The report analyses 2,972 companies across 11 sectors and multiple regions using publicly available disclosures and company survey responses collected through the AI Company Data Initiative.

The report says AI is being embedded across companies’ products, services, and internal operations faster than governance and disclosure are developing. It states that 43.7% of companies publicly communicate having an AI strategy or guidelines, but only 13% publicly claim adherence to a formal AI governance framework.

Among those that do cite a framework, 53% refer to the EU AI Act, while the report says 43.6% cite ‘other’ frameworks, which it presents as weakening comparability across the wider AI governance ecosystem.

The publication also says many companies describe AI governance in conceptual terms while providing less evidence on operational controls, accountability pathways, monitoring, and remediation. It states that 40% report board- or committee-level oversight on AI, and 12.4% report having a policy to ensure a human oversees AI systems.

At the same time, the publication says 72% of companies do not report conducting any AI-related impact assessment. Of those that do, 11% report environmental impact assessments and 7% report human rights impact assessments. These findings are presented visually in the key statistics on page 10.

Regarding labour impacts, the report says companies do not provide adequate protection for workers as AI reshapes jobs. It states that while 31% of companies claim to have AI training programmes, only 12% offer structured training with comprehensive coverage. It also argues that effective worker protection requires stronger evidence of reskilling, retraining, redeployment, transition support, and access to remedy where AI affects workers’ rights.

Why does it matter?

The report further states that ethical issues, including human rights and environmental impacts, are being sidelined in AI governance and risk management, while transparency regarding training data, third-party systems, and user rights remains uneven. It presents the AI Company Data Initiative as a tool to help companies assess their governance practices against UNESCO’s Recommendation on the Ethics of AI and to give investors more comparable information on how AI is governed in practice.


EU approves Italian State aid to support graphene-based photonic chip development

The European Commission has approved a €211 million Italian State aid measure to support the development of photonic chips based on graphene technology.

The funding will be provided to the Italian SME CamGraPhIC, with project activities taking place in Pisa and Bergamo.

The initiative focuses on optical transceivers that transmit data using light rather than electrons. Using graphene instead of silicon is expected to enhance performance and energy efficiency across sectors such as telecommunications, automotive, aerospace and defence.

The Commission assessed the measure under the EU State aid rules and concluded that the funding is necessary, proportionate and aligned with research and innovation objectives. It also found that the project would not proceed without public support, demonstrating an incentive effect.

The decision reflects broader EU efforts to strengthen semiconductor capabilities and support advanced digital technologies through targeted public investment and regulatory oversight.


Microsoft outlines approach to scaling AI across organisational systems

Microsoft has described a shift from early AI adoption towards what it terms ‘frontier transformation’, in which AI is integrated into core organisational processes.

The approach reflects how AI is increasingly embedded in everyday workflows rather than used in isolated pilots.

According to Microsoft, scaling AI requires moving beyond experimentation and establishing structured operating models. This includes addressing practical challenges such as data integration, system reliability, and alignment with organisational objectives.

The framework also highlights the importance of governance and execution, with AI systems expected to operate under defined standards, much like other critical infrastructure. That involves coordination across platforms, internal processes, and external partners.

Why does it matter?

Frontier transformation illustrates a broader transition in how organisations approach AI deployment, focusing on long-term integration, operational consistency, and scalable implementation across different sectors.


China pushes blockchain adoption in banking sector

The State Administration of Taxation and the National Financial Regulatory Administration of China have called on banks to integrate blockchain and privacy computing into lending systems, aiming to improve transparency and expand access to financing for small businesses.

The initiative focuses on upgrading the ‘bank-tax interaction’ model by strengthening data sharing between financial institutions, tax authorities, and enterprises.

Authorities emphasise the need to standardise data exchange and reduce information asymmetry, which has long limited credit access for smaller firms. Improved credit models and faster approvals aim to support compliant businesses while boosting financial efficiency.

The directive aligns with China’s broader strategy to build a national data infrastructure supported by blockchain technology. A roadmap led by the National Development and Reform Commission targets nationwide implementation by 2029, with projected annual investment reaching 400 billion yuan.

Despite strict restrictions on cryptocurrency trading, China continues to promote blockchain as a core technology for economic development. Earlier initiatives, including blockchain invoicing, show a steady push to integrate the technology into real-world finance and administration.

Strengthening data sharing and transparency in lending could improve access to finance for small businesses, which remain a key driver of economic growth.

Wider blockchain integration may also support more efficient financial systems, reinforce trust in institutional processes, and advance China’s long-term digital infrastructure strategy.


UNCTAD report notes global trade growth alongside increasing fragmentation risks

The United Nations Conference on Trade and Development (UNCTAD) reports that global trade expanded by $2.5 trillion in 2025, reaching a total value of $35 trillion, driven by continued growth in goods and services.

Despite this expansion, the outlook has become more uncertain due to rising geopolitical tensions and disruptions to key shipping routes.

Conflicts in the Middle East and instability in critical maritime corridors are increasing energy and transport costs, placing additional pressure on developing economies. Higher import expenses and tighter financial conditions are limiting fiscal flexibility and constraining growth prospects in vulnerable regions.

While trade growth remains broad-based, services expansion has slowed, and much of the recent increase is linked to higher prices rather than volume gains. Emerging markets in East Asia and Africa remain central, supported by strong South–South trade and shifting supply chains.

The report notes that ongoing fragmentation in global trade, including US–China decoupling, is reshaping commercial flows and creating new ‘connector economies’. Although this shift offers some value chain opportunities, inflation, debt pressures, and protectionism are expected to weigh on global trade growth in 2026.

Rising fragmentation and uneven growth highlight widening gaps in how countries benefit from globalisation, with developing economies most exposed to cost shocks and financial constraints.

Shifting global trade will shape investment flows, development prospects, and economic resilience, increasing the need for coordinated policy responses.


Kazakhstan Machinery Forum examines technology policy, industrial development and energy strategy

Representatives from Samruk-Energy JSC took part in the 13th Kazakhstan Machinery Forum, according to the company. The event brought together government and business figures to discuss the future of the machinery and manufacturing sectors.

During a sector session, Managing Director Galymbek Autalipov outlined plans to adopt and localise clean coal technologies. These are described as a strategic priority to balance energy security with environmental commitments.

Company representatives also joined discussions on procurement and industrial policy under Samruk-Kazyna JSC. Talks focused on import substitution, technological modernisation and increasing domestic value in supply chains.

The forum serves as a platform for shaping long-term industrial strategy through cooperation between state bodies and businesses, including the development of manufacturing capacity and modern technologies in Kazakhstan.


Transparency push for online advertising systems

Researchers from the University of California and Iowa have warned that structural weaknesses in the digital advertising ecosystem continue to expose advertisers to hidden risks and fraud. The study highlights how complexity and limited transparency enable manipulation across the supply chain.

A key issue identified is ‘dark pooling’, in which lower-quality advertising inventory is bundled with premium placements, obscuring its true value. The practice can mislead buyers and distort pricing across the market.
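A hedged numerical sketch (all figures invented, not drawn from the study) shows why bundling obscures value: mixing cheap impressions into a pool sold at a premium label dilutes the average quality the buyer actually receives, while the advertised price stays at the premium rate.

```python
# Hypothetical illustration of 'dark pooling': low-quality ad inventory
# blended into a pool marketed as premium. All figures are invented.

premium = {"impressions": 1_000, "true_cpm": 10.0}  # genuine premium inventory
low_q   = {"impressions": 4_000, "true_cpm": 1.0}   # low-quality filler

total_impressions = premium["impressions"] + low_q["impressions"]

# True blended value per thousand impressions across the whole pool.
blended_value = (
    premium["impressions"] * premium["true_cpm"]
    + low_q["impressions"] * low_q["true_cpm"]
) / total_impressions

advertised_cpm = 10.0  # the pool is still marketed at the premium rate

print(f"Advertised CPM:   ${advertised_cpm:.2f}")
print(f"True blended CPM: ${blended_value:.2f}")
```

In this toy case, a buyer paying the advertised premium rate receives inventory whose blended value is a fraction of that price, which is the pricing distortion the researchers describe.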

The authors argue that current safeguards fail to address these vulnerabilities effectively, as responsibilities are fragmented among multiple stakeholders. This lack of coordination allows systemic issues to persist.

To address the problem, the researchers propose a shared vulnerability notification framework involving advertisers, publishers and intermediaries. The study suggests such collaboration could strengthen accountability and improve trust in digital advertising markets in the US.


Microsoft markets Copilot as a productivity boost but warns it is ‘for entertainment purposes only’

Microsoft has spent the past year pushing Copilot as a mainstream productivity tool, baking it into Windows 11 and promoting new hardware such as Copilot+ PCs, yet its own legal language urges caution. In Microsoft’s Copilot Terms of Use, updated in October last year, the company states Copilot is ‘for entertainment purposes only’, may ‘make mistakes’, and ‘may not work as intended’.

The terms warn users not to rely on Copilot for important advice and to ‘use Copilot at your own risk’, a caveat that sits uneasily alongside the product’s business-focused marketing.

The Tom’s Hardware article argues that Microsoft is not unique in issuing such warnings; similar disclaimers are common across the generative AI industry. It points to xAI’s guidance that AI is ‘probabilistic in nature’ and may produce ‘hallucinations’, generate offensive or objectionable content, or fail to reflect real people, places or facts.

While these limitations are well known to those familiar with large language models, the piece notes that many users still treat AI output as authoritative, even in professional settings where scepticism should be standard.

To underline the risks of overreliance, the text cites reports of Amazon-related incidents allegedly linked to ‘Gen-AI assisted changes’. It says some AWS outages were reportedly caused after engineers let an AI coding bot address an issue without sufficient oversight, and that Amazon’s website experienced ‘high blast radius’ problems that required senior engineers to step in. These examples are used to illustrate how AI-generated errors can propagate quickly in complex systems when humans fail to verify the output.

Why does it matter?

Overall, the article acknowledges that generative AI can boost productivity, but stresses that it remains a tool with no accountability for mistakes, making verification essential. It warns that automation bias, the tendency to trust machine outputs over contradictory evidence, can be intensified by AI systems that produce plausible-sounding answers that pass casual inspection.

While such disclaimers help companies limit legal liability, the piece suggests aggressive marketing of AI as a productivity ‘hack’ may downplay real-world risks, particularly as firms seek returns on the billions invested in AI hardware and talent.


Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as writers’ room and revision-heavy processes.

The article uses James Patterson as an example of industrial-scale collaboration in publishing, describing how he supplies collaborators with outlines and treatments and oversees many projects at once, an approach likened to a ‘novel factory’ that some argue distances him from ‘literary fiction’, yet may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on ‘Coral Hart’, a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although not a bestseller, she reportedly earns ‘six figures’ and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers’ rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed despite platforms such as Amazon asking for it.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.
