Uzbekistan sets principles for responsible AI

A new ethical framework for the development and use of AI technologies has been adopted by Uzbekistan.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework forms part of presidential decrees and resolutions aimed at advancing AI innovation across the country. It also emphasises legality, transparency, fairness, accountability, and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space, and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands global push against online scam networks

US tech giant Meta has outlined an expanded strategy to limit online fraud by combining technical defences with stronger collaboration across industry and law enforcement.

The company described scams as a threat to user safety and as a direct risk to the credibility of its advertising ecosystem, which remains central to its business model.

Executives emphasised that large criminal networks continue to evolve and that a faster, coordinated response is essential instead of fragmented efforts.

Meta presented recent progress, noting that more than 134 million scam advertisements were removed in 2025 and that reports about misleading advertising fell significantly in the last fifteen months.

It also provided details about disrupted criminal networks that operated across Facebook, Instagram and WhatsApp.

Facial recognition tools played a crucial role in detecting scam content that misused images of public figures, leading to a higher volume of removals during testing before such ads could circulate more widely.

Cooperation with law enforcement remains central to Meta’s approach. The company supported investigations that targeted criminal centres in Myanmar and illegal online gambling operations connected to transfers through anonymous accounts.

Information shared with financial institutions and partners in the Global Signal Exchange contributed to the removal of thousands of accounts. At the same time, legal action continued against those who used impersonation or bulk messaging to deceive users.

Meta stated that it backs bipartisan legislation designed to support a national response to online fraud. The company argued that new laws are necessary to weaken transnational groups behind large-scale scam operations and to protect users more effectively.

A broader aim is to strengthen trust across Meta’s services, rather than allowing criminal activity to undermine user confidence and advertiser investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU states strike deal on chat-scanning law

EU member states have finally reached a unified stance on a long-debated law aimed at tackling online child sexual abuse, ending years of stalemate driven by fierce privacy concerns. Governments agreed to drop the most controversial element of the original proposal, mandatory scanning of private messages, after repeated blockages and public opposition from privacy advocates who warned it would amount to mass surveillance.

The move comes as reports of child abuse material continue to surge, with global hotlines processing nearly 2.5 million suspected images last year.

The compromise, pushed forward under Denmark’s Council presidency, maintains the option for tech companies to scan content voluntarily while affirming that end-to-end encryption must not be compromised. Supporters argue that the agreement closes a regulatory gap that would otherwise open when temporary EU rules allowing voluntary detection expire in 2026.

However, children’s rights groups argue that the Council has not gone far enough, saying that simply preserving the current system will not adequately address the scale of the problem.

Privacy campaigners remain alarmed. Critics fear that framing voluntary scanning as a risk-reduction measure could encourage platforms to expand surveillance of user communications to shield themselves from liability.

Former MEP Patrick Breyer, a prominent voice in the campaign against so-called ‘chat control,’ warned that the compromise could still lead to widespread monitoring and possibly age-verification requirements that limit access to digital services.

With the Council and European Parliament now holding formal positions, negotiations will finally begin on the regulation’s final shape. But with political divisions still deep and the clock ticking toward the 2026 deadline, it may be months before the EU determines how far it is willing to go in regulating the detection of child sexual abuse material, and at what cost to users’ privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Irish regulator opens investigations into TikTok and LinkedIn

Regulators in Ireland have opened investigations into TikTok and LinkedIn under the EU Digital Services Act.

Coimisiún na Meán’s Investigations Team believes there may be shortcomings in how both platforms handle reports of suspected illegal material. Concerns emerged during an exhaustive review of Article 16 compliance that began last year and focused on the availability of reporting tools.

The review highlighted the potential for interface designs that could confuse users, particularly when choosing between reporting illegal content and content that merely violates platform rules.

The investigation will examine whether reporting tools are easy to access, user-friendly and capable of supporting anonymous reporting of suspected child sexual abuse material, as required under Article 16(2)(c).

It will also assess whether platform design may discourage users from reporting material as illegal under Article 25.

Coimisiún na Meán stated that several other providers made changes to their reporting systems following regulatory engagement. Those changes are being reviewed for effectiveness.

The regulator emphasised that platforms must avoid practices that could mislead users and must provide reliable reporting mechanisms instead of diverting people toward less protective options.

These investigations will proceed under the Broadcasting Act of Ireland. If either platform is found to be in breach of the DSA, the regulator can impose administrative penalties that may reach six percent of global turnover.

Coimisiún na Meán noted that cooperation remains essential and that further action may be necessary if additional concerns about DSA compliance arise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum money meets Bitcoin: Building unforgeable digital currency

Quantum money might sound like science fiction, yet it is rapidly emerging as one of the most compelling frontiers in modern digital finance. Initially a theoretical concept, it was far ahead of the technology of its time, making practical implementation impossible. Today, thanks to breakthroughs in quantum computing and quantum communication, scientists are reviving the idea, investigating how the principles of quantum physics could finally enable unforgeable quantum digital money. 

Comparisons between blockchain and quantum money are frequent and, on the surface, appear logical, yet can these two visions of new-generation cash genuinely be measured by the same yardstick? 

Origins of quantum money 

Quantum money was first proposed by physicist Stephen Wiesner in the late 1960s. Wiesner envisioned a system in which each banknote would carry quantum particles encoded in specific states, known only to the issuing bank, making the notes inherently secure. 

Due to the peculiarities of quantum mechanics, these quantum states could not be copied, offering a level of security fundamentally impossible with classical systems. At the time, however, quantum technologies were purely theoretical, and devices capable of creating, storing, and accurately measuring delicate quantum states simply did not exist. 

For decades, Wiesner’s idea remained a fascinating thought experiment. Today, the rise of functional quantum computers, advanced photonic systems, and reliable quantum communication networks is breathing new life into the concept, allowing researchers to explore practical applications of quantum money in ways that were once unimaginable.

A new battle for the digital throne is emerging as quantum money shifts from theory to possibility, challenging whether Bitcoin’s decentralised strength can hold its ground in a future shaped by quantum technology.

The no-cloning theorem: The physics that makes quantum money impossible to forge

At the heart of quantum money lies the no-cloning theorem, a cornerstone of quantum mechanics. The principle establishes that it is physically impossible to create an exact copy of an unknown quantum state. Any attempt to measure a quantum state inevitably alters it, meaning that copying or scanning a quantum banknote destroys the very information that ensures its authenticity. 

This unique property makes quantum money exceptionally secure: unlike blockchain, which relies on cryptographic algorithms and distributed consensus, quantum money derives its protection directly from the laws of physics. In theory, a quantum banknote cannot be counterfeited, even by an attacker with unlimited computing resources, which is why quantum money is considered one of the most promising approaches to unforgeable digital currency.
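
For readers who want the result behind that claim, the no-cloning theorem can be stated in its standard textbook form. The snippet below is a general quantum-mechanics statement added for illustration, not quoted from any specific quantum money proposal.

```latex
% Standard statement of the no-cloning theorem (textbook form, added for illustration).
% There is no unitary U and fixed ancilla state |s> that copies every unknown state |psi>:
\[
  \nexists\, U \ \text{unitary}:\quad
  U\bigl(\lvert\psi\rangle \otimes \lvert s\rangle\bigr)
  = \lvert\psi\rangle \otimes \lvert\psi\rangle
  \quad\text{for all } \lvert\psi\rangle .
\]
% Proof sketch: if U cloned two states, unitarity would preserve inner products, giving
% \langle\psi\vert\phi\rangle = \langle\psi\vert\phi\rangle^{2}, so the overlap must be 0 or 1.
% Cloning can therefore only succeed for identical or orthogonal (already known) states,
% never for an arbitrary unknown state, which is exactly what quantum money exploits.
```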

How quantum money works in theory

Quantum money schemes are typically divided into two main types: private and public. 

In private quantum money systems, a central authority, such as a bank, creates quantum banknotes and remains the only entity capable of verifying them. Each note carries a classical serial number alongside a set of quantum states known solely to the issuer. The primary advantage of this approach is its absolute immunity to counterfeiting, as no one outside the issuing institution can replicate the banknote. However, such systems are fully centralised and rely entirely on the security and infrastructure of the issuing bank, which inherently limits scalability and accessibility.
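
As an illustration only, the logic of such a scheme (essentially Wiesner's original proposal) can be mimicked with a short classical simulation. The code below is a hedged sketch, not real quantum hardware or any bank's actual protocol: each 'qubit' is recorded as a preparation basis and a bit, only the issuer knows the bases, and a forger who measures in guessed bases disturbs the note and fails verification with overwhelming probability.

```python
import random

def issue_note(n_qubits=64):
    """The bank issues a note: a public serial number plus a secret record of
    (basis, bit) pairs. The physical note would carry qubits prepared in those states."""
    serial = random.getrandbits(32)
    secret = [(random.choice("ZX"), random.randint(0, 1)) for _ in range(n_qubits)]
    return serial, secret

def measure(state, basis):
    """Classical stand-in for a quantum measurement: measuring in the preparation
    basis returns the encoded bit; measuring in the other basis gives a random bit."""
    prep_basis, bit = state
    return bit if basis == prep_basis else random.randint(0, 1)

def counterfeit(note):
    """A forger who does not know the secret bases measures each qubit in a guessed
    basis and re-prepares what was observed, inevitably disturbing some states."""
    forged = []
    for state in note:
        guess = random.choice("ZX")
        forged.append((guess, measure(state, guess)))
    return forged

def bank_verifies(secret, note):
    """Only the issuer can verify: it measures every qubit in the secret basis
    and checks the outcome against its private records."""
    return all(measure(state, basis) == bit
               for (basis, bit), state in zip(secret, note))

serial, secret = issue_note()
genuine = list(secret)                  # the note exactly as issued by the bank
forged = counterfeit(genuine)
print("serial:", serial)
print("genuine note verifies:", bank_verifies(secret, genuine))  # always True
print("forged note verifies:", bank_verifies(secret, forged))    # almost always False:
# each forged qubit passes with probability 3/4, so 64 qubits survive with (3/4)**64 ≈ 1e-8
```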

Public quantum money, by contrast, pursues a more ambitious goal: allowing anyone to verify a quantum banknote without consulting a central authority. Developing this level of decentralisation has proven exceptionally difficult. Numerous proposed schemes have been broken by researchers who have managed to extract information without destroying the quantum states. Despite these challenges, public quantum money remains a major focus of quantum cryptography research, with scientists actively pursuing secure and scalable methods for open verification. 

Beyond theoretical appeal, quantum money faces substantial practical hurdles. Quantum states are inherently fragile and susceptible to decoherence, meaning they can lose their information when interacting with the surrounding environment. 

Maintaining stable quantum states demands highly specialised and costly equipment, including photonic processors, quantum memory modules, and sophisticated quantum error-correction systems. Any error or loss could render a quantum banknote completely worthless, and no reliable method currently exists to store these states over long periods. In essence, the concept of quantum money is groundbreaking, yet real-world implementation requires technological advances that are not yet mature enough for mass adoption. 

Bitcoin solves the duplication problem differently

While quantum money relies on the laws of physics to prevent counterfeiting, Bitcoin tackles the duplication problem through cryptography and distributed consensus. Each transaction is verified across thousands of nodes, and SHA-256 hash functions secure the blockchain against double spending without the need for a central authority. 
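
To make the hashing step concrete, here is a minimal toy sketch in Python. It illustrates only the double SHA-256 hashing and proof-of-work idea; real block headers are 80-byte binary structures and real network difficulty is enormously stricter than the value assumed here.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes block headers twice with SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def toy_mine(header_prefix: bytes, difficulty_bits: int = 16):
    """Toy proof of work: find a nonce whose double-SHA-256 digest starts with
    `difficulty_bits` zero bits. Real difficulty is vastly higher."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = double_sha256(header_prefix + nonce.to_bytes(8, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = toy_mine(b"toy-header|prev-hash|merkle-root|")
print(f"nonce={nonce}")
print(f"hash={digest}")   # begins with at least four hex zeros (16 leading zero bits)
```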

Unlike elliptic curve cryptography, which could eventually be vulnerable to large-scale quantum attacks, SHA-256 has proven remarkably resilient; even quantum algorithms such as Grover’s offer only a marginal advantage, reducing the search space from 2²⁵⁶ to 2¹²⁸, still far beyond any realistic brute-force attempt. 
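
Those figures follow from Grover's quadratic speed-up: searching a space of 2²⁵⁶ values still requires on the order of √(2²⁵⁶) = 2¹²⁸ quantum evaluations. A back-of-the-envelope check is below; the evaluation rate is an assumed figure chosen purely to convey scale.

```python
# Back-of-the-envelope check of the Grover speed-up quoted above.
# The evaluation rate is an assumed figure, used only to show the scale involved.
classical_space = 2 ** 256            # brute-force search space for a SHA-256 pre-image
grover_queries = 2 ** 128             # Grover's quadratic speed-up: sqrt(2**256)

assumed_rate = 10 ** 18               # hypothetical 10^18 evaluations per second
seconds_per_year = 60 * 60 * 24 * 365

classical_years = classical_space / (assumed_rate * seconds_per_year)
grover_years = grover_queries / (assumed_rate * seconds_per_year)
print(f"classical search: ≈ {classical_years:.3e} years")   # ≈ 3.7e51 years
print(f"Grover's search:  ≈ {grover_years:.3e} years")      # ≈ 1.1e13 years, still far
# beyond the age of the universe (~1.4e10 years), so both remain infeasible in practice
```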

Bitcoin’s security does not hinge on unbreakable mathematics alone but on a combination of decentralisation, network verification, and robust cryptographic design. Many experts therefore consider Bitcoin effectively quantum-proof, with most of the dramatic threats predicted from quantum computers unlikely to materialise in practice. 

Software-based and globally accessible, Bitcoin operates independently of specialised hardware, allowing users to send, receive, and verify value anywhere in the world without the fragility and complexity inherent in quantum systems. Furthermore, the network can evolve to adopt post-quantum cryptographic algorithms, ensuring long-term resilience, making Bitcoin arguably the most battle-hardened digital financial instrument in existence. 

Could quantum money be a threat to Bitcoin?

In reality, quantum money and Bitcoin address entirely different challenges, meaning the former is unlikely to replace the latter. Bitcoin operates as a global, decentralised monetary network with established economic rules and governance, while quantum money represents a technological approach to issuing physically unforgeable tokens. Bitcoin is not designed to be physically unclonable; its strength lies in verifiability, decentralisation, and network-wide trust.

However, SHA-256, the hashing algorithm that underpins Bitcoin mining and block creation, remains highly resistant to quantum threats. Quantum computers achieve only a quadratic speed-up through Grover’s algorithm, which is insufficient to break SHA-256 in practical terms. Bitcoin also retains the ability to adopt post-quantum cryptographic standards as they mature, whereas quantum money is limited by rigid physical constraints that are far harder to update.

Quantum money also remains too fragile, complex, and costly for widespread use. Its realistic applications are limited to state institutions, military networks, or highly secure financial environments rather than everyday payments. Bitcoin, by contrast, already benefits from extensive global infrastructure, strong market adoption, and deep liquidity, making it far more practical for daily transactions and long-term digital value transfer. 

Where quantum money and blockchain could coexist

Although fundamentally different, quantum money and blockchain technologies have the potential to complement one another in meaningful ways. Quantum key distribution could strengthen the security of blockchain networks by protecting communication channels from advanced attacks, while quantum-generated randomness may enhance cryptographic protocols used in decentralised systems. 

Researchers have also explored the idea of using ‘quantum tokens’ to provide an additional privacy layer within specialised blockchain applications. Both technologies ultimately aim to deliver secure and verifiable forms of digital value. Their coexistence may offer the most resilient future framework for digital finance, combining the physics-based protection of quantum money with the decentralisation, transparency, and global reach of blockchain technology. 

Quantum physics meets blockchain for the future of secure currency

Quantum money remains a remarkable concept, originally decades ahead of its time, and now revived by advances in quantum computing and quantum communication. Although it promises theoretically unforgeable digital currency, its fragility, technical complexity, and demanding infrastructure make it impractical for large-scale use. 

Bitcoin, by contrast, stands as the most resilient and widely adopted model of decentralised digital money, supported by a mature global network and robust cryptographic foundations. 

Quantum money and Bitcoin stand as twin engines of a new digital finance era, where quantum physics is reshaping value creation, powering blockchain innovation, and driving next-generation fintech solutions for secure and resilient digital currency. 

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Dublin startup raises US$2.5m to protect AI data with encryption

Mirror Security, founded at University College Dublin, has announced a US$2.5 million (approx. €2.15 million) pre-seed funding round to develop what it describes as the next generation of secure AI infrastructure.

The startup’s core product, VectaX, is a fully homomorphic encryption (FHE) engine designed for AI workloads. This technology allows AI systems to process, train or infer on data that remains encrypted, meaning sensitive or proprietary data never has to be exposed in plaintext, even during computation.
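
To illustrate the underlying idea, the sketch below uses a toy additively homomorphic scheme (textbook Paillier with deliberately tiny, insecure parameters). It is not fully homomorphic encryption and is in no way a representation of VectaX; it only shows the core property that a server can combine ciphertexts, and therefore compute on data, without ever seeing the plaintexts.

```python
import math
import random

# Toy Paillier keypair with deliberately tiny primes (completely insecure, illustration only).
p, q = 10007, 10009
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)          # Carmichael's lambda(n)
mu = pow(lam, -1, n)                  # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2, for a random r coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    x = pow(c, lam, n_sq)
    return ((x - 1) // n) * mu % n

# Additively homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a server can compute on encrypted values without ever decrypting them.
a, b = 1234, 5678
ciphertext_sum = (encrypt(a) * encrypt(b)) % n_sq
assert decrypt(ciphertext_sum) == a + b
print("decrypted sum:", decrypt(ciphertext_sum))   # 6912
```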

Backed by leading deep-tech investors such as Sure Valley Ventures (SVV) and Atlantic Bridge, Mirror Security plans to scale its engineering and AI-security teams across Ireland, the US and India, accelerate development of encrypted inferencing and secure fine-tuning, and target enterprise markets in the US.

As organisations increasingly adopt AI, often handling sensitive data, Mirror Security argues that conventional security measures (like policy-based controls) fall short. Its encryption-native approach aims to provide cryptographic guarantees rather than trust-based assurances, positioning the company as a ‘trust layer’ for the emerging AI economy.

The Irish startup also announced a strategic partnership with Inception AI (a subsidiary of G42) to deploy its full AI security stack across enterprise and government systems. Mirror has also formed collaborations with major technology players including Intel, MongoDB, and others.

From a digital policy and global technology governance perspective, this funding milestone is significant. It underlines how the increasing deployment of AI, especially in enterprise and government contexts, is creating demand for robust, privacy-preserving infrastructure. Mirror Security’s model offers a potential blueprint for how to reconcile AI’s power with data confidentiality, compliance, and sovereignty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Italy secures new EU support for growth and reform

The European Commission has endorsed Italy’s latest request for funding under the Recovery and Resilience Facility, marking an important step in the country’s economic modernisation.

The approval covers 12.8 billion euros, combining grants and loans, and supports efforts to strengthen competitiveness and long-term growth across key sectors of national life.

Italy completed 32 milestones and targets connected to the eighth instalment, enabling progress in public administration, procurement, employment, education, research, tourism, renewable energy and the circular economy.

Thousands of schools have gained new resources to improve multilingual learning and build stronger skills in science, technology, engineering, arts and mathematics.

Many primary and secondary schools have also secured modern digital tools to enhance teaching quality instead of relying on outdated systems.

Health research forms another major part of the package. Projects focused on rare diseases, cancer and other high-impact conditions have gained fresh funding to support scientific work and improve treatment pathways.

These measures contribute to a broader transformation programme financed through 194.4 billion euros, representing one of the largest recovery plans in the EU.

A four-week review by the Economic and Financial Committee will follow before the payment can be released. Once completed, Italy’s total receipts will exceed 153 billion euros, covering more than 70 percent of its full Recovery and Resilience Facility allocation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poetic prompts reveal gaps in AI safety, according to study

Researchers in Italy have found that poetic language can weaken the safety barriers used by many leading AI chatbots.

The work, carried out by Icaro Lab, part of DexAI, examined whether poems containing harmful requests could provoke unsafe answers from widely deployed models across the industry. The team wrote twenty poems in English and Italian, each ending with explicit instructions that AI systems are trained to block.

The researchers tested the poems on twenty-five models developed by nine major companies. Poetic prompts produced unsafe responses in more than half of the tests.

Some models appeared more resilient than others. OpenAI’s GPT-5 Nano avoided unsafe replies in every case, while Google’s Gemini 2.5 Pro generated harmful content in all tests. Two Meta systems produced unsafe responses to twenty percent of the poems.

Researchers also argue that poetic structure disrupts the predictive patterns large language models rely on to filter harmful material. The unconventional rhythm and metaphor common in poetry make the underlying safety mechanisms less reliable.

Additionally, the team warned that adversarial poetry can be used by anyone, which raises concerns about how easily safety systems may be manipulated in everyday use.

Before releasing the study, the researchers contacted all companies involved and shared the full dataset with them.

Anthropic confirmed receipt and stated that it was reviewing the findings. The work has prompted debate over how AI systems can be strengthened as creative language becomes an increasingly common method for attempting to bypass safety controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europol backs major takedown of Cryptomixer in Switzerland

Europol has supported a coordinated action week in Zurich, where Swiss and German authorities dismantled the illegal cryptocurrency mixing service Cryptomixer.

Three servers were seized in Switzerland, together with the cryptomixer.io domain, leading to the confiscation of more than €25 million in Bitcoin and over 12 terabytes of operational data.

Cryptomixer operated on both the clear web and the dark web, enabling cybercriminals to conceal the origins of illicit funds. The platform has mixed over €1.3 billion in Bitcoin since 2016, aiding ransomware groups, dark web markets, and criminals involved in drug trafficking, weapons trafficking, and credit card fraud.

Its randomised pooling system effectively blocked the traceability of funds across the blockchain.

Mixing services, such as Cryptomixer, are used to anonymise illegal funds before moving them to exchanges or converting them into other cryptocurrencies or fiat. The takedown halts further laundering and disrupts a key tool used by organised cybercrime networks.

Europol facilitated information exchange through the Joint Cybercrime Action Taskforce and coordinated operational meetings throughout the investigation. The agency deployed cybercrime specialists on the final day to provide on-site support and forensics.

Earlier efforts included support for the 2023 takedown of Chipmixer, then the largest mixer of its kind.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

South Korea retailer admits worst-ever data leak

On 30 November 2025, Coupang disclosed a major data breach that exposed 33.7 million customer accounts. The leaked data includes names, email addresses, phone numbers, shipping addresses and some order history but excludes payment or login credentials.

The company said it first detected unauthorised access on 18 November. Subsequent investigations revealed that attacks likely began on 24 June through overseas servers and may involve a former employee’s still-active authentication key.

South Korean authorities launched an emergency probe to determine if Coupang violated data-protection laws. The government warned customers to stay alert to phishing and fraud attempts using the leaked information.

Cybersecurity experts say the breach may be one of the worst personal-data leaks in Korean history. Critics claim the incident underlines deep structural weaknesses in corporate cybersecurity practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot