Crypto crackdown intensifies in Kazakhstan over illegal exchanges

Kazakhstan’s financial regulator has warned that several major cryptocurrency exchanges are operating without the licences required under the country’s current digital asset framework, reinforcing its strict authorisation regime.

The Astana Financial Services Authority identified prominent platforms, including HTX, Bitget, OKX, and MEXC, as operating without the necessary permits. Under existing rules, only entities licensed within the Astana International Financial Centre are allowed to provide regulated digital asset services.

Authorities stressed that international popularity does not exempt platforms from complying with local law. They also warned that unauthorised exchanges can expose users to financial losses, data breaches, and fraudulent schemes, and urged the public to verify platforms through the official register of licensed firms. AFSA’s website currently shows a regulated ecosystem with dozens of authorised entities across the AIFC framework.

The warning comes amid broader enforcement efforts as Kazakhstan tries to formalise its crypto sector while positioning itself as a regulated regional hub for digital assets. In parallel, law enforcement agencies have reported wider crackdowns on illegal crypto activity, including shadow exchanges and money-laundering networks.

Why does it matter?

Kazakhstan’s tightening enforcement shows a broader push to bring crypto activity into a more formal and supervised market structure. By restricting unlicensed platforms and steering users towards authorised entities, the authorities are trying to reduce exposure to financial crime, improve market transparency, and build credibility for Kazakhstan’s ambition to become a regulated regional digital asset hub.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Tax season phishing scams surge with fake government sites

Cybercriminal activity tends to intensify during tax-return season, as taxpayers face tighter deadlines and share sensitive financial information. A recent Kaspersky analysis highlights the growing use of fake tax authority websites, phishing emails, and malicious downloads designed to steal personal and banking data.

Attackers are impersonating official revenue services across multiple countries, creating convincing portals that mimic government branding and online tax services. Victims are often prompted to enter login credentials, payment details, or download files containing malware aimed at compromising devices or extracting sensitive information.

Crypto holders are also being targeted through fake compliance portals and fraudulent regulatory notices. These schemes try to trick users into revealing wallet recovery phrases or linking digital wallets, which can lead to full asset theft once access is granted.

AI adds another layer of risk. Kaspersky warns that users who upload tax documents or personal financial data to unverified AI platforms may expose confidential information to leakage, misuse, or further fraud. More broadly, AI is also making phishing and impersonation campaigns easier to scale and harder to detect.

Security experts recommend relying only on official tax channels, checking websites and email sources carefully, avoiding unsolicited downloads, and using secure storage and trusted protection tools when handling tax documents.

Why does it matter?

Tax-season phishing campaigns show how financial data is increasingly being treated as a high-value target for cybercrime. As tax systems, digital finance, crypto assets, and AI tools overlap more closely, a single successful scam can lead not only to immediate financial loss but also to identity theft, device compromise, and broader damage to trust in digital services.

Cyprus defence minister highlights role of AI and advanced technologies in defence

Cyprus Defence Minister Vasilis Palmas has said that AI and advanced technologies are transforming defence, requiring stronger domestic capabilities. His remarks were recently reported by the Cyprus Mail.

He highlighted the growing roles of AI, autonomous systems, cyberdefence and space technology, stressing the need to secure supply chains and meet the National Guard’s requirements.

Palmas said participation in the European defence innovation programmes is a strategic priority, supporting local technological development and integration into wider industry networks.

The country is advancing several funded projects, strengthening research infrastructure, and preparing a national defence industry plan. The comments were made at an event in Cyprus.

Meta faces EU Digital Services Act breach finding over under-13 access

The European Commission has preliminarily found Meta’s Instagram and Facebook in breach of the Digital Services Act over failures to adequately prevent children under 13 from accessing the platforms. The finding remains provisional and does not prejudge the outcome of the investigation.

According to the Commission, Meta’s existing measures do not effectively enforce its own minimum age requirement of 13. The preliminary findings say children below that age can still create accounts by entering false birth dates, while the company’s reporting tool for underage users is difficult to use and often does not result in effective follow-up.

The Commission also considers Meta’s risk assessment to be incomplete and arbitrary. It says the company failed to properly identify and assess the risks posed to children under 13 who access Instagram and Facebook, despite evidence from across the EU suggesting that a significant share of children under 13 use one or both services.

At this stage, the Commission says Meta must revise its risk assessment methodology and strengthen its measures to prevent, detect, and remove children under 13 from the platforms. It also says the company must better counter and mitigate the risks those children may face and ensure a high level of privacy, safety, and security for minors.

The preliminary findings form part of formal proceedings opened against Meta in May 2024 under the DSA. The Commission says the investigation has included analysis of Meta’s risk assessment reports, internal data and documents, and the company’s responses to requests for information, with support from civil society organisations and child protection experts across the EU.

If the Commission’s preliminary view is confirmed, it may adopt a non-compliance decision and impose a fine of up to 6% of the provider’s total worldwide annual turnover, as well as periodic penalty payments. Meta now has the opportunity to reply before any final decision is taken.

Henna Virkkunen, Executive Vice President for Tech Sovereignty, Security and Democracy, said Meta’s own terms and conditions already state that its services are not intended for children under 13, but that the company appears to be doing too little in practice to prevent them from gaining access.

Why does it matter?

The case matters because it goes to the heart of how the Digital Services Act is expected to work in practice: not only by requiring large platforms to set rules for child safety, but by obliging them to enforce those rules effectively. If the Commission’s preliminary view is confirmed, the Meta case could become an important benchmark for how the EU treats age assurance, risk assessments, and platform accountability in cases involving minors, with wider implications for other services that rely on self-declared age checks and weak reporting tools.

Europol’s IOCTA 2026 shows growing cyber threats across Europe’s digital landscape

Europol has released the 2026 Internet Organised Crime Threat Assessment, outlining the growing complexity of cybercrime across Europe. The report identifies encryption, proxies, and AI as key drivers behind the increasing scale and sophistication of digital threats.

According to Europol, criminal networks are adapting rapidly, using fragmented online environments and encrypted communication channels to evade detection. The report highlights cybercrime enablers, online fraud schemes, cyber-attacks, and online child sexual exploitation as central areas of concern in the EU threat landscape.

AI is playing a growing role in cyber-enabled crime by making fraud, deception, and other forms of online abuse more scalable and more convincing. Europol presents this as part of a wider shift in which digital threats are becoming more adaptive, more accessible, and harder to disrupt through traditional law enforcement methods alone.

The report also points to continued risks in cyber-attacks and online child sexual exploitation, underlining how technological change is affecting both financially motivated crime and harms involving vulnerable users. In that sense, IOCTA 2026 presents Europe’s cyber challenge not as a series of isolated incidents, but as a broader digital threat environment shaped by enabling technologies and rapidly evolving criminal tactics.

These developments reinforce the need for stronger operational cooperation, more advanced investigative capabilities, and continued adaptation across Europe’s law enforcement and regulatory systems. Europol’s overall message is that cybercrime is becoming more sophisticated, more industrialised, and more deeply embedded in the wider digital ecosystem.

IWF and Immaterialism expand efforts to combat child abuse content online

Immaterialism has joined the Internet Watch Foundation to strengthen efforts against the spread of child sexual abuse material online.

The partnership introduces IWF tools designed to accelerate the identification of harmful domains and enable faster intervention when abusive activity is detected. By adopting Registrar Alerts and related datasets, the registrar aims to improve its ability to respond to criminal content across the domains under its management.

The collaboration reflects a broader shift towards more proactive action at the domain infrastructure layer. By integrating intelligence tools into operational processes, the initiative aims to disrupt both the deliberate distribution of abusive material and the continued availability of domains linked to it.

The IWF says the volume of detected child sexual abuse material continues to rise, reinforcing the need for coordinated responses between safety organisations and private-sector actors. In that sense, the partnership points to closer alignment between domain service providers and specialist online safety groups working to strengthen protections for children online.

Cybersecurity reform in the EU advances through Spain consultation

Spain has launched a public consultation on the proposed EU Cybersecurity Act 2, inviting input from operators, citizens, and other interested parties on the need for, objectives of, and possible alternatives to the planned reform.

The consultation covers the European Commission’s proposal COM(2026) 11 final, which would repeal and replace Regulation (EU) 2019/881. The proposal is presented as a response to changes in the cyber threat landscape and to new strategic and regulatory challenges that have emerged since the current framework entered into force in 2019.

According to the consultation text, the reform is intended to address four main structural problems: a mismatch between the EU cybersecurity framework and current operational needs, limited practical use of the European Cybersecurity Certification Framework, fragmentation across the wider EU cybersecurity acquis, and growing cybersecurity risks in ICT supply chains.

Regarding ENISA, the proposal argues that the agency’s current functions and resources are insufficient to meet the needs of member states, the EU institutions, and market actors, particularly in policy implementation, operational cooperation, and crisis response. It also says the certification framework created under the current regulation has proved too slow and too complex in practice, with limited market uptake and governance mechanisms that have not delivered at the required speed.

The text also links the proposal to the growing complexity of compliance created by instruments such as NIS2, the Cyber Resilience Act, DORA, and the CER Directive. It says the new regulation would seek greater coherence and interoperability across those frameworks while reducing administrative burdens for companies and competent authorities.

A further objective is to create, for the first time, a horizontal EU-level framework for managing ICT supply-chain cybersecurity risks, including the identification of critical ICT assets, the possible designation of high-risk suppliers, and the adoption of proportionate measures to reduce strategic dependencies.

The proposal would also strengthen ENISA’s mandate and resources, reform and expand the certification framework, and support a more centralised incident-notification model linked to the wider Digital Omnibus simplification agenda.

Singapore urges organisations to strengthen AI governance frameworks

GovTech Singapore has argued that stronger AI governance in workplaces is essential for trust, compliance, risk management, and responsible innovation as AI adoption expands across business operations.

The agency leading Singapore’s Smart Nation and digital government efforts defines AI governance as a framework of policies, processes, and responsibilities guiding the ethical, transparent, and accountable development and deployment of AI systems within an organisation. The framework is linked to oversight across the AI lifecycle, from design through to ongoing monitoring.

Key elements identified by GovTech Singapore include transparency and explainability, fairness and bias mitigation, accountability and human oversight, and data privacy and security. Responsible AI is also linked to Singapore’s wider Smart Nation agenda, which the agency describes as a national priority.

The guidance recommends that organisations establish clear internal policies on AI use, build AI literacy across teams, carry out regular audits and assessments, and prioritise secure development practices. It also points to Singapore’s Model AI Governance Framework for Generative AI, developed by the AI Verify Foundation and the Infocomm Media Development Authority, as a reference point for businesses adapting governance frameworks to their own needs.

As part of its effort to support responsible AI use in the public sector, GovTech Singapore also highlights its AI Guardian suite. The suite includes Litmus, a testing platform using adversarial prompts to identify risks and vulnerabilities, and Sentinel, a guardrails service designed to detect and mitigate unsafe or irrelevant content before it affects AI models or users.

Overall, GovTech Singapore presents AI governance not only as a compliance issue, but as part of building a trusted digital environment in which AI can be deployed safely and effectively.

The digital asset framework in Australia enters a critical rollout period

Australia’s crypto sector is entering a critical transition period as digital asset reforms move from policy design into implementation. Two overlapping timelines now define the landscape: immediate AUSTRAC obligations covering AML/CTF and virtual asset services, and a broader ASIC Digital Assets Framework set to commence in 2027.

Key compliance measures are already active or imminent, including stronger AML/CTF obligations and the Travel Rule from July 2026. Existing financial services law also continues to apply, meaning firms must operate within current licensing requirements while preparing for the next regulatory phase.

Policy development is also converging around stablecoins and scam prevention. While stablecoins are being addressed through payments reform and related financial regulation, scam prevention falls within a broader national framework that spans multiple sectors. In that environment, crypto exchanges occupy a particularly important point of control, where funds move on-chain and where detection and intervention efforts can be most effective.

Authorities and market participants increasingly recognise that the next 18 months will be decisive in showing how these systems work in practice. Stronger alignment with international standards, including FATF expectations, is likely to shape Australia’s shift from regulatory planning to active supervision and enforcement.

Why does it matter?

Australia’s approach reflects a broader global shift from fragmented crypto oversight towards more integrated financial system regulation. As digital assets become more closely tied to payments, investment flows, and cross-border transfers, governments are increasingly treating crypto infrastructure as part of core financial plumbing rather than as a separate experimental market.

From a wider perspective, the real significance lies in systemic coordination. Combining AML enforcement, stablecoin oversight, and scam prevention will help determine whether illicit activity can be disrupted at the point of conversion rather than only after funds are lost. How effectively Australia connects these layers will shape not only domestic market integrity, but also its credibility within evolving international standards for digital finance governance.

Atos launches digital sovereignty offering for AI and regulated environments

Atos Group has launched an integrated digital sovereignty offering, designed to help organisations retain control and accountability over their data, infrastructure and digital operations.

The proposition combines capabilities across cloud, cybersecurity, AI and digital workplace services. It draws on Atos and Eviden expertise, including fully European data encryption products from Eviden.

Sovereignty is embedded by design across existing portfolios, with graduated levels tailored to each customer’s workloads. Open standards and interoperability sit at the core, aiming to reduce vendor lock-in.

The offering targets regulated sectors including the public sector, defence, financial services and healthcare. Atos Group digital sovereignty leader Michael Kollar said the initiative helps organisations ‘turn sovereignty into an operational capability.’

The launch complements the recent introduction of Atos Sovereign Agentic Studios, which focused on moving AI deployments into production under sovereign control.
