Quantum-safe security upgrades SIM and eSIM cards

Thales has successfully demonstrated a world-first capability that prepares 5G networks for the era of quantum computing. The test proved that SIM and eSIM cards can be remotely upgraded to support post-quantum cryptography, boosting security without disrupting services or user experience.

The breakthrough highlights the potential of crypto-agile networks to evolve securely as quantum threats emerge.

Replacing millions of devices is impractical, so Thales enables operators to deploy quantum-safe algorithms directly to existing devices. Remote upgrades preserve data and connectivity while instantly boosting security, keeping 5G networks resilient and trusted.

The demonstration reinforces Thales’ leadership in post-quantum cryptography, with dedicated research teams developing quantum-resistant methods and contributing to international standards, including NIST initiatives.
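
For readers unfamiliar with the algorithms involved: the NIST selections are post-quantum primitives such as ML-KEM (FIPS 203). The sketch below runs a key encapsulation handshake with the open-source liboqs-python bindings; it illustrates the class of algorithm, not Thales’ SIM implementation or delivery mechanism.

```python
# Minimal sketch of a NIST post-quantum KEM handshake using liboqs-python.
# Algorithm name availability depends on the liboqs build in use.
import oqs

KEM_ALG = "ML-KEM-768"  # NIST FIPS 203 parameter set

with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    public_key = receiver.generate_keypair()

    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        # The sender derives a ciphertext and a shared secret from the
        # receiver's public key alone.
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # The receiver recovers the same secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver  # both sides share a quantum-safe key
```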

Operators can now protect long-term investments, secure critical services, and prepare for the quantum computing era without operational disruption.

Thales’ approach offers a practical roadmap for telecoms to adopt quantum-safe security today, ensuring continuity, trust, and resilience across mobile networks as digital threats evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Central bank in Russia cracks down on crypto-enabled pyramid schemes

Russia’s central bank reports that two-thirds of pyramid scheme operators use crypto, with funds sent to over 4,600 fraudster-controlled wallets in 2025. Authorities identified 7,087 online scams last year, most of which used crypto and money mules to collect illicit funds.

Officials highlighted that these schemes typically operate without physical offices, engaging victims via social media, chat apps, and phone calls. Nearly 1,500 firms offered fake crypto investments, and 84% of scammers used cryptocurrency to raise funds, up from 77% in 2024.

The central bank has blocked 21,500 web pages and social media posts linked to fraudulent operators.

The government is fast-tracking regulations, warning that only licensed firms can offer investments to Russian retail investors. Authorities plan to continue monitoring sophisticated online schemes and enhance public awareness to combat crypto-enabled fraud.

Crypto markets remain active, with Bitcoin trading at $66,566, up 3.8%, and Ethereum at $1,990, up more than 6% in the past 24 hours.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Finance ministry in South Korea pledges reform for public crypto management

South Korea’s finance minister, Koo Yun-cheol, has pledged urgent reforms to how government agencies manage digital assets following high-profile failures in state custody.

Recent incidents revealed that police and tax authorities mishandled seized cryptocurrency, highlighting weaknesses in oversight and security practices. Authorities will review current management methods and implement measures to prevent future losses.

Operational risks around securing crypto in public institutions have become increasingly apparent. A notable case involved Seoul police in Gangnam losing access to 22 BTC, worth around $1.4 million, after failing to retain private keys and allowing a third-party firm to manage the assets.

Prosecutors are now investigating potential bribery linked to the case.

The government says it holds only digital assets acquired through lawful enforcement, such as seizures for unpaid taxes or criminal cases. The reforms aim to strengthen security, improve operational controls, and restore confidence in the public sector’s handling of crypto amid growing scrutiny.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Reddit surges as AI search drives a new era of online discovery

AI-generated search summaries are reshaping online discovery and pushing Reddit to the forefront of global information flows.

The rise of Google’s AI Overviews feature places curated AI summaries above traditional search results, encouraging users to rely on machine-generated syntheses instead of browsing lists of websites.

Reddit’s visibility surged after the platform agreed to data access partnerships with Google and OpenAI, enabling large language models to train on its vast archive of human conversations.

The platform’s user-generated discussions are increasingly prioritised because they provide commentary viewed as more neutral and less commercially influenced.

Research from Profound identifies Reddit as the most cited source across major AI platforms. Reddit’s rapid expansion reflects this shift.

It has overtaken TikTok in the UK, according to Ofcom, and now reports 116 million daily active users and more than one billion monthly users.

Communities built around niche interests, combined with voting systems and karma-driven credibility, create a structure that appeals to AI systems searching for grounded, human-authored content.

The platform’s design, centred on subreddits run by volunteer moderators, reinforces trust signals that large models can evaluate when generating AI Overviews.

As AI-powered search becomes the dominant interface for navigating the internet, Reddit’s role as a primary corpus for training and citation continues to expand, reshaping how people discover and verify information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC signals flexibility on COPPA age checks

The US FTC has issued a policy statement signalling greater flexibility in enforcing parts of the Children’s Online Privacy Protection Act when companies deploy age verification tools. The agency said it will not take enforcement action where personal data is collected solely for age verification purposes.

The FTC framed age assurance as a key safeguard to prevent children from accessing inappropriate content online in the US. Officials said the approach is intended to encourage broader adoption of age verification technologies by online services.

While offering flexibility, the US regulator stressed that organisations must maintain strong safeguards, including data deletion practices and clear notice to parents and children. The FTC also warned that personal data used beyond age verification could still trigger enforcement action under COPPA.

As with the previous 2023 amendments, legal experts cautioned that companies using age assurance may face additional compliance duties under state youth privacy laws, even as federal requirements evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Action-capable AI highlights new security challenges

AI agents are evolving from demos into autonomous tools, with OpenClaw emerging as a leading example. Unlike chatbots, these agents execute tasks directly, interacting with software and systems without constant human input.

The rise of action-capable AI introduces new security challenges. Agents can be manipulated through untrusted input or prompt injection. Persistent memory can also prolong mistakes or unintended behaviour.

The combination of access to sensitive data, external actions, and unverified content, sometimes called the ‘lethal trifecta’, amplifies risks, making careful configuration and oversight essential.

Self-hosted agents offer more control, while cloud-based versions simplify setup but shift security responsibility. Experts recommend running agents in isolated environments, limiting permissions, and requiring approval for sensitive actions.

These precautions reduce the chance of accidental or malicious harm while allowing users to experiment safely.
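
The approval requirement can be as simple as a gate in front of the agent’s tool calls. Below is a hedged sketch of that idea in Python; the tool names and wiring are hypothetical, not OpenClaw’s actual API.

```python
# Hedged sketch of one recommended precaution: requiring human approval
# before an agent executes sensitive actions.
from typing import Callable

SENSITIVE_TOOLS = {"send_email", "delete_file", "run_shell"}

def gated_call(name: str, args: dict, execute: Callable[[str, dict], str]) -> str:
    """Run a tool call, pausing for explicit user approval on sensitive tools."""
    if name in SENSITIVE_TOOLS:
        answer = input(f"Agent requests {name}({args}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            # The refusal is returned to the agent as the tool result, so it
            # can plan around the denial instead of silently failing.
            return "action denied by user"
    return execute(name, args)

# Example: a stub executor that just echoes the request.
result = gated_call("delete_file", {"path": "/tmp/demo"}, lambda n, a: f"{n} done")
print(result)
```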

OpenClaw illustrates the potential of AI agents to automate workflows, handle repetitive tasks, and act proactively rather than passively advising. These tools show the future of consumer AI, but broader adoption requires stronger safety measures and awareness of risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan’s digital transformation highlighted as UNESCO advances AI ethics

UNESCO used the Pakistan Governance Forum 2026 to highlight the need for a structured Ethical AI and Data Governance Framework as the country accelerates its digital transformation.

Federal leaders, provincial authorities and civil society convened to examine governance reforms, with UNESCO urging Pakistan to align its expanding digital public infrastructure with coherent standards that protect rights while enabling innovation.

Speaking at the Forum, Fuad Pashayev underlined that Pakistan’s reform priority should centre on UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted unanimously by all 193 Member States.

Anchoring national systems in transparency, accountability and meaningful human oversight was framed as essential for maintaining public trust as digital services reshape access to benefits and interactions between citizens and the state.

To support the shift, UNESCO promoted its AI Readiness Assessment Methodology (RAM), which is already deployed in more than 50 countries. The tool helps governments identify regulatory gaps, strengthen institutional coordination and design safeguards against discrimination and algorithmic bias.

UNESCO has already contributed to Pakistan’s draft National AI Policy, ensuring alignment with international ethical frameworks while accommodating national development needs.

Capacity building formed a major pillar of UNESCO’s engagement. In partnership with the University of Oxford, the organisation launched a global course on AI and Digital Transformation in Government in 2025, attracting more than 19,000 enrolments worldwide.

Pakistan leads participation globally, reflecting both the country’s momentum and growing demand for structured training.

UNESCO’s ongoing work aims to reinforce data governance, improve AI readiness and embed ethical safeguards across Pakistan’s digital transformation strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google API keys exposed after Gemini privilege expansion

Security researchers warn that exposed Google API keys in public client-side code could be used to authenticate with the Gemini AI assistant and access private data. The issue arose after developers enabled the Generative Language API in existing projects without updating key permissions.

Truffle Security scanned the November 2025 Common Crawl dataset and identified more than 2,800 live Google API keys publicly exposed in website source code. Some belonged to financial institutions, security firms, recruitment companies, and even Google’s own infrastructure.
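
Finding such keys is largely a pattern-matching exercise: Google API keys have a documented shape (an "AIza" prefix, 39 characters total). The sketch below illustrates the idea; it is not Truffle Security’s actual pipeline, and a match still has to be verified as live.

```python
# Sketch of a key-shape scan over fetched page source.
import re

GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def candidate_keys(page_source: str) -> set[str]:
    """Return every substring matching the Google API key shape."""
    return set(GOOGLE_KEY_RE.findall(page_source))

html = '<script>fetch("/api?key=AIza' + "B" * 35 + '")</script>'
print(candidate_keys(html))
```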

Before Gemini’s launch, Google Cloud API keys were widely treated as non-sensitive identifiers for services such as Maps, YouTube embeds, analytics, and Firebase. After Gemini was introduced, those same keys also acted as authentication credentials for the AI assistant, expanding their privileges.

Researchers demonstrated the risk by using one exposed key to query the Gemini API models endpoint and list available models. They warned that attackers could exploit such access to extract private data or generate substantial API charges on victim accounts.
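
The check the researchers describe can be reproduced with a single HTTP request against the public Generative Language REST endpoint, as in this minimal sketch (the key value is a placeholder):

```python
# List Gemini models using only a bare API key.
import requests

API_KEY = "AIza..."  # placeholder for a key found in client-side code

resp = requests.get(
    "https://generativelanguage.googleapis.com/v1beta/models",
    params={"key": API_KEY},  # API keys authenticate via a query parameter
    timeout=10,
)

if resp.ok:
    # HTTP 200 means the key is live and permitted to call the Gemini API.
    for model in resp.json().get("models", []):
        print(model["name"])
else:
    print(f"Key rejected or restricted: HTTP {resp.status_code}")
```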

Google was notified in November 2025 and later classified the issue as a single-service privilege escalation. The company said it has introduced controls to block leaked keys, limit new AI Studio keys to Gemini-only scope, and notify developers of detected exposure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Financial crime risks are reshaped by the rise of autonomous AI agents

Autonomous AI agents are transforming finance by executing transactions independently and speeding up workflows in digital assets and programmable finance. Software can manage wallets and move funds across blockchains in seconds, narrowing detection windows.

AI agents don’t create new crimes but increase speed and complexity, making accountability essential. Responsibility rests with developers, operators, and beneficiaries, with investigators tracing control, configuration, and economic benefit to determine liability.

Weak oversight or misconfigured rules can lead to significant compliance and enforcement consequences.

Investigations face new challenges as autonomous agents operate across multiple blockchains, decentralised exchanges, and global jurisdictions.

Real-time analytics and automated tracing are essential to link transactions to accountable actors before funds move. Governance architecture and monitoring systems increasingly serve as evidence in regulatory or criminal actions.
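
One hedged illustration of the kind of real-time rule such monitoring applies: flag a wallet whose funds surface on a second chain faster than a human review window allows. The data model below is hypothetical; production systems operate on live chain feeds.

```python
# Toy cross-chain velocity check over a transfer feed.
from dataclasses import dataclass

@dataclass
class Transfer:
    wallet: str
    chain: str
    timestamp: float  # seconds since epoch

def flag_rapid_cross_chain(transfers: list[Transfer], window_s: float = 60.0) -> set[str]:
    """Return wallets seen on two different chains within window_s seconds."""
    flagged: set[str] = set()
    last_seen: dict[str, Transfer] = {}
    for t in sorted(transfers, key=lambda x: x.timestamp):
        prev = last_seen.get(t.wallet)
        if prev and prev.chain != t.chain and t.timestamp - prev.timestamp < window_s:
            flagged.add(t.wallet)
        last_seen[t.wallet] = t
    return flagged

feed = [Transfer("0xabc", "ethereum", 0.0), Transfer("0xabc", "solana", 12.0)]
print(flag_rapid_cross_chain(feed))  # {'0xabc'}
```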

Institutions and law enforcement are using AI monitoring, anomaly detection, and automated containment systems. Autonomous AI impacts sanctions and national security, emphasising the need for human oversight alongside automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Health under fire after study finds major failures in emergency detection

A new evaluation of ChatGPT Health has raised major safety concerns after researchers found it frequently failed to recognise urgent medical emergencies.

The independent study, published in Nature Medicine, reported that the system under-triaged more than half of the clinical scenarios tested, giving advice that could have delayed life-saving treatment.

The research team, led by Ashwin Ramaswamy, created 60 patient simulations ranging from minor illnesses to life-threatening conditions.

Three doctors agreed on the appropriate urgency for each case before comparing their judgement with the model’s responses. The AI performed adequately in straightforward emergencies such as strokes, yet frequently minimised danger in more complex presentations, including severe asthma and diabetic crises.
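
The headline metric is straightforward to compute: a case counts as under-triaged when the model rates it less urgent than the physicians’ consensus. A toy sketch, with illustrative urgency labels and cases rather than the paper’s data:

```python
# Under-triage rate over (consensus, model) urgency pairs.
URGENCY = {"self-care": 0, "routine visit": 1, "urgent care": 2, "emergency": 3}

def under_triage_rate(cases: list[tuple[str, str]]) -> float:
    """cases holds (consensus_label, model_label) pairs."""
    under = sum(URGENCY[model] < URGENCY[consensus] for consensus, model in cases)
    return under / len(cases)

toy_cases = [
    ("emergency", "urgent care"),    # under-triaged
    ("urgent care", "urgent care"),  # correct
    ("emergency", "routine visit"),  # under-triaged
]
print(f"{under_triage_rate(toy_cases):.0%}")  # 67%
```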

Experts also warned that ChatGPT Health struggled to detect suicidal ideation reliably. Minor changes to scenario details, such as adding normal lab results, caused safeguards to disappear entirely.

Critics, including health-misinformation researcher Alex Ruani, described the behaviour as dangerously inconsistent and capable of creating a false sense of security.

OpenAI said the study did not reflect typical real-world use but acknowledged the need for continued research and improvement.

Policy specialists argue that the findings underline the need for clear safety standards, external audits and stronger transparency requirements for AI systems operating in sensitive medical contexts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!