Tether and UN join forces to boost digital security in Africa

Tether has partnered with the UN Office on Drugs and Crime (UNODC) to enhance cybersecurity and digital asset education across Africa. The collaboration aims to reduce vulnerabilities to cybercrime and safeguard communities against online scams and fraud.

Africa, emerging as the third-fastest-growing crypto region, faces increasing threats from digital asset fraud. A recent Interpol operation uncovered $260 million in illicit crypto and fiat across Africa, highlighting the urgent need for stronger digital security.

The partnership includes several key initiatives. In Senegal, youth will participate in a multi-phase cybersecurity education programme featuring boot camps, mentorship, and micro-grants to support innovative projects.

Civil society organisations across Africa will receive funding to support human trafficking victims in Nigeria, DRC, Malawi, Ethiopia, and Uganda. In Papua New Guinea, universities will host competitions to promote financial inclusion and prevent digital asset fraud using blockchain solutions.

Tether and UNODC aim to create secure digital ecosystems, boost economic opportunities, and equip communities to prevent organised crime. Coordinated action across sectors is considered vital to creating safer and more inclusive environments for vulnerable populations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gmail enters the Gemini era with AI-powered inbox tools

Google is reshaping Gmail around its Gemini AI models, aiming to turn email into a proactive assistant for more than three billion users worldwide.

With inbox volumes continuing to rise, the focus shifts towards managing information flows instead of simply sending and receiving messages.

New AI Overviews allow Gmail to summarise long email threads and answer natural language questions directly from inbox content.

Users can retrieve details from past conversations without complex searches, while conversation summaries roll out globally at no cost, with advanced query features reserved for paid AI subscriptions.

Writing tools are also expanding, with Help Me Write, upgraded Suggested Replies, and Proofread features designed to speed up drafting while preserving individual tone and style.

Deeper personalisation is planned through connections with other Google services, enabling emails to reflect broader user context.

A redesigned AI Inbox further prioritises urgent messages and key tasks by analysing communication patterns and relationships.

Powered by Gemini 3, these features begin rolling out in the US in English, with additional languages and regions scheduled to follow during 2026.

EU faces pressure to strengthen Digital Markets Act oversight

Rivals of major technology firms have criticised the European Commission for weak enforcement of the Digital Markets Act, arguing that slow procedures and limited transparency undermine the regulation’s effectiveness.

Feedback gathered during a Commission consultation highlights concerns about delaying tactics, interface designs that restrict user choice, and circumvention strategies used by designated gatekeepers.

The Digital Markets Act became fully applicable in March 2024, prompting several non-compliance investigations against Apple, Meta and Google. Although Apple and Meta have already faced fines, follow-up proceedings remain ongoing, while Google has yet to receive sanctions.

Smaller technology firms argue that enforcement lacks urgency, particularly in areas such as self-preferencing, data sharing, interoperability and digital advertising markets.

Concerns also extend to AI and cloud services, where respondents say the current framework fails to reflect market realities.

Generative AI tools, such as large language models, raise questions about whether existing platform categories remain adequate or whether new classifications are necessary. Cloud services face similar scrutiny, as major providers often fall below formal thresholds despite acting as critical gateways.

The Commission plans to submit a review report to the European Parliament and the Council by early May, drawing on findings from the consultation.

Proposed changes include binding timelines and interim measures aimed at strengthening enforcement and restoring confidence in the bloc’s flagship competition rules.

ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, with three in five US adults reporting use in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice has caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

Sedgwick breach linked to TridentLocker ransomware attack

Sedgwick has confirmed a data breach at its government-focused subsidiary after the TridentLocker ransomware group claimed responsibility for stealing 3.4 gigabytes of data. The incident underscores growing threats to federal contractors handling sensitive US agency information.

The company said the breach affected only an isolated file transfer system used by Sedgwick Government Solutions, which serves agencies such as DHS, ICE, and CISA. Segmentation reportedly prevented any impact on wider corporate systems or ongoing client operations.

TridentLocker, a ransomware-as-a-service group that appeared in late 2025, listed Sedgwick Government Solutions on its dark web leak site and posted samples of stolen documents. The gang is known for double-extortion tactics, combining data encryption and public exposure threats.

Sedgwick has informed US law enforcement and affected clients while continuing to investigate with external cybersecurity experts. The firm emphasised operational continuity and noted no evidence of intrusion into its claims management servers.

Researchers launch AURA to protect AI knowledge graphs

Researchers have unveiled a novel framework called AURA that safeguards proprietary knowledge graphs in AI systems by deliberately corrupting stolen copies with realistic yet false data.

Instead of relying solely on traditional encryption or watermarking, the approach preserves full utility for authorised users while rendering illicit copies ineffective.

AURA works by injecting ‘adulterants’ into critical nodes of knowledge graphs, chosen using advanced algorithms to minimise changes while maximising disruption for unauthorised users.
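The selection and filtering logic described above can be sketched in miniature. This is an illustrative toy only: AURA's actual selection algorithms and adulterant generation are not public, so the graph representation, the degree-based choice of "critical" nodes, and the keyed-hash filtering used here are all assumptions for demonstration.

```python
import hashlib

# Hypothetical sketch of adulterant injection into a knowledge graph.
# Triples are (subject, relation, object) tuples; "critical" nodes are
# approximated by degree, standing in for AURA's undisclosed algorithms.

def degree(graph):
    """Count how many triples each entity appears in."""
    counts = {}
    for s, _, o in graph:
        counts[s] = counts.get(s, 0) + 1
        counts[o] = counts.get(o, 0) + 1
    return counts

def inject_adulterants(graph, k, secret):
    """Attach plausible but false triples to the k highest-degree nodes.
    Authorised users holding `secret` can recompute the tags and filter."""
    counts = degree(graph)
    critical = sorted(counts, key=counts.get, reverse=True)[:k]
    adulterated = list(graph)
    tags = set()
    for node in critical:
        fake = (node, "related_to", "decoy_" + node)  # fabricated fact
        adulterated.append(fake)
        tags.add(hashlib.sha256((secret + repr(fake)).encode()).hexdigest())
    return adulterated, tags

def authorised_view(adulterated, tags, secret):
    """Drop any triple whose keyed hash matches a known adulterant tag."""
    return [t for t in adulterated
            if hashlib.sha256((secret + repr(t)).encode()).hexdigest() not in tags]

kg = [("aspirin", "treats", "headache"),
      ("aspirin", "interacts_with", "warfarin"),
      ("warfarin", "treats", "thrombosis")]
poisoned, tags = inject_adulterants(kg, k=1, secret="org-key")
clean = authorised_view(poisoned, tags, secret="org-key")
print(len(poisoned) > len(kg), sorted(clean) == sorted(kg))  # → True True
```

The key property mirrored here is asymmetry: a stolen copy contains false facts indistinguishable from real ones, while the authorised holder, possessing the secret, recovers the original graph unchanged.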

Tests with GPT-4o, Gemini-2.5, Qwen-2.5, and Llama2-7B showed that 94–96% of correct answers in stolen data were flipped, while authorised access remained unaffected.

The framework protects valuable intellectual property in sectors such as pharmaceuticals and manufacturing, where knowledge graphs power advanced AI applications.

Unlike passive watermarking or offensive poisoning, AURA actively degrades stolen datasets, offering robust security against offline and private-use attacks.

With GraphRAG applications proliferating, major technology firms, including Microsoft, Google, and Alibaba, are evaluating AURA to defend critical AI-driven knowledge.

The system demonstrates how active protection strategies can complement existing security measures, ensuring enterprises maintain control over their data in an AI-driven world.

Healthcare systems face mounting risk from CrazyHunter ransomware

CrazyHunter ransomware has emerged as a growing threat to healthcare organisations, with repeated attacks targeting hospitals and medical service providers. The campaign focuses on critical healthcare infrastructure, raising concerns about service disruption and the exposure of sensitive patient data.

The malware is developed in Go and demonstrates a high level of technical maturity. Attackers gain initial access by exploiting weak Active Directory credentials, then use Group Policy Objects to distribute the ransomware rapidly across compromised networks.

Healthcare institutions in Taiwan have been among the most affected, with multiple confirmed incidents reported by security researchers. The pattern suggests a targeted campaign rather than opportunistic attacks, increasing pressure on regional healthcare providers to strengthen defences.

Once deployed, CrazyHunter disables security tools to conceal its activity before encrypting files. Analysts note the use of extensive evasion techniques, including memory-based execution and redundant encryption methods, to ensure the payload is delivered.

CrazyHunter employs a hybrid encryption scheme that combines ChaCha20 and elliptic curve cryptography, encrypting only part of each file to speed up the attack. Encrypted files receive a ‘.Hunter’ extension, and recovery depends on the attackers’ private keys, reinforcing the pressure to pay ransoms.
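Partial-file encryption works because corrupting a file's header is enough to break most formats while touching far less data than full encryption. The defensive illustration below shows only that pattern, not CrazyHunter's code: the malware reportedly uses ChaCha20 with elliptic-curve key wrapping, but neither is in the Python standard library, so a SHA-256 counter-mode keystream stands in for the stream cipher and the key-wrapping step is omitted.

```python
import hashlib, os

def keystream(key, length):
    """Derive a pseudo-random keystream by hashing key || counter blocks.
    Stand-in for a real stream cipher such as ChaCha20."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def partial_encrypt(data, key, head=1024):
    """XOR only the first `head` bytes with the keystream; the rest of the
    file is left untouched, which is why partial encryption is so fast."""
    n = min(head, len(data))
    ks = keystream(key, n)
    return bytes(b ^ k for b, k in zip(data[:n], ks)) + data[n:]

key = os.urandom(32)
blob = b"%PDF-1.7 " + b"A" * 4096   # stand-in file contents
locked = partial_encrypt(blob, key)
print(locked[:9] != blob[:9], locked[1024:] == blob[1024:])  # → True True
```

Because the operation is a plain XOR, applying `partial_encrypt` again with the same key restores the original bytes; in the real scheme, that key is itself encrypted to the attackers' public key, which is why recovery hinges on their private keys.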

New UK cyber strategy focuses on trust in online public services

The UK government has announced new measures to strengthen the security and resilience of online public services as more interactions with the state move online. Ministers say public confidence is essential as citizens increasingly rely on digital systems for everyday services.

Backed by more than £210 million, the UK Government Cyber Action Plan outlines how cyber defences and digital resilience will be improved across the public sector. A new Government Cyber Unit will coordinate risk identification, incident response, and action on complex threats spanning multiple departments.

The plan underpins wider efforts to digitise public services, including benefits applications, tax payments, and healthcare access. Officials argue that secure systems can reduce bureaucracy and improve efficiency, but only if users trust that their data is protected.

The announcement coincides with parliamentary debate on the Cyber Security and Resilience Bill, which sets clearer expectations for companies supplying services to the government. The legislation is intended to strengthen cyber resilience across critical supply chains.

Ministers also highlighted new steps to address software supply chain risks, including a Software Security Ambassador Scheme promoting basic security practices. The government says stronger cyber resilience is essential to protect public services and maintain public trust.

Roblox rolls out facial age checks for chat

The online gaming platform Roblox has begun a global rollout requiring facial age checks before users can access chat features, expanding a system first tested in selected regions late last year.

The measure applies wherever chat is available and aims to create age-appropriate communication environments across the platform.

Instead of relying on self-declared ages, Roblox uses facial age estimation to group users and restrict interactions, limiting contact between adults and children under 16. Younger users need parental consent to chat, while verified users aged 13 and over can connect more freely through Trusted Connections.

The company says privacy safeguards remain central, with images deleted immediately after secure processing and no image sharing allowed in chat. Appeals, ID verification and parental controls support accuracy, while ongoing behavioural checks may trigger repeat age verification if discrepancies appear.

Roblox plans to extend age checks beyond chat later in 2026, including creator tools and community features, as part of a broader push to strengthen online safety and rebuild trust in youth-focused digital platforms.

Samsung puts AI trust and security at the centre of CES 2026

South Korean tech giant Samsung used CES 2026 to foreground a cross-industry debate about trust, privacy and security in the age of AI.

During its Tech Forum session in Las Vegas, senior figures from AI research and industry argued that people will only fully accept AI when systems behave predictably and users retain clear control instead of feeling locked inside opaque technologies.

Samsung outlined a trust-by-design philosophy centred on transparency, clarity and accountability. On-device AI was presented as a way to keep personal data local wherever possible, while cloud processing can be used selectively when scale is required.

Speakers said users increasingly want to know when AI is in operation, where their data is processed and how securely it is protected.

Security remained the core theme. Samsung highlighted its Knox platform and Knox Matrix to show how devices can authenticate one another and operate as a shared layer of protection.

Partnerships with companies such as Google and Microsoft were framed as essential for ecosystem-wide resilience. Although misinformation and misuse were recognised as real risks, the panel suggested that technological counter-measures will continue to develop alongside AI systems.

Consumer behaviour formed a final point of discussion. Amy Webb noted that people usually buy products for convenience rather than trust alone, meaning that AI will gain acceptance when it genuinely improves daily life.

The panel concluded that AI systems which embed transparency, robust security and meaningful user choice from the outset are most likely to earn long-term public confidence.
