Quantum cryptography pioneers win top computing prize

Two researchers have been awarded the Turing Award for pioneering work in quantum cryptography. Their research laid the foundations for a new form of secure communication based on quantum physics.

The method, developed in the 1980s, enables encryption keys that cannot be copied without detection. Any attempt to intercept the data alters its physical properties, revealing interference.
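The detection principle can be illustrated with a toy BB84-style simulation. This is a hedged sketch, not the laureates' actual protocol: bases are modelled as coin flips, and measuring in the wrong basis randomises the bit, which is the physical property that exposes an interceptor.

```python
import random

# Toy simulation of quantum-key eavesdropping detection. Matching
# measurement bases reproduce the sent bit; mismatched bases yield a
# random result, so an interceptor who measures and resends introduces
# errors the legitimate parties can later spot.

def measure(bit, prep_basis, meas_basis, rng):
    # Same basis: bit survives. Different basis: outcome is random.
    return bit if prep_basis == meas_basis else rng.randint(0, 1)

def run_exchange(n, eavesdrop, rng):
    """Run n rounds; return the error rate on the sifted (matching-basis) key."""
    errors = sifted = 0
    for _ in range(n):
        bit = rng.randint(0, 1)                  # sender's raw key bit
        basis_a = rng.randint(0, 1)              # sender's preparation basis
        sent_bit, sent_basis = bit, basis_a
        if eavesdrop:
            basis_e = rng.randint(0, 1)          # interceptor measures...
            sent_bit = measure(bit, basis_a, basis_e, rng)
            sent_basis = basis_e                 # ...and resends in her own basis
        basis_b = rng.randint(0, 1)              # receiver's measurement basis
        result = measure(sent_bit, sent_basis, basis_b, rng)
        if basis_a == basis_b:                   # keep only matching-basis rounds
            sifted += 1
            errors += result != bit
    return errors / sifted
```

Without an interceptor the sifted key is error-free; with one, roughly a quarter of the sifted bits disagree, so comparing a small sample of the key reveals the interference.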

Experts say the approach could become vital as quantum computing advances. Traditional encryption methods may become vulnerable as computing power increases.

The award highlights the growing importance of secure data transmission in a digital world. Researchers believe quantum cryptography could play a central role in encrypting and protecting future communications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU advances AI simplification effort ahead of further negotiations

A committee within the European Parliament has approved a proposal to simplify aspects of AI regulation, marking a step forward in efforts to refine the implementation of the AI Act.

The initiative seeks to adjust certain requirements to support clearer compliance, particularly for industry stakeholders.

The proposal focuses on technical and procedural elements linked to how AI rules are applied in practice.

Lawmakers aim to ensure that regulatory obligations remain proportionate while maintaining existing safeguards. Part of the discussion includes how specific categories of AI systems should be addressed within the broader framework.

Some elements of the proposal may require further discussion in upcoming negotiations with the Council of the European Union. Areas under consideration include the treatment of sensitive AI applications and the balance between regulatory clarity and enforcement effectiveness.

The development reflects ongoing efforts within the EU to refine its approach to AI governance. As discussions continue, policymakers are expected to assess how adjustments can support innovation while maintaining consistency with existing legal principles.


Meta to end Instagram private message encryption after May 8

US tech giant Meta has announced that end-to-end encryption for private messages on Instagram will no longer be supported after 8 May.

Until now, the technology has ensured that only intended recipients can read messages, preventing even Meta from accessing their contents.

The decision follows concerns from law enforcement and child protection organisations, which argued that encrypted messages can make it harder to identify harmful content involving children.

Meta has stated that the update allows the platform to monitor messages while maintaining standard privacy safeguards.

End-to-end encryption had been the default for several messaging platforms, including WhatsApp, Messenger, and other Meta services.

The company first signalled its intent to expand encryption across Instagram and Messenger in 2019, implementing it in 2023. The plan was met with objections from organisations such as the Internet Watch Foundation and the Virtual Global Taskforce.

These groups highlighted potential risks in preventing the timely detection of harmful content, particularly child sexual abuse material.

Meta’s shift reflects a compromise between privacy, platform security, and online child safety. The company has not provided further details on changes to encryption policies beyond Instagram’s private messaging service.


EU child safety rules lapse amid ongoing debate over privacy and enforcement

The European Union has been unable to reach an agreement on extending temporary rules that allow online platforms to detect child sexual abuse material, leaving the current framework set to expire in April.

Discussions between the European Parliament and the Council of the European Union concluded without reaching a consensus on how to proceed with such measures.

The existing rules permit technology companies to voluntarily scan their services for harmful content, supporting efforts to identify and remove illegal material.

The European Commission had proposed a temporary extension while negotiations continue on a permanent framework under the Child Sexual Abuse Regulation, but differing views on scope and safeguards prevented agreement.

Stakeholders across sectors have highlighted the importance of maintaining effective tools to address online harms, while also emphasising the need to respect fundamental rights.

Previous periods of legal uncertainty have shown that detection capabilities may be affected when such frameworks are absent, although assessments of effectiveness remain subject to ongoing debate.

At the same time, concerns have been raised regarding the broader implications of monitoring digital communications. Some perspectives stress that any approach should carefully consider privacy protections, particularly in relation to secure and encrypted services.

Attention now turns to ongoing negotiations on a long-term regulatory solution.

The outcome will shape how the EU approaches the challenge of addressing harmful online content while safeguarding rights and ensuring proportional and transparent enforcement.


UNESCO launches research on harmful online content governance in South Africa

A new research initiative led by UNESCO is examining the governance of harmful online content in South Africa, bringing together actors from government, academia, civil society and technology platforms to strengthen digital governance frameworks.

Conducted under the Social Media 4 Peace programme and supported by the EU, the study investigates the spread and impact of hate speech and disinformation while assessing existing regulatory approaches and platform governance systems.

Emphasis is placed on identifying structural gaps and developing practical responses suited to the country’s socio-political context.

Stakeholder engagement has shaped the research design to reflect local realities, with the aim of producing actionable and rights-based recommendations. As noted by a researcher involved in the project,

At Research ICT Africa, we don’t want this study to end with generic recommendations. We are aiming for grounded insights into how social media is shaping information integrity in our context, alongside practical guidance that regulators, platforms, and civil society can apply.

Kola Ijasan, a researcher at Research ICT Africa

Regulatory perspectives also highlight the importance of understanding emerging risks. As one regulator stated,

We are particularly interested in identifying regulatory gaps – areas where current laws and frameworks fall short in addressing emerging digital risks.

Nomzamo Zondi, a regulator in South Africa

Findings are expected to contribute to evidence-based policymaking, strengthen platform accountability and safeguard freedom of expression and access to information.


AI fuels rise in cyber scams

Cybercrime incidents in Estonia have surged as AI tools enable more convincing scams, leading to sharply rising losses. Authorities reported thousands of phishing and fraud cases affecting individuals and businesses.

Criminals are using AI to generate fluent messages in Estonian, removing a key warning sign that once helped people detect scams. Experts say language accuracy has made fraudulent calls and messages harder to identify.

Growing awareness of scams is also fuelling public anxiety, with some users considering abandoning digital services. Officials warn that loss of trust could undermine confidence in digital systems.

Authorities are urging stronger safeguards and public education to counter the cybersecurity threats. Banks, telecom firms and digital identity providers are introducing new protections while campaigns aim to improve digital awareness.


AgentKit enables ID verification for AI-powered online commerce

Tools for Humanity has introduced a new verification system to strengthen trust in online transactions, as demand for reliable ID verification tools grows in AI-driven environments. The update builds on its World project, which aims to prove that real humans, rather than automated systems, are behind digital activity.

The company’s latest release, AgentKit, is designed to support agentic commerce by allowing websites to verify that AI agents are acting on behalf of authenticated users. As AI programs increasingly browse websites and make purchases autonomously, ID verification tools are becoming essential to prevent fraud, spam, and misuse.

AgentKit relies on World ID, a system that generates a secure digital identity through biometric verification. Users obtain a verified ID by scanning their iris using a dedicated device, which converts the scan into an encrypted digital code. These ID verification tools are then used to confirm that transactions initiated by AI agents are linked to a real and unique individual.

The system integrates with the x402 protocol, a blockchain-based standard developed by Coinbase and Cloudflare, enabling automated transactions between systems. By combining this protocol with ID verification tools, websites can validate whether a human user authorises an AI agent before completing a purchase.

‘AgentKit is built as a complementary extension to the x402 v2 protocol, in coordination with Coinbase,’ the company said. ‘The integration is designed so that any website already using x402 can enable proof of unique human verification alongside (or instead of) micropayments.’

According to the company, the approach functions similarly to delegating authority to an AI agent, allowing platforms to decide whether to trust automated actions. These ID verification tools provide a layer of accountability, helping ensure that AI-driven transactions remain secure and traceable.
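The delegation model described here can be sketched in a few lines. Everything below is hypothetical: the function names, request shape, and `valid:` proof prefix are illustrative stand-ins, not the actual AgentKit or World ID API.

```python
# Hypothetical sketch of the human-delegation check described above.
# Real AgentKit / World ID payloads and proof formats differ; the names
# and the "valid:" prefix are illustrative only.

def verify_human_proof(proof):
    """Stand-in for a call to a World ID-style verifier service."""
    return isinstance(proof, str) and proof.startswith("valid:")

def authorize_agent_purchase(request):
    """Accept an agent-initiated purchase only if it carries a valid human proof."""
    if verify_human_proof(request.get("human_proof")):
        return "accepted"
    return "rejected"

# An agent request bundles the action it wants to take with the proof that
# a unique, verified human delegated that authority to the agent.
example = {"action": "purchase", "human_proof": "valid:abc123"}
```

The point of the design is that the platform, not the agent, decides whether a given automated action is trusted, using the proof as the accountability layer.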

AgentKit is currently available in beta, with developers encouraged to test and refine the system. However, access depends on users obtaining a verified World ID, reinforcing the central role of biometric-based ID verification tools in the company’s ecosystem.

As agentic commerce expands across platforms such as Amazon and Mastercard, the need for trusted identity systems is becoming more urgent. By positioning its ID verification tools at the centre of this emerging market, the company aims to establish itself as a key provider of trust infrastructure for AI-powered digital transactions.


UK announces £2.5 billion investment in AI and quantum technologies

Plans to accelerate technological leadership have been outlined by HM Treasury and the Department for Science, Innovation and Technology, with a £2.5 billion investment targeting AI and quantum computing.

The ambition has been reinforced by Rachel Reeves, who positioned AI as a central driver of economic growth, alongside closer European ties and regional development. The strategy aims to secure the fastest adoption of AI in the G7 while supporting domestic innovation ecosystems.

Significant funding in the UK will be directed towards a Sovereign AI initiative, quantum infrastructure and research capacity. Plans include procurement of large-scale quantum systems and targeted investment in startups, helping companies scale while strengthening national capabilities in advanced technologies.

Quantum computing is framed as transformative, with the potential to reshape industries from healthcare to energy. The combined investment reflects a broader effort to align innovation policy with long-term economic growth and global competitiveness.


Stryker cyberattack wipes devices via Microsoft environment without malware

A major cyber incident has impacted Stryker Corporation, where attackers targeted its internal Microsoft environment and remotely wiped tens of thousands of employee devices without deploying traditional malware.

Access to systems was reportedly achieved through a compromised administrator account, allowing attackers to issue remote wipe commands via Microsoft Intune.

As a result, large parts of the company’s internal infrastructure were disrupted, with some services remaining offline and business operations affected.

Responsibility has been claimed by Handala, a group often associated with broader geopolitical cyber activity. The incident reflects a growing trend of cyber operations blending disruption, data theft and strategic messaging.

Despite the scale of the attack, the company confirmed that its medical devices and patient-facing technologies were not impacted.

The case highlights increasing risks linked to identity compromise and cloud-based management tools, where attackers can cause significant damage without relying on conventional malware techniques.


EU calls on US tech firms to respect rules on handling staff data

Concerns over data protection have intensified as the European Commission calls on major technology companies to apply EU standards when handling sensitive staff information linked to digital regulation.

Pressure follows requests from the US House Judiciary Committee seeking access to communications between US firms and EU officials involved in enforcing laws such as the Digital Services Act and the Digital Markets Act.

EU officials emphasise that formal exchanges with companies take place through official channels, including documented correspondence, rather than informal messaging platforms. Internal communication practices may involve encrypted tools, reflecting growing concerns about data security and external scrutiny.

Debate surrounding the issue reflects wider tensions between the EU and the US over digital governance, privacy protections and regulatory authority. Questions over jurisdiction and access to sensitive communications are likely to remain central as transatlantic tech policy evolves.
