UNESCO launches research on harmful online content governance in South Africa

A new research initiative led by UNESCO is examining the governance of harmful online content in South Africa, bringing together actors from government, academia, civil society and technology platforms to strengthen digital governance frameworks.

Conducted under the Social Media 4 Peace programme and supported by the EU, the study investigates the spread and impact of hate speech and disinformation while assessing existing regulatory approaches and platform governance systems.

Emphasis is placed on identifying structural gaps and developing practical responses suited to the country’s socio-political context.

Stakeholder engagement has shaped the research design to reflect local realities, with the aim of producing actionable and rights-based recommendations. As noted by a researcher involved in the project,

‘At Research ICT Africa, we don’t want this study to end with generic recommendations. We are aiming for grounded insights into how social media is shaping information integrity in our context, alongside practical guidance that regulators, platforms, and civil society can apply.’

Kola Ijasan, a researcher at Research ICT Africa

Regulatory perspectives also highlight the importance of understanding emerging risks. As one regulator stated,

‘We are particularly interested in identifying regulatory gaps – areas where current laws and frameworks fall short in addressing emerging digital risks.’

Nomzamo Zondi, a regulator in South Africa

Findings are expected to contribute to evidence-based policymaking, strengthen platform accountability and safeguard freedom of expression and access to information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing investment and energy plans reshape Armenia’s AI future

Armenia’s recent technology announcements are helping to form a clearer national AI strategy with stronger coordination. A memorandum with the US on semiconductors and AI now appears to be moving beyond symbolic commitment into action.

Momentum has accelerated with plans to expand a large-scale AI factory backed by significant investment. The project is estimated at around $4 billion and includes tens of thousands of advanced GPUs to support large-scale development.

The initiative is already entering construction, marking a shift from concept to execution in a short timeframe. Officials have described a broader vision of building a network of AI factories across the country.

Energy planning is becoming central, with discussions around deploying a small modular nuclear reactor to meet demand. Stable and scalable power is considered essential for sustaining long-term AI infrastructure growth.

Efforts are also targeting the wider ecosystem through a Virtual AI Institute and planned GPU access for startups. These steps aim to strengthen research capacity and ensure local participation in the country’s AI expansion.


UK opens supercomputing access to boost AI startups

Britain is opening access to its national AI Research Resource (AIRR) to support domestic AI development. Startups and spinouts can now use supercomputers previously reserved for frontier research.

The AIRR combines infrastructure from government, universities and leading technology firms. It provides the computing power needed to train models and run complex simulations.

Access will be worth around £20 million per year for participating companies. Officials say reducing compute barriers will help startups move faster from prototype to product.

The government’s Sovereign AI Unit, backed by up to £500 million, will also support long-term growth. The programme targets areas including advanced models, scientific discovery and trustworthy AI systems.

Concerns remain over regulatory alignment with the EU’s stricter AI rules. Tensions could shape whether the UK maintains a more flexible environment for innovation.


AI fuels rise in cyber scams

Cybercrime incidents in Estonia have surged as AI tools enable more convincing scams, driving sharply rising losses. Authorities reported thousands of phishing and fraud cases affecting individuals and businesses.

Criminals are using AI to generate fluent messages in Estonian, removing a key warning sign that once helped people detect scams. Experts say language accuracy has made fraudulent calls and messages harder to identify.

Growing awareness of scams is also fuelling public anxiety, with some users considering abandoning digital services. Officials warn that loss of trust could undermine confidence in digital systems.

Authorities are urging stronger safeguards and public education to counter these cybersecurity threats. Banks, telecom firms and digital identity providers are introducing new protections, while campaigns aim to improve digital awareness.


MIT advances wireless sensing with generative AI

Researchers at MIT have developed a new approach that combines generative AI with wireless signals to detect objects hidden behind obstacles. The system uses Wi-Fi-like millimetre wave signals to build partial reconstructions and then completes missing details with AI.

Traditional methods struggled with limited visibility due to how signals reflect off surfaces, often leaving large portions of objects undetected. The new technique, Wave-Former, uses generative AI to fill missing data, improving reconstruction accuracy by nearly 20%.

An extended system, called RISE, takes the concept further by mapping entire indoor environments. By analysing reflected signals from human movement, the system reconstructs room layouts using a single stationary radar, removing the need for mobile sensors.

Applications range from warehouse automation to smart home robotics, where understanding hidden objects and human positions is critical. Unlike camera-based systems, the technology also preserves privacy, marking a significant step forward in wireless sensing capabilities.


AgentKit enables ID verification for AI-powered online commerce

Tools for Humanity has introduced a new verification system to strengthen trust in online transactions, as demand for reliable ID verification tools grows in AI-driven environments. The update builds on its World project, which aims to prove that real humans, rather than automated systems, are behind digital activity.

The company’s latest release, AgentKit, is designed to support agentic commerce by allowing websites to verify that AI agents are acting on behalf of authenticated users. As AI programs increasingly browse websites and make purchases autonomously, ID verification tools are becoming essential to prevent fraud, spam, and misuse.

AgentKit relies on World ID, a system that generates a secure digital identity through biometric verification. Users obtain a verified ID by scanning their iris using a dedicated device, which converts the scan into an encrypted digital code. These ID verification tools are then used to confirm that transactions initiated by AI agents are linked to a real and unique individual.

The system integrates with the x402 protocol, a blockchain-based standard developed by Coinbase and Cloudflare, enabling automated transactions between systems. By combining this protocol with ID verification tools, websites can validate whether a human user authorises an AI agent before completing a purchase.

‘AgentKit is built as a complementary extension to the x402 v2 protocol, in coordination with Coinbase,’ the company said. ‘The integration is designed so that any website already using x402 can enable proof of unique human verification alongside (or instead of) micropayments.’

According to the company, the approach functions similarly to delegating authority to an AI agent, allowing platforms to decide whether to trust automated actions. These ID verification tools provide a layer of accountability, helping ensure that AI-driven transactions remain secure and traceable.
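The delegation model described above can be illustrated with a short sketch. This is a hypothetical simulation of the general flow, not the real AgentKit or x402 API: all names, structures, and the hash-based "proof" are illustrative assumptions standing in for the actual World ID cryptography.

```python
# Hypothetical sketch of the flow described above: a merchant receives a
# purchase request from an AI agent and completes it only if the request
# carries a valid proof that a verified human authorised that agent.
# The real system uses World ID biometric credentials and the x402
# protocol; a plain hash stands in for the cryptographic proof here.
import hashlib
from dataclasses import dataclass
from typing import Optional


@dataclass
class AgentRequest:
    agent_id: str
    amount_usd: float
    human_proof: Optional[str]  # opaque proof derived from a verified ID


def issue_proof(verified_id_secret: str, agent_id: str) -> str:
    """Simulate a human delegating authority to a specific agent by
    binding their verified-ID secret to the agent's identity."""
    payload = f"{verified_id_secret}:{agent_id}".encode()
    return hashlib.sha256(payload).hexdigest()


def verify_purchase(req: AgentRequest, verified_id_secret: str) -> bool:
    """Merchant side: accept only requests whose proof matches the
    delegation the human actually issued for this agent."""
    if req.human_proof is None:
        return False  # unverified agents are rejected outright
    expected = issue_proof(verified_id_secret, req.agent_id)
    return req.human_proof == expected


# A human delegates to their shopping agent, then the merchant checks it.
proof = issue_proof("alice-world-id", "shopping-agent-1")
print(verify_purchase(AgentRequest("shopping-agent-1", 20.0, proof),
                      "alice-world-id"))   # accepted
print(verify_purchase(AgentRequest("shopping-agent-1", 20.0, None),
                      "alice-world-id"))   # rejected: no proof
```

The key design point the sketch captures is that the proof binds a unique verified human to a specific agent, so a proof issued for one agent cannot be replayed by another.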

AgentKit is currently available in beta, with developers encouraged to test and refine the system. However, access depends on users obtaining a verified World ID, reinforcing the central role of biometric-based ID verification tools in the company’s ecosystem.

As agentic commerce expands across platforms such as Amazon and Mastercard, the need for trusted identity systems is becoming more urgent. By positioning its ID verification tools at the centre of this emerging market, the company aims to establish itself as a key provider of trust infrastructure for AI-powered digital transactions.


UK announces £2.5 billion investment in AI and quantum technologies

Plans to accelerate technological leadership have been outlined by HM Treasury and the Department for Science, Innovation and Technology, with a £2.5 billion investment targeting AI and quantum computing.

Chancellor Rachel Reeves has reinforced this ambition, positioning AI as a central driver of economic growth alongside closer European ties and regional development. The strategy aims to secure the fastest adoption of AI across the G7 while supporting domestic innovation ecosystems.

Significant funding will be directed towards a Sovereign AI initiative, quantum infrastructure and research capacity. Plans include procurement of large-scale quantum systems and targeted investment in startups, helping companies scale while strengthening national capabilities in advanced technologies.

Expectations surrounding quantum computing are framed as transformative, with potential to reshape industries from healthcare to energy. Combined investment reflects a broader effort to align innovation policy with long-term economic growth and global competitiveness.


Anthropic dispute pushes Pentagon toward new AI providers

The Pentagon is accelerating efforts to replace Anthropic after the company was designated a supply-chain risk, marking a sharp shift in US defence AI strategy. The move follows a breakdown in talks over safeguards governing military use of AI, particularly around surveillance and autonomous weapons.

Cameron Stanley, the Pentagon’s chief digital and AI officer, said engineering work is underway to deploy alternative large language models in government-controlled environments. He indicated that while transitioning from Anthropic’s tools could take more than a month, new systems are expected to be operational soon.

The decision threatens a $200 million contract and could exclude Anthropic from future defence partnerships. The US administration has set a six-month timeline for federal agencies to shift away from the company, signalling a broader push to diversify AI suppliers and reduce dependency risks.

Rival providers are already stepping in. OpenAI and xAI have been approved for classified work, while Google is introducing Gemini AI tools across the Pentagon workforce, initially on unclassified networks before expanding into sensitive environments.

Anthropic has challenged the designation in court, arguing it violates constitutional protections and could harm its business. Despite the legal dispute, defence officials have made clear they are moving forward with an ‘AI-first’ strategy to accelerate the adoption of advanced models across military operations.


Stryker cyberattack wipes devices via Microsoft environment without malware

A major cyber incident has impacted Stryker Corporation, where attackers targeted its internal Microsoft environment and remotely wiped tens of thousands of employee devices without deploying traditional malware.

Access to systems was reportedly achieved through a compromised administrator account, allowing attackers to issue remote wipe commands via Microsoft Intune.

As a result, large parts of the company’s internal infrastructure were disrupted, with some services remaining offline and business operations affected.

Responsibility has been claimed by Handala, a group often associated with broader geopolitical cyber activity. The incident reflects a growing trend of cyber operations blending disruption, data theft and strategic messaging.

Despite the scale of the attack, the company confirmed that its medical devices and patient-facing technologies were not impacted.

The case highlights increasing risks linked to identity compromise and cloud-based management tools, where attackers can cause significant damage without relying on conventional malware techniques.


AI chatbots raise risks as EU urged to enforce DSA rules

Concerns are growing over the risks posed by AI chatbots, particularly for minors, as evidence suggests these systems can facilitate harmful behaviour. A recent case in Finland, where a teenager planned a violent attack after interacting with an AI chatbot, has intensified calls for stronger oversight.

A report by the Center for Countering Digital Hate found that most leading AI chatbots provided assistance when prompted about violent acts. Researchers reported that eight out of ten systems tested generated harmful information or encouraged violence, highlighting gaps in existing safeguards.

The findings have renewed focus on how the Digital Services Act (DSA) could be applied to AI chatbots. Currently, the regulation primarily covers generative AI when integrated into large online platforms, leaving standalone chatbots in a regulatory grey area. Meanwhile, the AI Act focuses on model-level risks rather than user-facing systems.

Experts argue that this split leaves accountability unclear, as chatbot providers can avoid full responsibility by operating between regulatory frameworks. Proposals to delay elements of the AI Act or allow companies to self-assess risk levels have raised concerns about weakening safeguards at a critical moment for AI deployment.

Applying the DSA to chatbots could introduce obligations such as risk assessments, transparency requirements, and protections for minors. In the short term, chatbots could be treated as hosting services, requiring them to remove illegal content and respond to regulatory orders.

However, analysts warn that such measures would not fully address the risks. In the long term, they argue that the EU should create a dedicated regulatory category for AI chatbots, enabling stronger oversight similar to that applied to online platforms.

Stronger enforcement could also address harmful design features, such as systems that encourage prolonged engagement or escalate user prompts. Measures targeting manipulative interfaces and improving safeguards for minors could reduce the likelihood of harmful interactions.

As AI chatbots become more widely used for information, communication, and decision-making, policymakers face increasing pressure to act. Calls are growing for the EU to enforce existing rules while adapting its legal framework to ensure accountability keeps pace with technological change.
