Social media ban in Ecuador targets youth crime recruitment

A proposal to restrict minors’ online activity is gaining momentum in Ecuador, where lawmakers are considering a social media ban for children under 15 as part of a broader response to rising organised crime.

Under discussion in the National Assembly, the initiative introduced by Assembly member Katherine Pacheco Machuca would amend the Code of Childhood and Adolescence to block access to platforms enabling public interaction, content sharing, and messaging. The proposal defines social networks broadly, covering services that allow users to create accounts, connect with others, and exchange content.

Unlike similar debates elsewhere, the justification for the social media ban is rooted less in mental health or privacy concerns and more in security. Ecuador has experienced a sharp deterioration in public safety, with rising homicide rates, expanding criminal networks, and increasing pressure on state institutions.

Recent findings from Ecuador’s Organised Crime Observatory indicate that around 27% of minors approached by criminal groups report initial contact through social media platforms. Surveys conducted by ChildFund Ecuador further suggest that vulnerable adolescents are increasingly exposed to recruitment tactics that combine economic incentives with normalised portrayals of violence.

In that context, the proposed social media ban is framed as a preventative measure against criminal recruitment rather than solely a child protection tool. The initiative forms part of a wider regulatory shift, including new cybersecurity legislation and draft laws targeting recruitment practices conducted through digital channels.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake abuse crisis escalates worldwide

AI-generated deepfake abuse is emerging as a serious global threat, with women and girls disproportionately affected by non-consensual and harmful digital content. Advances in AI make it easy to create manipulated content that can spread across platforms within minutes and reach millions.

Data highlights the scale of the issue. The vast majority of deepfake content online consists of explicit material, overwhelmingly targeting women.

Accessible and often free tools have lowered the barrier to entry, enabling widespread misuse. At the same time, the ability to endlessly replicate and share such content makes removal nearly impossible once it is published.

Legal responses remain fragmented, with many pre-existing laws leaving gaps in addressing AI-generated deepfake abuse. Enforcement issues, such as cross-border challenges and limited digital forensics capabilities, make it unlikely that perpetrators will face consequences.

Pressure is mounting on governments and technology platforms to act. Calls for reform include clearer legislation, faster obligations to remove content, improved law enforcement capabilities, and stronger support systems for victims.

Without coordinated global action, deepfake abuse is set to expand alongside the technologies enabling it.

AI-generated songs used in $10 million streaming fraud

A large-scale fraud scheme using AI-generated music has exposed vulnerabilities in streaming platforms and royalty systems. Billions of fake streams were used to divert payments away from legitimate artists and rights holders.

The scheme ran from 2017 to 2024 and involved uploading hundreds of thousands of AI-generated tracks. Automated programs were then used to stream the songs at scale, inflating play counts and generating revenue.

The operation relied on thousands of bot accounts, bulk email registrations and cloud-based systems. Streaming activity was spread across many tracks to reduce detection and maintain consistent earnings over time.

Michael Smith, a 54-year-old from North Carolina, has pleaded guilty to conspiracy to commit wire fraud in federal court. Prosecutors say he obtained more than $10 million and agreed to forfeit over $8 million in proceeds.

Authorities say the case highlights how AI and automation can be used to manipulate digital platforms. The court will determine the final sentence as concerns grow over similar schemes.

FBI warns of fake tokens targeting Tron wallets

The FBI’s New York Field Office has warned that fraudulent tokens impersonating the agency are being airdropped to Tron wallets, with recipients threatened with ‘total block’ of assets unless they submit personal information via phishing sites.

When the warning was issued on 19 March, at least 728 wallets had been affected, some holding over US$1 million in USDT.

The scam warns users that their wallets are ‘under investigation’ and instructs them to complete an online anti-money-laundering form. The FBI urged crypto holders to ignore these messages and avoid entering any personal data on linked websites.

Attackers exploit Tron for its fast and low-cost transactions, using bots to distribute tokens widely and generate spoofed addresses.

Impersonation scams have surged dramatically in 2025, with Chainalysis reporting a 1,400% year-over-year increase. Total crypto fraud losses are estimated at US$17 billion, with AI-assisted scams proving far more profitable than traditional schemes.

The FBI previously ran a blockchain sting using Ethereum tokens, resulting in indictments and the seizure of millions in assets.

The bureau encourages anyone who receives the fake FBI tokens to report the incident to the Internet Crime Complaint Center (IC3) to help combat ongoing crypto fraud.

New iPhone vulnerability raises concerns over advanced mobile cyber threats

A newly identified cyberattack known as ‘DarkSword’ is raising concerns about the security of iPhone devices, following reports that millions of users could be exposed to rapid data extraction techniques.

Cybersecurity researchers indicate that the attack targets specific iOS versions, exploiting vulnerabilities in the Safari browser and WebGPU, a web graphics API.

Once access is gained, attackers can retrieve sensitive information, including messages, emails and location data, within minutes, while removing traces of the intrusion almost immediately.

Estimates suggest that a significant share of global iPhone users may be affected, with hundreds of millions of devices running vulnerable software versions.

The scale of exposure remains uncertain, particularly as experts continue to assess whether additional versions of iOS may also be impacted.

Researchers have associated the campaign with a threat actor previously identified by Google, with observed activity across multiple regions.

Such a development highlights growing concerns about the evolution of mobile cyber threats, where increasingly sophisticated techniques are being deployed beyond traditional state-level operations.

Bitcoin moves closer to quantum resistance with BIP-360

BTQ Technologies has deployed Bitcoin Improvement Proposal 360 (BIP-360) on its Bitcoin Quantum Testnet v0.3.0, marking the first live test of the proposal. The upgrade introduces a quantum-resistant transaction model, Pay-to-Merkle-Root, designed to strengthen Bitcoin’s long-term security.

BIP-360 focuses on mitigating a vulnerability linked to Taproot’s key-path spending mechanism, which can expose public keys on-chain. Such exposure could become a risk if future quantum computers are able to derive private keys from exposed public keys using algorithms such as Shor’s.
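
The Pay-to-Merkle-Root model commits an output to a single hash over a tree of possible spending scripts, so the scripts (and any keys inside them) stay off-chain until spend time. As a rough, illustrative sketch of that commitment idea (not BIP-360’s actual tree construction or serialisation), a Bitcoin-style Merkle root can be computed like this:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaves into a single 32-byte Merkle root.
    Odd-sized levels duplicate the last node, as Bitcoin's tree does."""
    level = [sha256d(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# An output commits only to the 32-byte root; the spending scripts
# (placeholders here) are revealed only when the output is spent.
scripts = [b"script-with-pq-key-1", b"script-with-pq-key-2"]
root = merkle_root(scripts)
assert len(root) == 32
```

Because only the root appears on-chain, an observer cannot read the public keys out of an unspent output, which is the exposure the proposal aims to close.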

The testnet adds new consensus rules, post-quantum signatures, and full transaction lifecycle testing. Faster one-minute block times and adjusted fee structures have been introduced to accommodate larger and more complex signatures.

Growing global attention on quantum threats adds urgency to the development. US, EU, and Canadian authorities are setting timelines for post-quantum cryptography to protect future system security.

UNESCO launches research on harmful online content governance in South Africa

A new research initiative led by UNESCO is examining the governance of harmful online content in South Africa, bringing together actors from government, academia, civil society and technology platforms to strengthen digital governance frameworks.

Conducted under the Social Media 4 Peace programme and supported by the EU, the study investigates the spread and impact of hate speech and disinformation while assessing existing regulatory approaches and platform governance systems.

Emphasis is placed on identifying structural gaps and developing practical responses suited to the country’s socio-political context.

Stakeholder engagement has shaped the research design to reflect local realities, with the aim of producing actionable and rights-based recommendations. As noted by a researcher involved in the project,

At Research ICT Africa, we don’t want this study to end with generic recommendations. We are aiming for grounded insights into how social media is shaping information integrity in our context, alongside practical guidance that regulators, platforms, and civil society can apply.

Kola Ijasan, a researcher at Research ICT Africa

Regulatory perspectives also highlight the importance of understanding emerging risks. As one regulator stated,

We are particularly interested in identifying regulatory gaps – areas where current laws and frameworks fall short in addressing emerging digital risks.

Nomzamo Zondi, a regulator in South Africa

Findings are expected to contribute to evidence-based policymaking, strengthen platform accountability and safeguard freedom of expression and access to information.

AI fuels rise in cyber scams

Cybercrime incidents have surged in Estonia as AI tools enable more convincing scams, driving sharply rising losses. Authorities reported thousands of phishing and fraud cases affecting individuals and businesses.

Criminals are using AI to generate fluent messages in Estonian, removing a key warning sign that once helped people detect scams. Experts say language accuracy has made fraudulent calls and messages harder to identify.

Growing awareness of scams is also fuelling public anxiety, with some users considering abandoning digital services. Officials warn that loss of trust could undermine confidence in digital systems.

Authorities are urging stronger safeguards and public education to counter the cybersecurity threats. Banks, telecom firms and digital identity providers are introducing new protections while campaigns aim to improve digital awareness.

AgentKit enables ID verification for AI-powered online commerce

Tools for Humanity has introduced a new verification system to strengthen trust in online transactions, as demand for reliable ID verification tools grows in AI-driven environments. The update builds on its World project, which aims to prove that real humans, rather than automated systems, are behind digital activity.

The company’s latest release, AgentKit, is designed to support agentic commerce by allowing websites to verify that AI agents are acting on behalf of authenticated users. As AI programs increasingly browse websites and make purchases autonomously, ID verification tools are becoming essential to prevent fraud, spam, and misuse.

AgentKit relies on World ID, a system that generates a secure digital identity through biometric verification. Users obtain a verified ID by scanning their iris using a dedicated device, which converts the scan into an encrypted digital code. These ID verification tools are then used to confirm that transactions initiated by AI agents are linked to a real and unique individual.
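
World’s actual pipeline is more involved (iris codes, dedicated hardware, and cryptographic proofs), but the general idea of turning a biometric scan into a stable, non-reversible identifier can be sketched as a keyed one-way commitment. Everything below is a toy illustration under that assumption; the names and the placeholder template are hypothetical:

```python
import hashlib
import hmac
import os

def derive_id_commitment(template: bytes, device_secret: bytes) -> str:
    """Toy one-way commitment: the raw biometric template never leaves
    the device; only this digest is shared for uniqueness checks."""
    return hmac.new(device_secret, template, hashlib.sha256).hexdigest()

device_secret = os.urandom(32)             # secret held on the device
template = b"iris-feature-vector"          # placeholder for a real scan
commitment = derive_id_commitment(template, device_secret)

# The same template and secret always yield the same commitment
# (a stable identity), while the template itself is not recoverable.
assert commitment == derive_id_commitment(template, device_secret)
```

The design point is that a service can check whether a commitment has been seen before (one person, one ID) without ever holding the underlying biometric data.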

The system integrates with the x402 protocol, a blockchain-based standard developed by Coinbase and Cloudflare, enabling automated transactions between systems. By combining this protocol with ID verification tools, websites can validate whether a human user authorises an AI agent before completing a purchase.

‘AgentKit is built as a complementary extension to the x402 v2 protocol, in coordination with Coinbase,’ the company said. ‘The integration is designed so that any website already using x402 can enable proof of unique human verification alongside (or instead of) micropayments.’

According to the company, the approach functions similarly to delegating authority to an AI agent, allowing platforms to decide whether to trust automated actions. These ID verification tools provide a layer of accountability, helping ensure that AI-driven transactions remain secure and traceable.

AgentKit is currently available in beta, with developers encouraged to test and refine the system. However, access depends on users obtaining a verified World ID, reinforcing the central role of biometric-based ID verification tools in the company’s ecosystem.

As agentic commerce expands across platforms such as Amazon and Mastercard, the need for trusted identity systems is becoming more urgent. By positioning its ID verification tools at the centre of this emerging market, the company aims to establish itself as a key provider of trust infrastructure for AI-powered digital transactions.

Stryker cyberattack wipes devices via Microsoft environment without malware

A major cyber incident has impacted Stryker Corporation, where attackers targeted its internal Microsoft environment and remotely wiped tens of thousands of employee devices without deploying traditional malware.

Access to systems was reportedly achieved through a compromised administrator account, allowing attackers to issue remote wipe commands via Microsoft Intune.

As a result, large parts of the company’s internal infrastructure were disrupted, with some services remaining offline and business operations affected.

Responsibility has been claimed by Handala, a group often associated with broader geopolitical cyber activity. The incident reflects a growing trend of cyber operations blending disruption, data theft and strategic messaging.

Despite the scale of the attack, the company confirmed that its medical devices and patient-facing technologies were not impacted.

The case highlights increasing risks linked to identity compromise and cloud-based management tools, where attackers can cause significant damage without relying on conventional malware techniques.
