Scammers use fake celebrities to steal millions in crypto fraud

Fraudsters increasingly pretend to be celebrities to deceive people into fake cryptocurrency schemes. Richard Lyons lost $10,000 after falling for a scam in which fraudsters impersonated Elon Musk, using an AI-generated voice and images to make the investment offer appear authentic.

The FBI has highlighted a sharp rise in crypto scams during 2024, with billions lost as fraudsters pose as financial experts or love interests. Many scams involve fake websites that mimic legitimate investment platforms, showing false gains before stealing funds.

Lyons was shown a fake web page indicating his investment had grown to $50,000 before the scam was uncovered.

Experts warn that thorough research and caution are essential when approached online with investment offers. The FBI urges potential investors to consult trusted advisers and avoid sending money to strangers.

Blockchain firms like Lionsgate Network now offer rapid tracing of stolen crypto, although recovery is usually limited to high-value cases.

Lyons described the scam’s impact as devastating, leaving him struggling with everyday expenses. Authorities advise anyone targeted by similar frauds to report promptly for a better chance of recovery and protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Court convicts duo in UK crypto fraud worth $2 million

A UK court has sentenced Raymondip Bedi and Patrick Mavanga to a combined 12 years in prison for a cryptocurrency fraud scheme that defrauded 65 victims of around $2 million. Between 2017 and 2019, the pair cold-called investors while posing as advisers and directed victims to fake crypto websites.

Bedi was sentenced to five years and four months, while Mavanga received six years and six months behind bars. Both operated under CCX Capital and Astaria Group LLP, deliberately bypassing financial regulations to extract illicit gains.

The scam targeted retail investors with little crypto experience, luring them with promises of high profits and misleading sales materials.

Victim impact statements revealed severe financial and emotional consequences. Some lost their life savings or fell into debt, and others developed mental health issues. Mavanga was also found guilty of hiding incriminating evidence during the investigation.

The Financial Conduct Authority (FCA) led the prosecution amid a heavy backlog of crypto fraud cases, highlighting the challenges regulators face in enforcing the law.

The court encouraged victims to seek support and highlighted the need for vigilance against similar scams. While the prosecution offers closure for some, the lengthy process underscores the ongoing difficulties in policing the fast-evolving crypto market.


Three nations outline cyber law views ahead of UN talks

In the lead-up to the concluding session of the UN Open-Ended Working Group (OEWG) on ICTs, Thailand, New Zealand, and South Korea have released their respective national positions on the application of international law in cyberspace, contributing to the growing corpus of state practice on the issue.

Thailand’s position (July 2025) emphasises that existing international law, including the Charter of the UN, applies to the conduct of states in cyberspace. On international humanitarian law (IHL), Thailand stresses that IHL applies to cyber operations conducted in the context of armed conflict and to all forms of warfare, including cyberwarfare. Thailand also affirms that sovereignty applies in full to state activities conducted in cyberspace, and that even where a cyber operation does not rise to the level of a prohibited use of force under international law, it still amounts to an internationally wrongful act.

New Zealand’s updated statement builds upon its 2020 position by reaffirming that international law applies to cyberspace “in the same way it applies in the physical world.” It provides expanded commentary on the principles of sovereignty and due diligence, explicitly recognising that New Zealand does not consider that territorial sovereignty prohibits every unauthorised intrusion into a foreign ICT system, or that it prohibits all cyber activity which has effects on the territory of another state. The statement further provides that, in New Zealand’s view, the rule of territorial sovereignty, as applied in the cyber context, does not prohibit states from taking necessary measures, with minimally destructive effects, to defend against the harmful activity of malicious cyber actors.

South Korea’s position focuses on the applicability of international law to military cyber operations. It affirms the applicability of the UN Charter and IHL, emphasising restraint and the protection of civilians in cyberspace. On sovereignty, its position is close to Thailand’s. South Korea affirms that no state may intervene in the domestic affairs of another, recalling that this principle is codified in Article 2(7) of the UN Charter and has been affirmed in international jurisprudence; hence, according to the document, the principle of sovereignty applies equally in cyberspace. The position paper also highlights that under general international law, lawful countermeasures are permissible in response to internationally wrongful acts, and that this principle likewise applies in cyberspace. Given the anonymity and transboundary nature of cyberspace, which often places the injured state at a structural disadvantage, countermeasures may be recognised as a necessary means of ensuring adequate protection for the injured state.

These publications come at a critical juncture as the OEWG seeks to finalise its report on responsible state behaviour in cyberspace. With these latest contributions, the number of publicly released national positions on international law in cyberspace continues to grow, reflecting increasing engagement from states across regions.

Ransomware gangs feud after M&S cyberattack

A turf war has erupted between two major ransomware gangs, DragonForce and RansomHub, following cyberattacks on UK retailers including Marks and Spencer and Harrods.

Security experts warn that the feud could result in companies being extorted multiple times as criminal groups compete to control the lucrative ransomware-as-a-service (RaaS) market.

DragonForce, a predominantly Russian-speaking group, reportedly triggered the conflict by rebranding as a cartel and expanding its affiliate base.

Tensions escalated after RansomHub’s dark-web site was taken offline in what is believed to be a hostile move by DragonForce, prompting retaliation through digital vandalism.

Cybersecurity analysts say the breakdown in relationships between hacking groups has created instability, increasing the likelihood of future attacks. Experts also point to a growing risk of follow-up extortion attempts by affiliates when criminal partnerships collapse.

The rivalry reflects the ruthless dynamics of the ransomware economy, which is forecast to cost businesses $10 trillion globally by the end of 2025. Victims now face not only technical challenges but also the legal and financial fallout of navigating increasingly unpredictable criminal networks.


US Cyber Command proposes $5M AI initiative for 2026 budget

US Cyber Command is seeking $5 million in its fiscal year 2026 budget to launch a new AI project to advance data integration and operational capabilities.

While the amount represents a small fraction of the command’s $1.3 billion research and development (R&D) portfolio, the effort reflects growing emphasis on incorporating AI into cyber operations.

The initiative follows congressional direction set in the fiscal year (FY) 2023 National Defense Authorization Act, which tasked Cyber Command and the Department of Defense’s Chief Information Officer—working with the Chief Digital and Artificial Intelligence Officer, DARPA, the NSA, and the Undersecretary of Defense for Research and Engineering—to produce a five-year guide and implementation plan for rapid AI adoption.

The resulting roadmap, developed shortly afterwards, identified priorities for deploying AI systems, applications, and supporting data processes across cyber forces.

Cyber Command formed an AI task force within its Cyber National Mission Force (CNMF) to operationalise these priorities. The newly proposed funding would support the task force’s efforts to establish core data standards, curate and tag operational data, and accelerate the integration of AI and machine learning solutions.

Known as Artificial Intelligence for Cyberspace Operations, the project will focus on piloting AI technologies using an agile 90-day cycle. This approach is designed to rapidly assess potential solutions against real-world use cases, enabling quick iteration in response to evolving cyber threats.

Budget documents indicate the CNMF plans to explore how AI can enhance threat detection, automate data analysis, and support decision-making processes. The command’s Cyber Immersion Laboratory will be essential in testing and evaluating these cyber capabilities, with external organisations conducting independent operational assessments.

The AI roadmap identifies five categories for applying AI across Cyber Command’s enterprise: vulnerabilities and exploits; network security, monitoring, and visualisation; modelling and predictive analytics; persona and identity management; and infrastructure and transport systems.

To fund this effort, Cyber Command plans to shift resources from its operations and maintenance account into its R&D budget as part of the transition from FY2025 to FY2026.


Humanitarian, peace, and media sectors join forces to tackle harmful information

At the WSIS+20 High-Level Event in Geneva, a powerful session brought together humanitarian, peacebuilding, and media development actors to confront the growing threat of disinformation, more broadly reframed as ‘harmful information.’ Panellists emphasised that false or misleading content, whether deliberately spread or unintentionally harmful, can have dire consequences for already vulnerable populations, fuelling violence, eroding trust, and distorting social narratives.

The session moderator, Caroline Vuillemin of Fondation Hirondelle, underscored the urgency of uniting these sectors to protect those most at risk.

Hans-Peter Wyss of the Swiss Agency for Development and Cooperation presented the ‘triple nexus’ approach, advocating for coordinated interventions across humanitarian, development, and peacebuilding efforts. He stressed the vital role of trust, institutional flexibility, and the full inclusion of independent media as strategic actors.

Philippe Stoll of the ICRC detailed an initiative that focuses on the tangible harms of information—physical, economic, psychological, and societal—rather than debating truth. That initiative, grounded in a ‘detect, assess, respond’ framework, works from local volunteer training up to global advocacy and research on emerging challenges like deepfakes.

Donatella Rostagno of Interpeace shared field experiences from the Great Lakes region, where youth-led efforts to counter misinformation have created new channels for dialogue in highly polarised societies. She highlighted the importance of inclusive platforms where communities can express their own visions of peace and hear others’.

Meanwhile, Tammam Aloudat of The New Humanitarian critiqued the often selective framing of disinformation, urging support for local journalism and transparency about political biases, including the harm caused by omission and silence.

The session concluded with calls for sustainable funding and multi-level coordination, recognising that responses must be tailored locally while engaging globally. Despite differing views, all panellists agreed on the need to shift from a narrow focus on disinformation to a broader and more nuanced understanding of information harm, grounded in cooperation, local agency, and collective responsibility.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Rights before risks: Rethinking quantum innovation at WSIS+20

At the WSIS+20 High-Level Event in Geneva, a powerful call was made to ensure the development of quantum technologies remains rooted in human rights and inclusive governance. A UNESCO-led session titled ‘Human Rights-Centred Global Governance of Quantum Technologies’ presented key findings from a new issue brief co-authored with Sciences Po and the European University Institute.

It outlined major risks—such as quantum’s dual-use nature threatening encryption, a widening technological divide, and severe gender imbalances in the field—and urged immediate global action to build safeguards before quantum capabilities mature.

UNESCO’s Guilherme Canela emphasised that innovation and human rights are not mutually exclusive but fundamentally interlinked, warning against a ‘false dichotomy’ between the two. Lead author Shamira Ahmed highlighted the need for proactive frameworks to ensure quantum benefits are equitably distributed and not used to deepen global inequalities or erode rights.

With 79% of quantum firms lacking female leadership and a mere 1 in 54 job applicants being women, the gender gap was called ‘staggering.’ Ahmed proposed infrastructure investment, policy reforms, capacity development, and leveraging the UN’s International Year of Quantum to accelerate global discussions.

Panellists echoed the urgency. Constance Bommelaer de Leusse from Sciences Po advocated for embedding multistakeholder participation into governance processes and warned of a looming ‘quantum arms race.’ Professor Pieter Vermaas of Delft University urged moving from talk to international collaboration, suggesting the creation of global quantum research centres.

Journalist Elodie Vialle raised alarms about quantum’s potential to supercharge surveillance, endangering press freedom and digital privacy, and underscored the need to close the cultural gap between technologists and civil society.

Overall, the session championed a future where quantum technology is developed transparently, governed globally, and serves as a digital public good, bridging divides rather than deepening them. Speakers agreed that the time to act is now, before today’s opportunities become tomorrow’s crises.


How agentic AI is transforming cybersecurity

Cybersecurity is gaining a new teammate, one that never sleeps and acts independently. Agentic AI doesn’t wait for instructions. It detects threats, investigates, and responds in real time. This new class of AI is beginning to change the way we approach cyber defence.

Unlike traditional AI systems, agentic AI operates with autonomy. It sets objectives, adapts to environments, and self-corrects without waiting for human input. In cybersecurity, this means instant detection and response that goes beyond simple automation.

With networks more complex than ever, security teams are stretched thin. Agentic AI offers relief by executing actions like isolating compromised systems or rewriting firewall rules. This technology promises to ease alert fatigue and keep up with evasive threats.
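Autonomous responses of this kind can be pictured as a severity-driven policy loop. The sketch below is purely illustrative: the alert fields and the response functions (`quarantine_host`, `block_ip`) are hypothetical stand-ins, not any vendor's API.

```python
# Minimal sketch of an agentic detect-assess-respond policy.
# All names here are hypothetical illustrations, not a real product API.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    source_ip: str
    severity: int  # 0 (informational) .. 10 (critical)

def quarantine_host(host: str) -> str:
    """Placeholder for an endpoint-isolation API call."""
    return f"isolated {host}"

def block_ip(ip: str) -> str:
    """Placeholder for a firewall-rule update."""
    return f"blocked {ip}"

def respond(alert: Alert) -> list[str]:
    """Choose response actions autonomously based on alert severity."""
    actions = []
    if alert.severity >= 8:
        # Critical: isolate the endpoint and block the attacker.
        actions.append(quarantine_host(alert.host))
        actions.append(block_ip(alert.source_ip))
    elif alert.severity >= 5:
        # Moderate: block traffic but leave the host online.
        actions.append(block_ip(alert.source_ip))
    # Low severity: log only, no autonomous action.
    return actions

print(respond(Alert("web-01", "203.0.113.9", 9)))
```

In practice the thresholds and actions would be tuned per environment, with human review of high-impact steps, which is exactly the "balanced control" experts call for below.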

A 2025 Deloitte report says 25% of firms using generative AI will pilot agentic AI this year. SailPoint found that 98% of organisations plan to expand AI agent use in the next 12 months. But rapid adoption also raises concerns: 96% of tech workers see AI agents as security risks.

The integration of AI agents is expanding to the cloud, endpoints, and even physical security. Yet with new power come new vulnerabilities, from adversaries mimicking AI behaviour to the risk of excessive automation without human checks.

Key challenges include ethical bias, unpredictable errors, and uncertain regulation. In sectors like healthcare and finance, oversight and governance must keep pace. The solution lies in balanced control and continuous human-AI collaboration.

Cybersecurity careers are shifting in response. Hybrid roles such as AI Security Analysts and Threat Intelligence Automation Architects are emerging. To stay relevant, professionals must bridge AI knowledge with security architecture.

Agentic AI is redefining cybersecurity. It boosts speed and intelligence but demands new skills and strong leadership. Adaptation is essential for those who wish to thrive in tomorrow’s AI-driven security landscape.


US targets Southeast Asia to stop AI chip leaks to China

The US is preparing stricter export controls on high-end Nvidia AI chips destined for Malaysia and Thailand, in a move to block China’s indirect access to advanced GPU hardware.

According to sources cited by Bloomberg, the new restrictions would require exporters to obtain licences before sending AI processors to either country.

The change follows reports that Chinese engineers have hand-carried data to Malaysia for AI training after Singapore began restricting chip re-exports.

Washington suspects Chinese firms are using Southeast Asian intermediaries, including shell companies, to bypass existing export bans on AI chips like Nvidia’s H100.

Although some easing has occurred between the US and China in areas such as ethane and engine components, Washington remains committed to its broader decoupling strategy. The proposed measures will reportedly include safeguards to prevent regional supply chain disruption.

Malaysia’s Trade Minister confirmed earlier this year that the US had requested detailed monitoring of all Nvidia chip shipments into the country.

As the global race for AI dominance intensifies, Washington appears determined to tighten enforcement and limit Beijing’s access to advanced computing power.


Ransomware disrupts Ingram Micro’s systems and operations

Ingram Micro has confirmed a ransomware attack that affected internal systems and forced some services offline. The global IT distributor says it acted quickly to contain the incident, implemented mitigation steps, and involved cybersecurity experts.

The company is working with a third-party firm to investigate the breach and has informed law enforcement. Order processing and shipping operations have been disrupted while systems are being restored.

While details remain limited, the attack is reportedly linked to the SafePay ransomware group.

According to BleepingComputer, the gang exploited Ingram’s GlobalProtect VPN to gain access last Thursday.

In response, Ingram Micro shut down multiple platforms, including GlobalProtect VPN and its Xvantage AI platform. Employees were instructed to work remotely as a precaution during the response effort.

SafePay first appeared in late 2024 and has targeted over 220 companies. It often breaches networks using password spraying and compromised credentials, primarily through VPNs.
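Password spraying inverts classic brute force: rather than trying many passwords against one account, the attacker tries one common password across many accounts, staying under per-account lockout thresholds. A minimal, hypothetical detection heuristic (the function name and threshold are illustrative, not from any specific tool) flags any source whose failed logins span many distinct usernames:

```python
# Hypothetical password-spraying detection heuristic: a single source
# producing failed logins for many DISTINCT users is suspicious, whereas
# one user repeatedly mistyping a password is not.
from collections import defaultdict

def spray_suspects(failed_logins, user_threshold=5):
    """failed_logins: iterable of (source_ip, username) pairs."""
    users_per_ip = defaultdict(set)
    for ip, user in failed_logins:
        users_per_ip[ip].add(user)
    return [ip for ip, users in users_per_ip.items()
            if len(users) >= user_threshold]

events = [("198.51.100.7", f"user{i}") for i in range(8)]  # one IP, 8 users
events += [("192.0.2.4", "alice")] * 3                     # one user retrying
print(spray_suspects(events))  # → ['198.51.100.7']
```

Real deployments would add time windows and compare against baselines, but the core signal, many distinct accounts failing from one source, is the same.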

Ingram Micro has not disclosed what data was accessed or the size of the ransom demand.

The company apologised for the disruption and said it is working to restore systems as quickly as possible.
