Meta under pressure after small business loses thousands

A New Orleans bar owner lost $10,000 after cyber criminals hijacked her Facebook business account, highlighting the growing threat of online scams targeting small businesses. Despite efforts to recover the account, she was locked out for weeks, disrupting sales.

The US-based scam involved a fake Meta support message that tricked the owner into giving hackers access to her page. Once inside, the attackers began running ads and draining funds from the business account linked to the platform.

Cyber fraud like this is increasingly common as small businesses rely more on social media to reach their customers. The incident has renewed calls for tech giants like Meta to implement stronger user protections and improve support for scam victims.

Meta says it has systems to detect and remove fraudulent activity, but did not respond directly to this case. Experts argue that current protections are insufficient, especially for small firms with fewer resources and little recourse after attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers steal $500K via malicious Cursor AI extension

A cyberattack targeting the Cursor AI development environment has resulted in the theft of $500,000 in cryptocurrency from a Russian developer. Despite following strong security practices on a freshly installed operating system, the victim was compromised after downloading a malicious extension named ‘Solidity Language’ in June 2025.

Masquerading as a syntax highlighting tool, the fake extension exploited search rankings to appear more legitimate than actual alternatives. Once installed, the extension served as a dropper for malware rather than offering any development features.

It contacted a command-and-control server and began deploying scripts designed to check for remote desktop software and install backdoors. The malware used PowerShell scripts to install ScreenConnect, granting persistent access to the victim’s system through a relay server.

Securelist analysts found that the extension exploited Open VSX registry algorithms by publishing with a more recent update date. Further investigation revealed the same attack methods were used in other packages, including npm’s ‘solsafe’ and three VS Code extensions.
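The ranking flaw can be illustrated with a small, purely hypothetical sketch (the package names, download counts and dates below are invented, not taken from the actual incident): a search algorithm that weights update recency over reputation lets a freshly published clone outrank the established original.

```python
from datetime import date

# Hypothetical registry entries: (name, total_downloads, last_update)
packages = [
    ("solidity", 2_000_000, date(2024, 11, 3)),    # established, widely used
    ("solidity-language", 54, date(2025, 6, 20)),  # freshly published clone
]

# A naive relevance ranking that sorts by update recency alone,
# ignoring download counts and publisher reputation
ranked = sorted(packages, key=lambda p: p[2], reverse=True)

print(ranked[0][0])  # the low-download clone surfaces first in search results
```

A registry that blends recency with signals such as downloads, publisher verification and review history would not be fooled by a simple re-publish.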

The campaign reflects a growing trend of supply chain attacks exploiting AI coding tools to distribute persistent, stealthy malware.

AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biased or incomplete data, two damaging outcomes are likely: the system may overpay some claims while wrongly rejecting genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise—not replace it—striking a balance between innovation and responsibility.

Hackers use fake Termius app to infect macOS devices

Hackers are bundling legitimate Mac apps with ZuRu malware and poisoning search results to lure users into downloading trojanised versions. Security firm SentinelOne reported that the Termius SSH client was recently compromised and distributed through malicious domains and fake downloads.

The ZuRu backdoor, originally detected in 2021, allows attackers to silently access infected machines and execute remote commands undetected. Attackers continue to target developers and IT professionals by trojanising trusted tools such as SecureCRT, Navicat, and Microsoft Remote Desktop.

Infected disk image files are slightly larger than legitimate ones due to embedded malicious binaries. Victims unknowingly launch malware alongside the real app.

The malware bypasses macOS code-signing protections by injecting a temporary developer signature into the compromised application bundle. The updated variant of ZuRu requires macOS Sonoma 14.1 or newer and supports advanced command-and-control functions using the open-source Khepri beacon.

The functions include file transfers, command execution, system reconnaissance and process control, with captured outputs sent back to attacker-controlled domains. The latest campaign used termius.fun and termius.info to host the trojanised packages. Affected users often lack proper endpoint security.

Harnessing the power of space: Bridging innovation and the SDGs

At the WSIS+20 High-Level Event in Geneva, experts gathered to explore how a growing and diversifying space ecosystem can be harnessed to meet the Sustainable Development Goals (SDGs). Moderated by Alexandre Vallet from ITU, the panel highlighted how space has evolved from providing niche satellite connectivity to enabling comprehensive systems that address environmental, humanitarian, and developmental challenges on a global scale.

Almudena Azcarate-Ortega of UNIDIR emphasised the importance of distinguishing between space security—focused on intentional threats like cyberattacks and jamming—and space safety, which concerns accidental hazards. She highlighted the legal gap in existing treaties and underlined how inconsistent interpretations of key terms complicate international negotiations.

Meanwhile, Dr Ingo Baumann traced the evolution of space law from Cold War-era compliance to modern frameworks that prioritise national competitiveness, such as the proposed EU Space Act.

Technological innovation also featured prominently. Bruno Bechard from Kineis presented how their IoT satellite constellation supports SDGs by monitoring wildlife, detecting forest fires, and improving supply chains across remote areas underserved by terrestrial networks. However, he noted that narrowband services like theirs face outdated regulatory frameworks and high fees, making market entry more difficult than for broadband providers.

Chloe Saboye-Pasquier of Ridespace closed with a call for more harmonised regulations. Her company brokers satellite launches and often navigates conflicting legal systems across countries.

She flagged radio frequency registration delays and a lack of mutual recognition between national laws as critical barriers, especially for newcomers and countries without dedicated space agencies. As the panel concluded, speakers agreed that achieving the SDGs through space innovation requires not just cutting-edge technology, but also cohesive global governance, clear legal standards, and inclusive access to space infrastructure.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Building digital resilience in an age of crisis

At the WSIS+20 High-Level Event in Geneva, the session ‘Information Society in Times of Risk’ spotlighted how societies can harness digital tools to weather crises more effectively. Experts and researchers from across the globe shared innovations and case studies that emphasised collaboration, inclusiveness, and preparedness.

Chairs Horst Kremers and Professor Ke Gong opened the discussion by reinforcing the UN’s all-of-society principle, which advocates cooperation among governments, civil society, tech companies, and academia in facing disaster risks.

The Singapore team unveiled their pioneering DRIVE framework—Digital Resilience Indicators for Veritable Empowerment—redefining resilience not as a personal skill set but as a dynamic process shaped by individuals’ environments, from family to national policies. They argued that digital resilience must include social dimensions such as citizenship, support networks, and systemic access, making it a collective responsibility in the digital era.

Turkish researchers analysed over 54,000 social media images shared after the 2023 earthquakes, showing how visual content can fuel digital solidarity and real-time coordination. However, they also revealed how the breakdown of communication infrastructure in the immediate aftermath severely hampered response efforts, underscoring the urgent need for robust and redundant networks.

Meanwhile, Chinese tech giant Tencent demonstrated how integrated platforms—such as WeChat and AI-powered tools—transform disaster response, enabling donations, rescues, and community support on a massive scale. Yet, presenters cautioned that while AI holds promise, its current role in real-time crisis management remains limited.

The session closed with calls for pro-social platform designs to combat polarisation and disinformation, and a shared commitment to building inclusive, digitally resilient societies that leave no one behind.

Parliamentarians step up as key players in shaping the digital future

At the 2025 WSIS+20 High-Level Event in Geneva, lawmakers from Egypt, Uruguay, Tanzania, and Thailand united to call for a transformative shift in how parliaments approach digital governance. Hosted by ITU and the IPU, the session emphasised that legislators are no longer passive observers but essential drivers of digital policy.

While digital innovation presents opportunities for growth and inclusion, it also brings serious challenges, chief among them the digital divide, online harms, and the risks posed by AI.

Speakers underscored a shared urgency to ensure digital policies are people-centred and grounded in human rights. Egypt’s Amira Saber spotlighted her country’s leap toward AI regulation and its rapid expansion of connectivity, but also expressed concerns over online censorship and inequality.

Uruguay’s Rodrigo Goñi warned that traditional, reactive policymaking won’t suffice in the fast-paced digital age, proposing a new paradigm of ‘political intelligence.’ Thailand’s Senator Nophadol In-na praised national digital progress but warned of growing gaps between urban and rural communities. Meanwhile, Tanzania’s Neema Lugangira pushed for more capacity-building, especially for female lawmakers, and direct dialogue between legislators and big tech companies.

Across the board, there was strong consensus – parliamentarians must be empowered with digital literacy and AI tools to legislate effectively. Both ITU and IPU committed to ramping up support through training, partnerships, and initiatives like the AI Skills Coalition. They also pledged to help parliaments engage directly with tech leaders and tackle issues such as online abuse, misinformation, and accessibility, particularly in the Global South.

The discussion ended with cautious optimism. While challenges are formidable, the collaborative spirit and concrete proposals laid out in Geneva point toward a digital future where democratic values and inclusivity remain central. As the December WSIS+20 review approaches, these commitments could start a new era in global digital governance, led not by technocrats alone but by informed, engaged, and forward-thinking parliamentarians.

Report shows China outpacing the US and EU in AI research

AI is increasingly viewed as a strategic asset rather than merely a technological development, and new research suggests China is now leading the global AI race.

A report titled ‘DeepSeek and the New Geopolitics of AI: China’s ascent to research pre-eminence in AI’, authored by Daniel Hook, CEO of Digital Science, highlights how China’s AI research output has grown to surpass that of the US, the EU and the UK combined.

According to data from Dimensions, a major global research database, China now accounts for over 40% of worldwide citation attention in AI-related studies. Beyond sheer academic output, the report also points to China’s dominance in AI-related patents.

On some indicators, China is outpacing the US tenfold in patent filings and company-affiliated research, signalling its capacity to convert academic work into tangible innovation.

Hook’s analysis covers AI research trends from 2000 to 2024, showing global AI publication volumes rising from just under 10,000 papers in 2000 to 60,000 in 2024.

China’s influence has steadily expanded since 2018, while the EU and the US have seen relative declines. The UK has largely maintained its position.

Clarivate, another analytics firm, reported similar findings, noting nearly 900,000 AI research papers produced in China in 2024, triple the figure from 2015.

Hook notes that governments increasingly view AI alongside energy or military power as a matter of national security. Instead of treating AI as a neutral technology, there is growing awareness that a lack of AI capability could have serious economic, political and social consequences.

The report suggests that understanding AI’s geopolitical implications has become essential for national policy.

AI technology drives sharp rise in synthetic abuse material

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

No longer crude or glitch-filled, such material now appears so lifelike that, under UK law, it must be treated as authentic recordings.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly and is no longer basic or easy to detect. What once involved clumsy manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material and share the exact web page where it is located.

Space operators face strict cybersecurity obligations under EU plan

The European Commission has unveiled a new draft law introducing cybersecurity requirements for space infrastructure, aiming to protect ground and orbital systems.

Operators must implement rigorous cyber risk management measures, including supply chain oversight, encryption, access control and incident response systems. A notable provision places direct accountability on company boards, which could be held personally liable for failures to comply.

The proposed law builds on existing EU regulations such as NIS 2 and DORA, with additional tailored obligations for the space domain. Non-EU firms will also fall within scope unless their home jurisdictions are recognised as offering equivalent regulatory protections.

Fines of up to 2% of global revenue are foreseen, with member states and the EU’s space agency EUSPA granted inspection and enforcement powers. Industry stakeholders are encouraged to engage with the legislative process and align existing cybersecurity frameworks with the Act’s provisions.
