CISA 2015 expiry threatens private sector threat sharing

Congress has fewer than 90 days to renew the Cybersecurity Information Sharing Act (CISA) of 2015 and avert a setback for cyber threat intelligence sharing. The law protects companies from liability when they share cyber threat indicators with the government or other firms, fostering collaboration.

Before CISA, companies hesitated to share threat data because of antitrust and data privacy concerns. CISA removed that ambiguity by offering explicit legal protections. Without reauthorisation, fear of lawsuits could silence private sector warnings, slowing responses to significant cyber incidents across critical infrastructure sectors.

Debates over reauthorisation include possible expansions of CISA’s scope. However, many lawmakers and industry groups in the United States now support a simple renewal. Health care, finance, and energy groups say the law is crucial for collective defence and rapid cyber threat mitigation.

Security experts warn that a lapse would reverse years of progress in information sharing, leaving networks more vulnerable to large-scale attacks. With only 35 working days left for Congress before the 30 September deadline, the pressure to act is mounting.

Meta under pressure after small business loses thousands

A New Orleans bar owner lost $10,000 after cyber criminals hijacked her Facebook business account, highlighting the growing threat of online scams targeting small businesses. Despite her efforts to recover the account, the business was locked out for weeks, disrupting sales.

The US-based scam involved a fake Meta support message that tricked the owner into giving hackers access to her page. Once inside, the attackers began running ads and draining funds from the business account linked to the platform.

Cyber fraud like this is increasingly common as small businesses rely more on social media to reach their customers. The incident has renewed calls for tech giants like Meta to implement stronger user protections and improve support for scam victims.

Meta says it has systems to detect and remove fraudulent activity, but did not respond directly to this case. Experts argue that current protections are insufficient, especially for small firms with fewer resources and little recourse after attacks.

Moscow targets crypto miners to protect AI infrastructure

Russia is preparing to ban cryptocurrency mining in data centres as it shifts national focus towards digitalisation and AI development. The draft law aims to prevent miners from accessing discounted power and infrastructure support reserved for AI-related operations.

Amendments to the bill, introduced at the request of President Vladimir Putin, would prohibit mining activity in facilities registered as official data centres. These centres would instead benefit from lower electricity rates and faster grid access to help scale computing power for big data and AI.

The legislation redefines data centres as communications infrastructure and places them under stricter classification and control. If passed, it could deal a blow to companies such as BitRiver, which operate large-scale mining hubs in regions like Irkutsk.

Putin defended the move by citing the strain on regional electricity grids and a need to use surplus energy wisely. While crypto mining was legalised in 2024, many Russian territories have imposed bans, raising questions about the industry’s long-term viability in the country.

AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biased or incomplete data, the system may overpay some claims while wrongly rejecting genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.
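
What 'auditable' can look like in practice is illustrated by the minimal sketch below: it compares overpayment and wrongful-rejection rates across customer segments, using entirely synthetic claim records and hypothetical segment labels. A skewed result for one segment is exactly the kind of signal a pre-deployment stress test should surface; a real audit would, of course, run against governed production data and regulator-agreed metrics.

```python
# Minimal, illustrative bias audit for an automated claims model.
# All records and segment labels here are synthetic and hypothetical.
from collections import defaultdict

# Each record: (customer_segment, model_decision, correct_decision)
claims = [
    ("urban", "approve", "approve"),
    ("urban", "approve", "reject"),   # overpayment: model approved a claim it should have rejected
    ("urban", "reject", "reject"),
    ("rural", "reject", "approve"),   # wrongful rejection of a genuine claim
    ("rural", "approve", "approve"),
    ("rural", "reject", "approve"),   # another wrongful rejection
]

stats = defaultdict(lambda: {"total": 0, "overpaid": 0, "wrongly_rejected": 0})

for segment, decision, truth in claims:
    s = stats[segment]
    s["total"] += 1
    if decision == "approve" and truth == "reject":
        s["overpaid"] += 1
    elif decision == "reject" and truth == "approve":
        s["wrongly_rejected"] += 1

# A large gap between segments is a red flag worth investigating before go-live.
for segment, s in sorted(stats.items()):
    print(
        f"{segment}: {s['overpaid'] / s['total']:.0%} overpaid, "
        f"{s['wrongly_rejected'] / s['total']:.0%} wrongly rejected (n={s['total']})"
    )
```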

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise—not replace it—striking a balance between innovation and responsibility.

Samsung confirms core Galaxy AI tools remain free

Samsung has confirmed that core Galaxy AI features will continue to be available free of charge for all users.

Speaking during the recent Galaxy Unpacked event, a company representative clarified that any AI tools installed on a device by default—such as Live Translate, Note Assist, Zoom Nightography and Audio Eraser—will not require a paid subscription.

Rather than leave users guessing, Samsung has publicly addressed the speculation around possible Galaxy AI subscription plans.

While there are no additional paid AI features on offer at present, the company has not ruled out future developments. Samsung has already hinted that upcoming subscription services linked to Samsung Health could eventually include extra AI capabilities.

Alongside Samsung’s announcement, attention has also turned towards Google’s freemium model for its Gemini AI assistant, which appears on many Android devices. Users can access basic features without charge, but upgrading to Google AI Pro or Ultra unlocks advanced tools and increased storage.

New Galaxy Z Fold 7 and Z Flip 7 handsets even come bundled with six months of free access to premium Google AI services.

Although Samsung is keeping its pre-installed Galaxy AI features free, industry observers expect further changes as AI continues to evolve.

Whether Samsung will follow Google’s path with a broader subscription model remains to be seen, but for now, essential Galaxy AI functions stay open to all users without extra cost.

Hackers use fake Termius app to infect macOS devices

Hackers are bundling legitimate Mac apps with ZuRu malware and poisoning search results to lure users into downloading trojanised versions. Security firm SentinelOne reported that the Termius SSH client was recently compromised and distributed through malicious domains and fake downloads.

The ZuRu backdoor, first detected in 2021, gives attackers covert access to infected machines and lets them execute remote commands undetected. Attackers continue to target developers and IT professionals by trojanising trusted tools such as SecureCRT, Navicat, and Microsoft Remote Desktop.

Infected disk image files are slightly larger than legitimate ones due to embedded malicious binaries. Victims unknowingly launch malware alongside the real app.

The malware bypasses macOS code-signing protections by injecting a temporary developer signature into the compromised application bundle. The updated variant of ZuRu requires macOS Sonoma 14.1 or newer and supports advanced command-and-control functions using the open-source Khepri beacon.

These functions include file transfers, command execution, system reconnaissance and process control, with captured output sent back to attacker-controlled domains. The latest campaign used the domains termius.fun and termius.info to host the trojanised packages. Affected users often lack proper endpoint security.
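
For developers and IT professionals downloading tools such as Termius, one practical precaution is to check a bundle's signature and Gatekeeper status before first launch. The sketch below is a macOS-only illustration that shells out to Apple's own codesign and spctl utilities; the application path is a placeholder, and the check complements rather than replaces proper endpoint security.

```python
# Illustrative macOS-only check of an app bundle's code signature and Gatekeeper
# assessment before first launch. The path below is a hypothetical placeholder.
import subprocess
import sys

APP_PATH = "/Applications/Termius.app"  # placeholder: wherever the downloaded app lives

def run(cmd):
    """Run a command and capture output without raising on a non-zero exit code."""
    return subprocess.run(cmd, capture_output=True, text=True)

# 1. Verify the signature covers the whole bundle and nothing has been modified.
verify = run(["codesign", "--verify", "--deep", "--strict", "--verbose=2", APP_PATH])

# 2. Display signing details; an unexpected or ad-hoc signer is a red flag.
details = run(["codesign", "-dv", "--verbose=2", APP_PATH])

# 3. Ask Gatekeeper whether it would accept the app (notarisation and policy checks).
gatekeeper = run(["spctl", "--assess", "--type", "execute", "--verbose", APP_PATH])

print("codesign verify:", "OK" if verify.returncode == 0 else verify.stderr.strip())
print(details.stderr.strip())  # codesign prints signing details to stderr
print("gatekeeper:", "accepted" if gatekeeper.returncode == 0 else gatekeeper.stderr.strip())

sys.exit(0 if verify.returncode == 0 and gatekeeper.returncode == 0 else 1)
```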

Huawei challenges Nvidia in global AI chip market

Huawei Technologies is exploring AI chip exports to the Middle East and Southeast Asia in a bid to compete with Nvidia, according to a Bloomberg News report published Thursday.

The Chinese telecom firm has contacted potential buyers in the United Arab Emirates, Saudi Arabia, and Thailand to promote its Ascend 910B chips, earlier-generation AI processors.

The offer involves a limited number of chips, reportedly in the low thousands, although specific quantities remain undisclosed. No deals have been finalised so far. Sources cited in the report said there is limited interest in the UAE, and the status of talks in Thailand remains uncertain.

Government representatives in Thailand and Saudi Arabia did not immediately respond to Reuters’ requests for comment. Huawei also declined to comment. The initiative is part of a broader strategy to expand into markets where US chipmakers have long held dominance.

Huawei also promotes remote access to CloudMatrix 384, a China-based AI system built using its more advanced chipsets. However, due to supply limitations, the company cannot export these high-end models outside China.

The Middle East has quickly become a high-demand region for AI infrastructure, attracting interest from leading technology companies. Nvidia has already struck several regional deals, positioning itself as a major player in AI development across Saudi Arabia and neighbouring countries.

Huawei is simultaneously focusing on domestic sales of its newer 910C chips, offering them to Chinese firms that cannot purchase US AI chips due to ongoing export restrictions imposed by Washington.

US administrations have long cited national security concerns in limiting China’s access to cutting-edge chip technologies, fearing their potential use in military applications.

‘With the current export controls, we are effectively out of the China datacenter market, which is now served only by competitors such as Huawei,’ an Nvidia spokesperson told Reuters.

New Gemini AI tool animates photos into short video clips

Google has rolled out a new feature for Gemini AI that transforms still photos into short, animated eight-second videos with sound. The capability is powered by Veo 3, Google’s latest video generation model, and is currently available to Gemini Advanced Ultra and Pro subscribers.

The tool can generate background noise, ambient audio and even spoken dialogue, and availability is gradually expanding to users in select countries, including India. At launch, access to the web interface is limited, though Google has announced that mobile support will follow later in the week.

To use the tool, users upload a photo, describe the intended motion, and optionally add prompts for sound effects or narration. Gemini then generates a 720p MP4 video in a 16:9 landscape format, automatically synchronising visuals and audio.

Josh Woodward, Vice President of the Gemini app and Google Labs, showcased the feature on X (formerly Twitter), animating a child’s drawing. ‘Still experimental, but we wanted our Pro and Ultra members to try it first,’ he said, calling the result fun and expressive.

To maintain authenticity, each video includes a visible ‘Veo’ watermark in the bottom-right corner and an invisible SynthID watermark. This hidden digital signature, developed by Google DeepMind, helps identify AI-generated content and preserve transparency around synthetic media.

The company has emphasised its commitment to responsible AI deployment by embedding traceable markers in all output from this tool. These safeguards come amid increasing scrutiny of generative video tools and deepfakes across digital platforms.

To animate a photo with Gemini AI's new tool, users click the 'tools' icon in the prompt bar, choose the 'video' option from the menu, upload the still image, describe the desired motion and, optionally, add instructions for sound effects or narration.

The underlying Veo 3 model was first introduced at Google I/O as the company’s most advanced video generation engine. It can produce high-quality visuals, simulate real-world physics, and even lip-sync dialogue from text and image-based prompts.

A Google blog post explains: ‘Veo 3 excels from text and image prompting to real-world physics and accurate lip syncing.’ The company says users can craft short story prompts and expect realistic, cinematic responses from the model.

WSIS+20: Inclusive ICT policies urged to close global digital divide

At the WSIS+20 High-Level Event in Geneva, Dr Hakikur Rahman and Dr Ranojit Kumar Dutta presented a sobering picture of global digital inequality, revealing that more than 2.6 billion people remain offline. Their session, marking two decades of the World Summit on the Information Society (WSIS), emphasised that affordability, poor infrastructure, and a lack of digital literacy continue to block access, especially for marginalised communities.

The speakers proposed a structured three-pillar framework — inclusion, ethics, and sustainability — to ensure that no one is left behind in the digital age.

The inclusion pillar advocated for universal connectivity through affordable broadband, multilingual content, and skills-building programs, citing India’s Digital India and Kenya’s Community Networks as examples of success. On ethics, they called for policies grounded in human rights, data privacy, and transparent AI governance, pointing to the EU’s AI Act and UNESCO guidelines as benchmarks.

The sustainability pillar highlighted the importance of energy-efficient infrastructure, proper e-waste management, and fair public-private collaboration, showcasing Rwanda’s green ICT strategy and Estonia’s e-residency program.

Dr Dutta presented detailed data from Bangladesh, showing stark urban-rural and gender-based gaps in internet access and digital literacy. While urban broadband penetration has soared, rural and female participation lags behind.

Encouraging trends, such as rising female enrollment in ICT education and the doubling of ICT sector employment since 2022, were tempered by low data protection awareness and a dire e-waste recycling rate of only 3%.

The session concluded with a call for coordinated global and regional action, embedding ethics and inclusion in every digital policy. The speakers urged stakeholders to bridge divides in connectivity, opportunity, access, and environmental responsibility, ensuring digital progress uplifts all communities.

Building digital resilience in an age of crisis

At the WSIS+20 High-Level Event in Geneva, the session ‘Information Society in Times of Risk’ spotlighted how societies can harness digital tools to weather crises more effectively. Experts and researchers from across the globe shared innovations and case studies that emphasised collaboration, inclusiveness, and preparedness.

Chairs Horst Kremers and Professor Ke Gong opened the discussion by reinforcing the UN’s all-of-society principle, which advocates cooperation among governments, civil society, tech companies, and academia in facing disaster risks.

The Singapore team unveiled their pioneering DRIVE framework—Digital Resilience Indicators for Veritable Empowerment—redefining resilience not as a personal skill set but as a dynamic process shaped by individuals’ environments, from family to national policies. They argued that digital resilience must include social dimensions such as citizenship, support networks, and systemic access, making it a collective responsibility in the digital era.

Turkish researchers analysed over 54,000 social media images shared after the 2023 earthquakes, showing how visual content can fuel digital solidarity and real-time coordination. However, they also revealed how the breakdown of communication infrastructure in the immediate aftermath severely hampered response efforts, underscoring the urgent need for robust and redundant networks.

Meanwhile, Chinese tech giant Tencent demonstrated how integrated platforms—such as WeChat and AI-powered tools—transform disaster response, enabling donations, rescues, and community support on a massive scale. Yet, presenters cautioned that while AI holds promise, its current role in real-time crisis management remains limited.

The session closed with calls for pro-social platform designs to combat polarisation and disinformation, and a shared commitment to building inclusive, digitally resilient societies that leave no one behind.
