Cyberattack on Nova Scotia Power exposes sensitive data of 280,000 customers

Canada’s top cyber-defence official has spoken out following the ransomware attack that compromised the personal data of 280,000 Nova Scotia Power customers.

The breach, which occurred on 19 March but went undetected until 25 April, affected over half of the utility’s customer base. Stolen data included names, addresses, birthdates, driver’s licences, social insurance numbers, and banking details.

Rajiv Gupta, head of the Canadian Centre for Cyber Security, confirmed that Nova Scotia Power had contacted the agency following the incident.

While he refrained from discussing operational details or attributing blame, he highlighted the rising frequency of ransomware attacks against critical infrastructure across Canada.

He explained how criminal groups use double extortion tactics — stealing data and locking systems — to pressure organisations into paying ransoms, often without guaranteeing system restoration or data confidentiality.

Although the utility declined to pay the ransom, the fallout has led to a wave of scrutiny. Gupta warned that increased interconnectivity and the integration of legacy systems with internet-facing platforms have heightened vulnerability.

He urged utilities and other infrastructure operators to build defences based on worst-case scenarios and to adopt recommended cyber hygiene practices and the Centre’s ransomware playbook.

In response to the breach, the Nova Scotia Energy Board has approved a $1.8 million investment in cybersecurity upgrades.

The Canadian cyber agency, although lacking regulatory authority, continues to provide support and share lessons from such incidents with other organisations to raise national resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan tightens rules on chip shipments to China

Taiwan has officially banned the export of chips and chiplets to China’s Huawei and SMIC, joining the US in tightening restrictions on advanced semiconductor transfers.

The decision follows reports that TSMC, the world’s largest contract chipmaker, was deceived into supplying chiplets used in Huawei’s Ascend 910B AI accelerator. The US Commerce Department had reportedly considered a fine of over $1 billion against TSMC for that incident.

Taiwan’s new rules aim to prevent further breaches by requiring export permits for any transactions with Huawei or SMIC.

The distinction between chips and chiplets is key to the case. Traditional chips are built as single-die monoliths on a single process node, while chiplets are modular dies that can combine specialised components, such as CPU or AI cores, potentially manufactured on different nodes.

Huawei allegedly used shell companies to acquire chiplets from TSMC, bypassing existing US restrictions. If TSMC had known the true customer, it likely would have withheld the order. Taiwan’s new export controls are designed to ensure stricter oversight of future transactions and prevent repeat deceptions.

The broader geopolitical stakes are clear. Taiwan views the transfer of advanced chips to China as a national security threat, given Beijing’s ambitions to reunify with Taiwan and the potential militarisation of high-end semiconductors.

With Huawei claiming its processors are nearly on par with Western chips—though analysts argue they lag two to three generations behind—the export ban could further isolate China’s chipmakers.

Speculation persists that Taiwan’s move was partly influenced by negotiations with the US to avoid the proposed fine on TSMC, bringing both countries into closer alignment on chip sanctions.

Denmark moves to replace Microsoft software as part of digital sovereignty strategy

Prior to the Danish government’s formal decision, the cities of Copenhagen and Aarhus had already announced plans to reduce reliance on Microsoft software and cloud services. The national government has now followed suit.

Caroline Stage, Denmark’s Minister of Digitalisation, confirmed that the government will begin transitioning from Microsoft Office to the open-source alternative, LibreOffice. The decision aligns with broader European Union efforts to enhance digital sovereignty—a concept referring to the ability of states to maintain control over their digital infrastructure, data, and technologies.

EU member states have increasingly prioritised digital sovereignty in response to a range of concerns, including security, economic resilience, regulatory control, and the geopolitical implications of dependency on non-European technology providers.

Among the considerations are questions about data governance, operational autonomy, and the risks associated with potential service disruptions in times of political tension. For example, reports following US sanctions against the International Criminal Court (ICC) suggest that Microsoft temporarily restricted access to email services for the ICC’s Chief Prosecutor, Karim Khan, highlighting the potential vulnerabilities linked to foreign service providers.

Denmark’s move is part of a wider trend within the EU aimed at diversifying digital service providers and strengthening domestic or European alternatives. LibreOffice is developed by The Document Foundation (TDF), an independent, non-profit organisation based in Germany.

UK National Cyber Security Centre calls for strategic cybersecurity policy agenda

The United Kingdom’s National Cyber Security Centre (NCSC), part of GCHQ, has called for the adoption of a long-term, strategic policy agenda to address increasing cybersecurity risks. That appeal follows prolonged delays in the introduction of updated cybersecurity legislation by the UK government.

In a blog post, co-authored by Ollie Whitehouse, NCSC’s Chief Technology Officer, and Paul W., the Principal Technical Director, the agency underscored the need for more political engagement in shaping the country’s cybersecurity landscape. Although the NCSC does not possess policymaking powers, its latest message highlights its growing concern over the UK’s limited progress in implementing comprehensive cybersecurity reforms.

Whitehouse has previously argued that the current technology market fails to incentivise the development and maintenance of secure digital products. He asserts that while the technical community knows how to build secure systems, commercial pressures and market conditions often favour speed, cost-cutting, and short-term gains over security. That, he notes, is a structural issue that cannot be resolved through voluntary best practices alone and likely requires legislative and regulatory measures.

The UK government has yet to introduce the long-anticipated Cyber Security and Resilience Bill to Parliament. Initially described by the previous government as a step toward modernising the country’s cyber legislation, the bill remains unpublished. Another delayed effort is a consultation led by the Home Office on ransomware response policy, which was postponed due to the snap election and is still awaiting an official government response.

The agency’s call mirrors similar debates in the United States, where former Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly advocated for holding software vendors accountable for product security. The Biden administration’s national cybersecurity strategy introduced early steps toward vendor liability, a concept that has gained traction among experts like Whitehouse.

However, the current US administration under President Trump has since rolled back some of these requirements, most notably through a recent executive order eliminating obligations for government contractors to attest to their products’ security.

By contrast, the European Union has advanced several legislative initiatives aimed at strengthening digital security, including the Cyber Resilience Act. Yet, these efforts face challenges of their own, such as reconciling economic priorities with cybersecurity requirements and adapting EU-wide standards to national legal systems.

In its blog post, the NCSC reiterated that the financial and societal burden of cybersecurity failures is currently borne by consumers, governments, insurers, and other downstream actors. The agency argues that addressing these issues requires a reassessment of underlying market dynamics—particularly those that do not reward secure development practices or long-term resilience.

While the NCSC lacks the authority to enforce regulations, its increasingly direct communications reflect a broader shift within parts of the UK’s cybersecurity community toward advocating for more comprehensive policy intervention.

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after takedowns.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.

Trump highlights crypto plans at Coinbase summit

US President Donald Trump sent a prerecorded message to Coinbase’s State of Crypto Summit, reaffirming his commitment to advancing crypto regulation in the US.

The administration is working with Congress to pass the GENIUS Act, which would support dollar-backed stablecoins and establish clear market frameworks.

Congress is preparing to vote on the GENIUS Act in the Senate, while the House is moving forward with the CLARITY Act. The latter seeks to clarify the regulatory roles of the SEC and the Commodity Futures Trading Commission concerning digital assets.

Both bills form part of a broader effort to create a clear legal environment for the crypto industry.

Some Democrats oppose Trump’s crypto ties, especially the family-backed stablecoin from World Liberty Financial. Despite tensions, Trump continues promoting his crypto agenda through conferences and videos.

Meta and TikTok contest the EU’s compliance charges

Meta and TikTok have taken their fight against an EU supervisory fee to Europe’s second-highest court, arguing that the charges are disproportionate and based on flawed calculations.

The fee, introduced under the Digital Services Act (DSA), requires major online platforms to pay 0.05% of their annual global net income to cover the European Commission’s oversight costs.
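The levy described above is straightforward arithmetic; a minimal sketch, where only the 0.05% rate is taken from the article and the revenue figure is purely hypothetical:

```python
# Sketch of the DSA supervisory fee as described above:
# 0.05% of a platform's annual global net income.
# The rate comes from the article; the income figure is illustrative.

FEE_RATE = 0.0005  # 0.05%

def supervisory_fee(annual_global_net_income: float) -> float:
    """Fee owed under the DSA supervisory levy, per the article's description."""
    return annual_global_net_income * FEE_RATE

# A hypothetical platform with 100 billion EUR in annual global net income
# would owe 50 million EUR.
print(f"{supervisory_fee(100e9):,.0f} EUR")
```

Framed this way, Meta’s complaint is about which income figure feeds the calculation: consolidated group revenue versus that of the EU-based subsidiary.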

Meta questioned the Commission’s methodology, claiming the levy was based on the entire group’s revenue instead of that of its specific EU-based subsidiary.

The company’s lawyer told judges it still lacked clarity on how the fee was calculated, describing the process as opaque and inconsistent with the spirit of the law.

TikTok also criticised the charge, alleging inaccurate and discriminatory data inflated its payment.

Its legal team argued that user numbers were double-counted when people switched between devices, and that the Commission had wrongly calculated fees based on group profits rather than platform-specific earnings.

The Commission defended its approach, saying group resources should bear the cost when consolidated accounts are used. A ruling is expected from the General Court sometime next year.

AI startup faces lawsuit from Disney and Universal

Two of Hollywood’s most powerful studios, Disney and Universal, have launched a copyright infringement lawsuit against the AI firm Midjourney, accusing it of illegally replicating iconic characters.

The studios claim the San Francisco-based company copied their creative works without permission, describing it as a ‘bottomless pit of plagiarism’.

Characters such as Darth Vader, Elsa, and the Minions were cited in the 143-page complaint, which alleges Midjourney used these images to train its AI system and generate similar content.

Disney and Universal argue that the AI firm failed to invest in the creative process, yet profited heavily from the output, reportedly earning US$300 million in paid subscriptions last year.

Despite early attempts by the studios to raise concerns and propose safeguards already adopted by other AI developers, Midjourney allegedly ignored them and pressed ahead with further product releases. The company, which calls itself a small, self-funded team of 11, has declined to comment on the lawsuit directly but insists it has a long future ahead.

Disney’s legal chief, Horacio Gutierrez, stressed the importance of protecting creative works that result from decades of investment. While supporting AI as a tool for innovation, he maintained that ‘piracy is piracy’, regardless of whether humans or machines carry it out.

The studios are seeking damages and a court order to stop the AI firm from continuing its alleged copyright violations.

Durov questions motives behind French arrest

Telegram founder Pavel Durov says he remains baffled by his detention in France, describing the incident as politically charged and unjustified. In his first interview since his August 2024 arrest, Durov said French prosecutors treated Telegram’s operations as a mystery.

Durov was indicted on six charges, including complicity in criminal activity, money laundering, and failing to respond to legal requests. He denied the accusations, saying that Telegram is audited by a top-tier accounting firm and spends millions on compliance every quarter.

‘We did nothing wrong,’ he said, accusing French authorities of failing to follow due legal process.

His interviewer, Carlson, criticised the arrest as an attempt to humiliate Durov and questioned why civil liberties advocates were silent.

In response, Durov pointed out that over nine million Telegram users have signed a letter demanding his release. He also emphasised that Telegram is prepared to leave countries that oppose its values.

Telegram’s global user base continues to grow rapidly, reaching one billion monthly active users as of March 2025.

Reddit targets AI firm over scraped sports posts

Reddit has taken legal action against AI company Anthropic, accusing it of scraping content from the platform’s sports-focused communities.

The lawsuit claims Anthropic violated Reddit’s user agreement by collecting posts without permission, particularly from fan-driven discussions that are central to how sports content is shared online.

Reddit argues the scraping undermines its obligations to over 100 million daily users, especially around privacy and user control. According to the filing, Anthropic’s actions undercut assurances that users can manage or delete their content as they see fit.

The platform emphasises that users gain no benefit from technology built using their contributions.

These online sports communities are rich sources of original fan commentary and analysis. On a large scale, such content could enable AI models to imitate sports fan behaviour with impressive accuracy.

While teams or platforms might use such models to enhance engagement or communication, Reddit warns that unauthorised use brings serious ethical and legal risks.

The case could influence how AI companies handle user-generated content across the internet, not just in sports. As web scraping grows more common, the outcome of the dispute may shape future standards for AI training practices and online content rights.
