Council presidency launches talks on AI deepfakes and cyberattacks

EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.

The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.

According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.

Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.

The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.

The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but also as a security risk with implications for elections, governance and institutional stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts debate when quantum computers could break modern encryption

Scientists are divided over when quantum computers will become powerful enough to break today’s digital encryption, a moment widely referred to as ‘Q-Day’.

While predictions range from just two years to several decades, experts agree that governments and companies must begin preparing urgently for a future where conventional security systems may fail.

Quantum computing uses subatomic behaviour to process data far faster than classical machines, enabling rapid decryption of information once considered secure.

Financial systems, healthcare data, government communications, and military networks could all become vulnerable as advanced quantum machines emerge.

Major technology firms have already made breakthroughs, accelerating concerns that encryption safeguards could be overwhelmed sooner than expected.

Several cybersecurity specialists warn that sensitive data is already being harvested and stored for future decryption, a strategy known as ‘harvest now, decrypt later’.

Regulators in the UK and the US have set timelines for shifting to post-quantum cryptography, aiming for full migration by 2030-2035. However, engineering challenges and unresolved technical barriers continue to cast uncertainty over the pace of progress.

Despite scepticism over timelines, experts agree that early preparation remains the safest approach, stressing that education, infrastructure upgrades and global cooperation are vital to prevent disruption as quantum technology advances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google faces new UK rules over AI summaries and publisher rights

The UK competition watchdog has proposed new rules that would force Google to give publishers greater control over how their content is used in search and AI tools.

The Competition and Markets Authority (CMA) plans to require opt-outs for AI-generated summaries and model training, marking the first major intervention under Britain’s new digital markets regime.

Publishers argue that generative AI threatens traffic and revenue by answering queries directly instead of sending users to the original sources.

The CMA proposal would also require clearer attribution of publisher content in AI results and stronger transparency around search rankings, including AI Overviews and conversational search features.

Additional measures under consultation include search engine choice screens on Android and Chrome, alongside stricter data portability obligations. The regulator says tailored obligations would give businesses and users more choice while supporting innovation in digital markets.

Google has warned that overly rigid controls could damage the user experience, describing the relationship between AI and search as complex.

The consultation runs until late February, with the outcome expected to shape how AI-powered search operates in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI surge ‘bigger than the internet’ but with risk of major shake-out

In a commentary highlighted by a BBC article, Cisco’s chief executive, Chuck Robbins, reportedly compared the current AI boom to the early dot-com bubble, suggesting that while AI’s long-term impact could be transformative, the market may also face a period of significant turbulence and ‘wreckage’ before durable winners emerge.

Robbins warned that massive capital flows into AI companies, many of which lack clear revenue paths, resemble past speculative cycles and could lead to sharp contractions or failures among weaker players in the tech ecosystem.

He also noted that productivity gains from AI may be real but come with job reshaping, security risks and economic disruptions along the way.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s Cyber Centre flags rising ransomware risks for 2025 to 2027

The national cyber authority of Canada has warned that ransomware will remain one of the country’s most serious cyber threats through 2027, as attacks become faster, cheaper and harder to detect.

The Canadian Centre for Cyber Security, part of Communications Security Establishment Canada, says ransomware now operates as a highly interconnected criminal ecosystem driven by financial motives and opportunistic targeting.

According to the outlook, threat actors are increasingly using AI and cryptocurrency while expanding extortion techniques beyond simple data encryption.

Businesses, public institutions and critical infrastructure in Canada remain at risk, with attackers continuously adapting their tactics, techniques and procedures to maximise financial returns.

The Cyber Centre stresses that basic cyber hygiene still provides strong protection. Regular software updates, multi-factor authentication and vigilance against phishing attempts significantly reduce exposure, even as attack methods evolve.

The report also highlights the importance of cooperation between government bodies, law enforcement, private organisations and the public.

Officials conclude that while ransomware threats will intensify over the next two years, early warnings, shared intelligence and preventive measures can limit damage.

Canada’s cyber authorities say continued investment in partnerships and guidance remains central to building national digital resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok struggles to stabilise US infrastructure after data centre outage

TikTok says recovery of its US infrastructure is progressing, although technical issues continue to affect parts of the platform after a data centre power outage.

The disruption followed the launch of a new US-based entity backed by American investors, a move aimed at avoiding a nationwide ban.

Users across the country reported problems with searches, video playback, posting content, loading comments and unexpected behaviour in the For You algorithm. TikTok said the outage also affected other apps and warned that slower load times and timeouts may persist before performance returns to normal.

In a statement posted by the TikTok USDS Joint Venture, the company said collaboration with its US data centre partner has restored much of the infrastructure, but posting new content may still trigger errors.

Creators may also see missing views, likes, or earnings due to server timeouts rather than actual data loss.

TikTok has not named the data centre partner involved, though severe winter storms across the US may have contributed to the outage. Despite growing scepticism around the timing of the disruption, the company insists that user data and engagement remain secure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Facial recognition and AI power Android’s new theft protection upgrades

Android is rolling out expanded theft protection features aimed at reducing financial fraud and safeguarding personal data when smartphones are stolen, with new security controls now available across recent device versions.

The latest updates introduce stronger protections against unauthorised access, including tighter lockout rules after failed authentication attempts and broader biometric safeguards covering third-party apps such as banking services and password managers.

Recovery tools are also being enhanced, with remote locking now offering optional security challenges to ensure only verified owners can secure lost or stolen devices through web access.

For new Android devices activated in Brazil, AI-powered theft detection and remote locking are enabled by default, using on-device intelligence to identify snatch-and-run incidents and immediately lock the screen.

The expanded protections reflect a broader shift towards multi-layered mobile security, as device makers respond to rising phone theft linked to identity fraud, financial crime, and data exploitation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Scam emails impersonating JFSC target island businesses

Island businesses have been alerted to scam emails impersonating an employee of the Jersey Financial Services Commission. The fraudulent messages use the fake address ‘thomas.niederberger@jerseyfsc.org.cliopost.com’ and falsely claim to relate to an internal review of a company’s profile and activity.

According to the JFSC, the emails attempt to pressure recipients into clicking a link to access supposed documents delivered via a so-called ‘CLIOPOST eFAX Delivery’ service.

The regulator has confirmed that these messages are a scam and are not connected to the JFSC in any way. Businesses are urged not to respond, click on links, or open attachments.

To verify genuine contact from the JFSC, organisations are advised to use only the official website and ensure emails come from the @jerseyfsc.org domain.
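The scam address is instructive because it embeds the genuine domain as a subdomain prefix of an unrelated domain, so a naive substring check would be fooled. As a minimal illustrative sketch (not official JFSC guidance), a correct check compares the address's actual domain against the official one:

```python
# Hedged sketch: verifying that a sender address truly belongs to the
# official @jerseyfsc.org domain. The scam address from the alert embeds
# the real domain as a subdomain prefix of cliopost.com, so a substring
# check like '"jerseyfsc.org" in address' would wrongly pass it.
from email.utils import parseaddr

OFFICIAL_DOMAIN = "jerseyfsc.org"

def is_official_sender(address: str) -> bool:
    """True only if the domain is exactly jerseyfsc.org or one of its
    subdomains, not merely a string that contains it."""
    _, addr = parseaddr(address)
    if "@" not in addr:
        return False
    domain = addr.rsplit("@", 1)[1].lower()
    return domain == OFFICIAL_DOMAIN or domain.endswith("." + OFFICIAL_DOMAIN)

# The fraudulent address: its registered domain is cliopost.com.
print(is_official_sender("thomas.niederberger@jerseyfsc.org.cliopost.com"))  # False
# A genuine-looking address on the official domain.
print(is_official_sender("info@jerseyfsc.org"))  # True
```

The key design point is anchoring the comparison at the end of the domain string: lookalike domains typically place the trusted name at the start, where substring and prefix checks cannot distinguish them.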

Anyone unsure about a message’s authenticity can contact the JFSC directly by phone. Additional guidance on preventing and responding to scams is available on the Jersey Fraud Prevention Forum’s social media channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India considers social media bans for children under 16

India is emerging as a potential test case for age-based social media restrictions as several states examine Australia-style bans on children’s access to platforms.

Goa and Andhra Pradesh are studying whether to prohibit social media use for those under 16, citing growing concerns over online safety and youth well-being. The debate has also reached the judiciary, with the Madras High Court urging the federal government to consider similar measures.

The proposals carry major implications for global technology companies, given that India’s internet population exceeds one billion users and continues to skew young.

Platforms such as Meta, Google and X rely heavily on India for long-term growth, advertising revenue and user expansion. Industry voices argue parental oversight is more effective than government bans, warning that restrictions could push minors towards unregulated digital spaces.

Australia’s under-16 ban, which entered force in late 2025, has already exposed enforcement difficulties, particularly around age verification and privacy risks. Determining users’ ages accurately remains challenging, while digital identity systems raise concerns about data security and surveillance.

Legal experts note that internet governance falls under India’s federal authority, limiting what individual states can enforce without central approval.

Although the data protection law of India includes safeguards for children, full implementation will extend through 2027, leaving policymakers to balance child protection, platform accountability and unintended consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!