Historic digital assets regulation bill approved by US Senate committee for the first time

The US Senate Agriculture Committee has voted along party lines to advance legislation on the cryptocurrency market structure, marking the first time such a bill has cleared a Senate committee.

The Digital Commodity Intermediaries Act passed with 12 Republicans voting in favour and 11 Democrats opposing, representing a significant development for digital asset regulation in the United States.

The legislation would grant the Commodity Futures Trading Commission new regulatory authority over digital commodities and establish consumer protections, including safeguards against conflicts of interest.

Chairman John Boozman proceeded with the bill after losing bipartisan support when Senator Cory Booker withdrew backing for the version presented. The Senate Banking Committee must approve its own version of the measure before the two can be combined and advanced to the Senate floor.

Democrats raised concerns about the legislation, particularly regarding President Donald Trump’s cryptocurrency ventures. Senator Booker stated the bill departed from bipartisan principles established in November, noting Republicans ‘walked away’ from previous agreements.

Democrats offered amendments to bar public officials from engaging in the crypto industry and to address foreign-adversary involvement in digital commodities, but all were rejected as outside the committee’s jurisdiction.

Senator Gillibrand expressed optimism about the bill’s advancement, whilst Boozman called the vote ‘a critical step towards creating clear rules’. The Senate Banking Committee’s consideration was postponed following opposition from the crypto industry, with no new hearing date set.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Netherlands faces rising digital sovereignty threat, data authority warns

The Dutch data protection authority has urged the government to act swiftly to protect the country’s digital sovereignty, warning that dependence on overseas technology firms could expose vital public services to significant risk.

Concern has intensified after DigiD, the national digital identity system, appeared set for acquisition by a US company, raising questions about long-term control of key infrastructure.

The watchdog argues that the Netherlands relies heavily on a small group of non-European cloud and IT providers, and stresses that public bodies lack clear exit strategies if foreign ownership suddenly shifts.

Additionally, the watchdog criticises the government for treating digital autonomy as an academic exercise rather than recognising its immediate implications for communication between the state and citizens.

In a letter to the economy minister, the authority calls for a unified national approach rather than fragmented decisions by individual public bodies.

It proposes sovereignty criteria for all government contracts and suggests termination clauses that enable the state to withdraw immediately if a provider is sold abroad. It also notes the importance of designing public services to allow smooth provider changes when required.

The watchdog urges the government to strengthen European capacity by investing in scalable domestic alternatives, including a Dutch-controlled government cloud. The economy ministry has declined to comment.

UK expands free AI training to reach 10 million workers by 2030

The government has expanded a joint UK government and industry programme offering free AI training to every adult, with the ambition of upskilling 10 million workers by 2030.

Newly benchmarked courses are available through the AI Skills Hub, giving people practical workplace skills while supporting Britain’s aim to become the fastest AI adopter in the G7.

The programme includes short online courses that teach workers in the UK how to use basic AI tools for everyday tasks such as drafting text, managing content and reducing administrative workloads.

Participants who complete approved training receive a government-backed virtual AI foundations badge, setting a national standard for AI capability across sectors.

Public sector staff, including NHS and local government employees, are among the groups targeted as the initiative expands.

Ministers also announced £27 million in funding to support local tech jobs, graduate traineeships and professional practice courses, alongside the launch of a new cross-government unit to monitor AI’s impact on jobs and labour markets.

Officials argue that widening access to AI skills will boost productivity, support economic growth and help workers adapt to technological change. The programme builds on existing digital skills initiatives and brings together government, industry and trade unions to shape a fair and resilient future of work.

Pornhub to block new UK users over tougher age-check rules

Pornhub will begin blocking access for new UK users from 2 February 2026, allowing entry only to people who had already created an account and completed age checks before that date. The company framed the move as a protest against how the UK’s Online Safety Act is being enforced.

The UK regime, overseen by Ofcom, requires porn services accessible in Britain to deploy ‘highly effective’ age assurance measures, not simple click-through age gates. Ofcom says traffic to pornography sites has fallen by about a third since the age-check deadline of 25 July 2025, and it has pursued investigations into dozens of services as enforcement ramps up.

Aylo, Pornhub’s parent company, argues the current approach is backfiring: it says users, both adults and minors, are shifting toward non-compliant sites, and it is campaigning for device-based age verification, handled at the operating-system or app-store level rather than through site-by-site checks. In parallel, UK VPN downloads surged after age checks began, underscoring how quickly users can route around country-based controls.

Privacy and security concerns become sharper when adult platforms are turned into identity checkpoints. In December 2025, reporting linked a large leak of Pornhub premium-user analytics data, including emails and viewing/search histories, to a breach involving a third-party analytics provider, underscoring how sensitive such datasets can be when they are collected or retained.

Government and regulator messaging emphasises child protection and the Online Safety Act’s enforcement teeth, including significant penalties and, in extreme cases, access restrictions, while companies like Aylo argue that inconsistent enforcement simply pushes demand to riskier corners of the internet and fuels workarounds like VPNs.

Class-action claims challenge WhatsApp end-to-end encryption practices

WhatsApp has rejected as false the claims in a class-action lawsuit accusing Meta of accessing encrypted messages. The company reaffirmed that chats remain protected by device-based Signal protocol encryption.

Filed in a US federal court in California, the complaint alleges Meta misleads more than two billion users by promoting unbreakable encryption while internally storing and analysing message content. Plaintiffs from several countries claim employees can access chats through internal requests.

WhatsApp said no technical evidence accompanies the accusations and stressed that encryption occurs on users’ devices before messages are sent. According to the company, only recipients hold the keys required to decrypt content, which are never accessible to Meta.

The firm described the lawsuit as frivolous and said it will seek sanctions against the legal teams involved. Meta spokespersons reiterated that WhatsApp has relied on independently audited encryption standards for over a decade.

The case highlights ongoing debates about encryption and security, but so far, no evidence has shown that message content has been exposed.

Council presidency launches talks on AI deepfakes and cyberattacks

EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.

The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.

According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.

Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.

The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.

The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but as a security risk with implications for elections, governance and institutional stability.

Google faces new UK rules over AI summaries and publisher rights

The UK competition watchdog has proposed new rules that would force Google to give publishers greater control over how their content is used in search and AI tools.

The Competition and Markets Authority (CMA) plans to require opt-outs for AI-generated summaries and model training, marking the first major intervention under Britain’s new digital markets regime.

Publishers argue that generative AI threatens traffic and revenue by answering queries directly instead of sending users to the original sources.

The CMA proposal would also require clearer attribution of publisher content in AI results and stronger transparency around search rankings, including AI Overviews and conversational search features.

Additional measures under consultation include search engine choice screens on Android and Chrome, alongside stricter data portability obligations. The regulator says tailored obligations would give businesses and users more choice while supporting innovation in digital markets.

Google has warned that overly rigid controls could damage the user experience, describing the relationship between AI and search as complex.

The consultation runs until late February, with the outcome expected to shape how AI-powered search operates in the UK.

Canada’s Cyber Centre flags rising ransomware risks for 2025 to 2027

The national cyber authority of Canada has warned that ransomware will remain one of the country’s most serious cyber threats through 2027, as attacks become faster, cheaper and harder to detect.

The Canadian Centre for Cyber Security, part of Communications Security Establishment Canada, says ransomware now operates as a highly interconnected criminal ecosystem driven by financial motives and opportunistic targeting.

According to the outlook, threat actors are increasingly using AI and cryptocurrency while expanding extortion techniques beyond simple data encryption.

Businesses, public institutions and critical infrastructure in Canada remain at risk, with attackers continuously adapting their tactics, techniques and procedures to maximise financial returns.

The Cyber Centre stresses that basic cyber hygiene still provides strong protection. Regular software updates, multi-factor authentication and vigilance against phishing attempts significantly reduce exposure, even as attack methods evolve.

The report also highlights the importance of cooperation between government bodies, law enforcement, private organisations and the public.

Officials conclude that while ransomware threats will intensify over the next two years, early warnings, shared intelligence and preventive measures can limit damage.

Canada’s cyber authorities say continued investment in partnerships and guidance remains central to building national digital resilience.

Scam emails impersonating JFSC target island businesses

Island businesses have been alerted to scam emails impersonating an employee of the Jersey Financial Services Commission. The fraudulent messages use the fake address ‘thomas.niederberger@jerseyfsc.org.cliopost.com’ and falsely claim to relate to an internal review of a company’s profile and activity.

According to the JFSC, the emails attempt to pressure recipients into clicking a link to access supposed documents delivered via a so-called ‘CLIOPOST eFAX Delivery’ service.

The regulator has confirmed that these messages are a scam and are not connected to the JFSC in any way. Businesses are urged not to respond, click on links, or open attachments.

To verify genuine contact from the JFSC, organisations are advised to use only the official website and ensure emails come from the @jerseyfsc.org domain.
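
As a general illustration of why the full domain matters (not official JFSC guidance), a minimal Python sketch below shows how an exact-match check on the sender’s domain catches lookalike addresses such as the scam address quoted above, where the real domain is cliopost.com rather than jerseyfsc.org:

```python
def sender_domain(address: str) -> str:
    """Return the domain part of an email address (the text after the last '@')."""
    return address.rsplit("@", 1)[-1].lower()

def is_genuine_jfsc(address: str) -> bool:
    """True only if the address's domain is exactly jerseyfsc.org.

    A substring or prefix check would be fooled by lookalikes such as
    jerseyfsc.org.cliopost.com, whose actual domain is cliopost.com.
    """
    return sender_domain(address) == "jerseyfsc.org"

print(is_genuine_jfsc("info@jerseyfsc.org"))                              # True
print(is_genuine_jfsc("thomas.niederberger@jerseyfsc.org.cliopost.com"))  # False
```

The key design point is comparing the whole domain for equality rather than checking whether it contains ‘jerseyfsc.org’, since scammers deliberately embed the genuine domain inside a longer one.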

Anyone unsure about a message’s authenticity can contact the JFSC directly by phone. Additional guidance on preventing and responding to scams is available on the Jersey Fraud Prevention Forum’s social media channels.

India considers social media bans for children under 16

India is emerging as a potential test case for age-based social media restrictions as several states examine Australia-style bans on children’s access to platforms.

Goa and Andhra Pradesh are studying whether to prohibit social media use for those under 16, citing growing concerns over online safety and youth well-being. The debate has also reached the judiciary, with the Madras High Court urging the federal government to consider similar measures.

The proposals carry major implications for global technology companies, given that India’s internet population exceeds one billion users and continues to skew young.

Platforms such as Meta, Google and X rely heavily on India for long-term growth, advertising revenue and user expansion. Industry voices argue parental oversight is more effective than government bans, warning that restrictions could push minors towards unregulated digital spaces.

Australia’s under-16 ban, which entered force in late 2025, has already exposed enforcement difficulties, particularly around age verification and privacy risks. Determining users’ ages accurately remains challenging, while digital identity systems raise concerns about data security and surveillance.

Legal experts note that internet governance falls under India’s federal authority, limiting what individual states can enforce without central approval.

Although India’s data protection law includes safeguards for children, full implementation will extend through 2027, leaving policymakers to balance child protection, platform accountability and unintended consequences.
