Texas lawsuit targets Netflix data practices

The Attorney General of Texas has filed a lawsuit against Netflix, alleging the company unlawfully collected user data without consent. The case claims the platform tracked extensive behavioural information from both adults and children while presenting itself as privacy-conscious.

According to the lawsuit, Netflix allegedly logged viewing habits, device usage and other interactions, turning user activity into monetised data. The lawsuit further claims that this data was shared with brokers and advertising technology firms to build detailed consumer profiles.

The Attorney General also argues that Netflix designed features to increase engagement, including autoplay, which allegedly encouraged prolonged viewing, particularly among younger users. These practices allegedly contradict the platform’s public messaging about being ad-free and family-friendly.

Texas’s complaint quoted a statement from Netflix co-founder and Chairman Reed Hastings, who allegedly said the company did not collect user data, seeking to distinguish Netflix’s approach to data collection from that of other major technology platforms.

The Attorney General also claims that Netflix’s alleged surveillance violates the Texas Deceptive Trade Practices Act. The legal action seeks to halt the alleged data practices, introduce stricter controls, such as disabling autoplay for children, and impose penalties under consumer protection law, including civil fines of $10,000 per violation. The case highlights ongoing scrutiny of data practices by major technology platforms in the United States.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Canada invests in AI and quantum technology firms in British Columbia

Gregor Robertson, Minister of Housing and Infrastructure and Minister responsible for Pacific Economic Development Canada (PacifiCan), announced more than C$17.3 million in funding for eight British Columbia technology companies to accelerate the commercialisation and adoption of AI and quantum technologies.

Through PacifiCan, the federal government is supporting projects focused on robotics, semiconductor manufacturing, AI infrastructure, and quantum supply chains as part of a broader strategy to strengthen domestic innovation and sovereign technology capabilities.

A major share of the investment will support Human in Motion Robotics, which received C$3 million to commercialise its AI-powered XoMotion wearable robotic exoskeleton. The company plans to integrate AI into mobility systems, expand manufacturing, and move the technology beyond clinical environments into homes and community settings for people with spinal cord injuries and neurological conditions.

Another funded company, Dream Photonics, will receive more than C$1.1 million to establish pilot manufacturing for optical interconnect technologies used in AI and quantum chips. The project aims to strengthen Canada’s domestic semiconductor and quantum ecosystem while creating skilled technology jobs in British Columbia.

The announcement also highlighted the rapid expansion of British Columbia’s AI ecosystem, which now includes nearly 600 AI companies. Canadian officials linked the investments to broader efforts to secure domestic compute infrastructure, strengthen AI supply chains, and position Canada competitively in emerging technologies ahead of events such as Web Summit Vancouver.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada advances sovereign AI data centre strategy with TELUS

The Canadian government and TELUS are advancing plans to develop large-scale sovereign AI infrastructure as part of Ottawa’s broader strategy to strengthen domestic compute capacity and support the country’s AI ecosystem.

The initiative was announced by Evan Solomon (Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario) and focuses on a proposed AI data centre project in British Columbia designed to support researchers, businesses, and academic institutions.

The project forms part of Canada’s ‘Enabling large-scale sovereign AI data centres’ initiative, introduced under Budget 2025. Ottawa stated that sovereign compute infrastructure is increasingly important for maintaining national competitiveness in AI while ensuring Canadian data, intellectual property, and economic value remain within the country.

The government also confirmed that no formal funding commitments have yet been made, with discussions currently progressing through non-binding memoranda of understanding with selected industry participants.

Local officials argued that large-scale compute infrastructure has become a strategic economic requirement as governments worldwide race to expand AI processing capabilities. Canada believes it holds competitive advantages due to its colder climate, sustainable energy resources, and network infrastructure, all of which could help attract future AI investment and hyperscale data centre development.

Why does it matter?

The race for sovereign AI infrastructure is rapidly becoming one of the most important geopolitical and economic competitions of the digital era. The Canada-TELUS partnership illustrates how countries are moving beyond AI model development alone and shifting focus towards the physical infrastructure required to sustain future AI ecosystems, including data centres, energy capacity, semiconductors, and domestic compute networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stablecoin rules updated in revised US Senate proposal

The US Senate Banking Committee has released a revised 309-page draft of the Digital Asset Market Clarity Act ahead of a markup vote, reopening debate on stablecoin rewards, DeFi protections and the regulation of digital asset markets.

The draft, proposed by Committee Chair Tim Scott, seeks to provide a federal framework for digital asset market structure, including provisions on securities innovation, illicit finance, decentralised finance, banking innovation, regulatory sandboxes, software developers and customer protection.

A key section addresses stablecoin rewards. The draft would prohibit digital asset service providers from paying interest or yield on payment stablecoin balances in a way that is economically or functionally equivalent to bank deposit interest. However, it would permit certain activity-based or transaction-based rewards and incentives, provided they are not equivalent to interest or yield on a bank deposit.

The text also includes provisions affecting decentralised finance. It covers rules on non-decentralised finance trading protocols, illicit finance obligations for distributed ledger messaging systems, temporary holds for certain digital asset transactions, voluntary cybersecurity programmes for DeFi trading protocols and studies on digital asset mixers, foreign intermediaries and financial stability risks.

Software developer protections are also included in the draft. The bill contains a dedicated title on protecting software developers and software innovation, including provisions on non-fungible tokens, self-custody and blockchain regulatory certainty.

The draft still faces further negotiation before any final vote. Lawmakers continue to debate the balance between consumer protection, illicit finance controls, innovation, stablecoin incentives and the treatment of decentralised finance. At the same time, the legislation needs to be aligned with other Senate work on digital asset market structure.

Why does it matter?

The revised Clarity Act is another step towards a federal framework for digital asset markets in the United States, with rules that could shape how crypto firms, stablecoin platforms and decentralised finance projects operate. Its provisions on stablecoin rewards, DeFi and software developers show lawmakers trying to balance innovation, consumer protection and oversight in one of the world’s most important financial markets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

Dubai opens government payments to crypto users

Dubai residents will be able to pay government fees using virtual assets after Crypto.com’s UAE entity, Foris DAX Middle East FZE, received a Stored Value Facilities licence from the Central Bank of the UAE.

Crypto.com said the approval makes it the first Virtual Asset Service Provider in the UAE to receive the licence. It allows the company to activate its partnership with the Dubai Department of Finance, enabling virtual asset payments for government services.

Financial settlements will be conducted in UAE dirhams or Central Bank-approved dirham-backed stablecoins through the regulated Stored Value Facilities framework. Crypto.com said the arrangement supports the Dubai Cashless Strategy.

Users wishing to access the service will need to be onboarded through Crypto.com’s VARA-licensed platform. The company also said that, subject to further Central Bank approvals, the licence could support crypto payment integrations with Emirates and Dubai Duty Free.

Crypto.com executives described the approval as a step towards regulated digital asset adoption in the UAE, while linking it to the country’s wider push for compliant crypto infrastructure and digital payments innovation.

Why does it matter?

The development shows how Dubai is moving virtual asset payments closer to public-sector infrastructure, rather than treating them only as investment products or private-sector payment experiments. By routing payments through a regulated Stored Value Facilities framework and settling them in dirhams or approved dirham-backed stablecoins, the model links crypto access with conventional payment oversight, financial regulation and the emirate’s cashless economy strategy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

WEF report says HR leaders will shape the success of AI transformation

AI is reshaping how companies organise labour, distribute decision-making and redesign internal operations, making workforce strategy a central part of AI adoption.

Writing for the World Economic Forum, Al-Futtaim Group HR director David Henderson argues that many AI projects fail because organisations focus too heavily on technology while neglecting the need to change work, accountability, and operational processes.

The article says successful AI adoption depends on how effectively businesses combine human judgement with machine-driven systems, rather than treating automation as a standalone software rollout.

Using Garry Kasparov’s ‘advanced chess’ model after his 1997 defeat to IBM’s Deep Blue as an example, Henderson highlights how humans working alongside computers eventually outperformed both machines and grandmasters operating independently.

He suggests the same principle is now emerging across modern enterprises, where stronger results come from integrating AI directly into operational workflows rather than isolating it in technical departments.

The article identifies four major responsibilities for HR leaders during AI transformation. As ‘design architects’, Chief Human Resources Officers are expected to redefine which decisions remain human-led, which become AI-assisted and how accountability is distributed across organisations. As ‘capability stewards’, they must build continuous AI learning systems rather than rely on occasional employee training programmes.

HR leaders are also described as ‘adoption catalysts’, responsible for helping frontline employees integrate AI into daily workflows, and as ‘transition guardians’, tasked with managing concerns linked to surveillance, bias, fairness, employability and workforce trust.

Several companies are cited as examples of that transition. Procter & Gamble embedded AI engineers and data scientists directly within operational business units rather than centralising them within analytics teams.

Zurich Insurance developed enterprise-wide AI learning systems focused on transferable skills and workforce redeployment, while Al-Futtaim enabled frontline retail teams to develop AI-supported customer recommendation systems through agile operational groups rather than top-down executive planning.

Why does it matter?

AI competitiveness increasingly depends on organisational adaptability instead of access to technology alone. Workforce redesign, reskilling systems, internal trust, and operational flexibility are becoming critical strategic advantages as automation expands across industries. WEF’s argument highlights how HR departments are evolving from administrative functions into central actors shaping AI governance, labour transformation, and long-term business resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI productivity claims need stronger scrutiny, Ada Lovelace Institute finds

The Ada Lovelace Institute has warned that AI productivity claims in the UK public sector need stronger scrutiny, as headline estimates are already shaping spending, workforce planning and public service reform.

In a policy briefing on AI and public services, the institute says UK government communications, industry reports and third-party analyses frequently present AI as a tool for cutting costs, saving time and boosting growth. It argues that stronger evidence is needed to assess whether those claims translate into public value.

The briefing notes that the UK’s 2025 Spending Review committed to ‘a step change in investment in digital and AI across public services’, informed by estimates of potential savings and productivity benefits that run as high as £45 billion per year.

Many current estimates rely on limited or uncertain evidence, the institute argues. Studies often measure first-order effects, such as time savings or cost reductions, while paying less attention to outcomes that matter for public services, including service quality, equity, citizen experience, institutional capacity and worker well-being.

The briefing also warns that productivity claims often fail to fully account for implementation costs, trade-offs, transition periods and the opportunity cost of prioritising AI investment over other public spending.

Several methodological concerns are identified in AI productivity research, including reliance on task automation models, self-reported surveys and limited triangulation across methods. The institute also highlights the growing use of large language models to assess which tasks they can perform, warning that this creates a circular dynamic in which AI systems are used to judge their own capabilities.

Headline figures can obscure mixed evidence, with productivity estimates varying widely and positive findings often receiving more attention than contradictory or null results. Industry involvement can also shape what gets researched and how results are framed, particularly when AI companies fund studies, provide tools or publish their own reports.

To improve the evidence base, the Ada Lovelace Institute calls for productivity research to reflect uncertainty, report ranges rather than single headline numbers and measure outcomes that matter for public services. It recommends more independent research, transparent methodologies, longer-term studies and measurement built into AI deployments from the start, including tracking service quality, error rates, staff well-being and citizen satisfaction.

Why does it matter?

Public-sector AI is increasingly being justified through promises of efficiency, savings and productivity growth. If those claims are based on weak or narrow evidence, governments risk making major investment and workforce decisions before understanding the real costs, trade-offs and effects on service quality.

The briefing is important because it shifts the question from whether AI can save time in isolated tasks to whether AI improves public services in practice. That includes outcomes such as fairness, reliability, staff well-being, citizen experience and institutional capacity, which are harder to measure than headline savings but central to public value.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS appears to want Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Canada issues age assurance guidance

The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.

The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.

The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.

The OPC adds that organisations must give users a choice over the type of personal information used in an age-assurance process, provide appeal mechanisms, and minimise repeated verification. The framework aims to balance child protection with privacy rights, and the guidance applies to online services in Canada.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

California expands digital democracy platform for AI policy debate

California’s Governor is expanding Engaged California, a digital democracy initiative designed to give residents a direct voice in shaping AI policy across the state. The programme invites Californians to share how AI is affecting their jobs, industries, and communities, with the findings expected to help guide future state policy decisions.

The initiative will begin with a public participation phase, during which residents can submit experiences and recommendations through the state’s online platform. A second phase, later in 2026, will bring together a smaller representative group of residents for live deliberative forums focused on AI’s economic and social impact. The process aims to identify areas of public consensus on how government should respond to rapidly evolving AI technologies.

State officials described ‘Engaged California’ as a first-in-the-nation deliberative democracy programme inspired partly by Taiwan’s digital governance model. Instead of functioning like a social media platform or public poll, the initiative is designed to encourage structured discussion and collaborative policymaking around emerging technologies.

California also used the announcement to highlight broader AI initiatives already underway, including AI procurement reforms, workforce training partnerships with major technology companies, AI-powered wildfire detection systems, cybersecurity assessments, and responsible governance frameworks.

Officials said the state aims to balance innovation with safeguards related to child safety, deepfakes, digital likeness protections, and AI accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!