Texas lawsuit targets Netflix data practices

The Attorney General of Texas has filed a lawsuit against Netflix, alleging the company unlawfully collected user data without consent. The case claims the platform tracked extensive behavioural information from both adults and children while presenting itself as privacy-conscious.

According to the lawsuit, Netflix allegedly logged viewing habits, device usage and other interactions, turning user activity into monetised data. The lawsuit further claims that this data was shared with brokers and advertising technology firms to build detailed consumer profiles.

The Attorney General also argues that Netflix designed features to increase engagement, including autoplay, which allegedly encouraged prolonged viewing, particularly among younger users. These practices allegedly contradict the platform’s public messaging about being ad-free and family-friendly.

Texas’s complaint quotes a statement from Netflix co-founder and Chairman Reed Hastings, who allegedly said the company did not collect user data, seeking to distinguish Netflix’s approach to data collection from that of other major technology platforms.

The Attorney General also claims that Netflix’s alleged surveillance violates the Texas Deceptive Trade Practices Act. The legal action seeks to halt the alleged data practices, introduce stricter controls such as disabling autoplay for children, and impose penalties under consumer protection law, including civil fines of $10,000 per violation. The case highlights ongoing scrutiny of data practices by major technology platforms in the USA.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Child safety concerns dominate Europe’s digital agenda

A growing majority of Europeans believe stronger online protections for children and young people should remain a top policy priority, according to new findings from the Special Eurobarometer on the Digital Decade.

The European Commission said 92% of Europeans consider further action to protect children and young people online a top priority, reflecting sustained concern over the impact of digital platforms on younger users.

Mental health risks linked to social media ranked among the biggest concerns, with 93% of respondents calling for stronger protections. Cyberbullying, online harassment, and better age-restriction mechanisms for inappropriate content were also highlighted by 92% of respondents.

Concerns over AI and online manipulation also remain high. The survey found that 39% of respondents cited privacy or data protection as a barrier to using AI, followed by accuracy or incorrect information at 36% and ethical issues or misuse of generative AI tools at 32%.

Around 87% of Europeans agreed that online manipulation, including disinformation, foreign interference, AI-generated content and deepfakes, poses a threat to democratic processes. Another 80% said AI development should be carefully regulated to ensure safety, even if oversight places constraints on developers.

The findings also show continuing concern over online platforms. Europeans reported being personally affected by fake news and disinformation, misuse of personal data and insufficient protections for minors, with concerns over fake news and child protection showing the sharpest increases since 2024.

Why does it matter?

The findings show that public concern over digital technologies in Europe is increasingly centred on safety, rights and accountability, particularly for children and young people. They also suggest that trust in platforms and AI systems will depend not only on innovation and access, but also on visible safeguards against manipulation, harmful content, privacy risks, and weak protections for minors.

Google warns adversaries are industrialising AI-enabled cyberattacks

The Google Threat Intelligence Group (GTIG) says cyber adversaries are moving from early AI experimentation towards the industrial-scale use of generative models across malicious workflows.

In a new report, GTIG says it has identified, for the first time, a threat actor using a zero-day exploit that it believes was developed with AI. The criminal actor had planned to use the exploit in a mass exploitation campaign involving a two-factor authentication bypass, but Google said its proactive discovery may have prevented the campaign from going ahead.

The findings describe several uses of AI in cyber operations. Threat actors linked to the People’s Republic of China and the Democratic People’s Republic of Korea have used AI for vulnerability research, including persona-based prompting, specialised vulnerability datasets and automated analysis of vulnerabilities and proof-of-concept exploits.

Other actors have used AI-assisted coding to support defence evasion, including the development of obfuscation tools, relay infrastructure and malware containing AI-generated decoy logic. Google said these uses show how generative models can accelerate development cycles and make malicious tools harder to detect.

Google also highlights PROMPTSPY, an Android backdoor that uses Gemini API capabilities to interpret device interfaces, generate structured commands, simulate gestures and support more autonomous malware behaviour. The company said it had disabled assets linked to the activity and that no apps containing PROMPTSPY were found on Google Play at the time of detection.

AI systems are also becoming direct targets. Google says attackers are compromising AI software dependencies, open-source agent skills, API connectors and AI gateway tools such as LiteLLM. The report warns that such supply-chain attacks could expose API secrets, enable ransomware activity or allow intruders to use internal AI systems for reconnaissance, data theft and deeper network access.

Why does it matter?

Google’s findings suggest that AI-enabled cyber activity is moving beyond basic phishing support or faster research. Generative models are now being used in vulnerability discovery, exploit development, malware obfuscation, autonomous device interaction, information operations and attacks on AI infrastructure itself. That could make some attacks faster, more adaptive and harder to detect, while also turning AI platforms, integrations and supply chains into part of the cyberattack surface.

European Ombudsman criticises Commission over X risk report access

The European Ombudswoman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act.

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudswoman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than individually assessing the report. She said the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudswoman recommended that the Commission conduct its own assessment and aim to give the journalist the widest access possible, including potentially to parts redacted by the company. If access is refused for any sections, the Commission must explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

Health New Zealand issues guidance on use of generative AI and large language models

Health New Zealand has published new guidance on generative AI and large language models for healthcare settings.

The guidance states that the National Artificial Intelligence and Algorithm Expert Advisory Group evaluates the use of generative AI tools and LLMs and recommends caution in their application across Health New Zealand environments. It notes that further data is needed to assess risks and benefits in the New Zealand health context.

Employees and contractors are prohibited from entering personal, confidential or sensitive patient or organisational information into unapproved LLMs or generative AI tools. The guidance also says such tools must not be used for clinical decisions or personalised patient advice.

Staff using generative AI tools in other contexts must take full responsibility for checking the information generated and acknowledge when generative AI has been used to create content. Anyone planning to use generative AI or LLMs is also asked to seek advice from the advisory group.

The guidance highlights potential risks including privacy breaches, inaccurate or misleading outputs, bias in training data, lack of transparency in model outputs, data sovereignty concerns and intellectual property risks. It also notes that generative AI systems may not adequately support te reo Māori and other minority languages spoken in Aotearoa New Zealand.

Why does it matter?

The guidance shows how health systems are beginning to set practical boundaries for generative AI before its use becomes routine in clinical and administrative settings. By prohibiting unapproved tools for patient data, clinical decisions and personalised advice, Health New Zealand is drawing a clear line between limited productivity uses and high-risk healthcare applications. At the same time, its references to Māori data sovereignty and language support widen the governance frame to include equity, cultural rights and data protection concerns that standard technology policies may not fully address.

Young users’ reliance on ChatGPT raises questions over AI advice and autonomy

Sam Altman has described a generational divide in how people use ChatGPT, saying younger users are integrating the tool more deeply into learning, planning and everyday decision-making.

Speaking at Sequoia Capital’s AI Ascent 2025, the OpenAI CEO said older users tend to treat ChatGPT more like a search tool, while people in their 20s and 30s often use it as a personal advisor. College students, he said, are going further by treating ChatGPT almost like an operating system, connecting it to files, tasks and complex workflows.

The remarks point to a shift in how AI tools are being embedded into daily routines, particularly among students and younger adults. Business Insider reported that a February 2025 OpenAI report found US college students were among the platform’s most frequent users, while a Pew Research Center survey found that 26% of US teens aged 13 to 17 used ChatGPT for schoolwork in 2024, double the share recorded in 2023.

Altman’s comments also raise questions about dependence, accuracy and boundaries as AI systems move closer to advisory roles. While users may benefit from private spaces to test ideas, organise tasks and prepare decisions, concerns remain over over-reliance, data privacy and the shifting role of human relationships in decision-making.

Why does it matter?

The trend suggests that AI is becoming more than an information tool for younger users. As ChatGPT and similar systems become part of studying, planning and personal decision-making, they influence not only how information is consumed, but also how habits, confidence and judgement develop.

Australia launches national AI platform ‘AI.gov.au’

The Department of Industry, Science and Resources has announced the launch of AI.gov.au through the National Artificial Intelligence Centre. The platform is designed to help organisations adopt AI safely and responsibly in line with the National AI Plan.

AI.gov.au provides a central source of guidance, tools and resources to support businesses and not-for-profits. It aims to help users identify AI opportunities, plan implementation, manage risks and build internal capability.

The platform’s development was informed by research and engagement with industry and government, highlighting the need for clear starting points, practical advice and support for AI organisational change. It also supports the AI Safety Institute’s work by improving access to safety guidance.

Initial features focus on small and medium-sized enterprises and include training, case studies and adoption tools, with further updates planned. The initiative reflects efforts to strengthen AI uptake and governance in Australia.

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS, it seems, wants Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

EU briefing warns AI health benefits need safeguards

A European Parliamentary Research Service briefing says AI could improve healthcare, disease prevention and well-being across the EU, but warns that its growing use in health advice, AI companions and tools used by children, young people and older adults requires strong safeguards and human oversight.

The briefing, focused on health and well-being in the age of AI, says AI is already supporting diagnostics, personalised treatment, health-risk forecasting, hospital management, pharmaceutical development and disease surveillance. It points to use cases in areas such as radiology, oncology, cardiology, rare diseases and cross-border health data exchange.

AI-powered health chatbots and virtual assistants can help people access health information, understand complex topics and prepare for medical consultations. However, the briefing warns that such tools may also create privacy risks, spread inaccurate or misleading information, and encourage users to delay or replace professional medical advice.

AI companions are presented as another area where benefits and risks coexist. They may support social interaction and alert caregivers when people are at risk of isolation, but cannot replace human relationships and may deepen loneliness or worsen mental health risks for vulnerable users.

For older adults, AI-enabled wearables, in-home sensors, assistive technologies and smart care platforms could support independent living and improve care. At the same time, the briefing warns of privacy and data security concerns, emotional dependency and the risk that technology could replace rather than complement personal interaction.

Young people and children face different risks as AI becomes part of daily life, learning, health advice and social interaction. The briefing highlights possible exposure to harmful content, cyberbullying, emotional dependency, privacy violations, reduced critical thinking, sleep disruption, sedentary behaviour and social withdrawal.

The research service says the EU AI Act, the General Data Protection Regulation, the European Health Data Space, and sector-specific rules on medical devices and diagnostics form part of the EU framework for managing these risks. It concludes that AI’s health benefits can be realised only if innovation is balanced with safeguards, digital skills and a commitment to keeping human care and social connection at the centre.

Why does it matter?

AI is becoming part of healthcare not only through clinical tools, but also through consumer-facing chatbots, companions, wearables and support systems used by vulnerable groups. That widens the policy challenge from medical safety to privacy, misinformation, emotional dependency, digital skills and the preservation of human care.

The briefing shows why health-related AI governance cannot rely only on innovation or efficiency gains. Trustworthy use will depend on safeguards that protect patients, children, older adults and other vulnerable users while ensuring AI supports, rather than replaces, professional care and social connection.

Canada issues age assurance guidance

The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.

The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.

The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.

The OPC adds that organisations must give users a choice over which type of personal information is used in an age-assurance process, provide appeal mechanisms, and minimise repeated verification. The framework aims to balance child protection with privacy rights, with the guidance applying to online services in Canada.
