European Commission officials are examining whether Meta’s policy on access to WhatsApp for AI providers may raise competition concerns in the European Economic Area.
Changes to the WhatsApp Business Solution terms are at the centre of the investigation, particularly as they affect how third-party AI providers can offer services on the platform. The Commission is assessing whether the policy could limit access for competing AI services and reduce choice for users and businesses.
Messaging platforms are becoming important distribution channels for AI-powered services. As chatbots and AI assistants become more integrated into everyday communication tools, access to widely used platforms such as WhatsApp may become an important factor in competition between providers.
Commission officials have said they will examine whether Meta’s conduct complies with EU competition rules. Opening an investigation does not mean that the Commission has reached a conclusion or found an infringement.
The broader EU scrutiny of large digital platforms is increasingly focused on how access to infrastructure, services and user ecosystems is managed as AI tools become more widely adopted.
Why does it matter?
Competition questions are expanding into AI distribution channels. Messaging platforms can shape which AI services reach users and businesses at scale, making access rules an important part of the emerging AI market. The outcome could influence how major platforms design access policies for third-party AI providers while regulators seek to preserve competition and user choice.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Attorney General of Texas has filed a lawsuit against Netflix, alleging the company unlawfully collected user data without consent. The case claims the platform tracked extensive behavioural information from both adults and children while presenting itself as privacy-conscious.
According to the lawsuit, Netflix allegedly logged viewing habits, device usage and other interactions, turning user activity into monetised data. The lawsuit further claims that this data was shared with brokers and advertising technology firms to build detailed consumer profiles.
The Attorney General also argues that Netflix designed features to increase engagement, including autoplay, which allegedly encouraged prolonged viewing, particularly among younger users. These practices allegedly contradict the platform’s public messaging about being ad-free and family-friendly.
Texas’s complaint quotes a statement in which Netflix co-founder and Chairman Reed Hastings allegedly said the company did not collect user data, seeking to distinguish Netflix’s approach to data collection from that of other major technology platforms.
The Attorney General also claims that Netflix’s alleged surveillance violates the Texas Deceptive Trade Practices Act. The legal action seeks to halt the alleged data practices, introduce stricter controls, such as disabling autoplay for children, and impose penalties under consumer protection law, including civil fines of $10,000 per violation. The case highlights ongoing scrutiny of major technology platforms’ data practices in the USA.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.
In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.
The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and face no specific obligations under the regulation.
Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.
The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.
Why does it matter?
The EDPS, it seems, wants Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.
The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.
The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.
The OPC adds that organisations must give users a choice over which type of personal information is used in an age assurance process, provide appeal mechanisms, and minimise repeated verification. The framework aims to balance child protection with privacy rights, with the guidance applying to online services in Canada.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has started rolling out Trusted Contact, an optional safety feature in ChatGPT designed to help connect adult users with real-world support during moments of serious emotional distress.
The feature allows users to nominate one trusted adult, such as a friend, family member or caregiver, who may receive a notification if OpenAI’s automated systems and trained reviewers detect that the user may have discussed self-harm in a way that indicates a serious safety concern.
OpenAI said the feature is intended to add another layer of support alongside existing safeguards in ChatGPT, including prompts that encourage users to contact crisis hotlines, emergency services, mental health professionals, or trusted people when appropriate. The company stressed that Trusted Contact does not replace professional care or crisis services.
Users can add a trusted contact through ChatGPT settings. The contact receives an invitation explaining the role and must accept it within one week before the feature becomes active. Users can later edit or remove their trusted contact, while the trusted contact can also remove themselves.
If ChatGPT detects a possible serious self-harm concern, the user is informed that their trusted contact may be notified and is encouraged to reach out directly. A small team of specially trained reviewers then assesses the situation before any notification is sent.
OpenAI said notifications are intentionally limited and do not include chat details or transcripts. Instead, they state only in general terms that self-harm came up in a potentially concerning way and encourage the trusted contact to check in. The company said every notification undergoes human review, with the aim of completing that review in under one hour.
The feature was developed with guidance from clinicians, researchers and organisations specialising in mental health and suicide prevention, including the American Psychological Association. OpenAI said Trusted Contact forms part of broader efforts to improve how AI systems respond to people experiencing distress and connect them with real-world care, relationships and resources.
Why does it matter?
Trusted Contact points to a broader shift in AI safety away from content moderation alone toward real-world support mechanisms for users in moments of vulnerability. As conversational AI systems become part of everyday personal reflection and emotional support, companies face growing pressure to define when and how they should intervene, how much privacy to preserve, and what role human review should play in high-risk situations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The District Court of The Hague has rejected an attempt by three Dutch citizens to block the government from renewing its contract with Solvinity, the company responsible for hosting and technically managing systems linked to DigiD.
The plaintiffs argued that Solvinity’s planned acquisition by US-based IT provider Kyndryl could place sensitive data from more than 16 million DigiD users under US jurisdiction, potentially exposing it to US authorities and creating risks to critical public services such as healthcare, pensions, taxes, and unemployment systems.
Despite these concerns, the court ruled in favour of the Dutch State, allowing the agreement to proceed. Judges did not accept arguments that the deal would immediately threaten data security or justify halting the contract.
The decision leaves further scrutiny to the Investment Assessment Office, which is reviewing national security risks linked to the acquisition. The case highlights ongoing tensions around digital sovereignty and data protection in the Netherlands.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Privacy rights group noyb has filed a complaint against LinkedIn, alleging that the platform restricts access to certain user data by placing it behind a paid Premium subscription.
The complaint centres on LinkedIn’s ‘Who’s viewed your profile’ feature, which shows users who have visited their profile. According to noyb, LinkedIn tracks profile visits and makes detailed visitor information available to Premium subscribers, while refusing to provide the same data free of charge when users submit an access request under Article 15 of the GDPR.
Noyb argues that users have the right to receive their own personal data free of charge under EU data protection rules. The organisation claims that LinkedIn has cited data protection concerns when refusing access requests, despite making similar information available through its paid subscription service.
The complaint was lodged with the Austrian Data Protection Authority and seeks enforcement action requiring LinkedIn to provide the data requested, as well as potential penalties. Noyb also questions whether LinkedIn’s tracking of profile visits complies with EU consent requirements.
LinkedIn has reportedly denied the allegations, saying it complies with applicable rules and provides relevant information in accordance with its privacy policies.
The case adds to ongoing scrutiny of how digital platforms handle data access rights in the EU, particularly when information collected about users is also used for paid services.
Why does it matter?
The complaint tests whether platforms can monetise access to information that may also fall under users’ GDPR right of access. If regulators side with noyb, the case could affect how subscription-based platforms structure premium features that involve personal data, especially when the same data is withheld from non-paying users who make formal access requests.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Apple is reportedly preparing a major expansion of Apple Intelligence that could allow users to choose which AI model powers Siri and other system features. According to recent reports, iOS 27, iPadOS 27, and macOS 27 may introduce a new ‘Extensions’ framework designed to integrate third-party AI systems directly into Apple’s software ecosystem.
The reported feature would allow applications such as Gemini and Claude to connect with Siri through their App Store apps. Users may be able to select different AI providers for different tasks, while Apple is also said to be testing separate Siri voices for responses generated by external models rather than Apple’s own systems.
The move would expand Apple’s broader AI partnership strategy rather than replace existing integrations. ChatGPT already supports selected Apple Intelligence functions, and earlier reporting suggested Google Gemini could eventually power parts of Siri itself. The new framework appears aimed at turning Apple devices into a wider AI platform that supports multiple large language models rather than a single assistant stack.
Apple is expected to present further details during its Worldwide Developers Conference on 8 June 2026. If the reported changes materialise, they could significantly reshape how users interact with AI assistants by giving them more control over which models handle tasks such as search, writing, and image generation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Apple has agreed to pay $250 million to settle a class action lawsuit alleging that it misled consumers about the readiness and availability of AI-powered Siri features promoted ahead of the iPhone 16 launch. Under the proposed agreement, eligible US customers who bought supported iPhone models between 10 June 2024 and 29 March 2025 may receive between $25 and $95 per device, depending on the number of claims. Apple denied wrongdoing and settled the case without admitting liability.
The complaint argued that consumers who purchased supported iPhone 15 and iPhone 16 models expected advanced Apple Intelligence features and a significantly upgraded Siri experience that were not available at the time of sale. Plaintiffs said Apple’s marketing created the impression that the new capabilities would arrive sooner and with broader functionality than users ultimately received.
The settlement comes shortly before Apple’s annual Worldwide Developers Conference, where the company is widely expected to present further updates to Siri and its wider AI strategy.
Why does it matter?
The case shows how AI product marketing is becoming a legal and regulatory risk, not just a branding issue. As technology companies use generative AI features to drive device sales and platform adoption, courts and consumers are paying closer attention to whether those capabilities are actually available when products reach the market. The Apple settlement suggests that overstating AI readiness can create liability even before regulators step in, making transparency around launch claims increasingly important across the sector.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
DeepSeek has again placed itself at the centre of the global AI race. After drawing worldwide attention with its R1 reasoning model in early 2025, the Chinese company has recently released DeepSeek V4, a new model designed to compete not only on performance, but also on price, openness and efficiency.
The hype around DeepSeek V4 is not based on a single feature. The model comes with a 1-million-token context window, open weights, two versions for different use cases and a strong focus on agentic workflows such as coding, research, document analysis and long-running tasks. In a market still dominated by expensive closed models, DeepSeek is trying to prove that powerful AI does not need to remain locked behind proprietary systems.
A model built for long memory
The most immediate difference between DeepSeek V4 and other models is context length. Both DeepSeek-V4-Pro and DeepSeek-V4-Flash support a 1-million-token context window, meaning they can process inputs far longer than those of older generations of mainstream models. According to DeepSeek’s official release, one million tokens is now the default across all official DeepSeek services.
For ordinary users, that may sound technical. In practice, it matters because a longer context allows models to work with large documents, long conversations, full codebases, legal materials, research archives or complex project histories without losing track as quickly.
That is why DeepSeek V4 is not just another chatbot release. It is aimed at the next stage of AI use, where models are expected to act less like question-answering tools and more like assistants that can follow long processes over time.
Two models for two different needs
DeepSeek V4 comes in two main versions. DeepSeek-V4-Pro is a larger and more capable model, with 1.6 trillion total parameters and 49 billion active parameters. DeepSeek-V4-Flash is a smaller model, with 284 billion total parameters and 13 billion active parameters, designed for faster and more cost-effective workloads.
That distinction is important. Not every user needs the strongest model for every task. A company summarising documents, routing queries or running basic support may choose Flash. A developer working on complex coding tasks, long-context agents or advanced reasoning may prefer Pro.
DeepSeek’s release reflects a broader trend in AI. The best model is no longer always the biggest one. Cost, speed, context size and deployment flexibility are now as important as raw benchmark performance.
Why the price matters
One reason DeepSeek attracts so much attention is its aggressive pricing. DeepSeek’s API page lists V4-Flash at USD 0.14 per 1 million input tokens on a cache miss and USD 0.28 per 1 million output tokens. V4-Pro is listed at USD 1.74 per 1 million input tokens and USD 3.48 per 1 million output tokens before the temporary 75% discount.
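To put those rates in perspective, here is a minimal back-of-the-envelope cost sketch in Python. The job sizes are hypothetical, cache hits (billed at lower rates) are ignored, and the 75% discount is applied only to the Pro list prices quoted above.

```python
# Rough cost comparison using the listed API rates (USD per 1M tokens).
# Job sizes are hypothetical; cache hits, billed at lower rates, are ignored.

RATES = {
    "V4-Flash": {"input": 0.14, "output": 0.28},
    "V4-Pro":   {"input": 1.74, "output": 3.48},  # list price before the 75% discount
}

def job_cost(model: str, input_tokens: int, output_tokens: int, discount: float = 0.0) -> float:
    """USD cost of one request, assuming all input tokens are cache misses."""
    r = RATES[model]
    cost = input_tokens / 1e6 * r["input"] + output_tokens / 1e6 * r["output"]
    return cost * (1 - discount)

# A hypothetical long-context job: a 900k-token codebase in, a 20k-token answer out.
print(f"Flash:            ${job_cost('V4-Flash', 900_000, 20_000):.2f}")      # ~$0.13
print(f"Pro (list):       ${job_cost('V4-Pro', 900_000, 20_000):.2f}")        # ~$1.64
print(f"Pro (discounted): ${job_cost('V4-Pro', 900_000, 20_000, 0.75):.2f}")  # ~$0.41
```

Even a near-full-context request stays in the cents-to-dollars range, which is what makes the long-context workloads described above economically plausible.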
For developers and companies, that changes the calculation. High-performing AI models are useful only if they can be deployed at scale. If every long document, coding session or agentic workflow becomes too expensive, adoption slows down.
DeepSeek’s challenge to the market is therefore not only technical. It is economic. The company is pushing the idea that frontier-level AI should be cheaper to run, easier to access and less dependent on closed ecosystems.
The architecture behind the hype
DeepSeek V4 uses a mixture-of-experts approach, meaning only part of the model is active during each response. That helps explain why the model can be very large on paper, yet still more efficient to run than a dense model of similar overall size.
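To make that concrete, here is a toy sketch of top-k mixture-of-experts routing in Python. It illustrates the general technique only; the expert counts and dimensions are invented and bear no relation to DeepSeek’s actual configuration.

```python
import numpy as np

# Toy mixture-of-experts layer: a router scores all experts per token, but only
# the top-k experts actually run, so compute tracks the "active" parameters
# rather than the total. Sizes are illustrative, not DeepSeek's configuration.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

router_w = rng.normal(size=(d_model, n_experts))                  # router projection
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """x: one token's hidden state of shape (d_model,)."""
    logits = x @ router_w                                         # score every expert
    top = np.argsort(logits)[-top_k:]                             # keep the k best
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()       # softmax over winners
    # Only k of the n_experts weight matrices are ever multiplied.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

out = moe_forward(rng.normal(size=d_model))
print(out.shape)  # (64,): same output shape as a dense layer, at a fraction of the compute
```

In DeepSeek-V4-Pro’s case, that is the gap between 1.6 trillion total parameters and the 49 billion active ones that do the work for any given token.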
The more interesting part is how DeepSeek handles long context. NVIDIA’s technical overview explains that DeepSeek V4 uses hybrid attention, combining compression and selective attention techniques to reduce the cost of processing very long prompts. NVIDIA says these changes are designed to cut per-token inference FLOPs by 73% and reduce KV cache memory burden by 90% compared with DeepSeek-V3.2.
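As a rough illustration of what a 90% KV-cache cut means at a one-million-token context, consider the arithmetic below. The per-token baseline is an invented round number, not a measured figure for DeepSeek-V3.2; only the 90% ratio comes from NVIDIA’s stated claim.

```python
# Illustration only: the per-token baseline is a hypothetical round number,
# not a measured figure for DeepSeek-V3.2. Only the 90% ratio is NVIDIA's claim.

context_tokens = 1_000_000
baseline_kv_bytes_per_token = 100 * 1024             # assume 100 KiB of KV cache per token

baseline_gib = context_tokens * baseline_kv_bytes_per_token / 2**30
reduced_gib = baseline_gib * (1 - 0.90)              # after a 90% reduction

print(f"baseline KV cache: {baseline_gib:.1f} GiB")  # ~95.4 GiB
print(f"after a 90% cut:   {reduced_gib:.1f} GiB")   # ~9.5 GiB
```

Whatever the true baseline, an order-of-magnitude cut in cache size directly lowers the memory bill for serving very long prompts.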
For a non-technical audience, the point is simple. DeepSeek V4 is trying to solve one of the biggest problems in modern AI: how to make models remember and process much more information without becoming too slow or too expensive.
That is where much of the hype comes from. The model is not merely larger. It is designed around the economics of long-context AI.
Why NVIDIA is still in the picture
NVIDIA’s role in the DeepSeek V4 story is especially interesting. DeepSeek is often discussed as part of China’s effort to build a more independent AI ecosystem, but NVIDIA has moved quickly to support developers who want to build with the model.
In its technical blog, NVIDIA describes DeepSeek V4 as a model family designed for efficient inference of million-token contexts. The company says DeepSeek-V4-Pro and V4-Flash are available through NVIDIA GPU-accelerated endpoints, while developers can also use NVIDIA Blackwell, NIM containers, SGLang and vLLM deployment options.
NVIDIA also reports that early tests of DeepSeek-V4-Pro on the GB200 NVL72 platform showed more than 150 tokens per second per user. That matters because long-context models place heavy pressure on memory, as well as on compute and networking infrastructure. The model may be efficient by design, but serving it at scale still requires serious hardware.
So, DeepSeek V4 does not remove NVIDIA from the story – it complicates it. The model is part of a broader push towards more efficient AI, but the infrastructure race remains central.
The chip question behind the model
DeepSeek V4 also arrives at a time when AI infrastructure is becoming just as important as model performance. MIT Technology Review frames the release partly through that lens, noting that DeepSeek’s new model reflects China’s broader attempt to reduce reliance on foreign AI hardware and build a more self-sufficient technology stack.
That detail matters because the AI race is no longer only about who builds the most capable model. It is also about who controls the chips, software frameworks and data centres needed to run it.
Replacing NVIDIA, however, remains difficult. Its advantage lies not just in its chips, but also in the software ecosystem developers have built around its platforms over many years. Moving to alternative hardware means adapting code, rebuilding tools and proving that the new systems are stable enough for serious use.
DeepSeek V4 thus sits between two realities. It points towards China’s ambition to build a more independent AI stack, while NVIDIA’s rapid support for the model shows that frontier AI still depends heavily on established infrastructure.
Open weights as a strategic move
DeepSeek V4 is also important because the model weights are available through Hugging Face under the MIT License. That gives developers more freedom to inspect, adapt and deploy the model than they would have with a fully closed commercial system.
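For developers, access typically looks like the minimal sketch below, built on the Hugging Face transformers library. The repository ID is a guess at the naming scheme rather than a confirmed identifier, and a model of this scale needs far more GPU memory than a typical workstation offers.

```python
# Minimal sketch of loading open weights from Hugging Face. The repo ID below
# is hypothetical; check DeepSeek's official organisation page for real names.
# Requires the transformers and accelerate packages, plus substantial GPU memory.

from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "deepseek-ai/DeepSeek-V4-Flash"  # hypothetical repository ID

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,   # earlier DeepSeek releases ship custom modelling code
    device_map="auto",        # spread layers across available GPUs
    torch_dtype="auto",       # use the dtype stored in the checkpoint
)

inputs = tokenizer("Summarise the MIT License in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The MIT License is what makes this kind of direct, inspectable access possible without a commercial API key.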
Open-weight models are becoming a major pressure point in the AI race. Closed models may still lead in some areas, especially in polished consumer products, enterprise support and safety layers. However, open models offer something different: flexibility.
For universities, start-ups, smaller companies and developers outside the largest AI ecosystems, that flexibility matters. It means advanced AI can be tested, modified and integrated without relying entirely on a handful of dominant providers.
Benchmarks warrant caution
DeepSeek presents V4-Pro as highly competitive across reasoning, coding, long-context and agentic benchmarks. Hugging Face lists results including 80.6 on SWE-bench Verified, 90.1 on GPQA Diamond and 87.5 on MMLU-Pro for DeepSeek-V4-Pro.
Those numbers are impressive, but they should not be treated as the full story. Benchmarks are useful, but they rarely capture every real-world use case. A model can score well on coding tests and still struggle with reliability, factual accuracy, safety or complex multi-step workflows in production.
That caution is important. The AI industry often turns benchmarks into headlines, while real performance depends on deployment, prompting, safety controls and the specific task at hand.
More than just another model release
DeepSeek V4 matters because it combines several trends into one release: long context, lower prices, open weights, agentic workflows and geopolitical competition. It also shows that the AI race is no longer fought only in labs, benchmarks and data centres. Visibility now matters too. Tools such as Diplo’s Digital Footprints show how digital presence shapes the way technology actors and media narratives are discovered, ranked and understood. At this stage, the competition is not only about who has the smartest model. It is also about who can make intelligence cheaper, more available and easier to deploy.
That does not mean DeepSeek has solved every problem. Questions remain around independent benchmarking, safety, data governance, infrastructure and the broader political context of Chinese AI development. Still, the release does show where the market is heading.
The next phase of AI may not be defined solely by the most powerful model. It may be defined by the model that is powerful enough, affordable enough and open enough to change how people build products, services and tools with AI.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!