Privacy rights group noyb has filed a complaint against LinkedIn, alleging that the platform restricts access to certain user data by placing it behind a paid Premium subscription.
The complaint centres on LinkedIn’s ‘Who’s viewed your profile’ feature, which shows users who have visited their profile. According to noyb, LinkedIn tracks profile visits and makes detailed visitor information available to Premium subscribers, while refusing to provide the same data free of charge when users submit an access request under Article 15 of the GDPR.
Noyb argues that users have the right to receive their own personal data free of charge under the EU data protection rules. The organisation claims that LinkedIn has cited data protection concerns when refusing access requests, despite making similar information available through its paid subscription service.
The complaint was lodged with the Austrian Data Protection Authority and seeks enforcement action requiring LinkedIn to provide the data requested, as well as potential penalties. Noyb also questions whether LinkedIn’s tracking of profile visits complies with the EU consent requirements.
LinkedIn has reportedly denied the allegations, saying it complies with applicable rules and provides relevant information in accordance with its privacy policies.
The case adds to ongoing scrutiny of how digital platforms handle data access rights in the EU, particularly when information collected about users is also used for paid services.
Why does it matter?
The complaint tests whether platforms can monetise access to information that may also fall under users’ GDPR right of access. If regulators side with noyb, the case could affect how subscription-based platforms structure premium features that involve personal data, especially when the same data is withheld from non-paying users who make formal access requests.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
European policymakers are increasingly examining how traditional payment systems could evolve in response to the rise of digital assets. Bank of Italy Deputy Governor Chiara Scotti has suggested that the Single Euro Payments Area (SEPA) could be extended to tokenised payments to support Europe’s digital finance infrastructure.
Scotti described a tokenised SEPA framework as an important area for consideration, highlighting that Europe’s existing payments system already offers strong interoperability and shared standards.
Her remarks align with broader efforts by the European Central Bank to integrate distributed ledger technology into settlement systems.
The European Central Bank is currently developing initiatives such as Pontes, a pilot linking blockchain-based market platforms with central bank settlement infrastructure, alongside a longer-term roadmap known as Appia.
These projects aim to ensure euro-denominated settlement remains central as tokenised deposits and digital assets expand.
Policymakers warn that widespread stablecoin adoption could shift deposits away from banks, weakening financial stability and reducing the euro’s influence in digital markets. As a result, central bank money is being considered as a key anchor for future tokenised financial systems.
Why does it matter?
The debate reflects Europe’s effort to maintain control over its monetary system as payments move toward tokenised and blockchain-based infrastructure. Without central bank money integrated into these systems, risks include weaker financial stability, fragmented payment networks, and greater reliance on external stablecoin ecosystems, potentially reducing the euro’s role in digital finance.
Norway joins a group of 14 participating countries, including the USA, Japan, the UK and India. Norwegian officials said participation could improve market access for domestic companies operating in advanced technological sectors and strengthen economic security cooperation with strategic partners.
Minister of Trade and Industry, Cecilie Myrseth, said the initiative aligns with Norway’s goal of expanding cooperation with leading countries in AI and emerging technologies. Norwegian ambassador to the USA, Anniken Huitfeldt, is expected to formally sign the agreement on behalf of the country.
The move also complements broader Norwegian and European efforts to secure access to critical technologies and supply chains. The government highlighted initiatives linked to the European Chips Act and the EU Critical Raw Materials Act as part of a wider strategy to strengthen technology resilience and industrial competitiveness.
Apple is reportedly preparing a major expansion of Apple Intelligence that could allow users to choose which AI model powers Siri and other system features. According to recent reports, iOS 27, iPadOS 27, and macOS 27 may introduce a new ‘Extensions’ framework designed to integrate third-party AI systems directly into Apple’s software ecosystem.
The reported feature would allow applications such as Gemini and Claude to connect with Siri through their App Store apps. Users may be able to select different AI providers for different tasks, while Apple is also said to be testing separate Siri voices for responses generated by external models rather than Apple’s own systems.
The move would expand Apple’s broader AI partnership strategy rather than replace existing integrations. ChatGPT already supports selected Apple Intelligence functions, and earlier reporting suggested Google Gemini could eventually power parts of Siri itself. The new framework appears aimed at turning Apple devices into a wider AI platform that supports multiple large language models rather than a single assistant stack.
Apple is expected to present further details during its Worldwide Developers Conference on 8 June 2026. If the reported changes materialise, they could significantly reshape how users interact with AI assistants by giving them more control over which models handle tasks such as search, writing, and image generation.
Apple has agreed to pay $250 million to settle a class action lawsuit alleging that it misled consumers about the readiness and availability of AI-powered Siri features promoted ahead of the iPhone 16 launch. Under the proposed agreement, eligible US customers who bought supported iPhone models between 10 June 2024 and 29 March 2025 may receive between $25 and $95 per device, depending on the number of claims. Apple denied wrongdoing and settled the case without admitting liability.
The complaint argued that consumers who purchased supported iPhone 15 and iPhone 16 models expected advanced Apple Intelligence features and a significantly upgraded Siri experience that were not available at the time of sale. Plaintiffs said Apple’s marketing created the impression that the new capabilities would arrive sooner and with broader functionality than users ultimately received.
The settlement comes shortly before Apple’s annual Worldwide Developers Conference, where the company is widely expected to present further updates to Siri and its wider AI strategy.
Why does it matter?
The case shows how AI product marketing is becoming a legal and regulatory risk, not just a branding issue. As technology companies use generative AI features to drive device sales and platform adoption, courts and consumers are paying closer attention to whether those capabilities are actually available when products reach the market. The Apple settlement suggests that overstating AI readiness can create liability even before regulators step in, making transparency around launch claims increasingly important across the sector.
The New South Wales Civil and Administrative Tribunal has issued guidance on the acceptable use of generative AI in tribunal proceedings as part of Privacy Awareness Week NSW 2026, which this year focuses on personal information risks in the age of AI.
According to NCAT, generative AI tools may be used to assist with administrative and organisational tasks such as summarising material, organising information, or preparing chronologies. At the same time, the tribunal warns that such tools can create privacy risks if users enter personal, sensitive, or confidential information.
The guidance is set out in NCAT Procedural Direction 7 on the use of generative AI, together with an accompanying fact sheet. NCAT says the aim is to clarify when generative AI may be used in tribunal-related work while reinforcing obligations to protect personal and confidential information.
The tribunal also draws a clear line around evidentiary material. Generative AI must not be used to generate or alter evidence in tribunal proceedings, including statements, affidavits, statutory declarations, character references, or other evidentiary documents.
NCAT further states that generative AI must not be used to generate content for an expert report unless the tribunal has given permission. It is encouraging parties and their representatives to review the guidance before using such tools in proceedings.
The International Labour Organization has warned that governments must place lifelong learning at the centre of economic and social policy as AI, digitalisation and demographic shifts continue transforming labour markets worldwide. The organisation said stronger and more inclusive learning systems are necessary to prevent widening inequality between workers, industries and countries.
According to the ILO’s new report, titled ‘Lifelong learning and skills for the future’, only 16% of people aged between 15 and 64 participated in structured training during the previous year. Access remains significantly higher among full-time employees in formal companies, where employer-supported training reaches 51%.
The ILO report warns that workers in informal jobs and smaller enterprises continue relying mainly on learning through experience instead of structured education programmes. Furthermore, the study found that employers increasingly seek combinations of digital, socio-emotional, communication and problem-solving skills rather than narrow technical expertise alone.
While demand for AI-related capabilities is expected to increase, the report noted that most workers currently use ready-made AI tools that require broader digital literacy, critical thinking and collaborative abilities instead of specialist engineering knowledge.
The ILO also highlighted the growing importance of green and care economy skills. It estimates that 32% of workers globally already perform environmentally relevant tasks, while demand for long-term care workers could almost double by 2050.
The organisation called for greater public investment, stronger institutional coordination and inclusive lifelong learning strategies capable of supporting workers throughout rapidly changing technological and economic transitions.
Bank of America convened its fifth Breakthrough Technology Dialogue in Singapore, bringing together leaders from business, academia, and science to discuss emerging technologies shaping the future. The event focused on areas including AI, quantum computing, energy, MedTech, and space.
The forum also highlighted the growing importance of the Asia Pacific in driving technological development and deployment. According to Bank of America, the region’s strong research base, advanced manufacturing capacity, and expanding digital infrastructure are helping position it at the centre of global innovation.
Designed as a high-level platform for discussion, the dialogue explored how emerging technologies are reshaping industries and economies. Participants also examined longer-term investment approaches and the need to connect innovation with practical use cases that can scale across markets.
The initiative reflects Bank of America’s wider approach to technology investment, combining large-scale spending with a stated focus on client and employee needs and on solutions that can be delivered at scale. The event is increasingly being presented as a global forum for shaping views on the next generation of technological change.
Why does it matter?
The significance of the dialogue lies less in any single announcement than in the way it brings together investors, executives, academics, and technologists around the sectors likely to shape future industrial and economic power. The emphasis on Asia Pacific also reflects a broader recognition that leadership in AI, quantum, and other frontier technologies will depend not only on research breakthroughs, but also on where they are manufactured, financed, and deployed at scale.
The Government of Canada has announced plans to spin off the National Research Council of Canada’s Canadian Photonics Fabrication Centre into a commercially operated entity to expand domestic semiconductor manufacturing and strengthen the country’s AI infrastructure.
The initiative forms part of Ottawa’s broader strategy to reinforce technological sovereignty and reduce dependence on foreign supply chains in critical technologies. Located in Ottawa, the Canadian Photonics Fabrication Centre is currently North America’s only end-to-end pure-play compound semiconductor facility and has supported photonics development for more than two decades through wafer design, fabrication, and testing services.
Minister of Industry and Minister responsible for Canada Economic Development for Quebec Regions Mélanie Joly said the spin-off is intended to attract private-sector investment, support Canadian innovation, and expand the country’s role in advanced manufacturing sectors, including defence, aerospace, automotive technologies, and AI.
The government also links the initiative to growing global demand for AI computing infrastructure, where photonic semiconductors are increasingly seen as important for improving energy efficiency, heat management, and data-transfer performance in large-scale data centres. Ottawa says the future commercial entity will remain anchored in Canada while helping domestic firms scale photonic and quantum technologies.
The expected result is a stronger Canadian supply chain for advanced semiconductor manufacturing and better support for fast-growing small and medium-sized enterprises working on AI and quantum systems. Judging by the government’s framing of CPFC’s role and Canada’s wider AI and photonics strategy, the move is less about volume chip production and more about securing a specialised domestic capability in a strategically important part of the semiconductor stack.
DeepSeek has again placed itself at the centre of the global AI race. After drawing worldwide attention with its R1 reasoning model in early 2025, the Chinese company has recently released DeepSeek V4, a new model designed to compete not only on performance, but also on price, openness and efficiency.
The hype around DeepSeek V4 is not based on a single feature. The model comes with a 1 million-token context window, open weights, two versions for different use cases and a strong focus on agentic workflows such as coding, research, document analysis and long-running tasks. In a market still dominated by expensive closed models, DeepSeek is trying to prove that powerful AI does not need to remain locked behind proprietary systems.
A model built for long memory
The most immediate difference between DeepSeek V4 and other models is context length. Both DeepSeek-V4-Pro and DeepSeek-V4-Flash support a 1-million-token context window, meaning they can process inputs far longer than those of older generations of mainstream models. According to DeepSeek’s official release, one million tokens is now the default across all official DeepSeek services.
For ordinary users, that may sound technical. In practice, it matters because a longer context allows models to work with large documents, long conversations, full codebases, legal materials, research archives or complex project histories without losing track as quickly.
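A rough sketch makes the scale concrete. The 4-characters-per-token ratio below is a common rule of thumb for English text, not DeepSeek's actual tokenizer, so the estimate is approximate:

```python
# Rough sketch: will a document fit in a 1M-token context window?
# CHARS_PER_TOKEN is a heuristic assumption, not DeepSeek's tokenizer.

CONTEXT_WINDOW = 1_000_000   # tokens, per DeepSeek's stated default
CHARS_PER_TOKEN = 4          # rule-of-thumb ratio for English text

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, reserve_for_output: int = 8_000) -> bool:
    """Check whether the prompt still leaves room for a response."""
    return estimate_tokens(text) + reserve_for_output <= CONTEXT_WINDOW

# A 500-page book at roughly 2,000 characters per page:
book = "x" * (500 * 2_000)
print(estimate_tokens(book))   # 250000 -- well inside the window
print(fits_in_context(book))   # True
```

By this estimate, an entire 500-page book uses only about a quarter of the window, which is why full codebases and research archives become plausible inputs.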
That is why DeepSeek V4 is not just another chatbot release. It is aimed at the next stage of AI use, where models are expected to act less like question-answering tools and more like assistants that can follow long processes over time.
Two models for two different needs
DeepSeek V4 comes in two main versions. DeepSeek-V4-Pro is a larger and more capable model, with 1.6 trillion total parameters and 49 billion active parameters. DeepSeek-V4-Flash is a smaller model, with 284 billion total parameters and 13 billion active parameters, designed for faster and more cost-effective workloads.
That distinction is important. Not every user needs the strongest model for every task. A company summarising documents, routing queries or running basic support may choose Flash. A developer working on complex coding tasks, long-context agents or advanced reasoning may prefer Pro.
DeepSeek’s release reflects a broader trend in AI. The best model is no longer always the biggest one. Cost, speed, context size and deployment flexibility are now as important as raw benchmark performance.
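The sparse-activation figures quoted above can be put in perspective with a quick calculation. Only the "active" parameters participate in any single forward pass of a mixture-of-experts model, which is why a 1.6-trillion-parameter model can be cheaper to run than its headline size suggests:

```python
# Active-parameter share implied by the parameter counts quoted above.

models = {
    "DeepSeek-V4-Pro":   {"total": 1_600e9, "active": 49e9},
    "DeepSeek-V4-Flash": {"total": 284e9,   "active": 13e9},
}

for name, p in models.items():
    share = p["active"] / p["total"] * 100
    print(f"{name}: {share:.1f}% of parameters active per token")
# DeepSeek-V4-Pro:   3.1% active
# DeepSeek-V4-Flash: 4.6% active
```

In other words, both versions leave well over 95% of their weights idle on any given token.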
Why the price matters
One reason DeepSeek attracts so much attention is its aggressive pricing. DeepSeek’s API page lists V4-Flash at USD 0.14 per 1 million input tokens on a cache miss and USD 0.28 per 1 million output tokens. V4-Pro is listed at USD 1.74 per 1 million input tokens and USD 3.48 per 1 million output tokens before the temporary 75% discount.
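A back-of-the-envelope calculation using those list prices (cache-miss input, before discounts) shows what a single large request costs:

```python
# Per-request cost at the list prices quoted above (USD per 1M tokens).

PRICES = {
    "V4-Flash": {"input": 0.14, "output": 0.28},
    "V4-Pro":   {"input": 1.74, "output": 3.48},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at list price."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Feeding a full 1M-token context and getting a 5,000-token answer:
print(round(request_cost("V4-Flash", 1_000_000, 5_000), 4))  # 0.1414
print(round(request_cost("V4-Pro",   1_000_000, 5_000), 4))  # 1.7574
```

At those rates, even a request that fills the entire context window costs cents on Flash and under two dollars on Pro.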
For developers and companies, that changes the calculation. High-performing AI models are useful only if they can be deployed at scale. If every long document, coding session or agentic workflow becomes too expensive, adoption slows down.
DeepSeek’s challenge to the market is therefore not only technical. It is economic. The company is pushing the idea that frontier-level AI should be cheaper to run, easier to access and less dependent on closed ecosystems.
The architecture behind the hype
DeepSeek V4 uses a mixture-of-experts approach, meaning only part of the model is active during each response. That helps explain why the model can be very large on paper, yet still more efficient to run than a dense model of similar overall size.
The more interesting part is how DeepSeek handles long context. NVIDIA’s technical overview explains that DeepSeek V4 uses hybrid attention, combining compression and selective attention techniques to reduce the cost of processing very long prompts. NVIDIA says these changes are designed to cut per-token inference FLOPs by 73% and reduce KV cache memory burden by 90% compared with DeepSeek-V3.2.
For a non-technical audience, the point is simple. DeepSeek V4 is trying to solve one of the biggest problems in modern AI: how to make models remember and process much more information without becoming too slow or too expensive.
That is where much of the hype comes from. The model is not merely larger. It is designed around the economics of long-context AI.
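To see why a 90% KV-cache cut matters in absolute terms, consider a rough estimate. The layer, head, and dimension numbers below are hypothetical placeholders, not DeepSeek V4's real architecture; only the 90% reduction figure comes from NVIDIA's overview:

```python
# Illustration of KV-cache memory at a 1M-token context.
# Model dimensions below are hypothetical, for scale only.

def kv_cache_bytes(seq_len, layers, kv_heads, head_dim, bytes_per_elem=2):
    # Keys + values: one entry per layer, per KV head, per position.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

baseline = kv_cache_bytes(seq_len=1_000_000, layers=60, kv_heads=8, head_dim=128)
reduced = baseline * (1 - 0.90)   # the 90% reduction claimed vs V3.2

print(f"baseline KV cache: {baseline / 1e9:.0f} GB")  # 246 GB
print(f"after 90% cut:     {reduced / 1e9:.1f} GB")   # 24.6 GB
```

Shrinking a cache of that size by an order of magnitude is the difference between needing a multi-GPU server per conversation and fitting long contexts on far less hardware.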
Why NVIDIA is still in the picture
NVIDIA’s role in the DeepSeek V4 story is especially interesting. DeepSeek is often discussed as part of China’s effort to build a more independent AI ecosystem, but NVIDIA has moved quickly to support developers who want to build with the model.
In its technical blog, NVIDIA describes DeepSeek V4 as a model family designed for efficient inference of million-token contexts. The company says DeepSeek-V4-Pro and V4-Flash are available through NVIDIA GPU-accelerated endpoints, while developers can also use NVIDIA Blackwell, NIM containers, SGLang and vLLM deployment options.
NVIDIA also reports that early tests of DeepSeek-V4-Pro on the GB200 NVL72 platform showed more than 150 tokens per second per user. That matters because long-context models place heavy pressure on memory, as well as on compute and networking infrastructure. The model may be efficient by design, but serving it at scale still requires serious hardware.
So, DeepSeek V4 does not remove NVIDIA from the story – it complicates it. The model is part of a broader push towards more efficient AI, but the infrastructure race remains central.
The chip question behind the model
DeepSeek V4 also arrives at a time when AI infrastructure is becoming just as important as model performance. MIT Technology Review frames the release partly through that lens, noting that DeepSeek’s new model reflects China’s broader attempt to reduce reliance on foreign AI hardware and build a more self-sufficient technology stack.
That detail matters because the AI race is no longer only about who builds the most capable model. It is also about who controls the chips, software frameworks and data centres needed to run it.
Replacing NVIDIA, however, remains difficult. Its advantage lies not just in its chips, but also in the software ecosystem developers have built around its platforms over many years. Moving to alternative hardware means adapting code, rebuilding tools and proving that the new systems are stable enough for serious use.
DeepSeek V4 thus sits between two realities. It points towards China’s ambition to build a more independent AI stack, while NVIDIA’s rapid support for the model shows that frontier AI still depends heavily on established infrastructure.
Open weights as a strategic move
DeepSeek V4 is also important because the model weights are available through Hugging Face under the MIT License. That gives developers more freedom to inspect, adapt and deploy the model than they would have with a fully closed commercial system.
Open-weight models are becoming a major pressure point in the AI race. Closed models may still lead in some areas, especially in polished consumer products, enterprise support and safety layers. However, open models offer something different: flexibility.
For universities, start-ups, smaller companies and developers outside the largest AI ecosystems, that flexibility matters. It means advanced AI can be tested, modified and integrated without relying entirely on a handful of dominant providers.
Benchmarks need caution
DeepSeek presents V4-Pro as highly competitive across reasoning, coding, long-context and agentic benchmarks. Hugging Face lists results including 80.6 on SWE-bench Verified, 90.1 on GPQA Diamond and 87.5 on MMLU-Pro for DeepSeek-V4-Pro.
Those numbers are impressive, but they should not be treated as the full story. Benchmarks are useful, but they rarely capture every real-world use case. A model can score well on coding tests and still struggle with reliability, factual accuracy, safety or complex multi-step workflows in production.
That caution is important. The AI industry often turns benchmarks into headlines, while real performance depends on deployment, prompting, safety controls and the specific task at hand.
More than just another model release
DeepSeek V4 matters because it combines several trends into one release: long context, lower prices, open weights, agentic workflows and geopolitical competition. It also shows that the AI race is no longer fought only in labs, benchmarks and data centres. Visibility now matters too. Tools such as Diplo’s Digital Footprints show how digital presence shapes the way technology actors and media narratives are discovered, ranked and understood. At this stage, the competition is not only about who has the smartest model. It is also about who can make intelligence cheaper, more available and easier to deploy.
That does not mean DeepSeek has solved every problem. Questions remain around independent benchmarking, safety, data governance, infrastructure and the broader political context of Chinese AI development. Still, the release does show where the market is heading.
The next phase of AI may not be defined solely by the most powerful model. It may be defined by the model that is powerful enough, affordable enough and open enough to change how people build products, services and tools with AI.