EU Advocate General backs limited seizure of work emails in competition probes

An Advocate General of the Court of Justice of the European Union has said national competition authorities may lawfully seize employee emails during investigations without prior judicial approval, provided that a strict legal framework and effective safeguards against abuse are in place.

The case arose after Portuguese medical companies challenged the competition authority’s seizure of staff emails, arguing it breached the right to privacy and correspondence under the EU Charter of Fundamental Rights. The authority acted under authorisation from the Public Prosecutor’s Office.

According to the Advocate General, such seizures may limit privacy and data protection rights under Articles 7 and 8 of the Charter, but remain lawful if proportionate and justified. The processing of personal data is permitted under the GDPR where it serves the public interest in enforcing competition law.

The opinion emphasised that access to business emails did not undermine the essence of data protection rights, as the investigation focused on professional communications. The final judgment from the CJEU is expected to clarify how privacy principles apply in competition law enforcement across the EU.

ByteDance cuts use of Claude after Anthropic blocks China access

An escalating tech clash has emerged between ByteDance and Anthropic over AI access and service restrictions. ByteDance has halted use of Anthropic’s Claude model on its infrastructure after the US firm imposed access limitations for Chinese users.

The suspension follows Anthropic’s move to restrict China-linked deployments and aligns with broader geopolitical tensions in the AI sector. ByteDance reportedly said it would now rely on domestic alternatives, signalling a strategic pivot away from Western AI models.

Industry watchers view the dispute as a marker of how major tech firms are navigating export controls, national security concerns and sovereignty in AI. Observers warn the rift may prompt accelerated investment in home-grown AI ecosystems by Chinese companies.

While neither company has detailed all operational impacts, the episode highlights AI’s fraught position at the intersection of technology and geopolitics. US market reaction may hinge on whether other firms follow suit or partnerships are redefined around regional access.

Social media platforms ordered to enforce minimum age rules in Australia

Australia’s eSafety Commissioner has formally notified major social media platforms, including Facebook, Instagram, TikTok, Snapchat, and YouTube, that they must comply with new minimum age restrictions from 10 December.

The rule will require these services to prevent social media users under 16 from creating accounts.

eSafety determined that nine popular services currently meet the definition of age-restricted platforms since their main purpose is to enable online social interaction. Platforms that fail to take reasonable steps to block underage users may face enforcement measures, including fines of up to AUD 49.5 million.

The agency clarified that the list of age-restricted platforms will not remain static, as new services will be reviewed and reassessed over time. Others, such as Discord, Google Classroom, and WhatsApp, are excluded for now as they do not meet the same criteria.

Commissioner Julie Inman Grant said the new framework aims to delay children’s exposure to social media and limit harmful design features such as infinite scroll and opaque algorithms.

She emphasised that age limits are only part of a broader effort to build safer, more age-appropriate online environments supported by education, prevention, and digital resilience.

EU conference highlights the need for collaboration in digital safety and growth

European politicians and experts gathered in Billund for the conference ‘Towards a Safer and More Innovative Digital Europe’, hosted by the Danish Parliament.

The discussions centred on how to protect citizens online while strengthening Europe’s technological competitiveness.

Lisbeth Bech-Nielsen, Chair of the Danish Parliament’s Digitalisation and IT Committee, stated that the event demonstrated the need for the EU to act more swiftly to harness its collective digital potential.

She emphasised that only through cooperation and shared responsibility can the EU match the pace of global digital transformation and fully benefit from its combined strengths.

The first theme addressed online safety and responsibility, focusing on the enforcement of the Digital Services Act, child protection, and the accountability of e-commerce platforms importing products from outside the EU.

Participants highlighted the importance of listening to young people and improving cross-border collaboration between regulators and industry.

The second theme examined Europe’s competitiveness in emerging technologies such as AI and quantum computing. Speakers called for more substantial investment, harmonised digital skills strategies, and better support for businesses seeking to expand within the single market.

The Billund conference emphasised that Europe’s digital future depends on striking a balance between safety, innovation, and competitiveness, a balance that can only be achieved through joint action and long-term commitment.

OpenAI becomes fastest-growing business platform in history

OpenAI has surpassed 1 million business customers, becoming the fastest-growing business platform in history. Companies in healthcare, finance, retail, and tech use ChatGPT for Work or API access to enhance operations, customer experiences, and team workflows.
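
As a rough illustration of the API route mentioned above, the snippet below sketches a single chat completion call with the official OpenAI Python SDK; the model name, prompt, and support-ticket use case are placeholders chosen for the example rather than details reported here.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical support-ticket summarisation task, for illustration only
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You summarise customer support tickets."},
        {"role": "user", "content": "Summarise this ticket and suggest a reply: ..."},
    ],
)
print(response.choices[0].message.content)
```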

Consumer familiarity is driving enterprise adoption. With over 800 million weekly ChatGPT users, rollouts face less friction. ChatGPT for Work now has more than 7 million seats, growing 40% in two months, while ChatGPT Enterprise seats have increased ninefold year-over-year.

Businesses are reporting strong ROI, with 75% seeing positive results from AI deployment.

New tools and integrations are accelerating adoption. The company knowledge feature lets ChatGPT work across Slack, SharePoint, and GitHub. Codex accelerates engineering workflows, while AgentKit facilitates rapid enterprise agent deployment.

Multimodal models now support text, images, video, and audio, allowing richer workflows across industries.

Many companies are building applications directly on OpenAI’s platform. Brands like Canva, Spotify, and Shopify are integrating AI into apps, and the Agentic Commerce Protocol is bringing conversational commerce to everyday experiences.

OpenAI aims to continue expanding capabilities in 2026, reimagining enterprise workflows with AI at the core.

Kraken Pro unlocks crypto-collateralised futures for EU traders

Kraken Pro has expanded its offerings in the EU by allowing clients to use crypto, including BTC, ETH, and certain stablecoins, as collateral for more than 150 perpetual futures markets.

The move positions the platform among the first regulated venues in Europe to provide crypto-collateralised, USD-margined futures contracts. It combines flexibility, speed, and capital efficiency with compliance under MiFID II.

Using crypto as collateral enables traders to maintain exposure to their digital assets while accessing leveraged positions. Clients can post BTC, ETH, or stablecoins without converting to fiat, avoiding fees and delays.

The system also supports cross-asset hedging and stablecoin-backed trades, allowing users to manage risk and diversify strategies more efficiently.

Kraken Pro’s regulated futures comply with EU rules, offering up to 10x leverage, multi-asset collateral, and supervision under MiCA and MiFID II. The platform offers deep liquidity, tight spreads, and reliable execution for both individual and institutional traders, even during volatile market conditions.
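
To make the capital-efficiency point concrete, here is a back-of-the-envelope sketch of how posted collateral relates to maximum notional exposure at 10x leverage; the BTC price is assumed for illustration, and the arithmetic is not Kraken’s actual margin or fee model.

```python
# Illustrative leverage arithmetic only; not Kraken's margin engine
btc_collateral = 1.0      # BTC posted as collateral
btc_price_usd = 60_000    # assumed spot price for this example
max_leverage = 10         # the article cites up to 10x leverage

collateral_value_usd = btc_collateral * btc_price_usd
max_notional_usd = collateral_value_usd * max_leverage

print(f"Collateral value: ${collateral_value_usd:,.0f}")
print(f"Maximum notional exposure at {max_leverage}x: ${max_notional_usd:,.0f}")
```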

To begin trading, clients must enable futures on Kraken EU, fund their accounts with crypto assets, select their preferred collateral, and then open or manage leveraged perpetual positions. The update enhances strategic options for both hedging and directional trades.

SoftBank and OpenAI bring enterprise AI revolution to Japan

SoftBank and OpenAI have announced the launch of SB OAI Japan, a new joint venture established to deliver an advanced enterprise AI solution known as Crystal Intelligence. Unveiled on 5 November 2025, the initiative aims to transform Japan’s corporate management through tailored AI solutions.

SB OAI Japan will exclusively market Crystal Intelligence in Japan starting in 2026. The platform integrates OpenAI’s latest models with local implementation, system integration, and ongoing support.

Designed to enhance productivity and streamline management, Crystal Intelligence will help Japanese companies adopt AI tools suited to their specific operational needs.

SoftBank Corp. will be the first to deploy Crystal Intelligence, testing and refining the technology before wider release. The company plans to share insights through SB OAI Japan to drive AI-powered transformation across industries.

The partnership underscores SoftBank’s vision of becoming an AI-native organisation. The group has already developed around 2.5 million custom GPTs for internal use.

OpenAI CEO Sam Altman stated that the venture marks a significant step in bringing advanced AI to global enterprises. At the same time, SoftBank’s Masayoshi Son described it as the beginning of a new era where AI agents autonomously collaborate to achieve business goals.

Perplexity’s Comet hits Amazon’s policy wall

Amazon has pushed back against Perplexity’s Comet, warning that the agent was shopping on its store without identifying itself. Perplexity says an agent inherits its user’s permissions. The fight turns a header detail into a question of who gets to intermediate online buying.

Amazon likens agents to delivery or travel intermediaries that announce themselves, and hints at blocking non-compliant bots. Because Amazon runs its own assistant, Rufus, critics fear such rules could double as competitive moats; Perplexity calls it gatekeeping.

Beneath this is a business-model clash. Retailers monetise discovery with ads and sponsored placement. Neutral agents promise price-first buying and fewer impulse ads. If bots dominate, incumbents lose margin and control of merchandising levers.

Interoperability likely requires standards, including explicit bot IDs, rate limits, purchase scopes, consented data access, and auditable logs. Stores could ship agent APIs for inventory, pricing, and returns, with 2FA and fraud checks for transactions.
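
As a sketch of what explicit bot identification might look like at the HTTP level, the snippet below sends a request with a self-describing User-Agent string and checks robots.txt first; the agent name, store URL, and product path are hypothetical, and real standards would add authentication, purchase scopes, and auditable logs.

```python
from urllib import robotparser

import requests  # third-party HTTP client

AGENT_UA = "ExampleShoppingAgent/1.0 (+https://agent.example.com/about)"  # hypothetical bot ID
STORE = "https://shop.example.com"  # placeholder retailer

# Respect the store's robots policy before fetching anything
rp = robotparser.RobotFileParser()
rp.set_url(f"{STORE}/robots.txt")
rp.read()

product_url = f"{STORE}/products/12345"
if rp.can_fetch(AGENT_UA, product_url):
    # Identify the agent explicitly instead of imitating a human browser
    resp = requests.get(product_url, headers={"User-Agent": AGENT_UA}, timeout=10)
    print(resp.status_code)
else:
    print("Store policy disallows automated access to this page.")
```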

In the near term, expect fragmentation as platforms favour native agents and restrictive terms, while regulators weigh transparency and competition. A workable truce: disclose the agent, honour robots.txt and store policies, and use clear opt-in data contracts.

The rise of large language models and the question of ownership

The divide defining AI’s future through large language models

What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate various types of content, including human-like text, images, video, and audio.

The development of these large language models has reshaped AI from a specialised field into a social, economic, and political phenomenon. Systems such as GPT, Claude, Gemini, and Llama have become fundamental infrastructures for information processing, creative work, and automation.

Their rapid rise has generated an intense debate about who should control the most powerful linguistic tools ever built.

The distinction between open source and closed source models has become one of the defining divides in contemporary technology, one that will undoubtedly shape our societies.

Open source models such as Meta’s Llama 3, Mistral, and Falcon offer public access to their code or weights, allowing developers to experiment, improve, and deploy them freely.
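
For a sense of what public access to weights means in practice, here is a minimal sketch using the Hugging Face transformers library to run an open-weight model locally; the model identifier is illustrative, and some releases require accepting a licence and substantial hardware before the weights can be downloaded.

```python
from transformers import pipeline

# Downloads the weights once, then runs entirely on local hardware
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.3",  # illustrative open-weight model ID
)

prompt = "Explain the difference between open and closed LLMs in one sentence."
result = generator(prompt, max_new_tokens=60)
print(result[0]["generated_text"])
```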

Closed source models, exemplified by OpenAI’s GPT series, Anthropic’s Claude, or Google’s Gemini, restrict access, keeping architectures and data proprietary.

This tension is not merely technical. It embodies two competing visions of knowledge production: one oriented toward collective benefit and transparency, the other toward commercial exclusivity and the protection of intellectual property.

The core question is whether language models should be treated as a global public good or as privately owned technologies governed by corporate rights. The answer to such a question carries implications for innovation, fairness, safety, and even democratic governance.

Innovation and market power in the AI economy

From an economic perspective, open and closed source models represent opposing approaches to innovation. Open models accelerate experimentation and lower entry barriers for small companies, researchers, and governments that lack access to massive computing resources.

They enable localised applications in diverse languages, sectors, and cultural contexts. Their openness supports decentralised innovation ecosystems, much as Linux did for operating systems.

Closed models, however, maintain higher levels of quality control and often outperform open ones due to the scale of data and computing power behind them. Companies like OpenAI and Google argue that their proprietary control ensures security, prevents misuse, and finances further research.

The closed model thus creates a self-reinforcing cycle. Access to large datasets and computing leads to better models, which attract more revenue, which in turn funds even larger models.

The outcome of that has been the consolidation of AI power within a handful of corporations. Microsoft, Google, OpenAI, Meta, and a few start-ups have become the new gatekeepers of linguistic intelligence.

Such concentration raises concerns about market dominance, competitive exclusion, and digital dependency. Smaller economies and independent developers risk being relegated to consumers of foreign-made AI products, instead of being active participants in the creation of digital knowledge.

As such, open source LLMs represent a counterweight to Big Tech’s dominance. They allow local innovation and reduce dependency, especially for countries seeking technological sovereignty.

Yet open access also brings new risks, as the same tools that enable democratisation can be exploited for disinformation, deepfakes, or cybercrime.

Ethical and social aspects of openness

The ethical question surrounding LLMs is not limited to who can use them, but also to how they are trained. Closed models often rely on opaque datasets scraped from the internet, including copyrighted material and personal information.

Without transparency, it is impossible to assess whether training data respects privacy, consent, or intellectual property rights. Open source models, by contrast, offer partial visibility into their architecture and data curation processes, enabling community oversight and ethical scrutiny.

However, we have to keep in mind that openness does not automatically ensure fairness. Many open models still depend on large-scale web data that reproduce existing biases, stereotypes, and inequalities.

Open access also increases the risk of malicious content, such as generating hate speech, misinformation, or automated propaganda. The balance between openness and safety has therefore become one of the most delicate ethical frontiers in AI governance.

Socially, open LLMs can empower education, research, and digital participation. They allow low-resource languages to be modelled, minority groups to build culturally aligned systems, and academic researchers to experiment without licensing restrictions.

They represent a vision of AI as a collaborative human project rather than a proprietary service.

Yet they also redistribute responsibility: when anyone can deploy a powerful model, accountability becomes diffuse. The challenge lies in preserving the benefits of openness while establishing shared norms for responsible use.

The legal and intellectual property dilemma

Intellectual property law was not designed for systems that learn from millions of copyrighted works without direct authorisation.

Closed source developers defend their models as transformative works under fair use doctrines, while content creators demand compensation or licensing mechanisms.

The dispute has already reached courts, as artists, authors, and media organisations sue AI companies for unauthorised use of their material.

Open source further complicates the picture. When model weights are released freely, the question arises of who holds responsibility for derivative works and whether open access violates existing copyrights.

Some open licences now include clauses prohibiting harmful or unlawful use, blurring the line between openness and control. Legal scholars argue that a new framework is needed to govern machine learning datasets and outputs, one that recognises both the collective nature of data and the individual rights embedded in it.

At stake is not only financial compensation but the broader question of data ownership in the digital age. We must ask ourselves: if data is the raw material of intelligence, should it remain the property of a few corporations or be treated as a shared global resource?

Economic equity and access to computational power

Even the most open model requires massive computational infrastructure to train and run effectively. Access to GPUs, cloud resources, and data pipelines remains concentrated among the same corporations that dominate the closed model ecosystem.

Thus, openness in code does not necessarily translate into openness in practice.

Developing nations, universities, and public institutions often lack the financial and technical means to exploit open models at scale. Such an asymmetry creates a form of digital neo-dependency: the code is public, but the hardware is private.

For AI to function as a genuine global public good, investments in open computing infrastructure, public datasets, and shared research facilities are essential. Initiatives such as the EU’s AI-on-demand platform or the UN’s efforts for inclusive digital development reflect attempts to build such foundations.

The economic stakes extend beyond access to infrastructure. LLMs are becoming the backbone of new productivity tools, from customer service bots to automated research assistants.

Whoever controls them will shape the future division of digital labour. Open models could allow local companies to retain more economic value and cultural autonomy, while closed models risk deepening global inequalities.

Governance, regulation, and the search for balance

Governments face the difficult task of regulating a technology that evolves faster than policy. For example, the EU AI Act, US executive orders on trustworthy AI, and China’s generative AI regulations all address questions of transparency, accountability, and safety.

Yet few explicitly differentiate between open and closed models.

The open source community resists excessive regulation, arguing that heavy compliance requirements could suffocate innovation and concentrate power even further in large corporations that can afford legal compliance.

On the other hand, policymakers worry that uncontrolled distribution of powerful models could facilitate malicious use. The emerging consensus suggests that regulation should focus not on the source model itself but on the context of its deployment and the potential harms it may cause.

An additional governance question concerns international cooperation. AI’s global nature demands coordination on safety standards, data sharing, and intellectual property reform.

The absence of such alignment risks a fragmented world where closed models dominate wealthy regions while open ones, potentially less safe, spread elsewhere. Finding equilibrium requires mutual trust and shared principles for responsible innovation.

The cultural and cognitive dimension of openness

Beyond technical and legal debates, the divide between open and closed models reflects competing cultural values. Open source embodies the ideals of transparency, collaboration, and communal ownership of knowledge.

Closed source represents discipline, control, and the pursuit of profit-driven excellence. Both cultures have contributed to technological progress, and both have drawbacks.

From a cognitive perspective, open LLMs can enhance human learning by enabling broader experimentation, while closed ones can limit exploration to predefined interfaces. Yet too much openness may also encourage cognitive offloading, where users rely on AI systems without developing independent judgment.

Therefore, societies must cultivate digital literacy alongside technical accessibility, ensuring that AI supports human reasoning rather than replaces it.

The way societies integrate LLMs will influence how people perceive knowledge, authority, and creativity. When language itself becomes a product of machines, questions about authenticity, originality, and intellectual labour take on new meaning.

Whether open or closed, models shape collective understanding of truth, expression, and imagination for our societies.

Toward a hybrid future

The polarisation between open and closed approaches may prove unsustainable in the long run. A hybrid model is emerging, where partially open architectures coexist with protected components.

Companies like Meta release open weights but restrict commercial use, while others provide APIs for experimentation without revealing the underlying code. Such hybrid frameworks aim to combine accountability with safety and commercial viability with transparency.

The future equilibrium is likely to depend on international collaboration and new institutional models. Public–private partnerships, cooperative licensing, and global research consortia could ensure that LLM development serves both the public interest and corporate sustainability.

A system of layered access (where different levels of openness correspond to specific responsibilities) may become the standard.

Ultimately, the choice between open and closed models reflects humanity’s broader negotiation between collective welfare and private gain.

Just as the internet or many other emerging technologies evolved through the tension between openness and commercialisation, the future of language models will be defined by how societies manage the boundary between shared knowledge and proprietary intelligence.

In conclusion, the debate between open and closed source LLMs is not merely technical. It embodies the broader conflict between public good and private control, between the democratisation of intelligence and the concentration of digital power.

Open models promote transparency, innovation, and inclusivity, but pose challenges in terms of safety, legality, and accountability. Closed models offer stability, quality, and economic incentives, yet risk monopolising a resource that has become central to human progress.

Finding equilibrium requires rethinking the governance of knowledge itself. Language models should neither be owned solely by corporations nor be released without responsibility. They should be governed as shared infrastructures of thought, supported by transparent institutions and equitable access to computing power.

Only through such a balance can AI evolve as a force that strengthens, rather than divides, our societies and improves our daily lives.

Blackwell stance on China exports holds as Washington weighs tech pace

AI export policy in Washington remains firm, with officials saying the most advanced Nvidia Blackwell chips will not be sold to China. A White House spokesperson confirmed the stance during a briefing. The position follows weeks of speculation about scaled-down variants.

Senior economic officials have floated the possibility of a later shift, citing the rapid pace of chip development. If Blackwell is quickly superseded, future sales could be reconsidered. Any change would depend on technological parity, licensing, and national security assessments.

Nvidia’s chief executive signalled hope that parts of the Blackwell family could eventually be supplied to China, while noting there are no current plans to do so. Company guidance emphasises both commercial and research applications. Analysts say licensing clarity will dictate data centre buildouts and training roadmaps.

Policy hawks argue that cutting-edge accelerators should remain in US-allied markets to protect strategic advantages. Others counter that export channels can be reopened once hardware is no longer state-of-the-art. The debate now centres on timelines measured in product cycles.

Diplomatic calendars may influence further discussions, with potential leader-level meetings next year alongside major international gatherings. Officials portrayed the broader bilateral relationship as steadier. The industry will track any signals that link geopolitical dialogue to chip export regulations.
