WhatsApp to support cross-app messaging

Meta is launching a ‘third-party chats’ feature on WhatsApp in Europe, allowing users to send and receive messages from other interoperable messaging apps.

Initially, only two apps, BirdyChat and Haiket, will support this integration, but users will be able to send text, voice, video, images and files. The rollout will begin in the coming months for iOS and Android users in the EU.

Meta emphasises that interoperability is opt-in, and messages exchanged via third-party apps will retain end-to-end encryption, provided the other apps match WhatsApp’s security requirements. Users can choose whether to display these cross-app conversations in a separate ‘third-party chats’ folder or mix them into their main inbox.

By opening up its messaging to external apps, WhatsApp is responding to the EU’s Digital Markets Act (DMA), which requires major tech platforms to allow interoperability. This move could reshape how messaging works in Europe, making it easier to communicate across different apps, though it also raises questions about privacy, spam risk and how encryption is enforced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Eurofiber France confirms major data breach

The French telecommunications company Eurofiber has acknowledged a breach of its ATE customer platform and digital ticket system after a hacker accessed the network through software used by the company.

Engineers detected the intrusion quickly and implemented containment measures, while the company stressed that services remained operational and banking data stayed secure. The incident affected only French operations and subsidiaries such as Netiwan, Eurafibre, Avelia, and FullSave, according to the firm.

Security researchers, however, argue that the scale is far broader. International Cyber Digest reported that more than 3,600 organisations may be affected, including prominent French institutions such as Orange, Thales, the national rail operator, and major energy companies.

The outlet linked the intrusion to the ransomware group ByteToBreach, which allegedly stole Eurofiber’s entire GLPI database and accessed API keys, internal messages, passwords and client records.

A known dark web actor has now listed the stolen dataset for sale, reinforcing concerns about the growing trade in exposed corporate information. The contents reportedly range from files and personal data to cloud configurations and privileged credentials.

Eurofiber did not clarify which elements belonged to its systems and which originated from external sources.

The company has notified the French privacy regulator CNIL and continues to investigate while assuring Dutch customers that their data remains safe.

The breach underlines the vulnerability of essential infrastructure providers across Europe, echoing recent incidents in Sweden, where a compromised IT supplier exposed data belonging to over a million people.

Eurofiber says it aims to strengthen its defences to prevent similar compromises in future.

Eurofiber France reportedly hit by data breach

Eurofiber France has suffered a data breach affecting its internal ticket management system and ATE customer portal, reportedly discovered on 13 November. The incident allegedly involved unauthorised access via a software vulnerability, with the full extent still unclear.

Sources indicate that approximately 3,600 customers could be affected, including major French companies and public institutions. Reports suggest that some of the allegedly stolen data, ranging from documents to cloud configurations, may have appeared on the dark web for sale.

Eurofiber has emphasised that Dutch operations are not affected.

The company moved quickly to secure affected systems, increasing monitoring and collaborating with cybersecurity specialists to investigate the incident. The French privacy regulator, CNIL, has been informed, and Eurofiber states that it will continue to update customers as the investigation progresses.

Founded in 2000, Eurofiber provides fibre optic infrastructure across the Netherlands, Belgium, France, and Germany. Primarily owned by Antin Infrastructure Partners and partially by Dutch pension fund PGGM, the company remains operational while assessing the impact of the breach.

Agentic AI drives a new identity security crisis

New research from Rubrik Zero Labs warns that agentic AI is reshaping the identity landscape faster than organisations can secure it.

The study reveals a surge in non-human identities created through automation and API-driven workflows, with their numbers now exceeding human users by a striking margin.

Most firms have already introduced AI agents into their identity systems or plan to do so, yet many struggle to govern the growing volume of machine credentials.

Experts argue that identity has become the primary attack surface as remote work, cloud adoption and AI expansion remove traditional boundaries. Threat actors increasingly rely on valid credentials instead of technical exploits, which makes weaknesses in identity governance far more damaging.

Rubrik’s researchers and external analysts agree that a single compromised key or forgotten agent account can provide broad access to sensitive environments.

Industry specialists highlight that agentic AI disrupts established IAM practices by blurring distinctions between human and machine activity.

Organisations often cannot determine whether a human or an automated agent performed a critical action, which undermines incident investigations and weakens zero-trust strategies. Poor logging, weak lifecycle controls and abandoned machine identities further expand the attack surface.

Rubrik argues that identity resilience is becoming essential, since IAM tools alone cannot restore trust after a breach. Many firms have already switched IAM providers, reflecting widespread dissatisfaction with current safeguards.

Analysts recommend tighter control of agent creation, stronger credential governance and a clearer understanding of how AI-driven identities reshape operational and security risks.

Anthropic uncovers a major AI-led cyberattack

The US AI firm Anthropic has revealed details of the first known cyber espionage operation largely executed by an autonomous AI system.

Suspicious activity detected in September 2025 led to an investigation that uncovered an attack framework, which used Claude Code as an automated agent to infiltrate about thirty high-value organisations across technology, finance, chemicals and government.

The attackers relied on recent advances in model intelligence, agency and tool access.

By breaking tasks into small prompts and presenting Claude as a defensive security assistant instead of an offensive tool, they bypassed safeguards and pushed the model to analyse systems, identify weaknesses, write exploit code and harvest credentials.

The AI completed most of the work with only a few moments of human direction, operating at a scale and speed that human hackers would struggle to match.

Anthropic responded by banning accounts, informing affected entities and working with authorities as evidence was gathered. The company argues that the case shows how easily sophisticated operations can now be carried out by less-resourced actors who use agentic AI instead of traditional human teams.

Errors such as hallucinated credentials remain a limitation, yet the attack marks a clear escalation in capability and ambition.

The firm maintains that the same model abilities exploited by the attackers are needed for cyber defence. Greater automation in threat detection, vulnerability analysis and incident response is seen as vital.

Safeguards, stronger monitoring and wider information sharing are presented as essential steps for an environment where adversaries are increasingly empowered by autonomous AI.

Digital ID arrives for Apple users

Apple has introduced Digital ID, a new feature that lets users create an identification card in Apple Wallet using information from a US passport.

The feature launches in beta at Transportation Security Administration checkpoints in more than 250 airports for domestic travel, reducing reliance on physical documentation.

It offers an alternative for users who lack a Real ID-compliant card while not replacing a physical passport for international journeys.

Users set up a Digital ID by scanning the passport’s photo page, reading the chip on the back of the document, and completing facial movements for verification.

Once added, the ID can be presented with an iPhone or Apple Watch by holding the device near an identity reader and confirming the request with Face ID or Touch ID. New verification options for in-person checks at selected businesses, apps and online platforms are planned.

The company highlights privacy protection by storing passport data only on the user’s device, instead of Apple’s servers. Digital ID information is encrypted and cannot be viewed by Apple, and biometric authentication ensures that only the owner can present the identity.

Only the required information is shared during each transaction, and the user must approve it before it is released.
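The per-transaction disclosure model described above can be sketched in a few lines. This is a hypothetical illustration of the general selective-disclosure pattern, not Apple's actual protocol; all field names and the `present_id` function are invented for the example.

```python
# Hypothetical sketch of selective disclosure: the verifier names the
# fields it needs, the user approves, and only those fields leave the
# device. Names and flow are illustrative, not Apple's implementation.

ON_DEVICE_ID = {          # stored encrypted on the device in practice
    "name": "A. Traveller",
    "date_of_birth": "1990-01-01",
    "passport_number": "X1234567",
    "photo": "<bytes>",
}

def present_id(requested_fields: list[str], user_approved: bool) -> dict:
    """Release only the requested fields, and only with user approval."""
    if not user_approved:
        raise PermissionError("user declined the request")
    return {f: ON_DEVICE_ID[f] for f in requested_fields if f in ON_DEVICE_ID}

# A checkpoint asks for name and date of birth only; the passport
# number and photo are never transmitted.
shared = present_id(["name", "date_of_birth"], user_approved=True)
print(sorted(shared))  # ['date_of_birth', 'name']
```

The key property is that the full record never leaves the device: the verifier receives only the approved subset.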

The launch expands Apple Wallet’s existing support for driver’s licences and state IDs, which are already available in twelve states and Puerto Rico. Recent months have added Montana, North Dakota and West Virginia, and Japan adopted the feature with the My Number Card.

Apple expects Digital ID to broaden secure personal identification across more services over time.

Hidden freeze controls uncovered across major blockchains

Bybit’s Lazarus Security Lab says 16 major blockchains embed fund-freezing mechanisms. An additional 19 could adopt them with modest protocol changes, according to the study. The review covered 166 networks using an AI-assisted scan plus manual validation.

Drawing on that analysis, the researchers describe three freeze models: hardcoded blacklists, configuration-based freezes, and on-chain system contracts. Examples cited include BNB Chain, Aptos, Sui, VeChain and HECO in different roles. Analysts argue that emergency tools can curb exploits yet concentrate control.
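The difference between the three models is essentially where the freeze list lives and who can change it. The toy sketch below, which does not reproduce any real chain's implementation, shows all three layers feeding a single transfer check; every name in it is invented for illustration.

```python
# Toy illustration of the three freeze models: the only real difference
# is where the list lives and who can update it. Not any chain's code.

# 1. Hardcoded blacklist: addresses baked into the node software itself,
#    changeable only by shipping a new client release.
HARDCODED_BLACKLIST = {"0xexploiter"}

# 2. Configuration-based freeze: operators edit a config list without
#    recompiling or redeploying the client.
config_frozen: set[str] = set()

# 3. On-chain system contract: freeze state kept in chain state and
#    modified via governance or validator transactions.
system_contract_frozen: set[str] = set()

def transfer_allowed(sender: str) -> bool:
    """A transfer is rejected if any of the three layers freezes the sender."""
    return not (
        sender in HARDCODED_BLACKLIST
        or sender in config_frozen
        or sender in system_contract_frozen
    )

# Emergency response after an exploit: operators freeze via configuration.
config_frozen.add("0xbridge_hacker")

print(transfer_allowed("0xalice"))          # unaffected user
print(transfer_allowed("0xbridge_hacker"))  # frozen via config
```

The governance debate in the article maps directly onto who controls each set: a hardcoded list requires a coordinated client upgrade, while a config or contract list can be changed far more quickly, and far more opaquely.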

Case studies show freezes after high-profile attacks and losses. Sui validators moved to restore about $162 million after the Cetus hack, while BNB Chain halted movements after a $570 million bridge exploit. VeChain blocked $6.6 million in 2019.

Debates now centre on transparency, governance and user rights when freezes occur. Critics warn about centralisation risks and opaque validator decisions, while exchanges urge disclosure of intervention powers.

IMY investigates major ransomware attack on Swedish IT supplier

Sweden’s data protection authority, IMY, has opened an investigation into a massive ransomware-related data breach that exposed personal information belonging to 1.5 million people. The breach originated from a cyberattack on IT provider Miljödata in August, which affected roughly 200 municipalities.

Hackers reportedly stole highly sensitive data, including names, medical certificates, and rehabilitation records, much of which has since been leaked on the dark web. Swedish officials have condemned the incident, calling it one of the country’s most serious cyberattacks in recent years.

The IMY said the investigation will examine Miljödata's data protection measures and the response of several affected public bodies, such as Gothenburg, Älmhult, and Västmanland. The regulator aims to identify security shortcomings and strengthen defences against future cyber threats.

Authorities have yet to confirm how the attackers gained access to Miljödata’s systems, and no completion date for the investigation has been announced. The breach has reignited calls for tighter cybersecurity standards across Sweden’s public sector.

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

The rise of large language models and the question of ownership

The divide defining AI’s future through large language models

What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate various types of content, including human-like text, images, video, and audio.

The development of these large language models has reshaped AI from a specialised field into a social, economic, and political phenomenon. Systems such as GPT, Claude, Gemini, and Llama have become fundamental infrastructures for information processing, creative work, and automation.

Their rapid rise has generated an intense debate about who should control the most powerful linguistic tools ever built.

The distinction between open source and closed source models has become one of the defining divides in contemporary technology that will, undoubtedly, shape our societies.

Open source models such as Meta’s Llama 3, Mistral, and Falcon offer public access to their code or weights, allowing developers to experiment, improve, and deploy them freely.

Closed source models, exemplified by OpenAI’s GPT series, Anthropic’s Claude, or Google’s Gemini, restrict access, keeping architectures and data proprietary.

Such a tension is not merely technical. It embodies two competing visions of knowledge production. One is oriented toward collective benefit and transparency, and the other toward commercial exclusivity and security of intellectual property.

The core question is whether language models should be treated as a global public good or as privately owned technologies governed by corporate rights. The answer to such a question carries implications for innovation, fairness, safety, and even democratic governance.

Innovation and market power in the AI economy

From an economic perspective, open and closed source models represent opposing approaches to innovation. Open models accelerate experimentation and lower entry barriers for small companies, researchers, and governments that lack access to massive computing resources.

They enable localised applications in diverse languages, sectors, and cultural contexts. Their openness supports decentralised innovation ecosystems similar to what Linux did for operating systems.

Closed models, however, maintain higher levels of quality control and often outperform open ones due to the scale of data and computing power behind them. Companies like OpenAI and Google argue that their proprietary control ensures security, prevents misuse, and finances further research.

The closed model thus creates a self-reinforcing cycle. Access to large datasets and computing leads to better models, which attract more revenue, which in turn funds even larger models.

The outcome of that has been the consolidation of AI power within a handful of corporations. Microsoft, Google, OpenAI, Meta, and a few start-ups have become the new gatekeepers of linguistic intelligence.

Such concentration raises concerns about market dominance, competitive exclusion, and digital dependency. Smaller economies and independent developers risk being relegated to consumers of foreign-made AI products, instead of being active participants in the creation of digital knowledge.

As such, open source LLMs represent a counterweight to Big Tech's dominance. They allow local innovation and reduce dependency, especially for countries seeking technological sovereignty.

Yet open access also brings new risks, as the same tools that enable democratisation can be exploited for disinformation, deepfakes, or cybercrime.

Ethical and social aspects of openness

The ethical question surrounding LLMs is not limited to who can use them, but also to how they are trained. Closed models often rely on opaque datasets scraped from the internet, including copyrighted material and personal information.

Without transparency, it is impossible to assess whether training data respects privacy, consent, or intellectual property rights. Open source models, by contrast, offer partial visibility into their architecture and data curation processes, enabling community oversight and ethical scrutiny.

However, we have to keep in mind that openness does not automatically ensure fairness. Many open models still depend on large-scale web data that reproduce existing biases, stereotypes, and inequalities.

Open access also increases the risk of malicious content, such as generating hate speech, misinformation, or automated propaganda. The balance between openness and safety has therefore become one of the most delicate ethical frontiers in AI governance.

Socially, open LLMs can empower education, research, and digital participation. They allow low-resource languages to be modelled, minority groups to build culturally aligned systems, and academic researchers to experiment without licensing restrictions.

They represent a vision of AI as a collaborative human project rather than a proprietary service.

Yet they also redistribute responsibility: when anyone can deploy a powerful model, accountability becomes diffuse. The challenge lies in preserving the benefits of openness while establishing shared norms for responsible use.

The legal and intellectual property dilemma

Intellectual property law was not designed for systems that learn from millions of copyrighted works without direct authorisation.

Closed source developers defend their models as transformative works under fair use doctrines, while content creators demand compensation or licensing mechanisms.

The dispute has already reached courts, as artists, authors, and media organisations sue AI companies for unauthorised use of their material.

Open source further complicates the picture. When model weights are released freely, the question arises of who holds responsibility for derivative works and whether open access violates existing copyrights.

Some open licences now include clauses prohibiting harmful or unlawful use, blurring the line between openness and control. Legal scholars argue that a new framework is needed to govern machine learning datasets and outputs, one that recognises both the collective nature of data and the individual rights embedded in it.

At stake is not only financial compensation but the broader question of data ownership in the digital age. We must ask ourselves: if data is the raw material of intelligence, should it remain the property of a few corporations or be treated as a shared global resource?

Economic equity and access to computational power

Even the most open model requires massive computational infrastructure to train and run effectively. Access to GPUs, cloud resources, and data pipelines remains concentrated among the same corporations that dominate the closed model ecosystem.

Thus, openness in code does not necessarily translate into openness in practice.

Developing nations, universities, and public institutions often lack the financial and technical means to exploit open models at scale. Such an asymmetry creates a form of digital neo-dependency: the code is public, but the hardware is private.

For AI to function as a genuine global public good, investments in open computing infrastructure, public datasets, and shared research facilities are essential. Initiatives such as the EU’s AI-on-demand platform or the UN’s efforts for inclusive digital development reflect attempts to build such foundations.

The economic stakes extend beyond access to infrastructure. LLMs are becoming the backbone of new productivity tools, from customer service bots to automated research assistants.

Whoever controls them will shape the future division of digital labour. Open models could allow local companies to retain more economic value and cultural autonomy, while closed models risk deepening global inequalities.

Governance, regulation, and the search for balance

Governments face a difficult task of regulating a technology that evolves faster than policy. For example, the EU AI Act, US executive orders on trustworthy AI, and China’s generative AI regulations all address questions of transparency, accountability, and safety.

Yet few explicitly differentiate between open and closed models.

The open source community resists excessive regulation, arguing that heavy compliance requirements could suffocate innovation and concentrate power even further in large corporations that can afford legal compliance.

On the other hand, policymakers worry that uncontrolled distribution of powerful models could facilitate malicious use. The emerging consensus suggests that regulation should focus not on the source model itself but on the context of its deployment and the potential harms it may cause.

An additional governance question concerns international cooperation. AI’s global nature demands coordination on safety standards, data sharing, and intellectual property reform.

The absence of such alignment risks a fragmented world where closed models dominate wealthy regions while open ones, potentially less safe, spread elsewhere. Finding equilibrium requires mutual trust and shared principles for responsible innovation.

The cultural and cognitive dimension of openness

Beyond technical and legal debates, the divide between open and closed models reflects competing cultural values. Open source embodies the ideals of transparency, collaboration, and communal ownership of knowledge.

Closed source represents discipline, control, and the pursuit of profit-driven excellence. Both cultures have contributed to technological progress, and both have drawbacks.

From a cognitive perspective, open LLMs can enhance human learning by enabling broader experimentation, while closed ones can limit exploration to predefined interfaces. Yet too much openness may also encourage cognitive offloading, where users rely on AI systems without developing independent judgment.

Therefore, societies must cultivate digital literacy alongside technical accessibility, ensuring that AI supports human reasoning rather than replaces it.

The way societies integrate LLMs will influence how people perceive knowledge, authority, and creativity. When language itself becomes a product of machines, questions about authenticity, originality, and intellectual labour take on new meaning.

Whether open or closed, models shape collective understanding of truth, expression, and imagination for our societies.

Toward a hybrid future

The polarisation between open and closed approaches may prove unsustainable in the long run. A hybrid model is emerging, where partially open architectures coexist with protected components.

Companies like Meta release open weights but restrict commercial use, while others provide APIs for experimentation without revealing the underlying code. Such hybrid frameworks aim to combine accountability with safety and commercial viability with transparency.

The future equilibrium is likely to depend on international collaboration and new institutional models. Public–private partnerships, cooperative licensing, and global research consortia could ensure that LLM development serves both the public interest and corporate sustainability.

A system of layered access (where different levels of openness correspond to specific responsibilities) may become the standard.

Ultimately, the choice between open and closed models reflects humanity’s broader negotiation between collective welfare and private gain.

Just as the internet or many other emerging technologies evolved through the tension between openness and commercialisation, the future of language models will be defined by how societies manage the boundary between shared knowledge and proprietary intelligence.

In conclusion, the debate between open and closed source LLMs is not merely technical.

As we have already mentioned, it embodies the broader conflict between public good and private control, between the democratisation of intelligence and the concentration of digital power.

Open models promote transparency, innovation, and inclusivity, but pose challenges in terms of safety, legality, and accountability. Closed models offer stability, quality, and economic incentive, yet risk monopolising a transformative resource so crucial in our quest for constant human progression.

Finding equilibrium requires rethinking the governance of knowledge itself. Language models should neither be owned solely by corporations nor be released without responsibility. They should be governed as shared infrastructures of thought, supported by transparent institutions and equitable access to computing power.

Only through such a balance can AI evolve as a force that strengthens, rather than divides, our societies and improves our daily lives.
