EU conference highlights the need for collaboration in digital safety and growth

European politicians and experts gathered in Billund for the conference ‘Towards a Safer and More Innovative Digital Europe’, hosted by the Danish Parliament.

The discussions centred on how to protect citizens online while strengthening Europe’s technological competitiveness.

Lisbeth Bech-Nielsen, Chair of the Danish Parliament’s Digitalisation and IT Committee, stated that the event demonstrated the need for the EU to act more swiftly to harness its collective digital potential.

She emphasised that only through cooperation and shared responsibility can the EU match the pace of global digital transformation and fully benefit from its combined strengths.

The first theme addressed online safety and responsibility, focusing on the enforcement of the Digital Services Act, child protection, and the accountability of e-commerce platforms importing products from outside the EU.

Participants highlighted the importance of listening to young people and improving cross-border collaboration between regulators and industry.

The second theme examined Europe’s competitiveness in emerging technologies such as AI and quantum computing. Speakers called for more substantial investment, harmonised digital skills strategies, and better support for businesses seeking to expand within the single market.

The Billund conference emphasised that Europe’s digital future depends on striking a balance between safety, innovation, and competitiveness, a balance that can only be achieved through joint action and long-term commitment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The rise of large language models and the question of ownership

The divide defining AI’s future through large language models

What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate various types of content, including human-like text, images, audio, video, and more.

The development of these large language models has reshaped AI from a specialised field into a social, economic, and political phenomenon. Systems such as GPT, Claude, Gemini, and Llama have become fundamental infrastructure for information processing, creative work, and automation.

Their rapid rise has generated an intense debate about who should control the most powerful linguistic tools ever built.

The distinction between open source and closed source models has become one of the defining divides in contemporary technology, one that will undoubtedly shape our societies.


Open source models such as Meta’s Llama 3, Mistral, and Falcon offer public access to their code or weights, allowing developers to experiment, improve, and deploy them freely.
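To make that openness concrete, the short Python sketch below shows how a developer might download and run an open-weights model locally with the widely used Hugging Face transformers library. It is a minimal sketch under assumptions: the model identifier is illustrative, and many open models require accepting a licence on the Hugging Face Hub before the weights can be downloaded.

```python
# Minimal sketch: running an open-weights model locally with Hugging Face transformers.
# The model identifier is illustrative; many open models require accepting a licence
# on the Hugging Face Hub before the weights can be downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weights model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # weights stored on local disk

inputs = tokenizer(
    "Summarise the EU Digital Services Act in one sentence.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The practical consequence is that the weights sit on the developer’s own hardware, so the model can be inspected, fine-tuned, or deployed without the provider’s ongoing involvement.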

Closed source models, exemplified by OpenAI’s GPT series, Anthropic’s Claude, or Google’s Gemini, restrict access, keeping architectures and data proprietary.

This tension is not merely technical. It embodies two competing visions of knowledge production: one oriented toward collective benefit and transparency, the other toward commercial exclusivity and the protection of intellectual property.

The core question is whether language models should be treated as a global public good or as privately owned technologies governed by corporate rights. The answer to such a question carries implications for innovation, fairness, safety, and even democratic governance.

Innovation and market power in the AI economy

From an economic perspective, open and closed source models represent opposing approaches to innovation. Open models accelerate experimentation and lower entry barriers for small companies, researchers, and governments that lack access to massive computing resources.

They enable localised applications across diverse languages, sectors, and cultural contexts. Their openness supports decentralised innovation ecosystems, much as Linux did for operating systems.

Closed models, however, maintain higher levels of quality control and often outperform open ones due to the scale of data and computing power behind them. Companies like OpenAI and Google argue that their proprietary control ensures security, prevents misuse, and finances further research.

The closed model thus creates a self-reinforcing cycle. Access to large datasets and computing leads to better models, which attract more revenue, which in turn funds even larger models.

The outcome has been the consolidation of AI power within a handful of corporations. Microsoft, Google, OpenAI, Meta, and a few start-ups have become the new gatekeepers of linguistic intelligence.


Such concentration raises concerns about market dominance, competitive exclusion, and digital dependency. Smaller economies and independent developers risk being relegated to consumers of foreign-made AI products, instead of being active participants in the creation of digital knowledge.

As such, open source LLMs represent a counterweight to Big Tech’s dominance. They allow local innovation and reduce dependency, especially for countries seeking technological sovereignty.

Yet open access also brings new risks, as the same tools that enable democratisation can be exploited for disinformation, deepfakes, or cybercrime.

Ethical and social aspects of openness

The ethical question surrounding LLMs is not limited to who can use them, but also to how they are trained. Closed models often rely on opaque datasets scraped from the internet, including copyrighted material and personal information.

Without transparency, it is impossible to assess whether training data respects privacy, consent, or intellectual property rights. Open source models, by contrast, offer partial visibility into their architecture and data curation processes, enabling community oversight and ethical scrutiny.

However, we have to keep in mind that openness does not automatically ensure fairness. Many open models still depend on large-scale web data that reproduce existing biases, stereotypes, and inequalities.

Open access also increases the risk of malicious use, such as generating hate speech, misinformation, or automated propaganda. The balance between openness and safety has therefore become one of the most delicate ethical frontiers in AI governance.

Socially, open LLMs can empower education, research, and digital participation. They allow low-resource languages to be modelled, minority groups to build culturally aligned systems, and academic researchers to experiment without licensing restrictions.


They represent a vision of AI as a collaborative human project rather than a proprietary service.

Yet they also redistribute responsibility: when anyone can deploy a powerful model, accountability becomes diffuse. The challenge lies in preserving the benefits of openness while establishing shared norms for responsible use.

The legal and intellectual property dilemma

Intellectual property law was not designed for systems that learn from millions of copyrighted works without direct authorisation.

Closed source developers defend their models as transformative works under fair use doctrines, while content creators demand compensation or licensing mechanisms.


The dispute has already reached courts, as artists, authors, and media organisations sue AI companies for unauthorised use of their material.

Open source further complicates the picture. When model weights are released freely, the question arises of who holds responsibility for derivative works and whether open access violates existing copyrights.

Some open licences now include clauses prohibiting harmful or unlawful use, blurring the line between openness and control. Legal scholars argue that a new framework is needed to govern machine learning datasets and outputs, one that recognises both the collective nature of data and the individual rights embedded in it.

At stake is not only financial compensation but the broader question of data ownership in the digital age. We need to ask ourselves: if data is the raw material of intelligence, should it remain the property of a few corporations or be treated as a shared global resource?

Economic equity and access to computational power

Even the most open model requires massive computational infrastructure to train and run effectively. Access to GPUs, cloud resources, and data pipelines remains concentrated among the same corporations that dominate the closed model ecosystem.

Thus, openness in code does not necessarily translate into openness in practice.

Developing nations, universities, and public institutions often lack the financial and technical means to exploit open models at scale. Such an asymmetry creates a form of digital neo-dependency: the code is public, but the hardware is private.

For AI to function as a genuine global public good, investments in open computing infrastructure, public datasets, and shared research facilities are essential. Initiatives such as the EU’s AI-on-demand platform or the UN’s efforts for inclusive digital development reflect attempts to build such foundations.


The economic stakes extend beyond access to infrastructure. LLMs are becoming the backbone of new productivity tools, from customer service bots to automated research assistants.

Whoever controls them will shape the future division of digital labour. Open models could allow local companies to retain more economic value and cultural autonomy, while closed models risk deepening global inequalities.

Governance, regulation, and the search for balance

Governments face the difficult task of regulating a technology that evolves faster than policy. The EU AI Act, US executive orders on trustworthy AI, and China’s generative AI regulations all address questions of transparency, accountability, and safety.

Yet few explicitly differentiate between open and closed models.

The open source community resists excessive regulation, arguing that heavy compliance requirements could suffocate innovation and concentrate power even further in large corporations that can afford legal compliance.

On the other hand, policymakers worry that the uncontrolled distribution of powerful models could facilitate malicious use. The emerging consensus suggests that regulation should focus not on whether a model is open or closed, but on the context of its deployment and the potential harms it may cause.

An additional governance question concerns international cooperation. AI’s global nature demands coordination on safety standards, data sharing, and intellectual property reform.

The absence of such alignment risks a fragmented world where closed models dominate wealthy regions while open ones, potentially less safe, spread elsewhere. Finding equilibrium requires mutual trust and shared principles for responsible innovation.

The cultural and cognitive dimension of openness

Beyond technical and legal debates, the divide between open and closed models reflects competing cultural values. Open source embodies the ideals of transparency, collaboration, and communal ownership of knowledge.

Closed source represents discipline, control, and the pursuit of profit-driven excellence. Both cultures have contributed to technological progress, and both have drawbacks.

From a cognitive perspective, open LLMs can enhance human learning by enabling broader experimentation, while closed ones can limit exploration to predefined interfaces. Yet too much openness may also encourage cognitive offloading, where users rely on AI systems without developing independent judgment.


Therefore, societies must cultivate digital literacy alongside technical accessibility, ensuring that AI supports human reasoning rather than replaces it.

The way societies integrate LLMs will influence how people perceive knowledge, authority, and creativity. When language itself becomes a product of machines, questions about authenticity, originality, and intellectual labour take on new meaning.

Whether open or closed, these models shape our societies’ collective understanding of truth, expression, and imagination.

Toward a hybrid future

The polarisation between open and closed approaches may prove unsustainable in the long run. A hybrid model is emerging, in which partially open architectures coexist with protected components.

Companies like Meta release open weights but restrict commercial use, while others provide APIs for experimentation without revealing the underlying code. Such hybrid frameworks aim to combine accountability with safety and commercial viability with transparency.
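API-only access looks quite different in practice: the developer sends a prompt to a hosted endpoint and receives a completion, while the weights never leave the provider’s infrastructure. The sketch below is a minimal illustration assuming the OpenAI Python SDK and an API key set in the environment; the model name is illustrative.

```python
# Minimal sketch: querying a closed model through a hosted API.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment; the model name is illustrative. The weights stay on the provider's servers.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Explain open vs closed model access in one sentence."}
    ],
)
print(response.choices[0].message.content)
```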

The future equilibrium is likely to depend on international collaboration and new institutional models. Public–private partnerships, cooperative licensing, and global research consortia could ensure that LLM development serves both the public interest and corporate sustainability.

A system of layered access (where different levels of openness correspond to specific responsibilities) may become the standard.


Ultimately, the choice between open and closed models reflects humanity’s broader negotiation between collective welfare and private gain.

Just as the internet or many other emerging technologies evolved through the tension between openness and commercialisation, the future of language models will be defined by how societies manage the boundary between shared knowledge and proprietary intelligence.

In conclusion, the debate between open and closed source LLMs is not merely technical.

It embodies the broader conflict between public good and private control, between the democratisation of intelligence and the concentration of digital power.

Open models promote transparency, innovation, and inclusivity, but pose challenges in terms of safety, legality, and accountability. Closed models offer stability, quality, and economic incentives, yet risk monopolising a resource crucial to human progress.

Finding equilibrium requires rethinking the governance of knowledge itself. Language models should neither be owned solely by corporations nor be released without responsibility. They should be governed as shared infrastructures of thought, supported by transparent institutions and equitable access to computing power.

Only through such a balance can AI evolve as a force that strengthens, rather than divides, our societies and improves our daily lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Major crypto fraud network dismantled across Europe

European authorities have dismantled one of the continent’s largest cryptocurrency fraud and money laundering schemes, arresting nine suspects across Cyprus, Spain, and Germany. The network allegedly defrauded hundreds of investors through fake crypto platforms, stealing over €600 million.

The scammers reportedly created websites that mimicked legitimate trading platforms, luring victims through social media, cold calls, and fabricated celebrity endorsements. Once deposits were made, the funds were laundered through blockchain technology, making recovery nearly impossible.

During the operation, investigators seized €800,000 in bank accounts, €415,000 in cryptocurrencies, €300,000 in cash, and luxury watches worth over €100,000. Authorities stated that several properties linked to the network remain under evaluation as investigations continue.

French prosecutors said the suspects face fraud and money laundering charges, carrying sentences of up to ten years. The case underscores the growing cross-border nature of crypto-related crime, with Eurojust’s coordination proving key to dismantling the network.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare chief warns AI is redefining the internet’s business model

AI is inserting itself between companies and customers, Cloudflare CEO Matthew Prince warned in Toronto. More people ask chatbots before visiting sites, dulling brands’ impact. Even research teams lose revenue as investors lean on AI summaries.

Frontier models devour data, pushing firms to chase exclusive sources. Cloudflare lets publishers block unpaid crawlers to reclaim control and compensation. The bigger question, said Prince, is which business model will rule an AI-mediated internet.

Policy scrutiny focuses on platforms that blend search with AI collection. Prince urged governments to separate Google’s search access from AI crawling to level the field. Countries that enforce a split could attract publishers and researchers seeking predictable rules and payment.

Licensing deals with news outlets, Reddit, and others coexist with scraping disputes and copyright suits. Google says it follows robots.txt, yet testimony indicated AI Overviews can use content blocked by robots.txt for training. Vague norms risk eroding incentives to create high-quality online content.

A practical near-term playbook combines technical and regulatory steps. Publishers should meter or block AI crawlers that do not pay. Policymakers should require transparency, consent, and compensation for high-value datasets, guiding the shift to an AI-mediated web that still rewards creators.
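On the technical side, the most basic instrument publishers already have is robots.txt, the file in which a site declares which crawlers may fetch which paths. The Python sketch below, using only the standard library, shows how a well-behaved crawler would check those rules before fetching a page; the site URL and user-agent names are illustrative, and compliance with robots.txt remains voluntary on the crawler’s side.

```python
# Minimal sketch: how a well-behaved crawler checks a publisher's robots.txt
# before fetching a page. The site URL and user-agent names are illustrative;
# honouring robots.txt is voluntary on the crawler's side.
import urllib.robotparser

robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://example-publisher.com/robots.txt")
robots.read()  # downloads and parses the robots.txt file

for agent in ("GPTBot", "Googlebot", "ExampleAICrawler"):
    allowed = robots.can_fetch(agent, "https://example-publisher.com/articles/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```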

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU invests €2.9 billion to drive net-zero industrial transformation

The European Commission has approved €2.9 billion in funding for 61 large-scale net-zero technology projects, marking one of the EU’s most significant investments in clean innovation to date.

Financed through revenues from the EU Emissions Trading System, the initiative aims to accelerate Europe’s path towards climate neutrality by 2050.

The selected projects cover 19 industrial sectors across 18 Member States and target areas such as renewable energy, energy storage, zero-emission mobility, and industrial carbon management.

Collectively, they are expected to cut more than 220 million tonnes of CO₂ over the next decade, reinforcing Europe’s global leadership in sustainable technologies and reducing its reliance on imports.

Funded under the Innovation Fund, which draws on an estimated €40 billion in ETS revenues, the initiative highlights the EU’s industrial readiness for decarbonisation. The latest call attracted 359 applications requesting €21.7 billion in support, underscoring the rapid growth of the continent’s cleantech sector.

Commissioner Wopke Hoekstra described the announcement as proof that the EU is turning its climate ambitions into industrial reality, creating green jobs and strengthening economic resilience. The next round of Innovation Fund calls will open in December 2025.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers classifying ChatGPT as a search engine under the DSA. What are the implications?

The European Commission is pondering whether OpenAI’s ChatGPT should be designated as a ‘Very Large Online Search Engine’ (VLOSE) under the Digital Services Act (DSA), a move that could reshape how generative AI tools are regulated across Europe.

OpenAI recently reported that ChatGPT’s search feature reached 120.4 million monthly users in the EU over the past six months, well above the 45 million threshold that triggers stricter obligations for major online platforms and search engines. The Commission confirmed it is reviewing the figures and assessing whether ChatGPT meets the criteria for designation.

The key question is whether ChatGPT’s live search function should be treated as an independent service or as part of the chatbot as a whole. Legal experts note that the DSA applies to intermediary services such as hosting platforms or search engines, categories that do not neatly encompass generative AI systems.

Implications for OpenAI

If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks.

‘As part of mitigation measures, OpenAI may need to adapt ChatGPT’s design, features, and functionality,’ said Laureline Lemoine of AWO. ‘Compliance could also slow the rollout of new tools in Europe if risk assessments aren’t planned in advance.’

The company could also face new data-sharing obligations under Article 40 of the DSA, allowing vetted researchers to request information about systemic risks and mitigation efforts, potentially extending to model data or training processes.

A test case for AI oversight

Legal scholars say the decision could set a precedent for generative AI regulation across the EU. ‘Classifying ChatGPT as a VLOSE will expand scrutiny beyond what’s currently covered under the AI Act,’ said Natali Helberger, professor of information law at the University of Amsterdam.

Experts warn the DSA would shift OpenAI from voluntary AI-safety frameworks and self-defined benchmarks to binding obligations, moving beyond narrow ‘bias tests’ to audited systemic-risk assessments, transparency and mitigation duties. ‘The DSA’s due diligence regime will be a tough reality check,’ said Mathias Vermeulen, public policy director at AWO.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NYPD sued over Microsoft-linked surveillance system

The New York Police Department is facing a lawsuit from the Surveillance Technology Oversight Project (S.T.O.P.), which accuses it of running an invasive citywide surveillance network built with Microsoft technology.

The system, known as the Domain Awareness System (DAS), has operated since 2012 and connects more than a dozen surveillance tools, including video cameras, biometric scanners, license plate readers, and financial analytics, into one centralised network. According to court filings, the system collects location data, social media activity, vehicle information, and even banking details to create ‘digital profiles’ of millions of residents.

S.T.O.P. argues that the network captures and stores data on all New Yorkers, including those never suspected of a crime, amounting to a ‘web of surveillance’ that violates constitutional rights. The group says newly obtained records show that DAS integrates citywide cameras, 911 and 311 call logs, police databases, and feeds from drones and helicopters into a single monitoring platform.

Calling DAS ‘an unprecedented violation of American life’, the organisation has asked the US District Court for the Southern District of New York to declare the city’s surveillance practices unconstitutional.

This is not the first time Microsoft’s technology has drawn scrutiny this year over data tracking and storage: its recently announced ‘Recall’ feature also raised alarm over potential privacy issues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Former Meta lobbyist’s appointment to Irish data watchdog triggers conflict-of-interest complaint

Rights group the Irish Council for Civil Liberties (ICCL) has asked the European Commission to review Ireland’s appointment of former Meta lobbyist Niamh Sweeney to the Data Protection Commission (DPC), alleging the process breaches EU rules on independent regulators. ICCL argues the law requires authorities to be ‘above any suspicion of partiality’.

Sweeney, appointed on 25 September, is now one of three commissioners. Her profile shows roles at Meta from 2015 to 2021, including leading WhatsApp public policy across Europe, Africa, and the Middle East. Before that, she lobbied for Facebook in Ireland. ICCL also notes that the five-member panel that selected Sweeney included Leo Moore, a lawyer whose clients include major tech and social media firms and who, according to ICCL, was the only panellist with data-protection expertise.

The Commission said it is ‘not empowered to take action with respect to appointments’, indicating the complaint may fall outside its remit. This latest development comes amid growing scrutiny of the DPC. In a previous case on Meta’s behavioural advertising practices, the European Data Protection Board overturned the DPC’s decision not to impose a fine and ordered stricter enforcement measures against the tech giant.

This move is the latest in a series of complaints against the independence of the DPC. More than 40 civil society organisations asked the European Commission to investigate Ireland’s privacy regulator earlier this month.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!


China outlines plan to expand high-tech industries

China has pledged to expand its high-tech industries over the next decade. Officials said emerging sectors such as quantum computing, hydrogen energy, nuclear fusion, and brain-computer interfaces will receive major investment and policy backing.

Development chief Zheng Shanjie told reporters that the coming decade will redefine China’s technology landscape, describing it as a ‘new scale’ of innovation. The government views breakthroughs in science and AI as key to boosting economic resilience amid a slowing property market and demographic decline.

The plan underscores Beijing’s push to rival Washington in cutting-edge technology, with billions already channelled into state-led innovation programmes. Public opinion in Beijing appears supportive, with many citizens expressing optimism that China could lead the next technological revolution.

Economists warn, however, that sustained progress will require tackling structural issues, including low domestic consumption and reduced investor confidence. Analysts said Beijing’s long-term success will depend on whether it can balance rapid growth with stable governance and transparent regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Big Tech ramps up Brussels lobbying as EU considers easing digital rules

Tech firms now spend a record €151 million a year on lobbying at EU institutions, up from €113 million in 2023, according to transparency-register analysis by Corporate Europe Observatory and LobbyControl.

Spending is concentrated among US giants. The ten biggest tech companies, including Meta, Microsoft, Apple, Amazon, Qualcomm and Google, together outspend the top ten in pharma, finance and automotive. Meta leads with a budget above €10 million.

An estimated 890 full-time lobbyists now work to influence tech policy in Brussels, up from 699 in 2023, with 437 holding European Parliament access badges. In the first half of 2025, companies declared 146 meetings with the Commission and 232 with MEPs, with artificial intelligence regulation and the industry code of practice frequently on the agenda.

As industry pushes back on the Digital Markets Act and Digital Services Act and the Commission explores the ‘simplification’ of EU rulebooks, lobbying transparency campaigners fear a rollback of the progress made in regulating the digital sector. Companies, for their part, argue that lobbying helps lawmakers grasp complex markets and assess impacts on innovation and competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!