Gamescom showcases EU support for cultural and digital innovation

The European Commission will meet video game professionals in Cologne on 20 and 21 August, attending Gamescom for the third consecutive year. The visit aims to follow developments in the industry, present the future EU budget, and outline opportunities under the upcoming AgoraEU programme.

EU officials will also discuss AI adoption, new investment opportunities, and ways to protect minors in gaming. Renate Nikolay, Deputy Director-General of DG CONNECT, will deliver a keynote speech and join a panel titled ‘Investment in games – is it finally happening?’.

The European Commission highlights the role of gaming in Europe’s cultural diversity and innovation. Creative Europe MEDIA has already supported nearly 180 projects since 2021. At Gamescom, its booth will feature 79 companies from 24 countries, offering fresh networking opportunities to video game professionals.

The engagement comes just before the release of the second edition of the ‘European Media Industry Outlook’ report. The updated study will provide deeper insights into consumer behaviour and market trends, with a dedicated focus on the video games sector.

Gamescom remains the world’s largest gaming event, with 1,500 exhibitors from 72 nations in 2025. The event celebrates creative and technological achievements, highlighting the industry’s growing importance for Europe’s competitiveness and digital economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Seedbox.AI backs re-training AI models to boost Europe’s competitiveness

Germany’s Seedbox.AI is betting on re-training large language models (LLMs) rather than competing to build them from scratch. Co-founder Kai Kölsch believes this approach could give Europe a strategic edge in AI.

The Stuttgart-based startup adapts models like Google’s Gemini and Meta’s Llama for applications such as medical chatbots and real estate assistants. Kölsch compares Europe’s role in AI to improving a car already on the road, rather than reinventing the wheel.

A significant challenge, however, is access to specialised chips and computing power. The European Union is building an AI factory in Stuttgart, Germany, which Seedbox hopes will expand its capabilities in multilingual AI training.

Kölsch warns that splitting the planned EU gigafactories too widely will limit their impact. He also calls for delaying the AI Act, arguing that regulatory uncertainty discourages established companies from innovating.

Europe’s AI sector also struggles with limited venture capital compared to the United States. Kölsch notes that while the money exists, it is often channelled into safer investments abroad.

Talent shortages compound the problem. Seedbox is hiring, but top researchers are lured by Big Tech salaries, far above what European firms typically offer. Kölsch says talent inevitably follows capital, making EU funding reform essential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK-based ODI outlines vision for EU AI Act and data policy

The Open Data Institute (ODI) has published a manifesto setting out six principles for shaping European Union policy on AI and data. Aimed at supporting policymakers, it aligns with the EU’s upcoming digital reforms, including the AI Act and the review of the bloc’s digital framework.

Although based in the UK, the ODI has previously contributed to EU policymaking, including work on the General-Purpose AI Code of Practice and consultations on the use of health data. The organisation also launched a similar manifesto for UK data and AI policy in 2024.

The ODI states that the EU has a chance to establish a global model of digital governance that prioritises people’s interests. Director of Research Elena Simperl called for robust open data infrastructure, inclusive participation, and independent oversight to build trust, support innovation, and protect values.

Drawing on the EU’s Competitiveness Compass and the Draghi report, the six principles are: data infrastructure, open data, trust, independent organisations, an inclusive data ecosystem, and data skills. The goal is to balance regulation and innovation while upholding rights, values, and interoperability.

The ODI highlights the need to limit bias and inequality, broaden access to data and skills, and support smaller enterprises. It argues that strong governance should be treated like physical infrastructure, enabling competitiveness while safeguarding rights and public trust in the AI era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU targets eight member states over cybersecurity directive implementation delay

Eight EU countries – Ireland, Spain, France, Bulgaria, Luxembourg, the Netherlands, Portugal, and Sweden – have been warned by the European Commission for failing to meet the deadline for implementing the NIS2 Directive.

What is the NIS2 Directive about?

The NIS2 Directive, adopted by the EU in 2022, is an updated legal framework designed to strengthen the cybersecurity and resilience of critical infrastructure and essential services. Essentially, this directive replaces the 2016 NIS Directive, the EU’s first legislation to improve cybersecurity across crucial sectors such as energy, transport, banking, and healthcare. It set baseline security and incident reporting requirements for critical infrastructure operators and digital service providers to enhance the overall resilience of network and information systems in the EU.

With the adoption of the NIS2 Directive, the EU aims to broaden the scope to include not only traditional sectors like energy, transport, banking, and healthcare, but also public administration, space, manufacturing of critical products, food production, postal services, and a wide range of digital service providers.

NIS2 introduces stricter risk management, supply-chain security requirements, and enhanced incident reporting rules, with early warnings due within 24 hours. It increases management accountability, requiring leadership to oversee compliance and undergo cybersecurity training.

It also imposes heavy penalties for violations – up to €10 million or 2% of global annual turnover for essential entities, whichever is higher – and aims to strengthen EU-level cooperation through bodies like ENISA and EU-CyCLONe.
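
To make that ceiling concrete, here is a minimal sketch in Python, assuming the commonly cited reading that the cap for essential entities is the higher of the two figures; the turnover value below is purely illustrative.

    # Minimal sketch (illustrative only): NIS2 fine ceiling for essential entities,
    # read as the higher of EUR 10 million or 2% of global annual turnover.
    def nis2_fine_ceiling(global_annual_turnover_eur: float) -> float:
        fixed_cap = 10_000_000                             # EUR 10 million
        turnover_cap = 0.02 * global_annual_turnover_eur   # 2% of worldwide turnover
        return max(fixed_cap, turnover_cap)

    # Hypothetical firm with EUR 2 billion in annual turnover:
    print(nis2_fine_ceiling(2_000_000_000))  # 40000000.0, i.e. EUR 40 million

For smaller firms the €10 million figure dominates; for large multinationals the 2% share of turnover quickly becomes the binding number.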

Member States were expected to transpose NIS2 into national law by 17 October 2024, making timely compliance preparation critical.

What is a directive?

There are two main types of EU legislation: regulations and directives. Regulations apply automatically and uniformly across all member states once adopted by the EU.

In contrast, directives set specific goals that member states must achieve but leave it up to each country to decide how to implement them, allowing for different approaches based on each member state’s capacities and legal systems.

So, why is there a delay in implementing the NIS2 Directive?

According to Infosecurity Magazine, the delay is due to member states’ implementation challenges, and many companies across the EU are ‘not fully ready to comply with the directive.’ Six critical infrastructure sectors face particular challenges:

  • IT service management, challenged by its cross-border nature and the diversity of entities involved
  • Space, with limited cybersecurity knowledge and heavy reliance on commercial off-the-shelf components
  • Public administrations, which “lack the support and experience seen in more mature sectors”
  • Maritime, facing operational technology-related challenges and needing tailored cybersecurity risk management guidance
  • Health, relying on complex supply chains, legacy systems, and poorly secured medical devices
  • Gas, which must improve incident readiness and response capabilities

The deadline for the implementation was 17 October 2024. In May 2025, the European Commission warned 19 member states about delays, giving them two months to act or risk referral to the Court of Justice of the EU. It remains unclear whether the eight remaining holdouts will face further legal consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU member states clash over the future of encrypted private messaging

The ongoing controversy around the EU’s proposed mandatory scanning of private messages has escalated with the European Parliament intensifying pressure on the Council to reach a formal agreement.

A leaked memo reveals that the Parliament is threatening to block the extension of the current voluntary scanning rules unless mandatory chat control is agreed.

Denmark, leading the EU Council Presidency, has pushed a more stringent version of the so-called Chat Control law that could become binding as soon as 14 October 2025.

While the Parliament argues the law is essential for protecting children online, many legal experts and rights groups warn the proposal still violates fundamental human rights, particularly the right to privacy and secure communication.

The Council’s Legal Service has repeatedly noted that the draft infringes on these rights since it mandates scanning all private communications, undermining end-to-end encryption that most messaging apps rely on.

Some governments, including Germany and Belgium, remain hesitant or opposed, citing these serious concerns.

Supporters like Italy, Spain, and Hungary have openly backed Denmark’s proposal, signalling a shift in political will towards stricter measures. France’s position has also become more favourable, though internal debate continues.

Opponents warn that weakening encryption could open the door to cyber attacks and foreign interference, while proponents emphasise the urgent need to prevent abuse and close loopholes in existing law.

The next Council meeting in September will be critical in shaping the final form of the regulation.

The dispute highlights the persistent tension between digital privacy and security, reflecting broader European challenges in regulating encrypted communications.

As the October deadline approaches, the EU faces a defining moment in balancing child protection against the confidentiality of citizens’ communications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU proposal to scan private messages gains support

The European Union’s ‘Chat Control’ proposal is gaining traction, with 19 member states now supporting a plan to scan all private messages on encrypted apps. If adopted, apps like WhatsApp, Signal, and Telegram would have to scan all messages, photos, and videos on users’ devices before encryption from October.

France, Denmark, Belgium, Hungary, Sweden, Italy, and Spain back the measure, while Germany has yet to decide. The proposal could pass by mid-October under the EU’s qualified majority voting system if Germany joins.
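
Germany’s weight matters because of the Council’s double-majority rule: a qualified majority needs at least 55% of member states that together represent at least 65% of the EU population. The minimal sketch below illustrates that check; the thresholds are the standard Council voting rules, while the inputs are placeholders rather than actual vote counts or population figures.

    # Sketch of the Council's qualified-majority (double-majority) test:
    # at least 55% of member states AND at least 65% of the EU population.
    def qualified_majority(states_in_favour: int, total_states: int,
                           population_share_in_favour: float) -> bool:
        enough_states = states_in_favour >= 0.55 * total_states
        enough_population = population_share_in_favour >= 0.65
        return enough_states and enough_population

    # Placeholder inputs: 19 of 27 states clears the state threshold (~70%),
    # so the outcome hinges on whether supporters also reach 65% of the population.
    print(qualified_majority(19, 27, 0.60))  # False until the population share passes 0.65

That population leg is why a single large member state such as Germany can tip the balance.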

The initiative aims to prevent child sexual abuse material (CSAM) but has sparked concerns over mass surveillance and the erosion of digital privacy.

In addition to scanning, the proposal would introduce mandatory age verification, which could remove anonymity on messaging platforms. Critics argue the plan amounts to real-time surveillance of private conversations and threatens fundamental freedoms.

Telegram founder Pavel Durov recently warned of societal collapse in France due to censorship and regulatory pressure. He disclosed attempts by French officials to censor political content on his platform, which he refused to comply with.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US urges Asia-Pacific to embrace open AI innovation over strict regulation

A senior White House official has urged Asia-Pacific economies to support an AI future built on US technology, warning against adopting Europe’s heavily regulated model. Michael Kratsios made the remarks during the APEC Digital and AI Ministerial Meeting in Incheon.

Kratsios said countries must now choose between embracing American-led innovation and falling behind under regulatory burdens. He framed the US approach as one driven by freedom and open-source innovation rather than centralised control.

The US is offering South Korea partnerships that respect data concerns while enabling shared progress. Kratsios noted that open-weight models could soon shape industry standards worldwide.

He met South Korea’s science minister in bilateral talks to discuss AI cooperation. The US reaffirmed its commitment to supporting nations in building trustworthy AI systems based on mutual economic benefit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creative industries raise concerns over the EU AI Act

Organisations representing creative sectors have issued a joint statement expressing concerns over the current implementation of the EU AI Act, particularly its provisions for general-purpose AI systems.

The response focuses on recent documents, including the General Purpose AI Code of Practice, accompanying guidelines, and the template for training data disclosure under Article 53.

The signatories, drawn from music and broader creative industries, said they had engaged extensively throughout the consultation process. They now argue that the outcomes do not fully reflect the issues raised during those discussions.

According to the statement, the result does not provide the level of intellectual property protection that some had expected from the regulation.

The group has called on the European Commission to reconsider the implementation package and is encouraging the European Parliament and member states to review the process.

The original EU AI Act was widely acknowledged as a landmark regulation, with technology firms and creative industries closely watching its rollout across member countries.

Elsewhere, Google confirmed that it will sign the General-Purpose AI Code of Practice. The company said the latest version supports Europe’s broader innovation goals more effectively than earlier drafts, but it also noted ongoing concerns.

These include the potential impact of specific requirements on competitiveness and handling trade secrets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DW Weekly #223 – AI race heats up: The US AI Action Plan, China’s push for a global AI cooperation organisation, and the EU’s regulatory response


25 July – 1 August 2025



Dear readers,

Over the past week, the White House has launched a sweeping AI initiative through its new publication Winning the Race: America’s AI Action Plan, an ambitious strategy to dominate global AI leadership by promoting open-source technology and streamlining regulatory frameworks. America’s ‘open-source gambit’, analysed in detail by Dr Jovan Kurbalija in Diplo’s blog, signals a significant shift in digital policy, intending to democratise AI innovation to outpace competitors, particularly China.

Supporting this bold direction, major tech giants have endorsed President Trump’s AI deregulation plans, despite widespread public concerns regarding potential societal impacts. Trump’s policies notably include an explicit push for ‘anti-woke’ AI frameworks within US government contracts, raising contentious debates about the ideological neutrality and ethical implications of AI systems in governance.

In parallel, China has responded with its own global AI governance plan, proposing the establishment of an international AI cooperation organisation to enhance worldwide coordination and standard-setting. The result is an escalating AI governance competition between the two technological superpowers, each advocating a distinctly different vision for the future of global AI development.

On the multilateral stage, the UN’s Economic and Social Council (ECOSOC) adopted a resolution: ‘Assessment of the progress made in the implementation of and follow-up to the outcomes of the World Summit on the Information Society’, through the Commission on Science and Technology for Development (CSTD), reaffirming commitments to implement the outcomes of the World Summit on the Information Society (WSIS).

Corporate strategies have also reflected these geopolitical undercurrents. Samsung Electronics has announced a landmark $16.5 billion chip manufacturing deal with Tesla, generating optimism about Samsung’s capability to revive its semiconductor foundry business. Yet, execution risks remain substantial, prompting Samsung’s Chairman Jay Y. Lee to promptly travel to Washington to solidify bilateral trade relations and secure the company’s position amid potential trade tensions.

Similarly, Nvidia has placed a strategic order for 300,000 chipsets from Taiwanese giant TSMC, driven by robust Chinese demand and shifting US trade policies.

Meanwhile, the EU has intensified regulatory scrutiny, accusing e-commerce platform Temu of failing mandatory Digital Services Act (DSA) checks, citing serious risks related to counterfeit and unsafe goods.

In the USA, similar scrutiny arose as Senator Maggie Hassan urged Elon Musk to take decisive action against Southeast Asian criminal groups using Starlink services to defraud American citizens.

Finally, the EU’s landmark AI Act commenced its implementation phase this week, despite considerable pushback from tech firms concerned about regulatory compliance burdens.

Diplo Blog – The open-source gambit: How America plans to outpace AI rivals by democratising tech

On 23 July, the US unveiled an AI Action Plan featuring 103 recommendations focused on winning the AI race against China. Key themes include promoting open-source AI to establish global standards, reducing regulations to support tech firms, and emphasising national security. The plan addresses labour displacement, AI biases, and cybersecurity threats, advocating for reskilling workers and maintaining tech leadership through private sector flexibility. Additionally, it aims to align US allies within an AI framework while expressing scepticism toward multilateral regulations. Overall, the plan positions open-source AI as a strategic asset amid geopolitical competition. Read the full blog!

For the main updates, reflections and events, consult the RADAR, the READING CORNER and the UPCOMING EVENTS section below.

Join us as we connect the dots, from daily updates to main weekly developments, to bring you a clear, engaging monthly snapshot of worldwide digital trends.

DW Team


RADAR

Highlights from the week of 25 July – 1 August 2025


Worries rise as many free VPNs exploit users or carry hidden malware.


From December, YouTube must block accounts for Australians under 16 or face massive fines.


Belarusian and Ukrainian hackers claim responsibility for strategic cyber sabotage of Aeroflot.


A NATO policy brief warns that civilian ports across Europe face increasing cyber threats from state-linked actors and calls for updated maritime strategies to strengthen cybersecurity and civil–military coordination.


AGCM says Meta may have harmed competition by embedding AI features into WhatsApp.


The EU AI Code could add €1.4 trillion to Europe’s economy, Google says.


Tether and Circle dominate the fiat-backed stablecoin market, now valued at over $227 billion combined.


Brussels updates Microsoft terms to curb risky data transfers


AI use in schools is weakening the connection between students and teachers by permitting students to bypass genuine effort through shortcuts.


Use of AI surveillance, including monitoring software, intensifies burnout, feelings of being micromanaged, and disengagement.


A majority of Fortune 500 companies now mention AI in their annual reports as a risk factor instead of citing its benefits.


The platforms lost more than $3.1 billion in the first half of 2025, with AI-powered hacks and phishing scams leading the surge.


AI jobs now span marketing, finance, and HR—not just tech.


Google and Microsoft lead investment in advanced AI and quantum infrastructure.


READING CORNER

On 23 July, the US unveiled an AI Action Plan featuring 103 recommendations focused on winning the AI race against China. Key themes include promoting open-source AI to establish global standards, reducing regulations to support tech firms, and emphasising national security.


Tracking technologies shape our online experience in ways that are often invisible yet profoundly impactful, raising important questions about transparency, control, and accountability in the digital age.

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global turnover. Smaller firms may face reduced penalties, but enforcement will vary by country.

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!