Macron calls for investment and simplified AI rules

At the AI summit in Paris, French President Emmanuel Macron announced that Europe would reduce regulations to foster the growth of AI in the region. He called for more investment, particularly in France, and highlighted the importance of simplifying rules to stay competitive globally. Macron drew comparisons to the rapid reconstruction of the Notre-Dame cathedral, stating that a similar streamlined approach would be adopted for AI and data centre projects across Europe.

European Union digital chief Henna Virkkunen echoed Macron’s comments, promising to cut red tape and implement business-friendly policies. With the US pushing ahead with lighter AI regulations, there is increasing pressure on Europe to follow suit. Sundar Pichai, CEO of Alphabet, emphasised the need for more ecosystems of AI innovation, similar to the one emerging in France. The EU had previously passed the AI Act, which is the world’s first comprehensive set of AI regulations, but many at the summit urged a more flexible approach.

At the summit, France announced a major push for AI investment, including €109 billion from the private sector, and the launch of the Current AI partnership. This initiative, backed by countries like France and Germany, aims to ensure AI remains inclusive and sustainable. However, not all voices at the summit supported reducing regulations. Concerns were raised about the potential risks of weakening safeguards, particularly for workers whose jobs might be affected by AI advancements.

For more information on these topics, visit diplomacy.edu.

Data Protection Day 2025: A new mandate for data protection

This analysis provides a detailed summary of Data Protection Day, covering the most relevant points from each session. The event welcomed attendees to Brussels, as well as virtually, to celebrate Data Protection Day 2025 together.

The tightly packed programme kicked off with opening remarks by the Secretary General of the European Data Protection Supervisor (EDPS), followed by a day of panels, speeches, and side sessions from some of the brightest minds in the data protection field.

Keynote speech by Leonardo Cervera Navas

Given the recent political turmoil in the EU, specifically the annulment of the Romanian elections a few months ago, it was no surprise that the first keynote speech addressed how algorithms are used to destabilise and threaten democracies. Cervera Navas explained how third-country algorithms are deployed against EU democracies to target their values.

He then discussed the significant power imbalance that arises when a handful of wealthy individuals and their companies dominate the tech world, often violating our privacy in the process. He nevertheless struck a hopeful note, arguing that the crisis in Europe is making Europeans stronger. ‘Our values are what unite us, and part of them are the data protection values the EDPB strongly upholds’, he emphasised.

He acknowledged the evident overlap of rules and regulations between different legal instruments but also highlighted the creation of tools that can help uphold our privacy, such as the Digital Clearing House 2.0.

Organiser’s panel moderated by Kait Bolongaro

This panel discussed a wide variety of data protection topics, such as the developments on the ground, how international cooperation played a role in the fight against privacy violations, and what each panellist’s priorities were for the upcoming years. That last question was especially interesting to hear given the professional affiliations of each panellist.

What is interesting about these panels is the care the organisers took in curating diverse line-ups, drawing on academia, private industry, public bodies, and the EDPS itself. This ensures that a panel’s topic is discussed from more than one point of view, which makes for a far more engaging discussion.

Wojciech Wiewiorowski, the current European Data Protection Supervisor, reminded us of the important role that data protection authorities (DPAs) play in the effective enforcement of the GDPR. Matthias Kloth, Head of Digital Governance and Sport at the CoE, offered a broader perspective. As his work centres on the modernised Convention 108, now known as Convention 108+, he shed some light on the effort to update past laws and bring them into the modern age.

Regarding international cooperation, each panellist had their own take on how to facilitate and streamline it. Wiewiorowski stated that data has no borders and that cooperation with everyone is needed as a global effort. However, he cautioned that, in the age of cooperation, we cannot settle for the ‘lowest common denominator level of protection’.

Jo Pierson, Professor at the Vrije Universiteit Brussel and Hasselt University, said that international cooperation is very challenging, noting that a country’s values may change overnight and citing Trump’s recent re-election victory as an example.

Audience questions

A member of the audience posed a very relevant question regarding the legal field as a whole.
He asked the panellists what they thought of the fact that enforcing one’s rights is a difficult and costly process. To provide context, he explained that a person must be legally literate and bear their own costs to litigate or file an appeal.

Wiewiorowski of the EDPS pointed out that changing the procedural rules of the GDPR is not feasible to tackle this issue. There is the option for small-scale procedural amendments, but he does not foresee the GDPR being opened up in the coming years.

However, Pierson had a more practical take on the matter, suggesting that this is where individuals and civil society organisations can join forces. Individuals can approach organisations such as noyb, Privacy International, and EDRi for help or advice. But that raises the question: on whose shoulders should this burden rest?

One last question from the audience concerned DeepSeek, the new Chinese AI model recently dropped onto the market. The panellists were asked whether it is an enemy or a friend to Europeans. Each avoided calling it either, but they found common ground on the need for international cooperation and on the view that an open-source AI is not a bad thing if it can be trained by Europeans.

The last remark on this panel was Wiewiorowski’s comment on Chinese AI, which he compared to ‘Sputnik Day’ (recalling the 1950s space race between the United States and the USSR). Are we facing a new technological gap? Will non-Western allies and foes beat us in this digital arms race?

Data protection in a changing world: What lies ahead? Moderated by Anna Buchta

This session also posed a series of interesting questions to high-profile panellists. The range of the panel was impressive, bringing together views from the European Commission, the Polish Minister of Digital Affairs, the European Parliament, the UK’s Information Commissioner, and DIGITALEUROPE.

Marina Kaljurand of the LIBE committee stood out for her passion for cyber matters. She revealed that many people in the European Parliament are not tech literate, while others are extremely well-versed in how the technology is used. There seems to be a significant information asymmetry within the European Parliament that needs to be addressed if members are to vote on digital regulations.

She gave an important overview of the state of data transfers with the UK and the USA. The UK benefits from an adequacy decision that has raised multiple red flags in the European Parliament and is set to expire in June 2025.

The future of data transfers with the UK is very uncertain. As for the USA, she anticipated difficult times, as the actions of the recently re-elected President Trump are degrading US-EU relations. On the child sexual abuse material regulation, she stressed how important it is to protect children: the debate is not about whether to protect them, but about how.

The currently proposed regulation risks intruding too far on privacy, yet alternatives for protecting children are difficult to find. This reflects how hard regulating can be even when everyone at the table shares the same goal.

Irena Moozova, Deputy Director-General of DG JUST at the European Commission, said her priorities for the coming years are to cut red tape, simplify guidelines for businesses, and support compliance efforts by small and medium-sized enterprises. She mentioned the public consultation phases for the upcoming Digital Fairness Act, to be held this summer.

John Edwards, the UK Information Commissioner, highlighted the transformative impact of emerging technologies, particularly Chinese AI, and how disruptive innovations can rapidly reshape markets. He discussed the ICO’s evolving strategies, noting their alignment with ideas shared by other experts. The organisation’s focus for the next two years includes key areas such as AI’s role in biometrics and tracking, as well as safeguarding children’s privacy. To address these priorities, the ICO has published an online tracking strategy and conducted research on children’s data privacy, including the development of systems tailored to protect young users.

Alberto Di Felice, Legal Counsel to DIGITALEUROPE, stressed the importance of simplifying regulations, repeatedly stating that there is too much bureaucracy and too many actors involved. For example, a company wanting to operate in the EU market may have to deal with DPAs, AI Act authorities, public-sector data authorities under the Data Governance Act, authorities overseeing manufacturers of digital products, and financial sector regulators.

He advocated for a single regulator and for reforms to streamline legal compliance. He also argued that the quality of regulation in Europe is often poor and that regulations are sometimes too long: some AI Act articles run to 17 lines, with exceptions and sub-exceptions that even lawyers struggle to make sense of.

Keynote speech by Beatriz de Anchorena on global data protection

Beatriz de Anchorena, Head of Argentina’s DPA and current Chair of the Convention 108+ Committee, delivered a compelling address on the importance of global collaboration in data protection. Representing a non-European perspective, she emphasised Argentina’s unique contribution to the Council of Europe (CoE).

Argentina was the first country outside Europe to receive an EU adequacy decision, which has since been renewed. Despite having data protection laws originating in the 2000s, Argentina remains a leader in promoting modernised frameworks.

Anchorena highlighted Argentina’s role as the 23rd state to ratify Convention 108+, noting that only seven more ratifications are needed for it to enter fully into force. She advocated for Convention 108+ as a global standard for data protection, capable of raising current standards without demanding complete homogeneity; instead, it offers common ground for nations to align on privacy matters.

What’s on your mind: Neuroscience and data protection moderated by Ella Mein

Marcello Ienca, a Professor of Ethics of AI and Neuroscience at the University of Munich, gave everyone in the audience a breakdown of how data and neuroscience intersect and the real-world implications for people’s privacy.

The brain, often described as the largest data repository in the world, presents a vast opportunity for exploration, and AI is acting as a catalyst in this process. Large language models are helping researchers decode the brain’s ‘hardware’ and ‘software’, although the full ‘language of thought’ remains unclear.

Neurotechnology raises real privacy and ethical concerns. For instance, the ability to identify biomarkers for conditions like schizophrenia or dementia introduces new vulnerabilities, such as the risk of ‘neuro-discrimination’, where predicting someone’s illness might lead to stigmatisation or unequal treatment.

However, it is argued that understanding and predicting neurological conditions is important, as nearly every individual is expected to experience at least one neurological condition in their lifetime. As one panellist put it, ‘We cannot cure what we don’t understand, and we cannot understand what we don’t measure.’

This field also poses questions about data ownership and access. Who should have the ‘right to read brains’, and how can we ensure that access to such sensitive data, particularly emotions and memories unrelated to clinical goals, is tightly controlled? With the data economy in an ‘arms race’, there is a push to extract information directly from its source: the human brain.

As neurotechnology advances, balancing its potential benefits with safeguards will be important to ensure that innovation does not come at the cost of individual privacy and autonomy as mandated by law.

In addition to this breakdown, Jurisconsult Anna Austin explained the ECtHR’s legal background on the topic. A jurisconsult plays a key role in keeping the court informed by maintaining a network that monitors relevant case law from member states. Central to this discussion are questions of consent and waiver.

Current ECtHR case law holds that any waiver must be unequivocal and fully informed, with the person fully understanding its consequences — a standard that can be challenging to meet. This high bar exists to safeguard fundamental rights, such as protection from torture and inhumane treatment, and to ensure the right to a fair trial. As it stands, she noted, there is no fully comprehensive waiver mechanism.

The right to a fair trial also needs to be understood in this context. One nuance here is therapeutic necessity, where forced medical interventions can be justified under strict conditions, with safeguards to ensure proportionality.

Yet concerns remain regarding self-incrimination under Article 6, particularly in scenarios where reading one’s mind could improperly compel evidence, raising questions about the abuse of such technologies.

Alessandra Pierucci from the Italian DPA raised the relevant question of whether new laws should be created for this matter or whether existing ones are sufficient. Within the context of her work, her authority is developing a mental privacy risk assessment.

Beyond privacy: Unveiling the true stakes of data protection. Moderated by Romain Robert

Nathalie Laneret, Vice President of Government Affairs and Public Policy at Criteo, presented her viewpoint on the role of AI and data protection. Addressing the balance between data protection and innovation, Laneret explained that these areas must work together.

She stressed the importance of finding ways to use pseudonymised data and of establishing clear codes of conduct for businesses, pointing out that innovation is high on the European Commission’s political agenda.
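Pseudonymisation, as referenced here, typically means replacing direct identifiers with consistent tokens so the data stays useful for analytics without exposing who it belongs to. A minimal illustrative sketch in Python — the keyed-hash approach, key handling, and field names are assumptions for demonstration, not a description of any company’s actual method:

```python
import hmac
import hashlib

# Secret key held separately from the dataset; without it, tokens
# cannot be linked back to the original identifiers.
SECRET_KEY = b"example-key-kept-out-of-the-dataset"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed-hash token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"user_id": "alice@example.com", "clicks": 12}
safe_record = {**record, "user_id": pseudonymise(record["user_id"])}
# The token is consistent across records, so counting and aggregation
# still work, but the e-mail address no longer appears in the data.
```

Because the same identifier always maps to the same token, analytics can continue per-user, while re-identification requires the separately held key — which is precisely why the GDPR treats pseudonymised data more favourably than raw personal data, yet still as personal data.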

Laneret addressed concerns about sensitive data, such as children’s data, highlighting Criteo’s proactive approach. With an internal ethics team, the company anticipated potential regulatory challenges around sensitive data, ensuring it stayed ahead of ethical and compliance issues.

In contrast, Max Schrems, Chair of noyb, offered a more critical perspective on data practices. He pointed out the economic disparity in the advertising model, explaining that while advertisers generate minimal revenue per user annually, users are often charged disproportionately high fees for their data. Schrems highlighted the importance of individuals having the right to freely give up their privacy if they choose, provided that consent is genuinely and freely given.

Forging the future: Reinventing data protection? Moderated by Gabriela Zanfir-Fortuna

In this last panel, Johnny Ryan from the Irish Council for Civil Liberties painted a stark picture of
the societal challenges tied to data misuse. He described a crisis fuelled by external influence,
misunderstandings, and data being weaponised against individuals.

However, Ryan argued that the core issue is not merely the problems themselves but the EU’s lack of an effective and immediate response strategy. He stressed the need for swift protective measures, criticising the current underuse of interim tools that could mitigate harm in real time.

Nora Ni Loideain, Lecturer and Director of the University of London’s Information Law and Policy Centre, discussed the GDPR’s impact on data protection enforcement. She explained that DPAs had limited powers in the past: in the Cambridge Analytica scandal, for example, the UK’s data protection authority could only fine Facebook £500,000 due to a lack of resources and authority.

The GDPR has since allowed DPAs to step up with independence, greater resources, and stronger enforcement capabilities, significantly improving their ability to hold companies accountable for privacy violations.

Happy Data Protection Day 2025!

Greece to launch AI tool for personalised education

Greece’s Ministry of Education is developing an AI-powered digital assistant aimed at helping students bridge learning gaps. Set to launch in the 2025-2026 school year, the tool will analyse student responses to exercises, identifying areas where they struggle and recommending targeted study materials. Initially focused on middle and senior high school students, it may eventually expand to lower elementary grades as well.

The AI assistant uses machine-learning algorithms to assess students’ strengths and weaknesses, tailoring study plans accordingly. Integrated with Greece’s Digital Tutoring platform, it will leverage over 15,000 interactive exercises and 7,500 educational videos. Teachers will also have access to the data, allowing them to better support their students.
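The gap-detection logic described above — score responses per topic, flag weak areas, match them to study materials — can be sketched in a deliberately simplified form. The topic names, threshold, and scoring below are purely illustrative assumptions, not details of the Greek system:

```python
# Hypothetical sketch: flag topics where a student's success rate on
# exercises falls below a threshold, then recommend matching materials.
from collections import defaultdict

def weak_topics(responses, threshold=0.6):
    """responses: list of (topic, correct: bool) pairs from exercises."""
    totals = defaultdict(lambda: [0, 0])  # topic -> [correct, attempted]
    for topic, correct in responses:
        totals[topic][0] += int(correct)
        totals[topic][1] += 1
    return [t for t, (ok, n) in totals.items() if ok / n < threshold]

def recommend(responses, library):
    """library: topic -> list of study materials (videos, exercises)."""
    return {t: library.get(t, []) for t in weak_topics(responses)}

responses = [("algebra", True), ("algebra", False), ("algebra", False),
             ("geometry", True), ("geometry", True)]
library = {"algebra": ["video-101", "exercise-pack-3"]}
print(recommend(responses, library))  # algebra flagged; geometry is not
```

A production system would of course use trained models rather than a fixed threshold, but the shape of the pipeline — responses in, weak topics out, materials matched — is the same.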

Education Minister Kyriakos Pierrakakis highlighted that the project, part of the “Enhancing the Digital School” initiative, is designed to complement, not replace, traditional teaching methods. The initiative, which aims to modernise Greece’s education system, will be funded through the EU Recovery and Resilience Facility. Approval is expected in March, after which competitive bidding will begin for the project’s implementation.

For more information on these topics, visit diplomacy.edu.

EU seeks private investment for AI gigafactories

The European Union is looking to the private sector to help fund large-scale AI computing infrastructure, known as ‘AI Gigafactories,’ to support the development of advanced AI models. Speaking at the AI Action Summit in Paris, European Commission President Ursula von der Leyen emphasised the need for powerful computing resources to enable European startups to compete globally.

To accelerate AI adoption, the EU has pledged €50 billion in funding, adding to a €150 billion commitment from private sector companies under the EU AI Champions initiative. The goal is to mobilise €200 billion in total investment, making it the largest public-private partnership for AI development in the world.

With the US and China heavily investing in AI infrastructure, Europe is under pressure to keep pace. Von der Leyen argued that Europe’s collaborative approach to AI, focused on shared computing resources and federated data, could provide a competitive advantage. She stressed that AI Gigafactories would be accessible to researchers, startups, and industries, ensuring that Europe remains a key player in the AI race.

For more information on these topics, visit diplomacy.edu.

JD Vance takes on Europe’s AI regulations in Paris

US Vice President JD Vance is set to speak at the Paris AI summit on Tuesday, where he is expected to address Europe’s regulation of artificial intelligence and the moderation of content on major tech platforms. As AI continues to grow, the global discussion has shifted from safety concerns to intense geopolitical competition, with nations vying to lead the technology’s development. On the first day of the summit, French President Emmanuel Macron emphasised the need for Europe to reduce regulatory barriers to foster AI growth, against the backdrop of regulatory divergence between the US, China, and Europe.

Vance, a vocal critic of content moderation on tech platforms, has voiced concerns over Europe’s approach, particularly in relation to Elon Musk’s platform X. Ahead of his trip, he stressed that free speech should be a priority for the US under President Trump, suggesting that European content moderation could harm these values. While Vance’s main focus in Paris is expected to be Russia’s invasion of Ukraine, he will lead the American delegation in discussions with nearly 100 countries, including China and India, to navigate competing national interests in the AI sector.

Macron and European Commission President Ursula von der Leyen are also expected to present a new AI strategy, aimed at simplifying regulations and accelerating Europe’s progress. At the summit, Macron highlighted the region’s shift to carbon-free nuclear energy to meet the growing energy demands of AI. German Chancellor Olaf Scholz called on European companies to unite in strengthening AI efforts within the continent. Meanwhile, OpenAI CEO Sam Altman is scheduled to speak, following a significant bid from a consortium led by Musk to purchase OpenAI.

The summit also anticipates discussions on a draft statement proposing an inclusive, human rights-based approach to AI, with an emphasis on avoiding market concentration and ensuring sustainability for both people and the planet. However, it remains unclear whether nations will support this approach as they align their strategies.

For more information on these topics, visit diplomacy.edu.

Microsoft offers price change to avoid EU antitrust fine

Microsoft has proposed increasing the price difference between its Office product with the Teams app and the version without it, to avoid a potential EU antitrust fine. This comes after complaints from rivals like Salesforce-owned Slack and German competitor alfaview regarding Microsoft’s practice of bundling Teams with Office. Since Teams became a part of Office 365 in 2017, it gained widespread use during the pandemic, largely due to its video conferencing capabilities.

To address concerns, Microsoft unbundled Teams from Office in 2023, offering Office without Teams for €2 less and a standalone Teams subscription for €5 per month. The European Commission is currently gathering feedback from companies, with a decision on whether to conduct a formal market test expected soon. As part of its offer, Microsoft has also proposed better interoperability terms to make it easier for competitors to challenge its products.

The EU has previously fined Microsoft a total of €2.2 billion over similar antitrust issues. If the Commission accepts Microsoft’s proposal without issuing a fine or finding wrongdoing, it would likely allow the EU to focus resources on ongoing investigations into other tech giants like Apple and Google.

For more information on these topics, visit diplomacy.edu.

EU lawmakers to negotiate next data protection supervisor

Lawmakers are set to negotiate with EU member states to determine the next European Data Protection Supervisor (EDPS), following the expiry in December of the mandate of the current supervisor, Wojciech Wiewiórowski. The decision on his successor is expected in March at the earliest, with the European Parliament and member states backing different candidates. The Parliament’s Civil Liberties, Justice and Home Affairs Committee (LIBE) voted to appoint Bruno Gencarelli, an Italian Commission official, while member states are supporting Wiewiórowski for another term.

The European Parliament’s group leaders have recently backed the LIBE decision, but a joint committee with the Council of the EU needs to be set up to finalise the appointment. The configuration of the committee is still under discussion. Meanwhile, privacy experts have expressed concern over Gencarelli’s candidacy, arguing that the next EDPS should not come from within the Commission due to potential conflicts of interest, citing past decisions such as the EDPS ruling against Microsoft 365’s use by the EU executive.

The EDPS role, while unable to fine Big Tech companies directly, is significant in shaping EU privacy law, as it publishes opinions on legislative proposals. The new appointee will play a crucial role in overseeing the data protection practices of EU institutions and ensuring that privacy rights are upheld.

ECB pushes for faster digital euro launch

The European Central Bank (ECB) is keen to accelerate the creation of the digital euro, particularly following US President Donald Trump’s endorsement of stablecoins linked to the US dollar. ECB board member Piero Cipollone highlighted that Trump’s backing could push European lawmakers to fast-track the legislation for the digital euro. The ECB envisions the digital euro as a central bank-backed online wallet, offering an alternative to major US payment providers like Visa and PayPal.

Despite the European Commission’s proposal for digital euro legislation in June 2023, progress has been slow due to some scepticism in the political and banking sectors. Cipollone remains optimistic that recent developments, including the rise of US stablecoins, will prompt greater urgency from EU lawmakers. He expressed hope that the digital euro legislation could be finalised by summer, allowing for negotiations with the Commission to be wrapped up before November.

Cipollone also raised concerns over the growing use of US stablecoins in Europe, warning that it could lead to a shift of deposits from European banks to the US. He acknowledged bankers’ fears that a digital euro could have a similar effect. Still, he reassured that the ECB would likely limit the amount of digital euros users can hold to prevent destabilisation. Several countries, including Nigeria and China, have already launched central bank digital currencies, while many others, such as Russia and Brazil, are in the testing phase.

EU supports OpenEuroLLM for open-source AI innovation

The European Commission has launched the OpenEuroLLM Project, a new initiative aimed at developing open-source, multilingual AI models. The project, which began on February 1, is supported by a consortium of 20 European research institutions, companies, and EuroHPC centres. Coordinated by Jan Hajič from Charles University and co-led by Peter Sarlin of AMD Silo AI, the project is designed to produce large language models (LLMs) that are proficient in all EU languages and comply with the bloc’s regulatory framework.

The OpenEuroLLM Project has been awarded the Strategic Technologies for Europe Platform (STEP) Seal, a recognition granted to high-quality initiatives under the Digital Europe Programme. This endorsement highlights the project’s importance as a critical technology for Europe. The LLMs developed will be open-sourced, allowing their use for commercial, industrial, and public sector purposes. The project promises full transparency, with public access to documentation, training codes, and evaluation metrics once the models are released.

The initiative aims to democratise access to high-quality AI technologies, helping European companies remain competitive globally and empowering public organisations to deliver impactful services. While the timeline for model release and specific focus areas have not yet been detailed, the European Commission has already committed funding and anticipates attracting further investors in the coming weeks.

EU bans AI tracking of workers’ emotions and manipulative online tactics

The European Commission has unveiled new guidelines restricting how AI can be used in workplaces and online services. Employers will be prohibited from using AI to monitor workers’ emotions, while websites will be banned from using AI-driven techniques that manipulate users into spending money. These measures are part of the EU’s Artificial Intelligence Act, which takes full effect in 2026, though some rules, including the ban on certain practices, apply from February 2025.

The AI Act also prohibits social scoring based on unrelated personal data, AI-enabled exploitation of vulnerable users, and predictive policing based solely on profiling. AI-powered facial recognition CCTV for law enforcement will be heavily restricted, permitted only under strict conditions. The EU has given member states until August to designate authorities responsible for enforcing these rules, with breaches potentially leading to fines of up to 7% of a company’s global revenue.

Europe’s approach to AI regulation is significantly stricter than that of the United States, where compliance is voluntary, and contrasts with China’s model, which prioritises state control. The guidelines aim to provide clarity for businesses and enforcement agencies while ensuring AI is used ethically and responsibly across the region.