Study warns of AI’s role in fueling bank runs

A new study from the UK has raised concerns about the risks of bank runs fueled by AI-generated fake news spread on social media. The research, published by Say No to Disinfo and Fenimore Harper, highlights how generative AI can create false stories or memes suggesting that bank deposits are at risk, leading to panic withdrawals. The study found that a significant portion of UK bank customers would consider moving their money after seeing such disinformation, especially with the speed at which funds can be transferred through online banking.

The issue is gaining traction globally, with regulators and banks worried about the growing role of AI in spreading malicious content. Following the collapse of Silicon Valley Bank in 2023, which saw $42 billion in withdrawals within a day, financial institutions are increasingly focused on detecting disinformation that could trigger similar crises. The study estimates that a small investment in social media ads promoting fake content could cause millions in deposit withdrawals.

The report calls for banks to enhance their monitoring systems, integrating social media tracking with withdrawal monitoring to better identify when disinformation is affecting customer behaviour. Revolut, a UK fintech, has already implemented real-time monitoring of emerging threats and has urged financial institutions to prepare for such risks. While banks remain optimistic about AI’s potential, the financial stability challenges it poses are a growing concern for regulators.
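Neither the study nor Revolut publishes a technical design, but the core idea, flagging moments when a spike in rumour mentions coincides with a spike in withdrawals, can be sketched in a few lines of Python. Everything below is illustrative: the data structures, z-score thresholds, and alert logic are invented for the example, not drawn from any bank’s actual system.

```python
from dataclasses import dataclass
from statistics import mean, stdev

# Hypothetical hourly observations for one bank: volume of posts pushing a
# "your deposits are at risk" narrative, and total withdrawals in GBP.
@dataclass
class HourlyStats:
    rumour_mentions: int
    withdrawals_gbp: float

def zscore(series: list[float], value: float) -> float:
    """How many standard deviations `value` sits above the series mean."""
    if len(series) < 2:
        return 0.0
    return (value - mean(series)) / (stdev(series) or 1.0)

def flag_disinfo_driven_outflow(history: list[HourlyStats],
                                current: HourlyStats,
                                mention_z: float = 3.0,
                                withdraw_z: float = 3.0) -> bool:
    """Alert only when BOTH signals spike together: a rumour surge alone
    may be noise, and an outflow surge alone may just be payday."""
    mz = zscore([h.rumour_mentions for h in history], current.rumour_mentions)
    wz = zscore([h.withdrawals_gbp for h in history], current.withdrawals_gbp)
    return mz >= mention_z and wz >= withdraw_z

# A quiet baseline week, then one hour with a coordinated rumour campaign:
baseline = [HourlyStats(12 + i % 5, 1.0e6 + 5e4 * (i % 7)) for i in range(168)]
spike = HourlyStats(rumour_mentions=450, withdrawals_gbp=9.2e6)
print(flag_disinfo_driven_outflow(baseline, spike))  # True -> escalate to humans
```

In practice the joint condition matters more than either threshold: requiring both signals to spike is what ties a withdrawal surge to the disinformation that may be driving it.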

As financial institutions work to mitigate AI-related risks, the broader industry is also grappling with how to balance the benefits of AI with the threats it may pose. UK Finance, the industry body, emphasised that banks are making efforts to manage these risks, while regulators continue to monitor the situation closely.

For more information on these topics, visit diplomacy.edu.

Hackers target Trump-linked crypto project with fake Barron memecoin

Zach Witkoff, co-founder of the Trump-affiliated crypto project World Liberty Financial, had his X account hacked on Wednesday. The hacker used the account to promote a fake memecoin project involving Barron Trump, claiming that the news would soon be confirmed by the Trump family.

World Liberty Financial quickly confirmed the hack, urging users to ignore the fraudulent Barron Trump project. This incident is part of a wider trend of crypto scams, as Ivanka Trump also warned earlier this year about a fake memecoin using her likeness to defraud investors.

World Liberty Financial, a decentralised finance project, launched its own token, WLFI, in October 2024. Despite these security issues, the project continues to operate with the Trump family’s name associated with its team.

For more information on these topics, visit diplomacy.edu.

China tops global data breach rankings in 2024, experts warn

In 2024, three countries entered the top 10 for the highest number of breached accounts. China topped the list, rising from 12th place in 2023, Germany moved up to fifth from 16th, and Poland secured the tenth spot, up from 17th, according to Surfshark, a cybersecurity firm. Despite these changes, Russia, the US, France, India, Brazil, Italy, and the UK remained in the top 10 for both years.

Brazil and Italy saw significant increases, climbing two spots each in 2024. Brazil experienced a 24-fold rise in breached accounts, while Italy saw a 21-fold surge. Russia and France maintained their positions in second and fourth place, though both saw dramatic increases, with Russia’s breaches rising 11 times and France’s nearly 14 times.

In 2024, regional data breach statistics show that Europe had the highest share, accounting for 29% of all breached accounts, totalling over 1.6 billion, with Russia leading the region. Asia followed as the second-most affected region, contributing 23% to the global total, or nearly 1.3 billion breached accounts, with China at the forefront. North America ranked third, representing 14% of all breaches, or about 770 million compromised accounts, primarily from the US.

The US, India, and the UK dropped in the rankings in 2024, but the number of breached accounts in these countries still rose. The US saw a 39% increase, ranking third globally, while India recorded five times more breaches than in 2023, and the UK experienced a 14-fold surge. China had the most dramatic increase, with breached accounts jumping nearly 340 times compared to the previous year.

In 2024, Australian users also faced a cyberattack every second, a twelvefold increase on the previous year. This contributed to a global rise in data breaches, with 5.6 billion accounts compromised worldwide, averaging 176 breached accounts per second. That global figure represents an eightfold increase over 2023, when 23 accounts were breached per second.
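The per-second figures follow directly from the annual totals. A quick back-of-the-envelope check (assuming a 365-day year; the result lands a shade above the quoted 176, which presumably reflects rounding in the 5.6 billion total):

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60            # 31,536,000

breached_2024 = 5.6e9                            # accounts (Surfshark, 2024)
rate_2024 = breached_2024 / SECONDS_PER_YEAR     # ~177.6 accounts per second
rate_2023 = 23                                   # accounts per second (2023)

print(round(rate_2024))                 # 178, in line with the quoted 176
print(round(rate_2024 / rate_2023, 1))  # 7.7, i.e. roughly eightfold
```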

Data Protection Day 2025: A new mandate for data protection

This analysis provides a detailed summary of Data Protection Day, covering the most relevant aspects of each session. The event welcomed attendees to Brussels, as well as virtually, to celebrate Data Protection Day 2025 together.

The tightly packed programme kicked off with opening remarks by the Secretary General of the European Data Protection Supervisor (EDPS), followed by a day of panels, speeches and side sessions from some of the brightest minds in the data protection field.

Keynote speech by Leonardo Cervera Navas

Given the recent political turmoil in the EU, specifically the annulment of the Romanian presidential election a few months ago, it was no surprise that the first keynote speech addressed how algorithms are used to destabilise and threaten democracies. Cervera Navas explained how third-country algorithms are deployed against EU democracies to target their values.

He then discussed the stark power imbalance that arises when a handful of wealthy individuals and their companies dominate the tech world and end up violating our privacy. He struck a more hopeful note, however, arguing that the crisis in Europe is making Europeans stronger. ‘Our values are what unite us, and part of them are the data protection values the EDPB strongly upholds’, he emphasised.

He acknowledged the evident overlap of rules and regulations between different legal instruments but also highlighted the creation of tools that can help uphold our privacy, such as the Digital Clearing House 2.0.

Organiser’s panel moderated by Kait Bolongaro

This panel covered a wide variety of data protection topics, such as developments on the ground, the role of international cooperation in the fight against privacy violations, and each panellist’s priorities for the coming years. That last question was especially interesting given the professional affiliations of each panellist.

Notably, the organisers clearly spent a lot of time curating a diverse panel, with speakers from academia, private industry, public bodies, and even the EDPS. This ensures that a panel’s topic is discussed from more than one point of view, which makes for a far more engaging discussion.

Wojciech Wiewiorowski, the current European Data Protection Supervisor, reminded us of the important role that data protection authorities (DPAs) play in the effective enforcement of the GDPR. Matthias Kloth, Head of Digital Governance and Sport at the Council of Europe (CoE), offered a broader perspective. As his work centres on the modernised Convention 108, now known as Convention 108+, he shed some light on the effort to update past laws and bring them into the modern age.

Regarding international cooperation, each panellist had their own take on how to facilitate and streamline it. Wiewiorowski rightly stated that data has no borders and that cooperation must be a global effort involving everyone. However, he cautioned that, in the age of cooperation, we cannot settle for a ‘lowest common denominator level of protection’.

Jo Pierson, Professor at the Vrije Universiteit Brussel and Hasselt University, said that international cooperation is very challenging, noting that a country’s values may change overnight and citing Trump’s recent re-election as an example.

Audience questions

A member of the audience posed a very relevant question about the legal field as a whole. He asked the panellists what they thought of the fact that enforcing one’s rights is a difficult and costly process. To provide context, he explained that a person must be legally literate and bear their own costs to litigate or file an appeal.

Wiewiorowski of the EDPS pointed out that changing the GDPR’s procedural rules to tackle this issue is not feasible. Small-scale procedural amendments remain an option, but he does not foresee the GDPR being reopened in the coming years.

However, Pierson had a more practical take on the matter and suggested that this is where individuals and civil society organisations can join forces. Individuals can approach organisations such as noyb, Privacy International, and EDRi for help or advice. But this raises the question: on whose shoulders should this burden rest?

One last question from the audience concerned DeepSeek, the new Chinese AI that recently dropped onto the market like a bombshell. The panellists were asked whether this new AI is an enemy or a friend to Europeans. Each panellist avoided calling it either, but they found common ground on the need for international cooperation and on the view that an open-source AI is not a bad thing if it can be trained by Europeans.

The last remark regarding this panel was Wiewiorowski’s comment on Chinese AI, which he compared to ‘Sputnik Day’ (recalling the 1957 Soviet satellite launch that set off the space race between the United States and the USSR). Are we facing a new technological gap? Will non-Western allies and foes beat us in this digital arms race?

Data protection in a changing world: What lies ahead? Moderated by Anna Buchta

This session also posed a series of interesting questions to high-profile panellists. The range of the panel was impressive, bringing together views from the European Commission, the Polish Minister of Digital Affairs, the European Parliament, the UK’s Information Commissioner, and DIGITALEUROPE.

Marina Kaljurand of the European Parliament’s LIBE committee stood out for her passion for cyber matters. She revealed that many people in the European Parliament are not tech literate, while others are extremely well-versed in how the technology is used. This information asymmetry within the European Parliament needs to be addressed if members are to vote on digital regulations.

She gave an important overview of the state of data transfers with the UK and the USA. The UK has in place an adequacy decision that has raised multiple flags in the European Parliament and is set to expire in June 2025.

The future of data transfers with the UK is very uncertain. As for the USA, she mentioned that difficult times lie ahead, as the actions of the recently re-elected President Trump are degrading US-EU relations. On the child sexual abuse material regulation, she stressed how important it is to protect children: the debate is not about whether to protect them, but about how.

The regulation as currently proposed would go too far in violating one’s privacy, but on the other hand, it is difficult to find alternatives that protect children. This reflects how difficult regulating can be even when everyone at the table shares the same goals.

Irena Moozova, Deputy Director-General of DG JUST at the European Commission, said that her priorities for the coming years are to cut red tape, simplify guidelines for businesses, and support compliance efforts by small and medium-sized enterprises. She mentioned the public consultations on the upcoming Digital Fairness Act to be held this summer.

John Edwards, the UK Information Commissioner, highlighted the transformative impact of emerging technologies, particularly Chinese AI, and how disruptive innovations can rapidly reshape markets. He discussed the ICO’s evolving strategies, noting their alignment with ideas shared by other experts. The organisation’s focus for the next two years includes key areas such as AI’s role in biometrics and tracking, as well as safeguarding children’s privacy. To address these priorities, the ICO has published an online tracking strategy and conducted research on children’s data privacy, including the development of systems tailored to protect young users.

Alberto Di Felice, Legal Counsel to DIGITALEUROPE, stressed the importance of simplifying regulations, stating repeatedly that there is too much bureaucracy and that too many actors are involved in regulation. For example, a company that wants to operate in the EU market may have to deal with DPAs, AI Act authorities, the authorities overseeing public-sector data under the Data Governance Act, the authorities responsible for manufacturers of digital products, and financial sector authorities.

He advocated a single regulator and reforms to streamline legal compliance. He also argued that the quality of regulation in Europe is poor and that regulations are sometimes too long: some AI Act articles run to 17 lines, with exceptions and sub-exceptions that even lawyers cannot make sense of.

Keynote speech by Beatriz de Anchorena on global data protection

Beatriz de Anchorena, Head of Argentina’s DPA and current Chair of the Convention 108+ Committee, delivered a compelling address on the importance of global collaboration in data protection. Representing a non-European perspective, she emphasised Argentina’s unique contribution to the Council of Europe (CoE).

Argentina was the first country outside Europe to receive an EU adequacy decision, which has since been renewed. Despite having data protection laws originating in the 2000s, Argentina remains a leader in promoting modernised frameworks.

Anchorena highlighted Argentina’s role as the 23rd state to ratify Convention 108+, noting that only seven more ratifications are needed for it to enter fully into force. She advocated Convention 108+ as a global standard for data protection, capable of raising current data protection standards without demanding complete homogeneity. Instead, it offers common ground for nations to align on privacy matters.

What’s on your mind: Neuroscience and data protection. Moderated by Ella Mein

Marcello Ienca, Professor of Ethics of AI and Neuroscience at the Technical University of Munich, gave the audience a breakdown of how data and neuroscience intersect and of the real-world implications for people’s privacy.

The brain, often described as the largest data repository in the world, presents a vast opportunity for exploration, and AI is acting as a catalyst in this process. Large language models are helping researchers decode the brain’s ‘hardware’ and ‘software’, although the full ‘language of thought’ remains elusive.

Neurotechnology raises real privacy and ethical concerns. For instance, the ability to identify biomarkers for conditions like schizophrenia or dementia introduces new vulnerabilities, such as the risk of ‘neuro discrimination’, where predicting a person’s illness might lead to stigmatisation or unequal treatment.

However, it is argued that understanding and predicting neurological conditions is important, as nearly every individual is expected to experience at least one neurological condition in their lifetime. As one panellist put it, ‘We cannot cure what we don’t understand, and we cannot understand what we don’t measure.’

This field also poses questions about data ownership and access. Who should have the ‘right to read brains’, and how can we ensure that access to such sensitive data, particularly emotions and memories unrelated to clinical goals, is tightly controlled? With the data economy in an ‘arms race’, there is a push to extract information directly from its source: the human brain.

As neurotechnology advances, balancing its potential benefits with safeguards will be essential to ensure that innovation does not come at the cost of the individual privacy and autonomy the law mandates.

In addition to this breakdown, Jurisconsult Anna Austin explained the ECtHR’s legal background on the subject. A jurisconsult plays a key role in keeping the court informed, maintaining a network that monitors relevant case law from member states. Central to this discussion are questions of consent and waiver.

Current ECtHR case law holds that any waiver must be unequivocal and fully informed, with the person fully understanding its consequences, a standard that can be challenging to meet. This high bar exists to safeguard fundamental rights, such as protection from torture and inhumane treatment and the right to a fair trial. As it stands, she stated, there is no fully comprehensive waiver mechanism.

The right to a fair trial is an absolute right that needs to be understood in this context. One nuance here is therapeutic necessity, whereby forced medical interventions can be justified under strict conditions, with safeguards to ensure proportionality.

Yet concerns remain regarding self-incrimination under Article 6, particularly in scenarios where reading a person’s mind could improperly compel evidence, raising questions about the abuse of such technologies.

Alessandra Pierucci from the Italian DPA made a relevant case for considering whether new laws should be created for this matter or whether existing ones are sufficient. Within the context of her work, the Italian DPA is developing a mental privacy risk assessment.

Beyond privacy: unveiling the true stakes of data protection. Moderated by Romain Robert

Nathalie Laneret, Vice President of Government Affairs and Public Policy at Criteo, presented her viewpoint on the role of AI and data protection. Addressing the balance between data protection and innovation, Laneret explained that these areas must work together.

She stressed the importance of finding ways to use pseudonymised data and of clear codes of conduct for businesses, pointing out that innovation is high on the European Commission’s political agenda.

Laneret addressed concerns about sensitive data, such as children’s data, highlighting Criteo’s proactive approach. With an internal ethics team, the company anticipated potential regulatory challenges around sensitive data, ensuring it stayed ahead of ethical and compliance issues.

In contrast, Max Schrems, Chair of noyb, offered a more critical perspective on data practices. He pointed out the economic disparity in the advertising model: while advertisers generate only minimal revenue per user annually, companies often charge users huge fees for their data. Schrems highlighted the importance of individuals having the right to give up their privacy if they choose, provided that consent is genuinely voluntary and freely given.

Forging the future: reinventing data protection? Moderated by Gabriela Zanfir-Fortuna

In this last panel, Johnny Ryan from the Irish Council for Civil Liberties painted a stark picture of the societal challenges tied to data misuse. He described a crisis fuelled by external influence, misunderstandings, and data being weaponised against individuals.

However, Ryan argued that the core issue is not merely the problems themselves but the fact that the EU lacks an effective and immediate response strategy. He stressed the need for swift protective measures, criticising the current underuse of interim tools that could mitigate harm in real time.

Nora Ni Loideain, Lecturer and Director of the University of London’s Information Law and Policy Centre, discussed the impact of the GDPR on data protection enforcement. She explained how DPAs had limited powers in the past: in the Cambridge Analytica scandal, for example, the UK’s data protection authority could fine Facebook only £500,000 due to its lack of resources and authority.

The GDPR has since allowed DPAs to step up, with independence, greater resources, and stronger enforcement capabilities, significantly improving their ability to hold companies accountable for privacy violations.

Happy Data Protection Day 2025!

Ancient history can bring clarity to AI regulation and digital diplomacy

In his op-ed, From Hammurabi to ChatGPT, Jovan Kurbalija draws on the ancient Code of Hammurabi to argue for a principle of legal accountability in modern AI regulation and governance. Dating back 4,000 years, Hammurabi’s Code established that builders were responsible for damages caused by their work—a principle Kurbalija believes should apply to AI developers, deployers, and beneficiaries today.

While this may seem like common sense, current legal frameworks, particularly Section 230 of the 1996 US Communications Decency Act, have created a loophole. The provision, designed to protect early internet platforms, grants them immunity for user-generated content, allowing today’s AI companies to evade responsibility for harms such as deepfakes, fraud, and cybercrime. This legal anomaly complicates global AI governance and digital diplomacy efforts, as inconsistent accountability standards hinder international cooperation.

Kurbalija emphasises that existing legal rules—applied by courts, as seen in internet regulation—should suffice for AI governance. New AI-specific rules should only be introduced in exceptional cases, such as when addressing apparent legal gaps, similar to how cybercrime and data protection laws emerged in the internet era.

He concludes that AI, like hammers, is ultimately a tool—albeit a powerful one. Legal responsibility must lie with humans, not machines. By discarding the immunity shield of Section 230 and reaffirming principles of accountability, transparency, and justice, policymakers can draw on 4,000 years of legal wisdom to govern AI effectively. That approach strengthens AI governance and advances digital diplomacy by creating a foundation for global norms and cooperation in the digital age.

For more information on these topics, visit diplomacy.edu.

Italian government denies role in spyware targeting critics

The Italian government is under increasing pressure to explain its links to Israeli spyware firm Paragon, following reports that the company severed ties with Rome over allegations of misuse. The controversy erupted after WhatsApp revealed that Paragon spyware had been used to target multiple users, including a journalist and a human rights activist critical of Prime Minister Giorgia Meloni.

While the government has confirmed that seven people in Italy were affected, it denies any involvement in the hacking and has called for an investigation. However, reports from The Guardian and Haaretz claim Paragon cut ties with Italy due to doubts over the government’s denial. Opposition politicians have demanded clarity, with former Prime Minister Matteo Renzi insisting that those responsible be held accountable.

Deputy Prime Minister Matteo Salvini initially suggested that internal disputes within the intelligence services might be behind the scandal, though he later retracted his comment, claiming he was referring to unrelated cases. Meanwhile, critics argue that the government cannot ignore the growing concerns over the potential misuse of surveillance tools against political opponents.

With mounting calls for transparency, the affair has intensified debate over government accountability and digital surveillance, raising broader questions about the ethical use of spyware within democratic nations.

French authorities scrutinise X’s algorithms for potential bias

French prosecutors have launched an investigation into X, formerly known as Twitter, over alleged algorithmic bias. The probe was initiated after a lawmaker raised concerns that biased algorithms on the platform may have distorted automated data processing. The Paris prosecutor’s office confirmed that cybercrime specialists are analysing the issue and conducting technical checks.

The investigation comes just days before a major AI summit in Paris, where global leaders and tech executives from companies like Microsoft and Alphabet will gather. X has not responded to requests for comment. The case highlights growing scrutiny of the platform, which has been criticised for its role in shaping political discourse. Elon Musk’s vocal support for right-wing parties in Europe has raised fears of foreign interference.

France’s J3 cybercrime unit, which is leading the investigation, has previously targeted major tech platforms, including Telegram. Last year, it played a key role in the arrest of Telegram’s founder and pressured the platform to remove illegal content. X has also faced legal challenges in other countries, including Brazil, where it was temporarily blocked for failing to curb misinformation.

UK gambling websites breach data protection laws

Gambling companies are under investigation for covertly sharing visitors’ data with Facebook’s parent company, Meta, without proper consent, breaching data protection laws. A hidden tracking tool embedded in numerous UK gambling websites has been sending data, such as the web pages users visit and the buttons they click, to Meta, which then uses this information to profile individuals as gamblers. This data is then used to target users with gambling-related ads, violating the legal requirement for explicit consent before sharing such information.
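To make the mechanics concrete: pixels of this kind typically work by firing a small HTTP request from the visitor’s browser to the tracker’s endpoint, carrying the page URL and an event name in the query string. The Python sketch below reconstructs the general shape of such a request for illustration only; the real Meta Pixel sends a more extensive parameter set, and the pixel ID and site URL here are placeholders, not real values.

```python
from urllib.parse import urlencode

# Illustrative reconstruction of the kind of request a tracking pixel fires
# from the browser. The Meta Pixel reports events to a facebook.com/tr
# endpoint; the parameter set here is deliberately simplified.
PIXEL_ENDPOINT = "https://www.facebook.com/tr"

def pixel_request_url(pixel_id: str, event: str, page_url: str) -> str:
    """Build the GET URL that ships the visited page and event to Meta."""
    params = {
        "id": pixel_id,   # identifies which advertiser's pixel fired
        "ev": event,      # e.g. "PageView" or a custom click event
        "dl": page_url,   # the page the user was on when the pixel fired
    }
    return f"{PIXEL_ENDPOINT}?{urlencode(params)}"

# A single page view on a betting site becomes a profiling signal for Meta:
print(pixel_request_url("000000000000000", "PageView",
                        "https://example-bookmaker.test/roulette"))
```

Because this request fires automatically on page load, the data leaves the browser before the visitor has had any opportunity to consent, which is precisely the behaviour the testing uncovered.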

Testing of 150 gambling websites revealed that 52 automatically transmitted user data to Meta, including large brands like Hollywoodbets, Sporting Index, and Bet442. This data sharing occurred without users having the opportunity to consent, resulting in targeted ads for gambling websites shortly after visiting these sites. Experts have raised concerns about the industry’s unlawful practices and called for immediate regulatory action.

The Information Commissioner’s Office (ICO) is reviewing the use of tracking tools like Meta Pixel and has warned that enforcement action could be taken, including significant fines. Some gambling companies have updated their websites to prevent automatic data sharing, while others have removed the tracking tool altogether in response to the findings. However, the Gambling Commission has yet to address the issue of third-party profiling used to recruit new customers.

The misuse of data in this way highlights the risks of unregulated marketing, particularly for vulnerable individuals. Data privacy experts have stressed that these practices not only breach privacy laws but could also exacerbate gambling problems by targeting individuals who may already be at risk.

Sony extends PlayStation Plus after global network disruption

PlayStation Plus subscribers will receive an automatic five-day extension after a global outage disrupted the PlayStation Network for around 18 hours on Friday and Saturday. Sony confirmed on Sunday that network services had been fully restored and apologised for the inconvenience but did not specify the cause of the disruption.

The outage, which started late on Friday, left users unable to sign in, play online games or access the PlayStation Store. By Saturday evening, Sony announced that services were back online. At its peak, Downdetector.com recorded nearly 8,000 affected users in the US and over 7,300 in the UK.

PlayStation Network plays a vital role in Sony’s gaming division, supporting millions of users worldwide. Previous disruptions have been more severe, including a cyberattack in 2014 that shut down services for several days and a major 2011 data breach affecting 77 million users, leading to a month-long shutdown and regulatory scrutiny.

South Korea accuses DeepSeek of excessive data collection

South Korea’s National Intelligence Service (NIS) has raised concerns about the Chinese AI app DeepSeek, accusing it of excessively collecting personal data and using it for training purposes. The agency warned government bodies last week to take security measures, highlighting that unlike other AI services, DeepSeek collects sensitive data such as keyboard input patterns and transfers it to Chinese servers. Some South Korean government ministries have already blocked access to the app due to these security concerns.
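Why do ‘keyboard input patterns’ count as sensitive? The timing between keystrokes forms a behavioural biometric that can distinguish and re-identify users even without knowing what they typed. The snippet below is a generic illustration of that concept using invented sample data; it says nothing about how DeepSeek actually implements collection.

```python
# Generic illustration of keystroke dynamics, NOT DeepSeek's actual code:
# the gaps between keypresses form a behavioural biometric that can help
# re-identify a user across sessions. Timings below are invented.
def interkey_intervals(timestamps_ms: list[float]) -> list[float]:
    """Millisecond gaps between successive keypresses."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

# Two hypothetical users typing the same five-character word:
alice = [0.0, 95.0, 180.0, 310.0, 395.0]
bob = [0.0, 150.0, 420.0, 505.0, 780.0]
print(interkey_intervals(alice))  # [95.0, 85.0, 130.0, 85.0]  (fast, even rhythm)
print(interkey_intervals(bob))    # [150.0, 270.0, 85.0, 275.0] (distinct rhythm)
```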

The NIS also pointed out that DeepSeek grants advertisers unrestricted access to user data and stores South Korean users’ data in China, where it could be accessed by the Chinese government under local laws. The agency also noted discrepancies in the app’s responses to sensitive questions, such as the origin of kimchi, which DeepSeek claimed was Chinese when asked in Chinese, but Korean when asked in Korean.

DeepSeek has also been accused of censoring political topics, such as the 1989 Tiananmen Square crackdown, prompting the app to suggest changing the subject. In response to these concerns, China’s foreign ministry stated that the country values data privacy and security and complies with relevant laws, denying that it pressures companies to violate privacy. DeepSeek has not yet commented on the allegations.