The Eurosystem, comprising the European Central Bank (ECB) and the national central banks of the euro area, is advancing the digital euro project, which aims to modernise central bank money. Following an initial investigation phase launched in 2021, the ECB’s Governing Council approved a two-year preparation phase starting on 18 October 2023 and concluding by 31 October 2025. This phase will finalise the digital euro rulebook, select potential platform and infrastructure providers, and conduct further testing, particularly of the offline functionality.
A cornerstone of the digital euro project is its “privacy by design” approach. Technological measures such as pseudonymisation, hashing, and encryption will ensure that online transactions cannot be linked to specific individuals. Payment service providers will access only the transaction data necessary for compliance with EU law, and user consent will be required for any additional commercial use. The digital euro is also designed for offline use, allowing payments without an internet connection, akin to cash transactions. This offline functionality will enhance privacy and usability in areas with limited network coverage or during power outages.
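To make the pseudonymisation idea concrete, here is a minimal illustrative sketch of salted hashing, the generic technique behind the measures mentioned above. This is not the Eurosystem’s actual scheme (which has not been published in this form); the function names and the choice of SHA-256 are assumptions for illustration only.

```python
import hashlib
import secrets

def pseudonymise(user_id: str, salt: bytes) -> str:
    """Derive a stable pseudonym from a user identifier.

    A salted SHA-256 digest lets an intermediary link transactions
    belonging to the same pseudonym without learning the underlying
    identity, as long as the salt stays secret.
    """
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

# A per-deployment secret salt prevents dictionary attacks on known IDs.
salt = secrets.token_bytes(32)

p1 = pseudonymise("alice@example.com", salt)
p2 = pseudonymise("alice@example.com", salt)
p3 = pseudonymise("bob@example.com", salt)

assert p1 == p2   # the same user always maps to the same pseudonym
assert p1 != p3   # different users remain distinguishable
```

The key property is one-way linkage: a provider can detect that two payments came from the same pseudonym (e.g. for fraud monitoring) without being able to reverse the digest back to the person.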
Legislative and stakeholder engagement continues in parallel, with the European Parliament and Council of the European Union working on the legislative framework proposed in 2023. Stakeholder involvement ensures the digital euro meets high standards of quality, security, and usability. Fraud prevention remains a priority, with ongoing assessments indicating that current technologies can effectively detect and prevent fraud using pseudonymised information.
By the end of 2025, the ECB will decide whether to proceed further with the digital euro, contingent on the completion of the legislative process.
A federal judge in Illinois dismissed a class action lawsuit against the social network X, ruling that the photos it collected did not constitute biometric data under the state’s Biometric Information Privacy Act (BIPA). The lawsuit alleged that X violated BIPA by using Microsoft’s PhotoDNA software to scan for offensive images without proper disclosure and consent.
The judge concluded that the plaintiff failed to prove that the PhotoDNA tool involved facial geometry scanning or could identify specific individuals. Instead, the software analysed uploaded photos to detect nudity or pornographic content, which did not qualify as a scan of facial geometry under BIPA.
The ruling mirrors a recent case involving Facebook, where allegations of illegally collecting biometric data were dismissed. Both cases clarified that a digital signature generated from a photograph, known as a ‘hash’ or face signature, did not violate BIPA’s definition of biometric identifiers.
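The distinction the courts drew turns on what a photo-derived ‘hash’ actually is. PhotoDNA’s algorithm is proprietary, so the sketch below uses a generic perceptual “average hash” purely to illustrate the concept: the hash lets near-identical images be matched against a known list, but it encodes brightness patterns, not facial geometry, and cannot identify who is in the photo. The 2×2 “images” are invented toy data.

```python
def average_hash(pixels):
    """Compute a simple perceptual 'average hash' of a grayscale image.

    Each bit records whether a pixel is brighter than the image mean,
    so re-encoded or slightly altered copies of the same image produce
    matching hashes, while the hash reveals nothing about identity.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

img = [[10, 200], [220, 30]]        # toy grayscale image
tweaked = [[12, 198], [221, 29]]    # same image after lossy re-encoding

# Slight pixel changes leave the perceptual hash intact,
# which is what makes it useful for matching known content.
assert average_hash(img) == average_hash(tweaked)
```

This is why such signatures sit outside BIPA’s definition: matching a hash against a database of known abusive images is a content comparison, not a biometric identification.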
The judge emphasised that BIPA aims to regulate specific biometric identifiers like retina scans or fingerprints, excluding photographs to avoid an overly broad scope. Applying BIPA to any face geometry scan that cannot identify individuals would contradict the law’s purpose of ensuring notice and consent.
BIPA’s private right of action has been a significant deterrent for biometrics companies, allowing users to sue for damages in cases of non-compliance.
The EU is facing significant controversy over a proposed law that would require AI scanning of users’ photos and videos on messaging apps to detect child sexual abuse material (CSAM). Critics, including major tech companies like WhatsApp and Signal, argue that this law threatens privacy and encryption, undermining fundamental rights. They also warn that the AI detection systems could produce numerous false positives, overwhelming law enforcement.
A recent meeting among the EU member states’ representatives failed to reach a consensus on the proposal, leading to further delays. The Belgian presidency had hoped to finalise a negotiating mandate, but disagreements among member states prevented progress. The ongoing division means that discussions on the proposal will likely continue under Hungary’s upcoming EU Council presidency.
Opponents of the proposal, including Signal President Meredith Whittaker and Proton founder Andy Yen, emphasise the dangers of mass surveillance and the need for more targeted approaches to child protection. Despite the current setback, there’s concern that efforts to push the law forward will persist, necessitating continued vigilance from privacy advocates.
Olga Loiek, a 21-year-old University of Pennsylvania student from Ukraine, experienced a disturbing twist after launching her YouTube channel last November. Her image was hijacked and manipulated through AI to create digital alter egos on Chinese social media platforms. These AI-generated avatars, such as ‘Natasha,’ posed as Russian women fluent in Chinese, promoting pro-Russian sentiments and selling products like Russian candies. These fake accounts amassed hundreds of thousands of followers in China, far surpassing Loiek’s own online presence.
Loiek’s experience highlights a broader trend of AI-generated personas on Chinese social media, presenting themselves as supportive of Russia and fluent in Chinese while selling various products. Experts reveal that these avatars often use clips of real women without their knowledge, aiming to appeal to single Chinese men. Some posts include disclaimers about AI involvement, but the followers and sales figures remain significant.
Why does it matter?
These events underscore the ethical and legal concerns surrounding AI’s misuse. As generative AI systems like ChatGPT become more widespread, issues related to misinformation, fake news, and copyright violations are growing.
In response, governments are starting to regulate the industry. China proposed guidelines to standardise AI by 2026, while the EU’s new AI Act imposes strict transparency requirements. However, experts like Xin Dai from Peking University warn that regulations struggle to keep pace with rapid AI advancements, raising concerns about the unchecked proliferation of AI-generated content worldwide.
Worldcoin, a cryptocurrency startup co-founded by OpenAI’s Sam Altman, has been permitted to resume its iris-scanning operations in Kenya after the conclusion of a year-long investigation into privacy and regulatory concerns. The Kenyan Directorate of Criminal Investigations (DCI) officially closed its probe, citing no further police action as necessary. However, Worldcoin must now register its business in Kenya, secure requisite licences, and vet its vendors to maintain operations.
Worldcoin’s activities had been suspended nearly a year ago due to compliance issues with Kenyan security, financial services, and data protection laws. A parliamentary committee recommended shutting down the company altogether, citing violations of the Computer Misuse and Cybercrimes Act, and labelling its activities as potential espionage. It was also found that Worldcoin and its parent entity, Tools for Humanity, were unregistered in Kenya, and had not received approval to use the Orbs, considered telecommunications equipment.
Thomas Scott, Chief Legal Officer of Tools for Humanity, expressed gratitude for the fair investigation and said this is merely a new beginning. He highlighted the company’s commitment to working with Kenyan authorities to advance Worldcoin’s mission and create economic opportunities. While Worldcoin has resolved its immediate regulatory hurdles in Kenya, it continues to face significant scrutiny in other countries, including ongoing investigations in Germany, Spain, Portugal, and Italy.
The situation has highlighted challenges in regulating new technologies, particularly around privacy and compliance. In response, Kenya is developing a regulatory framework for virtual assets, aiming to provide clearer guidelines for crypto startups like Worldcoin. The outcome could pave the way for more structured compliance pathways amid the rapid advancements in digital finance and identity systems.
In a bold move highlighting the intersection of technology and politics, businessman Steve Endacott is running in the 4 July national election in Britain, aiming to become a member of parliament (MP) with the aid of an AI-generated avatar. The campaign leaflet for Endacott features not his own face but that of an AI avatar dubbed ‘AI Steve.’ The initiative, if successful, would result in the world’s first AI-assisted lawmaker.
Endacott, founder of Neural Voice, presented his AI avatar to the public in Brighton, engaging with locals on various issues through real-time interactions. The AI discusses topics like LGBTQ rights, housing, and immigration and then offers policy ideas, seeking feedback from citizens. Endacott aims to demonstrate how AI can enhance voter access to their representatives, advocating for a reformed democratic process where people are more connected to their MPs.
Despite some scepticism, with concerns about the effectiveness and trustworthiness of an AI MP, Endacott insists that the AI will serve as a co-pilot, formulating policies reviewed by a group of validators to ensure security and integrity. The Electoral Commission clarified that the elected candidate would remain the official MP, not the AI. While public opinion is mixed, the campaign underscores the growing role of AI in various sectors and sparks an important conversation about its potential in politics.
Victor Miller, 42, has stirred controversy by filing to run for mayor of Cheyenne, Wyoming, using a customised AI chatbot named VIC (virtual integrated citizen). Miller argued that VIC, powered by OpenAI technology, could effectively make political decisions and govern the city. However, OpenAI quickly shut down Miller’s access to their tools for violating policies against AI use in political campaigning.
The emergence of AI in politics underscores ongoing debates about its responsible use as technology outpaces legal and regulatory frameworks. Wyoming Secretary of State Chuck Gray clarified that state law requires candidates to be ‘qualified electors,’ meaning VIC, as an AI bot, does not meet the criteria. Despite this setback, Miller intends to continue promoting VIC’s capabilities using his own ChatGPT account.
Meanwhile, similar AI-driven campaigns have surfaced globally, including in the UK, where another candidate utilises AI models for parliamentary campaigning. Critics, including experts like Jen Golbeck from the University of Maryland, caution that while AI can support decision-making and manage administrative tasks, ultimate governance decisions should remain human-led. Despite the attention these AI candidates attract, observers like David Karpf from George Washington University dismiss them as gimmicks, highlighting the serious nature of elections and the need for informed human leadership.
Miller remains optimistic about the potential for AI candidates to influence politics worldwide. Still, the current consensus suggests that AI’s role in governance should be limited to supportive functions rather than decision-making responsibilities.
Butterflies, a new social network where humans and AI interact, has launched publicly on iOS and Android after five months in beta. Founded by former Snap engineering manager Vu Tran, the app allows users to create AI personas, called Butterflies, that post, comment, and message like real users. Each Butterfly has unique backstories, opinions, and emotions, enhancing the interaction beyond typical AI chatbots.
Tran developed Butterflies to provide a more creative and substantial AI experience. Unlike other AI chatbots from companies like Meta and Snap, Butterflies aims to integrate AI personas into a traditional social media feed, where AI and human users can engage with each other’s content. The app’s beta phase attracted tens of thousands of users, with some spending hours creating and interacting with hundreds of AI personas.
Butterflies’ unique approach has led to diverse user interactions, from creating alternate universe personas to role-playing in popular fictional settings. Vu Tran believes the app offers a wholesome way to interact with AI, helping people form connections that might be difficult in traditional social settings due to social anxiety or other barriers.
Initially free, Butterflies may introduce a subscription model and brand interactions in the future. Backed by a $4.8 million seed round led by Coatue and other investors, Butterflies aims to expand its functionality and continue to offer a novel way for users to explore AI and social interaction.
The Australian Department of Communications is set to start a trial for age verification technologies to ensure age-restricted online content is only accessible to appropriate individuals. This initiative aims to protect minors from harmful material.
The trial will focus on verifying users’ ages on platforms such as gambling sites, adult games, entertainment, and possibly social media. The Department will manage the trial’s logistics, while an independent third-party expert will evaluate the technology’s effectiveness. The selection process for this expert will begin next month, inviting proposals from qualified organisations and individuals.
Participation in the trial is voluntary for digital platform companies, but the Department encourages major tech firms to join, given their obligations under the Online Safety Act, which is currently under review. The trial will explore various age verification methods, including biometric age estimation, ID document verification, and AI-driven age inference.
Why does it matter?
In line with these efforts to address harmful content, the Australian Government has enhanced the Basic Online Safety Expectations (BOSE) determination, requiring online service providers, including social media platforms, to adhere to higher online safety standards. Previously, the Albanese government decided against a mandatory age verification system for online pornography and adult content due to the immaturity of current technologies, according to Biometric Update.
The National Identification Registry (NIR) and the Civil Service Agency (CSA) in Liberia have partnered to issue biometric ID cards to civil servants to combat financial fraud in the public sector. The agreement will provide ID cards to employees in 103 government agencies to reduce payroll fraud and prevent identity duplication. CSA Director General Hon. Josiah F. Joekai stressed that this initiative will improve the verification process for public servants, which, in turn, is expected to enhance service delivery.
The Memorandum of Understanding (MoU) establishes a comprehensive biometric verification system, allowing the CSA ongoing access to the NIR’s e-verification platform to ensure all Government of Liberia employees’ National Identification Numbers are included on their ID cards. NIR Director General Andrew Peters noted that this collaboration will improve the collection of civil servants’ data, helping to identify those committing fraud.
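The fraud the verification system targets takes two main forms: “ghost” employees whose National Identification Numbers do not exist in the registry, and duplicate NINs drawing multiple salaries. A minimal sketch of such a payroll cross-check is below; the data shapes and function name are hypothetical, not the NIR/CSA platform’s actual interface.

```python
def find_payroll_anomalies(payroll, registry):
    """Cross-check a payroll list against a national ID registry.

    Returns the names of 'ghost' employees (NINs absent from the
    registry) and pairs of records sharing a NIN (one identity
    drawing more than one salary).
    """
    seen = {}           # NIN -> first name encountered
    ghosts, duplicates = [], []
    for record in payroll:
        nin = record["nin"]
        if nin not in registry:
            ghosts.append(record["name"])
        if nin in seen:
            duplicates.append((seen[nin], record["name"]))
        else:
            seen[nin] = record["name"]
    return ghosts, duplicates

registry = {"NIN-001", "NIN-002"}          # toy registry of valid NINs
payroll = [
    {"name": "A. Doe", "nin": "NIN-001"},
    {"name": "B. Roe", "nin": "NIN-999"},  # ghost: NIN not in registry
    {"name": "C. Poe", "nin": "NIN-001"},  # duplicate of A. Doe's NIN
]

ghosts, dups = find_payroll_anomalies(payroll, registry)
assert ghosts == ["B. Roe"]
assert dups == [("A. Doe", "C. Poe")]
```

Linking every salary payment to a registry-backed biometric identity makes both anomaly types detectable in a single pass over the payroll.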
Additionally, the government is working to expand ID coverage among citizens by launching a mass biometric enrollment exercise this month, according to Biometric Update.
Why does it matter?
The initiative comes after CSA Director General Josiah Joekai reported uncovering significant fraud and discrepancies in various government spending entities. At a press briefing, Joekai stressed that these issues have led to an average monthly wage expenditure of over $23.5 million and caused the past administration to spend $6.1 million on consulting services last year. Regular audits revealed fraudulent payments, ghost employees, and other financial mismanagement.