EU faces controversy over proposed AI scanning law

The EU is facing significant controversy over a proposed law that would require AI scanning of users’ photos and videos on messaging apps to detect child sexual abuse material (CSAM). Critics, including major tech companies like WhatsApp and Signal, argue that this law threatens privacy and encryption, undermining fundamental rights. They also warn that the AI detection systems could produce numerous false positives, overwhelming law enforcement.
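To see why critics expect a flood of false positives, a rough base-rate calculation helps. The sketch below is purely illustrative; the message volume, prevalence, and detector accuracy figures are assumptions chosen for the example, not numbers from the proposal or from any vendor.

```python
# Illustrative base-rate sketch: even a highly accurate scanner, applied at
# EU messaging scale, flags far more innocent items than genuine ones.
# All numbers below are assumptions for illustration only.

daily_images = 1_000_000_000      # assumed photos/videos shared per day on scanned apps
prevalence = 1e-6                  # assumed fraction of items that are actually CSAM
sensitivity = 0.99                 # assumed true-positive rate of the detector
false_positive_rate = 0.001        # assumed false-positive rate (99.9% specificity)

true_hits = daily_images * prevalence * sensitivity
false_alarms = daily_images * (1 - prevalence) * false_positive_rate
precision = true_hits / (true_hits + false_alarms)

print(f"True detections per day: {true_hits:,.0f}")
print(f"False alarms per day:    {false_alarms:,.0f}")
print(f"Share of flags that are genuine: {precision:.2%}")
```

Under these assumed figures, roughly a thousand genuine detections would be buried under about a million false alarms every day, which is the kind of review burden critics say would overwhelm law enforcement.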

A recent meeting among the EU member states’ representatives failed to reach a consensus on the proposal, leading to further delays. The Belgian presidency had hoped to finalise a negotiating mandate, but disagreements among member states prevented progress. The ongoing division means that discussions on the proposal will likely continue under Hungary’s upcoming EU Council presidency.

Opponents of the proposal, including Signal President Meredith Whittaker and Proton founder Andy Yen, emphasise the dangers of mass surveillance and the need for more targeted approaches to child protection. Despite the current setback, there’s concern that efforts to push the law forward will persist, necessitating continued vigilance from privacy advocates.

Ukrainian student’s identity misused by AI on Chinese social media platforms

Olga Loiek, a 21-year-old University of Pennsylvania student from Ukraine, made a disturbing discovery after launching her YouTube channel last November: her image had been hijacked and manipulated with AI to create digital alter egos on Chinese social media platforms. These AI-generated avatars, such as 'Natasha', posed as Russian women fluent in Chinese, promoting pro-Russian sentiments and selling products like Russian candies. The fake accounts amassed hundreds of thousands of followers in China, far surpassing Loiek's own online presence.

Loiek’s experience highlights a broader trend of AI-generated personas on Chinese social media, presenting themselves as supportive of Russia and fluent in Chinese while selling various products. Experts reveal that these avatars often use clips of real women without their knowledge, aiming to appeal to single Chinese men. Some posts include disclaimers about AI involvement, but the followers and sales figures remain significant.

Why does it matter?

These events underscore the ethical and legal concerns surrounding AI’s misuse. As generative AI systems like ChatGPT become more widespread, issues related to misinformation, fake news, and copyright violations are growing.

In response, governments are starting to regulate the industry. China proposed guidelines to standardise AI by 2026, while the EU’s new AI Act imposes strict transparency requirements. However, experts like Xin Dai from Peking University warn that regulations struggle to keep pace with rapid AI advancements, raising concerns about the unchecked proliferation of AI-generated content worldwide.

Worldcoin allowed to resume operations in Kenya after year-long probe

Worldcoin, a cryptocurrency startup co-founded by OpenAI's Sam Altman, has been permitted to resume its iris-scanning operations in Kenya following the conclusion of a year-long investigation into privacy and regulatory concerns. The Kenyan Directorate of Criminal Investigations (DCI) officially closed its probe, stating that no further police action was necessary. However, Worldcoin must now register its business in Kenya, secure the requisite licences, and vet its vendors to continue operating.

Worldcoin's activities had been suspended nearly a year earlier over compliance issues with Kenyan security, financial services, and data protection laws. A parliamentary committee had recommended shutting down the company altogether, citing violations of the Computer Misuse and Cybercrimes Act and labelling its activities as potential espionage. Investigators also found that Worldcoin and its parent entity, Tools for Humanity, were unregistered in Kenya and had not received approval to use the Orbs, which are considered telecommunications equipment.

Thomas Scott, Chief Legal Officer of Tools for Humanity, expressed gratitude for the fair investigation and described the outcome as merely a new beginning. He highlighted the company's commitment to working with Kenyan authorities to advance Worldcoin's mission and create economic opportunities. While Worldcoin has resolved its immediate regulatory hurdles in Kenya, it continues to face significant scrutiny elsewhere, including ongoing investigations in Germany, Spain, Portugal, and Italy.

The situation has highlighted challenges in regulating new technologies, particularly around privacy and compliance. In response, Kenya is developing a regulatory framework for virtual assets, aiming to provide clearer guidelines for crypto startups like Worldcoin. The outcome could pave the way for more structured compliance pathways amid the rapid advancements in digital finance and identity systems.

UK parliamentary candidate introduces AI lawmaker concept

In a bold move highlighting the intersection of technology and politics, businessman Steve Endacott is running in Britain's 4 July national election, aiming to become a member of parliament (MP) with the aid of an AI-generated avatar. Endacott's campaign leaflet features not his own face but that of an AI avatar dubbed 'AI Steve'. If successful, the initiative would make him the world's first AI-assisted lawmaker.

Endacott, founder of Neural Voice, presented his AI avatar to the public in Brighton, engaging with locals on various issues through real-time interactions. The AI discusses topics such as LGBTQ rights, housing, and immigration, offers policy ideas, and seeks feedback from citizens. Endacott aims to demonstrate how AI can enhance voters' access to their representatives, advocating for a reformed democratic process in which people are more closely connected to their MPs.

Despite scepticism about the effectiveness and trustworthiness of an AI MP, Endacott insists that the AI will serve as a co-pilot, formulating policies that are reviewed by a group of validators to ensure security and integrity. The Electoral Commission clarified that the elected candidate, not the AI, would remain the official MP. While public opinion is mixed, the campaign underscores the growing role of AI across sectors and sparks an important conversation about its potential in politics.

AI chatbot’s mayoral bid halted by legal and ethical concerns in Wyoming

Victor Miller, 42, has stirred controversy by filing to run for mayor of Cheyenne, Wyoming, using a customised AI chatbot named VIC (Virtual Integrated Citizen). Miller argued that VIC, powered by OpenAI technology, could effectively make political decisions and govern the city. However, OpenAI quickly shut down Miller's access to its tools for violating policies against AI use in political campaigning.

The emergence of AI in politics underscores ongoing debates about its responsible use as technology outpaces legal and regulatory frameworks. Wyoming Secretary of State Chuck Gray clarified that state law requires candidates to be ‘qualified electors,’ meaning VIC, as an AI bot, does not meet the criteria. Despite this setback, Miller intends to continue promoting VIC’s capabilities using his own ChatGPT account.

Meanwhile, similar AI-driven campaigns have surfaced globally, including in the UK, where another candidate utilises AI models for parliamentary campaigning. Critics, including experts like Jen Golbeck from the University of Maryland, caution that while AI can support decision-making and manage administrative tasks, ultimate governance decisions should remain human-led. Despite the attention these AI candidates attract, observers like David Karpf from George Washington University dismiss them as gimmicks, highlighting the serious nature of elections and the need for informed human leadership.

Miller remains optimistic about the potential for AI candidates to influence politics worldwide. Still, the current consensus suggests that AI’s role in governance should be limited to supportive functions rather than decision-making responsibilities.

New social network app blends AI personas with user interactions

Butterflies, a new social network where humans and AI interact, has launched publicly on iOS and Android after five months in beta. Founded by former Snap engineering manager Vu Tran, the app allows users to create AI personas, called Butterflies, that post, comment, and message like real users. Each Butterfly has its own backstory, opinions, and emotions, making interactions richer than those with typical AI chatbots.

Tran developed Butterflies to provide a more creative and substantial AI experience. Unlike other AI chatbots from companies like Meta and Snap, Butterflies aims to integrate AI personas into a traditional social media feed, where AI and human users can engage with each other’s content. The app’s beta phase attracted tens of thousands of users, with some spending hours creating and interacting with hundreds of AI personas.

Butterflies’ unique approach has led to diverse user interactions, from creating alternate universe personas to role-playing in popular fictional settings. Vu Tran believes the app offers a wholesome way to interact with AI, helping people form connections that might be difficult in traditional social settings due to social anxiety or other barriers.

The app is free for now, though Butterflies may introduce a subscription model and brand interactions in the future. Backed by a $4.8 million seed round led by Coatue and other investors, Butterflies aims to expand its functionality and continue offering a novel way for users to explore AI and social interaction.

Australia to trial age verification technologies for online safety

The Australian Department of Communications is set to begin a trial of age verification technologies to ensure that age-restricted online content is accessible only to users of an appropriate age. The initiative aims to protect minors from harmful material.

The trial will focus on verifying users’ ages on platforms such as gambling sites, adult games, entertainment, and possibly social media. The Department will manage the trial’s logistics, while an independent third-party expert will evaluate the technology’s effectiveness. The selection process for this expert will begin next month, inviting proposals from qualified organisations and individuals.

Participation in the trial is voluntary for digital platform companies, but the Department encourages major tech firms to join, given their obligations under the Online Safety Act, which is currently under review. The trial will explore various age verification methods, including biometric age estimation, ID document verification, and AI-driven age inference.

Why does it matter?

In line with these efforts to address harmful content, the Australian Government has strengthened the Basic Online Safety Expectations (BOSE) determination, requiring online service providers, including social media platforms, to adhere to higher online safety standards. Previously, the Albanese government decided against a mandatory age verification system for online pornography and adult content, citing the immaturity of current technologies, according to Biometric Update.

Liberia to issue biometric ID cards to civil servants to combat payroll fraud

The National Identification Registry (NIR) and the Civil Service Agency (CSA) in Liberia have partnered to issue biometric ID cards to civil servants to combat financial fraud in the public sector. The agreement will provide ID cards to employees in 103 government agencies to reduce payroll fraud and prevent identity duplication. CSA Director General Hon. Josiah F. Joekai stressed that this initiative will improve the verification process for public servants, which, in turn, is expected to enhance service delivery.

The Memorandum of Understanding (MoU) establishes a comprehensive biometric verification system, allowing the CSA ongoing access to the NIR’s e-verification platform to ensure all Government of Liberia employees’ National Identification Numbers are included on their ID cards. NIR Director General Andrew Peters noted that this collaboration will improve the collection of civil servants’ data, helping to identify those committing fraud.

Additionally, the government is working to expand ID coverage among citizens by launching a mass biometric enrollment exercise this month, according to Biometric Update.

Why does it matter? 

The initiative comes after CSA Director General Josiah Joekai reported uncovering significant fraud and discrepancies across government spending entities. At a press briefing, Joekai stressed that these issues have driven the average monthly wage bill to over $23.5 million and led the previous administration to spend $6.1 million on consulting services last year. Regular audits revealed fraudulent payments, ghost employees, and other financial mismanagement.

Zambia reaches key milestone in digital ID transformation

Zambia has taken a step forward in modernising its legal and digital identity system by digitising the records of around 7 million people. This milestone is part of an effort across African nations to enhance their digital public infrastructure (DPI) and ID systems. Government initiatives were presented during the ID4Africa annual event, focusing on DPI, held in Cape Town, South Africa.

A senior technical advisor for the World Bank reported on LinkedIn that Zambia had digitised 81 percent of its paper ID cards in three months. The digitisation is due to be completed by July and is expected to reduce enrollment time and costs, simplify identity verification, and strengthen the biometric database. Zambia has also collected biometric records for 1.3 million people, despite delays caused by a severe drought.

Why does it matter?

Among other African nations, Namibia and Tanzania are also expanding their DPI and broadening the use of their national IDs across more sectors, though at a different pace than Zambia. According to Etienne Maritz, Executive Director of the Ministry of Home Affairs of Namibia, legal identity enables inclusive development and access to financial services. Since February, Namibia's national registration campaign has issued ID documents to 38,000 people. In Tanzania, the government is integrating its digital ID and civil registration systems to improve governance, a process that involves merging the responsible government bodies.

Bermuda halts facial recognition plans amid privacy concerns and project delays

Bermuda has halted plans to add facial recognition to its CCTV system due to “practical challenges,” the National Security Ministry announced. As reported by BiometricUpdate.com, this follows criticism from rights groups and the political opposition, who raised concerns about privacy and constitutional issues of the public surveillance project.

The Bermuda Human Rights Commission is currently investigating the technology’s implications in line with UN directives. In addition, the Free Democratic Movement, a new political party, criticised the camera system for potentially infringing on freedom of association and constituting unlawful searches.

Despite these concerns, Minister of National Security Michael Weeks and Police Commissioner Darrin Simons assured the public that privacy would not be compromised. However, the project's implementation may be delayed, with only 60 of the 247 cameras operational as of April due to heavy rains and a shortage of asphalt. The Bermuda Safe City project aims for completion by July 2024.

Why does it matter?

Recently, the Royal Gazette inquired about the accuracy and type of software in Bermuda's new CCTV system, especially concerning identification errors. The inquiry follows reports from around the world of racially biased errors in facial recognition technology, with error rates of up to 35% for Black women. Bermuda's police have used cameras for decades, but the new system promises enhanced tracking and recognition capabilities. Despite police assurances, studies and incidents, including a lawsuit against Macy's and a wrongful arrest in Detroit, have revealed significant bias in the technology.