English school reprimanded for facial recognition misuse

Chelmer Valley High School in Essex, United Kingdom, has been formally reprimanded by the UK’s data protection regulator, the Information Commissioner’s Office (ICO), for using facial recognition technology without obtaining proper consent from students. The school began using the technology for cashless lunch payments in March 2023 but failed to carry out the required data protection impact assessment before implementation. Additionally, the school relied on an opt-out system for consent, contrary to UK GDPR rules, which require consent to be given through a clear affirmative action.

The incident has reignited the debate over the use of biometric data in schools. The ICO’s action echoes a similar case from 2021, when schools in Scotland faced scrutiny for using facial recognition for lunch payments. In 2019, Sweden became the first country to issue a GDPR fine over the use of facial recognition in a school, underlining the growing global concern about privacy and biometric data in educational settings.

Mark Johnson of Big Brother Watch criticised the use of facial recognition, arguing that children should not be treated like ‘walking bar-codes’ and should instead be taught to protect their personal data. The ICO chose to issue a public reprimand rather than a fine, noting that this was the school’s first offence and that public institutions warrant a different approach from private companies.

The ICO stressed the importance of proper data handling, especially in environments involving children, and urged organisations to prioritise data protection when introducing new technologies. Lynne Currie of the ICO emphasised the need for schools to comply with data protection laws to maintain trust and safeguard children’s privacy rights.

Meta removes 63,000 Nigerian Instagram accounts for sextortion scams

Meta Platforms announced on Wednesday that it had removed approximately 63,000 Instagram accounts in Nigeria involved in financial sexual extortion scams, primarily targeting adult men in the United States. The Nigerian fraudsters behind such scams, often called ‘Yahoo boys’, are infamous for schemes ranging from posing as individuals in financial distress to impersonating Nigerian princes.

In addition to the Instagram accounts, Meta also took down 7,200 Facebook accounts, pages, and groups that provided tips on how to scam people. Among the removed accounts, around 2,500 were part of a coordinated network linked to about 20 individuals. These scammers used fake accounts to conceal their identities and engage in sextortion, threatening victims with the release of compromising photos unless they paid a ransom.

Meta’s investigation revealed that most of the scammers’ attempts were unsuccessful. While adult men were the primary targets, there were also attempts against minors, which Meta reported to the National Center for Missing & Exploited Children (NCMEC) in the US. The company employed new technical measures to identify and combat sextortion activity.

Online scams have proliferated in Nigeria, where economic hardship has driven many into fraudulent activity, carried out in settings ranging from university dormitories to affluent neighbourhoods. Meta noted that some of the removed accounts were not only participating in scams but also sharing guides, scripts, and photos to help others create fake accounts for the same fraudulent purposes.

Hackers leak documents of Pentagon’s IT service provider

According to Bloomberg News, hackers have leaked internal documents from Leidos Holdings Inc., a major IT services provider to the US government. Leidos recently discovered the issue and suspects the documents were stolen during an earlier breach of a Diligent Corp. system it used. The matter is under investigation.

The Virginia-based company, which primarily serves the US Department of Defense, used Diligent’s system to hold data from internal investigations. A Leidos spokesperson stated: ‘We have confirmed that this stems from a previous incident affecting a third-party vendor for which all necessary notifications were made in 2023’. Leidos further emphasised that its network and sensitive customer data were not compromised.

Meanwhile, a Diligent spokesperson explained that a 2022 hack of its subsidiary, Steele Compliance Solutions, led to the leak. At the time, fewer than 15 customers, including Leidos, were using the affected product. Impacted customers were promptly notified, and corrective actions were taken to address the breach.

Social media platforms asked to tackle cybercrimes in Malaysia

Malaysia is urging social media platforms to strengthen their efforts in combating cybercrimes, including scams, cyberbullying, and child pornography. The government has seen a significant rise in harmful online content and has called on companies like Meta and TikTok to enhance their monitoring and enforcement practices.

In the first quarter of 2024 alone, Malaysia reported 51,638 cases of harmful content referred to social media platforms, surpassing the 42,904 cases from the entire previous year. Communications Minister Fahmi Fadzil noted that some platforms are more cooperative than others, with Meta showing the highest compliance rates—85% for Facebook, 88% for Instagram, and 79% for WhatsApp. TikTok followed with a 76% compliance rate, while Telegram and X had lower rates.

The government has directed social media firms to address these issues more effectively, though it remains up to the platforms to remove content that violates their community guidelines. Malaysia’s communications regulator continues to flag problematic content to these firms in an effort to curb harmful online activity.

FTC investigates AI-powered pricing practices

The US Federal Trade Commission (FTC) announced a probe into eight companies offering AI-powered ‘surveillance pricing’ services, to evaluate the practice’s impact on privacy, competition, and consumer protection. The companies under scrutiny are Mastercard, JPMorgan Chase, Revionics, Bloomreach, Task Software, PROS, Accenture, and McKinsey & Co. These firms use AI to adjust prices based on consumer behaviour, location, and personal data, potentially resulting in different customers being charged different prices for the same product.

The FTC’s investigation aims to uncover the types of surveillance pricing services developed by these companies and their current applications. The agency seeks to understand how these AI-driven pricing models affect consumer pricing and whether they exploit personal data to charge higher prices. FTC Chair Lina M. Khan emphasised the risks to privacy and the potential exploitation of personal data in her statement, highlighting the need for transparency in how businesses use consumer information.

This inquiry reflects growing concern about the use of AI and other technologies to set personalised prices based on detailed consumer data. The FTC’s actions aim to shed light on these practices and ensure consumer protection in an increasingly data-driven market.
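
To make the practice concrete, the sketch below shows one hypothetical form a personalised-pricing rule could take. Every signal, field name, and weight is invented for illustration and does not describe any model used by the companies under investigation.

```python
# Hypothetical illustration of 'surveillance pricing': a base price adjusted
# using personal signals. Every field name, signal, and weight below is
# invented for the example and does not describe any company's actual model.
from dataclasses import dataclass


@dataclass
class ShopperProfile:
    area_income_index: float   # 1.0 = national median income for the area
    recent_views_of_item: int  # browsing-behaviour signal
    on_new_device: bool        # sometimes read as higher willingness to pay


def personalised_price(base_price: float, profile: ShopperProfile) -> float:
    """Return a per-shopper price derived from personal data (illustrative)."""
    price = base_price
    # Location signal: charge more in higher-income areas.
    price *= 1.0 + 0.10 * max(0.0, profile.area_income_index - 1.0)
    # Behaviour signal: repeated views of the item suggest strong intent.
    if profile.recent_views_of_item >= 3:
        price *= 1.05
    # Device signal: a brand-new device nudges the price up slightly.
    if profile.on_new_device:
        price *= 1.02
    return round(price, 2)


# Two shoppers see different prices for the same product:
print(personalised_price(100.0, ShopperProfile(1.4, 5, True)))   # 111.38
print(personalised_price(100.0, ShopperProfile(0.9, 0, False)))  # 100.0
```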

Queensland premier criticises AI use in political advertising

The premier of the Australian state of Queensland, Steven Miles, has condemned an AI-generated video created by the LNP opposition, calling it a ‘turning point for our democracy’. The TikTok video depicts Miles dancing under text about rising living costs and is clearly marked as AI-generated. Miles has stated that the state Labor party will not use AI-generated advertisements in the upcoming election campaign.

Miles expressed concern about the potential dangers of AI in political communication, noting that fabricated videos are more likely to be believed than doctored photos. Yet despite rejecting AI for Labor’s own content, Miles dismissed the need for truth-in-advertising laws, asserting that Labor has no intention of creating deepfake videos.

The LNP defended their use of AI, emphasising that the video was clearly labelled and aimed at highlighting issues like higher rents and increased power prices under Labor. The Electoral Commission of Queensland noted that while the state’s electoral act does not specifically address AI, any false statements about a candidate’s character can be prosecuted.

Experts, including communications lecturer Susan Grantham and QUT’s Patrik Wikstrom, have warned about the broader implications of AI in politics. Grantham pointed out that politicians who already use AI for light-hearted content are at greater risk of being targeted. Wikstrom stressed that the real issue is political communication designed to deceive, echoing concerns raised by a UK elections watchdog about AI deepfakes undermining elections. Australia is also planning tougher laws focused on deepfakes.

US, EU, UK pledge to protect generative AI market fairness

Top competition authorities from the EU, UK, and US have issued a joint statement emphasising the importance of fair, open, and competitive markets in developing and deploying generative AI. Leaders from these regions, including Margrethe Vestager of the European Commission, Sarah Cardell of the UK Competition and Markets Authority, Jonathan Kanter of the US Department of Justice, and Lina M. Khan of the US Federal Trade Commission, highlighted their commitment to ensuring effective competition and protecting consumers and businesses from potential market abuses.

The officials recognise the transformational potential of AI technologies but stress the need to safeguard against risks that could undermine fair competition. These risks include the concentration of control over essential AI development inputs, such as specialised chips and vast amounts of data, and the possibility of large firms using their existing market power to entrench or extend their dominance in AI-related markets. The statement also warns against partnerships and investments that could stifle competition by allowing major firms to co-opt competitive threats.

The joint statement outlines several principles for protecting competition within the AI ecosystem, including fair dealing, interoperability, and maintaining choices for consumers and businesses. The authorities are particularly vigilant about the potential for AI to facilitate anti-competitive behaviours, such as price fixing or unfair exclusion. Additionally, they underscore the importance of consumer protection, ensuring that AI applications do not compromise privacy, security, or autonomy through deceptive or unfair practices.

Nigeria imposes $220 million fine on Meta for data protection violations

Nigeria’s Federal Competition and Consumer Protection Commission (FCCPC) has imposed a $220 million fine on Meta Platforms Inc., the parent company of Facebook, for ‘multiple and repeated’ breaches of local consumer data protection laws.

The FCCPC’s investigation into Meta was opened following complaints from Nigerian consumers about the mishandling of personal data and spanned 38 months. It found that Meta had failed to comply with several provisions of Nigeria’s data protection rules, including obtaining proper consent from users before collecting their data and ensuring the security of the information gathered, in direct violation of the Nigeria Data Protection Regulation (NDPR). The NDPR, enacted in 2019, mandates that organisations seek explicit consent from individuals before collecting their personal information, with the aim of safeguarding citizens’ privacy.

The fine is one of the largest penalties imposed by an African regulator on a global tech company. It signals a growing trend among nations to assert digital sovereignty and enforce stringent data protection measures. The action against Meta is expected to have far-reaching implications, prompting other multinational companies to reassess their data practices in Nigeria and potentially other African markets.

Why does this matter?

The company has faced similar regulatory challenges worldwide, including a $5 billion fine by the US Federal Trade Commission in 2019 for privacy violations, a €265 million fine by the Irish Data Protection Commission in 2022 for breaches of the EU’s General Data Protection Regulation (GDPR), and a $37 million fine imposed by a national competition authority.

This development highlights the growing regulatory pressure on technology companies to prioritise data protection. As digital services expand globally, the enforcement of stringent data privacy laws is becoming more critical. For Nigeria, the fine against Meta signals the country’s commitment to holding multinational companies accountable and protecting the rights of its citizens in the digital landscape.

Google reverses plan to drop third-party cookies

Google announced on Monday that it will continue to support third-party cookies in its Chrome browser, reversing its previous plans to phase out the tracking technology. The decision comes amid concerns from advertisers, Google’s primary revenue source, who feared the loss of cookies would hinder their ability to collect data for personalised ads and increase their reliance on Google’s user databases. The UK’s Competition and Markets Authority had also scrutinised the initial plan, worried about its potential impact on competition in digital advertising.

In a blog post, Anthony Chavez, vice president of Google’s Privacy Sandbox initiative, stated that instead of eliminating third-party cookies, Google will introduce a new feature in Chrome that lets users make informed choices about their web browsing privacy. The initiative, under way since 2019, aims to balance enhancing online privacy with supporting digital businesses; cookies remain a key mechanism for identifying web users and tracking their browsing habits.

The use of cookies is regulated by laws such as the EU’s General Data Protection Regulation (GDPR), which requires explicit user consent for storing cookies. Major browsers also offer options to delete cookies. Chavez mentioned that Google is collaborating with regulators, publishers, and privacy groups to develop this new approach while continuing to invest in the Privacy Sandbox program.
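
For readers unfamiliar with the mechanics at stake, the snippet below is a minimal sketch of how a third-party tracking cookie is assembled, using only Python’s standard library; the domain, cookie name, and identifier are invented for illustration.

```python
# Minimal sketch of how a third-party tracking cookie is assembled, using
# only Python's standard library. The domain, cookie name, and identifier
# are invented for illustration.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie['uid'] = 'a1b2c3d4'                     # hypothetical tracking ID
cookie['uid']['domain'] = 'ads.example.com'    # set by a third-party ad host
cookie['uid']['path'] = '/'
cookie['uid']['max-age'] = 60 * 60 * 24 * 365  # persists for roughly a year
cookie['uid']['secure'] = True
# SameSite=None is what lets browsers send the cookie in cross-site
# (third-party) contexts; modern browsers require Secure alongside it.
cookie['uid']['samesite'] = 'None'

# Print the Set-Cookie header an ad server would emit. Every site embedding
# content from ads.example.com then triggers requests carrying this same
# cookie, which is what makes cross-site tracking of a user's browsing
# possible -- and what blocking 'third-party cookies' would cut off.
print(cookie.output())
```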

The announcement received mixed reactions. Evelyn Mitchell-Wolf, an analyst at eMarketer, noted that advertisers no longer need to prepare for a sudden shift away from third-party cookies. However, Lena Cohen from the Electronic Frontier Foundation criticised the decision, highlighting potential consumer harms and attributing Google’s stance to its advertising-driven business model.

AI tools create realistic child abuse images, says report

A report from the Internet Watch Foundation (IWF) has exposed a disturbing misuse of AI to generate deepfake child sexual abuse images based on real victims. While the tools used to create these images remain legal in the UK, the images themselves are illegal. The case of a victim, referred to as Olivia, exemplifies the issue. Abused between the ages of three and eight, Olivia was rescued in 2023, but dark web users are now employing AI tools to create new abusive images of her, with one model available for free download.

The IWF report also reveals an anonymous dark web page with links to AI models for 128 child abuse victims. Offenders are compiling collections of images of named victims, such as Olivia, and using them to fine-tune AI models to create new material. Additionally, the report mentions models that can generate abusive images of celebrity children. Analysts found that 90% of these AI-generated images are realistic enough to fall under the same laws as real child sexual abuse material, highlighting the severity of the problem.