EU prepares hefty fine for Meta’s Marketplace practices

Meta Platforms is facing its first EU antitrust fine for tying its Marketplace service to Facebook. The European Commission is expected to issue the fine within a few weeks, having charged the company more than a year and a half ago with giving its classified ads service an unfair advantage by bundling it with the social network.

The Commission also alleges that Meta abused its dominance by imposing unfair trading conditions on rival classified ad services that advertise on Facebook and Instagram. The potential fine could reach as much as $13.4 billion, or 10% of Meta’s 2023 global revenue, although penalties of that magnitude are rarely imposed.

A decision is likely to come in September or October, before EU antitrust chief Margrethe Vestager leaves office in November. Meta has reiterated that the Commission’s allegations are baseless and that its product innovation is pro-consumer and pro-competitive.

In a separate development, the Commission has charged Meta with failing to comply with the EU’s new tech rules, the Digital Markets Act, over its ‘pay or consent’ advertising model launched last November. Meta’s earlier attempt to settle the Marketplace investigation by limiting its use of competitors’ advertising data was rejected by the EU but accepted by the UK regulator.

Grindr limits location features to protect Olympic athletes

Grindr, the LGBTQ+ dating app, has deactivated some of its location-sharing features during the Olympics in Paris to protect athletes from harassment or prosecution. The ‘Explore’ feature, which allows users to change their location and view profiles, has been turned off in the Olympic Village to prevent athletes from being outed by curious onlookers. The move aims to safeguard athletes, particularly those from countries with strict anti-LGBTQ+ laws.

Approximately 155 LGBTQ+ athletes are attending the Paris Olympics, a small fraction of the over 10,000 participants. Grindr has also turned off the ‘show distance’ feature by default in the Village, allowing athletes to connect without revealing their whereabouts. Additional temporary measures include free unlimited disappearing messages and the ability to unsend messages, while private video sharing and screenshot functions have been turned off within the Village radius.

These changes follow a precedent set after the 2016 Rio Olympics, when a journalist’s report on using Grindr to meet athletes drew accusations of outing gay competitors. Grindr’s adjustments aim to preserve athletes’ privacy and safety while still allowing them to connect during the games. Meanwhile, the company is expanding its services to promote long-term relationships and in-person events, and its stock has seen significant growth this year.

OpenAI CEO emphasises democratic control in the future of AI

Sam Altman, co-founder and CEO of OpenAI, raises a critical question: ‘Who will control the future of AI?’ He frames it as a choice between a democratic vision, led by the US and its allies to disseminate AI’s benefits widely, and an authoritarian one, led by nations such as Russia and China that aim to consolidate power through AI. Altman underscores the urgency of this decision, given the rapid pace of AI development and the high stakes involved.

Altman warns that while the United States currently leads in AI development, this advantage is precarious due to substantial investments by authoritarian governments. He highlights the risks if these regimes take the lead, such as restricted AI benefits, enhanced surveillance, and advanced cyber weapons. To prevent this, Altman proposes a four-pronged strategy – robust security measures to protect intellectual property, significant investments in physical and human infrastructure, a coherent commercial diplomacy policy, and establishing international norms and safety protocols.

He calls for close collaboration between the US government and the private sector to implement these measures swiftly, arguing that proactive efforts today in security, infrastructure, talent development, and global governance can secure a competitive advantage and broad societal benefits. Ultimately, Altman advocates a democratic vision for AI, underpinned by strategic, timely, and globally inclusive action to maximise the technology’s benefits while minimising its risks.

European Parliament forms joint working group to monitor AI Act implementation

Two European Parliament committees have formed a joint working group to oversee the implementation of the AI Act, according to sources familiar with the matter. The committees involved, Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE), are concerned about the transparency of the AI Office’s staffing and the role of civil society in the implementation process.

The European Commission’s AI Office is responsible for coordinating the implementation of the AI Act, which enters into force on 1 August. The Act prohibits certain AI applications, such as real-time biometric identification; those prohibitions become enforceable six months later. The Act applies in full two years after its entry into force, by which time the Commission must clarify key provisions.

Traditionally, the European Parliament has had a limited role in regulatory implementation, but MEPs focused on tech policy are pushing for greater involvement, especially with recent digital regulations. The Parliament already monitors the implementation of the Digital Services and Digital Markets Acts, aiming to ensure effective oversight and transparency in these critical areas.

China cracks down on unauthorised ChatGPT access

The Cyberspace Administration of China (CAC), China’s internet regulator, has publicly identified and named agents facilitating local access to ChatGPT. The crackdown comes against the backdrop of OpenAI’s decision to restrict access to its API in ‘unsupported countries and territories’, including mainland China, Hong Kong, and Macau.

Alongside the CAC, other local authorities have penalised several website operators this year for providing unauthorised access to generative AI services such as ChatGPT. The measures reflect the CAC’s commitment to enforcing China’s AI regulations, which mandate rigorous screening and registration of all AI services before they can be made publicly available. Even under these stringent rules, some developers and businesses have sidestepped the restrictions by using virtual private networks.

Why does this matter?

Despite Beijing’s ambition to lead the global AI race, it strictly requires GenAI providers to uphold core socialist values and to avoid generating content that threatens national security or the socialist system. As of January, about 117 GenAI products had been registered with the CAC, and 14 large language models and enterprise applications had received formal approval for commercial use.

Meta oversight board calls for clearer rules on AI-generated pornography

Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people as ‘not sufficiently clear’. The finding follows the board’s review of two pornographic deepfakes of famous women posted on Facebook and Instagram. The board found that both images violated Meta’s policy against ‘derogatory sexualised photoshop’, which the company treats as bullying and harassment, and that both should have been promptly removed.

In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, leading to an automatic ticket closure. The image was only removed after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI. It criticised the company for not adding the Indian woman’s image to a database for automatic removals.

Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.

US Senate passes bill to combat AI deepfakes

The US Senate has unanimously passed the DEFIANCE Act, which allows victims of non-consensual intimate images created by AI, known as deepfakes, to sue their creators for damages. The bill enables victims to pursue civil remedies against those who produced or distributed sexually explicit deepfakes with malicious intent. Identifiable victims can receive up to $150,000 in damages, rising to $250,000 in cases linked to sexual assault, stalking, or harassment.

The legislative move follows high-profile incidents, such as AI-generated explicit images of Taylor Swift appearing on social media and similar cases affecting high school girls across the country. Senate Majority Leader Chuck Schumer emphasised the widespread impact of malicious deepfakes, highlighting the urgent need for protective measures.

Schumer described the DEFIANCE Act as part of broader efforts to implement AI safeguards that prevent significant harm. He called on the House, where a companion bill awaits consideration, to pass the measure, assuring victims that the government is committed to addressing the issue and protecting individuals from abuses of AI technology.

English school reprimanded for facial recognition misuse

Chelmer Valley High School in Essex, United Kingdom, has been formally reprimanded by the Information Commissioner’s Office (ICO), the UK’s data protection regulator, for using facial recognition technology without obtaining proper consent from students. The school began using the technology for cashless lunch payments in March 2023 but failed to carry out the required data protection impact assessment beforehand. It also relied on an opt-out system for consent, contrary to UK GDPR, which requires clear affirmative action.

The incident has reignited the debate over the use of biometric data in schools. The ICO’s action echoes a similar situation from 2021, when schools in Scotland faced scrutiny for using facial recognition for lunch payments. Sweden was the first to issue a GDPR fine for using facial recognition in a school in 2019, highlighting the growing global concern over privacy and biometric data in educational settings.

Mark Johnson of Big Brother Watch criticised the use of facial recognition, arguing that children should not be treated like ‘walking bar-codes’ and should instead be taught to protect their personal data. The ICO chose to issue a public reprimand rather than a fine, noting that this was the school’s first offence and that public institutions warrant a different approach from private companies.

The ICO stressed the importance of proper data handling, especially in environments involving children, and urged organisations to prioritise data protection when introducing new technologies. Lynne Currie of the ICO emphasised the need for schools to comply with data protection laws to maintain trust and safeguard children’s privacy rights.

Meta removes 63,000 Nigerian Instagram accounts for sextortion scams

Meta Platforms announced on Wednesday that it had removed approximately 63,000 Instagram accounts in Nigeria involved in financial sexual extortion scams, primarily targeting adult men in the United States. These Nigerian fraudsters, often called ‘Yahoo boys,’ are infamous for various scams, including posing as individuals in financial distress or as Nigerian princes.

In addition to the Instagram accounts, Meta also took down 7,200 Facebook accounts, pages, and groups that provided tips on how to scam people. Among the removed accounts, around 2,500 were part of a coordinated network linked to about 20 individuals. These scammers used fake accounts to conceal their identities and engage in sextortion, threatening victims with the release of compromising photos unless they paid a ransom.

Meta’s investigation revealed that most of the scammers’ attempts were unsuccessful. While adult men were the primary targets, there were also attempts against minors, which Meta reported to the National Center for Missing & Exploited Children in the US. The company has employed new technical measures to identify and combat sextortion activity.

Online scams have increased in Nigeria, where economic hardships have led many to engage in fraudulent activities from various settings, including university dormitories and affluent neighbourhoods. Meta noted that some of the removed accounts were not only participating in scams but also sharing guides, scripts, and photos to assist others in creating fake accounts for similar fraudulent purposes.

Hackers leak documents of Pentagon’s IT service provider

According to Bloomberg News, hackers have leaked internal documents from Leidos Holdings Inc., a major IT services provider to the US government. Leidos recently discovered the issue and suspects the documents were stolen during an earlier breach of a Diligent Corp. system it used. The matter is under investigation.

The Virginia-based company, which primarily serves the US Department of Defense, used Diligent’s system to hold data from internal investigations. A Leidos spokesperson stated, ‘We have confirmed that this stems from a previous incident affecting a third-party vendor for which all necessary notifications were made in 2023’. Leidos further emphasised that its network and sensitive customer data were not compromised.

Meanwhile, a Diligent spokesperson explained that a 2022 hack of its subsidiary, Steele Compliance Solutions, led to the leak. At the time, fewer than 15 customers, including Leidos, were using the affected product. Impacted customers were promptly notified, and corrective actions were taken to address the breach.