EU prepares hefty fine for Meta’s Marketplace practices

Meta Platforms is facing its first EU antitrust fine for linking its Marketplace service with Facebook. The European Commission is expected to issue the fine within a few weeks, following an accusation over a year and a half ago that the company gave its classified ads service an unfair advantage by bundling it with Facebook.

Allegations include Meta abusing its dominance by imposing unfair trading conditions on competing classified ad services advertising on Facebook and Instagram. The potential fine could reach as much as $13.4 billion, or 10% of Meta’s 2023 global revenue, although such high fines are rarely imposed.

A decision is likely to come in September or October, before EU antitrust chief Margrethe Vestager leaves office in November. Meta has reiterated its stance, claiming the European Commission’s allegations are baseless and stating its product innovation is pro-consumer and pro-competitive.

In a separate development, Meta has been charged by the Commission with failing to comply with new tech rules because of its ‘pay or consent’ advertising model, launched last November. Efforts to settle the investigation by limiting the use of competitors’ advertising data for Marketplace were previously rejected by the EU but accepted by the UK regulator.

Meta oversight board calls for clearer rules on AI-generated pornography

Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people, stating they are ‘not sufficiently clear.’ That follows the board’s review of two pornographic deepfakes of famous women posted on Meta’s Facebook and Instagram platforms. The board found that both images violated Meta’s policy against ‘derogatory sexualised photoshop,’ which is considered bullying and harassment and should have been promptly removed.

In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, leading to an automatic ticket closure. The image was only removed after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI. It criticised the company for not adding the Indian woman’s image to a database for automatic removals.

Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.

Meta’s AI bots aim to support content creators

Meta CEO Mark Zuckerberg has proposed a vision where AI bots assist content creators with audience engagement, aiming to free up their time for more crucial tasks. In an interview with internet personality Rowan Cheung, Zuckerberg discussed how these AI bots could capture the personalities and business objectives of creators, allowing fans to interact with them as if they were the creators themselves.

Zuckerberg’s optimism aligns with many in the tech industry who believe AI can significantly enhance the impact of individuals and organizations. However, there are concerns about whether creators, whose audiences value authenticity, will embrace generative AI. Meta’s initial rollout of AI-powered bots earlier this year faced issues, including bots making false claims and providing misleading information, raising questions about the technology’s reliability.

Meta claims improvements with its latest AI model, Llama 3.1, but challenges such as hallucinations and planning errors persist. Zuckerberg acknowledges the need to address these concerns and build trust with users. Despite these hurdles, Meta continues to focus on integrating AI into its platforms while also pursuing its Metaverse ambitions and competing in the tech space.

Meta’s plans to introduce generative AI to its apps, dating back to 2023, along with its increased focus on AI amid its Metaverse ambitions, highlight the company’s broader strategic vision. However, convincing creators to rely on AI bots for fan interaction remains a significant challenge.

Meta removes 63,000 Nigerian Instagram accounts for sextortion scams

Meta Platforms announced on Wednesday that it had removed approximately 63,000 Instagram accounts in Nigeria involved in financial sexual extortion scams, primarily targeting adult men in the United States. These Nigerian fraudsters, often called ‘Yahoo boys,’ are infamous for various scams, including posing as individuals in financial distress or as Nigerian princes.

In addition to the Instagram accounts, Meta also took down 7,200 Facebook accounts, pages, and groups that provided tips on how to scam people. Among the removed accounts, around 2,500 were part of a coordinated network linked to about 20 individuals. These scammers used fake accounts to conceal their identities and engage in sextortion, threatening victims with the release of compromising photos unless they paid a ransom.

Meta’s investigation revealed that most of the scammers’ attempts were unsuccessful. While adult men were the primary targets, there were also attempts against minors, which Meta reported to the National Center for Missing &amp; Exploited Children in the US. The company employed new technical measures to identify and combat sextortion activities.

Online scams have increased in Nigeria, where economic hardships have led many to engage in fraudulent activities from various settings, including university dormitories and affluent neighbourhoods. Meta noted that some of the removed accounts were not only participating in scams but also sharing guides, scripts, and photos to assist others in creating fake accounts for similar fraudulent purposes.

Meta introduces largest Llama 3 AI model with enhanced language and math capabilities

Meta Platforms has unveiled its largest version of the Llama 3 AI model, boasting impressive multilingual capabilities and performance metrics that challenge paid models from competitors like OpenAI. The new model can converse in eight languages, write better computer code, and solve complex math problems thanks to its 405 billion parameters. That makes it significantly more powerful than its predecessor, though it still trails OpenAI’s GPT-4, which reportedly has one trillion parameters, and Amazon’s upcoming two-trillion-parameter model.

CEO Mark Zuckerberg has high expectations for Llama 3, predicting it will surpass proprietary competitors by next year. Meta’s AI chatbot, powered by these models, is on track to become the most popular AI assistant by the end of this year, already used by hundreds of millions. The release comes amidst a competitive push among tech companies to demonstrate the value of their advanced AI models in solving complex reasoning tasks, justifying the significant investments made.

Meta is also releasing updated versions of its lighter-weight 8-billion- and 70-billion-parameter Llama 3 models. All versions are multilingual and can handle larger user requests, enhancing their ability to generate computer code. Meta’s head of generative AI, Ahmad Al-Dahle, highlighted improvements in solving math problems by using AI to generate training data. By offering Llama models largely free of charge, Meta aims to foster innovation, reduce dependence on competitors, and increase engagement on its social networks, despite some investor concerns over the costs involved.

Nigeria imposes $220 million fine on Meta for data protection violations

Nigeria’s Federal Competition and Consumer Protection Commission (FCCPC) has imposed a fine of $220 million on Meta Platforms Inc., the parent company of Facebook, for ‘multiple and repeated’ breaches of local consumer data protection laws, in a move to enforce the country’s data privacy regulations.

The FCCPC’s investigation into Meta began last year following Nigerian consumers’ complaints regarding personal data mishandling. The 38-month investigation revealed that Meta had failed to comply with several provisions of the country’s data protection rules, including the requirements to obtain proper consent from users before collecting their data and to ensure the security of the information gathered, direct violations of the Nigeria Data Protection Regulation (NDPR). The NDPR, enacted in 2019, mandates that organisations seek explicit consent from individuals before collecting their personal information, with the aim of safeguarding the privacy of Nigerian citizens.

The fine is one of the largest penalties imposed by an African regulator on a global tech company. It signals a growing trend among nations to assert digital sovereignty and enforce stringent data protection measures. The action against Meta is expected to have far-reaching implications, prompting other multinational companies to reassess their data practices in Nigeria and potentially other African markets.

Why does this matter?

The company has faced similar regulatory challenges worldwide, including a $5 billion fine by the US Federal Trade Commission in 2019 for privacy violations, a €265 million fine by the Irish Data Protection Commission in 2022 for breaches of the EU’s General Data Protection Regulation (GDPR), and a $37 million fine by the competition board.

This development highlights the regulatory pressure on technology companies to prioritise data protection. As digital services expand globally, enforcing stringent data privacy laws is becoming more critical. For Nigeria, the fine against Meta signals the country’s commitment to holding multinational companies accountable and protecting the rights of its citizens in the digital landscape.

Global tech outage hits Meta’s content moderators

A global tech outage on Friday affected some external vendors responsible for content moderation on Meta’s platforms, including Facebook, Instagram, WhatsApp, and Threads. According to a Meta spokesperson, the outage temporarily impacted several tools used by these vendors, causing minimal disruption to Meta’s support operations but not significantly affecting content moderation efforts.

The outage led to a SEV1 alert at Meta, indicating a critical issue that required immediate attention. Meta relies on a combination of AI and human review to moderate the billions of posts made on its platforms. While Meta staff handle some reviews, most are outsourced to vendors like Teleperformance and Concentrix, who employ numerous workers to identify and address rule violations such as hate speech and violence.

Despite the outage disrupting vendor access to key systems that route flagged content for review, operations continued as expected. Concentrix reported monitoring and addressing the impacts of the outage, while Teleperformance did not provide a comment. Meta confirmed that the issues had been resolved earlier in the day, ensuring minimal to no impact on their content moderation processes.

Meta suspends AI use in Brazil amid privacy concerns

Meta has suspended the use of its generative AI (GenAI) tools in Brazil after the country’s data protection authority issued a preliminary ban on its new privacy policy. The suspension follows a decision by Brazil’s National Data Protection Authority (ANPD) to halt Meta’s policy, citing risks to users’ fundamental data rights.

ANPD’s decision arose from concerns over Meta’s use of personal data to train its AI systems without users’ explicit consent. The agency warned of ‘serious and irreparable damage’ to the rights of data subjects and imposed a daily fine of 50,000 reais for non-compliance. Meta expressed disappointment, stating that the decision is a setback for innovation and AI development in Brazil.

The controversy in Brazil reflects broader global challenges for tech companies navigating stringent data privacy laws. In regions like the European Union, similar regulatory hurdles have forced Meta and other tech giants to pause their AI tool rollouts. Human Rights Watch highlighted risks associated with personal data in AI training, noting how personal photos, including those of Brazilian children, have been misused in image datasets, raising significant privacy and ethical concerns.

Meta’s response aligns with its recent actions in Europe, where it withheld its AI models due to regulatory uncertainties. This situation underscores the tension between advancing AI technologies and adhering to evolving data protection regulations.

Meta’s AI models excluded from EU market

Meta will withhold its future multimodal AI models from customers in the EU due to a lack of clear regulatory guidance. This decision reflects a growing tension between US tech giants and EU regulators.

Meta plans to release its multimodal Llama model in the coming months, integrating video, audio, images, and text. However, these models will not be available in the EU, impacting both European companies and those offering products in the region.

The company’s larger, text-only Llama 3 model will be available in the EU. Meta’s concerns stem from compliance with the General Data Protection Regulation (GDPR), despite briefings with EU regulators and attempts to address their feedback.

The UK, with data protection laws similar to the EU, will receive the new model without regulatory delays. Meta argues that delays in Europe harm consumers and competitiveness, pointing out that other tech companies already use European data to train their models.

Meta will remove content in which ‘Zionist’ is used as a proxy term for antisemitism

Meta announced on Tuesday that it will begin removing more posts that target ‘Zionists’ when the term is used to refer to Jewish people and Israelis, rather than supporters of the political movement. This decision is based on the recognition that the word can take on new meanings and become a proxy term for nationality. Meta categorises numerous ‘protected characteristics,’ including nationality, race, and religion.

Previously, Meta’s approach has treated the word ‘Zionist’ as a proxy for Jewish or Israeli people in two specific cases: when Zionists are compared to rats, reflecting antisemitic imagery, and when context clearly indicates that the word means ‘Jew’ or ‘Israeli.’ Now, Meta will remove content attacking ‘Zionists’ when it is not explicitly about the political movement and when it uses certain antisemitic stereotypes, dehumanises, denies the existence of, or threatens or calls for harm or intimidation of ‘Jews’ or ‘Israelis.’

The policy change has been praised by the World Jewish Congress. Its president, Ronald S. Lauder, stated, ‘By recognizing and addressing the misuse of the term “Zionist,” Meta is taking a bold stand against those who seek to mask their hatred of Jews.’ Meta has previously reported significant decreases in hate speech on its platforms.

A recurring question during consultations was how to handle comparisons of Zionists to criminals. Meta does not allow content that compares ‘protected characteristics’ to criminals, but currently believes such comparisons can be used as shorthand for comments on larger military actions. The issue has been referred to the Oversight Board. Meta consulted with 145 stakeholders from civil society and academia across various global regions for this policy update.