Formal complaint in Argentina challenges Meta’s data use for AI training

A formal complaint has been filed with Argentina's Agency for Access to Public Information (AAIP) against Meta, the parent company of Facebook, WhatsApp and Instagram. The case fits into an international pattern of increasing scrutiny of large technology companies' data protection practices.

The complaint was filed by Facundo Malaureille and Daniel Monastersky, lawyers specialising in personal data protection and directors of the Diploma in Data Governance at CEMA University. It challenges the company's use of personal data for AI training.

The filing consists of 22 points and asks Meta Argentina to explain its practices for collecting and using personal data for AI training. The AAIP, as the enforcement authority for Argentina's Personal Data Protection Law (Law 25,326), will evaluate and respond to the filing.

The country’s technological and legal community is closely watching the development of this case, given that the outcome of this complaint could impact innovation in AI and the protection of personal data in Argentina in the coming years.

Meta settles Texas lawsuit for $1.4 billion

Meta Platforms has agreed to a $1.4 billion settlement with Texas over allegations of illegally using facial-recognition technology to collect biometric data without consent. The case marks the largest settlement of its kind by any state. The lawsuit, initiated in 2022, accused Facebook of capturing biometric data from photos and videos uploaded by users through a ‘Tag Suggestions’ feature, which has since been discontinued.

Meta expressed satisfaction with the resolution and hinted at future business investments in Texas, including developing data centres. Despite the settlement, the company continues to deny any wrongdoing. Texas Attorney General Ken Paxton emphasised the state’s dedication to holding the big tech companies accountable for privacy violations.

Why does this matter?

The settlement was reached in May, just before a state court trial began. Previously, Meta paid $650 million to settle a similar biometric privacy class action under Illinois law. Meanwhile, Google also faces a lawsuit in Texas for allegedly violating the state’s biometric privacy law.

Meta unveils AI studio for personalised chatbots

Meta Platforms announced the launch of AI Studio, a tool enabling users to create and design personalised AI chatbots. The new feature allows Instagram creators to develop AI characters to manage direct messages and story replies, enhancing user interaction on the platform. These AI characters can be shared across Meta’s various platforms and are built using Meta’s Llama 3.1 model. This latest version of Meta’s AI model is available in multiple languages and competes with other advanced models like OpenAI’s.

Why does this matter?

Meta’s initiative follows OpenAI’s confidential project, code-named ‘Strawberry,’ which aims to showcase advanced reasoning capabilities. The introduction of AI Studio marks Meta’s effort to offer cutting-edge AI tools to its vast user base, leveraging its Llama 3.1 model to provide powerful AI-driven features for content creators and users alike.

EU prepares hefty fine for Meta’s Marketplace practices

Meta Platforms is facing its first EU antitrust fine for linking its Marketplace service with Facebook. The European Commission is expected to issue the fine within a few weeks, following an accusation over a year and a half ago that the company gave its classified ads service an unfair advantage by bundling it with Facebook.

Allegations include Meta abusing its dominance by imposing unfair trading conditions on competing classified ad services advertising on Facebook and Instagram. The potential fine could reach as much as $13.4 billion, or 10% of Meta’s 2023 global revenue, although such high fines are rarely imposed.

A decision is likely to come in September or October, before EU antitrust chief Margrethe Vestager leaves office in November. Meta has reiterated its stance, claiming the European Commission’s allegations are baseless and stating its product innovation is pro-consumer and pro-competitive.

In a separate development, the Commission has charged Meta with failing to comply with new tech rules over its ‘pay or consent’ advertising model, launched last November. Meta’s earlier offer to settle the investigation by limiting its use of competitors’ advertising data for Marketplace was rejected by the EU but accepted by the UK regulator.

Meta oversight board calls for clearer rules on AI-generated pornography

Meta’s Oversight Board has criticised the company’s rules on sexually explicit AI-generated depictions of real people, stating they are ‘not sufficiently clear.’ That follows the board’s review of two pornographic deepfakes of famous women posted on Meta’s Facebook and Instagram platforms. The board found that both images violated Meta’s policy against ‘derogatory sexualised photoshop,’ which is considered bullying and harassment and should have been promptly removed.

In one case involving an Indian public figure, Meta failed to act on a user report within 48 hours, leading to an automatic ticket closure. The image was only removed after the board intervened. In contrast, Meta’s systems automatically took down the image of an American celebrity. The board recommended that Meta clarify its rules to cover a broader range of editing techniques, including generative AI. It criticised the company for not adding the Indian woman’s image to a database for automatic removals.

Meta has stated it will review the board’s recommendations and update its policies accordingly. The board emphasised the importance of removing harmful content to protect those impacted, noting that many victims of deepfake intimate images are not public figures and struggle to manage the spread of non-consensual depictions.

Meta’s AI bots aim to support content creators

Meta CEO Mark Zuckerberg has proposed a vision where AI bots assist content creators with audience engagement, aiming to free up their time for more crucial tasks. In an interview with internet personality Rowan Cheung, Zuckerberg discussed how these AI bots could capture the personalities and business objectives of creators, allowing fans to interact with them as if they were the creators themselves.

Zuckerberg’s optimism aligns with many in the tech industry who believe AI can significantly enhance the impact of individuals and organisations. However, there are concerns about whether creators, whose audiences value authenticity, will embrace generative AI. Meta’s initial rollout of AI-powered bots earlier this year faced issues, including bots making false claims and providing misleading information, raising questions about the technology’s reliability.

Meta claims improvements with its latest AI model, Llama 3.1, but challenges such as hallucinations and planning errors persist. Zuckerberg acknowledges the need to address these concerns and build trust with users. Despite these hurdles, Meta continues to focus on integrating AI into its platforms while also pursuing its Metaverse ambitions and competing in the tech space.

Meta’s plans to introduce generative AI to its apps, which date back to 2023, along with its increased focus on AI amid its Metaverse ambitions, highlight the company’s broader strategic vision. However, convincing creators to rely on AI bots for fan interaction remains a significant challenge.

Meta removes 63,000 Nigerian Instagram accounts for sextortion scams

Meta Platforms announced on Wednesday that it had removed approximately 63,000 Instagram accounts in Nigeria involved in financial sexual extortion scams, primarily targeting adult men in the United States. These Nigerian fraudsters, often called ‘Yahoo boys,’ are infamous for various scams, including posing as individuals in financial distress or as Nigerian princes.

In addition to the Instagram accounts, Meta also took down 7,200 Facebook accounts, pages, and groups that provided tips on how to scam people. Among the removed accounts, around 2,500 were part of a coordinated network linked to about 20 individuals. These scammers used fake accounts to conceal their identities and engage in sextortion, threatening victims with the release of compromising photos unless they paid a ransom.

Meta’s investigation revealed that most of the scammers’ attempts were unsuccessful. While adult men were the primary targets, there were also attempts against minors, which Meta reported to the National Center for Missing & Exploited Children in the US. The company employed new technical measures to identify and combat sextortion activities.

Online scams have increased in Nigeria, where economic hardships have led many to engage in fraudulent activities from various settings, including university dormitories and affluent neighbourhoods. Meta noted that some of the removed accounts were not only participating in scams but also sharing guides, scripts, and photos to assist others in creating fake accounts for similar fraudulent purposes.

Meta introduces largest Llama 3 AI model with enhanced language and math capabilities

Meta Platforms has unveiled its largest version of the Llama 3 AI model, boasting impressive multilingual capabilities and performance metrics that challenge paid models from competitors like OpenAI. With 405 billion parameters, the new model can converse in eight languages, write better computer code, and solve complex math problems. That makes it significantly more powerful than its predecessor, though it still trails OpenAI’s GPT-4, reported to have one trillion parameters, and Amazon’s upcoming two-trillion-parameter model.

CEO Mark Zuckerberg has high expectations for Llama 3, predicting it will surpass proprietary competitors by next year. Meta’s AI chatbot, powered by these models, is on track to become the most popular AI assistant by the end of this year, already used by hundreds of millions. The release comes amidst a competitive push among tech companies to demonstrate the value of their advanced AI models in solving complex reasoning tasks, justifying the significant investments made.

Meta is also releasing updated versions of its lighter-weight 8 billion and 70 billion parameter Llama 3 models. All versions are multilingual and can handle larger user requests, enhancing their ability to generate computer code. Meta’s head of generative AI, Ahmad Al-Dahle, highlighted improvements in solving math problems by using AI to generate training data. By offering Llama models largely free of charge, Meta aims to foster innovation, reduce dependence on competitors, and increase engagement on its social networks, despite some investor concerns over the costs involved.

Nigeria imposes $220 million fine on Meta for data protection violations

Nigeria’s Federal Competition and Consumer Protection Commission (FCCPC) has fined Meta Platforms Inc., the parent company of Facebook, $220 million for ‘multiple and repeated’ breaches of local consumer data protection laws, in a move to enforce data privacy regulations.

The FCCPC’s investigation into Meta began last year following complaints from Nigerian consumers about the mishandling of personal data. The 38-month investigation found that Meta had failed to comply with several provisions of Nigeria’s data protection rules, including obtaining proper consent from users before collecting their data and ensuring the security of the information gathered, in direct violation of the Nigeria Data Protection Regulation (NDPR). The NDPR, enacted in 2019, mandates that organisations seek explicit consent from individuals before collecting their personal information, with the aim of safeguarding citizens’ privacy.

The fine is one of the largest penalties imposed by an African regulator on a global tech company. It signals a growing trend among nations to assert digital sovereignty and enforce stringent data protection measures. The action against Meta is expected to have far-reaching implications, prompting other multinational companies to reassess their data practices in Nigeria and potentially other African markets.

Why does this matter?

The company has faced similar regulatory challenges worldwide, including a $5 billion fine by the US Federal Trade Commission in 2019 for privacy violations, a €265 million fine by the Irish Data Protection Commission in 2022 for breaches of the EU’s General Data Protection Regulation (GDPR), and a $37 million fine by Turkey’s competition board.

This development highlights the regulatory pressure on technology companies to prioritise data protection. As digital services expand globally, enforcing stringent data privacy laws is becoming more critical. For Nigeria, the fine against Meta underscores the country’s commitment to holding multinational companies accountable and protecting the rights of its citizens in the digital landscape.

Global tech outage hits Meta’s content moderators

A global tech outage on Friday affected some external vendors responsible for content moderation on Meta’s platforms, including Facebook, Instagram, WhatsApp, and Threads. According to a Meta spokesperson, the outage temporarily impacted several tools used by these vendors, causing minimal disruption to Meta’s support operations but not significantly affecting content moderation efforts.

The outage led to a SEV1 alert at Meta, indicating a critical issue that required immediate attention. Meta relies on a combination of AI and human review to moderate the billions of posts made on its platforms. While Meta staff handle some reviews, most are outsourced to vendors like Teleperformance and Concentrix, who employ numerous workers to identify and address rule violations such as hate speech and violence.

Despite the outage disrupting vendor access to key systems that route flagged content for review, operations continued as expected. Concentrix reported monitoring and addressing the impacts of the outage, while Teleperformance did not provide a comment. Meta confirmed that the issues had been resolved earlier in the day, ensuring minimal to no impact on their content moderation processes.