Meta complies with Brazil’s data protection demands

Meta Platforms, the parent company of Facebook and Instagram, announced on Tuesday that it will inform Brazilian users about how their data is used to train generative AI. The move comes in response to pressure from Brazil’s National Data Protection Authority (ANPD), which had previously suspended Meta’s new privacy policy over concerns about the use of personal data for AI training.

Starting this week, Meta users in Brazil will receive email and social media notifications, providing details on how their data might be used for AI development. Users will also have the option to opt out of this data usage. The ANPD had initially halted Meta’s privacy policy in July, but it lifted the suspension last Friday after Meta agreed to make these disclosures.

In response to the ANPD’s concerns, Meta had also temporarily suspended its generative AI tools in Brazil, including the popular AI-generated stickers on WhatsApp, a platform with a large Brazilian user base. The suspension remained in place while Meta engaged in discussions with the ANPD to address the agency’s concerns.

Despite the ANPD lifting the suspension, Meta has yet to confirm whether it will immediately reinstate the AI tools in Brazil. When asked, the company reiterated that the suspension was a measure taken during its ongoing talks with the data protection authority.

The development marks an important step in Brazil’s efforts to ensure transparency and user control over personal data in the age of AI.

California passes new bill regulating digital replicas of performers

California’s efforts to regulate the use of digital replicas of performers took a significant step forward with the passage of AB 1836 in the state Senate. The new bill mandates that studios obtain explicit consent from the estates of deceased performers before creating digital replicas for use in films, TV shows, video games, and other media. The move comes just days after the California legislature passed AB 2602, which enforces similar consent requirements for living actors.

SAG-AFTRA, the union representing film and television performers, has strongly advocated for these measures, emphasising the importance of protecting performers’ rights in the digital age. In a statement released after the Senate’s approval of AB 1836, the union described the bill as a ‘legislative priority’ and urged Governor Gavin Newsom to sign it into law. The union’s stance highlights the growing concern over the unauthorised use of digital replicas, particularly as technology makes it increasingly easy to recreate performers’ likenesses long after they have died.

If signed into law, AB 1836 would ensure that the estates of deceased performers have control over how their likenesses are used, potentially setting a precedent for other states to follow. However, the bill also raises practical challenges, such as determining who has the authority to grant consent on behalf of the deceased, which could complicate its implementation. The bill reflects a broader push within the entertainment industry to establish clear legal protections against exploiting living and deceased performers in the rapidly evolving digital landscape.

Together with AB 2602, the passage of AB 1836 underscores California’s role as a leader in entertainment industry legislation, particularly in areas where technology intersects with performers’ rights. As the debate over digital replicas continues, AB 1836 could have far-reaching implications for the industry and the future of entertainment law.

Delhi High Court directs Google and Microsoft to challenge NCII images removal order

The Delhi High Court has directed Google and Microsoft to file a review petition seeking the recall of an earlier order that required search engines to promptly restrict access to non-consensual intimate images (NCII) without victims having to repeatedly provide specific URLs. Both tech giants argued that proactively identifying and taking down NCII images is technologically infeasible, even with the assistance of AI tools.

The dispute stems from a 2023 ruling requiring search engines to remove NCII within 24 hours, as mandated by the IT Rules, 2021, or risk losing their safe harbour protections under Section 79 of the IT Act, 2000. To spare victims the burden of tracking and repeatedly reporting specific URLs, the court proposed issuing a unique token upon the initial takedown, with search engines responsible for taking down any resurfaced content using existing technology. The court also suggested leveraging hash-matching technology and developing a ‘trusted third-party encrypted platform’ where victims could register NCII content or URLs, shifting the responsibility for identifying and removing resurfaced content from victims to the platforms while maintaining high standards of transparency and accountability.

However, Google expressed concern that automated tools cannot discern consent in shared sexual content, potentially leading to unintended takedowns and infringing on free speech, echoing Microsoft’s apprehension about the implications of proactive monitoring for privacy and freedom of expression.

CJEU: Search engines to dereference allegedly inaccurate content

At the request of the German Federal Court of Justice, the Court of Justice of the European Union (CJEU) has held that search engine operators must dereference content that a user shows to be manifestly inaccurate, in the exercise of the right to be forgotten. In the case at hand, two managers of a group of investment companies asked Google to dereference search results for their names that led to articles containing inaccurate claims about the group. They also requested the removal of their photos from the results of image searches made on the basis of their names.

The burden of proof lies with the requesting users, who must provide evidence capable of establishing the inaccuracy of the information; such evidence need not stem from a judicial decision. Regarding the display of photos, the CJEU stated that search engine operators must conduct a separate balancing of the competing rights, and that the informative value of the photos should be assessed without considering the context of their publication on the web page from which they are taken.