European Commission gives TikTok 24 hours to provide risk assessment of TikTok Lite

European regulators have given TikTok 24 hours to provide a risk assessment of its new app, TikTok Lite, recently launched in France and Spain. The European Commission, acting under the Digital Services Act (DSA), is concerned about potential impacts on children and on users’ mental health. The demand follows an investigation opened two months ago into TikTok for potential breaches of EU tech rules.

Thierry Breton, the EU industry chief, emphasised the need for TikTok to conduct a risk assessment before launching the app in the 27-country EU. The DSA requires platforms to take stronger action against illegal and harmful content, with penalties of up to 6% of their global annual turnover for violations. Breton likened the potentially addictive and toxic nature of ‘social media lite’ to ‘cigarettes light,’ underlining the EU’s commitment to protecting minors under the DSA.

TikTok Lite, targeted at users aged 18+, includes a ‘Task and Reward Lite’ program that allows users to earn points by engaging in specific platform activities. These points can be redeemed for rewards like Amazon vouchers, PayPal gift cards, or TikTok coins for tipping creators. The Commission expressed concerns about the app’s impact on minors and users’ mental health, particularly potential addictive behaviours.

Why does it matter?

TikTok has been directed to provide the requested risk assessment for TikTok Lite within 24 hours and additional information by 26 April. The Commission will analyse TikTok’s response and determine the next steps. TikTok has acknowledged the request for information and stated that it is in direct contact with the Commission regarding this matter. Additionally, the Commission has asked for details on measures implemented by TikTok to mitigate systemic risks associated with the new app.

X’s compliance with Indian election commission orders sparks disagreement

Social media platform X recently announced it had withheld specific posts in India featuring political content from elected officials, political parties, and candidates, following directives from the country’s election commission.

Despite complying, X expressed disagreement with these orders and urged the commission to make all takedown requests public in the future. This development comes as India prepares for its massive electoral process, involving nearly a billion eligible voters, set to commence on Friday.

Why does it matter? 

In anticipation of India’s 2024 elections, tech giants like Google and X are tackling misinformation and boosting voter education; Elon Musk’s X notably introduced Community Notes for fact-checking. Yet recent clashes with government orders, exemplified by Musk’s spat with Brazilian Supreme Court Justice Alexandre de Moraes, underscore the tension between his free-speech advocacy and concerns about social media’s detrimental influence on democratic processes.

Far-right party Chega challenges Meta over 10-year Facebook ban

Portugal’s far-right political party, Chega, has initiated legal action against Meta Platforms, the parent company of Facebook, following a 10-year ban imposed on the party’s Facebook account. The reasons behind the ban remain unspecified, raising concerns about potential political censorship across Meta’s platforms.

Led by André Ventura, Chega has gained traction in Portugal with its anti-immigration and anti-establishment rhetoric. Chega has responded by calling the restrictions ‘clearly illegal and of unspeakable persecution’ in a post on X.

Why does it matter?

Chega’s legal action against Meta Platforms underscores broader issues surrounding content moderation and political speech on social media platforms. The outcome of this case may set precedents for how such platforms are held accountable for their moderation policies and their impact on political discourse (see Iran’s recent case). At the same time, the lack of transparency regarding the reasons for Chega’s ban raises questions about the fairness and consistency of content moderation practices.

Snap introduces watermarks for AI-generated images

Social media company Snap announced its plans to add watermarks to AI-generated images on its platform, aiming to enhance transparency and protect user content. The watermark, featuring a small ghost with a sparkle icon, will denote images created using AI tools and will appear when the image is exported or saved to the camera roll. However, how Snap intends to detect and address watermark removal remains unclear, raising questions about enforcement methods.

This move aligns with efforts by other tech giants such as Microsoft, Meta, and Google, which have implemented measures to label or identify AI-generated images. Snap currently offers AI-powered features like Lenses and a selfie-focused tool called Dreams for paid users, and emphasises the importance of transparency and safety in AI-driven experiences.

Why does it matter?

As part of its commitment to equitable access and meeting user expectations, Snap has partnered with HackerOne to stress-test its AI image-generation tools and has established a review process to address potential biases in AI results. Its transparency efforts also include providing context cards with AI-generated images and adding controls in the Family Center for monitoring teens’ interactions with AI, following earlier controversies over inappropriate responses from the ‘My AI’ chatbot. As Snap continues to evolve its AI-powered features, this focus on transparency and safety underscores its commitment to a positive and inclusive user experience on its platform.

Concerns raised over TikTok’s US data handling

TikTok’s efforts to separate its US operations and user data from its Chinese parent company, ByteDance, have come under scrutiny after recent reports alleged continued collaboration between the two entities. Despite Project Texas, an initiative meant to enhance data security and independence, former employees claim that data-sharing practices persisted, with US user data regularly sent to ByteDance executives in China.

Under Project Texas, US user data was supposed to be stored on Oracle’s cloud infrastructure. Still, former employees suggest that the reality differed, with a ‘stealth chain of command’ enabling continued collaboration between US-based staff and ByteDance executives. Allegations of ongoing control from ByteDance’s top management raise questions about TikTok’s claimed independence.

These revelations have significant implications, particularly amidst Congressional efforts to pressure ByteDance to sell TikTok. The House has already passed a bill threatening to ban TikTok unless it severs ties with its parent company. However, TikTok CEO Shou Zi Chew maintains the company’s autonomy, emphasising that American entities store and oversee American data.

Why does it matter?

While some former employees downplay concerns about TikTok’s connections to ByteDance, recent reports suggest that Project Texas may not have effectively insulated US operations from Chinese influence. As congressional pressure intensifies, TikTok faces renewed scrutiny over its data practices and the extent of its independence from ByteDance.

Meta oversight board reviews handling of sexually explicit AI-generated images

Meta Platforms’ Oversight Board is currently examining how the company handled two AI-generated sexually explicit images of female celebrities that circulated on Facebook and Instagram. The board, which operates independently but is funded by Meta, aims to evaluate Meta’s policies and enforcement practices surrounding AI-generated pornographic content. To prevent further harm, the board did not disclose the names of the celebrities depicted in the images.

Advancements in AI technology have led to an increase in fabricated content online, particularly explicit images and videos portraying women and girls. This surge in ‘deepfakes’ has posed significant challenges for social media platforms in combating harmful content. Earlier this year, Elon Musk’s social media platform X faced difficulties managing the spread of false explicit images of Taylor Swift, prompting temporary restrictions on related searches.

The Oversight Board highlighted two specific cases: one involving an AI-generated nude image resembling an Indian public figure shared on Instagram and another depicting a nude woman resembling an American public figure in a Facebook group for AI creations. Meta initially removed the latter image for violating its bullying and harassment policy but left the former image up until the board selected it for review.

In response to the board’s scrutiny, Meta acknowledged the cases and committed to implementing the board’s decisions. The prevalence of AI-generated explicit content underscores the need for clearer policies and stricter enforcement measures by tech companies to address the growing issue of ‘deepfakes’ online.

X agrees to comply with Brazilian court orders

Elon Musk’s social media platform, X, assured Brazil’s Supreme Court of its compliance with court rulings following a recent dispute. This declaration comes after Musk challenged Justice Alexandre de Moraes’s directive to block specific accounts in Brazil. In a letter to Moraes last week, X’s Brazilian unit stated its inability to control the parent company’s adherence to Brazilian court orders.

However, X’s lawyers reiterated the platform’s commitment to fully comply with orders from the Supreme Court and the Superior Electoral Court of Brazil. This marks a significant shift from Musk’s earlier stance, in which he vowed to reverse restrictions imposed by Moraes, citing constitutional concerns and urging the justice to resign.

Moraes responded by launching an inquiry into Musk for obstruction of justice amidst investigations into digital militias accused of spreading fake news during Jair Bolsonaro’s presidency. Moraes also leads an inquiry into an alleged coup attempt by Bolsonaro. X, facing further scrutiny, disclosed that it had been subpoenaed by the US House Judiciary Committee for information on Brazilian Supreme Court directives regarding content moderation. The platform’s lawyers assured Moraes of its cooperation, indicating that it would comply with the committee’s request and keep him informed of developments.

Mark Zuckerberg wins dismissal in lawsuits over social media harm to children

Meta CEO Mark Zuckerberg has secured the dismissal of certain claims in multiple lawsuits alleging that Facebook and Instagram concealed the harmful effects of their platforms on children. US District Judge Yvonne Gonzalez Rogers in Oakland, California, ruled in favour of Zuckerberg, dismissing claims from 25 cases that sought to hold him personally liable for misleading the public about platform safety.

The lawsuits, part of a broader litigation by children against social media giants like Meta, assert that Zuckerberg’s prominent role and public stature required him to fully disclose the risks posed by Meta’s products to children. However, Judge Rogers rejected this argument, stating it would establish an unprecedented duty to disclose for any public figure.

Despite dismissing claims against Zuckerberg, Meta remains a defendant in the ongoing litigation involving hundreds of lawsuits filed by individual children against Meta and other social media companies like Google, TikTok, and Snapchat. These lawsuits allege that social media use led to physical, mental, and emotional harm among children, including anxiety, depression, and suicide. The plaintiffs seek damages and a cessation of harmful practices by these tech companies.

Why does it matter?

The lawsuits highlight a broader concern about social media’s impact on young users, prompting legal action from states and school districts. Meta and other defendants deny wrongdoing and have emphasised their commitment to addressing these concerns. While some claims against Zuckerberg have been dismissed, the litigation against Meta and other social media giants continues as plaintiffs seek accountability and changes to practices allegedly detrimental to children’s well-being.

The ruling underscores the complex legal landscape surrounding social media platforms and their responsibilities regarding user safety, particularly among younger demographics. The outcome of these lawsuits could have significant implications for the regulation and oversight of social media companies as they navigate concerns related to their platforms’ impact on mental health and well-being.

Meta temporarily suspends Threads in Türkiye

Meta Platforms Inc. announced that it would temporarily suspend its social networking app Threads in Türkiye from 29 April to comply with an interim order from the Turkish Competition Authority. The decision, detailed in a blog post on Monday, aims to address concerns about data sharing between Instagram and Threads as the competition watchdog investigates potential abuses of market dominance by Meta. Meta reassured users that the suspension of Threads in Türkiye would not affect its other services, such as Facebook, Instagram, and WhatsApp, within the country, or Threads in other locations worldwide.

The Turkish Competition Authority initiated an investigation into Meta in December over possible competition law violations stemming from the integration of Instagram with Threads. The interim order, which restricts data merging between the two platforms, will remain effective until the authority reaches a final decision. Meta expressed disagreement with this decision, asserting its compliance with Turkish legal requirements and indicating plans to appeal the ruling.

Threads, Meta’s microblogging venture launched in July 2023, aimed to expand beyond Instagram’s media-centric format by offering a predominantly text-based social platform where users could share photos, links, and short videos. While Threads quickly gained traction in the US and over 100 other countries, its European debut was delayed until December 2023 due to stringent privacy regulations in the region. Despite this setback, Meta remains committed to navigating regulatory challenges while advancing its diverse social networking offerings.

Facebook’s news block sparks debate in Canada and Australia

In response to legislation requiring tech giants to pay for news links, Facebook’s decision to block news sharing in Canada has sparked significant changes in the country’s online landscape. Right-wing pages like Canada Proud have seen a surge in engagement, signalling a shift toward more niche and tribal content consumption. With Facebook likely to take similar actions in Australia, concerns arise about the impact on political discourse, particularly as both countries gear up for elections in 2025.

Studies conducted after the news block reveal a concerning trend: a decline in engagement with news and a rise in interaction with opinion-based and unverified content, notably memes. This shift has prompted fears of undermining political dialogue and increasing the spread of misinformation. While Meta claims users still find value in Facebook and Instagram without news, reports indicate a rise in engagement with unreliable sources, potentially exacerbating the spread of false information, especially during critical events like emergencies or elections.

Why does it matter?

The blocking of news links has prompted criticism from government officials and experts, who argue that access to trusted information is vital. Australian authorities are pressing Meta to support media licensing arrangements, emphasising the importance of fair remuneration for news content.

Meanwhile, Google has opted for a different approach, agreeing to make payments to a fund supporting media outlets in Canada, while its stance remains unchanged in Australia.

Despite declining as a news source over the years, Facebook remains a significant platform for current affairs content, indicating the far-reaching implications of these regulatory battles for the future of online news consumption.