EU threatens TikTok Lite suspension over mental health concerns

The European Commission has warned TikTok that it may suspend a key feature of TikTok Lite in the European Union on Thursday unless the company addresses concerns about the feature’s impact on users’ mental health. The action falls under the EU’s Digital Services Act (DSA), which requires large online platforms to act against harmful content or face fines of up to 6% of their global annual turnover.

Thierry Breton, the EU industry chief, emphasised the Commission’s readiness to impose interim measures, including suspending TikTok Lite, if TikTok does not provide compelling evidence of the feature’s safety. Breton highlighted concerns about the addictive potential of TikTok Lite’s reward program.

TikTok was given 24 hours to provide a risk assessment report on TikTok Lite, and has until 3 May to supply additional requested information, in order to avoid penalties. TikTok has yet to respond to requests for comment.

The TikTok Lite app, recently launched in France and Spain, includes a reward program in which users earn points by completing certain tasks on the platform. Under the DSA, TikTok was required to submit a risk assessment report before the app’s launch but failed to do so. The Commission remains firm on enforcing the rules to protect users’ well-being amid the growing influence of digital platforms.

Kyrgyzstan blocks TikTok over child protection concerns

Kyrgyzstan has banned TikTok following security service recommendations to safeguard children. The decision comes amid growing global scrutiny over the social media app’s impact on children’s mental health and data privacy.

The Kyrgyz digital ministry cited ByteDance’s failure to comply with child protection laws, prompting concerns among advocacy groups about arbitrary censorship. The decision reflects Kyrgyzstan’s broader trend of tightening control over media and civil society, a departure from its historically relatively open stance.

Meanwhile, TikTok continues to face scrutiny worldwide over its data policies and alleged connections to the Chinese government.

Why does it matter?

This decision stems from legislation approved last summer aimed at curbing the distribution of ‘harmful’ online content accessible to minors. Such content encompasses material featuring ‘non-traditional sexual relationships’, content that undermines ‘family values’, and material promoting illegal conduct, substance abuse, or antisocial behaviour. Chinese officials have not publicly commented on the decision, although in March Beijing accused the US of ‘bullying’ over similar actions against TikTok.

Meta spokesperson sentenced to six years in Russia

A military court in Moscow has reportedly sentenced Meta Platforms spokesperson Andy Stone in absentia to six years in prison for ‘publicly defending terrorism.’ The ruling comes as part of Russia’s crackdown on Meta, which was designated an extremist organisation in the country, a move that led to Facebook and Instagram being banned in 2022 amid Russia’s conflict with Ukraine.

Meta has yet to comment on the reported sentencing of Stone, who serves as the company’s communications director, and Stone himself was not immediately available for comment following the court’s decision. His lawyer, Valentina Filippenkova, said the defence intends to appeal the verdict and will seek an acquittal.

The Russian interior ministry initiated a criminal investigation against Stone late last year, although the specific charges were not disclosed at the time. According to state investigators, Stone’s online comments allegedly defended ‘aggressive, hostile, and violent actions’ against Russian soldiers involved in what Russia terms its ‘special military operation’ in Ukraine.

Why does it matter?

Stone’s sentencing underscores Russia’s stringent stance on online content related to its military activities in Ukraine, extending repercussions to individuals associated with Meta Platforms. The circumstances also reflect the broader context of heightened scrutiny and legal actions against perceived dissent and criticism within Russia’s digital landscape.

Google blocks news links in California amid legislative battle

Google has responded to a California bill that would require tech giants like Google and Meta to pay news publishers by removing links to California-based news organisations from search results for some users in the state. Meta, in turn, has threatened to block all news links on its social platforms if the bill is enacted.

This decision comes amidst vigorous lobbying efforts from these companies, arguing that the legislation would impose a ‘link tax’ and disrupt the free exchange of information online. Some small news publishers and business groups also oppose the bill, citing fears of diminished discoverability and potential negative consequences for the broader business landscape. On the other hand, proponents argue that such laws are necessary to sustain journalism in an era where traditional revenue streams have dwindled.

Despite labelling its action as a ‘short-term test,’ Google faced sharp criticism from politicians and publishers who condemned the move as an abuse of power. Nonetheless, California news publishers have not yet felt significant repercussions from Google’s actions.

Why does it matter?

In Australia and Canada, similar battles ultimately ended in compromise. In Canada, Google brokered a deal with the government, establishing a yearly $73.5 million news fund for Canadian providers. Nevertheless, Meta persists in blocking news links on Facebook and Instagram in Canada, leading to a marked decline in traffic for Canadian news organisations. Meanwhile, the outcome of the standoff in California remains uncertain, but one thing is clear: the intense debates will persist.

Bollywood actors featured in AI fake videos for India’s election

In the midst of India’s monumental general election, AI-generated fake videos featuring Bollywood actors criticising Prime Minister Narendra Modi and endorsing the opposition Congress party have gone viral. The videos, viewed over half a million times on social media, underscore the growing role of AI-generated content in elections worldwide.

India’s election, involving almost one billion voters, pits Modi’s Bharatiya Janata Party (BJP) against an alliance of opposition parties. As campaigning shifts towards digital platforms like WhatsApp and Facebook, AI is being utilised for the first time in Indian elections, signalling a new era of political communication.

Despite efforts by platforms like Facebook to remove the fake videos, they continue to circulate, prompting police investigations and highlighting the challenges of combating misinformation in the digital age. While actors Aamir Khan and Ranveer Singh have denounced the videos as fake, their proliferation underscores the potential impact of AI-generated content on public opinion.

Why does it matter?

In this year’s Indian election, politicians are employing AI in various ways, from creating videos featuring deceased family members to using AI-generated anchors to deliver political messages. These tactics raise questions about the ethical implications of AI in politics and its potential to shape public discourse in unprecedented ways.

Meta shifts away from politics ahead of 2024 US election

In a significant shift ahead of the Trump-Biden rematch, Meta is distancing itself from politics after years of positioning itself as a key player in political discourse. The company has reduced the visibility of political content on Facebook and Instagram, imposed new rules on political advertisers, and downsized the team responsible for engaging with politicians and campaigns. The shift is reshaping digital outreach strategies for the 2024 US election and could transform political communication on social media platforms.

Meta’s retreat from politics follows years of controversy and public scrutiny, including outrage over Russian interference in the 2016 presidential race and the role of social media in the 6 January 2021 attack on the US Capitol. The company’s efforts to minimise political content in users’ news feeds reflect a broader trend away from news and politics on social media platforms. This shift has impacted major news outlets, with significant declines in user engagement observed across platforms.

As Meta redefines its approach to political content, political campaigns adapt their strategies to navigate this new landscape. The Biden campaign has increased its social media presence to drive engagement, while Trump has turned to alternative platforms like Truth Social. However, both parties recognise the continued importance of Facebook as a vital tool for reaching voters despite the platform’s evolving restrictions on political advertising and content.

Why does it matter?

The changing dynamics of political communication on social media raise concerns about access to information and the role of tech companies in shaping public discourse. With political content increasingly marginalised on platforms like Facebook and Instagram, questions arise about how voters will stay informed about key issues during elections. As campaigns adjust to Meta’s evolving policies, the impact on democratic discourse and the dissemination of political information remains a topic of debate and scrutiny.

X defies Australia’s content removal demands

Elon Musk’s company X, formerly known as Twitter, is gearing up for a legal battle against the Australian government. The move comes in response to orders demanding the removal of content depicting violence and violent extremism. The content in question involves two recent knife attacks: one resulting in multiple deaths at a shopping centre and another targeting a Christian bishop in his church.

In the wake of these attacks, inflammatory and false information circulated, with a prominent Australian figure using X to wrongly attribute the shopping centre incident to a Jewish man; a mainstream television news outlet amplified this misinformation by broadcasting the false claims. The church attack, officially declared a terrorist incident, involved a teenage assailant stabbing an Assyrian bishop who has a significant following on social media.

Prompted by the spread of graphic footage from these incidents on social media, Australia’s eSafety Commissioner, the regulator tasked with online safety enforcement, issued orders to remove such content. While X initially attempted to comply, it later refused to remove the video of the church attack globally, arguing that the regulator has no authority over content outside Australia’s jurisdiction. X announced its intention to challenge the directive in court, denouncing it as unlawful and dangerous.

This stance from X has sparked intense criticism from Australian politicians, who advocate for stricter regulations on social media platforms. The clash underscores the ongoing debate surrounding tech companies’ responsibilities in curbing harmful content online and the balance between free speech and preventing violence and misinformation.

US House of Representatives passes bill targeting TikTok over national security concerns

The House of Representatives voted overwhelmingly, 360 to 58, to pass a bill that could result in the unprecedented shutdown of TikTok, a popular social media platform, over concerns about Chinese influence and data privacy. The bill, authored by Texas Republican representative Michael McCaul, aims to protect Americans, especially children, from what he described as the ‘malign influence of Chinese propaganda’ on TikTok, which he called a ‘spy balloon in Americans’ phones.’

The legislation was passed as part of a broader foreign aid package put forth by House Republican speaker Mike Johnson, which includes support for Ukraine, Israel, and Taiwan. The updated bill extends the divestment period for TikTok’s parent company, ByteDance, from six months to a year, a move supported by Senate Commerce Committee chair Maria Cantwell to allow sufficient time for potential buyers to negotiate a deal.

Critics of TikTok have expressed concerns that ByteDance, being based in China, could collect user data and censor content critical of the Chinese government. In response, TikTok has consistently denied sharing US user data with the Chinese government, highlighting its independent leadership structure across different countries.

Following the House’s passage of the bill, TikTok voiced disappointment, emphasising its substantial economic contribution to the US and arguing against what it sees as an infringement on free speech rights. The bill’s broader implications on data privacy and surveillance practices have also drawn criticism from other tech industry figures, including the president of Signal, who warned of potential repercussions extending beyond TikTok to other social media platforms. Despite these concerns, President Joe Biden has indicated his intention to sign the bill into law if it passes the Senate, aligning with his previous statements and ongoing scrutiny of TikTok’s operations.

EU requires adult content platforms to assess risks

Under the EU’s new online content rules, three major adult content platforms, Pornhub, Stripchat, and XVideos, are required to conduct risk assessments and implement measures to address systemic risks associated with their services, the European Commission has announced. The companies were designated as very large online platforms in December under the Digital Services Act (DSA), which demands heightened efforts to remove illegal and harmful content.

The EU executive specified that Pornhub and Stripchat must comply with these rigorous DSA obligations by 21 April, while XVideos has until 23 April to do the same. These obligations include submitting risk assessment reports to the Commission and implementing mitigation measures to tackle systemic risks linked to their services. Additionally, the platforms are expected to adhere to transparency requirements related to advertisements and provide researchers with data access.

Failure to comply with the DSA regulations could lead to significant penalties, with companies facing fines of up to 6% of their global annual turnover for breaches. The European Commission’s actions underscore its commitment to ensuring that large online platforms take proactive steps to address illegal and harmful content, particularly within the context of adult content services. These measures are part of broader efforts to enhance online safety and accountability across digital platforms operating within the EU.

Google faces privacy concerns in UK over cookie replacements

The UK’s privacy regulator has expressed concerns about Google’s proposed cookie replacements, warning that more must be done to safeguard consumer privacy in the UK. According to internal documents, Google’s Privacy Sandbox initiative, which aims to phase out third-party cookies and reduce tracking, leaves gaps that could compromise users’ anonymity.

The Information Commissioner’s Office (ICO) has reportedly drafted a report highlighting the potential for exploitation within Google’s proposed technology. Despite Google’s plans to eliminate third-party cookies by the latter half of 2024, the ICO is pushing for changes to enhance privacy protections.

The ICO’s efforts include engaging with the UK’s Competition and Markets Authority (CMA), which is reviewing Google’s plans amid concerns about their potential impact on competition in digital advertising. The CMA has pledged to consider the ICO’s recommendations as part of its evaluation.

In response, a Google spokesperson emphasised ongoing engagement with privacy and competition regulators globally, aiming to find a solution that benefits users and the digital ecosystem. Both the ICO and CMA have yet to comment on the matter.