EU launches investigation into Facebook and Instagram over child safety

EU regulators announced on Thursday that Meta Platforms’ social media services, Facebook and Instagram, will be investigated for potential violations of EU online content rules on child safety, a probe that could result in significant fines. The scrutiny follows the EU’s implementation of the Digital Services Act (DSA) last year, which places greater responsibility on tech companies to address illegal and harmful content on their platforms.

The European Commission has expressed concerns that Facebook and Instagram have not adequately addressed risks to children, prompting an in-depth investigation. Issues highlighted include the potential for the platforms’ systems and algorithms to promote behavioural addictions among children and facilitate access to inappropriate content, leading to what the Commission refers to as ‘rabbit-hole effects’. Additionally, concerns have been raised regarding Meta’s age assurance and verification methods.

Why does it matter?

Meta, formerly known as Facebook, is already under EU scrutiny over election disinformation, particularly concerning the upcoming European Parliament elections. Violations of the DSA can result in fines of up to 6% of a company’s annual global turnover, indicating the seriousness with which EU regulators are approaching these issues. Meta’s response to the investigation and any subsequent actions will be closely monitored as the EU seeks to enforce stricter regulations on tech giants to protect online users, especially children, from harm.
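
To make the scale of that penalty cap concrete, here is a minimal back-of-the-envelope sketch in Python; the revenue figure approximates Meta’s publicly reported 2023 turnover of roughly $134.9 billion and is used purely for illustration, since any actual fine would be set by regulators:

```python
# Illustrative only: the DSA caps fines at 6% of annual global turnover.
# The revenue figure approximates Meta's reported 2023 turnover and is
# used here purely to show the scale of the maximum possible penalty.
DSA_MAX_FINE_RATE = 0.06

def max_dsa_fine(annual_global_turnover_usd: float) -> float:
    """Upper bound on a DSA fine for a given annual global turnover."""
    return annual_global_turnover_usd * DSA_MAX_FINE_RATE

meta_2023_turnover = 134.9e9  # ~$134.9 billion (approximate)
print(f"Maximum DSA fine: ${max_dsa_fine(meta_2023_turnover) / 1e9:.1f} billion")
# Prints: Maximum DSA fine: $8.1 billion
```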

Tech firms urged to implement child safety measures in UK

Social media platforms such as Facebook, Instagram, and TikTok face proposed measures in the UK to modify their algorithms and better safeguard children from harmful content. These measures, outlined by regulator Ofcom, are part of the broader Online Safety Act and include implementing robust age checks to shield children from harmful material related to sensitive topics like suicide, self-harm, and pornography.

Ofcom’s Chief Executive, Melanie Dawes, has underscored the situation’s urgency, emphasising the necessity of holding tech firms accountable for protecting children online. She asserts that platforms must reconfigure aggressive algorithms that push harmful content to children and incorporate age verification mechanisms.

Social media companies’ use of complex algorithms to curate content has raised serious concerns, as these algorithms often amplify harmful material and can influence children negatively. The proposed measures urge platforms to re-evaluate their algorithmic systems and prioritise child safety, giving children a safer online experience tailored to their age.
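
To illustrate the kind of change regulators are asking for, the sketch below shows a hypothetical age-aware feed ranker; the harm categories echo those named by Ofcom, but the data model, labels, and filtering logic are invented for illustration and do not describe any platform’s actual system:

```python
from dataclasses import dataclass, field

# Harm categories echoing those named by Ofcom; the labels, scores,
# and filtering logic below are invented for illustration only.
RESTRICTED_FOR_MINORS = {"suicide", "self_harm", "pornography"}

@dataclass
class Post:
    post_id: str
    engagement_score: float  # the platform's usual ranking signal
    harm_labels: set = field(default_factory=set)  # from an upstream classifier

def rank_feed(posts, user_is_minor: bool):
    """Rank a feed, excluding restricted content for under-18 accounts.

    Adult accounts are ranked purely by engagement; for minors, posts
    carrying restricted harm labels are removed before ranking rather
    than merely down-ranked, so high engagement cannot resurface them.
    """
    if user_is_minor:
        posts = [p for p in posts if not (p.harm_labels & RESTRICTED_FOR_MINORS)]
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
```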

UK’s Technology Secretary, Michelle Donelan, called on social media platforms to engage with regulators and proactively implement these measures, cautioning against waiting for enforcement and potential fines. After a consultation, Ofcom plans to finalise its Children’s Safety Codes of Practice within a year, with enforcement actions, including penalties for non-compliance, anticipated once parliament approves the codes.

UNICEF study finds video games can boost children’s well-being when properly designed

New research from UNICEF Innocenti’s Global Office of Research and Foresight, as part of the Responsible Innovation in Technology for Children (RITEC) project, suggests that video games can significantly enhance the well-being of children if designed thoughtfully.

This international collaboration, co-founded by UNICEF and the LEGO Group and funded by the LEGO Foundation, highlights that well-designed digital games can promote children’s autonomy, competence, creativity, identity, emotion regulation, and relationship building.

The study, conducted in partnership with the University of Sheffield, New York University, the City University of New York, and the Queensland University of Technology, found that digital games offer children valuable experiences, such as a sense of control, mastery, achievement, and the ability to explore personal and social identities. However, the positive impact of games depends on their ability to cater to children’s unique needs and desires.

As digital games evolve, the research advocates for designs prioritising young players’ safety, creativity, and emotional development, potentially redefining gaming’s role in nurturing future generations.

Why does it matter?

Traditionally, video games have been viewed with scepticism and often considered detrimental to children’s psychological and emotional development, not least because of their frequently addictive design. This new study suggests a more nuanced perspective, prompting a re-evaluation of how games are crafted and integrated into children’s lives, rather than attempts to eliminate them altogether, an approach that would be both challenging and potentially counterproductive.

TikTok responds to EU concerns, suspends rewards in Lite app

TikTok has suspended the rewards functions in TikTok Lite, a new app catering to regions with slower internet speeds. The decision follows concerns raised by the European Commission regarding the app’s ‘Task and Reward Program,’ which incentivises user engagement with rewards like Amazon vouchers and PayPal gift cards. The EU executive has particularly highlighted worries that the program could have addictive effects, especially on children, given its inadequate age verification mechanisms.

In response to the Commission’s concerns, TikTok stated its commitment to engaging constructively with regulators and suspended the rewards functions. However, Commissioner Thierry Breton emphasised that concerns about the platform’s addictiveness persist, and an investigation into TikTok Lite’s compliance with the Digital Services Act (DSA) remains ongoing. The DSA, which came into force recently, regulates how online platforms handle illegal and harmful content, with TikTok falling under its jurisdiction as a very large online platform (VLOP).

Under the DSA, TikTok was required to conduct and submit a risk assessment before launching the Lite app, and the Commission’s proceedings revealed that TikTok initially failed to meet this requirement. Despite missing the initial deadline, TikTok eventually submitted the risk assessment, bringing it into compliance with the Commission’s demands. France’s digital minister and MEPs have welcomed TikTok’s suspension decision, signalling a positive response from EU authorities to the company’s efforts to address regulatory concerns.

EU threatens TikTok Lite suspension over mental health concerns

The European Commission warned TikTok on Thursday that it may suspend a key feature of TikTok Lite in the European Union if the company fails to address concerns regarding its impact on users’ mental health. The action is being taken under the EU’s Digital Services Act (DSA), which mandates that large online platforms act against harmful content or face fines of up to 6% of their global annual turnover.

Thierry Breton, the EU industry chief, emphasised the Commission’s readiness to implement interim measures, including suspending TikTok Lite, if TikTok does not provide compelling evidence of the feature’s safety. Breton highlighted concerns about potential addiction generated by TikTok Lite’s reward program.

TikTok has been given 24 hours to provide a risk assessment report on TikTok Lite, and must supply additional requested information by 3 May to avoid penalties. TikTok has yet to respond to the Commission’s requests.

The TikTok Lite app, recently launched in France and Spain, includes a reward program in which users earn points by completing specific tasks on the platform. TikTok, however, failed to submit a risk assessment report before the app’s launch, as required by the DSA. The Commission remains firm on enforcing regulations to protect users’ well-being amid the growing influence of digital platforms.

Kyrgyzstan blocks TikTok over child protection concerns

Kyrgyzstan has banned TikTok following security service recommendations to safeguard children. The decision comes amid growing global scrutiny over the social media app’s impact on children’s mental health and data privacy.

The Kyrgyz digital ministry cited ByteDance’s failure to comply with child protection laws, sparking concerns from advocacy groups about arbitrary censorship. The decision reflects Kyrgyzstan’s broader trend of tightening control over media and civil society, departing from its relatively open stance.

Meanwhile, TikTok continues to face scrutiny worldwide over its data policies and alleged connections to the Chinese government.

Why does it matter?

This decision stems from legislative text approved last summer aimed at curbing the distribution of ‘harmful’ online content accessible to minors. Such content encompasses material featuring ‘non-traditional sexual relationships,’ material that undermines ‘family values,’ and material promoting illegal conduct, substance abuse, or anti-social behaviour. Chinese officials have not publicly commented on the decision, although in March, Beijing accused the US of ‘bullying’ over similar actions against TikTok.

UK bans sex offender from AI tools after child abuse conviction

A convicted sex offender in the UK has been banned from using AI generation tools for five years, in the first known case of its kind. Anthony Dover, 48, received the prohibition as part of a sexual harm prevention order, which prevents him from accessing AI generation tools without prior police permission, including text-to-image generators and ‘nudifying’ websites used to produce explicit deepfake content.

Dover’s case highlights the increasing concern over the proliferation of AI-generated sexual abuse imagery, prompting government action. The UK recently introduced a new offence making it illegal to create sexually explicit deepfakes of adults without consent, with penalties including prosecution and unlimited fines. The move aims to address the evolving landscape of digital exploitation and safeguard individuals from the misuse of advanced technology.

Charities and law enforcement agencies emphasise the urgent need for collaboration to combat the spread of AI-generated abuse material. Recent prosecutions reveal a growing trend of offenders exploiting AI tools to create highly realistic and harmful content. The Internet Watch Foundation (IWF) and the Lucy Faithfull Foundation (LFF) stress the importance of targeting both offenders and tech companies to prevent the production and dissemination of such material.

Why does it matter?

The decision to restrict an adult sex offender’s access to AI tools sets a precedent for future monitoring and prevention measures. While the specific reasons for Dover’s ban remain unclear, it underscores the broader effort to mitigate the risks posed by digital advancements in sexual exploitation. Law enforcement agencies are increasingly adopting proactive measures to address emerging threats and protect vulnerable individuals from harm in the digital age.

European Commission gives TikTok 24 hours to provide risk assessment of TikTok Lite

European regulators have demanded a risk assessment from TikTok within 24 hours regarding its new app, TikTok Lite, recently launched in France and Spain. The European Commission, acting under the Digital Services Act (DSA), is concerned about potential impacts on children and on users’ mental health. The action follows an investigation into TikTok for potential breaches of EU tech rules, opened two months ago.

Thierry Breton, the EU industry chief, emphasised the need for TikTok to conduct a risk assessment before launching the app in the 27-country EU. The DSA requires platforms to take stronger actions against illegal and harmful content, with penalties of up to 6% of their global annual turnover for violations. Breton likened the potentially addictive and toxic nature of ‘social media lite’ to ‘cigarettes light,’ underlining the commitment to protecting minors under the DSA.

TikTok Lite, targeted at users aged 18+, includes a ‘Task and Reward Lite’ program that allows users to earn points by engaging in specific platform activities. These points can be redeemed for rewards like Amazon vouchers, PayPal gift cards, or TikTok coins for tipping creators. The Commission expressed concerns about the app’s impact on minors and users’ mental health, particularly potential addictive behaviours.

Why does it matter?

TikTok has been directed to provide the requested risk assessment for TikTok Lite within 24 hours and additional information by 26 April. The Commission will analyse TikTok’s response and determine the next steps. TikTok has acknowledged the request for information and stated that it is in direct contact with the Commission regarding this matter. Additionally, the Commission has asked for details on measures implemented by TikTok to mitigate systemic risks associated with the new app.

Mark Zuckerberg wins dismissal in lawsuits over social media harm to children

Meta CEO Mark Zuckerberg has secured the dismissal of certain claims in multiple lawsuits alleging that Facebook and Instagram concealed the harmful effects of their platforms on children. US District Judge Yvonne Gonzalez Rogers in Oakland, California, ruled in favour of Zuckerberg, dismissing claims from 25 cases that sought to hold him personally liable for misleading the public about platform safety.

The lawsuits, part of a broader litigation by children against social media giants like Meta, assert that Zuckerberg’s prominent role and public stature required him to fully disclose the risks posed by Meta’s products to children. However, Judge Rogers rejected this argument, stating it would establish an unprecedented duty to disclose for any public figure.

Despite dismissing claims against Zuckerberg, Meta remains a defendant in the ongoing litigation involving hundreds of lawsuits filed by individual children against Meta and other social media companies like Google, TikTok, and Snapchat. These lawsuits allege that social media use led to physical, mental, and emotional harm among children, including anxiety, depression, and suicide. The plaintiffs seek damages and a cessation of harmful practices by these tech companies.

Why does it matter?

The lawsuits highlight a broader concern about social media’s impact on young users, prompting legal action from states and school districts. Meta and other defendants deny wrongdoing and have emphasised their commitment to addressing these concerns. While some claims against Zuckerberg have been dismissed, the litigation against Meta and other social media giants continues as plaintiffs seek accountability and changes to practices allegedly detrimental to children’s well-being.

The ruling underscores the complex legal landscape surrounding social media platforms and their responsibilities regarding user safety, particularly among younger demographics. The outcome of these lawsuits could have significant implications for the regulation and oversight of social media companies as they navigate concerns related to their platforms’ impact on mental health and well-being.

Belgian EU Presidency proposes compromise text to strengthen online child protection laws

The Belgian EU Council Presidency has introduced a compromise text aimed at detecting and preventing online child sexual abuse material (CSAM). The proposal refines risk categorisation thresholds and outlines data retention obligations for service providers.

However, it has faced criticism for potentially allowing authorities to scan private messages on platforms like WhatsApp or Gmail. The draft legislation enables service providers to flag potential abuse, triggering detection orders mandating active searching for abusive content. Providers are also required to assist the newly established EU Centre by conducting audits at the source code level to combat such material.
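
For context, detection of known abuse material today typically works by matching content fingerprints against vetted hash lists; the sketch below uses a plain cryptographic hash for simplicity, whereas real deployments use perceptual hashes such as PhotoDNA that survive resizing and re-encoding:

```python
import hashlib
from pathlib import Path

# Heavily simplified hash-list matching. In practice the hash list is
# maintained by vetted bodies (the proposal's EU Centre would play this
# role), and perceptual rather than cryptographic hashes are used, since
# a SHA-256 digest only catches byte-identical copies.
known_hashes = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder entry
}

def flag_if_known(path: Path) -> bool:
    """Return True if the file's SHA-256 digest appears on the known list."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in known_hashes
```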

Additionally, the compromise introduces specific risk thresholds for categorising service providers. It emphasises adherence to data processing principles, particularly focusing on lawfulness, purpose limitation, and data minimisation in age verification measures.
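
As a concrete reading of that data-minimisation requirement, an age check can retain only the boolean outcome of the verification rather than the identity data behind it; a hypothetical sketch:

```python
from datetime import date

def is_over(dob: date, minimum_age: int, today: date | None = None) -> bool:
    """Derive an over-age flag from a date of birth.

    Data minimisation: only the boolean outcome is stored; the date of
    birth (and any identity document it came from) is discarded after
    the check rather than persisted.
    """
    today = today or date.today()
    birthday_passed = (today.month, today.day) >= (dob.month, dob.day)
    return (today.year - dob.year - (0 if birthday_passed else 1)) >= minimum_age

# Usage: verify once, keep only the outcome in the user record.
user_record = {"user_id": "u123", "age_verified_18": is_over(date(2001, 5, 14), 18)}
```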

Why does it matter?

The EU’s proposed legislation to employ surveillance technologies for detecting CSAM in digital messaging faced further scrutiny earlier this year when the European Ombudsman criticised the Commission for a lack of transparency in its communications with a child safety tech company. Critics argue that the proposal poses risks to privacy and fundamental freedoms and suggest it was influenced by lobbyists.

Since the law needs approval from both the Council and the Parliament, the next steps for the CSAM proposal remain to be determined.