Kyrgyzstan blocks TikTok over child protection concerns

Kyrgyzstan has banned TikTok following security service recommendations to safeguard children. The decision comes amid growing global scrutiny over the social media app’s impact on children’s mental health and data privacy.

The Kyrgyz digital ministry cited ByteDance’s failure to comply with child protection laws, sparking concerns from advocacy groups about arbitrary censorship. The decision reflects Kyrgyzstan’s broader trend of tightening control over media and civil society, departing from its relatively open stance.

Meanwhile, TikTok continues to face scrutiny worldwide over its data policies and alleged connections to the Chinese government.

Why does it matter?

This decision stems from legislative text approved last summer aimed at curbing the distribution of ‘harmful’ online content accessible to minors. Such content encompasses material featuring ‘non-traditional sexual relationships’, material that undermines ‘family values’, and material promoting illegal conduct, substance abuse, or anti-social behaviours. Chinese officials have not publicly commented on this decision, although in March, Beijing accused the US of ‘bullying’ over similar actions against TikTok.

UK bans sex offender from AI tools after child abuse conviction

A convicted sex offender in the UK has been banned from using ‘AI-creating tools’ for five years, marking the first known case of its kind. Anthony Dover, 48, received the prohibition as part of a sexual harm prevention order, preventing him from accessing AI generation tools without prior police permission. This includes text-to-image generators and ‘nudifying’ websites used to produce explicit deepfake content.

Dover’s case highlights the increasing concern over the proliferation of AI-generated sexual abuse imagery, prompting government action. The UK recently introduced a new offence making it illegal to create sexually explicit deepfakes of adults without consent, with offenders facing prosecution and unlimited fines. The move aims to address the evolving landscape of digital exploitation and safeguard individuals from the misuse of advanced technology.

Charities and law enforcement agencies emphasise the urgent need for collaboration to combat the spread of AI-generated abuse material. Recent prosecutions reveal a growing trend of offenders exploiting AI tools to create highly realistic and harmful content. The Internet Watch Foundation (IWF) and the Lucy Faithfull Foundation (LFF) stress the importance of targeting both offenders and tech companies to prevent the production and dissemination of such material.

Why does it matter?

The decision to restrict an adult sex offender’s access to AI tools sets a precedent for future monitoring and prevention measures. While the specific reasons for Dover’s ban remain unclear, it underscores the broader effort to mitigate the risks posed by digital advancements in sexual exploitation. Law enforcement agencies are increasingly adopting proactive measures to address emerging threats and protect vulnerable individuals from harm in the digital age.

European Commission gives TikTok 24 hours to provide risk assessment of TikTok Lite

European regulators have demanded a risk assessment from TikTok within 24 hours regarding its new app, TikTok Lite, recently launched in France and Spain. The European Commission, under the Digital Services Act (DSA), is concerned about potential impacts on children and users’ mental health. This action follows an investigation initiated two months ago into TikTok for potential breaches of the EU tech rules.

Thierry Breton, the EU industry chief, emphasised the need for TikTok to conduct a risk assessment before launching the app in the 27-country EU. The DSA requires platforms to take stronger actions against illegal and harmful content, with penalties of up to 6% of their global annual turnover for violations. Breton likened the potentially addictive and toxic nature of ‘social media lite’ to ‘cigarettes light,’ underlining the commitment to protecting minors under the DSA.

TikTok Lite, targeted at users aged 18+, includes a ‘Task and Reward Lite’ program that allows users to earn points by engaging in specific platform activities. These points can be redeemed for rewards like Amazon vouchers, PayPal gift cards, or TikTok coins for tipping creators. The Commission expressed concerns about the app’s impact on minors and users’ mental health, particularly potential addictive behaviours.

Why does it matter?

TikTok has been directed to provide the requested risk assessment for TikTok Lite within 24 hours and additional information by 26 April. The Commission will analyse TikTok’s response and determine the next steps. TikTok has acknowledged the request for information and stated that it is in direct contact with the Commission regarding this matter. Additionally, the Commission has asked for details on measures implemented by TikTok to mitigate systemic risks associated with the new app.

Mark Zuckerberg wins dismissal in lawsuits over social media harm to children

Meta CEO Mark Zuckerberg has secured the dismissal of certain claims in multiple lawsuits alleging that Facebook and Instagram concealed the harmful effects of their platforms on children. US District Judge Yvonne Gonzalez Rogers in Oakland, California, ruled in favour of Zuckerberg, dismissing claims from 25 cases that sought to hold him personally liable for misleading the public about platform safety.

The lawsuits, part of a broader litigation by children against social media giants like Meta, assert that Zuckerberg’s prominent role and public stature required him to fully disclose the risks posed by Meta’s products to children. However, Judge Rogers rejected this argument, stating it would establish an unprecedented duty to disclose for any public figure.

Despite dismissing claims against Zuckerberg, Meta remains a defendant in the ongoing litigation involving hundreds of lawsuits filed by individual children against Meta and other social media companies like Google, TikTok, and Snapchat. These lawsuits allege that social media use led to physical, mental, and emotional harm among children, including anxiety, depression, and suicide. The plaintiffs seek damages and a cessation of harmful practices by these tech companies.

Why does it matter?

The lawsuits highlight a broader concern about social media’s impact on young users, prompting legal action from states and school districts. Meta and other defendants deny wrongdoing and have emphasised their commitment to addressing these concerns. While some claims against Zuckerberg have been dismissed, the litigation against Meta and other social media giants continues as plaintiffs seek accountability and changes to practices allegedly detrimental to children’s well-being.

The ruling underscores the complex legal landscape surrounding social media platforms and their responsibilities regarding user safety, particularly among younger demographics. The outcome of these lawsuits could have significant implications for the regulation and oversight of social media companies as they navigate concerns related to their platforms’ impact on mental health and well-being.

Belgian EU Presidency proposes compromise text to strengthen online child protection laws

The Belgian EU Council Presidency has introduced a compromise text aimed at detecting and preventing online child sexual abuse material (CSAM). The proposal refines risk categorisation thresholds and outlines data retention obligations for service providers.

However, it has faced criticism for potentially allowing authorities to scan private messages on platforms like WhatsApp or Gmail. The draft legislation enables service providers to flag potential abuse, triggering detection orders mandating active searching for abusive content. Providers are also required to assist the newly established EU Centre by conducting audits at the source code level to combat such material.

Additionally, the compromise introduces specific risk thresholds for categorising service providers. It emphasises adherence to data processing principles, particularly focusing on lawfulness, purpose limitation, and data minimisation in age verification measures.

Why does it matter?

The EU’s proposed legislation to employ surveillance technologies for detecting CSAM in digital messaging faced further scrutiny earlier this year, when the European Ombudsman criticised the Commission for a lack of transparency in its communications with a child safety tech company. Critics argue that the proposal poses risks to privacy and fundamental freedoms and suggest that lobbyists influenced it.

Since the law needs approval from both the Council and the Parliament, the next steps for the CSAM proposal remain to be determined.

Meta tests features to protect teens on Instagram

Meta, Instagram’s parent company, has announced plans to trial new features aimed at protecting teens by blurring messages containing nudity. This initiative is part of Meta’s broader effort to address concerns surrounding harmful content on its platforms. The tech giant faces increasing scrutiny in the US and Europe amid allegations that its apps are addictive and contribute to mental health issues among young people.

The proposed protection feature for Instagram’s direct messages will utilise on-device machine learning to analyse images for nudity. It will be enabled by default for users under 18, with Meta urging adults to activate it as well. Because the analysis happens locally on the user’s device, the nudity protection feature will operate even in end-to-end encrypted chats, preserving both privacy and safety.
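
To make the privacy point concrete, the sketch below illustrates, under stated assumptions, how an on-device check could blur an image locally before it is shown, so that end-to-end encryption is never weakened. It is not Meta’s implementation; the classifier, threshold, and opt-in logic are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class DisplayDecision:
    blurred: bool
    warning_shown: bool


def decide_display(image_bytes: bytes,
                   classify: Callable[[bytes], float],  # stand-in for a local ML model
                   user_is_minor: bool,
                   adult_opted_in: bool,
                   threshold: float = 0.8) -> DisplayDecision:
    """Decide, entirely on the recipient's device, whether to blur an image.

    `classify` represents a hypothetical on-device model returning a nudity
    score in [0, 1]. Because it runs after decryption on the device itself,
    the message's end-to-end encryption is unaffected.
    """
    # The feature is on by default for under-18s; adults have to opt in.
    if not (user_is_minor or adult_opted_in):
        return DisplayDecision(blurred=False, warning_shown=False)
    flagged = classify(image_bytes) >= threshold
    return DisplayDecision(blurred=flagged, warning_shown=flagged)


# Example with a dummy classifier that treats every image as safe.
decision = decide_display(b"...", classify=lambda img: 0.0,
                          user_is_minor=True, adult_opted_in=False)
```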

Meta is also developing technology to identify accounts potentially involved in sextortion scams and is testing new pop-up messages to warn users who may have interacted with such accounts. These efforts come after Meta’s previous announcements regarding increased content restrictions for teens on Facebook and Instagram, particularly concerning sensitive topics like suicide, self-harm, and eating disorders.

Why does it matter?

The company’s actions follow legal challenges, including a lawsuit filed by 33 US states alleging that Meta misled the public about the dangers of its platforms. The European Commission has also requested information on Meta’s measures to protect children from illegal and harmful content in Europe. As Meta continues to navigate regulatory and public scrutiny, its focus on enhancing safety features underscores the ongoing debate surrounding social media’s impact on mental health and well-being, especially among younger users.

TikTok expands STEM education focus in EU amid regulatory scrutiny

TikTok is intensifying its focus on educational content amid mounting scrutiny in the US and the UK. The platform is rolling out its STEM feed across Europe, starting with the UK and Ireland, following its successful launch in the US last year. This dedicated feed, featuring science, technology, engineering, and mathematics content, will now appear alongside the main feed for users under 18 and can be enabled by older users through the app’s settings. Since its US debut, a third of teen users have come to engage regularly with the STEM feed, and STEM-related content production has surged notably.

The expansion comes with enhanced measures to ensure content quality and reliability. Namely, TikTok is partnering with Common Sense Networks and Poynter to vet the content appearing on the STEM feed. Common Sense Networks will assess appropriateness, while Poynter will evaluate information reliability. Content failing these checks will not qualify for the STEM feed, aiming to provide users with credible educational materials.

This move arrives amidst growing criticism over TikTok’s handling of harmful content and its impact on young users. Concerns have been raised about addictive design tactics and inadequate protection of minors from inappropriate content. In response, the EU is investigating TikTok’s compliance with online safety regulations.

By emphasising its educational initiatives, including the STEM feed, TikTok aims to position itself as a constructive platform for youth development, countering regulatory scrutiny and public concerns.

Why does it matter?

TikTok’s push for educational content aligns with its recent efforts to present a positive global image to lawmakers and stakeholders. The company has showcased the STEM feed in congressional hearings to refute accusations of harm to young users. Through initiatives like this, TikTok seeks to demonstrate its commitment to promoting learning and responsible content consumption while navigating regulatory challenges in multiple jurisdictions.

Schools and lawmakers ramp up media literacy education

As concerns grow over the proliferation of AI-generated disinformation, schools and lawmakers are doubling down on media literacy education. The push, already underway in 18 states, aims to equip students with the skills to discern fake news, which is particularly crucial as the 2024 presidential election looms. Beyond politics, the harmful effects of social media on children, including cyberbullying and online radicalisation, underscore the urgency of these efforts.

States like Delaware and New Jersey have set the bar high, mandating comprehensive media literacy standards for K-12 classrooms. These standards promote digital citizenship and empower students to navigate media safely. Yet disparities exist among states, with some, like Illinois, implementing more limited forms of media literacy education focused primarily on high school instruction.

In response to the lack of federal guidelines, bipartisan efforts in Congress, such as the AI Literacy Act, seek to address the gap. Introduced by Rep. Lisa Blunt Rochester and Rep. Larry Bucshon, the bill aims to integrate AI literacy into existing education programs, emphasising its importance for national competitiveness. However, progress on the bill has stalled since its introduction, leaving the federal approach to media literacy uncertain.

Despite variations in implementation, students across states are embracing media literacy education positively. For educators like Lisa Manganello in New Jersey, the focus is on fostering critical thinking and information literacy, irrespective of political affiliations. As misinformation continues to increase online, the need for media literacy education at the state and federal levels remains paramount to empower students as responsible digital citizens.

Belgian EU Council presidency unveils framework for online child protection law

A newly revealed document from the Belgian EU Council presidency sheds light on the risk assessment framework crucial for drafting a forthcoming law aimed at detecting and eliminating online child sexual abuse material (CSAM). The document, shared with the Council’s Law Enforcement Working Party (LEWP), underscores the Coordinated Authority’s pivotal role in receiving risk assessments, implementing mitigation measures, and orchestrating efforts to detect, report, and remove CSAM across the EU member states.

Building upon earlier approaches by the Belgian presidency, the document delves into categorising potential risks associated with online services, offering detailed methodologies and criteria for practical application. These methodologies include evaluating service types, core architecture, effectiveness of safety features, and user tendencies. Notably, the categorisation spans various parameters, such as service policies, user behaviour patterns, and safety protocols, emphasising safeguarding child users.

Proposed scoring methodologies within the risk categorisation system aim to streamline assessment processes with options like binary questions, hierarchical criteria, and sampling methods. These practices, integrated into a multi-class scoring framework, evaluate the efficacy of service policies and features in preventing child sexual abuse, facilitating a nuanced understanding of risk levels across different platforms.
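
The compromise text does not prescribe any particular implementation, but a toy illustration can show how weighted binary questions might roll up into a multi-class risk category of the kind described above. The questions, weights, and thresholds below are invented for illustration and are not taken from the Council document.

```python
# Illustrative only: a toy roll-up of weighted binary risk questions into a
# coarse risk class. All questions, weights, and thresholds are hypothetical.
RISK_QUESTIONS = {
    "allows_direct_messaging_to_minors": 3,
    "lacks_default_private_profiles_for_minors": 2,
    "no_effective_age_verification": 2,
    "allows_unmoderated_image_sharing": 3,
}


def risk_score(answers: dict[str, bool]) -> int:
    """Sum the weights of every risk question answered 'yes'."""
    return sum(weight for question, weight in RISK_QUESTIONS.items()
               if answers.get(question, False))


def risk_category(score: int) -> str:
    """Map a numeric score onto a multi-class category."""
    if score >= 7:
        return "high"
    if score >= 4:
        return "medium"
    return "low"


service_answers = {
    "allows_direct_messaging_to_minors": True,
    "lacks_default_private_profiles_for_minors": False,
    "no_effective_age_verification": True,
    "allows_unmoderated_image_sharing": False,
}
print(risk_category(risk_score(service_answers)))  # -> "medium"
```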

Why does it matter?

The document signals a clear approach to refining the CSAM legislation, emphasising alignment with fundamental rights and the need for robust safeguards. As discussions progress, the focus remains on extracting fundamental principles and identifying core aspects crucial for effective risk assessment and mitigation strategies in combating online child sexual abuse.

UN Secretary-General issues policy brief for Global Digital Compact

As part of the process towards developing a Global Digital Compact (GDC), the UN Secretary-General has issued a policy brief outlining areas in which ‘the need for multistakeholder digital cooperation is urgent’: closing the digital divide and advancing sustainable development goals (SDGs), making the online space open and safe for everyone, and governing artificial intelligence (AI) for humanity. 

The policy brief also suggests objectives and actions to advance such cooperation and ‘safeguard and advance our digital future’. These are structured around the following topics:

  • Digital connectivity and capacity building. The overarching objectives here are to close the digital divide and empower people to participate fully in the digital economy. Proposed actions range from common targets for universal and meaningful connectivity to putting in place or strengthening public education for digital literacy. 
  • Digital cooperation to accelerate progress on the SDGs. Objectives include making targeted investments in digital public infrastructure and services, making data representative, interoperable, and accessible, and developing globally harmonised digital sustainability standards. Among the proposed actions are the development of definitions of safe, inclusive, and sustainable digital public infrastructures, fostering open and accessible data ecosystems, and developing a common blueprint on digital transformation (something the UN would do). 
  • Upholding human rights. Putting human rights at the centre of the digital future, ending the gender digital divide, and protecting workers are the outlined objectives in this area. One key proposed action is the establishment of a digital human rights advisory mechanism, facilitated by the Office of the UN High Commissioner for Human Rights, to provide guidance on human rights and technology issues. 
  • An inclusive, open, secure, and shared internet. There are two objectives: safeguarding the free and shared nature of the internet, and reinforcing accountable multistakeholder governance. Some of the proposed actions include commitments from governments to avoid blanket internet shutdowns and refrain from actions disrupting critical infrastructures.
  • Digital trust and security. Objectives range from strengthening multistakeholder cooperation to elaborate norms, guidelines, and principles on the responsible use of digital technologies, to building capacity and expanding the global cybersecurity workforce. The proposed overarching action is for stakeholders to commit to developing common standards and industry codes of conduct to address harmful content on digital platforms. 
  • Data protection and empowerment. Ensuring that data are governed for the benefit of all, empowering people to control their personal data, and developing interoperable standards for data quality are envisioned as key objectives. Among the proposed actions are an invitation for countries to consider adopting a declaration on data rights and seeking convergence on principles for data governance through a potential Global Data Compact. 
  • Agile governance of AI and other emerging technologies. The proposed objectives relate to ensuring transparency, reliability, safety, and human control in the design and use of AI; putting transparency, fairness, and accountability at the core of AI governance; and combining existing norms, regulations, and standards into a framework for agile governance of AI. Actions envisioned range from establishing a high-level advisory body for AI to building regulatory capacity in the public sector. 
  • Global digital commons. Objectives include ensuring inclusive digital cooperation, enabling regular and sustained exchanges across states, regions, and industry sectors, and developing and governing technologies in ways that enable sustainable development, empower people, and address harms. 

The document further notes that ‘the success of a GDC will rest on its implementation’. This implementation would be carried out by different stakeholders at the national, regional, and sectoral levels, and be supported by spaces such as the Internet Governance Forum and the World Summit on the Information Society Forum. One suggested way to support multistakeholder participation is through a trust fund that could sponsor a Digital Cooperation Fellowship Programme. 

As a mechanism to follow up on the implementation of the GDC, the policy brief suggests that the Secretary-General could be tasked to convene an annual Digital Cooperation Forum (DCF). The mandate of the forum would also include, among other things, facilitating collaboration across digital multistakeholder frameworks and reducing duplication; promoting cross-border learning in digital governance; and identifying and promoting policy solutions to emerging digital challenges and governance gaps.