Schools and lawmakers ramp up media literacy education

As concerns grow over the proliferation of AI-generated disinformation, schools and lawmakers are doubling down on media literacy education. The push, already underway in 18 states, aims to equip students with the skills to recognise fake news, which is seen as particularly crucial as the 2024 US presidential election looms. Beyond politics, the harmful effects of social media on children, including cyberbullying and online radicalisation, underscore the urgency of these efforts.

States like Delaware and New Jersey have set the bar high, mandating comprehensive media literacy standards for K-12 classrooms. These standards promote digital citizenship and empower students to navigate media safely. Yet disparities exist among states, with some, like Illinois, implementing more limited forms of media literacy education, focused primarily on high school instruction.

In the absence of federal guidelines, bipartisan efforts in Congress, such as the AI Literacy Act, seek to fill the gap. Introduced by Rep. Lisa Blunt Rochester and Rep. Larry Bucshon, the bill aims to integrate AI literacy into existing education programs, emphasising its importance for national competitiveness. However, progress on the bill has stalled since its introduction, leaving the federal approach to media literacy uncertain.

Despite variations in implementation, students across states are responding positively to media literacy education. For educators like Lisa Manganello in New Jersey, the focus is on fostering critical thinking and information literacy, irrespective of political affiliations. As misinformation continues to increase online, the need for media literacy education at the state and federal levels remains paramount to empower students as responsible digital citizens.

Belgian EU Council presidency unveils framework for online child protection law

A newly revealed document from the Belgian EU Council presidency sheds light on the risk assessment framework crucial for drafting a forthcoming law aimed at detecting and eliminating online child sexual abuse material (CSAM). The document, shared with the Council’s Law Enforcement Working Party (LEWP), underscores the Coordinated Authority’s pivotal role in receiving risk assessments, implementing mitigation measures, and orchestrating efforts to detect, report, and remove CSAM across the EU member states.

Building upon earlier approaches by the Belgian presidency, the document delves into categorising potential risks associated with online services, offering detailed methodologies and criteria for practical application. These methodologies include evaluating service types, core architecture, effectiveness of safety features, and user tendencies. Notably, the categorisation spans various parameters, such as service policies, user behaviour patterns, and safety protocols, with particular emphasis on safeguarding child users.

Proposed scoring methodologies within the risk categorisation system aim to streamline assessment processes with options like binary questions, hierarchical criteria, and sampling methods. These practices, integrated into a multi-class scoring framework, evaluate the efficacy of service policies and features in preventing child sexual abuse, facilitating a nuanced understanding of risk levels across different platforms.
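To make the abstract methodology concrete, here is a minimal Python sketch of how binary questions could feed a multi-class scoring framework of the kind described above. The criteria, weights, and thresholds are invented for illustration and do not come from the Council document.

```python
# Hypothetical sketch of a multi-class risk scoring scheme; criteria names,
# weights, and thresholds are invented and not taken from the document.
from dataclasses import dataclass

@dataclass
class Criterion:
    question: str   # binary question asked about the service
    weight: float   # contribution to the risk score if answered "yes"

CRITERIA = [
    Criterion("Does the service allow direct messaging between users?", 3.0),
    Criterion("Can adults contact child users by default?", 4.0),
    Criterion("Is user-generated content shared publicly?", 2.0),
    Criterion("Are safety features (reporting, blocking) missing?", 3.0),
]

def risk_class(answers: list[bool]) -> str:
    """Map binary answers onto a multi-class risk level via weighted scoring."""
    score = sum(c.weight for c, yes in zip(CRITERIA, answers) if yes)
    ceiling = sum(c.weight for c in CRITERIA)
    ratio = score / ceiling
    if ratio >= 0.75:
        return "high"
    if ratio >= 0.4:
        return "medium"
    return "low"

print(risk_class([True, True, False, False]))  # -> "medium"
```

Hierarchical criteria or sampling of actual user content could then refine the class assigned by such a first-pass questionnaire.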

Why does it matter?

The document signals a clear approach to refining the CSAM legislation, emphasising alignment with fundamental rights and the need for robust safeguards. As discussions progress, the focus remains on extracting fundamental principles and identifying core aspects crucial for effective risk assessment and mitigation strategies in combating online child sexual abuse.

UN Secretary-General issues policy brief for Global Digital Compact

As part of the process towards developing a Global Digital Compact (GDC), the UN Secretary-General has issued a policy brief outlining areas in which ‘the need for multistakeholder digital cooperation is urgent’: closing the digital divide and advancing sustainable development goals (SDGs), making the online space open and safe for everyone, and governing artificial intelligence (AI) for humanity. 

The policy brief also suggests objectives and actions to advance such cooperation and ‘safeguard and advance our digital future’. These are structured around the following topics:

  • Digital connectivity and capacity building. The overarching objectives here are to close the digital divide and empower people to participate fully in the digital economy. Proposed actions range from setting common targets for universal and meaningful connectivity to putting in place or strengthening public education for digital literacy. 
  • Digital cooperation to accelerate progress on the SDGs. Objectives include making targeted investments in digital public infrastructure and services, making data representative, interoperable, and accessible, and developing globally harmonised digital sustainability standards. Among the proposed actions are the development of definitions of safe, inclusive, and sustainable digital public infrastructures, fostering open and accessible data ecosystems, and developing a common blueprint on digital transformation (something the UN would do). 
  • Upholding human rights. Putting human rights at the centre of the digital future, ending the gender digital divide, and protecting workers are the outlined objectives in this area. One key proposed action is the establishment of a digital human rights advisory mechanism, facilitated by the Office of the UN High Commissioner for Human Rights, to provide guidance on human rights and technology issues. 
  • An inclusive, open, secure, and shared internet. There are two objectives: safeguarding the free and shared nature of the internet, and reinforcing accountable multistakeholder governance. Some of the proposed actions include commitments from governments to avoid blanket internet shutdowns and refrain from actions disrupting critical infrastructures.
  • Digital trust and security. Objectives range from strengthening multistakeholder cooperation to elaborate norms, guidelines, and principles on the responsible use of digital technologies, to building capacity and expanding the global cybersecurity workforce. The proposed overarching action is for stakeholders to commit to developing common standards and industry codes of conduct to address harmful content on digital platforms. 
  • Data protection and empowerment. Ensuring that data are governed for the benefit of all, empowering people to control their personal data, and developing interoperable standards for data quality are envisioned as key objectives. Among the proposed actions are an invitation for countries to consider adopting a declaration on data rights and seeking convergence on principles for data governance through a potential Global Data Compact. 
  • Agile governance of AI and other emerging technologies. The proposed objectives relate to ensuring transparency, reliability, safety, and human control in the design and use of AI; putting transparency, fairness, and accountability at the core of AI governance; and combining existing norms, regulations, and standards into a framework for agile governance of AI. Actions envisioned range from establishing a high-level advisory body for AI to building regulatory capacity in the public sector. 
  • Global digital commons. Objectives include ensuring inclusive digital cooperation, enabling regular and sustained exchanges across states, regions, and industry sectors, and developing and governing technologies in ways that enable sustainable development, empower people, and address harms. 

The document further notes that ‘the success of a GDC will rest on its implementation’. This implementation would be done by different stakeholders at the national, regional, and sectoral level, and be supported by spaces such as the Internet Governance Forum and the World Summit on the Information Society Forum. One suggested way to support multistakeholder participation is through a trust fund that could sponsor a Digital Cooperation Fellowship Programme. 

As a mechanism to follow up on the implementation of the GDC, the policy brief suggests that the Secretary-General could be tasked to convene an annual Digital Cooperation Forum (DCF). The mandate of the forum would also include, among other things, facilitating collaboration across digital multistakeholder frameworks and reducing duplication; promoting cross-border learning in digital governance; and identifying and promoting policy solutions to emerging digital challenges and governance gaps.

Regulating digital games

Regulating digital games has come to the forefront recently due to advances in gaming technology. In the 1990s, the US Congress pushed the games industry to set up the Entertainment Software Rating Board (ESRB) to assign age ratings; Europe followed with the Pan European Game Information (PEGI) system in 2003. These rating systems are similar to those used for films.

As online games have become more influential, the question of content regulation has grown in importance. The issue came into focus after the Christchurch shootings in 2019, when users of Roblox, an online gaming platform, began re-enacting the attack. Since that incident, Roblox has employed “thousands” of human moderators alongside artificial intelligence to vet user-submitted games and police chat among its 60 million daily users, whose average age is about 13.

This has prompted debates on how to regulate social media-like conversations within games. Some politicians have argued that in-game chat deserves constitutional protection because it resembles private one-to-one conversation.

Game makers are doing their best to design out bad behaviour before it occurs. For example, when users of Horizon Worlds complained of being virtually groped, a minimum distance between avatars was introduced.
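As an illustration of this ‘design it out’ approach, the sketch below shows how a minimum-distance rule between avatars might be enforced. The 1.2 m radius and the clamping strategy are assumptions for illustration, not Meta’s actual implementation.

```python
# Minimal sketch of a "personal boundary" between avatars; the radius and
# clamping strategy are illustrative assumptions, not Meta's implementation.
import math

PERSONAL_BOUNDARY_M = 1.2  # assumed minimum avatar-to-avatar distance

def clamp_position(mover, other):
    """If `mover` would enter `other`'s personal boundary, push it back
    to the boundary edge along the line between the two avatars."""
    dx, dy = mover[0] - other[0], mover[1] - other[1]
    dist = math.hypot(dx, dy)
    if dist >= PERSONAL_BOUNDARY_M or dist == 0:
        return mover  # no intrusion (exact overlap would need separate handling)
    scale = PERSONAL_BOUNDARY_M / dist
    return (other[0] + dx * scale, other[1] + dy * scale)

print(clamp_position((0.5, 0.0), (0.0, 0.0)))  # -> (1.2, 0.0)
```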

As with content moderation on social media, online gaming will trigger new policy and governance challenges.

US state of Utah introduces laws that prohibit social media platforms from allowing access to minors without explicit parental consent

In the USA, the Governor of Utah, Spencer Cox, has signed two laws introducing new measures intended to protect children online. The first law prohibits social media companies from using ‘a practice, design, or feature that […] the social media company knows, or which by the exercise of reasonable care should know, causes a Utah minor account holder to have an addiction to the social media platform’. The second law introduces age requirements for the use of social media platforms: Social media companies are required to introduce age verification for users in Utah and to allow minors to create user accounts only with the express consent of a parent or guardian. The laws also prohibit social media companies from advertising to minors, collecting information about them, or targeting content to them. In addition, companies are required to enable parents or guardians to access minors’ accounts, and minors must not be allowed to access their social media accounts between 10:30 pm and 6:30 am.
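One implementation detail worth noting: the mandated 10:30 pm to 6:30 am window crosses midnight, so a naive range comparison would fail. Below is a minimal sketch of a server-side check; the function and interface are hypothetical, though the window itself comes from the law as described above.

```python
# Overnight-curfew check; the window wraps past midnight, so a plain
# start <= now < end comparison would be wrong.
from datetime import time

CURFEW_START = time(22, 30)  # 10:30 pm
CURFEW_END = time(6, 30)     # 6:30 am

def in_curfew(now: time) -> bool:
    """True if `now` falls inside a window that may wrap past midnight."""
    if CURFEW_START <= CURFEW_END:
        return CURFEW_START <= now < CURFEW_END
    return now >= CURFEW_START or now < CURFEW_END

print(in_curfew(time(23, 0)))  # True  (11:00 pm)
print(in_curfew(time(7, 0)))   # False (7:00 am)
```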

The laws – set to enter into force in March 2024 – have been criticised by civil liberties groups and tech lobby groups who argue that they are overly broad and could infringe on free speech and privacy rights. Social media companies will likely challenge the new rules.

European Parliament calls for strengthened consumer protection in online video games

The European Parliament adopted a report on consumer protection in online video games, calling for better protection of gamers from addiction and manipulative practices. The report notes the need for harmonised rules that would give parents an overview of and control over the games played by their children. It also highlights the importance of clearer information on the content, in-game purchase policies, and target age group of games.

In the view of the European Parliament, online video games should prioritise data protection, gender balance, and the safety of players and should not discriminate against people with disabilities. Moreover, cancelling game subscriptions must be as easy as subscribing to them. The purchase, return, and refund policies must comply with EU rules.

New report reveals how US adolescents engage with or experience pornography online

According to research by Common Sense Media, 75% of US teens have seen online pornography by the time they are 17, with the average age of first exposure being 12. The report aims to provide a baseline for understanding US teens’ pornography use and the role that internet pornography plays in adolescent life in the USA.

The study was based on a poll of 1,358 Americans between the ages of 13 and 17. More than half of those surveyed reported having seen violent pornography, including depictions of rape, suffocation, or people in pain. A majority of respondents said that pornography portrayed stereotypes of Asian, Black, and Latino people. More than half of the respondents said they felt bad or ashamed after seeing porn, while 45% felt that pornography gave them useful information about sex. LGBTQ teenagers in particular said it helped them learn more about their sexuality.

More time spent online might increase the risk of OCD for children

Preteens who spend more time playing internet games or watching online videos are more likely to develop obsessive-compulsive disorder (OCD). This is the conclusion of the Adolescent Brain Cognitive Development (ABCD) study, the most extensive long-term investigation of brain development in American children. For every additional hour spent playing video games, preteens had a 13% higher chance of developing OCD within two years; for every additional hour spent watching online videos, the chance increased by 11%. According to the report, schools can play a vital role in ensuring that adolescents form positive digital habits at a crucial juncture in their development.
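For a sense of scale, if the per-hour figures combine multiplicatively, as is typical for odds or hazard ratios (a modelling assumption on our part, not stated in the study summary), the compounded relative risk can be computed as follows:

```python
# Back-of-the-envelope illustration of how the per-hour figures compound,
# assuming a multiplicative per-hour effect (our assumption, not the study's).
def relative_risk(gaming_hours: float, video_hours: float) -> float:
    return (1.13 ** gaming_hours) * (1.11 ** video_hours)

# A preteen who games 2 h and watches 1 h of videos more than a peer:
print(round(relative_risk(2, 1), 2))  # -> 1.42, i.e. ~42% higher chance
```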

Epic Games to pay $520 million penalty in USA over privacy violations and ‘dark patterns’ cases

The US Federal Trade Commission and the creator of Fortnite, Epic Games, have reached a settlement under which the company will pay a total of US$ 520 million over allegations that it violated the Children’s Online Privacy Protection Act (COPPA) and used dark patterns to trick players into making unintentional purchases.

For allegations related to collecting personal information from Fortnite players under the age of 13 without obtaining consent from their parents or caregivers, Epic has agreed to pay a US$ 275 million penalty. Furthermore, the FTC determined that Epic’s default settings for its live text and voice communication features, together with its practice of matching children with adult strangers to play Fortnite, exposed young players to harassment and abuse. Epic is also required to adopt strong privacy default settings for children and teens, ensuring that voice and text communications are turned off by default.

In a second case, Epic agreed to pay US$ 245 million to refund users affected by its dark patterns and billing practices.
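To illustrate what age-gated privacy defaults of the kind mandated by the settlement might look like, here is a minimal sketch; the field names and age threshold structure are assumptions for illustration, not Epic’s actual settings model.

```python
# Illustrative age-gated privacy defaults; field names and thresholds are
# assumptions, not Epic's actual settings model.
from dataclasses import dataclass

@dataclass
class CommsSettings:
    voice_chat: bool
    text_chat: bool

def default_settings(age: int) -> CommsSettings:
    """Children and teens get voice and text chat off by default;
    adult accounts keep communication features enabled."""
    if age < 18:
        return CommsSettings(voice_chat=False, text_chat=False)
    return CommsSettings(voice_chat=True, text_chat=True)

print(default_settings(13))  # CommsSettings(voice_chat=False, text_chat=False)
```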