X (Twitter) will no longer allow advertisers to promote accounts on the timeline

X, previously known as Twitter, has announced that it is discontinuing promoted accounts, or ‘Follower Objective’ ads, the timeline format advertisers use to attract new followers. The decision, outlined in an email to advertising clients, is part of X’s broader strategy to enhance content formats and experiences. The change is significant because of the substantial revenue promoted accounts generate, contributing over $100 million annually to X’s global revenue.

Promoted accounts, a long-established advertising format, appear as text-based posts with a ‘Follow’ button and have been a staple of X’s advertising offerings. However, these ads lack the multimedia formats, such as video, that X aims to emphasize. Despite the hit to ad revenue, X is shifting its focus towards optimizing user experiences through other content formats.

Why does it matter?

According to insiders, the move reflects a product-driven evolution rather than a revenue-driven motive. The change was abrupt, leaving the client team little time to communicate the shift to advertisers. While follower ads represent a relatively small portion of X’s overall ad revenue, their removal comes at a time when the company is struggling to maintain ad revenue and profitability.

Russia fines Reddit over ‘prohibited’ content

On Tuesday, Russia imposed its first-ever fine on the social media platform Reddit for failing to remove ‘prohibited content’, alleged to contain false details about Russia’s military operations in Ukraine. The penalty was reported by RIA, citing a Moscow court. The move places Reddit among several online platforms facing scrutiny in Russia for failing to promptly delete content the government considers unlawful; notable examples include Wikimedia, Twitch, and Google. The Russian government has been actively monitoring and taking action against platforms hosting content it deems in violation of its legal framework.

Why does it matter?

This instance underscores the ongoing tension between social media platforms and governments striving to regulate online content within their jurisdictions. The fine on Reddit marks a notable development in Russia’s efforts to assert control over digital information dissemination, as it enforces penalties for non-compliance with content removal requests.

The Philippines government launches a nationwide media literacy campaign

The Presidential Communications Office (PCO) is poised to launch the Marcos administration’s Media and Information Literacy (MIL) Project, aimed at combating misinformation. President Ferdinand R. Marcos Jr. led the project launch alongside various government bodies, stressing the importance of media literacy and of collaborative efforts against fake news. The MIL Project aims to equip citizens, particularly the youth, with the critical thinking skills needed to navigate the media. Undersecretary Emerald Ridao highlighted education’s role in empowering individuals to discern credible sources. The campaign seeks to nurture critical thinking and responsible engagement, and attendees were encouraged to promote media literacy collectively.

The Department of Social Welfare and Development (DSWD) pledged its support, recognizing the harmful impact of misinformation, and will ensure that its beneficiaries rely on trustworthy information sources. The MIL Project involves multiple agencies led by the PCO: the Department of Education (DepEd) will manage surveys, discussions, and the integration of MIL into the curriculum; the Commission on Higher Education (CHED) will foster collaboration with universities; and the Department of the Interior and Local Government (DILG) will coordinate local implementation. The Marcos administration aims to create a media-literate society capable of distinguishing truth from falsehood, uniting government, education, and the public in this effort.

Why does it matter?

Media literacy is vital in the Philippines to combat the rampant spread of fake news and misinformation, safeguarding the public’s ability to discern accurate information from deceptive content. The nationwide approach is crucial for fostering informed decision-making, combating the negative impacts of fake news, and strengthening the country’s democratic discourse.

Spanish-language climate misinformation proliferates amid extreme weather events

As extreme weather events continue to capture global attention, a surge in Spanish-language misinformation and disinformation related to climate change has emerged, posing challenges to accurate public understanding.

Researchers point to a rise in false information during media coverage of extreme weather events and discussions of climate policy. A recent report by Media Matters highlighted the spread of Spanish-language climate change conspiracy theories on platforms like TikTok, spotlighting content that denies climate change or portrays it as a hoax.

While an established network of US-based English-language social media accounts propagates climate denialism and opposes climate action, no comparable network exists in the Spanish-speaking sphere. Instead, a network of Spanish-language social media accounts, based primarily in Spain and Latin America, has emerged; it engages with a broader right-wing agenda and relies heavily on translated right-wing content originating in the US.

Why does it matter?

The influence of misinformation is particularly pronounced among Latinos, who are more likely to rely on social media for information. This trend underscores the need for improved content moderation and awareness campaigns to counter the spread of climate misinformation among Spanish-speaking communities, particularly given the disproportionate impact of climate change on Latino populations in the United States and beyond.

Iraq lifts Telegram ban after company addresses data leak concerns

Iraq’s telecoms ministry lifted its ban on the Telegram messaging app on Sunday. The ban had been imposed earlier in the week over concerns about security and the leakage of sensitive information belonging to government entities and citizens. In Iraq, Telegram is not just a means of communication but also a platform for news dissemination and content sharing.

Certain channels on the app had carried substantial personal data, including the names, addresses, and familial connections of Iraqi residents. The ministry said it lifted the ban because the company that owns Telegram had complied with security authorities’ demands to reveal the sources responsible for the data leaks and had expressed willingness to cooperate with the relevant authorities.

Telegram’s press team noted in response to a Reuters inquiry that sharing private data without consent violates its terms of service and that moderators consistently remove such content. The team confirmed the removal of channels containing personal data but emphasized that Telegram had neither been asked for private user data nor shared any. The previous week, the ministry had stated that Telegram’s owner had not responded to its request to shut down channels leaking official government and personal citizen data.

Why does it matter?

The decision to lift the ban on Telegram underscores the challenge governments face in balancing national security against citizens’ right to use digital communication tools. It is worth noting the discrepancy between the two accounts: Iraq’s Ministry of Communications (MoC) said the app complied with security authorities’ demands, whereas Telegram emphasized that it had received no requests for private user data and had shared none. Such a disclosure would be unprecedented; faced with similar requests from countries like Russia and Brazil, Telegram has consistently refused to provide any data that could identify users.

Ofcom report highlights challenges in understanding video-sharing platform rules

The media regulator Ofcom has found that many users of video-sharing platforms like OnlyFans, Twitch, and Snapchat struggle to comprehend those platforms’ complex and lengthy terms and conditions. Ofcom examined six platforms and found that understanding the rules required advanced reading skills, making the terms unsuitable for children.

Jessica Zucker, Ofcom’s online safety policy director, emphasized that unambiguous rules are essential for protecting users, especially children, from harm. OnlyFans had the longest terms, taking over an hour for adult users to read, followed by Twitch, Snapchat, TikTok, Brand New Tube, and BitChute. Ofcom assessed the terms as difficult to read, except for TikTok’s, which were more accessible but still challenging for young users.
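For a sense of how such an assessment can be quantified, the sketch below computes a Flesch-Kincaid-style grade level, a standard readability metric, for a passage of text. This is a hypothetical illustration, not Ofcom’s actual methodology, and the sample clause is invented for the example.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: each run of vowels counts as one syllable, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59

# Invented clause in the style of platform terms of service.
clause = ("You irrevocably grant us a perpetual, worldwide, royalty-free "
          "licence to reproduce, distribute, and publicly display any "
          "content you submit through the service.")
print(f"Approximate US grade level: {fk_grade(clause):.1f}")
```

Scores well above grade 12 on metrics like this indicate text that demands university-level reading, far beyond what the youngest users of these platforms can be expected to parse.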

Some platforms use “clickwrap” agreements, which make it easy for users to accept the terms without reading them. Users also showed little understanding of what content is prohibited or of the potential consequences of rule violations. Moderators’ training and resources varied, affecting their ability to enforce the terms effectively. Platforms like Snapchat acknowledged the need for clearer guidelines and committed to improvements based on Ofcom’s findings.

Why does it matter?

While these platforms have become integral to modern communication and entertainment, the complexity and length of their rules create significant barriers to users’ understanding of their rights and responsibilities. Moreover, methods like “clickwrap” raise crucial questions about users’ informed consent and subsequent accountability. As for enforcement, inconsistent training and resources for moderators can lead to uneven application of the rules, potentially allowing harmful or prohibited content to go unchecked. Twitter’s decision last year to scale down its team of contract-based content monitors drew criticism from various organizations, further igniting discussion of the platform’s content governance practices and the essential role moderators play in enforcing them.

Georgia considers law requiring parental consent for children’s social media accounts

Lawmakers in the US state of Georgia, including Lt. Gov. Burt Jones and Sen. Jason Anavitarte, are considering a law that would require children to obtain explicit parental consent before creating social media accounts. The proposal, planned for 2024, could also apply to other online services.

Anavitarte emphasized the importance of empowering parents who might not know how to manage content access for their children. This initiative takes inspiration from Louisiana’s recently passed law, which mandates age verification and parental consent for minors joining social media platforms. Similar laws have been enacted in Arkansas, Texas, and Utah.

Surgeon General Vivek Murthy has expressed concerns about the safety of social media for young people. Meta Platforms, owner of Facebook and Instagram, has been contacted regarding the plans. The move comes in response to the popularity of social media among teenagers: the Pew Research Center reports that up to 95% of teens aged 13 to 17 use such platforms. Anavitarte also aims to strengthen Georgia’s cyberbullying law.

Why does it matter?

With the surge in underage social media use and its associated risks, such legislative efforts appear well-intentioned in safeguarding minors from online threats, including cyberbullying, inappropriate content, and privacy breaches. However, free speech advocates caution that these measures might lead websites to restrict access to information and create obstacles for adult users as well. Additionally, such laws might prompt online platforms to require government-issued identification for age verification, as some pornography sites already do, raising concerns about privacy and potential data breaches.

UK regulator investigates Snapchat’s handling of underage users

The UK’s data regulator is gathering information about Snapchat’s efforts to remove underage users from its platform. This follows a report revealing that Snap Inc., the company behind Snapchat, had removed only a small number of underage users in the UK, despite estimates that thousands were on the platform.

UK law requires parental consent before processing the data of children under 13. Snapchat generally requires users to be at least 13 but has not disclosed what measures it takes to address the issue. The Information Commissioner’s Office (ICO) has received complaints and is assessing whether Snap breached the rules; if so, Snap could be fined up to 4% of its annual global turnover.

Similar pressure has fallen on other social media platforms, such as TikTok, which was fined for mishandling children’s data. Snapchat blocks users from signing up if they enter an age under 13, but other platforms take more proactive steps to prevent underage access.

Why does it matter?

As social media platforms increasingly become venues for cyberbullying, inappropriate content, and other risks with potentially lasting psychological and emotional effects on young people, governments are contemplating protective measures. Platforms have recently encountered hurdles navigating a complex landscape of US state laws that demand age verification and seek enhanced parental control over children’s accounts. The focus is both on user safety and on holding social media companies to responsible practices, particularly regarding children’s welfare.

Tech groups rally behind TikTok in lawsuit against Montana state ban

Two prominent technology advocacy groups, NetChoice and Chamber of Progress, have thrown their support behind TikTok’s legal battle to thwart the enforcement of a forthcoming ban on the popular short video-sharing app in Montana.

The groups jointly filed a court document asserting that Montana’s ban on TikTok contradicts the fundamental principles of the internet and could lead to a fragmented online experience. TikTok, owned by China’s ByteDance, has been locked in a legal dispute since May, claiming that the state’s ban infringes upon the First Amendment rights of the company and its users.

The tech groups contended that the ban, if implemented, could lead to a fragmented internet where access to information is restricted based on local political preferences, diminishing the overall value of the internet for humanity. A court hearing on TikTok’s request for a preliminary injunction is scheduled for October 12th.

Why does it matter?

Former President Donald Trump attempted to block new downloads of TikTok in 2020, but his efforts were thwarted by a series of court decisions; the episode nonetheless prompted a cascade of global deliberations. Concerns include data privacy and content censorship, particularly the potential for Chinese access to user data. Austria recently joined the UK and EU states in prohibiting TikTok on government devices, reflecting this global trend. Tech groups argue that allowing states to ban specific online platforms could set a consequential precedent, leading to a fragmented internet experience and curtailing users’ access to global networks.

UK government introduces new rules to protect children online

The UK government is introducing new rules to combat illegal advertisements and influencer scams and to protect children online. The rules will require social media platforms, websites, and advertising display networks to take stronger measures against age-restricted adverts that target children with products like alcohol or gambling. The aim is to address fraudulent celebrity endorsements, pop-up malware, and promotions for prohibited products such as weapons and drugs.

Online advertising is currently overseen by a self-regulatory system run by the Advertising Standards Authority (ASA), which can address harmful advertising by legitimate businesses but lacks the power to tackle illegal harms as effectively. The government plans to introduce statutory regulation to tackle illegal paid-for online adverts and enhance child protection. The new regulation will extend responsibilities to major players across the online advertising supply chain, including ad tech intermediary services and promotional posts by social media influencers.

The government will launch a consultation on the specifics of potential legislation, including its preferred choice of regulator to oversee the new rules. A dedicated task force will gather evidence on illegal advertising and collaborate with industry initiatives to protect children and address harmful practices. The proposed regulations aim to strike a balance between internet safety and support for innovation in online advertising, while ensuring transparency, consumer trust, and industry growth.