Canadian parliamentary committee urges accountability for tech giants over false online information

A parliamentary committee in Canada is recommending that tech giants be held responsible for sharing false or misleading information online, particularly when foreign actors propagate it. However, the Conservatives on the committee did not back this call, saying it would endorse online censorship.

This is one of 22 recommendations by the House Ethics Committee, which studied foreign interference in Canada’s affairs, focusing on China and Russia. The committee’s report also calls for creating a foreign agent registry and improved measures to combat online misinformation.

The government has 60 days to respond to these recommendations, which have gained attention due to increasing concerns about foreign meddling in the country’s internal matters.

Why does it matter?

Concerns have been mounting in Western countries about campaigns orchestrated by foreign actors. In August, Meta, in collaboration with Australian research groups, dismantled the world’s largest covert Chinese spam network. This network was designed to target global users, promoting pro-China content while criticizing Western nations and their policies. A recent US intelligence report also revealed Russia’s extensive efforts to undermine public trust in global elections through espionage, state-controlled media, and manipulation of social media. In the era of digital information warfare, nations face the challenge of safeguarding their democratic processes and preserving public trust.

EU requires TikTok and Meta to provide information on disinformation

The European Union has formally required TikTok and Meta to provide information concerning the potential dissemination of false or misleading information related to the Israel-Gaza conflict on their platforms. Unlike the previous non-binding request, this latest demand carries legal force, and the companies now have one week to respond.

Under the Digital Services Act (DSA), failure to comply could result in fines of up to 6% of a company’s global turnover or even suspension of their platforms. The EU’s primary concern is the spread of terrorist and violent content and hate speech following the recent conflict involving Hamas. This follows the EU’s recent contact with X on similar concerns, further highlighting the increasing focus on addressing misinformation and ensuring online safety.

Why does it matter?

This request follows the European Commission’s selection in April of 19 major digital platforms, including Meta, TikTok, Amazon, Apple’s App Store, Microsoft’s LinkedIn, Google services, and X, to adhere to stricter content moderation rules under the DSA. While the DSA’s ability to impose substantial fines is a compelling motivation for these companies to comply with the EU’s requirements, how this situation unfolds will be closely watched to assess the practical impact of these regulatory measures.

Meta’s news block in Canada has minimal impact on Facebook usage, data shows

Meta’s recent decision to block news links within Canada’s borders has not significantly altered Facebook usage patterns, according to data from independent tracking firms. Despite criticism from Canadian authorities, daily user engagement and time spent on the Facebook app have varied only slightly since the news block was enacted in early August.

Analytical reports from Similarweb and Data.ai reveal that the platform’s usage in Canada has remained relatively stable since the news block was implemented in August. Meta has maintained that news links account for less than 3% of content in its Facebook feeds and do not contribute economically to the company. Despite this claim, reports indicate that news remains popular on Facebook in the United States, where it is reportedly the most frequently viewed type of content.

The Canadian government plans to release specific implementation rules for the Online News Act by December, after which platforms are expected to finalize agreements with publishers.

Why does it matter?

This outcome appears to validate Meta’s claim that news content makes a limited contribution to its platform’s overall value. This viewpoint stands as a significant point of dispute as the company engages in a standoff with the Canadian government regarding the Online News Act. This legislation mandates that internet giants compensate news content providers, a precedent that could extend to other jurisdictions following this regulatory trend.

Major brands suspend advertising on X after seeing their ads on pro-Nazi account

Several brands have suspended advertising on X (formerly known as Twitter) after their ads ran on an account promoting fascism. The issue arose despite X CEO Linda Yaccarino’s proclaimed commitment to brand safety. Media Matters for America documented mainstream brands’ ads running on the pro-Nazi account, leading NCTA and Gilead Sciences to suspend ad spending. Other brands, including Adobe, the University of Maryland’s football team, and NYU Langone Hospital, also had ads run alongside pro-Nazi content. X conducted an investigation and found minimal ad impressions on the pro-Nazi account. Hours after the Media Matters report was published on Wednesday morning, and after CNN observed additional brands’ ads running on the account, the account appeared to be suspended.

Why does it matter?

This incident raises concerns about X’s commitment to brand safety, underscores the need for continued improvement in brand-safety and content moderation measures, and highlights the risk of ads appearing alongside extremist content on platforms like X.

X (Twitter) will no longer allow advertisers to promote accounts on the timeline

X, previously known as Twitter, has announced the discontinuation of promoted accounts, or ‘Follower Objective’ ads, within its platform’s timeline for attracting new followers. This decision, outlined in an email to advertising clients, is part of X’s broader strategy to enhance content formats and experiences. The change is significant because promoted accounts generate substantial revenue, contributing over $100 million annually to X’s global revenue.

Promoted accounts, a well-established advertising format, appear as text-based posts with a ‘Follow’ button and have been a staple in X’s advertising offerings. However, these ads lack the multimedia versatility, such as video, that X aims to emphasize. Despite the impact on ad revenue, X is shifting its focus towards optimizing user experiences through different content formats.

Why does it matter?

According to insiders, the move reflects X’s product-driven evolution rather than a revenue-driven motive. The change was abrupt, with the client team given limited time to communicate the shift to advertisers. While follower ads represent a relatively small portion of X’s overall ad revenue, their removal comes at a time when the company faces challenges maintaining ad revenue and profitability.

Russia fines Reddit over ‘prohibited’ content

On Tuesday, Russia imposed its first-ever fine on the social media platform Reddit for failing to remove ‘prohibited content’ alleged to contain false details about Russia’s military operations in Ukraine. The penalty was reported by RIA, citing a Moscow court. This move places Reddit among several online platforms facing scrutiny in Russia for failing to promptly delete content the government considers unlawful; notable examples include Wikimedia, Twitch, and Google. The Russian government has been actively monitoring and taking action against platforms that host content it deems in violation of its legal framework.

Why does it matter?

This instance underscores the ongoing tension between social media platforms and governments striving to regulate online content within their jurisdictions. The fine on Reddit marks a notable development in Russia’s efforts to assert control over digital information dissemination, as it enforces penalties for non-compliance with content removal requests.

The Philippines government launches a nationwide media literacy campaign

The Presidential Communication Office (PCO) is poised to launch the Marcos administration’s Media and Information Literacy (MIL) Project, aimed at combating misinformation. President Ferdinand R. Marcos Jr. led the project launch with various government bodies, stressing the importance of media literacy and of collaborative efforts against fake news. The MIL Project aims to equip citizens, particularly youth, with critical thinking skills for navigating media. Undersecretary Emerald Ridao highlighted education’s role in empowering individuals to discern credible sources. The campaign aims to nurture critical thinking and responsible engagement, with attendees encouraged to collectively promote media literacy.

The Department of Social Welfare and Development (DSWD) pledged support, recognizing misinformation’s harmful impact. The DSWD will ensure beneficiaries rely on trustworthy information sources. The MIL Project involves multiple agencies led by the PCO. The Department of Education (DepEd) will manage surveys, discussions, and MIL integration into the curriculum. The Commission on Higher Education (CHED) will foster collaboration with universities. The Department of the Interior and Local Government (DILG) will coordinate local implementation. The Marcos administration aims to create a media-literate society capable of distinguishing truth from falsehood, uniting government, education, and the public in this effort.

Why does it matter?

Media literacy is vital in the Philippines to combat the rampant spread of fake news and misinformation, safeguarding the public’s ability to discern accurate information from deceptive content. The nationwide approach is crucial for fostering informed decision-making, combating the negative impacts of fake news, and strengthening the country’s democratic discourse.

Spanish-language climate misinformation proliferates amid extreme weather events

As extreme weather events continue to capture global attention, a surge in Spanish-language misinformation and disinformation related to climate change has emerged, posing challenges to accurate public understanding.

Researchers point to the rise in false information during media coverage of extreme weather occurrences and discussions about climate policies. A recent report by Media Matters highlighted the propagation of Spanish-language climate change conspiracy theories on platforms like TikTok, spotlighting content that denies climate change and promotes the idea that it is a hoax.

While an established network of US-based English-language social media accounts propagating climate denialism narratives and opposing climate action exists, a comparable network is lacking in the Spanish-speaking realm. Instead, a network of Spanish-language social media accounts, primarily based in Spain and Latin America, has emerged, engaging with a broader right-wing agenda and heavily relying on translated right-wing content originating in the US.

Why does it matter?

The influence of misinformation is particularly pronounced among Latinos, who are more likely to rely on social media for information. This trend underscores the need for improved content moderation and awareness campaigns to counter the spread of climate misinformation among Spanish-speaking communities, particularly given the disproportionate impact of climate change on Latino populations in the United States and beyond.

Iraq lifted Telegram App ban after company addresses data leak concerns

Iraq’s telecoms ministry on Sunday lifted its ban on the Telegram messaging app, which had been imposed earlier in the week over concerns about security and the leakage of sensitive information from government entities and citizens. In Iraq, Telegram is not just a means of communication but also a platform for news dissemination and content sharing.

Certain channels on the app have contained substantial personal data, including names, addresses, and familial connections of Iraqi residents. The ministry lifted the ban after the company that owns Telegram complied with security authorities’ demands to reveal the sources responsible for the data leaks. The company has also expressed willingness to cooperate with relevant authorities.

Telegram’s press team said in response to a Reuters inquiry that sharing private data without consent violates its terms of service and that moderators consistently remove such content. The team confirmed the removal of channels containing personal data but emphasized that Telegram had not been asked for private user data and had not shared any. The previous week, the ministry had stated that Telegram’s owner had not responded to its request to shut down platforms leaking official government and personal citizen data.

Why does it matter?

The decision to lift the ban on the Telegram app underscores the challenges governments face in balancing national security with citizens’ rights to use digital communication tools. Nevertheless, it is worth noting that Iraq’s Ministry of Communications (MoC) said the app complied with security authorities’ demands, whereas Telegram emphasized that it had not received any requests for private user data and had not shared any such information. Sharing information of this kind would be unprecedented: in response to similar requests from countries such as Russia and Brazil, Telegram has consistently refused to provide any data that could identify users.

Ofcom report highlights challenges in understanding video-sharing platform rules

The media regulator Ofcom has found that many users of video-sharing platforms like OnlyFans, Twitch, and Snapchat struggle to comprehend the complex and lengthy terms and conditions of these platforms. Ofcom examined six platforms and discovered that advanced reading skills were necessary to understand the rules, making them unsuitable for children.

Jessica Zucker, Ofcom’s online safety policy director, emphasized that unambiguous rules are essential for protecting users, especially children, from harm. OnlyFans had the longest terms, taking over an hour for adult users to read, followed by Twitch, Snapchat, TikTok, Brand New Tube, and BitChute. Ofcom assessed the terms as difficult to read, except for TikTok’s, which were more accessible but still challenging for young users.

Some platforms use “clickwrap” agreements, which let users accept the terms without reading them. Users also lack understanding of what content is prohibited and of the potential consequences of rule violations. Moderators’ training and resources varied, affecting their ability to enforce terms effectively. Platforms like Snapchat acknowledged the need for clearer guidelines and committed to improvements based on Ofcom’s findings.

Why does it matter?

While these platforms have become integral to modern communication and entertainment, the complexity and length of their rules create significant barriers to users’ understanding of their rights and responsibilities. Moreover, the use of methods like “clickwrap” raises crucial questions about users’ informed consent and subsequent accountability. As for enforcing these rules, inconsistent training and resources for moderators can lead to uneven enforcement, potentially allowing harmful or prohibited content to go unchecked. Twitter’s decision last year to scale down its team of contract-based content monitors drew criticism from various organizations, further fueling discussions about the platform’s content governance practices and the essential role moderators play in enforcing them.