Mozambique and Mauritius block social media amid political turmoil

Mozambique and Mauritius are facing criticism for recent social media shutdowns amid political crises, with many arguing these actions infringe on digital rights. In Mozambique, platforms like Facebook and WhatsApp were blocked following protests over disputed election results.

Meanwhile, in Mauritius, the government imposed a similar blackout until after the 10 November election, following a wiretapping scandal involving leaked conversations of high-profile figures. Digital rights groups such as Access Now and the #KeepItOn coalition have condemned these actions, arguing that they violate international human rights standards, including the African Commission on Human and Peoples’ Rights (ACHPR) resolution 580 and the International Covenant on Civil and Political Rights (ICCPR), as well as national constitutions.

In response, digital rights advocates are calling on telecommunications providers, including Emtel and Mauritius Telecom, to resist government orders to enforce the shutdowns. By maintaining internet connectivity, these companies could help preserve citizens’ access to information and uphold democratic principles in politically sensitive times.

Additionally, rights organisations argue that internet service providers have a unique role in supporting transparency and accountability, which are vital to democratic societies.

Musk’s America PAC prioritises Facebook and Instagram over X in strategic ad campaign to support Trump

Spending by Elon Musk’s political action committee, America PAC, established to support former US President Donald Trump, reveals a strategic focus on platforms such as Facebook, Instagram, and YouTube, with comparatively little attention paid to X, formerly Twitter, despite Musk’s ownership. Between July and October, America PAC allocated 3 million USD to Facebook and Instagram ads and 1.5 million USD to Google and YouTube, dwarfing the 201,000 USD spent on X.

America PAC’s advertising strategy emphasises geographic targeting of pivotal swing states such as Pennsylvania, Georgia, Michigan, Nevada, Arizona, and Wisconsin. Pennsylvania, in particular, receives significant attention through both digital ads and Musk’s campaign efforts. While the ads on X generated around 32 million impressions, their reach on Facebook is harder to gauge because Meta publishes only limited aggregate data, although engagement with individual ads is notably high.

Offering financial incentives, Musk seeks to engage voters beyond conventional social media advertising. He has pledged daily 1 million USD giveaways until the election to encourage voter sign-ups for America PAC’s petition, further amplifying voter engagement through high-profile events in Pennsylvania.

Why does it matter?

Musk’s approach points to a tactical decision to leverage platforms with broader reach and more advanced targeting capabilities. Advertising remains a vital component of X’s revenue strategy, yet America PAC’s spending favours well-established networks over Musk’s own platform, deploying its budget where it can exert maximum political influence in swing electoral regions.

Social media blamed for fuelling UK unrest, Ofcom finds

Ofcom has linked the violent unrest in England and Northern Ireland during the summer to the rapid spread of harmful content on social media platforms. The media regulator found that disinformation and illegal posts circulated widely online following the Southport stabbings in July, which sparked the disorder.

While some platforms acted swiftly to remove inflammatory content, others were criticised for uneven responses. Experts highlighted the significant influence of social media in driving divisive narratives during the crisis, with some calling for platforms to be held accountable for unchecked dangerous content.

Ofcom, which has faced criticism for its handling of the situation, argued that its enhanced powers under the Online Safety Act were not yet in force at the time. The new legislation will introduce stricter responsibilities for tech firms in tackling harmful content and disinformation.

The unrest, the worst seen in the United Kingdom in a decade, resulted in arrests and public scrutiny of tech platforms. A high-profile row erupted between the Prime Minister and Elon Musk, after the billionaire suggested that civil war was inevitable following the disorder, a claim strongly rebuked by Sir Keir Starmer.

New safety regulations set for social media platforms by UK regulator

Starting in December, Britain’s media regulator Ofcom will outline new safety demands for social media platforms, compelling them to take action against illegal content. Under the new guidelines, tech companies will have three months to assess the risks of harmful content or face consequences, including hefty fines or even having their services blocked. These demands stem from the Online Safety Act, passed last year, which aims to protect users, particularly children, from harmful content.

Ofcom’s Chief Executive, Melanie Dawes, emphasised that the time for discussion is over and that 2025 will be pivotal for making the internet a safer space. Platforms such as Meta, the parent company of Facebook and Instagram, have already introduced changes to limit risks like children being contacted by strangers. However, the regulator has made it clear that any companies failing to meet the new standards will face strict penalties.

Australia to restrict teen social media use

The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.

Refugee youth like 14-year-old Tereza Hussein, who relies on social media to connect with distant family, fear the policy will cut off essential lifelines. Experts argue that banning platforms could increase mental health struggles, especially for teens already managing anxiety or isolation. Youth advocates are calling for better content moderation instead of blanket bans.

The Australian government aims to trial age verification as a first step, though the specific platforms and age limits remain unclear. Similar attempts elsewhere, including in France and the US, have faced challenges with tech-savvy users bypassing restrictions through virtual private networks (VPNs).

Prime Minister Anthony Albanese has promoted the idea, highlighting parents’ desire for children to be more active offline. Critics, however, suggest the ban reflects outdated nostalgia, with experts cautioning that social media plays a crucial role in the daily lives of young people today. Legislation is expected by the end of the year.

ICC rolls out AI to combat toxic content on social media

The International Cricket Council (ICC) has introduced a social media moderation programme ahead of the ICC Women’s T20 World Cup 2024. The initiative is designed to protect players and fans from toxic online content. More than 60 players have already joined, with further onboarding expected.

To safeguard mental health and promote inclusivity, the ICC has partnered with GoBubble. Together, they will use a combination of AI and human oversight to monitor harmful comments on social media platforms. The service will operate across Facebook, Instagram, and YouTube, with the option for players to use it on their own accounts.

The technology is designed to automatically detect and hide negative comments, including hate speech, harassment, and misogyny. By doing so, it creates a healthier environment for teams, players, and fans to engage with the tournament, which will be held in Bangladesh.
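
To illustrate how such a moderation flow might work in principle, the sketch below pairs a crude keyword-based stand-in for an AI toxicity model with a human-review queue for borderline cases. This is a hypothetical Python example, not the ICC’s or GoBubble’s actual system: the term list, scoring function, and thresholds are all assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical term list standing in for a trained toxicity classifier.
FLAGGED_TERMS = {"hate", "abuse", "harass"}


def toxicity_score(comment: str) -> float:
    """Crude placeholder score in [0, 1]; a production system would use an ML model."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FLAGGED_TERMS)
    return min(1.0, 5 * hits / len(words))


@dataclass
class ModerationResult:
    hidden: List[str] = field(default_factory=list)        # auto-hidden by the model
    human_review: List[str] = field(default_factory=list)  # borderline, escalated to moderators
    visible: List[str] = field(default_factory=list)       # left untouched


def moderate(comments: List[str], hide_at: float = 0.8, review_at: float = 0.4) -> ModerationResult:
    """Hide clear violations automatically; send borderline comments to human review."""
    result = ModerationResult()
    for comment in comments:
        score = toxicity_score(comment)
        if score >= hide_at:
            result.hidden.append(comment)
        elif score >= review_at:
            result.human_review.append(comment)
        else:
            result.visible.append(comment)
    return result


if __name__ == "__main__":
    outcome = moderate(["Great innings today!", "Nothing but hate and abuse from this team."])
    print(f"hidden={len(outcome.hidden)}, review={len(outcome.human_review)}, visible={len(outcome.visible)}")
```

The two-threshold design mirrors the ‘AI plus human oversight’ split described above: clear violations are hidden automatically, while ambiguous comments are escalated to moderators rather than removed outright.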

Finn Bradshaw, ICC’s Head of Digital, expressed his satisfaction with the programme’s early success. Players and teams have welcomed the initiative, recognising the importance of maintaining a positive digital atmosphere during the tournament.

Social platform X must pay fines before Brazil ban is lifted

Brazil’s Supreme Court has ruled that social platform X, formerly known as Twitter, must pay $5 million in pending fines before being allowed to resume operations in the country. The platform, owned by Elon Musk, was suspended in Brazil after failing to comply with court orders to block accounts spreading hate speech and to appoint a legal representative.

Judge Alexandre de Moraes said the fines, totalling 18.3 million reais ($3.4 million), remain unpaid, alongside an additional fine of 10 million reais ($1.8 million) imposed after X became briefly accessible to some users last week. The court can use frozen funds from X and Starlink accounts in Brazil, but Starlink must first withdraw its appeal against the fund freeze.

X has since complied with court orders, blocking the accounts as instructed and naming a legal representative in Brazil. A source close to the company suggested that while X is likely to pay the original fines, it may contest the extra penalty imposed after the platform ban.

The platform has been unavailable in Brazil since late August. Musk had initially criticised the court’s actions as censorship but began complying with the rulings last week.

Snapchat’s balance between user safety and growth remains a challenge

Snapchat is positioning itself as a healthier social media alternative for teens, with CEO Evan Spiegel emphasising the platform’s different approach at the company’s annual conference. Recent research from the University of Amsterdam supports this view, showing that while platforms like TikTok and Instagram negatively affect youth mental health, Snapchat use appears to have positive effects on friendships and well-being.

However, critics argue that Snapchat’s disappearing messages feature can facilitate illegal activities. Matthew Bergman, an advocate for social media victims, claimed the platform has been used by drug dealers, citing instances of children dying from fentanyl poisoning after buying drugs via the app. Despite these concerns, Snapchat remains popular, particularly with younger users.

Industry analysts recognise the platform’s efforts but highlight its ongoing challenges. As Snapchat continues to grow its user base, balancing privacy and safety with revenue generation remains a key issue, especially as it struggles to compete with bigger players like TikTok, Meta, and Google for advertising.

Snapchat’s appeal lies in its low-pressure environment, with features like disappearing stories and augmented reality filters. Young users, like 14-year-old Lily, appreciate the casual nature of communication on the platform, while content creators praise its ability to offer more freedom and reduce social pressure compared to other social media platforms.

US FTC highlights privacy concerns with social media data

A recent report from the US Federal Trade Commission (FTC) has criticised social media platforms for lacking transparency in how they manage user data. Companies such as Meta, TikTok, and Twitch have been highlighted for inadequate data retention policies, raising significant privacy concerns.

Social platforms collect large amounts of data using tracking technologies and by purchasing information from data brokers, often without users’ knowledge. Much of this data fuels the development of AI, with little control given to users. Data privacy for teenagers remains a pressing issue, leading to recent legislative moves in Congress.

Some companies, including X (formerly Twitter), responded by saying that they have improved their data practices since 2020. Others failed to comment. Advertising industry groups defended data collection, claiming it supports free access to online services.

FTC officials are concerned about the risks posed to individuals, especially those not even using the platforms, due to widespread data collection. Inadequate data management by social platforms may expose users to privacy breaches and identity theft.

Social media owners, politicians, and governments top threats to online news trust, IPIE report shows

A recent report from the International Panel on the Information Environment (IPIE) highlights social media owners, politicians, and governments as the primary threats to a trustworthy online news landscape. The report surveyed 412 experts across various academic fields and warned of the unchecked power social media platforms wield over content distribution and moderation. According to Philip Howard, a co-founder of the panel, this concentration of power poses a critical threat to the global flow of reliable information.

The report also raised concerns about major platforms like X (formerly Twitter), Facebook, Instagram, and TikTok. Allegations surfaced regarding biased moderation, with Elon Musk’s X reportedly prioritising the owner’s posts and Meta being accused of neglecting non-English content. TikTok, under scrutiny for potential ties to the Chinese government, has consistently denied being an agent of any country. The panel emphasised that these platforms’ control over information significantly impacts public trust.

The survey revealed that around two-thirds of respondents anticipate the information environment will deteriorate, marking a noticeable increase in concern compared to previous years. Experts cited AI tools as a growing threat, with generative AI exacerbating the spread of misinformation. AI-generated videos and voice manipulation ranked as the top concerns, especially in developing countries, where the impact is expected to be more acute.

However, not all views on AI are negative. Most respondents also saw its potential to combat misinformation by helping journalists sift through large datasets and detect false information. The report concluded by suggesting key solutions: promoting independent media, launching digital literacy initiatives, and enhancing fact-checking efforts to mitigate the negative trends in the digital information landscape.