UK regulator to set new safety rules for social media platforms

Starting in December, Britain’s media regulator Ofcom will set out new safety requirements for social media platforms, compelling them to take action against illegal content. Under the new guidelines, tech companies will have three months to assess the risks of harmful content or face consequences, including hefty fines or even having their services blocked. The requirements stem from the Online Safety Act, passed last year, which aims to protect users, particularly children, from harmful content.

Ofcom Chief Executive Melanie Dawes emphasised that the time for discussion is over and that 2025 will be pivotal for making the internet a safer space. Platforms such as Meta, the parent company of Facebook and Instagram, have already introduced changes to limit risks such as children being contacted by strangers. However, the regulator has made it clear that any company failing to meet the new standards will face strict penalties.

Australia to restrict teen social media use

The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.

Refugee youth like 14-year-old Tereza Hussein, who relies on social media to connect with distant family, fear the policy will cut off essential lifelines. Experts argue that banning platforms could increase mental health struggles, especially for teens already managing anxiety or isolation. Youth advocates are calling for better content moderation instead of blanket bans.

The Australian government aims to trial age verification as a first step, though which platforms will be covered and what age limit will apply remain unclear. Similar attempts elsewhere, including in France and the US, have faced challenges with tech-savvy users bypassing restrictions through virtual private networks (VPNs).

Prime Minister Anthony Albanese has promoted the idea, highlighting parents’ desire for children to be more active offline. Critics, however, suggest the ban reflects outdated nostalgia, with experts cautioning that social media plays a crucial role in the daily lives of young people today. Legislation is expected by the end of the year.

ICC rolls out AI to combat toxic content on social media

The International Cricket Council (ICC) has introduced a social media moderation programme ahead of the ICC Women’s T20 World Cup 2024. The initiative is designed to protect players and fans from toxic online content. More than 60 players have already joined, with further onboarding expected.

To safeguard mental health and promote inclusivity, the ICC has partnered with GoBubble. Together, they will use a combination of AI and human oversight to monitor harmful comments on social media platforms. The service will operate across Facebook, Instagram, and YouTube, with the option for players to use it on their own accounts.

The technology is designed to automatically detect and hide negative comments, including hate speech, harassment, and misogyny. By doing so, it creates a healthier environment for teams, players, and fans to engage with the tournament, which will be held in Bangladesh.

Finn Bradshaw, ICC’s Head of Digital, expressed his satisfaction with the programme’s early success. Players and teams have welcomed the initiative, recognising the importance of maintaining a positive digital atmosphere during the tournament.

Social platform X must pay fines before Brazil ban is lifted

Brazil’s Supreme Court has ruled that social platform X, formerly known as Twitter, must pay around $5.2 million in pending fines before being allowed to resume operations in the country. The platform, owned by Elon Musk, was suspended in Brazil after failing to comply with court orders to block accounts spreading hate speech and to appoint a legal representative.

Justice Alexandre de Moraes said the fines, totalling 18.3 million reais ($3.4 million), remain unpaid, alongside an additional fine of 10 million reais ($1.8 million) imposed after X briefly became accessible to some users last week. The court can use frozen funds from X and Starlink accounts in Brazil, but Starlink must first withdraw its appeal against the fund freeze.

X has since complied with court orders, blocking the accounts as instructed and naming a legal representative in Brazil. A source close to the company suggested that while X is likely to pay the original fines, it may contest the extra penalty imposed after the platform ban.

The platform has been unavailable in Brazil since late August. Musk had initially criticised the court’s actions as censorship but began complying with the rulings last week.

Snapchat’s balance between user safety and growth remains a challenge

Snapchat is positioning itself as a healthier social media alternative for teens, with CEO Evan Spiegel emphasising the platform’s different approach at the company’s annual conference. Recent research from the University of Amsterdam supports this view, showing that while platforms like TikTok and Instagram negatively affect youth mental health, Snapchat use appears to have positive effects on friendships and well-being.

However, critics argue that Snapchat’s disappearing messages feature can facilitate illegal activities. Matthew Bergman, an advocate for social media victims, claimed the platform has been used by drug dealers, citing instances of children dying from fentanyl poisoning after buying drugs via the app. Despite these concerns, Snapchat remains popular, particularly with younger users.

Industry analysts recognise the platform’s efforts but highlight its ongoing challenges. As Snapchat continues to grow its user base, balancing privacy and safety with revenue generation remains a key issue, especially as it struggles to compete with bigger players like TikTok, Meta, and Google for advertising.

Snapchat’s appeal lies in its low-pressure environment, with features like disappearing stories and augmented reality filters. Young users, like 14-year-old Lily, appreciate the casual nature of communication on the platform, while content creators praise its ability to offer more freedom and reduce social pressure compared to other social media platforms.

US FTC highlights privacy concerns with social media data

A recent report from the US Federal Trade Commission (FTC) has criticised social media platforms for lacking transparency in how they manage user data. Companies such as Meta, TikTok, and Twitch have been highlighted for inadequate data retention policies, raising significant privacy concerns.

Social platforms collect large amounts of data using tracking technologies and by purchasing information from data brokers, often without users’ knowledge. Much of this data fuels the development of AI, with little control given to users. Data privacy for teenagers remains a pressing issue, leading to recent legislative moves in Congress.

Some companies, including X (formerly Twitter), responded by saying that they have improved their data practices since 2020. Others failed to comment. Advertising industry groups defended data collection, claiming it supports free access to online services.

FTC officials are concerned about the risks posed to individuals, especially those not even using the platforms, due to widespread data collection. Inadequate data management by social platforms may expose users to privacy breaches and identity theft.

Social media owners, politicians, and governments top threats to online news trust, IPIE report shows

A recent report from the International Panel on the Information Environment (IPIE) highlights social media owners, politicians, and governments as the primary threats to a trustworthy online news landscape. The report surveyed 412 experts across various academic fields and warned of the unchecked power social media platforms wield over content distribution and moderation. According to panel co-founder Philip Howard, this concentration of power poses a critical threat to the global flow of reliable information.

The report also raised concerns about major platforms like X (formerly Twitter), Facebook, Instagram, and TikTok. Allegations surfaced regarding biased moderation, with Elon Musk’s X reportedly prioritising the owner’s posts and Meta being accused of neglecting non-English content. TikTok, under scrutiny for potential ties to the Chinese government, has consistently denied being an agent of any country. The panel emphasised that these platforms’ control over information significantly impacts public trust.

The survey revealed that around two-thirds of respondents anticipate the information environment will deteriorate, marking a noticeable increase in concern compared with previous years. Experts cited AI tools as a growing threat, with generative AI exacerbating the spread of misinformation. AI-generated videos and voice manipulation ranked as the top concerns, with the impact expected to be more acute in developing countries.

However, not all views on AI are negative. Most respondents also saw its potential to combat misinformation by helping journalists sift through large datasets and detect false information. The report concluded by suggesting key solutions: promoting independent media, launching digital literacy initiatives, and enhancing fact-checking efforts to mitigate the negative trends in the digital information landscape.

Judge blocks Utah’s social media law targeting minors

A federal judge has temporarily halted a new Utah law designed to protect minors’ mental health by regulating social media use. The law, set to go into effect on 1 October, would have required social media companies to verify users’ ages and impose restrictions on accounts used by minors. Chief US District Judge Robert Shelby granted a preliminary injunction, stating that the law likely violates the First Amendment rights of social media companies by overly restricting their free speech.

The lawsuit, filed by tech industry group NetChoice, argued that the law unfairly targets social media platforms while exempting other websites, creating content-based restrictions. NetChoice represents major tech firms, including Meta, YouTube, Snapchat, and X (formerly Twitter). The court found these arguments convincing, holding that the law was unlikely to withstand the heightened scrutiny applied to laws regulating speech.

Utah officials expressed disappointment with the ruling but affirmed their commitment to protecting children from the harmful effects of social media. Attorney General Sean Reyes stated that his office is reviewing the decision and is considering further steps. Governor Spencer Cox signed the law in March, hoping to shield minors from the negative impact of social media. Still, the legal battle underscores the complexity of balancing free speech with safeguarding children online.

The ruling is part of a broader national debate, with courts blocking similar laws in states like California, Texas, and Arkansas. Chris Marchese, director of NetChoice’s litigation centre, hailed the decision as a victory, emphasising that the law is deeply flawed and should be permanently struck down. This ongoing legal struggle reveals the challenge of finding solutions to address growing concerns over the effects of social media on youth without infringing on constitutional rights.

Australia targets social media giants over misinformation

Australia is stepping up its efforts to curb the spread of misinformation online with a new law that could see tech platforms fined up to 5% of their global revenue if they fail to prevent the dissemination of harmful content. The legislation, part of a broader crackdown on tech giants, aims to hold platforms accountable for falsehoods that threaten election integrity, public health, or critical infrastructure.

Under the proposed law, platforms must create codes of conduct detailing how they will manage misinformation. A regulator must approve these codes and can impose its own standards, along with penalties, if platforms fail to comply. The government has emphasised the importance of addressing misinformation, warning of its risks to democracy and public safety. Communications Minister Michelle Rowland stressed that inaction would allow the problem to worsen, making it clear that the stakes are high for society and the economy.

The new legislation has sparked debate, with free speech advocates raising concerns about government overreach. A previous version of the bill was criticised for giving too much power to regulators to define what constitutes misinformation. However, the revised proposal includes safeguards, ensuring that professional news, artistic, and religious content is protected while limiting the regulator’s ability to remove specific posts or user accounts.

Tech companies, including Meta and X, have expressed reservations about the law. Meta, which serves a significant portion of Australia’s population, has remained tight-lipped on the legislation, while industry group DIGI has raised questions about its implementation. Meanwhile, X (formerly Twitter) has reduced its content moderation efforts, particularly following its acquisition by Elon Musk, adding another layer of complexity to the debate.

Australia’s stringent legal initiative is part of a global trend, with governments worldwide looking for ways to address the influence of tech platforms. As the country heads into an election year, leaders must ensure that foreign-controlled platforms do not undermine national sovereignty or disrupt the political landscape.

OFAC updates Russia General License for telecoms, issues alert on sanctions evasion

The US Department of the Treasury’s Office of Foreign Assets Control (OFAC) has recently updated its Russia General License (GL) 25E, maintaining authorisation for transactions ordinarily incident and necessary to telecommunications involving the Russian Federation. The license facilitates various internet-based services, including instant messaging, social networking, and e-learning platforms.

It supports the ongoing exchange of communications and allows for the export or reexport of related software, hardware, and technology, provided such transactions comply with the Department of Commerce’s Export Administration Regulations. However, it is important to note that transactions involving significant Russian telecommunications companies designated by OFAC remain unauthorised under this license and must be carefully analysed.

OFAC has also issued a critical alert regarding Russia’s attempts to evade sanctions by establishing new overseas branches and subsidiaries of Russian financial institutions. The alert warns that such moves to open new international branches or subsidiaries should be treated as potential red flags for sanctions evasion.

Financial institutions and foreign regulators are advised to exercise caution when engaging with these entities, as activities such as maintaining accounts, transferring funds, or providing financial services may carry significant risks of facilitating Russia’s attempts to bypass sanctions.