Singapore blocks 95 accounts linked to exiled Chinese tycoon Guo Wengui

Singapore has ordered five social media platforms to block access to 95 accounts linked to exiled Chinese tycoon Guo Wengui. These accounts posted over 120 times from April 17 to May 10, alleging foreign interference in Singapore’s leadership transition. The Home Affairs Ministry stated that the posts suggested a foreign actor influenced the selection of Singapore’s new prime minister.

Singapore’s Foreign Interference (Countermeasures) Act, enacted in October 2021, was used for the first time to address this issue. Guo Wengui, recently convicted in the US for fraud, has a history of opposing Beijing. Together with former Trump adviser Steve Bannon, he launched the New Federal State of China, aimed at overthrowing China’s Communist Party.

The ministry expressed concern that Guo’s network could spread false narratives detrimental to Singapore’s interests and sovereignty. Blocking these accounts was deemed necessary to prevent potential hostile information campaigns targeting Singapore.

Guo and his affiliated organisations have been known to push various Singapore-related narratives. The coordinated nature of the posts, together with previous attempts to use Singapore to advance their agenda, demonstrated their capability to undermine Singapore’s social cohesion and sovereignty.

Trump assassination attempt sparks online conspiracy theories

Following the attempted assassination of Donald Trump, misinformation and conspiracy theories flooded social media platforms. Thomas Matthew Crooks, the 20-year-old shooter, was killed by the Secret Service after his shot grazed Trump’s ear. The incident has led to a surge in wild claims and disinformation online.

Conspiracy theories have proliferated, including suggestions that the shooting was staged and that the shooter is not actually dead. Social media platforms have seen a rise in copycat accounts and videos claiming to be from the shooter, further fuelling speculation and confusion.

Some users are alleging that President Joe Biden was behind the attack. The situation has also sparked misogynistic comments online, with users criticising female Secret Service agents. Russian trolls have been active, spreading false information and blaming Ukraine for the attack. Cybersecurity experts warn that disinformation campaigns are likely to continue as the investigation unfolds.

Authorities and cybersecurity specialists are monitoring the situation closely, emphasising the need for vigilance against the spread of false information. The surge in conspiracy theories highlights the ongoing challenge of managing misinformation in the digital age.

Supreme Court delays ruling on state laws targeting social media

The US Supreme Court has deferred rulings on the constitutionality of laws from Florida and Texas aimed at regulating social media companies’ content moderation practices. The laws, challenged by industry groups including NetChoice and CCIA, sought to limit platforms like Meta Platforms, Google, and others from moderating content they deem objectionable. Lower courts had reached mixed results, blocking Florida’s law and upholding Texas’, but the Supreme Court unanimously found that those rulings did not fully address the First Amendment concerns and sent both cases back for further review.

Liberal Justice Elena Kagan, writing for the majority, questioned Texas’ law, suggesting it sought to impose state preferences on social media content moderation, which could violate the First Amendment. Central to the debate is whether states can compel platforms to host content against their editorial discretion, which companies argue is necessary to manage spam, bullying, extremism, and hate speech. Supporters of the laws argue they protect free speech by preventing censorship of conservative viewpoints, a claim disputed by the Biden administration, which opposes the laws as potentially violating First Amendment protections.

Why does it matter?

At stake are laws that would restrict platforms with over 50 million users from censoring based on viewpoint (Texas) and limit content exclusion for political candidates or journalistic enterprises (Florida). Additionally, these laws require platforms to explain content moderation decisions, a requirement some argue burdens free speech rights.

The Supreme Court’s decision not to rule marks another chapter in the ongoing legal battle over digital free speech rights, following earlier decisions regarding officials’ social media interactions and misinformation policies.

Cambodian messaging app faces backlash over privacy fears

Cambodia recently launched its messaging app, CoolApp, which is supported by former Prime Minister Hun Sen. He has emphasised that the app is crucial for national security, aiming to protect Cambodian information from foreign interference. Hun Sen’s endorsement of CoolApp aligns with his long-standing approach of maintaining tight control over the country’s communication channels, especially in the face of external influences. He compared the app to other national messaging services like China’s WeChat and Russia’s Telegram, indicating a desire for Cambodia to have a secure, homegrown platform.

However, the introduction of CoolApp has raised significant concerns among critics and opposition leaders. They argue that the app could be a tool for government surveillance, potentially used to monitor and suppress political discourse. Mu Sochua, an exiled opposition leader, warned that CoolApp represents a new method for mass surveillance and control of public discourse, reminiscent of practices seen in China. Another opposition figure, Sam Rainsy, called for a boycott of the app, suggesting that its true purpose is to strengthen the repressive tools available to the Cambodian regime. These concerns are amplified by Cambodia’s recent history of internet censorship, media blackouts, and persecution of government critics.

CoolApp’s founder and CEO, Lim Cheavutha, claims the app uses end-to-end encryption to ensure user privacy and says it has reached 150,000 downloads, with expectations of up to 1 million. However, these assurances do little to allay fears of government surveillance, given Cambodia’s history of using technology to control dissent.

The app’s launch comes amid broader security challenges in Cambodia, including online scams run by Chinese gangs and close ties with China’s surveillance-heavy regime. The situation highlights the ongoing tension between Cambodia’s national security aims and civil liberties.

Illinois judge dismisses lawsuit against X over social media photo scanning

A federal judge in Illinois dismissed a class action lawsuit against the social network X, ruling that the photos it collected did not constitute biometric data under the state’s Biometric Information Privacy Act (BIPA). The lawsuit alleged that X violated BIPA by using Microsoft’s PhotoDNA software to scan for offensive images without proper disclosure and consent.

The judge concluded that the plaintiff failed to prove that the PhotoDNA tool involved facial geometry scanning or could identify specific individuals. Instead, the software analysed uploaded photos to detect nudity or pornographic content, which did not qualify as a scan of facial geometry under BIPA.

The ruling mirrors a recent case involving Facebook, where allegations of illegally collecting biometric data were dismissed. Both cases clarified that a digital signature generated from a photograph, known as a ‘hash’ or face signature, does not meet BIPA’s definition of a biometric identifier.

The judge emphasised that BIPA aims to regulate specific biometric identifiers like retina scans or fingerprints, excluding photographs to avoid an overly broad scope. Applying BIPA to any face geometry scan that cannot identify individuals would contradict the law’s purpose of ensuring notice and consent.

BIPA’s private right of action has been a significant deterrent for biometrics companies, allowing users to sue for damages in cases of non-compliance.

Warning labels for social media suggested by US Surgeon General

US Surgeon General Vivek Murthy has called for a warning label on social media apps to highlight the harm these platforms can cause young people, particularly adolescents. In a New York Times op-ed, Murthy emphasised that while a warning label alone won’t make social media safe, it can raise awareness and influence behaviour, similar to tobacco warning labels. The proposal requires legislative approval from Congress. Social media platforms like Facebook, Instagram, TikTok, and Snapchat have faced longstanding criticism for their negative impact on youth, including shortened attention spans, negative body image, and vulnerability to online predators and bullies.

Murthy’s proposal comes amid increasing efforts by youth advocates and lawmakers to protect children from social media’s harmful effects. US senators grilled CEOs of major social media companies, accusing them of failing to protect young users from dangers such as sexual predators. States are also taking action; New York recently passed legislation requiring parental consent for users under 18 to access ‘addictive’ algorithmic content, and Florida has banned children under 14 from social media platforms while requiring parental consent for 14- and 15-year-olds.

Despite these growing concerns and legislative efforts, major social media companies have not publicly responded to Murthy’s call for warning labels. The push for such labels is part of broader initiatives to mitigate the mental health risks associated with social media use among adolescents, aiming to reduce issues like anxiety and depression linked to these platforms.

New York lawmakers pass bills on social media restrictions

New York state lawmakers have passed new legislation to restrict social media platforms from showing ‘addictive’ algorithmic content to users under 18 without parental consent. The measure aims to mitigate online risks to children, making New York the latest state to take such action. A companion bill was also passed, limiting online sites from collecting and selling the personal data of minors.

Governor Kathy Hochul is expected to sign both bills into law, calling them a significant step toward addressing the youth mental health crisis and ensuring a safer digital environment. The legislation could impact revenues for social media companies like Meta, which generated significant income from advertising to minors.

While industry associations have criticised the bills as unconstitutional and an assault on free speech, proponents argue that the measures are necessary to protect adolescents from mental health issues linked to excessive social media use. The SAFE (Stop Addictive Feeds Exploitation) for Kids Act will require parental consent before minors can view algorithm-driven feeds; without consent, they will instead see a chronological feed of accounts they follow and popular content.

The New York Child Data Protection Act, the companion bill, will bar online sites from collecting, using, or selling the personal data of minors without informed consent. Violations could result in significant penalties, adding a layer of protection for young internet users.

New York to require parental consent for social media access

New York lawmakers are preparing to bar social media companies from using algorithms to control the content seen by young users without parental consent. The legislation, expected to be voted on this week, would also shield minors from automated feeds and from overnight notifications unless parents approve. The move comes as social media platforms face increasing scrutiny over their addictive design and impact on young people’s mental health.

Earlier this year, New York City Mayor Eric Adams announced a lawsuit against major social media companies, including Facebook and Instagram, for allegedly contributing to a mental health crisis among youth. Similar actions have been taken by other states, with Florida recently passing a law requiring parental consent for minors aged 14 and 15 to use social media and banning those under 14 from accessing these platforms.

Why does it matter?

The trend started with Utah, which became the first state to regulate children’s social media access last year. States like Arkansas, Louisiana, Ohio, and Texas have since followed suit. The heightened regulation is affecting social media companies, with shares of Meta and Snap seeing a slight decline in extended trading.

GLAAD report: major social media platforms fail LGBTQ safety standards

GLAAD, the LGBTQ media advocacy organisation, gave failing grades to most major social media platforms for their handling of safety, privacy, and expression for the LGBTQ community online, as reported by The Hill. In the fourth annual Social Media Safety Index, GLAAD assessed hate, disinformation, anti-LGBTQ tropes, content suppression, AI, data protection, and the link between online hate and real-world harm.

Five of the six leading social media platforms assessed, X (formerly Twitter), YouTube, Facebook, Instagram, and Threads, received failing grades for the third consecutive year. TikTok was the only platform not to receive an F, instead earning a D+ due to improvements in its Anti-Discrimination Ad Policy, which included preventing advertisers from wrongfully targeting or excluding users from content. Meanwhile, Threads received its first F since its launch in 2023, and Facebook and Instagram’s ratings worsened from the previous year.

Why does it matter?

GLAAD uses this index to urge social media leaders to create safer environments for the LGBTQ community, noting a lack of enforcement of current policies in the digital sector and a clear link between online hate and increasing real-world violence and legislative attacks.

AI-generated child images on social media attract disturbing attention

AI-generated images of young girls, some depicted as young as five, are spreading on TikTok and Instagram and drawing inappropriate comments from a troubling audience consisting mostly of older men, a Forbes investigation has found. The images depict children in provocative outfits and, while not illegal, are highly sexualised, prompting child safety experts to warn that they could pave the way to more severe exploitation.

Platforms like TikTok and Instagram, both popular with minors, are struggling to address the problem. One popular account, "Woman With Chopsticks," had 80,000 followers and viral posts viewed nearly half a million times across both platforms. A recent Stanford study also revealed that the dataset used to train the AI tool Stable Diffusion 1.5 contained child sexual abuse material (CSAM) involving real children, collected from various online sources.

Under federal law, tech companies must report suspected CSAM and exploitation to the National Center for Missing and Exploited Children (NCMEC), which then informs law enforcement. However, they are not required to remove the type of images discussed here. Nonetheless, NCMEC believes that social media companies should take them down, even though they fall into a legal grey area.

TikTok and Instagram assert that they have strict policies against AI-generated content involving minors in order to protect young people. TikTok bans AI-generated content depicting anyone under 18, while Meta removes material that sexualises or exploits children, whether real or AI-generated. Both platforms removed the accounts and posts identified by Forbes. Despite these policies, however, the ease of creating and sharing AI-generated images will remain a significant challenge for safeguarding children online.

Why does it matter?

The Forbes story reveals that such content, which has become increasingly easy to find thanks to powerful algorithmic recommendations, worsens online child exploitation by acting as a gateway to the exchange of more severe material and by facilitating networking among offenders. One TikTok slideshow of young girls in pyjamas, posted on 13 January and found by the investigation, showed users arranging to move to private messages. The Canadian Centre for Child Protection stressed that companies need to look beyond automated moderation to address how these images are shared and followed.