Australia to restrict teen social media use

The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.

Refugee youth like 14-year-old Tereza Hussein, who relies on social media to connect with distant family, fear the policy will cut off essential lifelines. Experts argue that banning platforms could increase mental health struggles, especially for teens already managing anxiety or isolation. Youth advocates are calling for better content moderation instead of blanket bans.

Government of Australia aims to trial age verification as a first step, though the specific platforms and age limits remain unclear. Similar attempts elsewhere, including in France and the US, have faced challenges with tech-savvy users bypassing restrictions through virtual private networks (VPNs).

Prime Minister Anthony Albanese has promoted the idea, highlighting parents’ desire for children to be more active offline. Critics, however, suggest the ban reflects outdated nostalgia, with experts cautioning that social media plays a crucial role in the daily lives of young people today. Legislation is expected by the end of the year.

Meta faces lawsuits over teen mental health concerns

A federal judge in California has ruled that Meta must face lawsuits from several US states alleging that Facebook and Instagram contribute to mental health problems among teenagers. The states argue that Meta’s platforms are deliberately designed to be addictive, harming young users. Over 30 states, including California, New York, and Florida, filed these lawsuits last year.

Judge Yvonne Gonzalez Rogers rejected Meta’s attempt to dismiss the cases, though she did limit some claims. Section 230 of US law, which offers online platforms legal protections, shields Meta from certain accusations. However, the judge found enough evidence to allow the lawsuits to proceed, enabling the plaintiffs to gather further evidence and pursue a potential trial.

The decision also impacts personal injury cases filed by individual users against Meta, TikTok, YouTube, and Snapchat. Meta is the only company named in the state lawsuits, with plaintiffs seeking damages and changes to allegedly harmful business practices. California Attorney General Rob Bonta welcomed the ruling, stating that Meta should be held accountable for the harm it has caused to young people.

Meta disagrees with the decision, insisting it has developed tools to support parents and teenagers, such as new Teen Accounts on Instagram. Google also refuted the allegations, saying its efforts to create a safer online experience for young people remain a priority. Many other lawsuits across the US accuse social media platforms of fuelling anxiety, depression, and body-image concerns through addictive algorithms.

Big Tech’s AI models fall short of new EU AI Act’s standards

A recent assessment of some of the top AI models has revealed significant gaps in compliance with the EU regulations, particularly in cybersecurity resilience and preventing discriminatory outputs. The study by Swiss startup LatticeFlow in collaboration with the EU officials, tested generative AI models from major tech companies like Meta, OpenAI, and Alibaba. The findings are part of an early attempt to measure compliance with the EU’s upcoming AI Act, which will be phased in over the next two years. Companies that fail to meet these standards could face fines of up to €35 million or 7% of their global annual turnover.

LatticeFlow’s ‘Large Language Model (LLM) Checker’ evaluated the AI models across multiple categories, assigning scores between 0 and 1. While many models received respectable scores, such as Anthropic’s ‘Claude 3 Opus,’ which scored 0.89, others revealed vulnerabilities. For example, OpenAI’s ‘GPT-3.5 Turbo’ received a low score of 0.46 for discriminatory output, and Alibaba’s ‘Qwen1.5 72B Chat’ scored even lower at 0.37, highlighting the persistent issue of AI reflecting human biases in areas like gender and race.

In cybersecurity testing, some models also struggled. Meta’s ‘Llama 2 13B Chat’ scored 0.42 in the ‘prompt hijacking’ category, a type of cyberattack where malicious prompts are used to extract sensitive information. Mistral’s ‘8x7B Instruct’ model fared similarly poorly, scoring 0.38. These results show the need for tech companies to strengthen security measures to meet the EU’s strict standards.

While the EU is still finalising the enforcement details of its AI Act, expected by 2025, LatticeFlow’s test provides an early roadmap for companies to fine-tune their models. LatticeFlow CEO Petar Tsankov expressed optimism, noting that the test results are mainly positive and offer guidance for companies to improve their models’ compliance with the forthcoming regulations.

The European Commission, though unable to verify external tools, has welcomed this initiative, calling it a ‘first step’ toward translating the AI Act into enforceable technical requirements. As tech companies prepare for the new rules, the LLM Checker is expected to play a crucial role in helping them ensure compliance.

India investigates WhatsApp’s privacy policy

WhatsApp is facing potential sanctions from India’s Competition Commission (CCI) over its controversial 2021 privacy policy update, which has raised significant privacy concerns. The CCI is reportedly preparing to take action against the messaging platform, owned by Meta, for allegedly breaching antitrust laws related to user data handling. The policy, which allows WhatsApp to share certain user data with Meta, has faced widespread criticism from regulators and users who view it as intrusive and unfair.

The CCI’s investigation suggests that WhatsApp’s data-sharing practices, particularly involving business transaction data, may give Meta an unfair competitive advantage, violating provisions against the abuse of dominance. A draft order has been prepared to penalise both WhatsApp and Meta, as the CCI’s director general has submitted findings indicating these violations.

In response, WhatsApp stated that the case is still under judicial review and defended its privacy policy by noting that users had the choice to accept the update without losing access to their accounts. If sanctions are imposed, this could represent a pivotal moment in India’s efforts to regulate major tech firms and establish precedents for the intersection of privacy and competition laws in the digital age.

X corp settles with Unilever in antitrust dispute

Elon Musk’s X, formerly known as Twitter, has dropped Unilever from its antitrust lawsuit that accused the company and others of conspiring to boycott the social media platform, leading to a loss in ad revenue. X’s filing in a Texas federal court confirmed the decision, though details of the agreement between the two companies were not disclosed.

Unilever, known for products like Dove and Hellmann’s, confirmed the settlement, stating that X has committed to meeting standards ensuring brand safety. Both X and Unilever expressed satisfaction with the resolution, with X noting its plans to continue working with the company.

The original lawsuit, filed in August, named several other companies and accused the World Federation of Advertisers of leading a boycott that withheld billions in ad revenue. X has stated that it will continue to pursue its claims against the remaining defendants. The boycott followed concerns about harmful content appearing next to ads after Musk’s acquisition of X in 2022.

UK police scale back presence on X over misinformation worries

British police forces are scaling back their presence on X, formerly known as Twitter, due to concerns over the platform’s role in spreading extremist content and misinformation. This decision comes after riots broke out in the UK this summer, fueled by false online claims, with critics blaming Elon Musk’s approach to moderation for allowing hate speech and disinformation to flourish. Several forces, including North Wales Police, have stopped using the platform altogether, citing misalignment with their values.

Of the 33 police forces surveyed, 10 are actively reviewing their use of X, while others are assessing whether the platform is still suitable for reaching their communities. Emergency services have relied on X for more than a decade to share critical updates, but some, like Gwent Police, are reconsidering due to the platform’s tone and reach.

This shift is part of a larger trend in Britain, where some organisations, including charities and health services, have also moved away from X. As new online safety laws requiring tech companies to remove illegal content come into effect, digital platforms, including X, are facing growing scrutiny over their role in spreading harmful material.

Meta takes action against Russian-linked accounts in Moldova

Meta Platforms announced it had removed a network of accounts targeting Russian speakers in Moldova ahead of the country’s October 20 election, citing violations of its fake accounts policy. Moldovan authorities have also blocked numerous Telegram channels and chatbots allegedly used to pay voters to cast “no” votes in a referendum on EU membership being held alongside the presidential election. Pro-European President Maia Sandu, seeking a second term, has made the referendum central to her platform.

The deleted Meta accounts targeted President Maia Sandu, pro-EU politicians, and the strong ties between Moldova and Romania while promoting pro-Russia parties. This network featured fake Russian-language news brands masquerading as independent media across various platforms, including Facebook, Instagram, Telegram, OK.ru, and TikTok. Meta’s actions involved removing multiple accounts, pages, and groups to combat coordinated inauthentic behaviour.

Moldova’s National Investigation Inspectorate has blocked 15 Telegram channels and 95 chatbots that were offering payments to voters, citing violations of political financing laws. Authorities linked these activities to supporters of fugitive businessman Ilan Shor, who established the ‘Victory’ electoral bloc while in exile in Moscow. In response, Moldovan police have raided the homes of Shor’s associates, alleging that payments were funnelled through a Russian bank to influence the election. Shor, who was sentenced in absentia for his involvement in a significant 2014 bank fraud case, denies the bribery allegations. Meanwhile, President Maia Sandu accuses Russia of attempting to destabilise her government, while Moscow claims that she is inciting ‘Russophobia.’

Hundreds lose jobs as TikTok focuses on AI moderation

TikTok, owned by ByteDance, is cutting hundreds of jobs globally as it pivots towards greater use of AI in content moderation. Among the hardest hit is Malaysia, where fewer than 500 employees were affected, mostly involved in moderation roles. The layoffs come as TikTok seeks to improve the efficiency of its moderation system, relying more heavily on automated detection technologies.

The firm’s spokesperson explained that the move is part of a broader plan to optimise its global content moderation model, aiming for more streamlined operations. TikTok has announced plans to invest $2 billion in global trust and safety measures, with 80% of harmful content already being removed by AI.

The layoffs in Malaysia follow increased regulatory pressure on technology companies operating in the region. Malaysia’s government recently urged social media platforms, including TikTok, to enhance their monitoring systems and apply for operating licences to combat rising cybercrime.

ByteDance, which employs over 110,000 people worldwide, is expected to continue restructuring next month as it consolidates some of its regional operations. These changes highlight the company’s ongoing shift towards automation in its content management strategy.

Cybercriminals use AI to target elections, says OpenAI

OpenAI reports cybercriminals are increasingly using its AI models to generate fake content aimed at influencing elections. The startup has neutralised over 20 attempts this year, including accounts producing articles on the US elections. Several accounts from Rwanda were banned in July for similar activities related to elections in that country.

The company confirmed that none of these attempts succeeded in generating viral engagement or reaching sustainable audiences. However, the use of AI in election interference remains a growing concern, especially as the US approaches its presidential elections. The US Department of Homeland Security also warns of foreign nations attempting to spread misinformation using AI tools.

As OpenAI strengthens its global position, the rise in election manipulation efforts underscores the critical need for heightened vigilance. The company recently completed a $6.6 billion funding round, further securing its status as one of the most valuable private firms.

ChatGPT continues to see rapid growth, boasting 250 million weekly active users since launching in November 2022, emphasising the platform’s widespread influence.

Temu faces deadline from EU over illegal product sales

The European Commission has set a deadline of October 21 for the Chinese online marketplace Temu to respond to inquiries regarding its compliance with the Digital Services Act (DSA). The Commission is seeking detailed information about Temu’s efforts to combat the sale of illegal products on its platform and the measures it has implemented to ensure consumer protection, public health, and user wellbeing.

Temu, founded in 2022 by PDD Holdings, was classified as a Very Large Online Platform due to its user base exceeding 45 million monthly average users in the EU. It was previously required to meet DSA standards by the end of September, including addressing systemic risks and preventing the sale of counterfeit goods. This latest inquiry marks the second time the Commission has sought clarification from Temu, following questions in June about its compliance with the “Notice and Action mechanism” for reporting illegal products.

The European Consumer Organisation (BEUC) has also raised concerns about Temu’s practices, filing complaints against the platform for failing to protect consumers and employing manipulative tactics. These complaints, supported by representatives from 17 EU member states, allege that Temu does not provide essential seller information, hindering consumers’ ability to verify product safety compliance. The DSA has been in effect since February, and the EU has initiated several investigations into other major platforms for similar compliance issues.