Google faces probe over gaming app policies in India

India’s Competition Commission (CCI) has launched an investigation into Google’s gaming app policies following a complaint by gaming platform WinZO. The inquiry will examine allegations of discriminatory practices against apps offering real-money games.

WinZO accused Google of favouring certain categories, such as fantasy sports and rummy, while excluding others like carrom, puzzles, and racing games. The gaming platform filed the complaint in 2022, claiming Google’s updated policies create an uneven playing field, disadvantaging smaller developers.

The investigation compounds Google’s regulatory challenges in India, where it has already faced significant fines for anti-competitive behaviour in the Android ecosystem. A CCI official has been tasked with completing the inquiry within 60 days.

Google has yet to comment on the allegations; the announcement coincided with the Thanksgiving holiday in the US.

Australian social media ban sparked by politician’s wife’s call to action

Australia has passed a landmark law banning children under 16 from using social media, following a fast-moving push led by South Australian Premier Peter Malinauskas. The law, which takes effect in November 2025, aims to protect young people from the harmful effects of social media, including mental health issues linked to cyberbullying and body image problems. The law has widespread public support, with a government survey showing 77% of Australians backing the measure. However, it has sparked significant opposition from tech companies and privacy advocates, who argue that the law is rushed and could push young users to more dangerous parts of the internet.

The push for a national ban gained momentum after Malinauskas’s state-level initiative in September to restrict social media access for children under 14. This led to a broader federal response, with Prime Minister Anthony Albanese’s government introducing a nationwide version of the policy. The legislation eliminates parental discretion: no child under 16 may use social media, and platforms that fail to enforce the rules face fines. This approach contrasts with policies in France and in US states such as Florida, where minors can access social media with parental permission.

While the law has garnered support from most of Australia’s political leaders, it has faced strong criticism from social media companies like Meta and TikTok. These platforms warn that the law could drive teens to hidden corners of the internet and that the rushed process leaves many questions unanswered. Despite the backlash, the law passed with bipartisan support, and a trial of age-verification technology will begin in January to prepare for its full implementation.

The debate over the law highlights growing concerns worldwide about the impact of social media on young people. Although some critics argue that the law is an overreach, others believe it is a necessary step to protect children from online harm. With the law now in place, Australia has set a precedent that could inspire other countries grappling with similar issues.

Australia’s new social media ban faces backlash from Big Tech

Australia’s new law banning children under 16 from using social media has sparked strong criticism from major tech companies. The law, passed late on Thursday, targets platforms like Meta’s Instagram and Facebook, as well as TikTok, imposing fines of up to A$49.5 million for allowing minors to log in. The companies argue that the legislation was rushed through parliament without adequate consultation and could have harmful unintended consequences, such as driving young users to less visible, more dangerous parts of the internet.

The law was introduced after a parliamentary inquiry into the harmful effects of social media on young people, with testimony from parents of children who had been bullied online. While the Australian government had warned tech companies about the impending legislation for months, the bill was fast-tracked in a chaotic final session of parliament. Critics, including Meta, have raised concerns about the lack of clear evidence linking social media to mental health issues and have questioned the rushed process.

Despite the backlash, the law has strong political backing, and the government is set to begin a trial of enforcement methods in January, with the full ban expected to take effect by November 2025. Australia’s long-standing tensions with major US-based tech companies, including previous legislation requiring platforms to pay for news content, are also fuelling the controversy. As the law moves forward, both industry representatives and lawmakers face challenges in determining how it will be practically implemented.

Spotify misused for scams and malware

Scammers are misusing Spotify’s playlist and podcast features to promote pirated software, malware, and phishing schemes. By embedding popular search terms like ‘free download’ or ‘crack’ in playlists and podcast titles, these bad actors ensure their spam appears in Google search results. Users who click on these links often land on unsafe sites designed to install malicious software or steal personal data.

The schemes include playlists and short podcast episodes featuring synthetic voice prompts that redirect listeners to risky external sites. These scams exploit Spotify’s trusted reputation and indexed pages to rank high in search results. Scammers profit through ad clicks, fake surveys, and affiliate links while spreading malware or engaging in phishing attempts.

Experts warn users to avoid clicking on suspicious links, verify playlist or podcast creators, and stick to official sources for downloads. Spotify and search engines like Google face calls to strengthen safeguards to prevent misuse of their platforms. In the meantime, users are encouraged to report fraudulent content and use antivirus software to stay protected.

Mixed reactions as Australia bans social media for minors

Australia’s recent approval of a social media ban for children under 16 has sparked mixed reactions nationwide. While the government argues that the law sets a global benchmark for protecting youth from harmful online content, critics, including tech giants like TikTok, warn that it could push minors to darker corners of the internet. The law, which will fine platforms like Meta’s Facebook, Instagram and TikTok up to A$49.5 million if they fail to enforce it, takes effect one year after a trial period begins in January.

Prime Minister Anthony Albanese emphasised the importance of protecting children’s physical and mental health, citing social media’s harmful impact on body image and its role in spreading misogynistic content. Despite widespread support (77% of Australians back the measure), opinion remains divided. Some, like Sydney resident Francesca Sambas, approve of the ban, citing concerns over inappropriate content, while others, like Shon Klose, view it as an overreach that undermines democracy. Young people, meanwhile, expressed their intent to bypass the restrictions, with 11-year-old Emma Wakefield saying she would find ways to access social media secretly.

This ban positions Australia as the first country to impose such a strict regulation, ahead of other countries like France and several US states that have restrictions based on parental consent. The swift passage of the law, which was fast-tracked through parliament, has drawn criticism from social media companies, which argue the law was rushed and lacked proper scrutiny. TikTok, in particular, warned that the law could worsen risks to children rather than protect them.

The move has also raised concerns about Australia’s relationship with the United States, as figures like Elon Musk have criticised the law as a potential overreach. However, Albanese defended the law, drawing parallels to age-based restrictions on alcohol, and reassured parents that while enforcement may not be perfect, it’s a necessary step to protect children online.

ByteDance sues former intern for $1.1 million

ByteDance, the parent company of TikTok, has filed a $1.1 million lawsuit against former intern Tian Keyu, alleging deliberate sabotage of its AI model training infrastructure. The rare legal action, filed in a Beijing court, has attracted significant attention in China amid an intense race in AI development.

According to ByteDance, Tian manipulated code and made unauthorised modifications, disrupting its large language model (LLM) training tasks. While the company dismissed rumours of damages involving millions of dollars and thousands of graphics processing units, it confirmed that the intern was terminated in August.

The case underscores the growing stakes in generative AI, where technologies capable of creating text and images are advancing rapidly. ByteDance declined to comment further, while Tian, reportedly a postgraduate at Peking University, has yet to respond publicly. This lawsuit highlights the high-pressure environment of AI innovation and the risks companies face from internal threats.

Microsoft rejects AI training allegations

Microsoft has refuted allegations that it uses data from its Microsoft 365 applications, including Word and Excel, to train AI models. These claims surfaced online, with users pointing to the need to opt out of the ‘connected experiences’ feature as a possible loophole for data usage.

A Microsoft spokesperson stated categorically that customer data from both consumer and commercial Microsoft 365 applications is not utilised to train large language models. The spokesperson clarified in an email to Reuters that such suggestions were ‘untrue.’

The company explained that the ‘connected experiences’ feature is designed to support functionalities like co-authoring and cloud storage, rather than contributing to AI training. These assurances aim to address user concerns over potential misuse of their data.

Ongoing discussions on social media underscore persistent public worries about privacy and data security in AI development. Questions about data usage policies continue to highlight the need for transparency from technology companies.

Australia enacts groundbreaking law banning under-16s from social media

Australia has approved a groundbreaking law banning children under 16 from accessing social media, following a contentious debate. The new regulation targets major tech companies like Meta, TikTok, and Snapchat, which will face fines of up to A$49.5 million if they allow minors to log in. Starting with a trial period in January, the law is set to take full effect in 2025. The move comes amid growing global concerns about the mental health impact of social media on young people, with several countries considering similar restrictions.

The law, which marks a significant political win for Prime Minister Anthony Albanese, has received widespread public support, with 77% of Australians backing the ban. However, it has faced opposition from privacy advocates, child rights groups, and social media companies, which argue the law was rushed through without adequate consultation. Critics also warn that it could inadvertently harm vulnerable groups, such as LGBTQIA or migrant teens, by cutting them off from supportive online communities.

Despite the backlash, many parents and mental health advocates support the ban, citing concerns about social media’s role in exacerbating youth mental health issues. High-profile campaigns and testimonies from parents of children affected by cyberbullying have helped drive public sentiment in favour of the law. However, some experts warn the ban could have unintended consequences, pushing young people toward more dangerous corners of the internet where they can avoid detection.

The law also has the potential to strain relations between Australia and the United States, as tech companies with major US ties, including Meta and X, have voiced concerns about its implications for internet freedom. While these companies have pledged to comply, there remain significant questions about how the law will be enforced and whether it can achieve its intended goals without infringing on privacy or digital rights.

Romania plans TikTok suspension over election concerns

Romania’s telecoms regulator is set to initiate steps to suspend TikTok, citing potential interference in the recent presidential election. Pavel Popescu, the regulator’s deputy head, announced plans to begin the suspension process on Thursday. The action will remain in place until state authorities conclude their investigation into allegations of electoral manipulation linked to the platform.

The scrutiny comes after TikTok’s role in Sunday’s election raised concerns about misinformation and influence. Officials are prioritising transparency and security during the ongoing electoral process.

The decision underscores the increasing global attention on social media platforms’ influence on democratic processes.

UK social media platforms criticised over safety failures

Nearly a quarter of children aged 8-17 in the UK lie about their age to access adult social media platforms, according to a new Ofcom report. The media regulator criticised current verification processes as insufficient and warned tech companies they face heavy fines if they fail to improve safety measures under the Online Safety Act, which takes effect in 2025.

The law will require platforms to implement ‘highly effective’ age assurance to prevent underage users from accessing adult content. Ofcom’s findings highlight the risks children face from harmful material online, sparking concerns from advocates like the Molly Rose Foundation, which warns that tech companies are not enforcing their own rules.

Some social media platforms, including TikTok, claim they are enhancing safety measures with machine learning and other innovations. However, BBC investigations and feedback from teenagers suggest that bypassing current systems remains alarmingly easy, with no ID verification required for account setup. Calls for stricter regulation continue as online safety concerns grow.