AI-generated news alerts paused by Apple amid accuracy concerns

Apple has halted AI-powered notification summaries for news and entertainment apps after backlash over misleading news alerts. The move follows a complaint from the BBC after a summary misrepresented one of its articles about a murder case involving UnitedHealthcare’s CEO.

The latest developer previews for iOS 18.3, iPadOS 18.3, and macOS Sequoia 15.3 disable notification summaries for such apps, with Apple planning to reintroduce them after improvements. Notification summaries will now appear in italics to help users distinguish them from standard alerts.

Users will also gain the ability to turn off notification summaries for individual apps directly from the Lock Screen. Apple will notify users in the Settings app that the feature remains in beta and may contain errors.

A public beta is expected next week, but the general release date for iOS 18.3 remains unclear. Apple had already announced plans to clarify that summary texts are generated by Apple Intelligence.

US Supreme Court to hear challenge to Texas pornography age verification law

The US Supreme Court will hear a challenge on Wednesday to a Texas law that requires adult websites to verify the age of users before granting access to potentially harmful material. The law, part of a broader trend across Republican-led states, obliges users to submit personal information proving they are at least 18 years old to access pornographic content. The case raises significant First Amendment concerns: adult entertainment industry groups argue that the law unlawfully restricts free speech and exposes users to risks such as identity theft and data breaches.

The challengers, including the American Civil Liberties Union and the Free Speech Coalition, contend that alternative methods like content-filtering software could better protect minors without infringing on adults’ rights to access non-obscene material. Texas, however, defends the law, citing concerns over the ease with which minors can access explicit content online.

This case is significant because it will test the balance between state efforts to protect minors from explicit content and the constitutional rights of adults to access protected expression. If the Supreme Court upholds the law, it could set a precedent for similar age-verification measures across the US.

Indonesia targets age limits for social media access

Indonesia plans to implement interim guidelines to protect children on social media as it works toward creating a law to establish a minimum age for users, a senior communications ministry official announced on Wednesday. The move follows discussions between Communications Minister Meutya Hafid and President Prabowo Subianto, aiming to address concerns about online safety for children.

The proposed law will mirror recent regulations in Australia, which banned children under 16 from accessing social media platforms like Instagram, Facebook, and TikTok, penalising tech companies that fail to comply. In the meantime, Indonesia will issue regulations requiring platforms to follow child protection guidelines, focusing on shielding children from harmful content while still permitting limited access.

Public opinion on the initiative is divided. While parents like Nurmayanti support stricter controls to reduce exposure to harmful material, human rights advocates, including Anis Hidayah, urge caution to ensure children’s access to information is not unduly restricted. A recent survey revealed nearly half of Indonesian children under 12 use the internet, with many accessing social media platforms such as Facebook, Instagram, and TikTok.

This regulatory push reflects Indonesia’s broader efforts to balance digital innovation with safeguarding younger users in its rapidly growing online landscape.

Indonesia plans social media age restrictions to protect children

Indonesia is preparing to introduce regulations setting a minimum age for social media users, aiming to shield children from potential online risks, according to Communications Minister Meutya Hafid. The announcement follows Australia’s recent ban on social media access for children under 16, which imposes penalties on platforms like Meta’s Facebook and Instagram, as well as TikTok, for non-compliance.

While the specific age limit for Indonesia remains undecided, Minister Hafid stated that President Prabowo Subianto supports the initiative, emphasising the importance of child protection in the digital space. The move highlights concerns about young users’ exposure to inappropriate content and data privacy risks.

Indonesia, with a population of approximately 280 million, has significant internet usage. A recent survey found internet penetration at 79.5%, with nearly half of children under 12 accessing the web, often using platforms like Facebook, Instagram, and TikTok. Among “Gen Z” users aged 12 to 27, internet penetration reached 87%. The proposed regulation reflects growing global efforts to prioritise child safety online.

Father of Molly Russell urges UK to strengthen online safety laws

Ian Russell, father of Molly Russell, has called on the UK government to take stronger action on online safety, warning that delays in regulation are putting children at risk. In a letter to Prime Minister Sir Keir Starmer, he criticised Ofcom’s approach to enforcing the Online Safety Act, describing it as a “disaster.” Russell accused tech firms, including Meta and X, of prioritising profits over safety and moving towards a more dangerous, unregulated online environment.

Campaigners argue that Ofcom’s guidelines contain major loopholes, particularly in addressing harmful content such as live-streamed material that promotes self-harm and suicide. While the government insists that tech companies must act responsibly, the slow progress of new regulations has raised concerns. Ministers acknowledge that additional legislation may be required as AI technology evolves, introducing new risks that could further undermine online safety.

Russell has been a prominent campaigner for stricter online regulations since his daughter’s death in 2017. Despite the Online Safety Act granting Ofcom the power to fine tech firms, critics believe enforcement remains weak. With concerns growing over the effectiveness of current safeguards, pressure is mounting on the government to act decisively and ensure platforms take greater responsibility in protecting children from harmful content.

Gaming app offers mental health support for kids

A new app designed to help children aged seven to twelve manage anxiety through gaming is being launched in Lincolnshire, UK. The app, called Lumi Nova, combines cognitive behavioural therapy (CBT) techniques with personalised quests to gently expose children to their fears in a safe and interactive way.

The digital game has been created by BFB Labs, a social enterprise focused on digital therapy, in collaboration with children, parents, and mental health experts. The app aims to make mental health support more accessible, particularly in rural areas, where traditional services may be harder to reach.

Families in Lincolnshire can download the app for free without needing a prescription or referral. Councillor Patricia Bradwell from Lincolnshire County Council highlighted the importance of flexible mental health services, saying: ‘We want to ensure children and young people have easy access to support that suits their needs.’

By using immersive videos and creative tasks, Lumi Nova allows children to confront their worries at their own pace from the comfort of home, making mental health care more engaging and approachable. The year-long pilot aims to assess the app’s impact on childhood anxiety in the region.

TikTok faces new allegations of child exploitation

TikTok is under heightened scrutiny following newly unsealed allegations from a Utah lawsuit claiming the platform knowingly allowed harmful activities, including child exploitation and sexual misconduct, to persist on its livestreaming feature, TikTok Live. According to the lawsuit, TikTok disregarded the issue because it ‘profited significantly’ from these livestreams. The revelations come as the app faces a potential nationwide ban in the US unless its parent company, ByteDance, divests ownership.

The complaint, filed by Utah’s Division of Consumer Protection in June, accuses TikTok Live of functioning as a ‘virtual strip club,’ connecting minors with adult predators in real time. Internal documents and investigations, including the Project Meramec and Project Jupiter probes, reveal that TikTok was aware of the dangers. The findings indicate that hundreds of thousands of minors bypassed age restrictions and were allegedly groomed by adults to perform explicit acts in exchange for virtual gifts. The probes also uncovered criminal activities such as money laundering and drug sales facilitated through TikTok Live.

TikTok has defended itself, claiming it prioritises user safety and accusing the lawsuit of distorting facts by selectively quoting outdated internal documents. A spokesperson emphasised the platform’s ‘proactive measures’ to support community safety and dismissed the allegations as misleading. However, the unsealed material from the case, released by Utah Judge Coral Sanchez, paints a stark picture of TikTok Live’s risks to minors.

This lawsuit is not an isolated case. In October, 13 US states and Washington, D.C., filed a bipartisan lawsuit accusing TikTok of exploiting children and fostering addiction to the app. Utah Attorney General Sean Reyes called social media a pervasive tool for exploiting America’s youth and welcomed the disclosure of TikTok’s internal communications as critical evidence for demonstrating the platform’s culpability.

Why does it matter?

The controversy unfolds amid ongoing national security concerns about TikTok’s ties to China. President Joe Biden signed legislation authorising a TikTok ban last April, citing risks that the app could share sensitive data with the Chinese government. The US Supreme Court is set to hear arguments on 10 January on whether to delay the ban, with a decision expected shortly thereafter. The case underscores the intensifying debate over social media’s role in safeguarding users while balancing innovation and accountability.

Albania’s TikTok ban: Balancing youth protection with free speech and economic impact

In Tirana, Albania, Ergus Katiaj, a small business owner who relies on TikTok to market his nighttime delivery service for snacks, cigarettes, and alcohol, faces an uncertain future. The Albanian government has announced a year-long ban on the social media platform, a move aimed at curbing youth violence.

The ban follows a tragic incident in November in which a 14-year-old boy was fatally stabbed, reportedly after an online clash with a peer. Prime Minister Edi Rama said the decision, announced on 21 December, is intended to protect young people, but critics argue it threatens free speech and commerce ahead of the May elections.

The ban aligns Albania with a growing list of countries imposing restrictions on TikTok due to concerns over harmful content and its ties to China-based parent company ByteDance. However, business owners like Katiaj fear significant financial losses, as TikTok has been a vital tool for free marketing.

Rights groups and opposition leaders, such as Arlind Qori of the Bashke party, worry the ban sets a troubling precedent for political censorship, particularly in a country where protests against the jailing of political opponents were met with harsh government responses last year.

TikTok has called for urgent clarification from the Albanian government, asserting that reports indicate the videos linked to the tragic incident were uploaded to another platform. Meanwhile, the debate continues, with some viewing the ban as a protective measure for youth and others as an overreach limiting commerce and dissent.

For many, like Katiaj, the ban underscores the broader challenges of balancing public safety with democratic freedoms in Albania.

California’s ban on addictive feeds for minors upheld

A federal judge has upheld California’s law, SB 976, which restricts companies from serving addictive content feeds to minors. The decision allows the legislation to take effect, marking a significant shift in how social media platforms operate in the state.

Companies must now ensure that addictive feeds, defined as algorithms recommending content based on user behaviour rather than explicit preferences, are not shown to minors without parental consent. By 2027, businesses will also need to implement age assurance techniques, such as age estimation models, to identify underage users and tailor their feeds accordingly.
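The distinction the law draws can be pictured in code. The sketch below is a minimal, hypothetical illustration only: the `User` record, `estimated_age` field, consent flag, and both feed functions are assumptions for the example and do not reflect any platform’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class User:
    estimated_age: int      # output of an assumed age-estimation model
    parental_consent: bool  # whether verifiable parental consent is on file

def rank_by_engagement(posts: list[dict], user: User) -> list[dict]:
    # Stand-in for a behavioural recommender that ranks content by
    # predicted engagement (the kind of feed the law treats as addictive)
    return sorted(posts, key=lambda p: p["engagement_score"], reverse=True)

def select_feed(user: User, posts: list[dict]) -> list[dict]:
    # Minors without parental consent get a feed driven by explicit
    # preferences: accounts they follow, in reverse-chronological order
    if user.estimated_age < 18 and not user.parental_consent:
        followed = [p for p in posts if p["followed_by_user"]]
        return sorted(followed, key=lambda p: p["timestamp"], reverse=True)
    # Otherwise a behaviour-based recommendation feed may be served
    return rank_by_engagement(posts, user)
```

In this framing, the 2027 age-assurance requirement amounts to populating `estimated_age` reliably rather than relying on self-reported birthdates.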

The tech industry group NetChoice, representing firms like Meta, Google, and X, attempted to block the law, citing First Amendment concerns. While the judge dismissed their challenge to the addictive feeds provision, certain aspects of the law, such as limits on nighttime notifications for minors, were blocked.

This ruling marks a notable step in California’s efforts to regulate the digital landscape and protect younger users from potentially harmful online content.

TikTok fined in Russia for legal violations

A Moscow court has fined TikTok three million roubles (around $28,930) for failing to meet Russian legal requirements. The court’s press service confirmed the verdict but did not elaborate on the specific violation.

The social media platform, owned by ByteDance, has been facing increasing scrutiny worldwide. Allegations of non-compliance with legal frameworks and security concerns have made headlines in multiple countries.

TikTok has encountered further setbacks recently, including a year-long ban in Albania last December. Canadian authorities also ordered the company to wind up its business operations in the country, citing national security threats.

The fine in Russia reflects the mounting regulatory challenges for TikTok as it navigates stricter oversight in various regions.