Gaming app offers mental health support for kids

A new app designed to help children aged seven to twelve manage anxiety through gaming is being launched in Lincolnshire, UK. The app, called Lumi Nova, combines cognitive behavioural therapy (CBT) techniques with personalised quests to gently expose children to their fears in a safe and interactive way.

The digital game has been created by BFB Labs, a social enterprise focused on digital therapy, in collaboration with children, parents, and mental health experts. The app aims to make mental health support more accessible, particularly in rural areas, where traditional services may be harder to reach.

Families in Lincolnshire can download the app for free without needing a prescription or referral. Councillor Patricia Bradwell from Lincolnshire County Council highlighted the importance of flexible mental health services, saying: ‘We want to ensure children and young people have easy access to support that suits their needs.’

By using immersive videos and creative tasks, Lumi Nova allows children to confront their worries at their own pace from the comfort of home, making mental health care more engaging and approachable. The year-long pilot aims to assess the app’s impact on childhood anxiety in the region.

Meta ends fact-checking program in the US

Meta Platforms has announced the termination of its US fact-checking program and eased restrictions on politically charged discussions, such as immigration and gender identity. The decision, which affects Facebook, Instagram, and Threads, marks a significant shift in the company’s content moderation strategy. CEO Mark Zuckerberg framed the move as a return to ‘free expression,’ citing recent US elections as a cultural tipping point. The changes come as Meta seeks to build rapport with the incoming Trump administration.

In place of fact-checking, Meta plans to adopt a ‘Community Notes’ system, similar to that used by Elon Musk’s platform X. The company will also scale back proactive monitoring of hate speech, relying instead on user reports, while continuing to address high-severity violations like terrorism and scams. Meta is also relocating some policy teams from California to other states, signalling a broader operational shift. The decision follows the promotion of Republican policy executive Joel Kaplan to head of global affairs and the appointment of Trump ally Dana White to Meta’s board.

The move has sparked criticism from fact-checking organisations and free speech advocates. Angie Drobnic Holan, head of the International Fact-Checking Network, pushed back against Zuckerberg’s claims of bias, asserting that fact-checkers provide context rather than censorship. Critics, including the Centre for Information Resilience, warn that the policy rollback could exacerbate disinformation. For now, the changes will apply only to the US, with Meta maintaining its fact-checking operations in regions like the European Union, where stricter tech regulations are in place.

As Meta rolls out its ‘Community Notes’ system, global scrutiny is expected to intensify. The European Commission, already investigating Musk’s X over similar practices, noted Meta’s announcement and emphasised compliance with the EU’s Digital Services Act, which mandates robust content regulation. While Meta navigates a complex regulatory and political landscape, the impact of its new policies on disinformation and public trust remains uncertain.

TikTok faces new allegations of child exploitation

TikTok is under heightened scrutiny following newly unsealed allegations from a Utah lawsuit claiming the platform knowingly allowed harmful activities, including child exploitation and sexual misconduct, to persist on its livestreaming feature, TikTok Live. According to the lawsuit, TikTok disregarded the issue because it ‘profited significantly’ from these livestreams. The revelations come as the app faces a potential nationwide ban in the US unless its parent company, ByteDance, divests ownership.

The complaint, filed by Utah’s Division of Consumer Protection in June, accuses TikTok Live of functioning as a ‘virtual strip club,’ connecting minors with adult predators in real time. Internal documents and investigations, including the Project Meramec and Project Jupiter probes, reveal that TikTok was aware of the dangers. The findings indicate that hundreds of thousands of minors bypassed age restrictions and were allegedly groomed by adults to perform explicit acts in exchange for virtual gifts. The probes also uncovered criminal activities such as money laundering and drug sales facilitated through TikTok Live.

TikTok has defended itself, claiming it prioritises user safety and accusing the lawsuit of distorting facts by selectively quoting outdated internal documents. A spokesperson emphasised the platform’s ‘proactive measures’ to support community safety and dismissed the allegations as misleading. However, the unsealed material from the case, released by Utah Judge Coral Sanchez, paints a stark picture of TikTok Live’s risks to minors.

This lawsuit is not an isolated case. In October, 13 US states and Washington, D.C., filed a bipartisan lawsuit accusing TikTok of exploiting children and fostering addiction to the app. Utah Attorney General Sean Reyes called social media a pervasive tool for exploiting America’s youth and welcomed the disclosure of TikTok’s internal communications as critical evidence for demonstrating the platform’s culpability.

Why does it matter?

The controversy unfolds amid ongoing national security concerns about TikTok’s ties to China. President Joe Biden signed legislation authorising a TikTok ban last April, citing risks that the app could share sensitive data with the Chinese government. The US Supreme Court is set to hear arguments on whether to delay the ban on 10 January, with a decision expected shortly thereafter. The case underscores the intensifying debate over social media’s role in safeguarding users while balancing innovation and accountability.

Albania’s TikTok ban: Balancing youth protection with free speech and economic impact

In Tirana, Albania, Ergus Katiaj, a small business owner who relies on TikTok to market his nighttime delivery service for snacks, cigarettes, and alcohol, faces an uncertain future. The Albanian government has announced a year-long ban on the social media platform, a move aimed at curbing youth violence.

The ban follows a tragic incident in November where a 14-year-old boy was fatally stabbed, reportedly after an online clash with a peer. Prime Minister Edi Rama said the decision, announced on 21 December, is to protect young people, but critics argue it threatens free speech and commerce ahead of the May elections.

The ban aligns Albania with a growing list of countries imposing restrictions on TikTok due to concerns over harmful content and its ties to China-based parent company ByteDance. However, business owners like Katiaj fear significant financial losses, as TikTok has been a vital tool for free marketing.

Rights groups and opposition leaders, such as Arlind Qori of the Bashke party, worry the ban sets a troubling precedent for political censorship, particularly in a country where protests against the jailing of political opponents were met with harsh government responses last year.

TikTok has called for urgent clarification from the Albanian government, asserting that reports indicate the videos linked to the tragic incident were uploaded to another platform. Meanwhile, the debate continues, with some viewing the ban as a protective measure for youth and others as an overreach limiting commerce and dissent.

For many, like Katiaj, the ban underscores the broader challenges of balancing public safety with democratic freedoms in Albania.

Malaysia tightens social media oversight with new licensing law

Malaysia’s communications regulator has granted licences to Tencent’s WeChat and ByteDance’s TikTok under a new social media law designed to combat rising cybercrime. The law, effective from 1 January, mandates that platforms and messaging services with over 8 million users in Malaysia must obtain a licence or face legal consequences.

While messaging app Telegram is close to completing the licensing process, Meta Platforms, the owner of Facebook, Instagram, and WhatsApp, has just started compliance steps. Other major platforms face scrutiny under the law. X, formerly known as Twitter, claims its user base in Malaysia falls below the 8 million threshold, a claim currently under review by authorities.

Alphabet’s YouTube has not applied for a licence, citing concerns about how the law applies to its video-sharing features. The regulator emphasised that non-compliance could lead to investigations and regulatory actions.

The move follows a surge in harmful online content earlier this year, prompting Malaysian authorities to urge tighter monitoring from social media companies. Content related to online scams, child exploitation, cyberbullying, and sensitive topics such as race, religion, and royalty is classified as harmful.

Platforms like TikTok, Facebook, and YouTube reportedly have millions of active users in Malaysia, with TikTok alone counting more than 28 million users aged 18 and above, underscoring the high stakes of regulatory compliance in the country.

California’s ban on addictive feeds for minors upheld

A federal judge has upheld California’s law, SB 976, which restricts companies from serving addictive content feeds to minors. The decision allows the legislation to take effect, ushering in a significant shift in how social media platforms operate in the state.

Companies must now ensure that addictive feeds, defined as algorithms recommending content based on user behaviour rather than explicit preferences, are not shown to minors without parental consent. By 2027, businesses will also need to implement age assurance techniques, such as age estimation models, to identify underage users and tailor their feeds accordingly.

The tech industry group NetChoice, representing firms like Meta, Google, and X, attempted to block the law, citing First Amendment concerns. While the judge dismissed their challenge to the addictive feeds provision, certain aspects of the law, such as limits on nighttime notifications for minors, were blocked.

This ruling marks a notable step in California’s efforts to regulate the digital landscape and protect younger users from potentially harmful online content.

TikTok fined in Russia for legal violations

A Moscow court has fined TikTok three million roubles (around $28,930) for failing to meet Russian legal requirements. The court’s press service confirmed the verdict but did not elaborate on the specific violation.

The social media platform, owned by ByteDance, has been facing increasing scrutiny worldwide. Allegations of non-compliance with legal frameworks and security concerns have made headlines in multiple countries.

TikTok encountered further setbacks recently, including a year-long ban in Albania last December. Canadian authorities also ordered the company to halt operations, citing national security threats.

The fine in Russia reflects the mounting regulatory challenges for TikTok as it navigates stricter oversight in various regions.

European nations debate school smartphone bans

As concerns grow over the impact of smartphones on children, several European countries are implementing or debating restrictions on their use in schools. France, for example, has prohibited phones in primary and secondary schools since 2018 and recently extended the policy to include ‘digital breaks’ at some institutions. Similarly, the Netherlands and Hungary have adopted bans, with exceptions for educational purposes or special needs, while Italy, Greece, and Latvia have also imposed restrictions.

The debate is fuelled by studies showing that smartphones can distract students, though some argue they can also be useful for learning. A 2023 UNESCO report recommended limiting phones in schools to support education, and more than 60 countries have since adopted similar measures. However, enforcement remains a challenge, as some reports suggest that many students still find ways to use their devices despite the bans.

Experts remain divided on the issue. While some highlight the risks of distraction and mental health impacts, others emphasise the need for balance. ‘Banning phones can be beneficial, but we must ensure children have adequate alternatives for education and communication,’ said Ben Carter, a professor of medical statistics at King’s College London.

The trend reflects broader concerns about screen time among children, with countries like Sweden and Luxembourg calling for clearer rules to promote healthier digital habits. While opinions differ, the growing movement underscores a collective effort to create focused, engaging, and healthier learning environments.

How teens are falling victim to digital scams

In the rapidly expanding online world, teenagers are becoming prime targets for scammers. Over a recent five-year period, financial losses reported by teens increased by an alarming 2,500%, outpacing the 805% rise among seniors. Experts attribute this to scammers exploiting the tech-savviness of younger users while capitalising on their lack of experience.

Scammers use various tactics, including impersonating online influencers, romance schemes, and phishing for sensitive information through gaming platforms. One growing threat involves sextortion, where victims are coerced into sharing explicit images that are later used to demand money under the threat of public exposure. Tragically, such incidents have already led to devastating consequences, including teen suicides.

Parents are urged to foster open communication with their children about these risks, creating a safe space for them to share any unsettling online encounters. Basic steps like monitoring app usage, staying connected on social media, and setting clear tech boundaries can go a long way in shielding teens from these dangers. The key, experts stress, is building trust and ensuring children know they have unwavering support, no matter the situation.

Major US telecom firms confirm cyberattacks by Chinese group ‘Salt Typhoon’, sparking national security concerns

AT&T and Verizon have confirmed cyberattacks linked to a Chinese hacking group known as ‘Salt Typhoon,’ but assured the public on Saturday that their US networks are now secure. Both companies acknowledged the breaches for the first time, stating they are cooperating with law enforcement and government agencies to address the threat. AT&T disclosed that the attackers targeted a small group of individuals tied to foreign intelligence, while Verizon emphasised that the activities have been contained following extensive remediation efforts.

The attacks, described by US officials as the most extensive telecommunications hack in the nation’s history, reportedly allowed Salt Typhoon operatives to access sensitive network systems, including the ability to geolocate individuals and record phone calls. Authorities have linked the breaches to several telecom firms, with a total of nine entities now confirmed as compromised. In response, the Cybersecurity and Infrastructure Security Agency has urged government officials to transition to encrypted communication methods.

US Senators, including Democrat Ben Ray Luján and Republican Ted Cruz, have expressed alarm over the breach’s scale, calling for stronger safeguards against future intrusions. Meanwhile, Chinese officials have denied the accusations, dismissing them as disinformation and reaffirming their opposition to cyberattacks. Despite assurances from the companies and independent cybersecurity experts, questions remain about how long it will take to fully restore public confidence in the nation’s telecommunications security.