TikTok fined in Russia for legal violations

A Moscow court has fined TikTok three million roubles (around $28,930) for failing to meet Russian legal requirements. The court’s press service confirmed the verdict but did not elaborate on the specific violation.

The social media platform, owned by ByteDance, has been facing increasing scrutiny worldwide. Allegations of non-compliance with legal frameworks and security concerns have made headlines in multiple countries.

TikTok encountered further setbacks recently, including a year-long ban in Albania last December. Canadian authorities also ordered the company to halt operations, citing national security threats.

The fine in Russia reflects the mounting regulatory challenges for TikTok as it navigates stricter oversight in various regions.

European nations debate school smartphone bans

As concerns grow over the impact of smartphones on children, several European countries are implementing or debating restrictions on their use in schools. France, for example, has prohibited phones in primary and secondary schools since 2018 and recently extended the policy to include ‘digital breaks’ at some institutions. Similarly, the Netherlands and Hungary have adopted bans, with exceptions for educational purposes or special needs, while Italy, Greece, and Latvia have also imposed restrictions.

The debate is fuelled by studies showing that smartphones can distract students, though some argue they can also be useful for learning. A 2023 UNESCO report recommended limiting phones in schools to support education, and more than 60 countries have since adopted similar measures. However, enforcement remains a challenge, as some reports suggest that many students still find ways to use their devices despite the bans.

Experts remain divided on the issue. While some highlight the risks of distraction and mental health impacts, others emphasise the need for balance. ‘Banning phones can be beneficial, but we must ensure children have adequate alternatives for education and communication,’ said Ben Carter, a professor of medical statistics at King’s College London.

The trend reflects broader concerns about screen time among children, with countries like Sweden and Luxembourg calling for clearer rules to promote healthier digital habits. While opinions differ, the growing movement underscores a collective effort to create focused, engaging, and healthier learning environments.

How teens are falling victim to digital scams

In the rapidly expanding online world, teenagers are becoming prime targets for scammers. Over a recent five-year period, financial losses reported by teens increased by an alarming 2,500%, outpacing the 805% rise among seniors. Experts attribute this to scammers exploiting the tech-savviness of younger users while capitalising on their lack of experience.

Scammers use various tactics, including impersonating online influencers, romance schemes, and phishing for sensitive information through gaming platforms. One growing threat involves sextortion, where victims are coerced into sharing explicit images that are later used to demand money under the threat of public exposure. Tragically, such incidents have already led to devastating consequences, including teen suicides.

Parents are urged to foster open communication with their children about these risks, creating a safe space for them to share any unsettling online encounters. Basic steps like monitoring app usage, staying connected on social media, and setting clear tech boundaries can go a long way in shielding teens from these dangers. The key, experts stress, is building trust and ensuring children know they have unwavering support, no matter the situation.

Major US telecom firms confirm cyberattacks by Chinese group ‘Salt Typhoon’, sparking national security concerns

AT&T and Verizon have confirmed cyberattacks linked to a Chinese hacking group known as ‘Salt Typhoon’, but assured the public on Saturday that their US networks are now secure. Both companies acknowledged the breaches for the first time, stating they are cooperating with law enforcement and government agencies to address the threat. AT&T disclosed that the attackers targeted a small group of individuals tied to foreign intelligence, while Verizon emphasised that the activities have been contained following extensive remediation efforts.

The attacks, described by US officials as the most extensive telecommunications hack in the nation’s history, reportedly allowed Salt Typhoon operatives to access sensitive network systems, including the ability to geolocate individuals and record phone calls. Authorities have linked the breaches to several telecom firms, with a total of nine entities now confirmed as compromised. In response, the Cybersecurity and Infrastructure Security Agency has urged government officials to transition to encrypted communication methods.

US Senators, including Democrat Ben Ray Luján and Republican Ted Cruz, have expressed alarm over the breach’s scale, calling for stronger safeguards against future intrusions. Meanwhile, Chinese officials have denied the accusations, dismissing them as disinformation and reaffirming their opposition to cyberattacks. Despite assurances from the companies and independent cybersecurity experts, questions remain about how long it will take to fully restore public confidence in the nation’s telecommunications security.

Trump urges Supreme Court to postpone TikTok law

President-elect Donald Trump has called on the US Supreme Court to postpone implementing a law that would ban TikTok or force its sale, arguing for time to seek a political resolution after taking office. The court will hear arguments on the case on 10 January, ahead of a 19 January deadline for TikTok’s Chinese owner, ByteDance, to sell the app or face a US ban.

The move marks a stark shift for Trump, who previously sought to block TikTok in 2020 over national security concerns tied to its Chinese ownership. Trump’s legal team emphasised that his request does not take a stance on the law’s merits but seeks to allow his incoming administration to explore alternatives. Trump has expressed a newfound appreciation for TikTok, citing its role in boosting his campaign visibility.

TikTok, with over 170 million US users, continues to challenge the legislation, asserting that its data and operations affecting US users are fully managed within the country. However, national security concerns persist, with the Justice Department and a coalition of attorneys general urging the Supreme Court to uphold the divest-or-ban mandate. The case highlights the growing debate between free speech advocates and national security interests in regulating digital platforms.

Social media platforms face penalties over child safety

The UK government is intensifying efforts to safeguard children online, with new measures requiring social media platforms to implement robust age verification and protect young users from harmful content. Technology Secretary Peter Kyle highlighted the importance of ‘watertight’ systems, warning that companies failing to comply could face significant fines or even prison terms for executives.

The measures, part of the Online Safety Act passed in 2023, will see platforms penalised for failing to address issues such as bullying, violent content, and risky stunts. Ofcom, the UK’s communications regulator, is set to outline further obligations in January, including stricter ID verification for adult-only apps.

Debate continues over the balance between safety and accessibility. While some advocate for bans similar to Australia’s under-16 restrictions, teenagers consulted by Kyle emphasised the positive aspects of social media, including learning opportunities and community connections. Research into the impact of screen time on mental health is ongoing, with new findings expected next year.

Google tests Gemini AI against Anthropic’s Claude

Google contractors improving the Gemini AI model have been tasked with comparing its responses against those of Anthropic’s Claude, according to internal documents reviewed by TechCrunch. The evaluation process involves scoring responses on criteria such as truthfulness and verbosity, with contractors given up to 30 minutes per prompt to determine which model performs better. Notably, some outputs identify themselves as Claude, sparking questions about Google’s use of its competitor’s model.

Claude’s responses, known for emphasising safety, have sometimes refused to answer prompts deemed unsafe, unlike Gemini, which has faced criticism for safety violations. One such instance involved Gemini generating responses flagged for inappropriate content. Although Google is a major investor in Anthropic, Claude’s terms of service prohibit using it to train or build competing AI models without prior approval.

A spokesperson for Google DeepMind stated that while the company compares model outputs for evaluation purposes, it does not train Gemini using Anthropic models. Anthropic, however, declined to comment on whether Google had obtained permission to use Claude for these tests. Recent revelations also highlight contractor concerns over Gemini producing potentially inaccurate information on sensitive topics, including healthcare.

TikTok faces ban in Albania after teen’s death

Albania has announced a one-year nationwide ban on TikTok, citing concerns about the platform’s influence on children. The decision follows the fatal stabbing of a 14-year-old boy in November, reportedly linked to social media disputes. Prime Minister Edi Rama revealed the ban as part of a broader strategy to enhance school safety after consultations with parents and teachers.

The Prime Minister has criticised TikTok and similar platforms for encouraging youth violence. Videos supporting the killing were shared online, raising alarms about the role of social media in such incidents. Rama stated that society, not children, bears responsibility for the issue, describing TikTok as a platform that holds children ‘hostage’.

Several European nations, including France and Germany, have introduced restrictions on social media for children. Albania’s move aligns with a growing global trend, with Australia recently approving a complete social media ban for users under 16.

TikTok responded by seeking clarity from the Albanian government, claiming no evidence linked the involved teens to the platform. A spokesperson suggested another platform might have hosted the content tied to the incident.

TikTok appeals to Supreme Court to block looming US ban

TikTok and its parent company, ByteDance, have asked the Supreme Court to halt a US law that would force ByteDance to sell TikTok by 19 January or face a nationwide ban. The companies argue that the law violates the First Amendment, as it targets one of the most widely used social media platforms in the United States, which currently has 170 million American users. A group of TikTok users also submitted a similar request to prevent the shutdown.

The law, passed by Congress in April, reflects concerns over national security. The Justice Department claims TikTok poses a threat due to its access to vast user data and potential for content manipulation by a Chinese-owned company. A lower court in December upheld the law, rejecting TikTok’s argument that it infringes on free speech rights. TikTok maintains that users should be free to decide for themselves whether to use the app and that shutting it down for even a month could cause massive losses in users and advertisers.

With the ban set to take effect the day before President-elect Donald Trump’s inauguration, TikTok has urged the Supreme Court to decide by 6 January. Trump, who once supported banning TikTok, has since reversed his position and expressed willingness to reconsider. The case highlights rising trade tensions between the US and China and could set a precedent for other foreign-owned apps operating in America.

Dynamic Coalitions: Bridging digital divides and shaping equitable online governance

The session ‘Dynamic Coalitions and the Global Digital Compact’ at IGF 2024 in Riyadh highlighted the significant role of Dynamic Coalitions (DCs) in advancing the Global Digital Compact’s (GDC) objectives. Moderated by Jutta Croll, the discussion served as a platform to illustrate the alignment of DC efforts with the GDC’s goals, emphasising the need for broader collaboration and inclusion.

One of the pressing topics addressed was bridging digital divides, raised by June Paris, an experienced nurse who researches nutrition in pregnant women and works in business development. She underscored the challenges faced by Small Island Developing States (SIDS), noting their heightened vulnerability to digital marginalisation, and called on DCs to prioritise policies that combat polarisation and promote equitable internet access for underrepresented regions.

The conversation also delved into expanding the benefits of the digital economy. Muhammad Shabbir, a member of the Internet Society’s Accessibility Special Interest Group, a member of the Pakistan ISOC chapter, and a member of the Digital Coalition on Accessibility and Disability (DCAD), detailed the contributions of coalitions like the DC on Financial Inclusion, which advocates for accessible financial services, and the DC on Open Education, which focuses on enhancing learning opportunities. Shabbir also highlighted the DC on Accessibility’s work towards digital inclusivity for persons with disabilities and the DC on Environment’s initiatives to address the environmental impacts of digitalisation.

Olivier Crepin-Leblond, founder and investor of the WAF lifestyle app and chair of the Dynamic Coalition on Core Internet Values, provided insights on fostering safe and inclusive digital spaces, stressing the pivotal work of DCs such as the DC on Internet Rights and Principles, which champions human rights online, and the DC on Child Online Safety, which works to protect children in the digital realm. He highlighted the significant proportion of internet users under 18, linking their rights to the UN Convention on the Rights of the Child.

Data governance and AI regulation also featured prominently. Tatevik Grigoryan, co-chair of the Dynamic Coalition on Interoperability, Equitable and Interoperable Data Governance and Internet Universality Indicators, discussed frameworks for responsible data management. Meanwhile, Yao Amevi Amnessinou Sossou, a research fellow for innovation and entrepreneurship, spotlighted AI-related initiatives, including tackling gender biases through the DC on Gender and Internet Governance and exploring AI’s potential in healthcare and connected devices through other coalitions. Their contributions underscored the need for ethical and inclusive governance of emerging technologies.

The session’s open dialogue further enriched its value. Dr Rajendra Pratap Gupta, lead of three dynamic coalitions (Digital Economy, Digital Health, and Environment), highlighted the urgency of job creation and digital inclusion, while audience members raised critical points on data integrity and the transformative potential of gamification. Mark Carvell, the session’s co-moderator, mentioned the WSIS+20 Review, adding a forward-looking perspective and inviting DCs to contribute their expertise to this landmark evaluation.

By showcasing the diverse initiatives of Dynamic Coalitions, the session reinforced their essential role in shaping global internet governance. The call for greater inclusion, tangible outcomes, and multistakeholder collaboration resonated throughout, marking a clear path forward for advancing the GDC’s objectives.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.