TikTok’s abrupt shutdown shakes the USA

TikTok’s future in the US took a dramatic turn late Saturday as the app went offline ahead of a Sunday deadline mandated by US law. The US government’s move, affecting 170 million US users, marks an unprecedented shutdown of one of the world’s most influential social media platforms.

US officials' push to ban TikTok stems from concerns over the platform's ties to its Chinese parent company, ByteDance, and the potential risks those ties pose to national security. As users grapple with the platform's disappearance, President-elect Donald Trump has hinted at a possible 90-day extension to allow time for a resolution.

The shutdown comes after the Supreme Court upheld a law requiring TikTok to sever ties with ByteDance or cease US operations. ByteDance’s other apps, such as CapCut and Lemon8, were also removed from US app stores.

TikTok issued a message to users acknowledging the shutdown and expressing hope for a political resolution under the Trump administration, which takes office Monday 20 January 2025. Trump has indicated that he will announce an extension early next week.

The app's disappearance has sparked a wave of reactions among users, businesses, and competitors. Social media platforms like RedNote, Meta, and Snap have seen an influx of users and investor interest, while many TikTok creators expressed sadness and uncertainty online. Virtual private network (VPN) searches surged as users sought workarounds to access the platform, highlighting the app's deep integration into American culture and commerce.

Despite the shutdown, speculation continues about TikTok's future. ByteDance has reportedly been in discussions with potential buyers, including billionaire Elon Musk and other US-based entities. Meanwhile, TikTok CEO Shou Zi Chew is set to attend Trump's inauguration, signalling possible negotiations to keep the platform operational. Proposals from new suitors, such as US search engine startup Perplexity AI, further illustrate the high stakes and value of TikTok's US operations, which are estimated to be worth up to $50 billion.

The uncertainty has created a ripple effect, with businesses that rely on TikTok for marketing and e-commerce scrambling to adapt. Many worry about the broader implications of this shutdown, which has deepened tensions between Washington and Beijing.

The prospect of a political compromise looms as Trump prepares to take office, but whether TikTok can return to US screens remains uncertain. The platform’s sudden disappearance underscores the complex intersection of technology, geopolitics, and commerce, leaving millions of users and businesses in limbo.

AFP partnership strengthens Mistral’s global reach

Mistral, a Paris-based AI company, has entered a groundbreaking partnership with Agence France-Presse (AFP) to enhance the accuracy of its chatbot, Le Chat. The deal signals Mistral’s determination to broaden its scope beyond foundational model development.

Through the agreement, Le Chat will gain access to AFP's extensive archive, which includes over 2,300 daily stories in six languages and records dating back to 1983. The multi-year arrangement covers text content only; photos and videos are excluded. By incorporating AFP's multilingual and multicultural resources, Mistral aims to deliver more accurate and reliable responses tailored to business needs.

The partnership bolsters Mistral's standing against AI leaders such as OpenAI and Anthropic, which have secured similar content agreements. Le Chat's enhanced features align with Mistral's broader strategy to develop user-friendly applications that rival popular tools such as ChatGPT and Claude.

Mistral’s co-founder and CEO, Arthur Mensch, emphasised the importance of the partnership, describing it as a step toward offering clients a unique and culturally diverse AI solution. The agreement reinforces Mistral’s commitment to innovation and its global relevance in the rapidly evolving AI landscape.

Meta pushes free speech at the cost of content control

Meta has announced that Instagram and Threads users will no longer be able to opt out of seeing political content from accounts they don’t follow. The change, part of a broader push toward promoting “free expression,” will take effect in the US this week and expand globally soon after. Users will be able to adjust how much political content they see but won’t be able to block it entirely.

Adam Mosseri, head of Instagram and Threads, had previously expressed reluctance to feature political posts, favouring community-focused content like sports and fashion. However, he now claims that users have asked to see more political material. Critics, including social media experts, argue the shift is driven by changing political dynamics in the US, particularly with Donald Trump’s imminent return to the White House.

While some users have welcomed Meta’s stance on free speech, many worry it could amplify misinformation and hate speech. Experts also caution that marginalised groups may face increased harm due to fewer content moderation measures. The changes could also push discontented users toward rival platforms like Bluesky, raising questions about Meta’s long-term strategy.

Apple faces backlash over AI-generated news errors

Apple is facing mounting criticism over its AI-generated news summaries, which have produced inaccurate and misleading alerts on its latest iPhones. Media organisations, including the BBC, have raised concerns that the feature, designed to summarise breaking news notifications, has fabricated details that contradict original reports. The National Union of Journalists and Reporters Without Borders have called for the product’s removal, warning it risks spreading misinformation at a time when trust in news is already fragile.

High-profile errors have fuelled demands for urgent action. In December, an Apple AI summary falsely claimed that a murder suspect had taken his own life, while another inaccurately announced Luke Littler as the winner of the PDC World Darts Championship before the event had even begun. Apple has pledged to update the feature to make it clearer that summaries are AI-generated, but critics argue this does not address the root problem.

Journalism watchdogs and industry experts have warned that AI-driven news aggregation remains unreliable. The BBC stressed that the errors could undermine public trust, while former Guardian editor Alan Rusbridger described Apple’s technology as “out of control”. Similar concerns have been raised over generative AI tools from other tech firms, with Google’s AI-powered search summaries also facing scrutiny for producing incorrect responses. Apple insists the feature remains optional and is still in beta testing, with further improvements expected in an upcoming software update.

Israeli spyware deal reports denied by US and Israel

Officials from the United States and Israel have denied claims of approving the sale of Israeli spyware firm Paragon to Florida-based AE Industrial Partners. Reports of the transaction surfaced in Israeli media, suggesting both governments had greenlit the deal, but US and Israeli representatives dismissed these assertions.

The White House clarified that the sale was a private transaction with no formal US approval, while Israel's Defence Ministry stated it was still evaluating the deal. Paragon, linked to former Israeli intelligence officers, has faced scrutiny in the US market, including a paused $2 million contract with ICE.

The alleged acquisition has drawn attention due to Paragon's ties to national security and controversial surveillance software. Neither AE Industrial Partners nor Paragon has commented on the situation.

Protecting journalists online with global solutions from IGF 2024

The safety of journalists online took centre stage during an open forum at IGF 2024 in Riyadh. Experts and audience members shared insights on the growing threats faced by journalists globally, including online harassment, surveillance, and censorship. Discussions underscored how these challenges disproportionately affect women journalists and individuals from marginalised communities.

Panellists such as Isabelle Lois from Switzerland and Bruna Martins dos Santos from Brazil emphasised the urgent need for stronger legal frameworks and better implementation of existing laws. Digital platforms were urged to increase accountability for online attacks, while media organisations were encouraged to provide robust support systems for their journalists. Gulalai Khan from Pakistan highlighted the importance of digital literacy and ethical reporting in navigating online threats.

Debates also addressed the evolving definition of journalism in the digital age, questioning whether protections should extend to citizen journalists and content creators. Giulia Lucchese from the Council of Europe pointed to positive initiatives like Switzerland’s National Action Plan and European campaigns on journalist safety as steps in the right direction. However, participants agreed on the need for greater international collaboration to amplify these efforts.

The session concluded with a call for multi-stakeholder approaches to foster trust and ensure journalist safety. Speakers stressed that governments, tech companies, and civil society must work together to protect press freedom in democratic societies. Overall, the forum highlighted both ongoing challenges and the importance of collective action to safeguard journalists in an increasingly digital world.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Experts at IGF 2024 address the dual role of AI in elections, emphasising empowerment and challenges

At IGF 2024, panellists explored AI’s role in elections, its potential for both empowerment and disruption, and the challenges it poses to democratic processes. Moderator Tapani Tarvainen led the discussion with contributions from Ayobangira Safari Nshuti, Roxana Radu, Babu Ram Aryal, and other experts.

Speakers noted that AI had been primarily used for self-promotion in campaigns, helping smaller candidates compete with limited resources. Roxana Radu highlighted AI’s positive role in voter outreach in India but warned of risks such as disinformation and public opinion manipulation. Ayobangira Safari Nshuti pointed to algorithmic biases and transparency issues in platforms as critical concerns, emphasising a recent case in Romania where AI-enabled manipulation caused election disruption.

Accountability of social media platforms became a focal point. Platforms increasingly rely on AI for content moderation, but their effectiveness in languages with limited online presence remains inadequate. Babu Ram Aryal stressed the need for stronger oversight, particularly in multilingual nations, while Dennis Redeker underscored the challenges of balancing regulation with free speech.

Panellists called for holistic solutions to safeguard democracy. Suggestions included enhancing platform transparency, implementing robust digital literacy programmes, and addressing social factors like poverty that exacerbate misinformation. Nana, an AI ethics specialist, advocated for proactive governance to adapt electoral institutions to technological realities.

The session concluded with a recognition that AI’s role in elections will continue to evolve. Panellists urged collaborative efforts between governments, civil society, and technology companies to ensure election integrity and maintain public trust in democratic systems.

Human rights concerns over UN Cybercrime Treaty raised at IGF 2024

A panel discussion at the Internet Governance Forum (IGF) raised serious concerns over the UN Cybercrime Treaty and its potential to undermine human rights. Experts from organisations such as Human Rights Watch and the Electronic Frontier Foundation criticised the treaty's broad scope and lack of clear safeguards for individual freedoms. They warned that the treaty's vague language, particularly around what constitutes a 'serious crime', could empower authoritarian regimes to exploit its provisions to conduct surveillance and repress dissent.

Speakers such as Joey Shea from Human Rights Watch and Lina al-Hathloul, a Saudi human rights defender, pointed out the risks posed by the treaty's expansive investigative powers, which extend beyond cybercrimes to any crimes defined by domestic law. Such flexibility could force countries to assist in prosecuting acts that are not crimes within their own borders. They also highlighted the treaty's weak privacy protections, which could undermine encryption standards and endanger cybersecurity researchers.

Deborah Brown from Human Rights Watch and Veridiana Alimonti of the Electronic Frontier Foundation shared examples from Saudi Arabia and Latin America, where existing cybercrime and anti-terrorism laws have already been used to target journalists and activists. The panellists expressed concern that the treaty could exacerbate these abuses globally, especially for cybersecurity professionals and civil society.

Fionnuala Ni Aolain, a former UN Special Rapporteur on counterterrorism and human rights, emphasised that the treaty’s provisions could lead to criminalising the vital work of cybersecurity researchers. She joined other experts in urging policymakers and industry leaders to resist ratification in its current form. They called for upcoming protocol negotiations to address these human rights gaps and for greater involvement of civil society voices to prevent the treaty from becoming a tool for transnational repression.

Experts at IGF 2024 address challenges of online information governance

The IGF 2024 panel explored the challenges and opportunities in creating healthier online information spaces. Experts from civil society, governments, and media highlighted concerns about big tech's influence, misinformation, and the financial struggles of journalism in the digital age. Discussions centred on multi-stakeholder approaches, regulatory frameworks, and innovative solutions to address these issues.

Speakers including Nighat Dad and Martin Samaan criticised the power imbalance created by major platforms acting as gatekeepers to information. Concerns about insufficient language-specific content moderation and misinformation affecting non-English speakers were raised, with Aws Al-Saadi showcasing Tech4Peace, an Iraqi app tackling misinformation. Julia Haas called for stronger AI governance and transparency to protect vulnerable users while enhancing content curation systems.

The financial sustainability of journalism took centre stage, with Elena Perotti highlighting the decline in advertising revenue for traditional publishers. Isabelle Lois presented Switzerland's regulatory initiatives, which focus on transparency, user rights, and media literacy, as potential solutions. Industry collaborations to redirect advertising revenue to professional media were also proposed to sustain quality journalism.

Collaboration emerged as a key theme, with Claire Harring and other speakers emphasising partnerships among governments, media organisations, and tech companies. Initiatives like Meta’s Oversight Board and global dialogues on AI governance were cited as steps toward creating balanced and equitable digital spaces. The session concluded with a call to action for greater engagement in global governance to address the interconnected challenges of the digital information ecosystem.

International experts converge at IGF 2024 to promote digital solidarity in global governance

A panel of international experts gathered at IGF 2024 to discuss the growing importance of digital solidarity in global digital governance. Jennifer Bachus of the US State Department introduced the concept as a framework for fostering international cooperation centred on human rights and multi-stakeholder engagement. Nashilongo Gervasius, a public interest technology expert from Namibia, highlighted the need to close digital divides and promote inclusivity in global digital policymaking.

The discussion focused on balancing digital sovereignty with the need for international collaboration. Jason Pielemeier, Executive Director of the Global Network Initiative, stressed the critical role of data privacy and cybersecurity in advancing global digital rights. Robert Opp, Chief Digital Officer at the United Nations Development Programme, emphasised the importance of capacity building and enhancing digital infrastructure, particularly in developing nations.

Key global mechanisms like the Internet Governance Forum (IGF) and the World Summit on the Information Society (WSIS) processes featured prominently in the dialogue. Panellists, including Susan Mwape from Zambia, underscored the need to strengthen these platforms while ensuring they remain inclusive and respectful of human rights. The upcoming WSIS+20 review was recognised as an opportunity to revitalise international cooperation in the digital realm.

Challenges such as internet shutdowns, mass surveillance, and the misuse of cybercrime legislation were debated. Mwape voiced concerns about the potential for international forums to lose credibility if hosted by countries with poor human rights records. Audience member Barbara from Nepal called for greater accountability in digital governance practices, while Hala Rasheed from the Alnahda Society echoed the urgency of addressing inequalities in digital policy implementation.

Russian civil society representative Alexander Savnin brought attention to the impact of sanctions on international technical cooperation in cybersecurity. He argued for a more balanced approach that would allow global stakeholders to address shared security challenges effectively. Panellists agreed that fostering trust among diverse actors remains a critical hurdle to achieving digital solidarity.

The session concluded with a commitment to fostering continuous dialogue and collaboration. Panellists expressed hope that inclusive and rights-based approaches could transform digital solidarity into tangible solutions, helping to address the pressing challenges of the digital age.