Apple halts AI news summaries after NUJ criticism

Apple has suspended its AI-generated news summary feature after criticism from the National Union of Journalists (NUJ). Concerns were raised over the tool’s inaccurate reporting and its potential role in spreading misinformation.

The NUJ welcomed the decision, emphasising the risks posed by automated reporting. Recent errors in AI-generated summaries highlighted how such tools can undermine public trust in journalism. NUJ assistant general secretary Séamus Dooley called for a more human-centred approach to reporting.

Apple’s decision follows growing scrutiny of AI’s role in journalism. Critics argue that while automation can streamline news delivery, it must not compromise accuracy or credibility.

The NUJ has urged Apple to prioritise transparency and accountability as it further develops its AI capabilities. Safeguarding trust in journalism remains a key concern in the evolving media landscape.

Donald Trump brings TikTok back online

TikTok began restoring its services in the US on Sunday after President-elect Donald Trump announced plans to revive the app upon taking office on Monday. Speaking at a rally ahead of his inauguration, Trump assured his supporters that TikTok, a platform used by 170 million Americans, would be brought back online through a joint venture that protects national security. Hours earlier, TikTok users had received a message crediting Trump for the app’s restoration efforts.

TikTok ceased operations late Saturday after a law banning the platform on national security grounds came into effect. The shutdown sparked a frenzy among users and businesses dependent on the app, with web searches for VPNs surging and concerns mounting over disruptions to TikTok Shop transactions. The app’s temporary return relieves millions, but important questions remain about its long-term future in the US.

Trump’s pledge to extend the ban’s enforcement period to facilitate a deal marks a shift from his stance in 2020 when he sought to ban TikTok over concerns that its Chinese parent company, ByteDance, was sharing user data with Beijing. Trump now calls for a joint venture, proposing a 50% US ownership stake while guaranteeing that service providers would not face penalties for restoring TikTok.

Despite Trump’s assurances, the law mandating TikTok’s divestiture remains contentious. Republican lawmakers, including Senators Tom Cotton and Pete Ricketts, have criticised any attempt to circumvent the law, insisting that ByteDance sever all ties with China to meet the divestiture requirements. Meanwhile, TikTok’s ongoing connection to China continues to fuel tensions in US-China relations, with Beijing accusing Washington of unfairly targeting Chinese companies.

TikTok’s temporary return has reignited debates over its valuation, reportedly as high as $50 billion, and potential suitors, including former Los Angeles Dodgers owner Frank McCourt and billionaire Elon Musk. While Beijing has reportedly discussed a possible sale, ByteDance denies such plans. Separately, US startup Perplexity AI has proposed merging with TikTok’s US operations to create a new entity.

The platform’s restoration signals its cultural and economic significance, but it also highlights the geopolitical complexities of its existence. Whether TikTok ultimately secures a deal or faces renewed legal battles, its journey reflects the growing and complicated intersection of technology, digital policies, cyber diplomacy, politics, and global commerce.

TikTok’s abrupt shutdown shakes the US

TikTok’s future in the US took a dramatic turn late Saturday as the app went offline ahead of a Sunday deadline mandated by US law. The US government’s move, affecting 170 million US users, marks an unprecedented shutdown of one of the world’s most influential social media platforms.

US officials’ persistence in pursuing a TikTok ban stems from concerns over the platform’s ties to its Chinese parent company, ByteDance, and potential risks to national security. As users grapple with the platform’s disappearance, President-elect Donald Trump has hinted at a possible 90-day extension to allow time for a resolution.

The shutdown comes after the Supreme Court upheld a law requiring TikTok to sever ties with ByteDance or cease US operations. ByteDance’s other apps, such as CapCut and Lemon8, were also removed from US app stores.

TikTok issued a message to users acknowledging the shutdown and expressing hope for a political resolution under the Trump administration, which takes office Monday 20 January 2025. Trump has indicated that he will announce an extension early next week.

The app’s disappearance has sparked many reactions among users, businesses, and competitors. Social media platforms like RedNote, Meta, and Snap have seen an influx of users and investor interest, while many TikTok creators expressed sadness and uncertainty online. Virtual private network (VPN) searches surged as users sought workarounds to access the platform, highlighting the app’s deep integration into American culture and commerce.

Despite the shutdown, speculation continues about TikTok’s future. ByteDance has reportedly been in discussions with potential buyers, including billionaire Elon Musk and other US-based entities. Meanwhile, TikTok CEO Shou Zi Chew is set to attend Trump’s inauguration, signalling possible negotiations to keep the platform operational. Proposals from new suitors, such as US search engine startup Perplexity AI, further illustrate the high stakes and value of TikTok’s US operations, which are estimated to be worth up to $50 billion.

The uncertainty has created a ripple effect, with businesses that rely on TikTok for marketing and e-commerce scrambling to adapt. Many worry about the broader implications of this shutdown, which has deepened tensions between Washington and Beijing.

The prospect of a political compromise looms as Trump prepares to take office, but whether TikTok can return to US screens remains uncertain. The platform’s sudden disappearance underscores the complex intersection of technology, geopolitics, and commerce, leaving millions of users and businesses in limbo.

AFP partnership strengthens Mistral’s global reach

Mistral, a Paris-based AI company, has entered a groundbreaking partnership with Agence France-Presse (AFP) to enhance the accuracy of its chatbot, Le Chat. The deal signals Mistral’s determination to broaden its scope beyond foundational model development.

Through the agreement, Le Chat will gain access to AFP’s extensive archive, which includes over 2,300 daily stories in six languages and records dating back to 1983. The multi-year arrangement covers text content only; photos and videos are excluded. By incorporating AFP’s multilingual and multicultural resources, Mistral aims to deliver more accurate and reliable responses tailored to business needs.

The partnership bolsters Mistral’s standing against AI leaders like OpenAI and Anthropic, which have secured similar content agreements. Le Chat’s enhanced features align with Mistral’s broader strategy to develop user-friendly applications that rival popular tools such as ChatGPT and Claude.

Mistral’s co-founder and CEO, Arthur Mensch, emphasised the importance of the partnership, describing it as a step toward offering clients a unique and culturally diverse AI solution. The agreement reinforces Mistral’s commitment to innovation and its global relevance in the rapidly evolving AI landscape.

Meta pushes free speech at the cost of content control

Meta has announced that Instagram and Threads users will no longer be able to opt out of seeing political content from accounts they don’t follow. The change, part of a broader push toward promoting “free expression,” will take effect in the US this week and expand globally soon after. Users will be able to adjust how much political content they see but won’t be able to block it entirely.

Adam Mosseri, head of Instagram and Threads, had previously expressed reluctance to feature political posts, favouring community-focused content like sports and fashion. However, he now claims that users have asked to see more political material. Critics, including social media experts, argue the shift is driven by changing political dynamics in the US, particularly with Donald Trump’s imminent return to the White House.

While some users have welcomed Meta’s stance on free speech, many worry it could amplify misinformation and hate speech. Experts also caution that marginalised groups may face increased harm due to fewer content moderation measures. The changes could also push discontented users toward rival platforms like Bluesky, raising questions about Meta’s long-term strategy.

Apple faces backlash over AI-generated news errors

Apple is facing mounting criticism over its AI-generated news summaries, which have produced inaccurate and misleading alerts on its latest iPhones. Media organisations, including the BBC, have raised concerns that the feature, designed to summarise breaking news notifications, has fabricated details that contradict original reports. The National Union of Journalists and Reporters Without Borders have called for the product’s removal, warning it risks spreading misinformation at a time when trust in news is already fragile.

High-profile errors have fuelled demands for urgent action. In December, an Apple AI summary falsely claimed that a murder suspect had taken his own life, while another inaccurately announced Luke Littler as the winner of the PDC World Darts Championship before the event had even begun. Apple has pledged to update the feature to make it clearer that summaries are AI-generated, but critics argue this does not address the root problem.

Journalism watchdogs and industry experts have warned that AI-driven news aggregation remains unreliable. The BBC stressed that the errors could undermine public trust, while former Guardian editor Alan Rusbridger described Apple’s technology as “out of control”. Similar concerns have been raised over generative AI tools from other tech firms, with Google’s AI-powered search summaries also facing scrutiny for producing incorrect responses. Apple insists the feature remains optional and is still in beta testing, with further improvements expected in an upcoming software update.

Israeli spyware deal reports denied by US and Israel

Officials from the United States and Israel have refuted claims of approving the sale of Israeli spyware firm Paragon to Florida-based AE Industrial Partners. Reports of the transaction surfaced in Israeli media, suggesting both governments had greenlit the deal, but US and Israeli representatives dismissed these assertions.

The White House clarified that the sale was a private transaction with no formal US approval, while Israel’s Defence Ministry stated it was still evaluating the deal. Paragon, linked to former Israeli intelligence officers, has faced scrutiny in the US market, including a paused $2 million contract with ICE.

The alleged acquisition has drawn attention due to Paragon’s ties to national security and controversial surveillance software. Neither AE nor Paragon has commented on the situation.

Protecting journalists online with global solutions from IGF 2024

The safety of journalists online took centre stage during an open forum at IGF 2024 in Riyadh. Experts and audience members shared insights on the growing threats faced by journalists globally, including online harassment, surveillance, and censorship. Discussions underscored how these challenges disproportionately affect women journalists and individuals from marginalised communities.

Panellists such as Isabelle Lois from Switzerland and Bruna Martins dos Santos from Brazil emphasised the urgent need for stronger legal frameworks and better implementation of existing laws. Digital platforms were urged to increase accountability for online attacks, while media organisations were encouraged to provide robust support systems for their journalists. Gulalai Khan from Pakistan highlighted the importance of digital literacy and ethical reporting in navigating online threats.

Debates also addressed the evolving definition of journalism in the digital age, questioning whether protections should extend to citizen journalists and content creators. Giulia Lucchese from the Council of Europe pointed to positive initiatives like Switzerland’s National Action Plan and European campaigns on journalist safety as steps in the right direction. However, participants agreed on the need for greater international collaboration to amplify these efforts.

The session concluded with a call for multi-stakeholder approaches to foster trust and ensure journalist safety. Speakers stressed that governments, tech companies, and civil society must work together to protect press freedom in democratic societies. Overall, the forum highlighted both ongoing challenges and the importance of collective action to safeguard journalists in an increasingly digital world.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Experts at IGF 2024 address the dual role of AI in elections, emphasising empowerment and challenges

At IGF 2024, panellists explored AI’s role in elections, its potential for both empowerment and disruption, and the challenges it poses to democratic processes. Moderator Tapani Tarvainen led the discussion with contributions from Ayobangira Safari Nshuti, Roxana Radu, Babu Ram Aryal, and other experts.

Speakers noted that AI had been primarily used for self-promotion in campaigns, helping smaller candidates compete with limited resources. Roxana Radu highlighted AI’s positive role in voter outreach in India but warned of risks such as disinformation and public opinion manipulation. Ayobangira Safari Nshuti pointed to algorithmic biases and transparency issues in platforms as critical concerns, citing a recent case in Romania where AI-enabled manipulation disrupted an election.

Accountability of social media platforms became a focal point. Platforms increasingly rely on AI for content moderation, but its effectiveness in languages with a limited online presence remains inadequate. Babu Ram Aryal stressed the need for stronger oversight, particularly in multilingual nations, while Dennis Redeker underscored the challenges of balancing regulation with free speech.

Panellists called for holistic solutions to safeguard democracy. Suggestions included enhancing platform transparency, implementing robust digital literacy programmes, and addressing social factors like poverty that exacerbate misinformation. Nana, an AI ethics specialist, advocated for proactive governance to adapt electoral institutions to technological realities.

The session concluded with a recognition that AI’s role in elections will continue to evolve. Panellists urged collaborative efforts between governments, civil society, and technology companies to ensure election integrity and maintain public trust in democratic systems.

Human rights concerns over UN Cybercrime Treaty raised at IGF 2024

A panel discussion at the Internet Governance Forum (IGF) raised serious concerns over the UN Cybercrime Treaty and its potential to undermine human rights. Experts from organisations such as Human Rights Watch and the Electronic Frontier Foundation criticised the treaty’s broad scope and lack of clear safeguards for individual freedoms. They warned that the treaty’s vague language, particularly around what constitutes a ‘serious crime,’ could empower authoritarian regimes to exploit its provisions for surveillance and the repression of dissent.

Speakers such as Joey Shea from Human Rights Watch and Lina al-Hathloul, a Saudi human rights defender, pointed out the risks posed by the treaty’s expansive investigative powers, which extend beyond cybercrimes to any crimes defined by domestic law. Such flexibility could force countries to assist in prosecuting acts that are not crimes within their own borders. They also highlighted the treaty’s weak privacy protections, which could jeopardise encryption standards and endanger cybersecurity researchers.

Deborah Brown from Human Rights Watch and Veridiana Alimonti of the Electronic Frontier Foundation shared examples from Saudi Arabia and Latin America, where existing cybercrime and anti-terrorism laws have already been used to target journalists and activists. The panellists expressed concern that the treaty could exacerbate these abuses globally, especially for cybersecurity professionals and civil society.

Fionnuala Ni Aolain, a former UN Special Rapporteur on counterterrorism and human rights, emphasised that the treaty’s provisions could lead to criminalising the vital work of cybersecurity researchers. She joined other experts in urging policymakers and industry leaders to resist ratification in its current form. They called for upcoming protocol negotiations to address these human rights gaps and for greater involvement of civil society voices to prevent the treaty from becoming a tool for transnational repression.