Meta apps experience widespread outages across the United States

Facebook, Instagram, and WhatsApp experienced significant outages across the United States on Wednesday, leaving thousands of users unable to access the popular platforms. Outage tracking site Downdetector recorded over 27,000 reports for Facebook, 28,000 for Instagram, and more than 1,000 for WhatsApp. The disruptions began around 12:50 p.m. ET, with users encountering error messages such as ‘something went wrong.’

Meta acknowledged the issue in a post on X, and a spokesperson apologised for the inconvenience, saying teams were working to restore services as quickly as possible.

Frustration spread across X, with many users questioning the reliability of Meta’s platforms. Outages like this are not unprecedented: earlier this year, Meta faced a similar global disruption that affected hundreds of thousands of users, and in October its apps were briefly offline due to technical issues, though those were resolved within an hour.

Meta’s platforms are among the most widely used social media and communication tools globally. The recurring technical problems underscore the difficulty of keeping such massive online infrastructure reliable.

TikTok’s request to temporarily halt US ban rejected by appeals court

TikTok’s deadline is fast approaching as its Chinese parent company, ByteDance, prepares to take its case to the US Supreme Court. A federal appeals court on Friday rejected TikTok’s request for more time to challenge a law requiring ByteDance to divest TikTok’s US operations by 19 January or face a nationwide ban. The platform, used by 170 million Americans, now has only weeks to seek intervention from the Supreme Court to avoid a shutdown that would reshape the digital landscape.

The US government argues that ByteDance’s control over TikTok poses a persistent national security threat, claiming the app’s ties to China could expose American data to misuse. TikTok strongly disputes these assertions, saying that US user data and its content recommendation system are hosted on Oracle servers in the United States and that moderation decisions are made domestically. A TikTok spokesperson emphasised the platform’s intention to fight for free speech, pointing to the Supreme Court’s history of defending such rights.

The ruling leaves TikTok’s immediate fate uncertain, placing the decision first in the hands of President Joe Biden, who could grant a 90-day extension if progress toward a divestiture is evident. The matter will then pass to President-elect Donald Trump, who takes office just one day after the 19 January deadline. Despite his efforts to ban TikTok in 2020, Trump has recently opposed the current law, citing concerns that it would benefit rival platforms such as Facebook.

Adding to the urgency, US lawmakers have called on Apple and Google to prepare to remove TikTok from their app stores if ByteDance fails to comply. As the clock ticks, TikTok’s battle with the US government highlights a broader conflict over technology, data privacy, and national security. The legal outcome could force millions of users and businesses to rethink their digital strategies in a post-TikTok world.

Krispy Kreme hit by IT disruption affecting US online orders

Krispy Kreme has reported a cybersecurity incident that disrupted online ordering systems across the United States. The doughnut chain discovered the unauthorised activity on 29 November and immediately launched an investigation with external cybersecurity experts.

While the company’s stores remain open for in-person orders, it warned that revenue losses from digital sales could materially impact its financial results. Shares of Krispy Kreme fell by around 2% in premarket trading following the announcement.

The company said it is actively working to mitigate the effects of the incident while maintaining operations at its global locations.

Serie A takes action against piracy with Meta

Serie A has partnered with Meta to combat illegal live streaming of football matches, aiming to protect its broadcasting rights. Under the agreement, Serie A will gain access to Meta’s tools for real-time detection and swift removal of unauthorised streams on Facebook and Instagram.

Broadcasting revenue remains vital for Serie A clubs, including Inter Milan and Juventus, with €4.5 billion secured through deals with DAZN and Sky until 2029. The league’s CEO urged other platforms to follow Meta’s lead in fighting piracy.

Italian authorities have ramped up anti-piracy measures, passing laws that enable swift takedowns of illegal streams. Earlier this month, police dismantled a network with 22 million users, highlighting the scale of the issue.

IGF 2024 panel tackles global digital identity challenges

The 19th Internet Governance Forum (IGF 2024) in Riyadh, Saudi Arabia, brought together a distinguished panel to address global challenges and opportunities in developing trusted digital identity systems. Moderated by Shivani Thapa, the session featured insights from Bandar Al-Mashari, Emma Theofelus, Siim Sikkut, Sangbo Kim, Kurt Lindqvist, and other notable speakers.

The discussion focused on building frameworks for trusted digital identities, emphasising their role as critical infrastructure for digital transformation. Bandar Al-Mashari, Saudi Arabia’s Assistant Minister of Interior for Technology Affairs, highlighted the Kingdom’s innovative efforts, while Namibia’s Minister of Information, Emma Theofelus, stressed the importance of inclusivity and addressing regional needs.

The panellists examined the balance between enhanced security and privacy protection. Siim Sikkut, Managing Partner of Digital Nations, underscored the value of independent oversight and core principles to maintain trust. Emerging technologies like blockchain, biometrics, and artificial intelligence were recognised for their potential impact, though caution was urged against uncritical adoption.

Barriers to international cooperation, including the digital divide, infrastructure gaps, and the complexity of global systems, were addressed. Sangbo Kim of the World Bank shared insights on fostering collaboration across regions, while Kurt Lindqvist, CEO of ICANN, highlighted the need for a shared vision in navigating differing national priorities.

Speakers advocated for a phased approach to implementation, allowing countries to progress at their own pace while drawing lessons from successful initiatives, such as those in international travel and telecommunications. The call for collaboration was echoed by Prince Bandar bin Abdullah Al-Mashari, who emphasised Saudi Arabia’s commitment to advancing global solutions.

The discussion concluded on an optimistic note, with participants, including a contributor identified as Fatma, converging on a shared vision of digital identity as a tool for accelerating inclusion and fostering global trust. The panellists agreed that a unified approach, guided by innovation and respect for privacy, is vital to building secure and effective digital identity systems worldwide.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Experts at the IGF address the growing threat of misinformation in the digital age

At an Internet Governance Forum panel in Riyadh, Saudi Arabia, titled ‘Navigating the misinformation maze: Strategic cooperation for a trusted digital future’ and moderated by Italian journalist Barbara Carfagna, experts from diverse sectors examined the escalating problem of misinformation and explored solutions for the digital era. Esam Alwagait, Director of the Saudi Data and AI Authority’s National Information Center, identified social media as the primary driver of false information, with algorithms amplifying sensational content.

Natalia Gherman of the UN Counter-Terrorism Committee noted the danger of unmoderated online spaces, while Mohammed Ali Al-Qaed of Bahrain’s Information and Government Authority emphasised the role of influencers in spreading false narratives. Khaled Mansour, a Meta Oversight Board member, pointed out that misinformation can be deadly, stating, ‘Misinformation kills. By spreading misinformation in conflict times from Myanmar to Sudan to Syria, this can be murderous.’

Emerging technologies like AI were highlighted as both culprits and potential solutions. Alwagait and Al-Qaed discussed how AI-driven tools could detect manipulated media and analyse linguistic patterns, with Al-Qaed also proposing ‘verify-by-design’ mechanisms to tag information at its source.

However, the panel warned of AI’s ability to generate convincing fake content, fuelling an arms race between creators of misinformation and its detectors. Pearse O’Donohue of the European Commission’s DG CONNECT praised the EU’s Digital Services Act as a regulatory model but questioned, ‘Who moderates the regulator?’ Meanwhile, Mansour cautioned against overreach, advocating for labelling content rather than outright removal to preserve freedom of expression.

Deemah Al-Yahya, Secretary General of the Digital Cooperation Organization, emphasised the importance of global collaboration, supported by Gherman, who called for unified strategies through international forums like the Internet Governance Forum. Al-Qaed suggested regional cooperation could strengthen smaller nations’ influence over tech platforms. The panel also stressed promoting credible information and digital literacy to empower users, with Mansour noting that fostering ‘good information’ is essential to counter misinformation at its root.

The discussion concluded with a consensus on the need for balanced, innovative solutions. Speakers called for collaborative regulatory approaches, advanced fact-checking tools, and initiatives that protect freedom of expression while tackling misinformation’s far-reaching consequences.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Texas launches investigation into tech platforms over child safety

Texas Attorney General Ken Paxton has initiated investigations into more than a dozen technology platforms over concerns about their privacy and safety practices for minors. The platforms under scrutiny include Character.AI, a startup specialising in AI chatbots, along with social media giants like Instagram, Reddit, and Discord.

The investigations aim to determine compliance with two key Texas laws designed to protect children online. The Securing Children Online through Parental Empowerment (SCOPE) Act prohibits digital service providers from sharing or selling minors’ personal information without parental consent and mandates privacy tools for parents. The Texas Data Privacy and Security Act (TDPSA) requires companies to obtain clear consent before collecting or using data from minors.

Concerns over the impact of social media on children have grown significantly. A Harvard study found that major platforms earned an estimated $11 billion in advertising revenue from users under 18 in 2022. Experts, including US Surgeon General Vivek Murthy, have highlighted risks such as poor sleep, body image issues, and low self-esteem among young users, particularly adolescent girls.

Paxton emphasised the importance of enforcing the state’s robust data privacy laws, putting tech companies on notice. While some platforms have introduced tools to enhance teen safety and parental controls, they have not yet commented on the ongoing probes.

BeReal faces privacy complaint over tracking practices

BeReal, the selfie-sharing app acquired by French mobile games publisher Voodoo earlier this year, is under scrutiny for allegedly violating European data protection rules. A privacy complaint filed by Noyb, a European privacy rights organisation, accuses the app of using manipulative ‘dark patterns’ to coerce users into consenting to ad tracking, a tactic that may breach the General Data Protection Regulation (GDPR).

The controversy centres on a consent banner introduced in July 2024, which appears to offer users a straightforward choice to accept or refuse tracking. However, Noyb argues that users who decline tracking face daily pop-ups when they try to post, while those who consent are spared further interruptions. This practice, Noyb asserts, pressures users into compliance, undermining the GDPR’s requirement that consent be ‘freely given.’

The complaint has been filed with France’s data protection authority, CNIL, and demands that BeReal revise its consent process to comply with GDPR. It also calls for any improperly obtained data to be deleted and suggests a fine for the alleged violations. BeReal’s parent company, Voodoo, has yet to comment on the complaint.

This case highlights growing concerns over dark patterns in social media apps, with regulators emphasising the need for fair and transparent consent mechanisms in line with user privacy rights.

Major US telecom hack prompts security push after Salt Typhoon attack

Lawmakers have called for urgent measures to strengthen US telecommunications security following a massive cyberattack linked to China. The hacking campaign, referred to as Salt Typhoon, targeted American telecom companies, compromising vast amounts of metadata and call records. Federal agencies have briefed Congress on the incident, which officials say could be the largest telecom breach in US history.

Senator Ben Ray Luján described the hack as a wake-up call, urging the full implementation of federal recommendations to secure networks. Senator Ted Cruz warned of future threats, emphasising the need to close vulnerabilities in critical infrastructure. Debate also surfaced over the role of offensive cybersecurity measures, with Senator Dan Sullivan questioning whether deterrence efforts are adequate.

The White House reported that at least eight telecommunications firms were affected, with significant data theft. In response, Federal Communications Commission Chairwoman Jessica Rosenworcel proposed annual cybersecurity certifications for telecom companies. Efforts to replace insecure Chinese-made equipment in US networks continue, but funding shortfalls have hampered progress.

China has dismissed the allegations, claiming opposition to all forms of cybercrime. However, US officials have cited evidence of data theft involving companies like Verizon, AT&T, and Lumen. Congress is set to vote on a defence bill allocating $3.1 billion to remove and replace vulnerable telecom hardware.

AI safeguards prove hard to define

Policymakers seeking to regulate AI face an uphill battle as the science evolves faster than safeguards can be devised. Elizabeth Kelly, director of the US Artificial Intelligence Safety Institute, highlighted challenges such as ‘jailbreaks’ that bypass AI security measures and the ease of tampering with digital watermarks meant to identify AI-generated content. Speaking at the Reuters NEXT conference, Kelly acknowledged the difficulty in establishing best practices without clear evidence of their effectiveness.

The US AI Safety Institute, launched under the Biden administration, is collaborating with academic, industry, and civil society partners to address these issues. Kelly emphasised that AI safety transcends political divisions, calling it a ‘fundamentally bipartisan issue’ amid the upcoming transition to Donald Trump’s presidency. The institute recently hosted a global meeting in San Francisco, bringing together safety bodies from 10 countries to develop interoperable tests for AI systems.

Kelly described the gathering as a convergence of technical experts focused on practical solutions rather than typical diplomatic formalities. While the challenges remain significant, the emphasis on global cooperation and expertise offers a promising path forward.