Ethiopian content moderators claim neglect by Meta

Former moderators for Facebook’s parent company, Meta, have accused the tech giant of disregarding threats from Ethiopian rebels after the moderators removed inflammatory content. According to court documents filed in Kenya, members of the Oromo Liberation Army (OLA) targeted moderators reviewing Facebook posts, threatening dire consequences unless the posts were reinstated. Contractors hired by Meta allegedly dismissed these concerns as fabricated before offering limited support, such as moving one exposed moderator to a safehouse.

The dispute stems from a lawsuit by 185 former moderators against Meta and two contractors, Sama and Majorel, alleging wrongful termination and blacklisting after they attempted to unionise. Moderators focusing on Ethiopia faced particularly acute risks, receiving threats that detailed their names and addresses, yet their complaints were reportedly met with inaction or suspicion. One moderator, fearing for his life, described living in constant terror of visiting family in Ethiopia.

The case has broader implications for Meta’s content moderation policies, as the company relies on third-party firms worldwide to handle disturbing and often dangerous material. In a related Kenyan lawsuit, Meta stands accused of allowing violent and hateful posts to flourish on its platform, exacerbating Ethiopia’s ongoing civil strife. While Meta, Sama, and the OLA have not commented, the allegations raise serious questions about the accountability of global tech firms in safeguarding their workers and addressing hate speech.

Parliamentarians urged to bridge the global digital divide

At the ‘IGF Parliamentary Track – Session 1’ session in Riyadh, parliamentarians, diplomats, and digital experts gathered to address persistent gaps in global digital governance. The session spotlighted two critical UN-led initiatives: the World Summit on the Information Society (WSIS) and the Global Digital Compact (GDC), underscoring their complementary roles in bridging the digital divide and addressing emerging digital challenges like AI and data governance.

Ambassador Muhammadou M.O. Kah, Chair of the Commission for Science and Technology for Development, stressed the urgency of digital inclusion. ‘Digital technologies are transforming our world at a remarkable pace, but we must confront the persistent divide,’ he said, noting that twenty years after WSIS first set out a vision for an inclusive digital society, one-third of the world’s population remains unconnected, with inequalities deepening between urban and rural areas, genders, and socioeconomic groups.

The Global Digital Compact, introduced as a ‘refresh’ of WSIS priorities, emerged as a key focus of the discussion. From the UN Tech Envoy’s Office, Isabel de Sola presented the GDC’s five pillars: affordable internet access, tackling misinformation, data governance, fostering inclusive digital economies, and ensuring safe AI implementation. De Sola emphasised, ‘We need a holistic approach. Data governance, AI, and connectivity are deeply interconnected and must work in tandem to serve society fairly.’

Sorina Teleanu, the session’s moderator and Head of Knowledge at Diplo, highlighted the need for urgent action: ‘We have the Global Digital Compact, but what’s next? It’s about implementation—how we take global commitments and turn them into real, practical solutions at national and local levels.’ She urged parliamentarians to exercise their oversight role and push for meaningful progress.

The session exposed a growing disconnect between governments and parliaments on digital policy. Several parliamentarians voiced concerns about exclusion from international processes that shape national legislation and budgets. ‘We cannot act effectively if we are not included or informed,’ a delegate from South Africa noted, calling for better integration of lawmakers into global frameworks like the GDC and WSIS.

To close these gaps, speakers proposed practical solutions, including capacity-building programs, toolkits for mapping GDC priorities locally, and stronger regional parliamentary networks. ‘Parliamentarians are closest to the people,’ Ambassador Kah reminded attendees. ‘They play a crucial role in translating global commitments into meaningful local action.’

The discussion ended with a renewed call for collaboration: greater inclusion of lawmakers, better alignment of international frameworks with local needs, and stronger efforts to bridge the digital divide. As the world approaches WSIS’s 20-year review in 2025, the path forward requires a unified, inclusive effort to ensure digital advancements reach all corners of society.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Digital futures at a crossroads: aligning WSIS and the Global Digital Compact

The path toward a cohesive digital future was the central theme at the ‘From WSIS to GDC: Harmonising Strategies Towards Coordination’ session held at the Internet Governance Forum (IGF) 2024 in Riyadh. Experts, policymakers, and civil society representatives converged to address how the World Summit on the Information Society (WSIS) framework and the Global Digital Compact (GDC) can work in unison. At the heart of the debate lay two critical imperatives: coordination and avoiding fragmentation.

Panellists, including Jorge Cancio of the Swiss Government and David Fairchild of Canada, underscored the IGF’s central role as a multistakeholder platform for dialogue. However, concerns about its diminishing mandate and inadequate funding surfaced repeatedly. Fairchild warned of ‘a centralisation of digital governance processes,’ hinting at geopolitical forces that could undermine inclusive, global cooperation. Cancio urged an updated ‘Swiss Army knife’ approach to WSIS, where existing mechanisms, like the IGF, are strengthened rather than duplicated.

The session also highlighted emerging challenges since WSIS’s 2005 inception. Amrita Choudhury from MAG and Anita Gurumurthy of IT for Change emphasised that AI, data governance, and widening digital divides demand urgent attention. Gurumurthy lamented that ‘neo-illiberalism,’ characterised by corporate greed and authoritarian politics, threatens the vision of a people-centred information society. Meanwhile, Gitanjali Sah of ITU reaffirmed WSIS’s achievements, pointing to successes like digital inclusion through telecentres and distance learning.

Amid these reflections, the IGF emerged as an essential event for harmonising WSIS and GDC goals. Panellists, including Nigel Cassimire from the Caribbean Telecommunications Union, proposed that the IGF develop performance targets to implement GDC commitments effectively. Yet, as Jason Pielemeier of the Global Network Initiative cautioned, the IGF faces threats of co-optation in settings hostile to open dialogue, which ‘weakens its strength.’

Despite these tensions, hope remained for creative solutions and renewed international solidarity. The session concluded with a call to refocus on WSIS’s original principles—ensuring no one is left behind in the digital future. As Anita Gurumurthy aptly summarised: ‘We reject bad politics and poor economics. What we need is a solidarity vision of interdependence and mutual reciprocity.’


Safeguarding democracy: Strategies to combat disinformation in electoral contexts

Disinformation during elections is a growing threat to democracy and human rights, according to a global panel of experts who convened at the Internet Governance Forum 2024 in Riyadh to discuss this issue. Giovanni Zagni, director of Pagella Politica and Facta.news, highlighted the European Union’s approach with its voluntary Code of Practice on Disinformation, which has 34 signatories, including major tech platforms.

The collaborative strategy contrasts with stricter regulatory models, raising questions about the balance between platform accountability and government intervention. Juliano Cappi from the Brazilian Internet Steering Committee underscored the importance of digital sovereignty and public infrastructure, introducing concepts like ‘systemic risk’ and ‘duty of care’ in platform regulation.

Collaboration among stakeholders was a recurring theme, with experts stressing the need for partnerships between fact-checkers, tech companies, and civil society organisations. Nazar Nicholas Kirama, president of the Internet Society Tanzania (ISOC Tanzania), called for platforms to adopt transparent algorithms and assume greater accountability, comparing their influence to that of electoral commissions.

However, Cappi warned about the risks of bias in platforms’ business models and advocated for a ‘follow the money’ approach to trace disinformation campaigns. Aiesha Adnan, co-founder of Women in Tech Maldives, and Poncelet Ileleji from the Information Technology Association of the Gambia emphasised media literacy and grassroots empowerment as crucial tools, calling for initiatives like UNESCO-backed fact-checking and community radio programs to counter misinformation.

The tension between regulation and free speech was a central point of debate. While some participants, like Zagni, noted the challenge of addressing disinformation without infringing on freedoms, others warned against government overreach.

Adnan highlighted smaller nations’ unique challenges, urging culturally sensitive interventions and localised strategies. The session closed with a call for global cooperation and continued dialogue to safeguard democratic processes while respecting diverse regional contexts and fundamental rights.


Serie A takes action against piracy with Meta

Serie A has partnered with Meta to combat illegal live streaming of football matches, aiming to protect its broadcasting rights. Under the agreement, Serie A will gain access to Meta’s tools for real-time detection and swift removal of unauthorised streams on Facebook and Instagram.

Broadcasting revenue remains vital for Serie A clubs, including Inter Milan and Juventus, with €4.5 billion secured through deals with DAZN and Sky until 2029. The league’s CEO urged other platforms to follow Meta’s lead in fighting piracy.

Italian authorities have ramped up anti-piracy measures, passing laws that enable swift takedowns of illegal streams. Earlier this month, police dismantled a network with 22 million users, highlighting the scale of the issue.

UK’s online safety rules take effect

Social media platforms operating in the UK have been given until March 2025 to identify and mitigate illegal content on their services or risk fines of up to 10% of their global revenue. The warning comes as the Online Safety Act (OSA) begins to take effect, with Ofcom, the regulator, releasing final guidelines on tackling harmful material, including child sexual abuse, self-harm promotion, and extreme violence.

Dame Melanie Dawes, Ofcom’s chief, described this as the industry’s “last chance” to reform. “If platforms fail to act, we will take enforcement measures,” she warned, adding that public pressure for stricter action could grow. Companies must conduct risk assessments by March, focusing on how such material appears and devising ways to block its spread.

While hailed as a step forward, critics argue the law leaves gaps in child safety measures. The Molly Rose Foundation and NSPCC have expressed concerns about the lack of targeted action on harmful content in private messaging and self-harm imagery. Despite these criticisms, the UK government views the Act as a reset of societal expectations for tech firms, aiming to ensure a safer online environment.

Experts at the IGF address the growing threat of misinformation in the digital age

At an Internet Governance Forum panel in Riyadh, Saudi Arabia, titled ‘Navigating the misinformation maze: Strategic cooperation for a trusted digital future’ and moderated by Italian journalist Barbara Carfagna, experts from diverse sectors examined the escalating problem of misinformation and explored solutions for the digital era. Esam Alwagait, Director of the Saudi Data and AI Authority’s National Information Center, identified social media as the primary driver of false information, with algorithms amplifying sensational content.

Natalia Gherman of the UN Counter-Terrorism Committee noted the danger of unmoderated online spaces, while Mohammed Ali Al-Qaed of Bahrain’s Information and Government Authority emphasised the role of influencers in spreading false narratives. Khaled Mansour, a Meta Oversight Board member, pointed out that misinformation can be deadly, stating, ‘Misinformation kills. By spreading misinformation in conflict times from Myanmar to Sudan to Syria, this can be murderous.’

Emerging technologies like AI were highlighted as both culprits and potential solutions. Alwagait and Al-Qaed discussed how AI-driven tools could detect manipulated media and analyse linguistic patterns, while Al-Qaed proposed ‘verify-by-design’ mechanisms to tag information at its source.

However, the panel warned of AI’s ability to generate convincing fake content, fuelling an arms race between creators of misinformation and its detectors. Pearse O’Donohue of the European Commission’s DigiConnect Directorate praised the EU’s Digital Services Act as a regulatory model but questioned, ‘Who moderates the regulator?’ Meanwhile, Mansour cautioned against overreach, advocating for labelling content rather than outright removal to preserve freedom of expression.

Deemah Al-Yahya, Secretary General of the Digital Cooperation Organization, emphasised the importance of global collaboration, supported by Gherman, who called for unified strategies through international forums like the Internet Governance Forum. Al-Qaed suggested regional cooperation could strengthen smaller nations’ influence over tech platforms. The panel also stressed promoting credible information and digital literacy to empower users, with Mansour noting that fostering ‘good information’ is essential to counter misinformation at its root.

The discussion concluded with a consensus on the need for balanced, innovative solutions. Speakers called for collaborative regulatory approaches, advanced fact-checking tools, and initiatives that protect freedom of expression while tackling misinformation’s far-reaching consequences.


Google’s old search format criticised by hotels

Google has revealed that a trial of its traditional search result layout, featuring 10 blue links per page, negatively impacted both users and hotels. The test, conducted in Germany, Belgium, and Estonia, aimed to gauge the format’s viability under new EU digital regulations. The results showed users were less satisfied and took longer to find desired information, with hotel traffic dropping by over 10%.

The test was part of Google’s efforts to align with the EU’s Digital Markets Act, which prohibits favouritism towards its own services. However, the return to the older layout, implemented last month, left hotels at a disadvantage and reduced the ability of users to locate accommodations efficiently. “People had to conduct more searches and often gave up without finding what they needed,” stated Oliver Bethell, Google’s Competition Legal Director.

The trial results come as Google faces mounting pressure from price comparison websites and the European Commission. Over 20 comparison platforms have criticised Google’s compliance proposals, urging EU regulators to impose penalties. Google has indicated it will seek further guidance from the Commission to develop a suitable solution. This tension underscores the challenges tech giants face in balancing business interests with regulatory compliance and user experience, particularly in Europe’s increasingly stringent tech landscape.

Texas launches investigation into tech platforms over child safety

Texas Attorney General Ken Paxton has initiated investigations into more than a dozen technology platforms over concerns about their privacy and safety practices for minors. The platforms under scrutiny include Character.AI, a startup specialising in AI chatbots, along with social media giants like Instagram, Reddit, and Discord.

The investigations aim to determine compliance with two key Texas laws designed to protect children online. The Securing Children Online through Parental Empowerment (SCOPE) Act prohibits digital service providers from sharing or selling minors’ personal information without parental consent and mandates privacy tools for parents. The Texas Data Privacy and Security Act (TDPSA) requires companies to obtain clear consent before collecting or using data from minors.

Concerns over the impact of social media on children have grown significantly. A Harvard study found that major platforms earned an estimated $11 billion in advertising revenue from users under 18 in 2022. Experts, including US Surgeon General Vivek Murthy, have highlighted risks such as poor sleep, body image issues, and low self-esteem among young users, particularly adolescent girls.

Paxton emphasised the importance of enforcing the state’s robust data privacy laws, putting tech companies on notice. While some platforms have introduced tools to enhance teen safety and parental controls, they have not yet commented on the ongoing probes.

SEC reopens investigation into Elon Musk and Neuralink

The US Securities and Exchange Commission (SEC) has reopened its investigation into Neuralink, Elon Musk’s brain-chip startup, according to a letter shared by Musk on X, formerly known as Twitter. The letter, dated Dec. 12 and written by Musk’s attorney Alex Spiro, also revealed that the SEC issued Musk a 48-hour deadline to settle a probe into his $44 billion takeover of Twitter or face charges. The settlement amount remains undisclosed.

Musk’s tumultuous relationship with the SEC has resurfaced amid allegations that he misled investors about Neuralink’s brain implant safety. Despite ongoing investigations, the extent to which the SEC can take action against Musk is uncertain. Musk, who also leads Tesla and SpaceX, is positioned to gain significant political leverage after investing heavily in supporting Donald Trump’s presidential campaign. Trump, in turn, has appointed Musk to a government reform task force, raising questions about potential regulatory leniency toward his ventures.

In the letter, Spiro criticised the SEC’s actions, stating Musk would not be “intimidated” and reserving his legal rights. This marks the latest in a series of clashes between Musk and the SEC, including a 2018 lawsuit over misleading Tesla-related tweets, which Musk settled by paying $20 million and stepping down as Tesla chairman. Both the SEC and Neuralink have yet to comment on the reopened investigation.