Digital futures at a crossroads: aligning WSIS and the Global Digital Compact

The path toward a cohesive digital future was the central theme at the ‘From WSIS to GDC: Harmonising Strategies Towards Coordination’ session held at the Internet Governance Forum (IGF) 2024 in Riyadh. Experts, policymakers, and civil society representatives converged to address how the World Summit on the Information Society (WSIS) framework and the Global Digital Compact (GDC) can work in unison. At the heart of the debate lay two critical imperatives: coordination and avoiding fragmentation.

Panellists, including Jorge Cancio of the Swiss Government and David Fairchild of Canada, underscored the IGF’s central role as a multistakeholder platform for dialogue. However, concerns about its diminishing mandate and inadequate funding surfaced repeatedly. Fairchild warned of ‘a centralisation of digital governance processes,’ hinting at geopolitical forces that could undermine inclusive, global cooperation. Cancio urged an updated ‘Swiss Army knife’ approach to WSIS, where existing mechanisms, like the IGF, are strengthened rather than duplicated.

The session also highlighted challenges that have emerged since the WSIS process began in 2003. Amrita Choudhury from the MAG and Anita Gurumurthy of IT for Change emphasised that AI, data governance, and widening digital divides demand urgent attention. Gurumurthy lamented that ‘neo-illiberalism,’ characterised by corporate greed and authoritarian politics, threatens the vision of a people-centred information society. Meanwhile, Gitanjali Sah of the ITU reaffirmed WSIS’s achievements, pointing to successes like digital inclusion through telecentres and distance learning.

Amid these reflections, the IGF emerged as an essential venue for harmonising WSIS and GDC goals. Panellists, including Nigel Cassimire from the Caribbean Telecommunications Union, proposed that the IGF develop performance targets to implement GDC commitments effectively. Yet, as Jason Pielemeier of the Global Network Initiative cautioned, the IGF faces threats of co-optation in settings hostile to open dialogue, which ‘weakens its strength.’

Despite these tensions, hope remained for creative solutions and renewed international solidarity. The session concluded with a call to refocus on WSIS’s original principles—ensuring no one is left behind in the digital future. As Anita Gurumurthy aptly summarised: ‘We reject bad politics and poor economics. What we need is a solidarity vision of interdependence and mutual reciprocity.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Safeguarding democracy: Strategies to combat disinformation in electoral contexts

Disinformation during elections is a growing threat to democracy and human rights, according to a global panel of experts who convened at the Internet Governance Forum 2024 in Riyadh to discuss this issue. Giovanni Zagni, director of Pagella Politica and Facta.news, highlighted the European Union’s approach with its voluntary Code of Practice on Disinformation, which has 34 signatories, including major tech platforms.

The collaborative strategy contrasts with stricter regulatory models, raising questions about the balance between platform accountability and government intervention. Juliano Cappi from the Brazilian Internet Steering Committee underscored the importance of digital sovereignty and public infrastructure, introducing concepts like ‘systemic risk’ and ‘duty of care’ in platform regulation.

Collaboration among stakeholders was a recurring theme, with experts stressing the need for partnerships between fact-checkers, tech companies, and civil society organisations. Nazar Nicholas Kirama, president of the Internet Society Tanzania (ISOC Tanzania), called for platforms to adopt transparent algorithms and assume greater accountability, comparing their influence to that of electoral commissions.

However, Cappi warned about the risks of bias in platforms’ business models and advocated for a ‘follow the money’ approach to trace disinformation campaigns. Aiesha Adnan, co-founder of Women in Tech Maldives, and Poncelet Ileleji from the Information Technology Association of the Gambia emphasised media literacy and grassroots empowerment as crucial tools, calling for initiatives like UNESCO-backed fact-checking and community radio programmes to counter misinformation.

The tension between regulation and free speech was a central point of debate. While some participants, like Zagni, noted the challenge of addressing disinformation without infringing on freedoms, others warned against government overreach.

Adnan highlighted smaller nations’ unique challenges, urging for culturally sensitive interventions and localised strategies. The session closed with a call for global cooperation and continued dialogue to safeguard democratic processes while respecting diverse regional contexts and fundamental rights.


Serie A takes action against piracy with Meta

Serie A has partnered with Meta to combat illegal live streaming of football matches, aiming to protect its broadcasting rights. Under the agreement, Serie A will gain access to Meta’s tools for real-time detection and swift removal of unauthorised streams on Facebook and Instagram.

Broadcasting revenue remains vital for Serie A clubs, including Inter Milan and Juventus, with €4.5 billion secured through deals with DAZN and Sky until 2029. The league’s CEO urged other platforms to follow Meta’s lead in fighting piracy.

Italian authorities have ramped up anti-piracy measures, passing laws that enable swift takedowns of illegal streams. Earlier this month, police dismantled a network with 22 million users, highlighting the scale of the issue.

UK’s online safety rules take effect

Social media platforms operating in the UK have been given until March 2025 to identify and mitigate illegal content on their services or risk fines of up to 10% of their global revenue. The warning comes as the Online Safety Act (OSA) begins to take effect, with Ofcom, the regulator, releasing final guidelines on tackling harmful material, including child sexual abuse, self-harm promotion, and extreme violence.

Dame Melanie Dawes, Ofcom’s chief, described this as the industry’s “last chance” to reform. “If platforms fail to act, we will take enforcement measures,” she warned, adding that public pressure for stricter action could grow. Companies must conduct risk assessments by March, focusing on how such material appears and devising ways to block its spread.

While hailed as a step forward, critics argue the law leaves gaps in child safety measures. The Molly Rose Foundation and NSPCC have expressed concerns about the lack of targeted action on harmful content in private messaging and self-harm imagery. Despite these criticisms, the UK government views the Act as a reset of societal expectations for tech firms, aiming to ensure a safer online environment.

Experts at the IGF address the growing threat of misinformation in the digital age

In an Internet Governance Forum panel in Riyadh, Saudi Arabia, titled ‘Navigating the misinformation maze: Strategic cooperation for a trusted digital future’, moderated by Italian journalist Barbara Carfagna, experts from diverse sectors examined the escalating problem of misinformation and explored solutions for the digital era. Esam Alwagait, Director of the Saudi Data and AI Authority’s National Information Center, identified social media as the primary driver of false information, with algorithms amplifying sensational content.

Natalia Gherman of the UN Counter-Terrorism Committee noted the danger of unmoderated online spaces, while Mohammed Ali Al-Qaed of Bahrain’s Information and Government Authority emphasised the role of influencers in spreading false narratives. Khaled Mansour, a Meta Oversight Board member, pointed out that misinformation can be deadly, stating, ‘Misinformation kills. By spreading misinformation in conflict times from Myanmar to Sudan to Syria, this can be murderous.’

Emerging technologies like AI were highlighted as both culprits and potential solutions. Alwagait and Al-Qaed discussed how AI-driven tools could detect manipulated media and analyse linguistic patterns, and Al-Qaed went further, proposing ‘verify-by-design’ mechanisms to tag information at its source.

However, the panel warned of AI’s ability to generate convincing fake content, fuelling an arms race between creators of misinformation and its detectors. Pearse O’Donohue of the European Commission’s DG CONNECT praised the EU’s Digital Services Act as a regulatory model but questioned, ‘Who moderates the regulator?’ Meanwhile, Mansour cautioned against overreach, advocating for labelling content rather than outright removal to preserve freedom of expression.

Deemah Al-Yahya, Secretary General of the Digital Cooperation Organization, emphasised the importance of global collaboration, supported by Gherman, who called for unified strategies through international forums like the Internet Governance Forum. Al-Qaed suggested regional cooperation could strengthen smaller nations’ influence over tech platforms. The panel also stressed promoting credible information and digital literacy to empower users, with Mansour noting that fostering ‘good information’ is essential to counter misinformation at its root.

The discussion concluded with a consensus on the need for balanced, innovative solutions. Speakers called for collaborative regulatory approaches, advanced fact-checking tools, and initiatives that protect freedom of expression while tackling misinformation’s far-reaching consequences.


Google’s old search format criticised by hotels

Google has revealed that a trial of its traditional search result layout, featuring 10 blue links per page, negatively impacted both users and hotels. The test, conducted in Germany, Belgium, and Estonia, aimed to gauge the format’s viability under new EU digital regulations. The results showed users were less satisfied and took longer to find desired information, with hotel traffic dropping by over 10%.

The test was part of Google’s efforts to align with the EU’s Digital Markets Act, which prohibits favouritism towards its own services. However, the return to the older layout, implemented last month, left hotels at a disadvantage and reduced the ability of users to locate accommodations efficiently. “People had to conduct more searches and often gave up without finding what they needed,” stated Oliver Bethell, Google’s Competition Legal Director.

The trial results come as Google faces mounting pressure from price comparison websites and the European Commission. Over 20 comparison platforms have criticised Google’s compliance proposals, urging EU regulators to impose penalties. Google has indicated it will seek further guidance from the Commission to develop a suitable solution. This tension underscores the challenges tech giants face in balancing business interests with regulatory compliance and user experience, particularly in Europe’s increasingly stringent tech landscape.

Texas launches investigation into tech platforms over child safety

Texas Attorney General Ken Paxton has initiated investigations into more than a dozen technology platforms over concerns about their privacy and safety practices for minors. The platforms under scrutiny include Character.AI, a startup specialising in AI chatbots, along with social media giants like Instagram, Reddit, and Discord.

The investigations aim to determine compliance with two key Texas laws designed to protect children online. The Securing Children Online through Parental Empowerment (SCOPE) Act prohibits digital service providers from sharing or selling minors’ personal information without parental consent and mandates privacy tools for parents. The Texas Data Privacy and Security Act (TDPSA) requires companies to obtain clear consent before collecting or using data from minors.

Concerns over the impact of social media on children have grown significantly. A Harvard study found that major platforms earned an estimated $11 billion in advertising revenue from users under 18 in 2022. Experts, including US Surgeon General Vivek Murthy, have highlighted risks such as poor sleep, body image issues, and low self-esteem among young users, particularly adolescent girls.

Paxton emphasised the importance of enforcing the state’s robust data privacy laws, putting tech companies on notice. While some platforms have introduced tools to enhance teen safety and parental controls, they have not yet commented on the ongoing probes.

SEC reopens investigation into Elon Musk and Neuralink

The US Securities and Exchange Commission (SEC) has reopened its investigation into Neuralink, Elon Musk’s brain-chip startup, according to a letter shared by Musk on X, formerly known as Twitter. The letter, dated 12 December and written by Musk’s attorney Alex Spiro, also revealed that the SEC issued Musk a 48-hour deadline to settle a probe into his $44 billion takeover of Twitter or face charges. The settlement amount remains undisclosed.

Musk’s tumultuous relationship with the SEC has resurfaced amid allegations that he misled investors about Neuralink’s brain implant safety. Despite ongoing investigations, the extent to which the SEC can take action against Musk is uncertain. Musk, who also leads Tesla and SpaceX, is positioned to gain significant political leverage after investing heavily in supporting Donald Trump’s presidential campaign. Trump, in turn, has appointed Musk to a government reform task force, raising questions about potential regulatory leniency toward his ventures.

In the letter, Spiro criticised the SEC’s actions, stating Musk would not be “intimidated” and reserving his legal rights. This marks the latest in a series of clashes between Musk and the SEC, including a 2018 lawsuit over misleading Tesla-related tweets, which Musk settled by paying $20 million and stepping down as Tesla chairman. Both the SEC and Neuralink have yet to comment on the reopened investigation.

Samsung challenges India watchdog over data seizure

Samsung has filed a legal challenge against India’s Competition Commission (CCI), accusing the watchdog of unlawfully detaining employees and seizing data during a 2022 raid connected to an antitrust investigation involving Amazon and Walmart-owned Flipkart. The CCI claims Samsung colluded with the e-commerce giants to launch products exclusively online, a practice it argues violates competition laws.

In its filing with the High Court in the northern city of Chandigarh, Samsung alleged that confidential data was improperly taken from its employees during the raid and requested the return of the material. Samsung has secured an injunction to pause the CCI’s proceedings but seeks a broader ruling to prevent the use of the seized data. The CCI, in turn, has asked the Supreme Court to consolidate similar challenges by Samsung and 22 other parties, arguing that companies are attempting to derail the investigation.

The case stems from findings earlier this year that Amazon, Flipkart, and smartphone companies like Samsung engaged in anti-competitive practices by favouring select sellers and using exclusive product launches. While Amazon and Flipkart deny wrongdoing, brick-and-mortar retailers have long criticised their pricing and market strategies. Samsung, a major smartphone brand in India with a 14% market share, maintains it was wrongly implicated and cooperated only as a third party in the investigation.

New rules aim at fair payments for content in Australia

Australia’s government is set to introduce new rules requiring major tech companies to pay Australian media outlets for news content. Companies such as Meta and Google could face millions in charges if they fail to reach commercial agreements with publishers. The Assistant Treasurer emphasised that the rules aim to foster fair negotiations, with charges applying only to platforms earning over $250 million in Australian revenue.

The proposed regulations follow previous efforts to hold tech firms accountable for news content. Laws passed in 2021 required firms to compensate publishers, leading to temporary disruptions on Meta’s platforms before agreements were reached. However, Meta announced it would end those arrangements by 2024, scaling back its promotion of news globally.

The plan has drawn criticism from tech companies, who argue that most users do not access platforms for news and that publishers willingly share content for exposure. Despite these objections, Australian media organisations, including News Corp, anticipate benefits. The government’s broader efforts to regulate Big Tech include banning under-16s from social media and targeting scams.

Australia’s bold stance continues to set precedents for handling global tech giants, adding to growing international scrutiny. News publishers are optimistic about forming new commercial relationships under the proposed framework.