Balancing innovation and oversight: AI’s future requires shared governance

On day two of IGF 2024 in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dilemmas of the AI age: how to foster innovation in large-scale AI systems while ensuring ethical governance and regulation. The session ‘Researching at the frontier: Insights from the private sector in developing large-scale AI systems’ reflected the urgency of navigating AI’s transformative power without losing sight of privacy, fairness, and societal impact.

Ivana Bartoletti, Chief Privacy and AI Governance Officer at Wipro, called on governments to better use existing privacy and data protection laws rather than rush into new AI-specific legislation. ‘AI doesn’t exist in isolation. Privacy laws, consumer rights, and anti-discrimination frameworks already apply,’ she said, stressing the need for ‘privacy by design’ to protect individual freedoms at every stage of AI development.

Basma Ammari from Meta added a private-sector perspective, advocating for a risk-based and principles-driven regulatory approach. Highlighting Meta’s open-source strategy for its large language models, Ammari explained, ‘More diverse global input strips biases and makes AI systems fairer and more representative.’ She added that collaboration, rather than heavy-handed regulation, is key to safeguarding innovation.

Another expert, Fuad Siddiqui, EY’s Emerging Tech Leader, introduced the concept of an ‘intelligence grid,’ likening AI infrastructure to electricity networks. He detailed AI’s potential to address real-world challenges, citing applications in agriculture and energy sectors that improve productivity and reduce environmental impacts. ‘AI must be embedded into resilient national strategies that balance innovation and sovereignty,’ Siddiqui noted.

Parliamentarians played a central role in the discussion, raising concerns about AI’s societal impacts, particularly on jobs and education. ‘Legislators face a steep learning curve in AI governance,’ remarked Silvia Dinica, a Romanian senator with a background in mathematics. Calls emerged for upskilling initiatives and AI-driven tools to support legislative processes, with private-sector partnerships seen as crucial to addressing workforce disruption.

The debate over AI regulation remains unsettled, but a consensus emerged on transparency, fairness, and accountability. Panellists urged parliamentarians to define national priorities, invest in research on algorithm validation, and work with private stakeholders to create adaptable governance frameworks. As Bartoletti aptly summarised, ‘The future of AI is not just technological—it’s about the values we choose to protect.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Messaging app Viber blocked in Russia

Russian authorities have blocked access to the Viber messaging app, citing violations of rules aimed at curbing terrorism, extremism, and drug-related activities. The decision was announced by Roskomnadzor, the country’s communications regulator, marking the latest action in a series of restrictions on social media platforms.

Viber, owned by Japan’s Rakuten Group, had been a vocal opponent of Russian disinformation. Hiroshi Mikitani, Rakuten’s chief executive, previously described the app as a tool to combat propaganda, stating that the platform took a firm stance against fake news. However, Rakuten has yet to respond to the block.

This development comes amidst an ongoing digital crackdown in Russia, which has targeted various platforms perceived as threats to state narratives. Critics argue that such measures stifle free communication and independent information sharing. Viber now joins the list of restricted apps as Russia intensifies its grip on online spaces.

Britain enforces new online safety rules for social media platforms

Britain’s new online safety regime officially took effect on Monday, compelling social media platforms like Facebook and TikTok to combat criminal activity and prioritise safer design. Media regulator Ofcom introduced the first codes of practice aimed at tackling illegal harms, including child sexual abuse and content encouraging suicide. Platforms have until 16 March 2025 to assess the risks of harmful content and implement measures like enhanced moderation, easier reporting, and built-in safety tests.

Ofcom’s Chief Executive, Melanie Dawes, emphasised that tech companies are now under scrutiny to meet strict safety standards. Failure to comply after the deadline could result in fines of up to £18 million ($22.3 million) or 10% of a company’s global revenue, whichever is greater. Britain’s Technology Secretary Peter Kyle described the new rules as a significant shift in online safety, pledging full support for regulatory enforcement, including potential site blocks.

The Online Safety Act, enacted last year, sets rigorous requirements for platforms to protect children and remove illegal content. High-risk sites must employ automated tools like hash-matching to detect child sexual abuse material. More safety regulations are expected in the first half of 2025, marking a major step in the UK’s fight for safer online spaces.
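The hash-matching the codes of practice refer to can be illustrated in miniature. Production systems use perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding, with blocklists maintained by child-protection bodies; the sketch below substitutes plain SHA-256 digests and a made-up blocklist purely to show the compare-against-known-hashes flow.

```python
import hashlib

# Simplified illustration of hash-matching. Real deployments use perceptual
# hashes (e.g. PhotoDNA) rather than exact digests, and blocklists curated
# by organisations such as the IWF; everything here is hypothetical.

def file_digest(data: bytes) -> str:
    """Hex SHA-256 digest of the raw upload bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_blocklist(data: bytes, blocklist: set[str]) -> bool:
    """Flag an upload whose digest appears in the known-material list."""
    return file_digest(data) in blocklist

# Hypothetical blocklist seeded with one known item.
blocklist = {file_digest(b"known-bad-sample")}

print(matches_blocklist(b"known-bad-sample", blocklist))  # True
print(matches_blocklist(b"harmless upload", blocklist))   # False
```

The exact-digest version fails as soon as a file is re-encoded, which is precisely why regulators point platforms toward perceptual hashing for this task.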

Ethiopian content moderators claim neglect by Meta

Former moderators for Facebook’s parent company, Meta, have accused the tech giant of disregarding threats from Ethiopian rebels after the moderators removed inflammatory content. According to court documents filed in Kenya, members of the Oromo Liberation Army (OLA) targeted moderators reviewing Facebook posts, threatening dire consequences unless the posts were reinstated. Contractors hired by Meta allegedly dismissed these concerns, claiming the threats were fabricated, before later offering limited support, such as moving one exposed moderator to a safehouse.

The dispute stems from a lawsuit by 185 former moderators against Meta and two contractors, Sama and Majorel, alleging wrongful termination and blacklisting after they attempted to unionise. Moderators focusing on Ethiopia faced particularly acute risks, receiving threats that detailed their names and addresses, yet their complaints were reportedly met with inaction or suspicion. One moderator, fearing for his life, described living in constant terror of visiting family in Ethiopia.

The case has broader implications for Meta’s content moderation policies, as the company relies on third-party firms worldwide to handle disturbing and often dangerous material. In a related Kenyan lawsuit, Meta stands accused of allowing violent and hateful posts to flourish on its platform, exacerbating Ethiopia’s ongoing civil strife. While Meta, Sama, and the OLA have not commented, the allegations raise serious questions about the accountability of global tech firms in safeguarding their workers and addressing hate speech.

Digital futures at a crossroads: aligning WSIS and the Global Digital Compact

The path toward a cohesive digital future was the central theme at the ‘From WSIS to GDC: Harmonising Strategies Towards Coordination’ session held at the Internet Governance Forum (IGF) 2024 in Riyadh. Experts, policymakers, and civil society representatives converged to address how the World Summit on the Information Society (WSIS) framework and the Global Digital Compact (GDC) can work in unison. At the heart of the debate lay two critical imperatives: coordination and avoiding fragmentation.

Panellists, including Jorge Cancio of the Swiss Government and David Fairchild of Canada, underscored the IGF’s central role as a multistakeholder platform for dialogue. However, concerns about its diminishing mandate and inadequate funding surfaced repeatedly. Fairchild warned of ‘a centralisation of digital governance processes,’ hinting at geopolitical forces that could undermine inclusive, global cooperation. Cancio urged an updated ‘Swiss Army knife’ approach to WSIS, where existing mechanisms, like the IGF, are strengthened rather than duplicated.

The session also highlighted emerging challenges since WSIS’s 2005 inception. Amrita Choudhury from MAG and Anita Gurumurthy of IT for Change emphasised that AI, data governance, and widening digital divides demand urgent attention. Gurumurthy lamented that ‘neo-illiberalism,’ characterised by corporate greed and authoritarian politics, threatens the vision of a people-centred information society. Meanwhile, Gitanjali Sah of ITU reaffirmed WSIS’s achievements, pointing to successes like digital inclusion through telecentres and distance learning.

Amid these reflections, the IGF emerged as an essential event for harmonising WSIS and GDC goals. Panellists, including Nigel Cassimire from the Caribbean Telecommunications Union, proposed that the IGF develop performance targets to implement GDC commitments effectively. Yet, as Jason Pielemeier of the Global Network Initiative cautioned, the IGF faces threats of co-optation in settings hostile to open dialogue, which ‘weakens its strength.’

Despite these tensions, hope remained for creative solutions and renewed international solidarity. The session concluded with a call to refocus on WSIS’s original principles—ensuring no one is left behind in the digital future. As Anita Gurumurthy aptly summarised: ‘We reject bad politics and poor economics. What we need is a solidarity vision of interdependence and mutual reciprocity.’

The EU to resolve dispute with India over ICT tariffs

The European Union is moving to resolve a trade dispute with India over tariffs on ICT goods. India has effectively blocked the World Trade Organization (WTO) process by appealing a panel report favourable to the EU to the non-functional WTO Appellate Body, stalling resolution. India has also rejected alternative dispute resolution methods, such as ad hoc appeal arbitration or a mutually agreed solution.

In response, the EU is using its Enforcement Regulation, which allows it to enforce international trade obligations when dispute settlement is blocked, ensuring that WTO rules are respected. The EU has launched a consultation for concerned entities, with responses due by 10 February 2025, to guide decisions on potential commercial policy measures should a mutually satisfactory solution not be reached.

At the same time, the EU continues to seek a resolution through alternative means, inviting India to join the Multi-Party Interim Appeal Arrangement (MPIA) or agree to ad hoc appeal arbitration. The dispute began in 2014 when India imposed customs duties of up to 20% on various ICT products, which the EU argues violate India’s WTO commitments to apply a zero-duty rate.

In 2019, the EU initiated the WTO dispute settlement process, and in April 2023, the panel ruled in favour of the EU, confirming that India’s tariffs were inconsistent with WTO rules. India appealed the decision in December 2023, prolonging the dispute.

Serbian spyware targets activists and journalists, Amnesty says

Serbia has been accused of using spyware to target journalists and activists, according to a new Amnesty International report. Investigations revealed that ‘NoviSpy,’ a homegrown spyware, extracted private data from devices and uploaded it to a government-controlled server. Some cases also involved the use of technology provided by Israeli firm Cellebrite to unlock phones before infecting them.

Activists reported unusual phone activity following meetings with Serbian authorities. Forensic experts confirmed NoviSpy exported contact lists and private photos to state-controlled servers. The Serbian government has yet to respond to requests for comment regarding these allegations.

Cellebrite, whose phone-cracking devices are widely used by law enforcement worldwide, stated it is investigating the claims. The company’s representative noted that misuse of their technology could violate end-user agreements, potentially leading to a suspension of use by Serbian officials.

Concerns over these practices are heightened due to Serbia’s EU integration programme, partially funded by Norway and administered by the UN Office for Project Services (UNOPS). Norway expressed alarm over the findings and plans to meet with Serbian authorities and UNOPS for clarification.

TikTok’s request to temporarily halt the US ban rejected by US court

TikTok’s deadline is approaching as its Chinese parent company, ByteDance, prepares to take its case to the US Supreme Court. A federal appeals court on Friday rejected TikTok’s request for more time to challenge a law mandating ByteDance to divest TikTok’s US operations by 19 January or face a nationwide ban. The platform, used by 170 million Americans, now has weeks to seek intervention from the Supreme Court to avoid a shutdown that would reshape the digital landscape.

The US government argues that ByteDance’s control over TikTok poses a persistent national security threat, claiming the app’s ties to China could expose American data to misuse. TikTok strongly disputes these assertions, stating that user data and content recommendation systems are stored on US-based Oracle servers and that moderation decisions are made domestically. A TikTok spokesperson emphasised the platform’s intention to fight for free speech, pointing to the Supreme Court’s history of defending such rights.

The ruling leaves TikTok’s immediate fate uncertain, placing the decision first in the hands of President Joe Biden, who could grant a 90-day extension if progress toward a divestiture is evident. The decision would then pass to President-elect Donald Trump, who takes office just one day after the 19 January deadline. Despite his previous efforts to ban TikTok in 2020, Trump recently opposed the current law, citing concerns that it would benefit rival platforms like Facebook.

Adding to the urgency, US lawmakers have called on Apple and Google to prepare to remove TikTok from their app stores if ByteDance fails to comply. As the clock ticks, TikTok’s battle with the US government highlights a broader conflict over technology, data privacy, and national security. The legal outcome could force millions of users and businesses to rethink their digital strategies in a post-TikTok world.

European price comparison sites call for action against Google over search proposals

More than 20 price comparison websites across Europe, including Germany’s Idealo and France’s LeGuide, criticised Google’s proposed changes to its search results, claiming they fail to comply with EU Digital Markets Act (DMA) requirements. The Act prohibits companies from favouring their own products and services on their platforms.

Google’s latest proposal includes redesigned search results to balance comparison sites and supplier websites, alongside testing an older ‘ten blue links’ format in some countries. However, the websites argue Google has disregarded feedback from over a year of discussions.

The critics, in an open letter, called on the European Commission to take decisive action, including fines, to ensure compliance. Google referred to a November statement highlighting efforts to meet DMA requirements.

SEC reopens investigation into Elon Musk and Neuralink

The US Securities and Exchange Commission (SEC) has reopened its investigation into Neuralink, Elon Musk’s brain-chip startup, according to a letter shared by Musk on X, formerly known as Twitter. The letter, dated 12 December and written by Musk’s attorney Alex Spiro, also revealed that the SEC issued Musk a 48-hour deadline to settle a probe into his $44 billion takeover of Twitter or face charges. The settlement amount remains undisclosed.

Musk’s tumultuous relationship with the SEC has resurfaced amid allegations that he misled investors about Neuralink’s brain implant safety. Despite ongoing investigations, the extent to which the SEC can take action against Musk is uncertain. Musk, who also leads Tesla and SpaceX, is positioned to gain significant political leverage after investing heavily in supporting Donald Trump’s presidential campaign. Trump, in turn, has appointed Musk to a government reform task force, raising questions about potential regulatory leniency toward his ventures.

In the letter, Spiro criticised the SEC’s actions, stating Musk would not be ‘intimidated’ and reserving his legal rights. This marks the latest in a series of clashes between Musk and the SEC, including a 2018 lawsuit over misleading Tesla-related tweets, which Musk settled by paying $20 million and stepping down as Tesla chairman. Both the SEC and Neuralink have yet to comment on the reopened investigation.