Hundreds arrested in Nigerian fraud bust targeting victims globally

Nigerian authorities have arrested 792 people in connection with an elaborate scam operation based in Lagos. The suspects, including 148 Chinese and 40 Filipino nationals, were detained during a raid on the Big Leaf Building, a luxury seven-storey complex that allegedly housed a call centre targeting victims in the Americas and Europe.

The fraudsters reportedly used social media platforms such as WhatsApp and Instagram to lure individuals with promises of romance or lucrative investment opportunities. Victims were then coerced into transferring funds for fake cryptocurrency ventures. Nigeria’s Economic and Financial Crimes Commission (EFCC) revealed that local accomplices were recruited to build trust with targets, before handing them over to foreign organisers to complete the scams.

The EFCC spokesperson stated that agents had seized phones, computers, and vehicles during the raid and were working with international partners to investigate links to organised crime. This operation highlights the growing use of sophisticated technology in transnational fraud, as well as Nigeria’s commitment to combating such criminal activities.

Enhancing parliamentary skills for a thriving digital future

As digital transformation accelerates, parliaments across the globe are challenged to keep pace with emerging technologies like AI and data governance. On the second day of IGF 2024 in Riyadh, an influential panel discussed how parliamentary capacity development is essential to shaping inclusive, balanced digital policies without stifling innovation.

The session ‘Building parliamentary capacity to effectively shape the digital realm,’ moderated by Rima Al-Yahya of Saudi Arabia’s Shura Council, brought together representatives from international organisations and tech giants, including ICANN, Google, GIZ, and UNESCO. Their message was that parliamentarians need targeted training and collaboration to effectively navigate AI regulation, data sovereignty, and the digital economy.

The debate on AI regulation reflected a global dilemma: how to regulate AI responsibly without halting progress. UNESCO’s Cedric Wachholz outlined flexible approaches, including risk-based frameworks and ethical principles, as set out in the organisation’s Recommendation on the Ethics of AI. Google’s Olga Skorokhodova reinforced this, noting that as AI develops it is becoming ‘too important not to regulate well’, and advocated multistakeholder collaboration and local capacity development.

Beckwith Burr, ICANN board member, stressed that while internet governance requires global coordination, legislative decisions are inherently national. ‘Parliamentarians must understand how the internet works to avoid laws that unintentionally break it,’ she cautioned, adding that ICANN offers robust capacity-building programmes to bridge knowledge gaps.

With a similar stance, Franz von Weizsäcker of GIZ highlighted Africa’s efforts to harmonise digital policies across 55 countries under the African Union’s Data Policy Framework. He noted that concerns about ‘data colonialism’, where local data benefits global corporations, must be tackled through innovative policies that protect data without hindering cross-border data flows.

Parliamentarians from Kenya, Egypt, and Gambia emphasised the need for widespread digital literacy among legislators, as poorly informed laws risk impeding innovation. ‘Over 95% of us do not understand the technical sector,’ said Kenyan Senator Catherine Muma, urging investment to empower lawmakers across all sectors, whether health, finance, or education, to legislate for an AI-driven future.

As Rima Al-Yahya fittingly summarised, ‘Equipping lawmakers with tools and knowledge is pivotal to ensuring digital policies promote innovation, security, and accountability for all.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Balancing innovation and oversight: AI’s future requires shared governance

At IGF 2024, day two in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dilemmas in the AI age: how to foster innovation in large-scale AI systems while ensuring ethical governance and regulation. The session ‘Researching at the frontier: Insights from the private sector in developing large-scale AI systems’ reflected the urgency of navigating AI’s transformative power without losing sight of privacy, fairness, and societal impact.

Ivana Bartoletti, Chief Privacy and AI Governance Officer at Wipro, called on governments to better use existing privacy and data protection laws rather than rush into new AI-specific legislation. ‘AI doesn’t exist in isolation. Privacy laws, consumer rights, and anti-discrimination frameworks already apply,’ she said, stressing the need for ‘privacy by design’ to protect individual freedoms at every stage of AI development.

Basma Ammari from Meta added a private-sector perspective, advocating for a risk-based and principles-driven regulatory approach. Highlighting Meta’s open-source strategy for its large language models, Ammari explained, ‘More diverse global input strips biases and makes AI systems fairer and more representative.’ She added that collaboration, rather than heavy-handed regulation, is key to safeguarding innovation.

Another expert, Fuad Siddiqui, EY’s Emerging Tech Leader, introduced the concept of an ‘intelligence grid,’ likening AI infrastructure to electricity networks. He detailed AI’s potential to address real-world challenges, citing applications in agriculture and energy sectors that improve productivity and reduce environmental impacts. ‘AI must be embedded into resilient national strategies that balance innovation and sovereignty,’ Siddiqui noted.

Parliamentarians played a central role in the discussion, raising concerns about AI’s societal impacts, particularly on jobs and education. ‘Legislators face a steep learning curve in AI governance,’ remarked Silvia Dinica, a Romanian senator with a background in mathematics. Calls emerged for upskilling initiatives and AI-driven tools to support legislative processes, with private-sector partnerships seen as crucial to addressing workforce disruption.

The debate over AI regulation remains unsettled, but a consensus emerged on transparency, fairness, and accountability. Panellists urged parliamentarians to define national priorities, invest in research on algorithm validation, and work with private stakeholders to create adaptable governance frameworks. As Bartoletti aptly summarised, ‘The future of AI is not just technological—it’s about the values we choose to protect.’

Messaging app Viber blocked in Russia

Russian authorities have blocked access to the Viber messaging app, citing violations of rules aimed at curbing terrorism, extremism, and drug-related activities. The decision was announced by Roskomnadzor, the country’s communications regulator, marking the latest action in a series of restrictions on social media platforms.

Viber, owned by Japan’s Rakuten Group, had been a vocal opponent of Russian disinformation. Hiroshi Mikitani, Rakuten’s chief executive, previously described the app as a tool to combat propaganda, stating that the platform took a firm stance against fake news. However, Rakuten has yet to respond to the block.

This development comes amidst an ongoing digital crackdown in Russia, which has targeted various platforms perceived as threats to state narratives. Critics argue that such measures stifle free communication and independent information sharing. Viber now joins the list of restricted apps as Russia intensifies its grip on online spaces.

Britain enforces new online safety rules for social media platforms

Britain’s new online safety regime officially took effect on Monday, compelling social media platforms like Facebook and TikTok to combat criminal activity and prioritise safer design. Media regulator Ofcom introduced the first codes of practice aimed at tackling illegal harms, including child sexual abuse and content encouraging suicide. Platforms have until March 16, 2025, to assess the risks of harmful content and implement measures like enhanced moderation, easier reporting, and built-in safety tests.

Ofcom’s Chief Executive, Melanie Dawes, emphasised that tech companies are now under scrutiny to meet strict safety standards. Failure to comply after the deadline could result in fines of up to £18 million ($22.3 million) or 10% of a company’s global revenue. Britain’s Technology Secretary Peter Kyle described the new rules as a significant shift in online safety, pledging full support for regulatory enforcement, including potential site blocks.

The Online Safety Act, enacted last year, sets rigorous requirements for platforms to protect children and remove illegal content. High-risk sites must employ automated tools like hash-matching to detect child sexual abuse material. More safety regulations are expected in the first half of 2025, marking a major step in the UK’s fight for safer online spaces.
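The hash-matching mentioned above can be illustrated with a minimal sketch. Production systems rely on perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding, and on curated databases maintained by child-safety organisations; the plain SHA-256 set and the `is_known_match` helper below are simplified assumptions used only to show the matching step.

```python
import hashlib

# Hypothetical database of hashes of known prohibited files.
# Real deployments use perceptual hashes, not raw SHA-256;
# this set contains the SHA-256 of b"test" purely for illustration.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_match(upload: bytes, known_hashes: set[str]) -> bool:
    """Flag an upload whose hash appears in the known-content database."""
    return sha256_of(upload) in known_hashes

print(is_known_match(b"test", KNOWN_HASHES))   # True: its hash is in the set
print(is_known_match(b"other", KNOWN_HASHES))  # False: unknown content
```

The design point is that platforms never need to store or inspect the prohibited material itself at scan time, only compare digests against a reference list.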

Ethiopian content moderators claim neglect by Meta

Former moderators for Facebook’s parent company, Meta, have accused the tech giant of disregarding threats from Ethiopian rebels made after the moderators removed inflammatory content. According to court documents filed in Kenya, members of the Oromo Liberation Army (OLA) targeted moderators reviewing Facebook posts, threatening dire consequences unless the posts were reinstated. Contractors hired by Meta allegedly dismissed these concerns, claiming the threats were fabricated, before later offering limited support, such as moving one exposed moderator to a safehouse.

The dispute stems from a lawsuit by 185 former moderators against Meta and two contractors, Sama and Majorel, alleging wrongful termination and blacklisting after they attempted to unionise. Moderators focusing on Ethiopia faced particularly acute risks, receiving threats that detailed their names and addresses, yet their complaints were reportedly met with inaction or suspicion. One moderator, fearing for his life, described living in constant terror of visiting family in Ethiopia.

The case has broader implications for Meta’s content moderation policies, as the company relies on third-party firms worldwide to handle disturbing and often dangerous material. In a related Kenyan lawsuit, Meta stands accused of allowing violent and hateful posts to flourish on its platform, exacerbating Ethiopia’s ongoing civil strife. While Meta, Sama, and the OLA have not commented, the allegations raise serious questions about the accountability of global tech firms in safeguarding their workers and addressing hate speech.

Digital futures at a crossroads: aligning WSIS and the Global Digital Compact

The path toward a cohesive digital future was the central theme at the ‘From WSIS to GDC: Harmonising Strategies Towards Coordination’ session held at the Internet Governance Forum (IGF) 2024 in Riyadh. Experts, policymakers, and civil society representatives converged to address how the World Summit on the Information Society (WSIS) framework and the Global Digital Compact (GDC) can work in unison. At the heart of the debate lay two critical imperatives: coordination and avoiding fragmentation.

Panellists, including Jorge Cancio of the Swiss Government and David Fairchild of Canada, underscored the IGF’s central role as a multistakeholder platform for dialogue. However, concerns about its diminishing mandate and inadequate funding surfaced repeatedly. Fairchild warned of ‘a centralisation of digital governance processes,’ hinting at geopolitical forces that could undermine inclusive, global cooperation. Cancio urged an updated ‘Swiss Army knife’ approach to WSIS, where existing mechanisms, like the IGF, are strengthened rather than duplicated.

The session also highlighted emerging challenges since WSIS’s 2005 inception. Amrita Choudhury from MAG and Anita Gurumurthy of IT for Change emphasised that AI, data governance, and widening digital divides demand urgent attention. Gurumurthy lamented that ‘neo-illiberalism,’ characterised by corporate greed and authoritarian politics, threatens the vision of a people-centred information society. Meanwhile, Gitanjali Sah of ITU reaffirmed WSIS’s achievements, pointing to successes like digital inclusion through telecentres and distance learning.

Amid these reflections, the IGF emerged as an essential event for harmonising WSIS and GDC goals. Panellists, including Nigel Cassimire from the Caribbean Telecommunications Union, proposed that the IGF develop performance targets to implement GDC commitments effectively. Yet, as Jason Pielemeier of the Global Network Initiative cautioned, the IGF faces threats of co-optation in settings hostile to open dialogue, which ‘weakens its strength.’

Despite these tensions, hope remained for creative solutions and renewed international solidarity. The session concluded with a call to refocus on WSIS’s original principles—ensuring no one is left behind in the digital future. As Anita Gurumurthy aptly summarised: ‘We reject bad politics and poor economics. What we need is a solidarity vision of interdependence and mutual reciprocity.’

The EU to resolve dispute with India over ICT tariffs

The European Union is moving to resolve a trade dispute with India over tariffs on ICT goods. India has effectively blocked the process at the World Trade Organization (WTO) by appealing a panel report favourable to the EU to the non-functional WTO Appellate Body, stalling the resolution. India has also rejected alternative dispute resolution methods, such as ad hoc appeal arbitration or a mutually agreed solution.

In response, the EU is using its Enforcement Regulation, which allows it to enforce international trade obligations when dispute settlement is blocked, ensuring that WTO rules are respected. The EU has launched a consultation for concerned entities, with responses due by 10 February 2025, to guide decisions on potential commercial policy measures should a mutually satisfactory solution not be reached.

At the same time, the EU continues to seek a resolution through alternative means, inviting India to join the Multi-Party Interim Appeal Arrangement (MPIA) or agree to ad hoc appeal arbitration. The dispute began in 2014 when India imposed customs duties of up to 20% on various ICT products, which the EU argues violates India’s WTO commitments to apply a zero-duty rate.

In 2019, the EU initiated the WTO dispute settlement process, and in April 2023, the panel ruled in favour of the EU, confirming that India’s tariffs were inconsistent with WTO rules. India appealed the decision in December 2023, prolonging the dispute.

Serbian spyware targets activists and journalists, Amnesty says

Serbia has been accused of using spyware to target journalists and activists, according to a new Amnesty International report. Investigations revealed that ‘NoviSpy,’ a homegrown spyware, extracted private data from devices and uploaded it to a government-controlled server. Some cases also involved the use of technology provided by Israeli firm Cellebrite to unlock phones before infecting them.

Activists reported unusual phone activity following meetings with Serbian authorities. Forensic experts confirmed NoviSpy exported contact lists and private photos to state-controlled servers. The Serbian government has yet to respond to requests for comment regarding these allegations.

Cellebrite, whose phone-cracking devices are widely used by law enforcement worldwide, stated it is investigating the claims. The company’s representative noted that misuse of their technology could violate end-user agreements, potentially leading to a suspension of use by Serbian officials.

Concerns over these practices are heightened due to Serbia’s EU integration programme, partially funded by Norway and administered by the UN Office for Project Services (UNOPS). Norway expressed alarm over the findings and plans to meet with Serbian authorities and UNOPS for clarification.

TikTok’s request to temporarily halt the US ban rejected by US court

TikTok’s deadline is approaching as its Chinese parent company, ByteDance, prepares to take its case to the US Supreme Court. A federal appeals court on Friday rejected TikTok’s request for more time to challenge a law mandating ByteDance to divest TikTok’s US operations by 19 January or face a nationwide ban. The platform, used by 170 million Americans, now has weeks to seek intervention from the Supreme Court to avoid a shutdown that would reshape the digital landscape.

The US government argues that ByteDance’s control over TikTok poses a persistent national security threat, claiming the app’s ties to China could expose American data to misuse. TikTok strongly disputes these assertions, stating that user data and content recommendation systems are stored on US-based Oracle servers and that moderation decisions are made domestically. A TikTok spokesperson emphasised the platform’s intention to fight for free speech, pointing to the Supreme Court’s history of defending such rights.

The ruling leaves TikTok’s immediate fate uncertain, placing the decision first in the hands of President Joe Biden, who could grant a 90-day extension if progress toward a divestiture is evident. The matter would then pass to President-elect Donald Trump, who takes office just one day after the 19 January deadline. Despite his previous efforts to ban TikTok in 2020, Trump recently opposed the current law, citing concerns about its benefits to rival platforms like Facebook.

Adding to the urgency, US lawmakers have called on Apple and Google to prepare to remove TikTok from their app stores if ByteDance fails to comply. As the clock ticks, TikTok’s battle with the US government highlights a broader conflict over technology, data privacy, and national security. The legal outcome could force millions of users and businesses to rethink their digital strategies in a post-TikTok world.