Kraken operator fined millions by Australian court

Bit Trade, the operator of Kraken in Australia, has been fined $8 million for offering an unapproved margin lending product to over 1,100 customers. The Federal Court of Australia ruled that the company breached financial regulations by failing to assess customer suitability and neglecting to provide a Target Market Determination (TMD), a document essential for ensuring products are appropriately matched to consumers’ needs.

The Australian Securities and Investments Commission (ASIC) revealed that customers lost $7.85 million due to the product, with one individual losing $6.3 million. Justice John Nicholas criticised Bit Trade’s actions as “serious” and profit-driven, calling out the company for its delayed response to compliance issues. In addition to the fine, Bit Trade was ordered to cover ASIC’s legal costs.

Kraken was disappointed with the ruling, arguing that Australia’s regulatory framework lacks clarity and calling for tailored cryptocurrency laws. However, ASIC Chair Joe Longo described the decision as a turning point for consumer protection, urging digital asset firms to meet compliance obligations. The regulator is currently consulting with the crypto industry on updates to its guidance, though critics claim the government’s inaction has left the sector in “regulatory limbo.”

Meta resolves Australian privacy dispute over Cambridge Analytica scandal

Meta Platforms, the parent company of Facebook, has settled a major privacy lawsuit in Australia with a record A$50 million payment. This settlement concludes years of legal proceedings over allegations that personal data of 311,127 Australian Facebook users was improperly exposed and risked being shared with consulting firm Cambridge Analytica. The firm was infamous for using such data for political profiling, including work on the Brexit campaign and Donald Trump’s election.

Australia’s privacy watchdog initiated the case in 2020 after uncovering that Facebook’s personality quiz app, This is Your Digital Life, was linked to the broader Cambridge Analytica scandal first revealed in 2018. The Australian Information Commissioner Elizabeth Tydd described the settlement as the largest of its kind in the nation, addressing significant privacy concerns.

Meta stated the agreement was reached on a “no admission” basis, marking an end to the legal battle. The case had already secured a significant victory for Australian regulators when the high court declined Meta’s appeal in 2023, forcing the company into mediation. This outcome highlights Australia’s growing resolve in holding global tech firms accountable for user data protection.

Hundreds arrested in Nigerian fraud bust targeting victims globally

Nigerian authorities have arrested 792 people in connection with an elaborate scam operation based in Lagos. The suspects, including 148 Chinese and 40 Filipino nationals, were detained during a raid on the Big Leaf Building, a luxury seven-storey complex that allegedly housed a call centre targeting victims in the Americas and Europe.

The fraudsters reportedly used social media platforms such as WhatsApp and Instagram to lure individuals with promises of romance or lucrative investment opportunities. Victims were then coerced into transferring funds for fake cryptocurrency ventures. Nigeria’s Economic and Financial Crimes Commission (EFCC) revealed that local accomplices were recruited to build trust with targets, before handing them over to foreign organisers to complete the scams.

The EFCC spokesperson stated that agents had seized phones, computers, and vehicles during the raid and were working with international partners to investigate links to organised crime. This operation highlights the growing use of sophisticated technology in transnational fraud, as well as Nigeria’s commitment to combating such criminal activities.

Enhancing parliamentary skills for a thriving digital future

As digital transformation accelerates, parliaments across the globe are challenged to keep pace with emerging technologies like AI and data governance. On the second day of IGF 2024 in Riyadh, an influential panel discussed how parliamentary capacity development is essential to shaping inclusive, balanced digital policies without stifling innovation.

The session ‘Building parliamentary capacity to effectively shape the digital realm,’ moderated by Rima Al-Yahya of Saudi Arabia’s Shura Council, brought together representatives from international organisations and tech giants, including ICANN, Google, GIZ, and UNESCO. Their message was that parliamentarians need targeted training and collaboration to effectively navigate AI regulation, data sovereignty, and the digital economy.

The debate on AI regulation reflected a global dilemma: how to regulate AI responsibly without halting progress. UNESCO’s Cedric Wachholz outlined flexible approaches, including risk-based frameworks and ethical principles, as set out in its Recommendation on the Ethics of AI. Google’s Olga Skorokhodova reinforced this, saying that as AI develops it is becoming ‘too important not to regulate well’, echoing a well-known Google refrain, and advocated multistakeholder collaboration and local capacity development.

Beckwith Burr, ICANN board member, stressed that while internet governance requires global coordination, legislative decisions are inherently national. ‘Parliamentarians must understand how the internet works to avoid laws that unintentionally break it,’ she cautioned, adding that ICANN offers robust capacity-building programmes to bridge knowledge gaps.

With a similar stance, Franz von Weizsäcker of GIZ highlighted Africa’s efforts to harmonise digital policies across 55 countries under the African Union’s Data Policy Framework. He noted that concerns about ‘data colonialism’, where local data benefits global corporations, must be tackled through innovative policies that protect data without hindering cross-border data flows.

Parliamentarians from Kenya, Egypt, and Gambia emphasised the need for widespread digital literacy among legislators, as poorly informed laws risk impeding innovation. ‘Over 95% of us do not understand the technical sector,’ said Kenyan Senator Catherine Muma, urging investment to empower lawmakers to legislate for an AI-driven future across sectors such as health, finance, and education.

As Rima Al-Yahya fittingly summarised, ‘Equipping lawmakers with tools and knowledge is pivotal to ensuring digital policies promote innovation, security, and accountability for all.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Balancing innovation and oversight: AI’s future requires shared governance

On day two of IGF 2024 in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dilemmas in the AI age: how to foster innovation in large-scale AI systems while ensuring ethical governance and regulation. The session ‘Researching at the frontier: Insights from the private sector in developing large-scale AI systems’ reflected the urgency of navigating AI’s transformative power without losing sight of privacy, fairness, and societal impact.

Ivana Bartoletti, Chief Privacy and AI Governance Officer at Wipro, called on governments to better use existing privacy and data protection laws rather than rush into new AI-specific legislation. ‘AI doesn’t exist in isolation. Privacy laws, consumer rights, and anti-discrimination frameworks already apply,’ she said, stressing the need for ‘privacy by design’ to protect individual freedoms at every stage of AI development.

Basma Ammari from Meta added a private-sector perspective, advocating for a risk-based and principles-driven regulatory approach. Highlighting Meta’s open-source strategy for its large language models, Ammari explained, ‘More diverse global input strips biases and makes AI systems fairer and more representative.’ She added that collaboration, rather than heavy-handed regulation, is key to safeguarding innovation.

Another expert, Fuad Siddiqui, EY’s Emerging Tech Leader, introduced the concept of an ‘intelligence grid,’ likening AI infrastructure to electricity networks. He detailed AI’s potential to address real-world challenges, citing applications in agriculture and energy sectors that improve productivity and reduce environmental impacts. ‘AI must be embedded into resilient national strategies that balance innovation and sovereignty,’ Siddiqui noted.

Parliamentarians played a central role in the discussion, raising concerns about AI’s societal impacts, particularly on jobs and education. ‘Legislators face a steep learning curve in AI governance,’ remarked Silvia Dinica, a Romanian senator with a background in mathematics. Calls emerged for upskilling initiatives and AI-driven tools to support legislative processes, with private-sector partnerships seen as crucial to addressing workforce disruption.

The debate over AI regulation remains unsettled, but a consensus emerged on transparency, fairness, and accountability. Panellists urged parliamentarians to define national priorities, invest in research on algorithm validation, and work with private stakeholders to create adaptable governance frameworks. As Bartoletti aptly summarised, ‘The future of AI is not just technological—it’s about the values we choose to protect.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Messaging app Viber blocked in Russia

Russian authorities have blocked access to the Viber messaging app, citing violations of rules aimed at curbing terrorism, extremism, and drug-related activities. The decision was announced by Roskomnadzor, the country’s communications regulator, marking the latest action in a series of restrictions on social media platforms.

Viber, owned by Japan’s Rakuten Group, had been a vocal opponent of Russian disinformation. Hiroshi Mikitani, Rakuten’s chief executive, previously described the app as a tool to combat propaganda, stating that the platform took a firm stance against fake news. However, Rakuten has yet to respond to the block.

This development comes amidst an ongoing digital crackdown in Russia, which has targeted various platforms perceived as threats to state narratives. Critics argue that such measures stifle free communication and independent information sharing. Viber now joins the list of restricted apps as Russia intensifies its grip on online spaces.

Britain enforces new online safety rules for social media platforms

Britain’s new online safety regime officially took effect on Monday, compelling social media platforms like Facebook and TikTok to combat criminal activity and prioritise safer design. Media regulator Ofcom introduced the first codes of practice aimed at tackling illegal harms, including child sexual abuse and content encouraging suicide. Platforms have until 16 March 2025 to assess the risks of harmful content and implement measures such as enhanced moderation, easier reporting, and built-in safety tests.

Ofcom’s Chief Executive, Melanie Dawes, emphasised that tech companies are now under scrutiny to meet strict safety standards. Failure to comply after the deadline could result in fines of up to £18 million ($22.3 million) or 10% of a company’s global revenue. Britain’s Technology Secretary Peter Kyle described the new rules as a significant shift in online safety, pledging full support for regulatory enforcement, including potential site blocks.

The Online Safety Act, enacted last year, sets rigorous requirements for platforms to protect children and remove illegal content. High-risk sites must employ automated tools like hash-matching to detect child sexual abuse material. More safety regulations are expected in the first half of 2025, marking a major step in the UK’s fight for safer online spaces.

Ethiopian content moderators claim neglect by Meta

Former moderators for Facebook’s parent company, Meta, have accused the tech giant of disregarding threats from Ethiopian rebels made after the moderators removed inflammatory content. According to court documents filed in Kenya, members of the Oromo Liberation Army (OLA) targeted moderators reviewing Facebook posts, threatening dire consequences unless the posts were reinstated. Contractors hired by Meta allegedly dismissed these concerns, claiming the threats were fabricated, before later offering limited support, such as moving one exposed moderator to a safehouse.

The dispute stems from a lawsuit by 185 former moderators against Meta and two contractors, Sama and Majorel, alleging wrongful termination and blacklisting after they attempted to unionise. Moderators focusing on Ethiopia faced particularly acute risks, receiving threats that detailed their names and addresses, yet their complaints were reportedly met with inaction or suspicion. One moderator, fearing for his life, described living in constant terror of visiting family in Ethiopia.

The case has broader implications for Meta’s content moderation policies, as the company relies on third-party firms worldwide to handle disturbing and often dangerous material. In a related Kenyan lawsuit, Meta stands accused of allowing violent and hateful posts to flourish on its platform, exacerbating Ethiopia’s ongoing civil strife. While Meta, Sama, and the OLA have not commented, the allegations raise serious questions about the accountability of global tech firms in safeguarding their workers and addressing hate speech.

Digital futures at a crossroads: aligning WSIS and the Global Digital Compact

The path toward a cohesive digital future was the central theme at the ‘From WSIS to GDC: Harmonising Strategies Towards Coordination’ session held at the Internet Governance Forum (IGF) 2024 in Riyadh. Experts, policymakers, and civil society representatives converged to address how the World Summit on the Information Society (WSIS) framework and the Global Digital Compact (GDC) can work in unison. At the heart of the debate lay two critical imperatives: coordination and avoiding fragmentation.

Panellists, including Jorge Cancio of the Swiss Government and David Fairchild of Canada, underscored the IGF’s central role as a multistakeholder platform for dialogue. However, concerns about its diminishing mandate and inadequate funding surfaced repeatedly. Fairchild warned of ‘a centralisation of digital governance processes,’ hinting at geopolitical forces that could undermine inclusive, global cooperation. Cancio urged an updated ‘Swiss Army knife’ approach to WSIS, where existing mechanisms, like the IGF, are strengthened rather than duplicated.

The session also highlighted emerging challenges since WSIS’s 2005 inception. Amrita Choudhury from MAG and Anita Gurumurthy of IT for Change emphasised that AI, data governance, and widening digital divides demand urgent attention. Gurumurthy lamented that ‘neo-illiberalism,’ characterised by corporate greed and authoritarian politics, threatens the vision of a people-centred information society. Meanwhile, Gitanjali Sah of ITU reaffirmed WSIS’s achievements, pointing to successes like digital inclusion through telecentres and distance learning.

Amid these reflections, the IGF emerged as an essential event for harmonising WSIS and GDC goals. Panellists, including Nigel Cassimire from the Caribbean Telecommunications Union, proposed that the IGF develop performance targets to implement GDC commitments effectively. Yet, as Jason Pielemeier of the Global Network Initiative cautioned, the IGF faces threats of co-optation in settings hostile to open dialogue, which ‘weakens its strength.’

Despite these tensions, hope remained for creative solutions and renewed international solidarity. The session concluded with a call to refocus on WSIS’s original principles—ensuring no one is left behind in the digital future. As Anita Gurumurthy aptly summarised: ‘We reject bad politics and poor economics. What we need is a solidarity vision of interdependence and mutual reciprocity.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

The EU to resolve dispute with India over ICT tariffs

The European Union is moving to resolve a trade dispute with India over tariffs on ICT goods. India has effectively blocked the World Trade Organization (WTO) process by appealing a panel report that favoured the EU to the non-functional WTO Appellate Body, stalling resolution. India has also rejected alternative dispute resolution methods, such as ad hoc appeal arbitration or a mutually agreed solution.

In response, the EU is invoking its Enforcement Regulation, which allows it to enforce international trade obligations when dispute settlement is blocked, ensuring that WTO rules are respected. The EU has launched a consultation for concerned entities, with responses due by 10 February 2025, to guide decisions on potential commercial policy measures should a mutually satisfactory solution not be reached.

At the same time, the EU continues to seek a resolution through alternative means, inviting India to join the Multi-Party Interim Appeal Arrangement (MPIA) or agree to ad hoc appeal arbitration. The dispute began in 2014, when India imposed customs duties of up to 20% on various ICT products, a move the EU argues violates India’s WTO commitment to apply a zero-duty rate.

In 2019, the EU initiated the WTO dispute settlement process, and in April 2023, the panel ruled in favour of the EU, confirming that India’s tariffs were inconsistent with WTO rules. India appealed the decision in December 2023, prolonging the dispute.