Meta resolves Australian privacy dispute over Cambridge Analytica scandal

Meta Platforms, the parent company of Facebook, has settled a major privacy lawsuit in Australia with a record A$50 million payment. This settlement concludes years of legal proceedings over allegations that personal data of 311,127 Australian Facebook users was improperly exposed and risked being shared with consulting firm Cambridge Analytica. The firm was infamous for using such data for political profiling, including work on the Brexit campaign and Donald Trump’s election.

Australia’s privacy watchdog initiated the case in 2020 after uncovering that Facebook’s personality quiz app, This is Your Digital Life, was linked to the broader Cambridge Analytica scandal first revealed in 2018. The Australian Information Commissioner Elizabeth Tydd described the settlement as the largest of its kind in the nation, addressing significant privacy concerns.

Meta stated the agreement was reached on a “no admission” basis, marking an end to the legal battle. The case had already secured a significant victory for Australian regulators when the High Court declined Meta’s appeal in 2023, forcing the company into mediation. The outcome highlights Australia’s growing resolve in holding global tech firms accountable for user data protection.

Hundreds arrested in Nigerian fraud bust targeting victims globally

Nigerian authorities have arrested 792 people in connection with an elaborate scam operation based in Lagos. The suspects, including 148 Chinese and 40 Filipino nationals, were detained during a raid on the Big Leaf Building, a luxury seven-storey complex that allegedly housed a call centre targeting victims in the Americas and Europe.

The fraudsters reportedly used social media platforms such as WhatsApp and Instagram to lure individuals with promises of romance or lucrative investment opportunities. Victims were then coerced into transferring funds for fake cryptocurrency ventures. Nigeria’s Economic and Financial Crimes Commission (EFCC) revealed that local accomplices were recruited to build trust with targets, before handing them over to foreign organisers to complete the scams.

The EFCC spokesperson stated that agents had seized phones, computers, and vehicles during the raid and were working with international partners to investigate links to organised crime. This operation highlights the growing use of sophisticated technology in transnational fraud, as well as Nigeria’s commitment to combating such criminal activities.

US firm buys Israeli spyware company

Florida-based AE Industrial Partners has acquired Israeli spyware company Paragon for an estimated $500 million, with reports suggesting the deal could reach up to $900 million. Paragon, a competitor to NSO Group, is known for providing cybersecurity tools to government agencies that it claims meet “enlightened democracy” standards. The acquisition was completed on 13 December and reportedly approved by both US and Israeli officials.

Paragon, founded in 2019 by former Israeli intelligence officers and backed by ex-Prime Minister Ehud Barak, is merging with Virginia-based cybersecurity firm Red Lattice. This move aims to strengthen the firm’s presence in the global surveillance market. The US subsidiary of Paragon recently signed a one-year contract with US Immigration and Customs Enforcement, reflecting its growing footprint in government cybersecurity services.

The acquisition comes amid heightened scrutiny of spyware technologies after allegations of abuse involving competitors like NSO Group. In 2021, the US added NSO to its trade blacklist, citing the misuse of its tools to target activists and journalists. Paragon, however, positions itself as a provider of ethically guided surveillance tools, limiting its activities to messaging apps and governmental communications.

Dynamic Coalitions: Bridging digital divides and shaping equitable online governance

The session ‘Dynamic Coalitions and the Global Digital Compact’ at IGF 2024 in Riyadh highlighted the significant role of Dynamic Coalitions (DCs) in advancing the Global Digital Compact’s (GDC) objectives. Moderated by Jutta Croll, the discussion served as a platform to illustrate the alignment of DC efforts with the GDC’s goals, emphasising the need for broader collaboration and inclusion.

One of the pressing topics addressed was bridging digital divides, as emphasised by June Paris, a nurse with research experience in maternal nutrition and a business development expert. She underscored the challenges facing Small Island Developing States (SIDS), noting their heightened vulnerability to digital marginalisation. Paris called on DCs to prioritise policies that combat polarisation and promote equitable internet access for underrepresented regions.

The conversation also delved into expanding the benefits of the digital economy. Muhammad Shabbir, a member of the Internet Society’s Accessibility Special Interest Group, the Pakistan ISOC chapter, and the Dynamic Coalition on Accessibility and Disability (DCAD), detailed the contributions of coalitions such as the DC on Financial Inclusion, which advocates for accessible financial services, and the DC on Open Education, which focuses on enhancing learning opportunities. Shabbir also highlighted the DC on Accessibility’s work towards digital inclusivity for persons with disabilities and the DC on Environment’s initiatives to address the environmental impacts of digitalisation.

Olivier Crepin-Leblond, founder and investor of the WAF lifestyle app and chair of the Dynamic Coalition on Core Internet Values, provided insights on fostering safe and inclusive digital spaces, stressing the pivotal work of DCs such as the DC on Internet Rights and Principles, which champions human rights online, and the DC on Child Online Safety, which works to protect children in the digital realm. He highlighted the significant proportion of internet users under 18, linking their rights to the UN Convention on the Rights of the Child.

Data governance and AI regulation also featured prominently. Tatevik Grigoryan, co-chair of the Dynamic Coalition on Equitable and Interoperable Data Governance and the DC on Internet Universality Indicators, discussed frameworks for responsible data management. Meanwhile, Yao Amevi Amnessinou Sossou, a research fellow in innovation and entrepreneurship, spotlighted AI-related initiatives. These included tackling gender biases through the DC on Gender and Internet Governance and exploring AI’s potential in healthcare and connected devices through other coalitions. Their contributions underscored the need for ethical and inclusive governance of emerging technologies.

The session’s open dialogue further enriched its value. Dr Rajendra Pratap Gupta, lead of three Dynamic Coalitions (Digital Economy, Digital Health, and Environment), highlighted the urgency of job creation and digital inclusion, while audience members raised critical points on data integrity and the transformative potential of gamification. Mark Carvell, co-moderator of the session, added a forward-looking perspective by mentioning the WSIS+20 Review and inviting DCs to contribute their expertise to this landmark evaluation.

By showcasing the diverse initiatives of Dynamic Coalitions, the session reinforced their essential role in shaping global internet governance. The call for greater inclusion, tangible outcomes, and multistakeholder collaboration resonated throughout, marking a clear path forward for advancing the GDC’s objectives.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Diplo Foundation explores AI’s ethical and philosophical challenges at IGF 2024

At the 2024 Internet Governance Forum (IGF) in Riyadh, a session featuring experts from the Diplo Foundation addressed AI’s deep philosophical and ethical implications. The discussion moved beyond surface-level concerns about bias and ethics, focusing instead on more profound questions about human identity and agency in a world increasingly shaped by AI.

Jovan Kurbalija, Director of the Diplo Foundation, emphasised the need to critically examine AI’s impact on human knowledge and identity. He introduced the idea of a ‘right to be humanly imperfect,’ advocating for preserving human flaws and agency in an AI-dominated world.

That concept was echoed by other speakers, who expressed concern that the pursuit of AI-driven optimisation could erode essential human qualities. Sorina Teleanu, Diplo Foundation’s Director of Knowledge, raised important questions about the tendency to anthropomorphise AI, warning against attributing human traits to machines and urging a broader consideration of non-human forms of intelligence.

The panel also delved into the philosophical dimensions of AI, with Teleanu pointing out the lack of privacy protections surrounding brain data processing and the potential risks of attributing personhood to advanced AI. The discussion of Artificial General Intelligence (AGI) brought up the provocative idea that if AI becomes indistinguishable from humans, it could potentially deserve human rights, challenging our traditional notions of consciousness and personhood.

Addressing AI governance, Kurbalija focused on practical, immediate issues, such as AI’s impact on education, employment, and daily life, rather than speculative long-term concerns. He called for a decentralised approach to AI development that preserves diverse knowledge sources and prevents the centralisation of power by large tech companies. Henri-Jean Pollet from ISPA Belgium added to the conversation by advocating for open-source models and data licensing to ensure AI reliability and prevent inaccuracies in AI-generated content.

The conversation also explored the evolving dynamics of human-AI interaction. Teleanu highlighted the potential changes in human communication as AI-generated text becomes more prevalent, while Mohammad Abdul Haque Anu, Secretary-General of the Bangladesh Internet Governance Forum, stressed the need for AI ethics education, particularly in developing countries. Kurbalija shared a revealing anecdote about AI-generated speeches at conferences, illustrating how AI could influence professional communication in the future.

As the session concluded, Kurbalija highlighted the Diplo Foundation’s approach to AI development, focusing on tools that support diplomats and policymakers by enhancing human knowledge without replacing human decision-making. The discussion wrapped up with a demonstration of these AI tools in action, emphasising their potential to augment human capabilities in specialised fields. The speakers left the audience with an invitation for continued philosophical exploration of AI’s role in shaping humanity’s future.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Balancing regulation, innovation, and rights in the digital space

Global experts gathered at the Internet Governance Forum in Riyadh to explore collaborative solutions for addressing online harms, emphasising the importance of multistakeholder approaches. Jordan Hadfield of the FBI highlighted international partnerships like Interpol’s specialist groups and the Violent Crimes Against Children Task Force, while Australia’s Cyber Affairs Ambassador Brendan Dowling stressed government accountability measures, such as social media age restrictions.

Nighat Dad, representing the Oversight Board, called for culturally sensitive content moderation and independence in oversight to ensure balanced regulation. Protecting vulnerable groups, especially children and women, took centre stage.

Dowling shared Australia’s initiative to ban under-16s from social media, while Rajnesh Singh from the APNIC Foundation detailed programs empowering women in Southeast Asia’s tech sector. Nighat Dad highlighted how Meta’s Oversight Board advises on issues like the cultural implications of certain terms, such as the Arabic word ‘Shaheed.’

Parliamentarians Auhoud Al-Shehail (Member of Parliament from the Saudi Shura Council) and Jehad Abdulla Al Fadhel (Second Deputy Speaker of the Shura Council of Bahrain) advocated for intensified penalties against harmful practices and stronger educational campaigns to build digital literacy.

Balancing innovation with regulation was another focus, with Hadfield and Dowling urging proactive ‘safety by design’ principles in technology development. Singh emphasised fostering local innovation over dependency on foreign digital products, while Al-Shehail called for policies that evolve alongside technology.

Closing the digital divide, particularly between developed and developing nations, also emerged as a priority, with the president of Guinea’s parliament emphasising global digital solidarity. The discussion underscored the complexity of online harms and the need for flexible, inclusive solutions that respect diverse cultural contexts.

As Brendan Dowling noted, ‘Safety must be integrated at every stage,’ while Singh stressed, ‘We need creators, not just consumers.’ The consensus was clear – a safer, more equitable digital world can be achieved only through collaboration, innovation, and ongoing dialogue.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Balancing innovation and oversight: AI’s future requires shared governance

On day two of IGF 2024 in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dilemmas of the AI age: how to foster innovation in large-scale AI systems while ensuring ethical governance and regulation. The session ‘Researching at the frontier: Insights from the private sector in developing large-scale AI systems’ reflected the urgency of navigating AI’s transformative power without losing sight of privacy, fairness, and societal impact.

Ivana Bartoletti, Chief Privacy and AI Governance Officer at Wipro, called on governments to better use existing privacy and data protection laws rather than rush into new AI-specific legislation. ‘AI doesn’t exist in isolation. Privacy laws, consumer rights, and anti-discrimination frameworks already apply,’ she said, stressing the need for ‘privacy by design’ to protect individual freedoms at every stage of AI development.

Basma Ammari from Meta added a private-sector perspective, advocating for a risk-based and principles-driven regulatory approach. Highlighting Meta’s open-source strategy for its large language models, Ammari explained, ‘More diverse global input strips biases and makes AI systems fairer and more representative.’ She added that collaboration, rather than heavy-handed regulation, is key to safeguarding innovation.

Another expert, Fuad Siddiqui, EY’s Emerging Tech Leader, introduced the concept of an ‘intelligence grid,’ likening AI infrastructure to electricity networks. He detailed AI’s potential to address real-world challenges, citing applications in agriculture and energy sectors that improve productivity and reduce environmental impacts. ‘AI must be embedded into resilient national strategies that balance innovation and sovereignty,’ Siddiqui noted.

Parliamentarians played a central role in the discussion, raising concerns about AI’s societal impacts, particularly on jobs and education. ‘Legislators face a steep learning curve in AI governance,’ remarked Silvia Dinica, a Romanian senator with a background in mathematics. Calls emerged for upskilling initiatives and AI-driven tools to support legislative processes, with private-sector partnerships seen as crucial to addressing workforce disruption.

The debate over AI regulation remains unsettled, but a consensus emerged on transparency, fairness, and accountability. Panelists urged parliamentarians to define national priorities, invest in research on algorithm validation, and work with private stakeholders to create adaptable governance frameworks. As Bartoletti aptly summarised, ‘The future of AI is not just technological—it’s about the values we choose to protect.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Britain enforces new online safety rules for social media platforms

Britain’s new online safety regime officially took effect on Monday, compelling social media platforms like Facebook and TikTok to combat criminal activity and prioritise safer design. Media regulator Ofcom introduced the first codes of practice aimed at tackling illegal harms, including child sexual abuse and content encouraging suicide. Platforms have until 16 March 2025 to assess the risks of harmful content and implement measures like enhanced moderation, easier reporting, and built-in safety tests.

Ofcom’s Chief Executive, Melanie Dawes, emphasised that tech companies are now under scrutiny to meet strict safety standards. Failure to comply after the deadline could result in fines of up to £18 million ($22.3 million) or 10% of a company’s global revenue. Britain’s Technology Secretary Peter Kyle described the new rules as a significant shift in online safety, pledging full support for regulatory enforcement, including potential site blocks.

The Online Safety Act, enacted last year, sets rigorous requirements for platforms to protect children and remove illegal content. High-risk sites must employ automated tools like hash-matching to detect child sexual abuse material. More safety regulations are expected in the first half of 2025, marking a major step in the UK’s fight for safer online spaces.
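The hash-matching technique mentioned above can be sketched briefly. Production systems rely on perceptual hashes (such as PhotoDNA or PDQ) that tolerate re-encoding and resizing; the simplified sketch below uses a plain SHA-256 digest against a hypothetical blocklist, illustrating only the exact-match principle.

```python
import hashlib


def file_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()


def is_known_match(data: bytes, known_hashes: set[str]) -> bool:
    """Check an upload's digest against a blocklist of known hashes."""
    return file_digest(data) in known_hashes


# Hypothetical blocklist seeded with the digest of one sample payload.
blocklist = {file_digest(b"sample-flagged-content")}

print(is_known_match(b"sample-flagged-content", blocklist))  # True: exact byte match
print(is_known_match(b"harmless-upload", blocklist))         # False: digest not listed
```

Because a cryptographic hash changes completely if a single byte differs, real deployments pair such lookups with perceptual hashing so that trivially altered copies are still detected.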

Ethiopian content moderators claim neglect by Meta

Former moderators for Facebook’s parent company, Meta, have accused the tech giant of disregarding threats from Ethiopian rebels after the moderators removed inflammatory content. According to court documents filed in Kenya, members of the Oromo Liberation Army (OLA) targeted moderators reviewing Facebook posts, threatening dire consequences unless the posts were reinstated. Contractors hired by Meta allegedly dismissed these concerns, claiming the threats were fabricated, before later offering limited support, such as moving one exposed moderator to a safehouse.

The dispute stems from a lawsuit by 185 former moderators against Meta and two contractors, Sama and Majorel, alleging wrongful termination and blacklisting after they attempted to unionise. Moderators focusing on Ethiopia faced particularly acute risks, receiving threats that detailed their names and addresses, yet their complaints were reportedly met with inaction or suspicion. One moderator, fearing for his life, described living in constant terror of visiting family in Ethiopia.

The case has broader implications for Meta’s content moderation policies, as the company relies on third-party firms worldwide to handle disturbing and often dangerous material. In a related Kenyan lawsuit, Meta stands accused of allowing violent and hateful posts to flourish on its platform, exacerbating Ethiopia’s ongoing civil strife. While Meta, Sama, and the OLA have not commented, the allegations raise serious questions about the accountability of global tech firms in safeguarding their workers and addressing hate speech.

Rhode Island suffers major data breach

Rhode Island officials have confirmed a major data breach in the state’s social services system, potentially exposing the personal and financial details of hundreds of thousands of residents. The hackers, believed to be an international cybercriminal group, accessed sensitive information through RIBridges, the state’s portal for government assistance programmes, including Social Security numbers and banking details.

The breach, which was detected earlier this month, affects users of the Supplemental Nutrition Assistance Program, Temporary Assistance for Needy Families, and healthcare services accessed through HealthSource RI since 2016. The attackers have demanded an undisclosed ransom, threatening to release the stolen data if it is not paid. Deloitte, the system’s vendor, confirmed the breach on Friday, prompting the state to shut down the portal temporarily.

Residents impacted by the breach will be notified via letters detailing steps to secure their personal information and protect their bank accounts. For now, new applicants for state benefits must use paper applications while authorities work to secure the compromised system. Governor Dan McKee described the incident as extortion, calling for swift remediation and protection for affected citizens.