Meta data breach leads to huge EU fine

Meta has been fined €251 million by the European Union’s privacy regulator over a 2018 security breach that affected 29 million users worldwide. The breach involved the ‘View As’ feature, which cyber attackers exploited to access sensitive personal data such as names, contact details, and even information about users’ children.

The Irish Data Protection Commission, Meta’s lead EU regulator, highlighted the severity of the violation, which exposed users to potential misuse of their private information. Meta resolved the issue shortly after its discovery and notified affected users and authorities. Of the 29 million accounts compromised, approximately 3 million belonged to users in the EU and European Economic Area.

This latest fine brings Meta’s total penalties under the EU’s General Data Protection Regulation to nearly €3 billion. A Meta spokesperson stated that the company plans to appeal the decision and emphasised the measures it has implemented to strengthen user data protection. This case underscores the ongoing regulatory scrutiny faced by major technology firms in Europe.

Musk faces scrutiny over national security concerns

Elon Musk and his company SpaceX are facing multiple federal investigations into their compliance with security protocols designed to protect national secrets. According to reports, the reviews were initiated by the US Air Force, the Department of Defense Inspector General, and the undersecretary for intelligence and security. Concerns include Musk’s alleged failure to disclose meetings with foreign leaders and his reported contacts with Russian officials, including President Vladimir Putin.

The investigations follow longstanding concerns about Musk’s security practices. A previous review by the Pentagon was prompted in 2018 when Musk appeared on a live podcast and smoked marijuana, raising questions about his security clearance. Recently, the Air Force denied Musk high-level security access, citing potential risks.

SpaceX and Musk have declined to comment on the investigations. However, Pentagon officials emphasised the confidentiality of such probes, stating that the inquiries aim to protect the integrity of the process and those involved. National security concerns surrounding Musk have also been echoed by US allies and lawmakers.

Election integrity in the digital age: insights from IGF 2024

Election integrity and disinformation took centre stage during the session ‘Internet governance and elections: maximising the potential for trust and addressing risks’ at the Internet Governance Forum (IGF) 2024 on Wednesday. Experts from across sectors convened to discuss how to safeguard election integrity amid digital challenges. With more than 65 elections held globally this year, the so-called ‘super election year,’ the risk of voters being misled has never been higher. From misinformation to AI deepfakes, the conversation underscored the escalating threats and the need for collaborative, multistakeholder solutions.

The growing threat of disinformation

Tawfik Jelassi from UNESCO emphasised the exponential rise of disinformation, framing it as a key global risk. ‘Without facts, there is no trust, and without trust, democracy falters,’ he cautioned, adding that misinformation spreads ten times faster than verified content, exacerbating distrust in elections. Panellists, including William Bird of Media Monitoring Africa and Lina Viltrakiene of the Lithuanian government, described how malicious actors manipulate digital platforms to mislead voters, with deepfakes and coordinated inauthentic behaviour becoming increasingly pervasive.

Digital inequality and global disparities

Elizabeth Orembo of ICT Africa highlighted the stark challenges faced by the Global South, where digital divides and unequal media access leave populations more vulnerable to misinformation. Unregulated influencers and podcasters wield significant power in Africa, often spreading unchecked narratives. ‘We cannot apply blanket policies from tech companies without addressing regional contexts,’ Orembo noted, pointing to the need for tailored approaches that account for infrastructural and cultural disparities.

AI, social media, and platform accountability

Meta’s Sezen Yesil shed light on the company’s efforts to combat election-related threats, including stricter measures against fake accounts, improved transparency for political ads, and collaboration with fact-checkers. While AI-driven disinformation remains a concern, Yesil observed that the anticipated impact of generative AI in the 2024 elections was modest. Nonetheless, panellists called for stronger accountability measures for tech companies, with Viltrakiene advocating for legal frameworks like the EU’s Digital Services Act to counter digital harms effectively.

A multistakeholder solution

The session highlighted the importance of multistakeholder collaboration, a frequent theme across discussions. Rosemary Sinclair of Australia’s auDA emphasised that safeguarding democracy is a ‘global team sport,’ requiring contributions from governments, civil society, academia, and the technical community. ‘The IGF is the ideal space for fostering such cooperation,’ she added, urging closer coordination between national and global IGF platforms.

Participants agreed that the fight for election integrity must extend beyond election cycles. Digital platforms, governments, and civil society must sustain efforts to build trust, address digital inequities, and create frameworks that protect democracy in the digital age. The IGF’s role as a forum for global dialogue and action was affirmed, with calls to strengthen its influence in shaping governance solutions for the future.

Election coalitions against misinformation

In a digital age where misinformation threatens the integrity of elections worldwide, a session at the IGF 2024 in Riyadh titled ‘Combating Misinformation with Election Coalitions’ strongly advocated a collaborative approach to this issue. Panellists from diverse backgrounds, including Google, fact-checking organisations, and journalism, underscored the significance of election coalitions in safeguarding democratic processes. Mevan Babakar from Google introduced the ‘Elections Playbook,’ a public policy guide for forming effective coalitions, highlighting the necessity of trust, neutrality, and collaboration across varied stakeholders.

The session explored successful models like Brazil’s Comprova, which unites media outlets to fact-check election-related claims, and Facts First PH in the Philippines, promoting a ‘mesh’ approach where fact-checked information circulates through community influencers. Daniel Bramatti, an investigative journalist from Brazil, emphasised the importance of fact-checking as a response to misinformation, not a suppression of free speech. ‘Fact-checking is the free speech response to misinformation,’ he stated, advocating for context determination over censorship.

Challenges discussed included maintaining coalition momentum post-election, navigating government pressures, and dealing with the advent of AI-generated content. Alex Walden, Global Head of Human Rights for Google, addressed the delicate balance of engaging with governments while maintaining neutrality. ‘We have to be mindful of the role that we have in engaging neutrally,’ she noted, stressing the importance of clear, consistent policies for content moderation.

The conversation also touched on engaging younger, non-voting demographics in fact-checking initiatives, with David Ajikobi from Africa Check highlighting media literacy programs in Nigeria. The panellists agreed on the need for a multistakeholder approach, advocating for frameworks that focus on specific harms rather than the broad term ‘misinformation,’ as suggested by Peter Cunliffe-Jones’s work at Westminster University.

The session concluded with clear advice: for anyone looking to start or join an election coalition, prioritise relationship-building and choose coordinators with neutrality and independence. The call to action was for continued collaboration, innovation, and adaptation to local contexts to combat the evolving landscape of misinformation, ensuring that these coalitions survive and thrive beyond election cycles.

DR Congo sues Apple subsidiaries over alleged use of conflict minerals, challenges ethical sourcing claims

The Democratic Republic of Congo (DRC) has filed criminal complaints against Apple’s subsidiaries in France and Belgium, accusing the tech giant of indirectly benefiting from conflict minerals sourced from the region. The DRC, a major supplier of tin, tantalum, and tungsten — essential components in electronic devices — alleges that minerals smuggled through its conflict zones fuel violence and atrocities, including mass rapes and killings, often perpetrated by armed groups.

While Apple claims to audit suppliers and maintain a transparent supply chain, international lawyers representing the Congolese government argue the company relies on minerals pillaged from Congo. The legal filings accuse Apple of covering up war crimes, handling stolen goods, and misleading consumers about the integrity of its supply chain. The complaints also criticise the industry-funded ITSCI certification scheme, claiming it falsely legitimises minerals sourced from conflict zones.

Belgium’s historical role in the exploitation of Congo’s resources was highlighted by Congolese lawyers, who called on Belgium to support their legal efforts. Both France and Belgium are seen as jurisdictions that emphasise corporate accountability. Judicial authorities in these countries will decide whether to pursue criminal investigations against Apple and its subsidiaries.

This legal action reflects Congo’s broader struggle to end the illicit trade of its resources, which has contributed to decades of violence. Millions have died or been displaced due to conflicts linked to mineral exploitation, underscoring the urgent need for stricter enforcement of ethical supply chain practices.

TikTok appeals to Supreme Court to block looming US ban

TikTok and its parent company, ByteDance, have asked the Supreme Court to halt a US law that would force ByteDance to sell TikTok by 19 January or face a nationwide ban. The companies argue that the law violates the First Amendment, as it targets one of the most widely used social media platforms in the United States, which currently has 170 million American users. A group of TikTok users also submitted a similar request to prevent the shutdown.

The law, passed by Congress in April, reflects concerns over national security. The Justice Department claims TikTok poses a threat due to its access to vast user data and potential for content manipulation by a Chinese-owned company. A lower court in December upheld the law, rejecting TikTok’s argument that it infringes on free speech rights. TikTok maintains that users should be free to decide for themselves whether to use the app and that shutting it down for even a month could cause massive losses in users and advertisers.

With the ban set to take effect the day before President-elect Donald Trump’s inauguration, TikTok has urged the Supreme Court to decide by 6 January. Trump, who once supported banning TikTok, has since reversed his position and expressed willingness to reconsider. The case highlights rising trade tensions between the US and China and could set a precedent for other foreign-owned apps operating in America.

Kraken operator fined millions by Australian court

Bit Trade, the operator of Kraken in Australia, has been fined $8 million for offering an unapproved margin lending product to over 1,100 customers. The Federal Court of Australia ruled that the company breached financial regulations by failing to assess customer suitability and neglecting to provide a Target Market Determination (TMD), a document essential for ensuring products are appropriately matched to consumers’ needs.

The Australian Securities and Investments Commission (ASIC) revealed that customers lost $7.85 million due to the product, with one individual losing $6.3 million. Justice John Nicholas criticised Bit Trade’s actions as “serious” and profit-driven, calling out the company for its delayed response to compliance issues. In addition to the fine, Bit Trade was ordered to cover ASIC’s legal costs.

Kraken expressed disappointment with the ruling, arguing that Australia’s regulatory framework lacks clarity, and called for tailored cryptocurrency laws. However, ASIC Chair Joe Longo described the decision as a turning point for consumer protection, urging digital asset firms to meet compliance obligations. The regulator is currently consulting with the crypto industry on updates to its guidance, though critics claim the government’s inaction has left the sector in “regulatory limbo.”

Meta resolves Australian privacy dispute over Cambridge Analytica scandal

Meta Platforms, the parent company of Facebook, has settled a major privacy lawsuit in Australia with a record A$50 million payment. This settlement concludes years of legal proceedings over allegations that personal data of 311,127 Australian Facebook users was improperly exposed and risked being shared with consulting firm Cambridge Analytica. The firm was infamous for using such data for political profiling, including work on the Brexit campaign and Donald Trump’s election.

Australia’s privacy watchdog initiated the case in 2020 after uncovering that Facebook’s personality quiz app, This is Your Digital Life, was linked to the broader Cambridge Analytica scandal first revealed in 2018. The Australian Information Commissioner Elizabeth Tydd described the settlement as the largest of its kind in the nation, addressing significant privacy concerns.

Meta stated the agreement was reached on a “no admission” basis, marking an end to the legal battle. The case had already secured a significant victory for Australian regulators when the high court declined Meta’s appeal in 2023, forcing the company into mediation. This outcome highlights Australia’s growing resolve in holding global tech firms accountable for user data protection.

Hundreds arrested in Nigerian fraud bust targeting victims globally

Nigerian authorities have arrested 792 people in connection with an elaborate scam operation based in Lagos. The suspects, including 148 Chinese and 40 Filipino nationals, were detained during a raid on the Big Leaf Building, a luxury seven-storey complex that allegedly housed a call centre targeting victims in the Americas and Europe.

The fraudsters reportedly used social media platforms such as WhatsApp and Instagram to lure individuals with promises of romance or lucrative investment opportunities. Victims were then coerced into transferring funds for fake cryptocurrency ventures. Nigeria’s Economic and Financial Crimes Commission (EFCC) revealed that local accomplices were recruited to build trust with targets, before handing them over to foreign organisers to complete the scams.

The EFCC spokesperson stated that agents had seized phones, computers, and vehicles during the raid and were working with international partners to investigate links to organised crime. This operation highlights the growing use of sophisticated technology in transnational fraud, as well as Nigeria’s commitment to combating such criminal activities.

Enhancing parliamentary skills for a thriving digital future

As digital transformation accelerates, parliaments across the globe are challenged to keep pace with emerging technologies like AI and data governance. On the second day of IGF 2024 in Riyadh, an influential panel discussed how parliamentary capacity development is essential to shaping inclusive, balanced digital policies without stifling innovation.

The session ‘Building parliamentary capacity to effectively shape the digital realm,’ moderated by Rima Al-Yahya of Saudi Arabia’s Shura Council, brought together representatives from international organisations and tech giants, including ICANN, Google, GIZ, and UNESCO. Their message was that parliamentarians need targeted training and collaboration to effectively navigate AI regulation, data sovereignty, and the digital economy.

The debate on AI regulation reflected a global dilemma: how to regulate AI responsibly without halting progress. UNESCO’s Cedric Wachholz outlined flexible approaches, including risk-based frameworks and ethical principles, as seen in its Ethics of AI recommendation. Google’s Olga Skorokhodova reinforced this, saying that as AI develops, it is becoming ‘too important not to regulate well,’ and advocated multistakeholder collaboration and local capacity development.

Beckwith Burr, ICANN board member, stressed that while internet governance requires global coordination, legislative decisions are inherently national. ‘Parliamentarians must understand how the internet works to avoid laws that unintentionally break it,’ she cautioned, adding that ICANN offers robust capacity-building programmes to bridge knowledge gaps.

With a similar stance, Franz von Weizsäcker of GIZ highlighted Africa’s efforts to harmonise digital policies across 55 countries under the African Union’s Data Policy Framework. He noted that concerns about ‘data colonialism’, where local data benefits global corporations, must be tackled through innovative policies that protect data without hindering cross-border data flows.

Parliamentarians from Kenya, Egypt, and Gambia emphasised the need for widespread digital literacy among legislators, as poorly informed laws risk impeding innovation. ‘Over 95% of us do not understand the technical sector,’ said Kenyan Senator Catherine Muma, urging investments to empower lawmakers across all sectors (health, finance, and education) to legislate for an AI-driven future.

As Rima Al-Yahya aptly summarised, ‘Equipping lawmakers with tools and knowledge is pivotal to ensuring digital policies promote innovation, security, and accountability for all.’

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.