Global stakeholders chart the course for digital governance at the IGF in Riyadh

Global digital governance took centre stage in a key discussion moderated by Timea Suto, which gathered experts to tackle challenges in AI, data management, and internet governance. At the Internet Governance Forum (IGF) in Riyadh, Saudi Arabia, speakers emphasised balancing innovation with regulatory consistency while highlighting the need for inclusive frameworks that address societal biases and underrepresented voices.

Thomas Schneider of Ofcom Switzerland underscored the Council of Europe’s AI convention as a promising standard for global interoperability. Meta’s Flavia Alves advocated for open-source AI to drive global collaboration and safer products. Meanwhile, Yoichi Iida from Japan’s Ministry of Communications outlined the G7’s Hiroshima AI Process code of conduct as an international step forward, while concerns about dataset biases were raised from the audience.

Data governance discussions focused on privacy and trust in cross-border flows. Maarit Palovirta of Connect Europe called for harmonised regulations to protect privacy while fostering innovation. Yoichi Iida highlighted OECD initiatives on trusted data sharing, with Amr Hashem of the GSMA stressing the need to develop infrastructure alongside governance, particularly in underserved regions.

The future of internet governance also featured prominently, with Irina Soeffky from Germany’s Digital Ministry reinforcing the multi-stakeholder model amid calls to update WSIS structures. Audience member Bertrand de La Chapelle proposed reforming the Internet Governance Forum to reflect current challenges. Jacques Beglinger of EuroDIG stressed the importance of grassroots inclusion, while Desiree Milosevic-Evans highlighted gender representation gaps in governance.

Canada’s Larisa Galadza framed the coming year as critical for advancing the Global Digital Compact, with priorities on AI governance under Canada’s G7 presidency. Maria Fernanda Garza of the International Chamber of Commerce (ICC) called for alignment in governance while maintaining flexibility for local needs amid ongoing multilateral challenges.

Speakers concluded that collaboration, inclusivity, and clear mandates are key to shaping effective digital governance. As technological change accelerates, the dialogue reinforces the need for adaptable, action-oriented strategies to ensure equity and innovation globally.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

IGF 2024 panel tackles global digital identity challenges

The 19th Internet Governance Forum (IGF 2024) in Riyadh, Saudi Arabia, brought together a distinguished panel to address global challenges and opportunities in developing trusted digital identity systems. Moderated by Shivani Thapa, the session featured insights from Bandar Al-Mashari, Emma Theofelus, Siim Sikkut, Sangbo Kim, Kurt Lindqvist, and other notable speakers.

The discussion focused on building frameworks for trusted digital identities, emphasising their role as critical infrastructure for digital transformation. Bandar Al-Mashari, Saudi Arabia’s Assistant Minister of Interior for Technology Affairs, highlighted the Kingdom’s innovative efforts, while Namibia’s Minister of Information, Emma Theofelus, stressed the importance of inclusivity and addressing regional needs.

The panellists examined the balance between enhanced security and privacy protection. Siim Sikkut, Managing Partner of Digital Nations, underscored the value of independent oversight and core principles to maintain trust. Emerging technologies like blockchain, biometrics, and artificial intelligence were recognised for their potential impact, though caution was urged against uncritical adoption.

Barriers to international cooperation, including the digital divide, infrastructure gaps, and the complexity of global systems, were addressed. Sangbo Kim of the World Bank shared insights on fostering collaboration across regions, while Kurt Lindqvist, CEO of ICANN, highlighted the need for a shared vision in navigating differing national priorities.

Speakers advocated for a phased approach to implementation, allowing countries to progress at their own pace while drawing lessons from successful initiatives, such as those in international travel and telecommunications. The call for collaboration was echoed by Prince Bandar bin Abdullah Al-Mashari, who emphasised Saudi Arabia’s commitment to advancing global solutions.

The discussion concluded on an optimistic note, with participants, including Fatma, articulating a shared vision of digital identity as a tool for accelerating inclusion and fostering global trust. The panellists agreed that a unified approach, guided by innovation and respect for privacy, is vital to building secure and effective digital identity systems worldwide.


Media giant Warner Bros realigns operations

Warner Bros Discovery has announced a significant restructuring of its operations, separating its traditional cable TV businesses like CNN and TNT from its growing streaming platforms such as Max and Discovery+. This move is aimed at adapting to the ongoing decline in cable subscriptions while positioning itself for potential sales or industry mergers.

The company’s shares rose over 15% following the announcement, with analysts noting that the split could make its linear TV networks more attractive to buyers. The restructuring mirrors similar efforts by media giants like Comcast, which recently launched a spin-off for its cable assets. Despite this, Warner Bros Discovery’s $40 billion debt remains a challenge in attracting buyers for its cable unit.

Streaming and studio operations, now placed in a separate division, continue to show promise, with growing returns on investment. CEO David Zaslav, known for orchestrating major deals, hinted at further industry consolidation in the near future. Warner Bros Discovery’s new structure is widely seen as a proactive measure to navigate a shifting media landscape.

New FDIC system targets fintech risks

The Federal Deposit Insurance Corporation (FDIC) has begun directly monitoring financial technology (fintech) companies partnering with banks across the United States. The new system aims to enhance oversight by identifying risks associated with these partnerships before they threaten banking stability. It also allows regulators to maintain consistent supervision, even if fintech firms change their banking partners.

The move comes amid heightened scrutiny of bank-fintech collaborations, following the collapse of Synapse Financial Technologies in April. The startup, backed by Andreessen Horowitz, had provided critical services enabling fintech firms to offer financial products via FDIC-insured banks. Its failure left thousands of users without access to their funds and brought significant regulatory attention to the sector.

In response, the FDIC has proposed strengthening bank record-keeping requirements and expanding the definition of brokered deposits to include fintech-related funds. While these rules are not expected to take effect before 2025, the new monitoring framework provides examiners with an additional tool to safeguard financial stability without waiting for legislative approval.

FDIC Chairman Martin Gruenberg, who is stepping down in January, has played a central role in developing this regulatory approach. His leadership has been pivotal in navigating the challenges posed by the evolving relationship between traditional banking and fintech startups.

Mystery of David Mayer and ChatGPT resolved

Social media buzzed over the weekend as ChatGPT, the popular AI chatbot, mysteriously refused to generate the name ‘David Mayer.’ Users reported responses halting mid-sentence or error messages when attempting to input the name, sparking widespread speculation about Mayer’s identity and theories that he might have requested privacy through legal means.

OpenAI, the chatbot’s developer, attributed the issue to a system glitch. A spokesperson clarified, ‘One of our tools mistakenly flagged this name, which shouldn’t have happened. We’re working on a fix.’ The company has since resolved the glitch for ‘David Mayer,’ but other names continue to trigger errors.
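A tool that 'mistakenly flags' a name could plausibly behave like a blocklist check applied to streamed output, which would explain the mid-sentence halts users reported. The sketch below is a purely hypothetical illustration, assuming a simple substring blocklist; the names, function, and behaviour are the author's assumptions, not OpenAI's actual implementation.

```python
# Hypothetical sketch: a post-generation blocklist filter applied to a
# token stream. FLAGGED_NAMES and stream_with_filter are illustrative
# assumptions, not OpenAI's real safety tooling.

FLAGGED_NAMES = {"david mayer"}  # hypothetical blocklist entry

def stream_with_filter(tokens):
    """Yield tokens until the accumulated output contains a flagged
    name, then stop abruptly -- mimicking the observed mid-sentence halt."""
    output = []
    for token in tokens:
        output.append(token)
        text = "".join(output).lower()
        if any(name in text for name in FLAGGED_NAMES):
            return  # generation halts; the flagged token is never emitted
        yield token

reply = list(stream_with_filter(["The ", "name ", "is ", "David ", "Mayer", "."]))
# reply == ["The ", "name ", "is ", "David "] -- the surname never appears
```

A filter of this kind operates outside the model itself, which is consistent with OpenAI's statement that a 'tool' flagged the name rather than the model refusing it.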

Conspiracy theories emerged online, with some suggesting a link to David Mayer de Rothschild, who denied involvement, and others speculating connections to a deceased academic with ties to a security list. Experts noted the potential relevance of GDPR’s ‘right to be forgotten’ privacy rules, which allow individuals to request the removal of their data from digital platforms.

However, privacy specialists highlighted AI systems’ challenges in fully erasing personal data due to their reliance on massive datasets from public sources. While the incident has drawn attention to the complexities of AI data handling and privacy compliance, OpenAI remains tight-lipped on whether the glitch stemmed from a deletion request under GDPR guidelines. The situation underscores the tension between advancing AI capabilities and safeguarding individual privacy.

Turkey ends Meta investigation over Threads and Instagram data sharing

Turkey’s competition board has concluded its investigation into Meta Platforms regarding data-sharing practices between Threads and Instagram. The inquiry, launched last year over potential competition law violations, ended after Meta addressed concerns through commitments deemed satisfactory by the authority.

Meta pledged that Threads users in Turkey will be able to access the platform without needing an Instagram account, once Threads becomes available again. Additionally, the company assured that data from Threads accounts will not be merged with Instagram unless users explicitly choose to link their profiles.

In April, Meta temporarily suspended Threads in Turkey to comply with an interim order from regulators. The resolution paves the way for the app’s reinstatement while easing concerns over anti-competitive practices.

Cate Blanchett critiques AI’s societal risks

Cate Blanchett has voiced her concerns about the societal implications of AI, describing the threat as ‘very real.’ In an interview with the BBC, the Australian actress shared her scepticism about advancements like driverless cars and AI’s potential to replicate human voices, noting the broader risks for humanity. Blanchett emphasised that AI could replace anyone, not just actors, and criticised some technological advancements as ‘experimentation for its own sake.’

While promoting Rumours, her new apocalyptic comedy film, Blanchett described the plot as reflective of modern anxieties. The film, directed by Guy Maddin, portrays world leaders navigating absurd situations, offering both satire and a critique of detachment from reality. Blanchett highlighted how the story reveals the vulnerability and artificiality of political figures once removed from their structures of power.

Maddin shared that his characters emerged from initial disdain but evolved into figures of empathy as the narrative unfolds. Blanchett added that both actors and politicians face infantilisation within their respective systems, highlighting parallels in their perceived disconnection from the real world.

Meta tightens financial ad rules in Australia

Meta Platforms announced stricter regulations for advertisers promoting financial products and services in Australia, aiming to curb online scams. Following an October initiative where Meta removed 8,000 deceptive ‘celeb bait’ ads, the company now requires advertisers to verify beneficiary and payer details, including their Australian Financial Services License number, before running financial ads.

This move is part of Meta’s ongoing efforts to protect Australians from scams involving fake investment schemes using celebrity images. Verified advertisers must also display a ‘Paid for By’ disclaimer, ensuring transparency in financial advertisements.

The updated policy follows a broader regulatory push in Australia, where the government recently abandoned plans to fine internet platforms for spreading misinformation. The crackdown on online platforms is part of a growing effort to assert Australian sovereignty over foreign tech companies, with a federal election looming.

Australia begins trial of teen social media ban

Australia’s government is conducting a world-first trial to enforce its national social media ban for children under 16, focusing on age-checking technology. The trial, set to begin in January and run through March, will involve around 1,200 randomly selected Australians. It will help guide the development of effective age verification methods, as platforms like Meta, X (formerly Twitter), TikTok, and Snapchat must prove they are taking ‘reasonable steps’ to keep minors off their services or face fines of up to A$49.5 million ($32 million).

The trial is overseen by the Age Check Certification Scheme and will test several age-checking techniques, such as video selfies, document uploads for verification, and email cross-checking. Although platforms like YouTube are exempt, the trial is seen as a crucial step for setting a global precedent for online age restrictions, which many countries are now considering due to concerns about youth mental health and privacy.
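Because several techniques are being trialled in parallel, a platform would likely treat them as alternative signals rather than mandatory steps, in line with the government's assurance that no single piece of personal data is required. The sketch below illustrates that idea under stated assumptions: the function name, signal names, and threshold are the author's illustrative choices, not the trial's actual design.

```python
# Hypothetical sketch of chaining the age-check techniques under trial
# (video-selfie age estimation, document verification, email cross-check).
# All names and the threshold are illustrative assumptions.

MIN_AGE = 16  # threshold set by Australia's under-16 ban

def check_age(signals):
    """Return True if any available signal indicates the user is 16+.

    `signals` maps method name -> estimated age in years, or None if the
    user declined that method -- so no single method is compulsory."""
    for method in ("selfie_estimate", "document_age", "email_signal"):
        age = signals.get(method)
        if age is not None and age >= MIN_AGE:
            return True
    return False

# A user who declines the selfie but verifies via document still passes:
assert check_age({"selfie_estimate": None, "document_age": 17}) is True
assert check_age({"selfie_estimate": 14}) is False
```

Treating the methods as interchangeable fallbacks is one way platforms could demonstrate ‘reasonable steps’ without forcing users into a single, privacy-sensitive verification path.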

The trial’s outcomes could influence how other nations approach enforcing age restrictions, despite concerns from some lawmakers and tech companies about privacy violations and free speech. The government has responded by ensuring that no personal data will be required without alternatives. The age-check process could significantly shape global efforts to regulate social media access for children in the coming years.

Australian social media ban sparked by politician’s wife’s call to action

Australia has passed a landmark law banning children under 16 from using social media, following a fast-moving push led by South Australian Premier Peter Malinauskas. The law, which takes effect in November 2025, aims to protect young people from the harmful effects of social media, including mental health issues linked to cyberbullying and body image problems. The bill drew widespread support, with a government survey showing 77% of Australians backing the measure. However, it has sparked significant opposition from tech companies and privacy advocates, who argue that the law is rushed and could push young users to more dangerous parts of the internet.

The push for the national ban gained momentum after Malinauskas’s state-level initiative to restrict social media access for children under 14 in September. This led to a broader federal response, with Prime Minister Anthony Albanese’s government introducing a nationwide version of the policy. The legislation eliminates parental discretion: no child under 16 will be permitted to use social media, and platforms that fail to enforce the rules will face fines. This move contrasts with policies in countries like France and Florida, where minors can access social media with parental permission.

While the law has garnered support from most of Australia’s political leaders, it has faced strong criticism from social media companies like Meta and TikTok. These platforms warn that the law could drive teens to hidden corners of the internet and that the rushed process leaves many questions unanswered. Despite the backlash, the law passed with bipartisan support, and a trial of age-verification technology will begin in January to prepare for its full implementation.

The debate over the law highlights growing concerns worldwide about the impact of social media on young people. Although some critics argue that the law is an overreach, others believe it is a necessary step to protect children from online harm. With the law now in place, Australia has set a precedent that could inspire other countries grappling with similar issues.