Experts at the IGF address the growing threat of misinformation in the digital age

At an Internet Governance Forum panel in Riyadh, Saudi Arabia, titled ‘Navigating the misinformation maze: Strategic cooperation for a trusted digital future’ and moderated by Italian journalist Barbara Carfagna, experts from diverse sectors examined the escalating problem of misinformation and explored solutions for the digital era. Esam Alwagait, Director of the Saudi Data and AI Authority’s National Information Center, identified social media as the primary driver of false information, with algorithms amplifying sensational content.

Natalia Gherman of the UN Counter-Terrorism Committee noted the danger of unmoderated online spaces, while Mohammed Ali Al-Qaed of Bahrain’s Information & eGovernment Authority emphasised the role of influencers in spreading false narratives. Khaled Mansour, a Meta Oversight Board member, pointed out that misinformation can be deadly, stating, ‘Misinformation kills. By spreading misinformation in conflict times from Myanmar to Sudan to Syria, this can be murderous.’

Emerging technologies like AI were highlighted as both culprits and potential solutions. Alwagait and Al-Qaed discussed how AI-driven tools could detect manipulated media and analyse linguistic patterns, while Al-Qaed proposed ‘verify-by-design’ mechanisms to tag information at its source.

However, the panel warned of AI’s ability to generate convincing fake content, fuelling an arms race between creators of misinformation and its detectors. Pearse O’Donohue of the European Commission’s DG CONNECT praised the EU’s Digital Services Act as a regulatory model but questioned, ‘Who moderates the regulator?’ Meanwhile, Mansour cautioned against overreach, advocating for labelling content rather than outright removal to preserve freedom of expression.

Deemah Al-Yahya, Secretary General of the Digital Cooperation Organization, emphasised the importance of global collaboration, supported by Gherman, who called for unified strategies through international forums like the Internet Governance Forum. Al-Qaed suggested regional cooperation could strengthen smaller nations’ influence over tech platforms. The panel also stressed promoting credible information and digital literacy to empower users, with Mansour noting that fostering ‘good information’ is essential to counter misinformation at its root.

The discussion concluded with a consensus on the need for balanced, innovative solutions. Speakers called for collaborative regulatory approaches, advanced fact-checking tools, and initiatives that protect freedom of expression while tackling misinformation’s far-reaching consequences.

All transcripts from the Internet Governance Forum sessions can be found on dig.watch.

Google’s old search format criticised by hotels

Google has revealed that a trial of its traditional search result layout, featuring 10 blue links per page, negatively impacted both users and hotels. The test, conducted in Germany, Belgium, and Estonia, aimed to gauge the format’s viability under new EU digital regulations. The results showed users were less satisfied and took longer to find desired information, with hotel traffic dropping by over 10%.

The test was part of Google’s efforts to align with the EU’s Digital Markets Act, which prohibits favouritism towards its own services. However, the return to the older layout, implemented last month, left hotels at a disadvantage and reduced the ability of users to locate accommodations efficiently. ‘People had to conduct more searches and often gave up without finding what they needed,’ stated Oliver Bethell, Google’s Competition Legal Director.

The trial results come as Google faces mounting pressure from price comparison websites and the European Commission. Over 20 comparison platforms have criticised Google’s compliance proposals, urging EU regulators to impose penalties. Google has indicated it will seek further guidance from the Commission to develop a suitable solution. This tension underscores the challenges tech giants face in balancing business interests with regulatory compliance and user experience, particularly in Europe’s increasingly stringent tech landscape.

Texas launches investigation into tech platforms over child safety

Texas Attorney General Ken Paxton has initiated investigations into more than a dozen technology platforms over concerns about their privacy and safety practices for minors. The platforms under scrutiny include Character.AI, a startup specialising in AI chatbots, along with social media giants like Instagram, Reddit, and Discord.

The investigations aim to determine compliance with two key Texas laws designed to protect children online. The Securing Children Online through Parental Empowerment (SCOPE) Act prohibits digital service providers from sharing or selling minors’ personal information without parental consent and mandates privacy tools for parents. The Texas Data Privacy and Security Act (TDPSA) requires companies to obtain clear consent before collecting or using data from minors.

Concerns over the impact of social media on children have grown significantly. A Harvard study found that major platforms earned an estimated $11 billion in advertising revenue from users under 18 in 2022. Experts, including US Surgeon General Vivek Murthy, have highlighted risks such as poor sleep, body image issues, and low self-esteem among young users, particularly adolescent girls.

Paxton emphasised the importance of enforcing the state’s robust data privacy laws, putting tech companies on notice. While some platforms have introduced tools to enhance teen safety and parental controls, they have not yet commented on the ongoing probes.

SEC reopens investigation into Elon Musk and Neuralink

The US Securities and Exchange Commission (SEC) has reopened its investigation into Neuralink, Elon Musk’s brain-chip startup, according to a letter shared by Musk on X, formerly known as Twitter. The letter, dated 12 December and written by Musk’s attorney Alex Spiro, also revealed that the SEC issued Musk a 48-hour deadline to settle a probe into his $44 billion takeover of Twitter or face charges. The settlement amount remains undisclosed.

Musk’s tumultuous relationship with the SEC has resurfaced amid allegations that he misled investors about Neuralink’s brain implant safety. Despite ongoing investigations, the extent to which the SEC can take action against Musk is uncertain. Musk, who also leads Tesla and SpaceX, is positioned to gain significant political leverage after investing heavily in supporting Donald Trump’s presidential campaign. Trump, in turn, has appointed Musk to a government reform task force, raising questions about potential regulatory leniency toward his ventures.

In the letter, Spiro criticised the SEC’s actions, stating Musk would not be ‘intimidated’ and reserving his legal rights. This marks the latest in a series of clashes between Musk and the SEC, including a 2018 lawsuit over misleading Tesla-related tweets, which Musk settled by paying $20 million and stepping down as Tesla chairman. Both the SEC and Neuralink have yet to comment on the reopened investigation.

Samsung challenges India watchdog over data seizure

Samsung has filed a legal challenge against India’s Competition Commission (CCI), accusing the watchdog of unlawfully detaining employees and seizing data during a 2022 raid connected to an antitrust investigation involving Amazon and Walmart-owned Flipkart. The CCI claims Samsung colluded with the e-commerce giants to launch products exclusively online, a practice it argues violates competition laws.

In its filing with the High Court in the northern city of Chandigarh, Samsung alleged that confidential data was improperly taken from its employees during the raid and requested the return of the material. Samsung has secured an injunction to pause the CCI’s proceedings but seeks a broader ruling to prevent the use of the seized data. The CCI, in turn, has asked the Supreme Court to consolidate similar challenges by Samsung and 22 other parties, arguing that companies are attempting to derail the investigation.

The case stems from findings earlier this year that Amazon, Flipkart, and smartphone companies like Samsung engaged in anti-competitive practices by favouring select sellers and using exclusive product launches. While Amazon and Flipkart deny wrongdoing, brick-and-mortar retailers have long criticised their pricing and market strategies. Samsung, a major smartphone brand in India with a 14% market share, maintains it was wrongly implicated and cooperated only as a third party in the investigation.

New rules aim at fair payments for content in Australia

Australia’s government is set to introduce new rules requiring major tech companies to pay Australian media outlets for news content. Companies such as Meta and Google could face millions in charges if they fail to reach commercial agreements with publishers. The Assistant Treasurer emphasised that the rules aim to foster fair negotiations, with charges applying only to platforms earning over $250 million in Australian revenue.

The proposed regulations follow previous efforts to hold tech firms accountable for news content. Laws passed in 2021 required firms to compensate publishers, leading to temporary disruptions on Meta’s platforms before agreements were reached. However, Meta announced it would end those arrangements by 2024, scaling back its promotion of news globally.

The plan has drawn criticism from tech companies, who argue that most users do not access platforms for news and that publishers willingly share content for exposure. Despite these objections, Australian media organisations, including News Corp, anticipate benefits. The government’s broader efforts to regulate Big Tech include banning under-16s from social media and targeting scams.

Australia’s bold stance continues to set precedents for handling global tech giants, adding to growing international scrutiny. News publishers are optimistic about forming new commercial relationships under the proposed framework.

Australian court fines Kraken operator $5.1 million

Australia’s Federal Court has fined Bit Trade, the local operator of cryptocurrency exchange Kraken, A$8 million ($5.1 million) for unlawfully offering credit facilities to over 1,100 customers. The ruling came after the Australian Securities and Investments Commission (ASIC) filed civil proceedings against the company, accusing it of non-compliance with regulations for its margin trading product.

ASIC revealed that Bit Trade failed to assess whether its margin extensions—a form of credit repayable in digital assets like bitcoin or national currencies—were suitable for customers. This led to combined customer losses exceeding $5 million, while Bit Trade charged over $7 million in fees and interest. The court classified the margin extension product as a credit facility requiring a specific consumer suitability document, which the company had not provided.

In a statement, Kraken expressed disappointment, arguing the ruling could stifle economic growth in Australia. The exchange emphasised its willingness to work with regulators to shape the evolving cryptocurrency framework. The case marks a milestone for ASIC, as it is the first penalty imposed on a company for failing to provide a target market determination for a financial product.

Justice Department pushes for TikTok divestment

The US Justice Department has urged a federal appeals court to reject TikTok’s emergency request to delay a law requiring its Chinese parent company, ByteDance, to divest from the app by 19 January or face a nationwide ban. TikTok argued the law threatens to shut down one of America’s most popular social media platforms, which boasts over 170 million US users, while the Justice Department maintains that continued Chinese ownership poses a national security risk.

While the law would not immediately block users from accessing TikTok, the Justice Department admitted the lack of ongoing support would eventually render the app inoperable. A three-judge appeals court panel recently upheld the divestment requirement, and ByteDance has asked the US Supreme Court to review the case.

The controversy places TikTok’s future in the hands of the incoming presidential administration. President Joe Biden could grant a 90-day extension to the divestment deadline before President-elect Donald Trump, who has vowed to prevent a ban, takes office on January 20. Trump’s stance marks a reversal from his first term, when he unsuccessfully attempted to ban the app.

The law also strengthens the US government’s powers to ban other foreign-owned apps over data security concerns, following a broader trend initiated under Trump, including an earlier attempt to block Tencent-owned WeChat. As legal battles continue, TikTok’s operations in the US hang in the balance.

AI safeguards prove hard to define

Policymakers seeking to regulate AI face an uphill battle as the science evolves faster than safeguards can be devised. Elizabeth Kelly, director of the US Artificial Intelligence Safety Institute, highlighted challenges such as ‘jailbreaks’ that bypass AI security measures and the ease of tampering with digital watermarks meant to identify AI-generated content. Speaking at the Reuters NEXT conference, Kelly acknowledged the difficulty in establishing best practices without clear evidence of their effectiveness.

The US AI Safety Institute, launched under the Biden administration, is collaborating with academic, industry, and civil society partners to address these issues. Kelly emphasised that AI safety transcends political divisions, calling it a ‘fundamentally bipartisan issue’ amid the upcoming transition to Donald Trump’s presidency. The institute recently hosted a global meeting in San Francisco, bringing together safety bodies from 10 countries to develop interoperable tests for AI systems.

Kelly described the gathering as a convergence of technical experts focused on practical solutions rather than typical diplomatic formalities. While the challenges remain significant, the emphasis on global cooperation and expertise offers a promising path forward.

Australian Federal Police leverage AI for investigations

The Australian Federal Police (AFP) is increasingly turning to AI to handle the vast amounts of data it encounters during investigations. With investigations involving an average of 40 terabytes of data, AI has become essential in sifting through information from sources like seized phones, child exploitation referrals, and cyber incidents. Benjamin Lamont, AFP’s manager for technology strategy, emphasised the need for AI given the overwhelming scale of data, stating that AI is crucial to help manage cases, including reviewing massive amounts of video footage and emails.

The AFP is also working on custom AI solutions, including tools for structuring large datasets and identifying potential criminal activity from old mobile phones. One such dataset is a staggering 10 petabytes, while individual phones can hold up to 1 terabyte of data. Lamont pointed out that AI makes these files far easier for officers to process, a task that would otherwise be impossible for human investigators alone. The AFP is also developing AI systems to detect deepfake images and protect officers from graphic content by summarising or modifying such material before it’s viewed.

While the AFP has faced criticism over its use of AI, particularly for using Clearview AI for facial recognition, Lamont acknowledged the need for continuous ethical oversight. The AFP has implemented a responsible technology committee to ensure AI use remains ethical, emphasising the importance of transparency and human oversight in AI-driven decisions.