Serie A has partnered with Meta to combat illegal live streaming of football matches, aiming to protect its broadcasting rights. Under the agreement, Serie A will gain access to Meta’s tools for real-time detection and swift removal of unauthorised streams on Facebook and Instagram.
Broadcasting revenue remains vital for Serie A clubs, including Inter Milan and Juventus, with €4.5 billion secured through deals with DAZN and Sky until 2029. The league’s CEO urged other platforms to follow Meta’s lead in fighting piracy.
Italian authorities have ramped up anti-piracy measures, passing laws that enable swift takedowns of illegal streams. Earlier this month, police dismantled a network with 22 million users, highlighting the scale of the issue.
The 19th Internet Governance Forum (IGF 2024) in Riyadh, Saudi Arabia, brought together a distinguished panel to address global challenges and opportunities in developing trusted digital identity systems. Moderated by Shivani Thapa, the session featured insights from Bandar Al-Mashari, Emma Theofelus, Siim Sikkut, Sangbo Kim, Kurt Lindqvist, and other notable speakers.
The discussion focused on building frameworks for trusted digital identities, emphasising their role as critical infrastructure for digital transformation. Bandar Al-Mashari, Saudi Arabia’s Assistant Minister of Interior for Technology Affairs, highlighted the Kingdom’s innovative efforts, while Namibia’s Minister of Information, Emma Theofelus, stressed the importance of inclusivity and addressing regional needs.
The panellists examined the balance between enhanced security and privacy protection. Siim Sikkut, Managing Partner of Digital Nations, underscored the value of independent oversight and core principles to maintain trust. Emerging technologies like blockchain, biometrics, and artificial intelligence were recognised for their potential impact, though caution was urged against uncritical adoption.
Barriers to international cooperation, including the digital divide, infrastructure gaps, and the complexity of global systems, were addressed. Sangbo Kim of the World Bank shared insights on fostering collaboration across regions, while Kurt Lindqvist, CEO of ICANN, highlighted the need for a shared vision in navigating differing national priorities.
Speakers advocated for a phased approach to implementation, allowing countries to progress at their own pace while drawing lessons from successful initiatives, such as those in international travel and telecommunications. The call for collaboration was echoed by Prince Bandar bin Abdullah Al-Mashari, who emphasised Saudi Arabia’s commitment to advancing global solutions.
The discussion concluded on an optimistic note, with participants, including Fatma, echoing a shared vision of digital identity as a tool for accelerating inclusion and fostering global trust. The panellists agreed that a unified approach, guided by innovation and respect for privacy, is vital to building secure and effective digital identity systems worldwide.
In an Internet Governance Forum panel in Riyadh, Saudi Arabia, titled ‘Navigating the misinformation maze: Strategic cooperation for a trusted digital future’, moderated by Italian journalist Barbara Carfagna, experts from diverse sectors examined the escalating problem of misinformation and explored solutions for the digital era. Esam Alwagait, Director of the Saudi Data and AI Authority’s National Information Center, identified social media as the primary driver of false information, with algorithms amplifying sensational content.
Natalia Gherman of the UN Counter-Terrorism Committee noted the danger of unmoderated online spaces, while Mohammed Ali Al-Qaed of Bahrain’s Information and Government Authority emphasised the role of influencers in spreading false narratives. Khaled Mansour, a Meta Oversight Board member, pointed out that misinformation can be deadly, stating, ‘Misinformation kills. By spreading misinformation in conflict times from Myanmar to Sudan to Syria, this can be murderous.’
Emerging technologies like AI were highlighted as both culprits and potential solutions. Alwagait and Al-Qaed discussed how AI-driven tools could detect manipulated media and analyse linguistic patterns, while Al-Qaed proposed ‘verify-by-design’ mechanisms to tag information at its source.
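Al-Qaed’s ‘verify-by-design’ idea amounts to attaching a provenance tag to content at its source so that downstream platforms can check it has not been altered. The sketch below is an illustrative assumption of how such a tag could work (the key, tag format, and function names are ours, not any proposed standard), using a hash plus an HMAC signature:

```python
import hashlib
import hmac

# Placeholder signing key held by the original publisher (illustrative only).
PUBLISHER_KEY = b"publisher-secret-key"

def tag_content(text: str) -> dict:
    """Attach a provenance tag (content hash + HMAC signature) at the source."""
    digest = hashlib.sha256(text.encode()).hexdigest()
    signature = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"content": text, "sha256": digest, "signature": signature}

def verify_tag(tagged: dict) -> bool:
    """Recompute hash and signature; any edit to the content fails the check."""
    digest = hashlib.sha256(tagged["content"].encode()).hexdigest()
    expected = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == tagged["sha256"] and hmac.compare_digest(expected, tagged["signature"])

original = tag_content("Official statement issued on 12 December.")
print(verify_tag(original))  # True: untouched content verifies

tampered = dict(original, content="Doctored statement issued on 12 December.")
print(verify_tag(tampered))  # False: the edit breaks the tag
```

A real scheme would use public-key signatures rather than a shared secret, but the principle is the same: tampering becomes detectable wherever the content travels.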
However, the panel warned of AI’s ability to generate convincing fake content, fuelling an arms race between creators of misinformation and its detectors. Pearse O’Donohue of the European Commission’s DG CONNECT praised the EU’s Digital Services Act as a regulatory model but questioned, ‘Who moderates the regulator?’ Meanwhile, Mansour cautioned against overreach, advocating for labelling content rather than outright removal to preserve freedom of expression.
Deemah Al-Yahya, Secretary General of the Digital Cooperation Organization, emphasised the importance of global collaboration, supported by Gherman, who called for unified strategies through international forums like the Internet Governance Forum. Al-Qaed suggested regional cooperation could strengthen smaller nations’ influence over tech platforms. The panel also stressed promoting credible information and digital literacy to empower users, with Mansour noting that fostering ‘good information’ is essential to counter misinformation at its root.
The discussion concluded with a consensus on the need for balanced, innovative solutions. Speakers called for collaborative regulatory approaches, advanced fact-checking tools, and initiatives that protect freedom of expression while tackling misinformation’s far-reaching consequences.
Texas Attorney General Ken Paxton has initiated investigations into more than a dozen technology platforms over concerns about their privacy and safety practices for minors. The platforms under scrutiny include Character.AI, a startup specialising in AI chatbots, along with social media giants like Instagram, Reddit, and Discord.
The investigations aim to determine compliance with two key Texas laws designed to protect children online. The Securing Children Online through Parental Empowerment (SCOPE) Act prohibits digital service providers from sharing or selling minors’ personal information without parental consent and mandates privacy tools for parents. The Texas Data Privacy and Security Act (TDPSA) requires companies to obtain clear consent before collecting or using data from minors.
Concerns over the impact of social media on children have grown significantly. A Harvard study found that major platforms earned an estimated $11 billion in advertising revenue from users under 18 in 2022. Experts, including US Surgeon General Vivek Murthy, have highlighted risks such as poor sleep, body image issues, and low self-esteem among young users, particularly adolescent girls.
Paxton emphasised the importance of enforcing the state’s robust data privacy laws, putting tech companies on notice. While some platforms have introduced tools to enhance teen safety and parental controls, they have not yet commented on the ongoing probes.
BeReal, the selfie-sharing app acquired by French mobile games publisher Voodoo earlier this year, is under scrutiny for allegedly violating European data protection rules. A privacy complaint filed by Noyb, a European privacy rights organisation, accuses the app of using manipulative ‘dark patterns’ to coerce users into consenting to ad tracking, a tactic that may breach the General Data Protection Regulation (GDPR).
The controversy centres on a consent banner introduced in July 2024, which appears to offer users a straightforward choice to accept or refuse tracking. However, Noyb argues that users who decline tracking face daily pop-ups when they try to post, while those who consent are spared further interruptions. This practice, Noyb asserts, pressures users into compliance, undermining the GDPR’s requirement that consent be ‘freely given.’
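The asymmetry Noyb describes can be reduced to a few lines of logic. The following is an illustrative sketch of that pattern (our simplification, not BeReal’s actual code): only a refusal triggers repeated prompts, so the path of least resistance is to consent.

```python
class ConsentFlow:
    """Toy model of the asymmetric consent flow described in the complaint."""

    def __init__(self):
        self.consented = None  # None = banner not yet answered

    def answer_banner(self, accept: bool):
        self.consented = accept

    def prompts_before_posting(self) -> bool:
        # The alleged dark pattern: only users who refused are re-prompted
        # each day; users who accepted are never interrupted again.
        return self.consented is False

accepting_user = ConsentFlow()
accepting_user.answer_banner(accept=True)
print(accepting_user.prompts_before_posting())  # False: no interruptions

refusing_user = ConsentFlow()
refusing_user.answer_banner(accept=False)
print(refusing_user.prompts_before_posting())  # True: nagged on every post
```

Under the GDPR’s ‘freely given’ standard, the argument is that refusing must not be made costlier than accepting, which is exactly what this asymmetry does.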
The complaint has been filed with France’s data protection authority, CNIL, and demands that BeReal revise its consent process to comply with GDPR. It also calls for any improperly obtained data to be deleted and suggests a fine for the alleged violations. BeReal’s parent company, Voodoo, has yet to comment on the complaint.
This case highlights growing concerns over dark patterns in social media apps, with regulators emphasising the need for fair and transparent consent mechanisms in line with user privacy rights.
Lawmakers have called for urgent measures to strengthen US telecommunications security following a massive cyberattack linked to China. The hacking campaign, referred to as Salt Typhoon, targeted American telecom companies, compromising vast amounts of metadata and call records. Federal agencies have briefed Congress on the incident, which officials say could be the largest telecom breach in US history.
Senator Ben Ray Luján described the hack as a wake-up call, urging the full implementation of federal recommendations to secure networks. Senator Ted Cruz warned of future threats, emphasising the need to close vulnerabilities in critical infrastructure. Debate also surfaced over the role of offensive cybersecurity measures, with Senator Dan Sullivan questioning whether deterrence efforts are adequate.
The White House reported that at least eight telecommunications firms were affected, with significant data theft. In response, Federal Communications Commission Chairwoman Jessica Rosenworcel proposed annual cybersecurity certifications for telecom companies. Efforts to replace insecure Chinese-made equipment in US networks continue, but funding shortfalls have hampered progress.
China has dismissed the allegations, claiming opposition to all forms of cybercrime. However, US officials have cited evidence of data theft involving companies like Verizon, AT&T, and Lumen. Congress is set to vote on a defence bill allocating $3.1 billion to remove and replace vulnerable telecom hardware.
Policymakers seeking to regulate AI face an uphill battle as the science evolves faster than safeguards can be devised. Elizabeth Kelly, director of the US Artificial Intelligence Safety Institute, highlighted challenges such as “jailbreaks” that bypass AI security measures and the ease of tampering with digital watermarks meant to identify AI-generated content. Speaking at the Reuters NEXT conference, Kelly acknowledged the difficulty in establishing best practices without clear evidence of their effectiveness.
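The fragility of watermarking that Kelly alludes to is easy to demonstrate on a toy scheme. The example below uses a naive zero-width-character watermark (an assumed illustration, not any vendor’s actual method); trivial text normalisation removes it without changing the visible content:

```python
ZW = "\u200b"  # zero-width space used as a hidden marker

def watermark(text: str) -> str:
    # Hide a zero-width space between words (a deliberately naive scheme).
    return (" " + ZW).join(text.split(" "))

def is_watermarked(text: str) -> bool:
    return ZW in text

def launder(text: str) -> str:
    # Trivial 'tampering': strip non-printing characters.
    return text.replace(ZW, "")

marked = watermark("generated by a language model")
print(is_watermarked(marked))           # True
print(is_watermarked(launder(marked)))  # False: watermark gone, text intact
```

Production watermarks for AI-generated text are statistical rather than character-based, but they face the same core problem: paraphrasing or re-encoding the content can wash the signal out.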
The US AI Safety Institute, launched under the Biden administration, is collaborating with academic, industry, and civil society partners to address these issues. Kelly emphasised that AI safety transcends political divisions, calling it a “fundamentally bipartisan issue” amid the upcoming transition to Donald Trump’s presidency. The institute recently hosted a global meeting in San Francisco, bringing together safety bodies from 10 countries to develop interoperable tests for AI systems.
Kelly described the gathering as a convergence of technical experts focused on practical solutions rather than typical diplomatic formalities. While the challenges remain significant, the emphasis on global cooperation and expertise offers a promising path forward.
The Australian Federal Police (AFP) is increasingly turning to AI to handle the vast amounts of data it encounters during investigations. With a typical investigation involving around 40 terabytes of data, AI has become essential for sifting through information from sources like seized phones, child exploitation referrals, and cyber incidents. Benjamin Lamont, AFP’s manager for technology strategy, emphasised the need for AI given the overwhelming scale of the data, stating that it is crucial for managing cases, including reviewing massive volumes of video footage and email.
The AFP is also working on custom AI solutions, including tools for structuring large datasets and identifying potential criminal activity from old mobile phones. One such dataset is a staggering 10 petabytes, while individual phones can hold up to 1 terabyte of data. Lamont pointed out that AI plays a crucial role in making these files easier for officers to process, which would otherwise be an impossible task for human investigators alone. The AFP is also developing AI systems to detect deepfake images and protect officers from graphic content by summarising or modifying such material before it’s viewed.
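The triage idea behind this kind of tooling can be sketched simply. The example below is a toy illustration (our assumption, not the AFP’s actual system): rank files in a large seizure by how many investigation-relevant terms they contain, so officers review the most promising items first.

```python
from collections import Counter

# Illustrative relevance terms; a real system would use trained models.
KEYWORDS = {"transfer", "account", "meet", "package"}

def score(document: str) -> int:
    """Count occurrences of relevance terms in a document."""
    words = Counter(document.lower().split())
    return sum(words[k] for k in KEYWORDS)

def triage(documents: dict, top: int = 2) -> list:
    """Return the `top` file names ranked by keyword relevance."""
    ranked = sorted(documents, key=lambda name: score(documents[name]), reverse=True)
    return ranked[:top]

seizure = {
    "notes.txt": "meet at noon, package ready, confirm transfer",
    "recipes.txt": "add flour and sugar, bake for an hour",
    "ledger.txt": "transfer to account 44, second transfer pending",
}
print(triage(seizure))  # ['notes.txt', 'ledger.txt']
```

Even this crude ranking surfaces the two relevant files ahead of the irrelevant one; at petabyte scale, the same prioritisation is what makes human review feasible at all.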
While the AFP has faced criticism over its use of AI, particularly for using Clearview AI for facial recognition, Lamont acknowledged the need for continuous ethical oversight. The AFP has implemented a responsible technology committee to ensure AI use remains ethical, emphasising the importance of transparency and human oversight in AI-driven decisions.
The Swedish government is exploring age restrictions on social media platforms to combat the rising problem of gangs recruiting children online for violent crimes. Officials warn that platforms like TikTok and Snapchat are being used to lure minors—some as young as 11—into carrying out bombings and shootings, contributing to Sweden’s status as the European country with the highest per capita rate of deadly shootings. Justice Minister Gunnar Strömmer emphasised the seriousness of the issue and urged social media companies to take concrete action.
Swedish police report that the number of children under 15 involved in planning murders has tripled compared to last year, highlighting the urgency of the situation. Education Minister Johan Pehrson noted the government’s interest in measures such as Australia’s recent ban on social media for children under 16, stating that no option is off the table. Officials also expressed frustration at the slow progress by tech companies in curbing harmful content.
Representatives from platforms like TikTok, Meta, and Google attended a recent Nordic meeting to address the issue, pledging to help combat online recruitment. However, Telegram and Signal were notably absent. The government has warned that stronger regulations could follow if the tech industry fails to deliver meaningful results.
Chinese drone manufacturers DJI and Autel Robotics face potential bans in the US under a proposed military bill. The legislation requires a national security review within a year to assess risks posed by their drones. If no review occurs, the companies will automatically join the Federal Communications Commission’s ‘Covered List,’ effectively blocking the sale of new models.
DJI, the world’s largest drone producer, claims the process is unfair, citing extensive security audits and enhanced privacy features. Autel Robotics, also impacted by the proposal, has previously been flagged for investigation over national security concerns.
US lawmakers remain concerned about potential surveillance risks and data vulnerabilities linked to Chinese drones. DJI has refuted these claims, emphasising that no forced labour is involved in its production, despite customs citing related concerns to block imports.
The controversy reflects escalating tensions in US-China relations, particularly in technology and national security domains. The outcome of the proposed bill could reshape the landscape of the commercial drone market in the United States.