Brazil’s data protection authority, ANPD, has ordered Tools for Humanity (TFH), the company behind the World ID project, to cease offering crypto or financial compensation for biometric data collection. The move comes after an investigation launched in November 2023, with the ANPD citing concerns over the potential influence of financial incentives on individuals’ consent to share sensitive biometric data, such as iris scans.
The World ID project, which aims to create a universal digital identity, uses eye-scanning technology developed by TFH. The ANPD’s decision also reflects its concerns over the irreversible nature of biometric data collection and the inability to delete this information once submitted. Under Brazilian law, consent for processing such sensitive data must be freely given and informed, without undue influence.
This is not the first regulatory issue for World ID, as Germany’s data protection authority also issued corrective measures in December 2023, requiring the project to comply with the EU’s General Data Protection Regulation (GDPR). Meanwhile, the value of World Network’s native token, WLD, has dropped significantly, falling by over 8% in the past 24 hours and 83% from its peak in March 2024.
The UK government has demanded urgent action from major social media platforms to remove violent and extremist content following the Southport killings. Home Secretary Yvette Cooper criticised the ease with which Axel Rudakubana, who murdered three children and attempted to kill ten others, accessed an al-Qaeda training manual and other violent material online. She described the availability of such content as “unacceptable” and called for immediate action.
Rudakubana, jailed last week for his crimes, had reportedly used techniques from the manual during the attack and watched graphic footage of a similar incident before carrying it out. While platforms like YouTube and TikTok are expected to comply with the UK’s Online Safety Act when it comes into force in March, Cooper argued that companies have a “moral responsibility” to act now rather than waiting for legal enforcement.
The Southport attack has intensified scrutiny on gaps in counter-terrorism measures and the role of online content in fostering extremism. The government has announced a public inquiry into missed opportunities to intervene, revealing that Rudakubana had been referred to the Prevent programme multiple times. Cooper’s call for immediate action underscores the urgent need to prevent further tragedies linked to online extremism.
Germany’s interior minister, Nancy Faeser, has called on social media companies to take stronger action against disinformation ahead of the federal parliamentary election on 23 February. Faeser urged platforms like YouTube, Facebook, Instagram, X, and TikTok to label AI-manipulated videos, clearly identify political advertising, and ensure compliance with European laws. She also emphasised the need for platforms to report and remove criminal content swiftly, including death threats.
Faeser met with representatives of major tech firms to underline the importance of transparency in algorithms, warning against the risk of online radicalisation, particularly among young people. Her concerns come amidst growing fears of disinformation campaigns, possibly originating from Russia, that could influence the upcoming election. She reiterated that platforms must ensure they do not fuel societal division through unchecked content.
Calls for greater accountability in the tech industry are gaining momentum. At the World Economic Forum in Davos, Spanish Prime Minister Pedro Sánchez criticised social media owners for enabling algorithms that erode democracy and “poison society.” Faeser’s warnings highlight the growing international demand for stronger regulations on social media to safeguard democratic processes.
Google secured an injunction from London’s High Court on Wednesday, preventing the enforcement of Russian legal judgments against the company. The rulings related to lawsuits filed by Russian entities, including Tsargrad TV and RT, over the closure of Google and YouTube accounts. Judge Andrew Henshaw granted the permanent injunction, citing Google’s terms and conditions, which require disputes to be resolved in English courts.
The Russian judgments included severe ‘astreinte penalties,’ which increased daily and amounted to astronomical sums. Google’s lawyers argued that some fines levied on its Russian subsidiary reached numbers as large as an undecillion roubles—a figure with 36 zeroes. Judge Henshaw highlighted that the fines far exceeded the global GDP, supporting the court’s decision to block their enforcement.
A Google spokesperson expressed satisfaction with the ruling, criticising Russia’s legal actions as efforts to restrict information access and penalise compliance with international sanctions. Since 2022, Google has taken measures such as blocking over 1,000 YouTube channels, including state-sponsored news outlets, and suspending monetisation of content promoting Russia’s actions in Ukraine.
Pope Francis has called on global leaders to exercise caution in the development of AI, warning it could deepen a ‘crisis of truth’ in society. In a statement read at the World Economic Forum in Davos by Cardinal Peter Turkson, the pontiff acknowledged the potential of AI but emphasised its ethical implications and the risks it poses to humanity’s future. The remarks come as AI becomes a key focus at this year’s summit.
Francis highlighted concerns about AI’s ability to produce outputs nearly indistinguishable from human work, raising questions about its impact on public trust and truth. He urged governments and businesses to maintain strict oversight of AI development to address these challenges effectively. The pope has been vocal on ethical issues surrounding AI in recent years, addressing its implications at high-profile events like the Group of Seven summit in Italy.
The leader of the Catholic Church has personal experience with AI-related controversies. In early 2023, a deepfake image of him wearing a white puffer coat went viral, underscoring the risks associated with the misuse of such technologies. Francis has consistently warned against relying on algorithms to shape human destiny, advocating for a more responsible and ethical approach to technological innovation.
UK citizens will soon be able to carry essential documents, such as their passport, driving licence, and birth certificate, in a digital wallet on their smartphones. This plan was unveiled by Peter Kyle, the Secretary of State for Science, Innovation and Technology, as part of a broader initiative to streamline interactions with government services. The digital wallet, set to launch in June, aims to simplify tasks like booking appointments and managing government communications.
Initially, the digital wallet will hold a driving licence and a veteran card, with plans to add other documents like student loans, vehicle tax, and benefits. The government is also working with the Home Office to include digital passports, although these will still exist alongside physical versions. The app will be linked to an individual’s ID and could be used for various tasks, such as sharing certification or claiming welfare discounts.
Security and privacy concerns have been addressed, with recovery systems in place for lost phones and strong data protection measures. Kyle emphasised that the app complies with current data laws and that features such as facial recognition would enhance security. He also offered reassurance that while the system will be convenient for smartphone users, efforts will be made to ensure those without internet access aren’t left behind.
The technology, developed in the six months since Labour took power, is part of a push to modernise government services. Kyle believes the new digital approach will help create a more efficient and user-friendly relationship between citizens and the state, transforming the public service experience.
China’s foreign ministry stated on Monday that companies should make independent decisions regarding their business operations and agreements. The remarks came in response to United States President-elect Donald Trump’s proposal requiring 50% US ownership of TikTok.
The proposed ownership demand has reignited tensions over the popular social media app, owned by Chinese company ByteDance, as US officials continue to express concerns over national security and data privacy. Chinese officials have consistently emphasised the importance of allowing businesses to operate without undue government interference.
TikTok, which boasts millions of users worldwide, has faced scrutiny in several countries over its links to China. The foreign ministry’s statement highlights Beijing’s stance that such matters should remain in the hands of corporations rather than being dictated by political decisions.
A new poll by the Allensbach Institute reveals that Germans who rely on TikTok for news are less likely to view China as a dictatorship, criticise Russia’s invasion of Ukraine, or trust vaccines compared to consumers of traditional media. The findings suggest that the platform’s information ecosystem could contribute to scepticism about widely accepted narratives and amplify conspiracy theories. Among surveyed groups, TikTok users exhibited levels of distrust in line with users of X, formerly Twitter.
The study, commissioned by a foundation affiliated with Germany’s Free Democrats, comes amid ongoing US debates over the potential national security risks posed by the Chinese-owned app. The research highlights how young Germans, who make up TikTok’s largest user base, are more inclined to support the far-right Alternative for Germany (AfD) party, which has surged in popularity ahead of Germany’s upcoming election. By contrast, consumers of traditional media were significantly more supportive of Ukraine and critical of Russian aggression.
Concerns about misinformation on platforms like TikTok are echoed by researchers, who warn that foreign powers, particularly Russia, exploit social media to influence public opinion. The poll found that while 57% of newspaper readers believed China to be a dictatorship, only 28.1% of TikTok users shared the same view. Additionally, TikTok users were less likely to believe that China and Russia disseminate false information, while being more suspicious of their own government. Calls for action to address misinformation underscore the platform’s potential impact on younger, more impressionable audiences.
Spanish Labour Minister and Deputy Prime Minister Yolanda Díaz announced her decision to leave Elon Musk’s social media platform X, citing concerns over its promotion of xenophobia and far-right ideologies. In a TV interview, Díaz criticised Musk’s behaviour during events linked to Donald Trump’s inauguration, as well as his recent speeches and gestures, which some viewed as controversial.
Díaz’s departure follows backlash against Musk for raising his arm in a gesture at an inauguration-related event. While critics compared it to a Nazi salute, the Anti-Defamation League dismissed the claim, calling it an awkward moment of enthusiasm. Musk himself rejected the criticism as baseless.
The Spanish minister said her decision extends to personal and political posts and noted that members of her left-wing Sumar party would also leave the platform. This move aligns with other recent departures, including Germany’s Defence and Foreign Ministries, which cited dissatisfaction with X’s direction, joining universities in Germany and the UK in distancing themselves from the platform.
Major tech companies, including Meta’s Facebook, Elon Musk’s X, YouTube, and TikTok, have committed to tackling online hate speech through a revised code of conduct now linked to the European Union’s Digital Services Act (DSA). Announced Monday by the European Commission, the updated agreement also includes platforms like LinkedIn, Instagram, Snapchat, and Twitch, expanding the coalition originally formed in 2016. The move reinforces the EU’s stance against illegal hate speech, both online and offline, according to EU tech commissioner Henna Virkkunen.
Under the revised code, platforms must allow not-for-profit organisations or public entities to monitor how they handle hate speech reports and ensure at least 66% of flagged cases are reviewed within 24 hours. Companies have also pledged to use automated tools to detect and reduce hateful content while disclosing how recommendation algorithms influence the spread of such material.
Additionally, participating platforms will provide detailed, country-specific data on hate speech incidents categorised by factors like race, religion, gender identity, and sexual orientation. Compliance with these measures will play a critical role in regulators’ enforcement of the DSA, a cornerstone of the EU’s strategy to combat illegal and harmful content online.