Belgium's new government, led by Prime Minister Bart De Wever, has announced plans to use AI tools in law enforcement, including facial recognition technology for detecting criminals. The initiative will be overseen by Vanessa Matz, the country's first federal minister for digitalisation, AI, and privacy. The AI policy is set to comply with the EU's AI Act, which prohibits high-risk uses such as real-time facial recognition but allows narrowly defined exceptions for law enforcement under strict safeguards.
Alongside AI applications, the Belgian government aims to combat disinformation by promoting transparency on online platforms and deepening collaboration with tech companies and media. Its approach to digitalisation also includes a long-term strategy to improve telecom infrastructure, focusing on providing ultra-fast internet access to all companies by 2030 and preparing for a potential 6G rollout.
The government has outlined a significant digital strategy that seeks to balance technological advancement with strong privacy and legal protections. As part of this, it is working on expanding camera legislation to enable smarter surveillance applications. These moves form part of broader efforts to strengthen the country's digital capabilities in the coming years.
With Germany’s parliamentary elections just weeks away, lawmakers are warning that authoritarian states, including Russia, are intensifying disinformation efforts to destabilise the country. Authorities are particularly concerned about a Russian campaign, known as Doppelgänger, which has been active since 2022 and aims to undermine Western support for Ukraine. The campaign has been linked to fake social media accounts and misleading content in Germany, France, and the US.
CSU MP Thomas Erndl confirmed that Russia is attempting to influence European elections, including in Germany. He argued that disinformation campaigns are contributing to the rise of right-wing populist parties, such as the AfD, by sowing distrust in state institutions and painting foreigners and refugees as a problem. Erndl emphasised the need for improved defences, including modern technologies like AI to detect disinformation, and greater public awareness and education.
The German Foreign Ministry recently reported the identification of over 50,000 fake X accounts associated with the Doppelgänger campaign. These accounts mimic credible news outlets like Der Spiegel and Welt to spread fabricated articles, amplifying propaganda. Lawmakers stress the need for stronger cooperation within Europe and better tools for intelligence agencies to combat these threats, even suggesting that a shift in focus from privacy to security may be necessary to tackle the issue effectively.
Greens MP Konstantin von Notz highlighted the security risks posed by disinformation campaigns, warning that authoritarian regimes like Russia and China are targeting democratic societies, including Germany. He called for stricter regulation of online platforms, stronger counterintelligence efforts, and increased media literacy to bolster social resilience. As the election date approaches, lawmakers urge both government agencies and the public to remain vigilant against the growing threat of foreign interference.
Australia’s government recently passed laws banning social media access for children under 16, targeting platforms like TikTok, Snapchat, Instagram, Facebook, and X. However, YouTube was granted an exemption, with the government arguing that it serves as a valuable educational tool and is not a ‘core social media application.’ That decision followed input from company executives and educational content creators, who argued that YouTube is essential for learning and information-sharing. While the government claims broad community support for the exemption, some experts believe this undermines the goal of protecting children from harmful online content.
Mental health and extremism experts have raised concerns that YouTube exposes young users to dangerous material, including violent, extremist, and addictive content. Despite being exempted from the ban, YouTube has been criticised for its algorithm, which researchers say can promote far-right ideologies, misogyny, and conspiracy theories to minors. Studies conducted by academics have shown that the platform delivers problematic content within minutes of search queries, including harmful videos on topics like sex, COVID-19, and European history.
To test these claims, Reuters created child accounts and found that searches led to content promoting extremism and hate speech. Although YouTube removed some flagged videos, others remain on the platform. YouTube stated that it is actively working to improve its content moderation systems and that it has removed content violating its policies. However, critics argue that the platform’s algorithm still allows harmful content to thrive, especially among younger users.
The Vatican has urged governments to monitor AI closely, warning of its potential to spread misinformation and destabilise society. A new document, Antiqua et nova (Ancient and New), written by two Vatican departments and approved by Pope Francis, highlights the ethical concerns surrounding AI, particularly its ability to fuel political polarisation and social unrest through fake media.
Pope Francis, who has focused on AI ethics in recent years, emphasised its societal risks in messages to global leaders, including at the World Economic Forum in Davos and the G7 summit in Italy. The pope has repeatedly warned against letting algorithms dictate human destiny, calling for AI to be ethically guided to serve humanity.
The document examines AI’s influence across sectors like labour, healthcare, and education, noting the moral responsibility tied to its use. It stresses that careful regulation is essential to prevent the misuse of AI technologies, as they hold both the promise of progress and the potential for harm.
Brazil's data protection authority, ANPD, has ordered Tools for Humanity (TFH), the company behind the World ID project, to cease offering crypto or financial compensation for biometric data collection. The move comes after an investigation launched in November 2024, with the ANPD citing concerns over the potential influence of financial incentives on individuals' consent to share sensitive biometric data, such as iris scans.
The World ID project, which aims to create a universal digital identity, uses eye-scanning technology developed by TFH. The ANPD’s decision also reflects its concerns over the irreversible nature of biometric data collection and the inability to delete this information once submitted. Under Brazilian law, consent for processing such sensitive data must be freely given and informed, without undue influence.
This is not the first regulatory issue for World ID: Germany's data protection authority also issued corrective measures in December 2024, requiring the project to comply with the EU's General Data Protection Regulation (GDPR). Meanwhile, the value of World Network's native token, WLD, has dropped significantly, falling by over 8% in the past 24 hours and 83% from its peak.
The UK government has demanded urgent action from major social media platforms to remove violent and extremist content following the Southport killings. Home Secretary Yvette Cooper criticised the ease with which Axel Rudakubana, who murdered three children and attempted to kill ten others, accessed an al-Qaeda training manual and other violent material online. She described the availability of such content as “unacceptable” and called for immediate action.
Rudakubana, jailed last week for his crimes, had reportedly used techniques from the manual during the attack and watched graphic footage of a similar incident before carrying it out. While platforms like YouTube and TikTok are expected to comply with the UK's Online Safety Act when it comes into force in March, Cooper argued that companies have a 'moral responsibility' to act now rather than waiting for legal enforcement.
The Southport attack has intensified scrutiny on gaps in counter-terrorism measures and the role of online content in fostering extremism. The government has announced a public inquiry into missed opportunities to intervene, revealing that Rudakubana had been referred to the Prevent programme multiple times. Cooper’s call for immediate action underscores the urgent need to prevent further tragedies linked to online extremism.
Germany’s interior minister, Nancy Faeser, has called on social media companies to take stronger action against disinformation ahead of the federal parliamentary election on 23 February. Faeser urged platforms like YouTube, Facebook, Instagram, X, and TikTok to label AI-manipulated videos, clearly identify political advertising, and ensure compliance with European laws. She also emphasised the need for platforms to report and remove criminal content swiftly, including death threats.
Faeser met with representatives of major tech firms to underline the importance of transparency in algorithms, warning against the risk of online radicalisation, particularly among young people. Her concerns come amidst growing fears of disinformation campaigns, possibly originating from Russia, that could influence the upcoming election. She reiterated that platforms must ensure they do not fuel societal division through unchecked content.
Calls for greater accountability in the tech industry are gaining momentum. At the World Economic Forum in Davos, Spanish Prime Minister Pedro Sánchez criticised social media owners for enabling algorithms that erode democracy and “poison society.” Faeser’s warnings highlight the growing international demand for stronger regulations on social media to safeguard democratic processes.
Google secured an injunction from London’s High Court on Wednesday, preventing the enforcement of Russian legal judgments against the company. The rulings related to lawsuits filed by Russian entities, including Tsargrad TV and RT, over the closure of Google and YouTube accounts. Judge Andrew Henshaw granted the permanent injunction, citing Google’s terms and conditions, which require disputes to be resolved in English courts.
The Russian judgments included severe 'astreinte' penalties, compounding daily fines that grew to astronomical sums. Google's lawyers argued that some fines levied on its Russian subsidiary reached as much as an undecillion roubles, a figure with 36 zeroes. Judge Henshaw noted that the fines far exceeded the entire world's GDP, supporting the court's decision to block their enforcement.
A Google spokesperson expressed satisfaction with the ruling, criticising Russia's legal actions as efforts to restrict information access and penalise compliance with international sanctions. Since 2022, Google has taken measures such as blocking over 1,000 YouTube channels, including state-sponsored news outlets, and suspending monetisation of content promoting Russia's actions in Ukraine.
Pope Francis has called on global leaders to exercise caution in the development of AI, warning it could deepen a ‘crisis of truth’ in society. In a statement read at the World Economic Forum in Davos by Cardinal Peter Turkson, the pontiff acknowledged the potential of AI but emphasised its ethical implications and the risks it poses to humanity’s future. The remarks come as AI becomes a key focus at this year’s summit.
Francis highlighted concerns about AI’s ability to produce outputs nearly indistinguishable from human work, raising questions about its impact on public trust and truth. He urged governments and businesses to maintain strict oversight of AI development to address these challenges effectively. The pope has been vocal on ethical issues surrounding AI in recent years, addressing its implications at high-profile events like the Group of Seven summit in Italy.
The leader of the Catholic Church has personal experience with AI-related controversies. In 2023, a deepfake image of him wearing a white puffer coat went viral, underscoring the risks associated with the misuse of such technologies. Francis has consistently warned against relying on algorithms to shape human destiny, advocating for a more responsible and ethical approach to technological innovation.
UK citizens will soon be able to carry essential documents, such as their passport, driving licence, and birth certificate, in a digital wallet on their smartphones. The plan was unveiled by Peter Kyle, the Secretary of State for Science, Innovation and Technology, as part of a broader initiative to streamline interactions with government services. The digital wallet, set to launch in June, aims to simplify tasks like booking appointments and managing government communications.
Initially, the digital wallet will hold a driving licence and a veteran card, with plans to add other documents such as student loans, vehicle tax, and benefits. The government is also working with the Home Office to include digital passports, although these will continue to exist alongside physical versions. The app will be linked to an individual's ID and could be used for various tasks, such as sharing proof of certification or claiming welfare discounts.
Security and privacy concerns have been addressed, with recovery systems in place for lost phones and strong data protection measures. Kyle emphasised that the app complies with current data laws and that features like facial recognition will enhance security. He also gave assurances that while the system will be convenient for smartphone users, efforts will be made to ensure those without internet access are not left behind.
The technology, developed in the six months since Labour took power, is part of a push to modernise government services. Kyle believes the new digital approach will help create a more efficient and user-friendly relationship between citizens and the state, transforming the public service experience.