India bans use of AI tools in government offices

India's finance ministry has issued an advisory urging employees to refrain from using AI tools such as ChatGPT and DeepSeek for official tasks, citing risks to the confidentiality of government data. The directive, dated January 29, warns that AI apps on office devices could jeopardise the security of sensitive documents and information.

This move comes amid similar actions taken by other countries such as Australia and Italy, which have restricted the use of DeepSeek due to data security concerns. The advisory surfaced just ahead of OpenAI CEO Sam Altman’s visit to India, where he is scheduled to meet with the IT minister.

Representatives from India’s finance ministry, OpenAI, and DeepSeek have yet to comment on the matter. It remains unclear whether other Indian ministries have implemented similar measures.

EU bans AI tracking of workers’ emotions and manipulative online tactics

The European Commission has unveiled new guidelines restricting how AI can be used in workplaces and online services. Employers will be prohibited from using AI to monitor workers' emotions, while websites will be banned from using AI-driven techniques that manipulate users into spending money. These measures are part of the EU's Artificial Intelligence Act, which takes full effect in 2026, though some rules, including the ban on certain practices, have applied since February 2025.

The AI Act also prohibits social scoring based on unrelated personal data, AI-enabled exploitation of vulnerable users, and predictive policing based solely on profiling. AI-powered facial recognition CCTV for law enforcement will be permitted only under strict conditions. The EU has given member states until August to designate authorities responsible for enforcing these rules, with breaches potentially leading to fines of up to 7% of a company's global revenue.

Europe's approach to AI regulation is significantly stricter than that of the United States, where compliance is voluntary, and contrasts with China's model, which prioritises state control. The guidelines aim to provide clarity for businesses and enforcement agencies while ensuring AI is used ethically and responsibly across the region.

Belgium plans AI use for law enforcement and telecom strategy

Belgium's new government, led by Prime Minister Bart De Wever, has announced plans to use AI tools in law enforcement, including facial recognition technology for detecting criminals. The initiative will be overseen by Vanessa Matz, the country's first federal minister for digitalisation, AI, and privacy. The AI policy is set to comply with the EU's AI Act, which prohibits real-time facial recognition in public spaces but allows narrow exceptions for law enforcement under strict safeguards.

Alongside AI applications, the Belgian government also aims to combat disinformation by promoting transparency in online platforms and increasing collaboration with tech companies and media. Its approach to digitalisation further includes a long-term strategy to improve telecom infrastructure, focusing on providing ultra-fast internet access to all companies by 2030 and preparing for potential 6G rollouts.

The government has outlined a significant digital strategy that seeks to balance technological advancements with strong privacy and legal protections. As part of this, they are working on expanding camera legislation for smarter surveillance applications. These moves are part of broader efforts to strengthen the country’s digital capabilities in the coming years.

German authorities on alert for election disinformation

With Germany’s parliamentary elections just weeks away, lawmakers are warning that authoritarian states, including Russia, are intensifying disinformation efforts to destabilise the country. Authorities are particularly concerned about a Russian campaign, known as Doppelgänger, which has been active since 2022 and aims to undermine Western support for Ukraine. The campaign has been linked to fake social media accounts and misleading content in Germany, France, and the US.

CSU MP Thomas Erndl confirmed that Russia is attempting to influence European elections, including in Germany. He argued that disinformation campaigns are contributing to the rise of right-wing populist parties, such as the AfD, by sowing distrust in state institutions and painting foreigners and refugees as a problem. Erndl emphasised the need for improved defences, including modern technologies like AI to detect disinformation, and greater public awareness and education.

The German Foreign Ministry recently reported the identification of over 50,000 fake X accounts associated with the Doppelgänger campaign. These accounts mimic credible news outlets like Der Spiegel and Welt to spread fabricated articles, amplifying propaganda. Lawmakers stress the need for stronger cooperation within Europe and better tools for intelligence agencies to combat these threats, even suggesting that a shift in focus from privacy to security may be necessary to tackle the issue effectively.

Greens MP Konstantin von Notz highlighted the security risks posed by disinformation campaigns, warning that authoritarian regimes like Russia and China are targeting democratic societies, including Germany. He called for stricter regulation of online platforms, stronger counterintelligence efforts, and increased media literacy to bolster social resilience. As the election date approaches, lawmakers urge both government agencies and the public to remain vigilant against the growing threat of foreign interference.

Australia’s social media laws face criticism over YouTube exemption

Australia’s government recently passed laws banning social media access for children under 16, targeting platforms like TikTok, Snapchat, Instagram, Facebook, and X. However, YouTube was granted an exemption, with the government arguing that it serves as a valuable educational tool and is not a ‘core social media application.’ That decision followed input from company executives and educational content creators, who argued that YouTube is essential for learning and information-sharing. While the government claims broad community support for the exemption, some experts believe this undermines the goal of protecting children from harmful online content.

Mental health and extremism experts have raised concerns that YouTube exposes young users to dangerous material, including violent, extremist, and addictive content. Despite being exempted from the ban, YouTube has been criticised for its algorithm, which researchers say can promote far-right ideologies, misogyny, and conspiracy theories to minors. Studies conducted by academics have shown that the platform delivers problematic content within minutes of search queries, including harmful videos on topics like sex, COVID-19, and European history.

To test these claims, Reuters created child accounts and found that searches led to content promoting extremism and hate speech. Although YouTube removed some flagged videos, others remain on the platform. YouTube stated that it is actively working to improve its content moderation systems and that it has removed content violating its policies. However, critics argue that the platform’s algorithm still allows harmful content to thrive, especially among younger users.

Vatican urges regulation of AI to prevent misinformation

The Vatican has urged governments to monitor AI closely, warning of its potential to spread misinformation and destabilise society. A new document, Antiqua et nova (Ancient and New), written by two Vatican departments and approved by Pope Francis, highlights the ethical concerns surrounding AI, particularly its ability to fuel political polarisation and social unrest through fake media.

Pope Francis, who has focused on AI ethics in recent years, emphasised its societal risks in messages to global leaders, including at the World Economic Forum in Davos and the G7 summit in Italy. The pope has repeatedly warned against letting algorithms dictate human destiny, calling for AI to be ethically guided to serve humanity.

The document examines AI’s influence across sectors like labour, healthcare, and education, noting the moral responsibility tied to its use. It stresses that careful regulation is essential to prevent the misuse of AI technologies, as they hold both the promise of progress and the potential for harm.

World ID forced to stop offering crypto for biometrics in Brazil

Brazil's data protection authority, ANPD, has ordered Tools for Humanity (TFH), the company behind the World ID project, to cease offering crypto or financial compensation for biometric data collection. The move follows an investigation launched in November 2024, with the ANPD citing concerns that financial incentives could unduly influence individuals' consent to share sensitive biometric data, such as iris scans.

The World ID project, which aims to create a universal digital identity, uses eye-scanning technology developed by TFH. The ANPD’s decision also reflects its concerns over the irreversible nature of biometric data collection and the inability to delete this information once submitted. Under Brazilian law, consent for processing such sensitive data must be freely given and informed, without undue influence.

This is not the first regulatory issue for World ID: Germany's data protection authority issued corrective measures in December 2024, requiring the project to comply with the EU's General Data Protection Regulation (GDPR). Meanwhile, the value of World Network's native token, WLD, has dropped significantly, falling by over 8% in the past 24 hours and 83% from its peak in March 2024.

Tech firms urged to remove violent content after Southport murders

The UK government has demanded urgent action from major social media platforms to remove violent and extremist content following the Southport killings. Home Secretary Yvette Cooper criticised the ease with which Axel Rudakubana, who murdered three children and attempted to kill ten others, accessed an al-Qaeda training manual and other violent material online. She described the availability of such content as “unacceptable” and called for immediate action.

Rudakubana, jailed last week for his crimes, had reportedly used techniques from the manual during the attack and watched graphic footage of a similar incident before carrying it out. While platforms like YouTube and TikTok are expected to comply with the UK's Online Safety Act when it comes into force in March, Cooper argued that companies have a 'moral responsibility' to act now rather than waiting for legal enforcement.

The Southport attack has intensified scrutiny on gaps in counter-terrorism measures and the role of online content in fostering extremism. The government has announced a public inquiry into missed opportunities to intervene, revealing that Rudakubana had been referred to the Prevent programme multiple times. Cooper’s call for immediate action underscores the urgent need to prevent further tragedies linked to online extremism.

Germany urges social media platforms to tackle disinformation before election

Germany’s interior minister, Nancy Faeser, has called on social media companies to take stronger action against disinformation ahead of the federal parliamentary election on 23 February. Faeser urged platforms like YouTube, Facebook, Instagram, X, and TikTok to label AI-manipulated videos, clearly identify political advertising, and ensure compliance with European laws. She also emphasised the need for platforms to report and remove criminal content swiftly, including death threats.

Faeser met with representatives of major tech firms to underline the importance of transparency in algorithms, warning against the risk of online radicalisation, particularly among young people. Her concerns come amidst growing fears of disinformation campaigns, possibly originating from Russia, that could influence the upcoming election. She reiterated that platforms must ensure they do not fuel societal division through unchecked content.

Calls for greater accountability in the tech industry are gaining momentum. At the World Economic Forum in Davos, Spanish Prime Minister Pedro Sánchez criticised social media owners for enabling algorithms that erode democracy and “poison society.” Faeser’s warnings highlight the growing international demand for stronger regulations on social media to safeguard democratic processes.

Google wins court battle over Russian judgments

Google secured an injunction from London’s High Court on Wednesday, preventing the enforcement of Russian legal judgments against the company. The rulings related to lawsuits filed by Russian entities, including Tsargrad TV and RT, over the closure of Google and YouTube accounts. Judge Andrew Henshaw granted the permanent injunction, citing Google’s terms and conditions, which require disputes to be resolved in English courts.

The Russian judgments included severe 'astreinte' penalties, which compounded daily and reached astronomical sums. Google's lawyers argued that some fines levied on its Russian subsidiary had grown to an undecillion roubles, a figure with 36 zeroes. Judge Henshaw noted that the fines far exceeded the world's total GDP, supporting the court's decision to block their enforcement.

A Google spokesperson expressed satisfaction with the ruling, criticising Russia's legal actions as efforts to restrict information access and penalise compliance with international sanctions. Since 2022, Google has taken measures such as blocking over 1,000 YouTube channels, including state-sponsored news outlets, and suspending monetisation of content promoting Russia's actions in Ukraine.