UK gambling websites breach data protection laws

Gambling companies are under investigation for covertly sharing visitors’ data with Facebook’s parent company, Meta, in breach of data protection laws that require consent. A hidden tracking tool embedded in numerous UK gambling websites has been sending data, such as the web pages users visit and the buttons they click, to Meta, which uses this information to profile individuals as gamblers. Those profiles are then used to target users with gambling-related ads, despite the legal requirement for explicit consent before such information is shared.

Testing of 150 gambling websites revealed that 52, including large brands such as Hollywoodbets, Sporting Index, and Bet442, automatically transmitted user data to Meta. This sharing occurred without users having any opportunity to consent, and users were served targeted gambling ads shortly after visiting the sites. Experts have raised concerns about the industry’s unlawful practices and called for immediate regulatory action.
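To make the mechanism concrete, here is a minimal TypeScript sketch of how a Meta Pixel-style tracker reports page views and button clicks, and where a lawful consent gate would sit. The fbq('init'), fbq('track'), and fbq('trackCustom') calls are the ones Meta’s pixel script actually exposes; the pixel ID and the hasMarketingConsent() helper are hypothetical placeholders, not details taken from the investigation.

```typescript
// Ambient declaration: the real fbq global is created by Meta's
// fbevents.js script, which a site must load separately.
declare function fbq(...args: unknown[]): void;

// Hypothetical helper standing in for the site's own consent check.
declare function hasMarketingConsent(): boolean;

function initTracking(): void {
  // A consent-compliant integration fires the pixel only after opt-in.
  // The sites in the investigation reportedly skipped this gate and
  // transmitted data to Meta automatically on page load.
  if (!hasMarketingConsent()) {
    return;
  }

  fbq("init", "PIXEL_ID_PLACEHOLDER"); // register the site's pixel ID
  fbq("track", "PageView");            // report the page the user visited

  // Button clicks can be forwarded as custom events.
  document.addEventListener("click", (event) => {
    const target = event.target as HTMLElement;
    if (target.tagName === "BUTTON") {
      fbq("trackCustom", "ButtonClick", { label: target.innerText });
    }
  });
}
```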

The Information Commissioner’s Office (ICO) is reviewing the use of tracking tools like Meta Pixel and has warned that enforcement action could be taken, including significant fines. Some gambling companies have updated their websites to prevent automatic data sharing, while others have removed the tracking tool altogether in response to the findings. However, the Gambling Commission has yet to address the issue of third-party profiling used to recruit new customers.

The misuse of data in this way highlights the risks of unregulated marketing, particularly for vulnerable individuals. Data privacy experts have stressed that these practices not only breach privacy laws but could also exacerbate gambling problems by targeting individuals who may already be at risk.

RBI to introduce secure domain names to combat digital payment fraud

India’s central bank has raised concerns over increasing fraud in digital payments and announced new measures to improve security. Reserve Bank of India (RBI) Governor Sanjay Malhotra warned that cyber fraud and data breaches are becoming more frequent as banks and consumers adopt new technology. To counter this, the RBI will introduce exclusive website domain names to reduce the risk of deceptive online practices.

Fraudsters often use misleading domain names to trick users into revealing sensitive information or making fraudulent transactions. To enhance online security and credibility, the RBI will launch dedicated domains for financial institutions: banks will use ‘bank.in’, while non-bank financial entities will operate under ‘fin.in’. These exclusive domains will provide a unique digital identity, making it easier for users to recognise legitimate platforms.
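If the reserved zone works like an ordinary DNS suffix, as the announcement implies, client software could verify legitimacy with a simple hostname check. The sketch below is illustrative only; the helper name and example domains are hypothetical.

```typescript
// Sketch: check whether a URL falls under the reserved 'bank.in' zone.
// Assumes the zone behaves as a normal DNS suffix, per the RBI announcement.
function isUnderBankIn(url: string): boolean {
  try {
    const host = new URL(url).hostname.toLowerCase();
    // Accept 'bank.in' itself or any subdomain such as 'examplebank.bank.in'.
    return host === "bank.in" || host.endsWith(".bank.in");
  } catch {
    return false; // malformed URL
  }
}

// A lookalike fails the check even though it contains the word 'bank':
console.log(isUnderBankIn("https://examplebank.bank.in/login"));  // true
console.log(isUnderBankIn("https://examplebank-bank.in/login"));  // false
```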

The Institute for Development and Research in Banking Technology (IDRBT) will oversee the registration process for these domains, with registrations set to begin in April 2025. The initiative is part of the RBI’s broader effort to strengthen cybersecurity and protect consumers in the rapidly growing digital payments sector.

China looks to build consensus on AI at global summit

Chinese Vice Premier Zhang Guoqing will visit France from February 9 until February 12 to attend the AI Action Summit as a special representative of President Xi Jinping. The summit will bring together representatives from nearly 100 countries to discuss the safe development of AI.

A foreign ministry spokesperson, Lin Jian, said China is eager to strengthen communication and collaboration with other nations at the event. China also aims to foster consensus on AI cooperation and contribute to the implementation of the United Nations Global Digital Compact.

Vice President JD Vance is leading the US delegation to the summit, but reports suggest that the US team will not include technical staff from the AI Safety Institute.

ByteDance unveils AI that creates uncannily realistic deepfakes

ByteDance, the company behind TikTok, has introduced OmniHuman-1, an advanced AI system capable of generating highly realistic deepfake videos from just a single image and an audio clip. Unlike previous deepfake technology, which often displayed telltale glitches, OmniHuman-1 produces remarkably smooth and lifelike footage. The AI can also manipulate body movements, allowing for extensive editing of existing videos.

The system was trained on 19,000 hours of video content from undisclosed sources, and its potential applications range from entertainment to more troubling uses, such as misinformation. The rise of deepfake content has already led to cases of political and financial deception worldwide, from election interference to multimillion-dollar fraud schemes. Experts warn that the technology’s increasing sophistication makes it harder to detect AI-generated fakes.

Despite calls for regulation, deepfake laws remain limited. While some governments have introduced measures to combat AI-generated disinformation, enforcement remains a challenge. With deepfake content spreading at an alarming rate, many fear that systems like OmniHuman-1 could further blur the line between reality and fabrication.

India bans use of AI tools in government offices

India’s finance ministry has issued an advisory urging employees to refrain from using AI tools like ChatGPT and DeepSeek for official tasks, citing concerns over the potential risks to the confidentiality of government data. The directive, dated January 29, highlights the dangers of AI apps on office devices, warning that they could jeopardise the security of sensitive documents and information.

This move comes amid similar actions taken by other countries such as Australia and Italy, which have restricted the use of DeepSeek due to data security concerns. The advisory surfaced just ahead of OpenAI CEO Sam Altman’s visit to India, where he is scheduled to meet with the IT minister.

Representatives from India’s finance ministry, OpenAI, and DeepSeek have yet to comment on the matter. It remains unclear whether other Indian ministries have implemented similar measures.

EU bans AI tracking of workers’ emotions and manipulative online tactics

The European Commission has unveiled new guidelines restricting how AI can be used in workplaces and online services. Employers will be prohibited from using AI to monitor workers’ emotions, while websites will be banned from using AI-driven techniques that manipulate users into spending money. These measures are part of the EU’s Artificial Intelligence Act, which takes full effect in 2026, though some rules, including the ban on these and other prohibited practices, have applied since February 2025.

The AI Act also prohibits social scoring based on unrelated personal data, AI-enabled exploitation of vulnerable users, and predictive policing based solely on profiling. Real-time facial recognition by law enforcement in public spaces is banned except under strictly defined conditions. The EU has given member states until August to designate the authorities responsible for enforcing these rules, with breaches potentially leading to fines of up to 7% of a company’s global revenue.

Europe’s approach to AI regulation is significantly stricter than that of the United States, where compliance is voluntary, and contrasts with China’s model, which prioritises state control. The guidelines aim to provide clarity for businesses and enforcement agencies while ensuring AI is used ethically and responsibly across the region.

Belgium plans AI use for law enforcement and telecom strategy

Belgium’s new government, led by Prime Minister Bart De Wever, has announced plans to utilise AI tools in law enforcement, including facial recognition technology for detecting criminals. The initiative will be overseen by Vanessa Matz, the country’s first federal minister for digitalisation, AI, and privacy. The AI policy is set to comply with the EU’s AI Act, which heavily restricts practices such as real-time facial recognition but allows exceptions for law enforcement under strict conditions.

Alongside AI applications, the Belgian government aims to combat disinformation by promoting transparency in online platforms and increasing collaboration with tech companies and media. Its approach to digitalisation also includes a long-term strategy to improve telecom infrastructure, focusing on providing ultra-fast internet access to all companies by 2030 and preparing for potential 6G rollouts.

The government has outlined a significant digital strategy that seeks to balance technological advancement with strong privacy and legal protections. As part of this, it is working on expanding camera legislation for smarter surveillance applications. These moves are part of broader efforts to strengthen the country’s digital capabilities in the coming years.

German authorities on alert for election disinformation

With Germany’s parliamentary elections just weeks away, lawmakers are warning that authoritarian states, including Russia, are intensifying disinformation efforts to destabilise the country. Authorities are particularly concerned about a Russian campaign, known as Doppelgänger, which has been active since 2022 and aims to undermine Western support for Ukraine. The campaign has been linked to fake social media accounts and misleading content in Germany, France, and the US.

CSU MP Thomas Erndl confirmed that Russia is attempting to influence European elections, including in Germany. He argued that disinformation campaigns are contributing to the rise of right-wing populist parties, such as the AfD, by sowing distrust in state institutions and painting foreigners and refugees as a problem. Erndl emphasised the need for improved defences, including modern technologies like AI to detect disinformation, and greater public awareness and education.

The German Foreign Ministry recently reported the identification of over 50,000 fake X accounts associated with the Doppelgänger campaign. These accounts mimic credible news outlets like Der Spiegel and Welt to spread fabricated articles, amplifying propaganda. Lawmakers stress the need for stronger cooperation within Europe and better tools for intelligence agencies to combat these threats, even suggesting that a shift in focus from privacy to security may be necessary to tackle the issue effectively.

Greens MP Konstantin von Notz highlighted the security risks posed by disinformation campaigns, warning that authoritarian regimes like Russia and China are targeting democratic societies, including Germany. He called for stricter regulation of online platforms, stronger counterintelligence efforts, and increased media literacy to bolster social resilience. As the election date approaches, lawmakers urge both government agencies and the public to remain vigilant against the growing threat of foreign interference.

Australia’s social media laws face criticism over YouTube exemption

Australia’s government recently passed laws banning social media access for children under 16, targeting platforms like TikTok, Snapchat, Instagram, Facebook, and X. However, YouTube was granted an exemption, with the government arguing that it serves as a valuable educational tool and is not a ‘core social media application’. That decision followed input from company executives and educational content creators, who argued that YouTube is essential for learning and information-sharing. While the government claims broad community support for the exemption, some experts believe this undermines the goal of protecting children from harmful online content.

Mental health and extremism experts have raised concerns that YouTube exposes young users to dangerous material, including violent, extremist, and addictive content. Despite being exempted from the ban, YouTube has been criticised for its algorithm, which researchers say can promote far-right ideologies, misogyny, and conspiracy theories to minors. Studies conducted by academics have shown that the platform delivers problematic content within minutes of search queries, including harmful videos on topics like sex, COVID-19, and European history.

To test these claims, Reuters created child accounts and found that searches led to content promoting extremism and hate speech. Although YouTube removed some flagged videos, others remain on the platform. YouTube stated that it is actively working to improve its content moderation systems and that it has removed content violating its policies. However, critics argue that the platform’s algorithm still allows harmful content to thrive, especially among younger users.

Vatican urges regulation of AI to prevent misinformation

The Vatican has urged governments to monitor AI closely, warning of its potential to spread misinformation and destabilise society. A new document, Antiqua et nova (Ancient and New), written by two Vatican departments and approved by Pope Francis, highlights the ethical concerns surrounding AI, particularly its ability to fuel political polarisation and social unrest through fake media.

Pope Francis, who has focused on AI ethics in recent years, emphasised its societal risks in messages to global leaders, including at the World Economic Forum in Davos and the G7 summit in Italy. The pope has repeatedly warned against letting algorithms dictate human destiny, calling for AI to be ethically guided to serve humanity.

The document examines AI’s influence across sectors like labour, healthcare, and education, noting the moral responsibility tied to its use. It stresses that careful regulation is essential to prevent the misuse of AI technologies, as they hold both the promise of progress and the potential for harm.