Google has removed a key passage from its AI principles that previously committed to steering clear of potentially harmful applications, including weapons. The now-missing section, titled ‘AI applications we will not pursue,’ explicitly stated that the company would not develop technologies likely to cause harm, as seen in archived versions of the page reviewed by Bloomberg.
The change has sparked concern among AI ethics experts. Margaret Mitchell, former co-lead of Google’s ethical AI team and now chief ethics scientist at Hugging Face, criticised the move. ‘Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically, it means Google will probably now work on deploying technology directly that can kill people,’ she said.
With ethics guardrails shifting, questions remain about how Google will navigate the evolving AI landscape—and whether its revised stance signals a broader industry trend toward prioritising market dominance over ethical considerations.
The UK government has launched its Code of Practice for the Cyber Security of AI, a voluntary framework designed to enhance security in AI development. The code sets out 13 principles aimed at reducing risks such as AI-driven cyberattacks, system failures, and data vulnerabilities.
The guidelines apply to developers, system operators, and data custodians (any business, organisation, or individual that controls data permissions and the integrity of data used by an AI model or system) responsible for creating, deploying, or managing AI systems. Companies that solely sell AI models or components fall under separate regulations. According to the Department for Science, Innovation, and Technology, the code will help ensure AI is developed and deployed securely while fostering innovation and economic growth.
Key recommendations include implementing AI security training, establishing recovery plans, conducting risk assessments, maintaining system inventories, and ensuring transparency about data usage. One of the principles calls for enabling human responsibility for AI systems, requiring that AI decisions be explainable and that users understand their responsibilities.
The code references existing standards and best practices for secure software development and security by design, and provides useful definitions.
The release of the code follows the UK’s AI Opportunities Action Plan, which outlines strategies to expand the nation’s AI sector and establish global leadership in the field. It also coincides with a call from the National Cyber Security Centre urging software vendors to eliminate ‘unforgivable vulnerabilities’: security flaws that are easy and cost-effective to fix but are often overlooked in favour of speed and new features.
This code also builds on the NCSC’s Guidelines for Secure AI Development, which were published in November 2023 and endorsed by 19 international partners.
Multiple Russian cybersecurity firms have published research reports on emerging threats, including a large-scale information-stealing campaign that uses the Nova malware to target Russian organisations.
According to a report from Moscow-based BI.ZONE, Nova is a commercial malware sold as a service on dark web marketplaces. Prices range from $50 for a monthly license to $630 for a lifetime license. Nova is a variant of SnakeLogger, a widely used malware known for stealing sensitive information.
While the developers of Nova remain unidentified, the code contains strings in Polish, and a Telegram group dedicated to promoting and supporting the malware was created in August 2024. The scale of the campaign and the full extent of its impact on Russian organisations remain unclear.
Over the weekend, F.A.C.C.T. reported a cyberespionage campaign targeting chemical, food, and pharmaceutical companies in Russia, attributing the attacks to a state-backed group named Rezet (or Rare Wolf). Meanwhile, Solar reported an attack on Russian industrial facilities by the newly identified group APT NGC4020, which exploited a vulnerability in a SolarWinds tool.
The Nova malware collects a wide range of data, including saved authentication credentials, keystrokes, screenshots, and clipboard content. This stolen data can be used in a variety of malicious activities, such as facilitating ransomware attacks. The malware is distributed through phishing emails, often disguised as contracts, to trick employees in organisations that handle high volumes of email correspondence.
Ofcom has ended its investigation into whether under-18s are accessing OnlyFans but will continue to examine whether the platform provided complete and accurate information during the inquiry. The media regulator stated that it would remain engaged with OnlyFans to ensure the platform implements appropriate measures to prevent children from accessing restricted content.
The investigation, launched in May, sought to determine whether OnlyFans was doing enough to protect minors from pornography. Ofcom stated that while no findings were made, it reserves the right to reopen the case if new evidence emerges.
OnlyFans maintains that its age assurance measures, which require users to be at least 20 years old, are sufficient to prevent underage access. A company spokesperson reaffirmed its commitment to compliance and child protection, emphasising that its policies have always met regulatory standards.
Kaspersky has uncovered dangerous malware hidden in software development kits (SDKs) used to create Android and iOS apps. The malware, known as SparkCat, scans images on infected devices to find crypto wallet recovery phrases, allowing hackers to steal funds without needing passwords. It also targets other sensitive data stored in screenshots, such as passwords and private messages.
The malware uses Google’s ML Kit OCR to extract text from images and has been downloaded around 242,000 times, primarily affecting users in Europe and Asia. It is embedded in dozens of real and fake apps on Google’s Play Store and Apple’s App Store, disguised as analytics modules. Kaspersky’s researchers suspect a supply chain attack or intentional embedding by developers.
While the origin of the malware remains unclear, analysis of its code suggests the developer is fluent in Chinese. Security experts advise users to avoid storing sensitive information in images and to remove any suspicious apps. Google and Apple have yet to respond to the findings.
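To illustrate the technique Kaspersky describes, the sketch below shows how OCR-based scanning of a photo library for seed-phrase-like text can work, approached from a defensive angle: auditing one’s own screenshots for exposed recovery phrases. It is a minimal illustration, not SparkCat’s code; it assumes the open-source Tesseract engine with the pytesseract and Pillow Python packages, and uses only a small sample of the BIP-39 word list.

```python
# Minimal sketch of OCR-based image scanning for recovery-phrase-like text.
# Not SparkCat's code: SparkCat reportedly uses Google's ML Kit OCR on-device;
# this example substitutes the open-source Tesseract engine via pytesseract.
from pathlib import Path

import pytesseract          # assumes the Tesseract binary is installed locally
from PIL import Image

# Small sample of the 2048-word BIP-39 vocabulary, for illustration only.
BIP39_SAMPLE = {
    "abandon", "ability", "able", "about", "above", "absent",
    "absorb", "abstract", "absurd", "abuse", "access", "accident",
}

def looks_like_seed_phrase(text: str, min_matches: int = 4) -> bool:
    """Heuristic: several distinct wordlist hits in one image is suspicious.
    With the full BIP-39 list, a stricter threshold (e.g. 12) would be used."""
    words = {w.strip(".,:;").lower() for w in text.split()}
    return len(words & BIP39_SAMPLE) >= min_matches

def scan_screenshots(folder: str) -> list[Path]:
    """Run OCR over every PNG in a folder and flag likely seed-phrase images."""
    flagged = []
    for path in sorted(Path(folder).glob("*.png")):
        text = pytesseract.image_to_string(Image.open(path))
        if looks_like_seed_phrase(text):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    for hit in scan_screenshots("./screenshots"):
        print(f"Possible recovery phrase exposed in: {hit}")
```

The point of the exercise mirrors the researchers’ advice: if a short script can surface a recovery phrase from a screenshot, so can any app with gallery access, which is why storing such phrases as images is discouraged.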
The European Commission has launched the OpenEuroLLM Project, a new initiative aimed at developing open-source, multilingual AI models. The project, which began on February 1, is supported by a consortium of 20 European research institutions, companies, and EuroHPC centres. Coordinated by Jan Hajič from Charles University and co-led by Peter Sarlin of AMD Silo AI, the project is designed to produce large language models (LLMs) that are proficient in all EU languages and comply with the bloc’s regulatory framework.
The OpenEuroLLM Project has been awarded the Strategic Technologies for Europe Platform (STEP) Seal, a recognition granted to high-quality initiatives under the Digital Europe Programme. This endorsement highlights the project’s importance as a critical technology for Europe. The LLMs developed will be open-sourced, allowing their use for commercial, industrial, and public sector purposes. The project promises full transparency, with public access to documentation, training code, and evaluation metrics once the models are released.
The initiative aims to democratise access to high-quality AI technologies, helping European companies remain competitive globally and empowering public organisations to deliver impactful services. While the timeline for model release and specific focus areas have not yet been detailed, the European Commission has already committed funding and anticipates attracting further investors in the coming weeks.
India’s finance ministry has issued an advisory urging employees to refrain from using AI tools like ChatGPT and DeepSeek for official tasks, citing concerns over the potential risks to the confidentiality of government data. The directive, dated January 29, highlights the dangers of AI apps on office devices, warning that they could jeopardise the security of sensitive documents and information.
This move comes amid similar actions taken by other countries such as Australia and Italy, which have restricted the use of DeepSeek due to data security concerns. The advisory surfaced just ahead of OpenAI CEO Sam Altman’s visit to India, where he is scheduled to meet with the IT minister.
Representatives from India’s finance ministry, OpenAI, and DeepSeek have yet to comment on the matter. It remains unclear whether other Indian ministries have implemented similar measures.
A former Google software engineer faces additional charges in the US for allegedly stealing AI trade secrets to benefit Chinese companies. Prosecutors announced a 14-count indictment against Linwei Ding, also known as Leon Ding, accusing him of economic espionage and theft of trade secrets. Each charge carries significant prison terms and fines.
Ding, a Chinese national, was initially charged last March and remains free on bond. His case is being handled by a US task force established to prevent the transfer of advanced technology to countries such as China and Russia.
Prosecutors claim Ding stole information on Google’s supercomputing data centres used to train large AI models, including confidential chip blueprints intended to give the company a competitive edge.
Ding allegedly began his thefts in 2022 after being recruited by a Chinese technology firm. By 2023, he had uploaded over 1,000 confidential files and shared a presentation with employees of a startup he founded, citing China’s push for AI development.
Google has cooperated with authorities but has not been charged in the case. Discussions between prosecutors and defence lawyers indicate the case may go to trial.
The European Commission has unveiled new guidelines restricting how AI can be used in workplaces and online services. Employers will be prohibited from using AI to monitor workers’ emotions, while websites will be banned from using AI-driven techniques that manipulate users into spending money. These measures are part of the EU’s Artificial Intelligence Act, which takes full effect in 2026, though some rules, including the ban on certain practices, apply from February 2025.
The AI Act also prohibits social scoring based on unrelated personal data, AI-enabled exploitation of vulnerable users, and predictive policing based solely on biometric data. AI-powered facial recognition CCTV for law enforcement will be heavily restricted, except under strict conditions. The EU has given member states until August to designate authorities responsible for enforcing these rules, with breaches potentially leading to fines of up to 7% of a company’s global revenue.
Europe’s approach to AI regulation is significantly stricter than that of the United States, where compliance is voluntary, and contrasts with China’s model, which prioritises state control. The guidelines aim to provide clarity for businesses and enforcement agencies while ensuring AI is used ethically and responsibly across the region.
Belgium’s data protection authority has received a complaint about Chinese AI firm DeepSeek, potentially leading to an investigation. A spokesperson confirmed the complaint but declined to provide further details while the case is being handled.
Regulators in Luxembourg have not received any complaints but are monitoring DeepSeek’s latest AI model, citing potential risks for users. The country’s data protection agency is considering a broader review in collaboration with European regulators.
Authorities across Europe may examine how DeepSeek processes user data. The European Data Protection Board could play a role in assessing the AI company’s compliance with privacy laws.