Google has removed a key passage from its AI principles that previously committed to steering clear of potentially harmful applications, including weapons. The now-missing section, titled ‘AI applications we will not pursue,’ explicitly stated that the company would not develop technologies likely to cause harm, as seen in archived versions of the page reviewed by Bloomberg.
The change has sparked concern among AI ethics experts. Margaret Mitchell, former co-lead of Google’s ethical AI team and now chief ethics scientist at Hugging Face, criticised the move. ‘Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically, it means Google will probably now work on deploying technology directly that can kill people,’ she said.
With ethics guardrails shifting, questions remain about how Google will navigate the evolving AI landscape—and whether its revised stance signals a broader industry trend toward prioritising market dominance over ethical considerations.
The UK government has launched its Code of Practice for the Cyber Security of AI, a voluntary framework designed to enhance security in AI development. The code sets out 13 principles aimed at reducing risks such as AI-driven cyberattacks, system failures, and data vulnerabilities.
The guidelines apply to developers, system operators, and data custodians (any business, organisation, or individual that controls data permissions and the integrity of the data an AI model or system relies on to function) responsible for creating, deploying, or managing AI systems. Companies that solely sell AI models or components fall under separate regulations. According to the Department for Science, Innovation and Technology, the code will help ensure AI is developed and deployed securely while fostering innovation and economic growth.
Key recommendations include implementing AI security training, establishing recovery plans, conducting risk assessments, maintaining system inventories, and ensuring transparency about data usage. One of the principles calls for enabling human responsibility for AI systems, requiring that AI decisions be explainable and that users understand their responsibilities.
The code references existing standards and best practices for secure software development and security by design, and provides useful definitions.
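For illustration only, the sketch below shows what one of those recommendations, maintaining an AI system inventory, could look like in machine-readable form; the record structure and field names are hypothetical and are not prescribed by the code itself.

```python
# Hypothetical sketch of a single AI system inventory record, loosely mapping
# several of the code's themes (accountability, risk assessment, recovery
# planning, data transparency, explainability). Field names are illustrative
# and are not taken from the UK code of practice.
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str                     # internal name of the AI system
    accountable_owner: str        # named role or person responsible for the system
    deployed_on: date
    data_sources: list[str]       # where training and inference data come from
    last_risk_assessment: date    # when risks were last reviewed
    recovery_plan: str            # reference to the documented recovery procedure
    decisions_explainable: bool   # whether outputs can be explained to end users

# Example inventory with one entry (all values are made up for illustration)
inventory = [
    AISystemRecord(
        name="customer-support-assistant",
        accountable_owner="Head of IT",
        deployed_on=date(2024, 11, 1),
        data_sources=["internal support tickets", "public product documentation"],
        last_risk_assessment=date(2025, 1, 15),
        recovery_plan="docs/recovery/support-assistant.md",
        decisions_explainable=True,
    )
]
```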
The release of the code follows the UK’s AI Opportunities Action Plan, which outlines strategies to expand the nation’s AI sector and establish global leadership in the field. It also coincides with a call from the National Cyber Security Centre urging software vendors to eliminate ‘unforgivable vulnerabilities’: security flaws that are easy and cost-effective to fix but are often overlooked in favour of speed and new features.
The code also builds on the NCSC’s Guidelines for Secure AI Development, which were published in November 2023 and endorsed by 19 international partners.
OpenAI CEO Sam Altman met with India’s IT Minister Ashwini Vaishnaw on Wednesday to discuss India’s vision of developing a low-cost AI ecosystem. Vaishnaw shared on X that the meeting centred on India’s strategy to build a comprehensive AI stack, including GPUs, models, and applications. He noted that OpenAI expressed interest in collaborating on all three aspects.
Altman’s visit to India, his first since 2023, comes amid ongoing legal challenges the company faces in the country, which is its second-largest market by user numbers. Vaishnaw recently praised Chinese startup DeepSeek for its affordable AI assistant, drawing parallels between DeepSeek’s cost-effective approach and India’s goal of creating a budget-friendly AI model. He highlighted India’s ability to achieve major technological feats at a fraction of the cost, as demonstrated by its moon mission.
Altman’s trip also included stops in Japan and South Korea, where he secured deals with SoftBank and Kakao. In Seoul, he discussed the Stargate AI data centre project, a venture backed by US President Donald Trump, with SoftBank and Samsung.
The European Commission has launched the OpenEuroLLM Project, a new initiative aimed at developing open-source, multilingual AI models. The project, which began on February 1, is supported by a consortium of 20 European research institutions, companies, and EuroHPC centres. Coordinated by Jan Hajič from Charles University and co-led by Peter Sarlin of AMD Silo AI, the project is designed to produce large language models (LLMs) that are proficient in all EU languages and comply with the bloc’s regulatory framework.
The OpenEuroLLM Project has been awarded the Strategic Technologies for Europe Platform (STEP) Seal, a recognition granted to high-quality initiatives under the Digital Europe Programme. This endorsement highlights the project’s importance as a critical technology for Europe. The LLMs developed will be open-sourced, allowing their use for commercial, industrial, and public sector purposes. The project promises full transparency, with public access to documentation, training codes, and evaluation metrics once the models are released.
The initiative aims to democratise access to high-quality AI technologies, helping European companies remain competitive globally and empowering public organisations to deliver impactful services. While the timeline for model release and specific focus areas have not yet been detailed, the European Commission has already committed funding and anticipates attracting further investors in the coming weeks.
Google is set to transform its Search engine into a more advanced AI-driven assistant, CEO Sundar Pichai revealed during an earnings call. The company’s ongoing AI evolution began with the controversial ‘AI Overviews’ and is now expanding to include new capabilities developed by its research division, DeepMind. Google’s goal is to allow Search to browse the web, analyse information, and deliver direct answers, reducing reliance on traditional search results.
Among the upcoming innovations is Project Astra, a multimodal AI system capable of interpreting live video and responding to real-time questions. Another key development is Gemini Deep Research, an AI agent designed to generate in-depth reports, effectively automating research tasks that users previously conducted themselves. Additionally, Project Mariner could enable AI to interact with websites on behalf of users, potentially reshaping how people navigate the internet.
The shift towards AI-powered Search has sparked debate, particularly among businesses that depend on Google’s traffic and advertising. Google’s first attempt at AI integration resulted in embarrassing errors, such as incorrect and bizarre search responses. Despite initial setbacks, the company is pushing ahead, believing AI-enhanced Search will redefine how people find and interact with information online.
ByteDance, the company behind TikTok, has introduced OmniHuman-1, an advanced AI system capable of generating highly realistic deepfake videos from just a single image and an audio clip. Unlike previous deepfake technology, which often displayed telltale glitches, OmniHuman-1 produces remarkably smooth and lifelike footage. The AI can also manipulate body movements, allowing for extensive editing of existing videos.
The system was trained on 19,000 hours of video content from undisclosed sources, and its potential applications range from entertainment to more troubling uses, such as misinformation. The rise of deepfake content has already led to cases of political and financial deception worldwide, from election interference to multimillion-dollar fraud schemes. Experts warn that the technology’s increasing sophistication makes it harder to detect AI-generated fakes.
Despite calls for regulation, deepfake laws remain limited. While some governments have introduced measures to combat AI-generated disinformation, enforcement remains a challenge. With deepfake content spreading at an alarming rate, many fear that systems like OmniHuman-1 could further blur the line between reality and fabrication.
At the annual Almaty Digital Forum, experts highlighted the growing importance of preparing for the AI revolution sparked by the sudden rise of the Chinese AI company DeepSeek. DeepSeek’s emergence raised questions at the forum about the future of AI and humanity, particularly because of the affordability of its models, which cost just $6 million to develop compared with the $40–100 million invested by other global players. This has made AI solutions more accessible to smaller developers and countries.
During the forum, Kaan Terzioğlu, CEO of VEON Group, emphasised that AI’s potential lies in enhancing human capabilities across various sectors. Experts agreed that AI will no longer be dominated by tech giants, with smaller developers now able to harness its power. However, concerns were raised about the risk of cultural homogenisation if AI technologies are not adapted to local languages and values.
The forum also showcased Central Asia’s ambition to keep up with global AI developments, with high-level representatives from several countries, including Kazakhstan, Armenia, and Uzbekistan, in attendance. Kazakhstan, in particular, is planning to train a million AI professionals by 2030, with the goal of boosting AI exports to $5 billion by 2029. The government is also launching Alem.ai, a hub for AI research, start-ups, and international collaboration, expected to play a key role in the country’s AI future.
Kazakhstan’s ambitious plans have attracted the attention of global tech giants, who are already in discussions about establishing offices at Alem.ai. With a focus on developing local talent and fostering innovation, Kazakhstan aims to position itself as Central Asia’s intellectual capital and a key player in the global AI landscape. The forum’s success, with over 220 tech companies and 80 local start-ups participating, signals that the country’s plans may not be overly ambitious after all.
India’s finance ministry has issued an advisory urging employees to refrain from using AI tools like ChatGPT and DeepSeek for official tasks, citing concerns over the potential risks to the confidentiality of government data. The directive, dated January 29, highlights the dangers of AI apps on office devices, warning that they could jeopardise the security of sensitive documents and information.
This move comes amid similar actions taken by other countries such as Australia and Italy, which have restricted the use of DeepSeek due to data security concerns. The advisory surfaced just ahead of OpenAI CEO Sam Altman’s visit to India, where he is scheduled to meet with the IT minister.
Representatives from India’s finance ministry, OpenAI, and DeepSeek have yet to comment on the matter. It remains unclear whether other Indian ministries have implemented similar measures.
A former Google software engineer faces additional charges in the US for allegedly stealing AI trade secrets to benefit Chinese companies. Prosecutors announced a 14-count indictment against Linwei Ding, also known as Leon Ding, accusing him of economic espionage and theft of trade secrets. Each charge carries significant prison terms and fines.
Ding, a Chinese national, was initially charged last March and remains free on bond. His case is being handled by a US task force established to prevent the transfer of advanced technology to countries such as China and Russia.
Prosecutors claim Ding stole information on Google’s supercomputing data centres used to train large AI models, including confidential chip blueprints intended to give the company a competitive edge.
Ding allegedly began his thefts in 2022 after being recruited by a Chinese technology firm. By 2023, he had uploaded over 1,000 confidential files and shared a presentation with employees of a startup he founded, citing China’s push for AI development.
Google has cooperated with authorities but has not been charged in the case. Discussions between prosecutors and defence lawyers indicate the case may go to trial.
AMD has announced it will release its next-generation data centre GPUs, the Instinct MI350 series, earlier than originally planned. CEO Lisa Su revealed during the company’s Q4 2024 earnings call that strong demand and smooth development have allowed AMD to move up production to mid-2025, rather than the latter half of the year.
The move comes as AMD looks to gain ground on industry leader Nvidia, whose dominance in the data centre market continues to pose a challenge. Despite this, AMD’s Instinct GPU sales surpassed $5 billion in 2024, and the company expects its data centre division to see double-digit growth in 2025. Major customers such as Meta, Microsoft, and IBM have contributed to AMD’s momentum in the AI computing sector.
Su expressed confidence in the expansion of AMD’s data centre business, forecasting substantial growth in AI-related computing over the coming years. Investors responded positively to the announcement, with AMD’s stock rising by over 4% following the earnings report.