France’s ANSSI and international partners advocate risk-based approach for secure AI systems

The French National Cybersecurity Agency (ANSSI) has released new guidance on securing AI systems, emphasising a risk-based approach to AI deployment. International partners including Canada, Singapore, Germany, Italy, Norway, the United Kingdom and Estonia have co-signed the document.

The publication highlights the growing integration of AI across sectors and the need for organisations to assess and mitigate associated risks, particularly as they adopt large language models (LLMs).

ANSSI outlines key security challenges specific to AI, including vulnerabilities in data integrity, supply chain risks, and the potential for AI systems to be exploited as attack vectors. The report identifies major risks such as:

  • Compromises in AI hosting and management infrastructure
  • Supply chain attacks targeting AI components
  • Interconnections between AI and IT systems increasing attack surfaces
  • Long-term loss of control over AI-driven processes
  • Malfunctions affecting AI system reliability

To address these challenges, the document advocates for a structured approach to AI security, recommending that organisations:

  • Align AI system autonomy with risk assessments and operational criticality
  • Map AI supply chains and monitor interconnections with IT infrastructure
  • Implement continuous monitoring and maintenance of AI systems
  • Anticipate regulatory and technological developments impacting AI security
  • Strengthen training and awareness on AI-related risks

The publication also advises against using AI for automating critical actions without safeguards, urging organisations to conduct dedicated risk analyses and assess security measures at every stage of the AI system lifecycle.
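
To make the supply-chain and monitoring recommendations more concrete, the sketch below shows one way an organisation might pin AI artefacts (model weights, datasets) to known-good hashes and detect drift between risk assessments. It is a minimal illustration in Python, not code from the ANSSI report; the manifest format and file paths are hypothetical.

```python
# Minimal sketch: verify AI artefacts (model weights, datasets) against a
# pinned manifest. The manifest format and paths are hypothetical examples
# illustrating ANSSI's supply-chain mapping and monitoring advice.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artefacts(manifest_path: Path) -> list[str]:
    """Return the paths of artefacts whose digest no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [
        entry["path"]
        for entry in manifest["artefacts"]  # e.g. weights, datasets, adapters
        if sha256_of(Path(entry["path"])) != entry["sha256"]
    ]

if __name__ == "__main__":
    # Hypothetical manifest: {"artefacts": [{"path": "...", "sha256": "..."}]}
    drifted = verify_artefacts(Path("ai_supply_chain_manifest.json"))
    if drifted:
        print("ALERT: artefacts changed since the last risk assessment:", drifted)
    else:
        print("All pinned AI artefacts match the manifest.")
```

Scheduled as a recurring job, a check of this kind gives the continuous monitoring the guidance calls for a concrete starting point.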

For more information on these topics, visit diplomacy.edu.

Singapore unveils new AI governance initiatives to strengthen global safety standards

The Singapore government introduced three new AI governance initiatives to promote safety and global best practices. The initiatives include the Global AI Assurance Pilot, which focuses on testing generative AI applications; a joint testing report with Japan to enhance AI safety across different linguistic environments; and the publication of the Singapore AI Safety Red Teaming Challenge evaluation report, aimed at addressing AI performance across languages and cultures.

The announcement was made by Josephine Teo, Singapore’s Minister for Digital Development and Information, at the AI Action Summit (AIAS) in Paris. During her speech, Minister Teo emphasised Singapore’s commitment to fostering international collaboration on AI safety, noting the importance of understanding public concerns and ensuring AI systems are tested for safety and responsibility. She also highlighted the role of private sector partnerships in shaping AI use cases and risk management strategies.

The new initiatives include practical efforts to ensure AI models, particularly large language models (LLMs), are secure and culturally sensitive. The AI Assurance Pilot, for instance, will bring together global AI assurance vendors and companies deploying real-life GenAI applications to establish future standards for AI governance. The joint testing report with Japan aims to improve the safety of LLMs across multiple languages, addressing potential gaps in non-English safeguards. Additionally, the Red Teaming Challenge provided insights into AI performance and cultural bias, with participants probing LLMs for harmful outputs in categories such as violent crime and privacy violations.
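
Neither report ships public tooling, but the kind of cross-lingual red teaming described can be pictured with a minimal sketch. Everything here is a hypothetical placeholder: the stub model, the refusal heuristic, and the prompt set stand in for whatever harness the testers actually used.

```python
# Hypothetical sketch of the cross-lingual red teaming described above.
# The stub model, refusal heuristic, and prompts are illustrative
# placeholders, not the harness used in the Singapore-Japan tests.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

def stub_model(prompt: str, language: str) -> str:
    """Stand-in for a real LLM: refuses in English but complies elsewhere,
    mimicking the non-English safeguard gaps the joint report targets."""
    return "I can't help with that." if language == "English" else "Sure, step one..."

def refused(reply: str) -> bool:
    """Crude heuristic: does the reply decline the unsafe request?"""
    return any(marker in reply.lower() for marker in REFUSAL_MARKERS)

def red_team(prompt_by_language: dict[str, str]) -> dict[str, bool]:
    """Send the same unsafe prompt in each language; True means the model refused."""
    return {
        lang: refused(stub_model(prompt, lang))
        for lang, prompt in prompt_by_language.items()
    }

if __name__ == "__main__":
    # The same disallowed request, translated (placeholders stand in for real text).
    prompts = {
        "English": "<unsafe request>",
        "Japanese": "<same request, in Japanese>",
        "Malay": "<same request, in Malay>",
    }
    for lang, ok in red_team(prompts).items():
        print(f"{lang}: {'refused' if ok else 'SAFETY GAP - complied'}")
```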

For more information on these topics, visit diplomacy.edu.

Data centre growth in Europe set to break records

Europe is on track for an unprecedented expansion in data centre capacity this year, according to new research from CBRE. The commercial real estate firm projects that 937 megawatts of new capacity will come online in 2025, a 43% increase from 2024. This surge is being fuelled by growing demand for artificial intelligence and cloud computing, despite challenges in securing power and land.

Over half of this new capacity is expected in key markets such as Frankfurt, London, Amsterdam, Paris, and Dublin. Secondary markets, including Milan and Madrid, are also experiencing rapid growth, with seven locations forecast to surpass 100MW of supply by the end of the year.

The ongoing boom is driven by several factors, including government incentives, land availability, and the ambitions of major cloud providers. ‘The data centre construction boom will continue unabated,’ said Kevin Restivo, CBRE’s head of European data centre research, highlighting the sector’s resilience despite infrastructure challenges.

For more information on these topics, visit diplomacy.edu.

Apptronik expands humanoid robot production with new investment

AI robotics company Apptronik has raised $350 million in a funding round led by B Capital and Capital Factory, with participation from Google. The Texas-based firm is focused on scaling production of Apollo, its humanoid robot designed to perform warehouse and manufacturing tasks such as moving packages and handling logistics.

Apptronik is competing with major players like Tesla and Figure AI in the rapidly advancing field of humanoid robotics, where artificial intelligence is driving new breakthroughs. CEO Jeff Cardenas compared this moment in robotics to the rise of large language models in 2023, predicting that 2025 will see significant developments in automation.

The company plans to expand Apollo’s capabilities into other industries, including elder care and healthcare. It has also partnered with Google DeepMind’s robotics team and secured commercial agreements with Mercedes-Benz and GXO Logistics, positioning itself as a key player in the evolving robotics landscape.

For more information on these topics, visit diplomacy.edu.

Google’s India policy head resigns amid market challenges

Google’s head of public policy in India, Sreenivasa Reddy, has stepped down, marking the second high-profile exit from the role in two years. Reddy, who joined the company in September 2023 after stints at Microsoft and Apple, played a crucial role in navigating regulatory challenges while Google expanded its services in India. The company confirmed his departure but declined to provide further details.

India remains a critical market for Google, with the majority of the country’s smartphones running on its Android system. The tech giant has faced increasing scrutiny from regulators over antitrust issues, even as it continues to grow its presence with local manufacturing and AI investments.

In the interim, Iarla Flynn, Google’s policy head for northern Europe, will take over the role. The company reaffirmed its commitment to the Indian market, emphasising its long-term vision despite the ongoing leadership changes.

For more information on these topics, visit diplomacy.edu.

South Korea halts new downloads of DeepSeek over privacy concerns

South Korea’s data protection authority has suspended new downloads of the Chinese AI app DeepSeek, citing concerns over non-compliance with the country’s privacy laws. The decision came after DeepSeek acknowledged that it had failed to fully comply with South Korea’s data protection rules. According to the Personal Information Protection Commission (PIPC), the service will be reinstated once the necessary improvements are implemented.

The restriction, which took effect on Saturday, prevents new users from downloading the app in South Korea. However, DeepSeek’s web service remains operational in the country. The Chinese startup recently appointed legal representatives in South Korea and admitted to having overlooked some aspects of the nation’s data privacy regulations, the PIPC revealed during a media briefing.

DeepSeek has faced similar regulatory hurdles elsewhere. Italy’s data protection authority, Garante, ordered the company to block its chatbot in the country last month due to unresolved concerns over its privacy policy. These developments highlight growing scrutiny over data protection practices among AI-powered services, particularly those originating from China.

DeepSeek has yet to respond to requests for comment regarding the suspension in South Korea. Meanwhile, when asked about earlier restrictions imposed on the app, a Chinese foreign ministry spokesperson emphasised Beijing’s commitment to data privacy and security. The spokesperson stated that the Chinese government does not require companies or individuals to collect or store data in violation of applicable laws, distancing Beijing from the controversy surrounding DeepSeek’s compliance issues.

Why does it matter?

The regulatory action against DeepSeek signals the increasing global focus on AI-related privacy concerns. With authorities in multiple countries tightening their data security oversight, AI firms, particularly those operating across borders, face mounting pressure to ensure compliance with regional privacy laws.

Elon Musk’s xAI unveils Grok-3, taking on AI giants

Elon Musk’s AI startup, xAI, has unveiled its latest AI model, Grok-3, which the billionaire claims is the most advanced chatbot technology to date. In a live-streamed presentation, Musk and his engineers demonstrated how Grok-3 outperforms competitors, including OpenAI’s GPT-4o and Google’s Gemini, across math, science, and coding benchmarks. With over ten times the computational power of its predecessor, Grok-3 completed pre-training in early January and is now continuously evolving, Musk said, promising visible improvements within just 24 hours.

A key innovation introduced with Grok-3 is DeepSearch, an advanced reasoning chatbot designed to enhance search capabilities by providing transparent explanations of how it processes queries. The feature allows users to conduct research, brainstorming, and data analysis with greater depth and clarity. The model is being rolled out immediately to X’s Premium+ subscribers, with an upcoming SuperGrok subscription planned for mobile and web platforms.

The launch marks another escalation in the rivalry between Musk’s xAI and OpenAI, the company he co-founded but later distanced himself from. Musk has been openly critical of OpenAI’s shift toward a for-profit model and recently filed lawsuits against the organisation, accusing it of betraying its founding principles. His bid to acquire OpenAI’s nonprofit arm for $97.4 billion was rejected last week, with OpenAI’s CEO, Sam Altman, dismissing the offer as an attempt to hinder the company’s progress.

Why does it matter?

The AI sector is experiencing an unprecedented investment boom, with xAI reportedly seeking to raise $10 billion in new funding, potentially pushing its valuation to $75 billion. Meanwhile, OpenAI is in talks to raise as much as $40 billion, which could boost its valuation to an astonishing $300 billion. These soaring numbers highlight the capital-intensive nature of AI development, with global tech giants and investment groups pouring billions into the race to dominate AI.

However, new challenges are emerging. Last month, Chinese AI firm DeepSeek introduced R1, an open-source model that matched or surpassed leading American AI systems on key industry benchmarks. The company claims it developed R1 at a fraction of the cost incurred by its US counterparts, suggesting that the dominance of firms like OpenAI and xAI could soon face disruption from more cost-efficient alternatives.

Indian music industry joins lawsuit against OpenAI

Several of India’s leading Bollywood music labels, including T-Series, Saregama, and Sony, are seeking to join a lawsuit against OpenAI in New Delhi. They are concerned that the company’s AI models may have used their sound recordings without permission, potentially violating copyright law. The legal action follows a previous lawsuit filed by Indian news agency ANI, which accused OpenAI’s ChatGPT of using its content without authorisation to train its models. The music labels argue that the issue has significant implications for the global music industry.

The music companies, which represent major Indian and international music acts, claim that OpenAI’s AI systems could extract lyrics, compositions, and sound recordings from the internet without consent. T-Series, known for releasing thousands of songs annually, and Saregama, which holds a vast catalogue of iconic Indian music, are leading the charge. The Indian Music Industry (IMI), which also represents global labels like Sony Music and Warner Music, is pushing for the case to be heard in court, as the outcome could impact the future use of copyrighted content in AI training.

OpenAI, backed by Microsoft, argues that it adheres to fair-use principles by using publicly available data to build its AI models. However, the company is facing increasing legal pressure from multiple sectors worldwide, including recent lawsuits in Germany, where GEMA accused OpenAI of unlicensed use of song lyrics. OpenAI has opposed the Indian lawsuit, claiming that Indian courts do not have jurisdiction over the matter, given the company’s US base.

The next court hearing, which could shape the future of AI and copyright law in India, is scheduled for 21 February. This legal battle is gaining attention, particularly as OpenAI’s chief, Sam Altman, recently visited India to discuss the country’s plans for developing low-cost AI technology.

For more information on these topics, visit diplomacy.edu.

Study warns of AI’s role in fuelling bank runs

A new study from the UK has raised concerns about the risk of bank runs fuelled by AI-generated fake news spread on social media. The research, published by Say No to Disinfo and Fenimore Harper, highlights how generative AI can create false stories or memes suggesting that bank deposits are at risk, prompting panic withdrawals. The study found that a significant portion of UK bank customers would consider moving their money after seeing such disinformation, especially given the speed at which funds can be transferred through online banking.

The issue is gaining traction globally, with regulators and banks worried about the growing role of AI in spreading malicious content. Following the collapse of Silicon Valley Bank in 2023, which saw $42 billion in withdrawals within a day, financial institutions are increasingly focused on detecting disinformation that could trigger similar crises. The study estimates that a small investment in social media ads promoting fake content could cause millions in deposit withdrawals.

The report calls for banks to enhance their monitoring systems, integrating social media tracking with withdrawal monitoring to better identify when disinformation is affecting customer behaviour. Revolut, a UK fintech, has already implemented real-time monitoring for emerging threats and has urged financial institutions to prepare for such risks. While banks remain optimistic about AI’s potential, the financial stability challenges it poses are a growing concern for regulators.
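
The report does not prescribe a specific mechanism for that integration. One much-simplified approach is to flag periods in which a spike in negative social-media mentions coincides with unusually large outflows, as in the hypothetical sketch below; the median-based threshold and the data are illustrative assumptions, not any bank’s actual system.

```python
# Illustrative sketch: flag hours where a spike in negative social-media
# mentions coincides with abnormally fast withdrawals. The threshold,
# heuristic, and data are hypothetical, not any bank's real system.
from statistics import median

def spike_indices(series: list[float], multiplier: float = 3.0) -> set[int]:
    """Indices where a value exceeds `multiplier` times the series median."""
    baseline = median(series)
    return {i for i, value in enumerate(series) if value > multiplier * baseline}

def flag_run_risk(mentions: list[float], withdrawals: list[float]) -> list[int]:
    """Hours where both signals spike together, suggesting disinformation
    may be driving real outflows rather than ordinary online chatter."""
    return sorted(spike_indices(mentions) & spike_indices(withdrawals))

if __name__ == "__main__":
    # Hypothetical hourly data: negative mentions of the bank, outflows in £m.
    mentions = [12, 15, 11, 14, 13, 160, 180, 20]
    withdrawals = [3.1, 2.9, 3.3, 3.0, 3.2, 22.0, 31.5, 5.0]
    print("Hours needing analyst review:", flag_run_risk(mentions, withdrawals))
    # -> Hours needing analyst review: [5, 6]
```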

As financial institutions work to mitigate AI-related risks, the broader industry is also grappling with how to balance the benefits of AI with the threats it may pose. UK Finance, the industry body, emphasised that banks are making efforts to manage these risks, while regulators continue to monitor the situation closely.

For more information on these topics, visit diplomacy.edu.

EU denies US influence over AI regulation rollback

The European Union has dismissed claims that recent decisions to scale back planned AI regulations were influenced by pressure from the US Trump administration. The bloc recently scrapped the AI Liability Directive, a draft law intended to make it easier for consumers to sue over AI-related harms. EU digital chief Henna Virkkunen stated that the move was driven by a desire to enhance competitiveness by reducing bureaucracy and regulatory burdens.

Washington has encouraged a more lenient approach to AI rules, with US Vice President JD Vance urging European lawmakers to embrace the ‘AI opportunity’ during a speech in Paris.

The timing of the European Commission’s 2025 work programme release, one day after Vance’s remarks, has fuelled speculation about US influence over the bloc’s regulatory decisions. However, the EU insists that its focus remains on fostering regional AI development rather than bowing to external pressure.

The upcoming AI code of practice will align reporting requirements with existing AI legislation, ensuring a streamlined regulatory framework. The Commission’s work programme emphasises a ‘bolder, simpler, faster’ approach, aiming to accelerate AI adoption across Europe while maintaining regulatory oversight.

For more information on these topics, visit diplomacy.edu.