AI Safety Institute launches £8.5 million initiative to enhance systemic safety research

The AI Safety Institute is launching an £8.5 million funding scheme to support research on systemic AI safety. The initiative will back studies on preventing unexpected failures in AI technologies and on addressing challenges linked to their rapid deployment.

The Systemic Safety Grants Programme, run in partnership with the Engineering and Physical Sciences Research Council and Innovate UK, will initially fund around 20 projects. Each project can receive up to £200,000 to explore risks AI might present to society in the near future. Additional funding will follow as further phases are introduced.

Systemic AI safety focuses on the broader infrastructure supporting AI across sectors, including healthcare and energy. Ian Hogarth, chair of the institute, emphasised the importance of addressing risks in critical industries. He highlighted that diverse research teams will contribute to building essential knowledge about AI-related threats, such as deepfakes and system malfunctions.

Applications are open until 26 November, with successful projects to be announced by January 2025. Grants will be awarded the following month, supporting efforts to ensure AI systems remain safe, reliable, and trustworthy as their use expands across the economy.

Microsoft’s GenAI head Sebastien Bubeck departs for OpenAI

Sebastien Bubeck, Microsoft’s vice president of GenAI research, is leaving the company to join OpenAI, the AI startup behind ChatGPT. Microsoft has not provided details on the role Bubeck will assume at OpenAI but has confirmed it will maintain its relationship with him through its investment in the company.

While Bubeck did not respond to requests for confirmation, Microsoft stated that he is departing to further his work on artificial general intelligence (AGI). Despite his exit, most of his team working on Microsoft’s smaller Phi language models will stay on to continue their work.

This follows a series of recent departures from OpenAI, including longtime chief technology officer Mira Murati. However, OpenAI CEO Sam Altman has denied that these exits are linked to any planned restructuring of the company.

Bubeck’s departure marks a significant shift in Microsoft’s AI research landscape, but it also underscores the company’s ongoing collaboration with OpenAI, a partner with which it shares a deep investment in the future of AGI.

Export controls for Nvidia AI chips under US review

US officials are considering restricting the sale of advanced AI chips from Nvidia and other American firms to certain countries, focusing on the Persian Gulf region. These deliberations aim to limit exports based on national security concerns, Bloomberg News has reported, citing sources familiar with the discussions.

The idea has gained traction in recent weeks, although plans remain in early stages and may change. Neither the US Commerce Department nor Nvidia commented on the matter. Intel and AMD also did not immediately respond to inquiries from Reuters.

Recent regulatory updates from the Commerce Department could simplify the export process. Data centres in the Middle East may apply for Validated End User status, enabling them to obtain AI chips through a general authorisation, bypassing the need for individual export licences.

In 2023, the Biden administration expanded licensing rules to tighten AI chip exports to over 40 countries, including some Middle Eastern nations, amid concerns that exports might be diverted to China or used in ways conflicting with US security interests.

Singapore launches comprehensive guidelines to secure AI systems

The Cyber Security Agency of Singapore (CSA) has launched its Guidelines and Companion Guide on Securing AI Systems at the Singapore International Cyber Week (SICW) 2024, highlighting the critical need for AI systems to be secure by design and by default. These guidelines aim to assist organisations in implementing AI securely by identifying potential threats such as adversarial attacks and data breaches.

Furthermore, they provide essential security controls and best practices, referencing established international standards to ensure global alignment. To mitigate risks effectively throughout a system’s lifespan, CSA advocates a holistic approach across five key stages of the AI life cycle – Planning and Design, Development, Deployment, Operations and Maintenance, and End of Life.

In addition, the Companion Guide serves as a community-driven resource offering practical measures for system owners, reinforcing the importance of collaboration in addressing AI security challenges. The development of the Guidelines was also informed by a public consultation held from 31 July to 15 September 2024, which drew feedback from stakeholders including AI and tech companies, cybersecurity firms, and professional associations.

That input was instrumental in refining the guidelines, improving clarity, and ensuring alignment with international standards. Consequently, CSA encourages organisational leaders, business owners, and AI and cybersecurity practitioners to adopt these Guidelines as a strategic imperative to enhance the overall cybersecurity posture of AI systems. By doing so, organisations can foster user confidence in their AI implementations, ultimately promoting innovative, safe, and effective outcomes.

Fujitsu unveils AI tool to optimise 5G networks

Fujitsu has launched a new AI-powered service aimed at boosting 5G network performance by predicting traffic surges and adjusting base station operations. The application aims to minimise disruption during peak periods by activating additional base stations when demand requires it.

The system measures network quality in real time, identifying early signs of increased demand to prevent performance drops. It promises improved energy efficiency and reduced operational costs through smarter base station management. Commercial availability is scheduled for next month, integrated into Fujitsu’s open RAN-compliant orchestration platform.
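
Fujitsu has not published the details of its algorithm, but the behaviour the company describes, forecasting demand from recent measurements and powering cells up or down ahead of a surge, can be illustrated with a minimal sketch. Everything below, from the class name to the capacity figures, is a hypothetical simplification rather than Fujitsu’s implementation:

```python
from collections import deque

class BaseStationScaler:
    """Hypothetical sketch of predictive base-station scaling: forecast
    near-term demand from a sliding window of load samples and decide
    how many cells to keep powered on."""

    def __init__(self, capacity_per_station: int, window: int = 12):
        self.capacity = capacity_per_station   # users one station can serve
        self.samples = deque(maxlen=window)    # recent user counts
        self.active_stations = 1

    def forecast(self) -> float:
        """Naive linear extrapolation of the recent load trend."""
        if len(self.samples) < 2:
            return float(self.samples[-1]) if self.samples else 0.0
        step = (self.samples[-1] - self.samples[0]) / (len(self.samples) - 1)
        return self.samples[-1] + step * len(self.samples)

    def observe(self, users: int) -> int:
        """Record a load sample and return how many stations to run."""
        self.samples.append(users)
        predicted = max(0.0, self.forecast())
        # Ceiling division: enough stations for the predicted peak,
        # never fewer than one; idle stations stay powered down.
        self.active_stations = max(1, -(-int(predicted) // self.capacity))
        return self.active_stations

scaler = BaseStationScaler(capacity_per_station=200)
for load in [80, 120, 190, 260, 340]:  # simulated ramp ahead of an event
    print(f"{load} users -> {scaler.observe(load)} station(s)")
```

A production orchestrator would replace the naive linear forecast with a trained traffic model and weigh handover and power-cycling costs before switching stations, but the activate-ahead-of-the-surge logic is the same.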

In trials, the technology improved the user experience for individual applications, supporting 19% more users per base station. The predictive system is particularly effective during events, allowing networks to anticipate pedestrian traffic and adapt without compromising service quality.

Fujitsu’s tool represents a breakthrough in network management by combining traffic forecasting with dynamic resource allocation. Operators can now ensure smoother connectivity and reduce power consumption while keeping pace with fluctuating demand.

OpenAI’s SearchGPT may increase publisher traffic

OpenAI’s head of media partnerships, Varun Shetty, recently stated that the company does not intend to share advertising revenue from its SearchGPT product with publishers. During his address at the Twipe Digital Growth Summit in Brussels, Shetty highlighted OpenAI’s belief that it can provide value to publishers by driving significant traffic from new audiences rather than offering financial compensation. He also acknowledged the importance of a mutually beneficial relationship and indicated that OpenAI is exploring ways to ensure publishers find enough value to remain included in SearchGPT results.

Varun Shetty compared OpenAI’s approach to that of Google’s AI Overviews, which have been criticised for diminishing publishers’ visibility in search results. In contrast, the AI-powered search engine Perplexity has established revenue-sharing agreements with multiple publishers, and Microsoft has announced plans to pay publishers for content featured by its productivity assistant, Copilot. Currently in an experimental phase, SearchGPT aims to provide answers in natural language while clearly indicating sources. OpenAI intends to integrate SearchGPT into its flagship ChatGPT product by the end of the year.

Shetty stressed the need to balance user experience with publisher needs, noting that while users seek answers, they also want to verify information. He assured publishers they could opt out of SearchGPT results if desired, and any publisher wanting to participate only needs to permit OpenAI’s search bot on their site. He emphasised that SearchGPT has the potential to drive significant traffic without complicating the decision-making regarding content training.
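
In practice, that permission is granted through a site’s robots.txt file. As a minimal sketch, assuming OpenAI’s publicly documented crawler names (OAI-SearchBot for search results, GPTBot for model training), a publisher could admit the former while still blocking the latter:

```
# Allow OpenAI's search crawler so pages can appear in SearchGPT results
User-agent: OAI-SearchBot
Allow: /

# Separately opt out of crawling for model training
User-agent: GPTBot
Disallow: /
```

Because the two crawlers carry distinct user agents, the search decision stays independent of the training decision, which is the separation Shetty alluded to.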

Beyond SearchGPT, Shetty described how OpenAI could assist the news industry, noting that while audiences are not interested in AI-generated news, AI can help streamline journalistic tasks, such as story recommendations and multimedia management. He also hinted at advancements in the next GPT model, which will enable more complex user requests, enhancing its usefulness for various applications.

RBI highlights risks of AI in banking and private credit markets

The increasing use of AI and machine learning in financial services globally could pose risks to financial stability, according to the Governor of the Reserve Bank of India (RBI), Shaktikanta Das. Speaking at an event in New Delhi, Das cautioned that reliance on a small number of technology providers could create concentration risks in the sector.

Disruptions or failures in these AI-driven systems could trigger cascading effects throughout the financial industry, amplifying systemic risks, Das warned. In India, financial institutions are already employing AI to improve customer experience, reduce operational costs, and enhance risk management through services like chatbots and personalised banking.

However, AI adoption comes with vulnerabilities, including increased exposure to cyber attacks and data breaches. Das also raised concerns about the ‘opacity’ of AI algorithms, which makes them difficult to audit and could lead to unpredictable market consequences.

Das further emphasised the risks posed by the rapid growth of private credit markets, which operate with limited regulation. He warned that these markets have not been tested under economic downturns, presenting potential challenges to financial stability.

Russian forces ramp up AI-driven drone deployment

Russia has announced a substantial increase in the use of AI-powered drones in its military operations in Ukraine. Russian Defence Minister Andrei Belousov emphasised the importance of these autonomous drones in battlefield tactics, saying they are already deployed in key regions and proving successful in combat situations. Speaking at a next-generation drone technology centre, he called for more intensive training for troops to operate these systems effectively.

Belousov revealed that two units equipped with AI drones are currently stationed in eastern Ukraine and along Russia’s Belgorod and Kursk borders, where they are engaged in active combat. The AI technology enables drones to autonomously lock onto targets and continue missions even if control is lost. Plans are underway to form five additional units to conduct around-the-clock drone operations.

Russia’s ramped-up use of AI drones comes alongside a broader military strategy to increase drone production tenfold, with President Putin aiming to produce 1.4 million units by the year’s end. Both Russia and Ukraine have relied heavily on drones throughout the war, with Ukraine also using them to strike targets deep inside Russian territory.

ESA enhances Destination Earth with AI for climate solutions

The European Space Agency (ESA) is enhancing its Destination Earth platform, an initiative by the European Commission to create a highly accurate digital replica of the Earth, known as a digital twin. The platform focuses on climate-related issues, helping policymakers model the effects of climate change on critical areas such as extreme weather events, sea level rise, rainfall and drought, and biodiversity.

The first version of Destination Earth launched in June 2024, featuring two initial digital twins, with plans to introduce additional twins over the next six years, culminating in a fully operational digital replica by 2030. To enrich its capabilities, ESA is integrating AI technologies, including machine learning, deep learning, and generative AI, with the support of three selected French firms – Atos, Mews Partners, and ACRI-ST.

As a result of these advancements, users will gain access to various algorithms, digital tools, models, simulations, and visualisations, significantly improving the platform’s utility for climate adaptation and mitigation policy-making. The integration of AI is expected to streamline the development process and enhance the overall effectiveness of Destination Earth in addressing climate challenges.

AI pioneer says concerns over AI are exaggerated

In a recent interview with The Wall Street Journal, AI pioneer Yann LeCun dismissed concerns that AI poses an existential threat to humanity, calling them ‘complete B.S.’ LeCun, a professor at New York University and senior researcher at Meta, has been vocal about his scepticism, emphasising that current AI technology is far from achieving human-level intelligence. He previously tweeted that before worrying about super-intelligent AI, we first need to create a system that surpasses the intelligence of a house cat.

LeCun argued that today’s large language models (LLMs) lack essential capabilities like persistent memory, reasoning, planning, and a comprehension of the physical world—skills even a cat possesses. In his view, while these models are adept at manipulating language, this does not equate to true intelligence, and they are not advancing toward developing artificial general intelligence (AGI).

Despite his scepticism about current AI capabilities, LeCun is not entirely dismissive of the potential for AGI in the future. He suggested that developing AGI will require new approaches and pointed to ongoing work by his team at Meta, which is exploring ways to process and understand real-world video data.