Nokia, Windstream Wholesale, and Colt Technology Services have completed the world’s first 800 Gigabit Ethernet (800GbE) service trial, which connects London and Chicago across an impressive 8,500 km subsea and terrestrial route. The collaboration showcased advanced networking technologies that improved capacity, speed, and latency while reducing power consumption on this critical Europe-US route.
By leveraging Colt’s powerful transatlantic subsea cables alongside Windstream’s Intelligent Converged Optical Network (ICON), the trial effectively demonstrated the ability of 800GbE technology to double bandwidth capacity. Consequently, this advancement supports essential applications such as AI data centre networking, content delivery networks, and financial data hub connections.
Moreover, key executives from Colt, Windstream, and Nokia emphasised the trial’s significance in enhancing global connectivity. Buddy Bayer, Chief Operating Officer of Colt, highlighted the commitment to innovation, while Joe Scattareggia, President of Windstream, called it a game-changer for AI-powered applications.
Federico Guillén, President of Network Infrastructure at Nokia, noted the ambitious nature of the project and its potential to set high standards for network reliability. Following the successful trial, the organisations are now exploring options to bring 800GbE connectivity services to market, signalling a proactive approach to meet the evolving demands of the digital landscape.
The increasing use of AI and machine learning in financial services globally could lead to financial stability risks, according to the Governor of the Reserve Bank of India (RBI), Shaktikanta Das. Speaking at an event in New Delhi, Das cautioned that reliance on a small number of technology providers could create concentration risks in the sector.
Disruptions or failures in these AI-driven systems could trigger cascading effects throughout the financial industry, amplifying systemic risks, Das warned. In India, financial institutions are already employing AI to improve customer experience, reduce operational costs, and enhance risk management through services like chatbots and personalised banking.
However, AI adoption comes with vulnerabilities, including increased exposure to cyber attacks and data breaches. Das also raised concerns about the ‘opacity’ of AI algorithms, which makes them difficult to audit and could lead to unpredictable market consequences.
Das further emphasised the risks posed by the rapid growth of private credit markets, which operate with limited regulation. He warned that these markets have not been tested under economic downturns, presenting potential challenges to financial stability.
Russia has announced a substantial increase in the use of AI-powered drones in its military operations in Ukraine. Russian Defence Minister Andrei Belousov emphasised the importance of these autonomous drones in battlefield tactics, saying they are already deployed in key regions and proving successful in combat situations. Speaking at a next-generation drone technology centre, he called for more intensive training for troops to operate these systems effectively.
Belousov revealed that two units equipped with AI drones are currently stationed in eastern Ukraine and along Russia’s Belgorod and Kursk borders, where they are engaged in active combat. The AI technology enables drones to autonomously lock onto targets and continue missions even if control is lost. Plans are underway to form five additional units to conduct around-the-clock drone operations.
Russia’s ramped-up use of AI drones comes alongside a broader military strategy to increase drone production by tenfold, with President Putin aiming to produce 1.4 million units by the year’s end. Both Russia and Ukraine have heavily relied on drones throughout the war, with Ukraine also using them to strike targets deep inside Russian territory.
The European Space Agency (ESA) is enhancing its Destination Earth platform, an initiative by the European Commission to create a highly accurate digital replica of the Earth, known as a digital twin. The platform focuses on climate-related issues, helping policymakers model the effects of climate change on critical areas such as extreme weather events, sea level rise, rainfall and drought, and biodiversity.
The first version of Destination Earth launched in June 2024, featuring two initial digital twins, with plans to introduce additional twins over the next six years, culminating in a fully operational digital replica by 2030. To enrich its capabilities, the ESA is integrating AI technologies, including machine learning, deep learning, and generative AI, with the support of three selected French firms – Atos, Mews Partners, and ACRI-ST.
As a result of these advancements, users will gain access to various algorithms, digital tools, models, simulations, and visualisations, significantly improving the platform’s utility for climate adaptation and mitigation policy-making. The integration of AI is expected to streamline the development process and enhance the overall effectiveness of Destination Earth in addressing climate challenges.
In a recent interview with The Wall Street Journal, AI pioneer Yann LeCun dismissed concerns that AI poses an existential threat to humanity, calling them ‘complete B.S.’ LeCun, a professor at New York University and senior researcher at Meta, has been vocal about his scepticism, emphasising that current AI technology is far from achieving human-level intelligence. He previously tweeted that before worrying about super-intelligent AI, we need to first create a system that surpasses the intelligence of a house cat.
LeCun argued that today’s large language models (LLMs) lack essential capabilities like persistent memory, reasoning, planning, and a comprehension of the physical world—skills even a cat possesses. In his view, while these models are adept at manipulating language, this does not equate to true intelligence, and they are not advancing toward developing artificial general intelligence (AGI).
Despite his scepticism about current AI capabilities, LeCun is not entirely dismissive of the potential for AGI in the future. He suggested that developing AGI will require new approaches and pointed to ongoing work by his team at Meta, which is exploring ways to process and understand real-world video data.
Google has signed the first-ever corporate agreement to source electricity from small modular reactors (SMRs) to power its AI operations. Partnering with Kairos Power, the tech giant plans to bring its first SMR online by 2030, with further installations expected by 2035. The innovative approach aims to ensure a reliable, around-the-clock supply of clean energy, addressing the growing energy demands triggered by the expansion of AI technology.
The agreement outlines Google’s commitment to purchasing 500 megawatts of power from six to seven SMRs, though details regarding the plants’ financial terms and locations remain undisclosed. The power output from these SMRs is significantly smaller than traditional nuclear reactors, but Google’s strategic investment signals a push toward long-term sustainability.
The tech industry’s focus on nuclear energy has gained momentum this year, with companies like Amazon and Microsoft entering similar agreements. According to Goldman Sachs, the demand for data centres in the US is expected to triple between 2023 and 2030. The surge in energy consumption has prompted technology companies to explore alternative energy sources, including nuclear, wind, and solar, to meet future needs.
Kairos Power must navigate regulatory hurdles, including securing permits from the US Nuclear Regulatory Commission (NRC) and local agencies, which could take several years. However, the company achieved a key milestone last year by obtaining a construction permit to build a demonstration reactor in Tennessee, signalling progress toward deploying SMRs.
Despite the enthusiasm for SMRs, critics point to potential challenges, including high costs and the production of long-lasting nuclear waste. However, Google’s decision to commit to an order book framework with Kairos rather than purchasing individual reactors represents a strategic investment to accelerate the development of SMRs while ensuring cost-effectiveness and timely project delivery.
In a lengthy blog post, Anthropic CEO Dario Amodei presented an optimistic vision for the future of AI, asserting that powerful AI could emerge as soon as 2026. He envisions AI that surpasses human intelligence in key fields, capable of performing complex tasks such as solving mathematical theorems and conducting sophisticated experiments. Amodei believes this advanced technology could lead to groundbreaking advancements in healthcare, potentially curing diseases and doubling human lifespans within the next few decades.
Critics are sceptical about Anthropic CEO Dario Amodei’s ambitious claims regarding the future of AI, pointing out current limitations such as the technology’s inability to think independently and the challenges in applying AI solutions in real-world healthcare settings. While Amodei envisions AI tackling global issues like hunger and climate change and boosting economies in developing countries, he concedes that achieving these goals will necessitate substantial global cooperation and philanthropic efforts.
Despite acknowledging the potential risks and biases associated with AI, Dario Amodei does not present concrete solutions for the economic disruptions that may occur as AI replaces human jobs. He suggests that society will need to rethink its economic structure in an AI-dominated future but offers minimal guidance on navigating these changes. While he frames AI as a transformative force for good, sceptics remain cautious about the significant challenges and ethical dilemmas it presents.
Google has introduced ‘Checks by Google’, a new tool designed to help developers and compliance teams ensure that apps, websites, and AI adhere to various standards and regulations. Initially used internally within Google, this tool is now publicly accessible and focuses on three key areas of compliance – app compliance, code compliance, and AI safety.
The app compliance feature evaluates adherence to regulations such as the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and Brazil’s Lei Geral de Proteção de Dados (LGPD). Meanwhile, the code compliance aspect aids developers in identifying regulatory issues during the app development process.
Furthermore, the AI safety component addresses compliance and ethical standards related to AI, particularly targeting potential biases and safety concerns in AI-generated outputs. In addition to these features, ‘Checks by Google’ employs a fine-tuned Large Language Model and a smart AI crawler for thorough assessments, thereby providing insights into compliance without offering legal advice.
Moreover, the tool is customisable to meet the specific needs of various industries, such as finance and healthcare. Currently available for free, with additional paid services for enterprises, ‘Checks by Google’ has the potential to transform how developers navigate compliance in an increasingly complex regulatory environment.
Suki, a healthcare startup developing AI-powered voice assistants, has raised $70 million in a Series D funding round led by London-based Hedosophia, with participation from Venrock and March Capital. The latest funding brings Suki’s total to $165 million and reportedly values the company at around $500 million. The Redwood City-based startup aims to reduce the administrative burden on healthcare providers with AI tools that streamline tasks like clinical documentation.
Founded in 2017 by former Google and Flipkart executive Punit Soni, Suki has seen growing demand for its products, particularly its Suki Assistant and Suki Platform, as more healthcare systems adopt generative AI technology. The startup now partners with over 300 health systems, including St. Mary’s Healthcare in New York, and integrates with major Electronic Health Record (EHR) systems such as Epic and Oracle’s Cerner.
Suki plans to use the new funding to further develop its AI assistant, adding new features and tools to manage multiple AI models. Competing in the same space as Microsoft’s Nuance and other startups like Abridge, Suki continues to expand its footprint in the AI healthcare market.
TikTok, owned by ByteDance, is cutting hundreds of jobs globally as it pivots towards greater use of AI in content moderation. Among the hardest hit is Malaysia, where fewer than 500 employees were affected, mostly involved in moderation roles. The layoffs come as TikTok seeks to improve the efficiency of its moderation system, relying more heavily on automated detection technologies.
The firm’s spokesperson explained that the move is part of a broader plan to optimise its global content moderation model, aiming for more streamlined operations. TikTok has announced plans to invest $2 billion in global trust and safety measures, with 80% of harmful content already being removed by AI.
The layoffs in Malaysia follow increased regulatory pressure on technology companies operating in the region. Malaysia’s government recently urged social media platforms, including TikTok, to enhance their monitoring systems and apply for operating licences to combat rising cybercrime.
ByteDance, which employs over 110,000 people worldwide, is expected to continue restructuring next month as it consolidates some of its regional operations. These changes highlight the company’s ongoing shift towards automation in its content management strategy.