Panthalassa raises $140m to develop wave-powered AI computing

Panthalassa has raised $140 million in a Series B funding round led by investor Peter Thiel to advance technology that uses ocean wave energy to power AI computing systems.

According to the company, the funding will support the development of offshore nodes that generate electricity from wave energy and run AI computing onboard. Data from these systems is transmitted via low-Earth-orbit satellites.

Panthalassa said the initiative responds to increasing demand for computing capacity and constraints faced by terrestrial data centres, including electricity supply, cooling requirements, and infrastructure limitations.

The company stated that its systems operate in offshore environments and use locally generated energy to power computing equipment, with ocean conditions providing cooling.

Panthalassa has previously deployed prototype systems and said the new funding will support completion of a pilot manufacturing facility and deployment of additional nodes, with commercial operations targeted for 2027.

Peter Thiel said the approach expands computing infrastructure beyond traditional locations, while company representatives described the technology as a potential source of clean energy for AI systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UNDP supports AI training for Tajikistan parliament members

The United Nations Development Programme has supported training sessions for members of the Parliament of Tajikistan, focusing on AI and modern digital tools. The initiative aims to strengthen legislative processes and institutional capacity.

Discussions covered AI use in policymaking, legislative analysis and public engagement, alongside topics such as strategic planning and anti-corruption measures. The UNDP sessions brought together parliamentarians and staff to share international and national experience.

Officials highlighted that AI can support evidence-based decision making and improve efficiency, while requiring attention to transparency, ethics and accountability. Cooperation with UNDP was described as key to adapting global best practices.

The programme includes an ongoing needs assessment to identify priorities for further development and institutional strengthening. The activities are being carried out with UNDP support in Tajikistan.

Kazakhstan government reviews plans to expand AI across sectors under digital strategy

The Government of the Republic of Kazakhstan has reviewed plans to expand AI across all sectors under the proposed Digital Qazaqstan strategy. The initiative aims to drive long-term economic modernisation through digital technologies.

Officials highlighted AI as a key tool for improving productivity, industrial safety and economic planning. The strategy also focuses on strengthening infrastructure, including computing capacity and data systems.

The government stressed the need for better data access, investment incentives and stronger private sector involvement. Measures will also target skills development and support for smaller businesses adopting AI.

Authorities said AI could enhance forecasting and policy effectiveness, but that safeguards for personal data and intellectual property are required. The strategy is being developed and implemented in Kazakhstan.

US Federal Reserve highlights AI risks and benefits in the banking system

A US Federal Reserve speech highlights the growing role of AI and emerging technologies in the banking sector and notes that they introduce new risks alongside potential benefits. The remarks stress the need for regulators to closely monitor these developments.

The speech notes that AI could affect areas such as risk management, decision-making and operational processes within financial institutions. It emphasises that rapid adoption may outpace existing oversight frameworks.

Officials said supervision and governance are important to ensure AI is used responsibly. Banks are expected to manage risks effectively while maintaining transparency and accountability in their use of technology.

The Federal Reserve said adapting regulatory approaches will be essential to address technological change while preserving financial stability. The speech was delivered as part of policy discussions in the US.

The Academy introduces rules excluding AI-generated work from Oscar eligibility

The Academy’s Board of Governors has introduced new rules excluding AI-generated performances and screenplays from eligibility for the Oscars. The updated rules require that recognised work be created and performed by humans.

Under the updated framework, only performances credited in a film’s legal billing and demonstrably carried out by individuals with their consent will qualify for an Oscar. Screenplays must also be authored by humans, with the Academy reserving the right to request further disclosure on the use of AI in production.

The update comes as AI technologies are increasingly used in filmmaking, including digital recreations of actors and synthetic performers. Industry tensions around AI have grown in recent years, including during the 2023 writers’ and actors’ strikes.

The move is described as part of efforts within the creative sector to preserve human authorship and artistic control as generative AI tools expand across media production.

UK AI sector survey to map growth trends and policy direction

The UK government is stepping up efforts to better understand the structure and growth of its AI sector through an updated national survey led by the Department for Science, Innovation and Technology.

The research, conducted by Ipsos and supported by Perspective Economics, aims to gather direct insights from businesses operating in the UK AI ecosystem. The findings are expected to inform future government policy on AI and sector development.

Participation is voluntary and confidential. Respondents are drawn from senior leadership roles, including chief executives, chief technology officers, company directors, and senior members of AI or data science teams. The survey focuses on business activity, products and services, and longer-term growth plans across the sector.

Fieldwork is taking place between late April and the end of May 2026 using online questionnaires and telephone interviews. Each session is expected to last around 15 to 20 minutes, allowing businesses to contribute structured input without significant disruption to normal operations.

The initiative reflects a wider UK policy priority: ensuring that government strategy keeps pace with developments in AI innovation and commercial growth. By drawing on direct industry evidence rather than relying only on secondary analysis, policymakers aim to build a more accurate picture of the country’s evolving AI landscape, in line with the survey’s stated purpose of informing government AI policy.

Why does it matter?

AI policy is much easier to design in theory than in a market that is changing quickly and unevenly. If the government lacks current information on how AI firms are growing, what products they are developing, and where the main constraints lie, it risks shaping policy based on outdated assumptions. Direct input from businesses gives policymakers a stronger basis for decisions on support, regulation, skills, and investment, especially at a time when the UK is trying to turn AI ambition into measurable economic capacity.

Code for America highlights challenges in measuring AI use in public services in the US states

According to Code for America, AI is reshaping how public services are delivered across the United States, yet adoption remains uneven and difficult to measure. It added that state governments are rapidly embracing AI through low-risk pilot programmes while still lacking clear frameworks to evaluate impact.

The report describes AI adoption as following a staged progression beginning with readiness, where leadership structures, workforce skills and infrastructure are developed.

Piloting then introduces experimentation through sandboxes and limited deployments, while implementation embeds AI into operational systems such as fraud detection, document automation, research support and citizen-facing chat assistants.

The report also notes that despite growing experimentation, most US states have not yet transitioned into fully operational and measurable systems.

Leading states, including Utah, New Jersey, Pennsylvania, North Carolina, Maryland, Texas and Vermont, are advancing institutional capabilities required to govern AI as a long-term public asset. Others, such as West Virginia, Wyoming, Nebraska, Alaska, Florida and Kansas, remain at earlier stages of readiness and adoption.

The report identifies measuring outcomes as a key challenge. It states that while AI promises efficiency gains and cost reductions, short-term deployment often increases workload for public employees before benefits materialise.

It adds that evaluation frameworks remain underdeveloped, leaving governments with strong governance structures but limited visibility into real performance improvements.

According to Amanda Renteria, CEO of Code for America, the opportunity extends beyond adoption alone, as governments must shape AI in ways that are human-centred and grounded in measurable public outcomes.

The report suggests that states that succeed in aligning technology with real community impact will move beyond experimentation and define the future of public service in the AI era.

UK’s NCSC warns AI could expose software vulnerabilities at scale

The UK’s National Cyber Security Centre (NCSC) warns that AI is reshaping cybersecurity by exposing vulnerabilities across software ecosystems, and that organisations must prepare for a large-scale wave of patching as AI enables faster identification and exploitation of weaknesses than traditional defences can handle.

Technical debt, built through years of prioritising short-term efficiency instead of long-term resilience, is now being exposed at scale.

The NCSC notes that AI capabilities enable attackers to identify weaknesses faster and more comprehensively, creating pressure on organisations to respond with rapid and coordinated patching strategies across entire technology environments.

The NCSC’s recommended approach prioritises internet-facing systems and external attack surfaces, followed by internal infrastructure and critical security assets.

Automated updates and hot patching are encouraged where available, while organisations lacking such capabilities must adopt scalable and risk-based update processes. Legacy systems without support present a particular risk, requiring replacement instead of reliance on patching alone.

The NCSC adds that beyond software updates, the challenge reflects a deeper structural issue within digital ecosystems. Stronger cyber resilience depends on reducing systemic vulnerabilities through secure design practices, improved monitoring and supply chain readiness.

It also warned that organisations that fail to prepare for continuous, large-scale patching cycles risk increased exposure as AI continues to reshape the cybersecurity landscape.

US military expands AI deployment across classified networks

The US Department of Defense has announced agreements with leading technology firms to deploy advanced AI capabilities across classified military networks. The initiative forms part of a broader effort to position the United States as a more AI-enabled military power.

Companies including OpenAI, Google, Microsoft, Amazon Web Services, NVIDIA, and SpaceX are reported to be involved in supporting deployment within high-security Impact Level 6 and 7 environments. The integration is intended to improve data synthesis, situational awareness, and operational decision-making across defence systems.

The department’s internal platform, GenAI.mil, is also being presented as a central part of this push, with senior officials describing it as a way to put advanced AI tools into the hands of personnel across the department and across different classification levels.

Officials have emphasised that maintaining access to a range of AI providers is important to avoid vendor lock-in and preserve long-term flexibility. In the Pentagon’s reported framing, the move is part of a wider attempt to strengthen national security through advanced technology while keeping the military AI stack diversified rather than dependent on a single company or model family.

Victorian officials outline approach to managing AI risks in public sector

Ian Pham at the Victorian Managed Insurance Authority (VMIA) outlined approaches to managing AI adoption during the PSN Victorian Government Cyber Security Showcase. Organisations face the challenge of adopting AI while maintaining effective risk management as these systems become more embedded in government operations.

Cybersecurity teams have traditionally operated with a risk-averse approach focused on minimising threats. Such an approach can slow innovation when applied to AI systems used in public sector environments.

A shift towards managing risk in line with organisational objectives is presented as necessary. This includes prioritising relevant risks and moving from reactive responses towards supporting decision-making processes.

AI adoption involves secure environments for experimentation with defined guardrails, including synthetic or non-sensitive data, monitoring mechanisms, usage conditions, and identity and access controls. Exposure can then be increased gradually, supported by governance and continuous reassessment.

Risks linked to AI systems include data leakage, privacy concerns, unauthorised use, and data quality issues. These risks are described as requiring visibility and management, alongside organisational awareness and engagement to support confidence in AI use.
