Data centre power demand set to triple by 2035

Data centre electricity demand is forecast to almost triple by 2035, with BloombergNEF projecting that global facilities will draw around 106 gigawatts by then.

Analysts linked the growth to larger sites and rising AI workloads, pushing utilisation rates higher. New projects are expanding rapidly, with many planned facilities exceeding 500 megawatts.

Major capacity is heading to states within the PJM grid, alongside significant additions in Texas. Regulators warned that grid operators must restrict connections when capacity risks emerge.

Industry monitors argued that soaring demand contributes to higher regional electricity prices. They urged clearer rules to ensure reliability as the number of early-stage projects continues to grow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

When AI use turns dangerous for diplomats

Diplomats are increasingly turning to tools like ChatGPT and DeepSeek to speed up drafting, translating, and summarising documents, a trend Jovan Kurbalija describes as the rise of ‘Shadow AI.’ These platforms, often used through personal accounts or consumer apps, offer speed and convenience that overstretched diplomatic services struggle to match.

But the same ease of use that makes Shadow AI attractive also creates a direct clash with diplomacy’s long-standing foundations of discretion and controlled ambiguity.

Kurbalija warns that this quiet reliance on commercial AI platforms exposes sensitive information in ways diplomats may not fully grasp. Every prompt, whether drafting talking points, translating notes, or asking for negotiation strategies, reveals assumptions, priorities, and internal positions.

Over time, this builds a detailed picture of a country’s concerns and behaviour, stored on servers outside diplomatic control and potentially accessible through foreign legal systems. The risk is not only data leakage but also the erosion of diplomatic craft, as AI-generated text encourages generic language, inflates documents, and blurs the national nuances essential to negotiation.

The problem, Kurbalija argues, is rooted in a ‘two-speed’ system. Technology evolves rapidly, while institutions adapt slowly.

Diplomatic services can take years to develop secure, in-house tools, while commercial AI is instantly available on any phone or laptop. Yet the paradox is that safe, locally controlled AI, based on open-source models, is technically feasible and financially accessible. What slows progress is not technology, but how ministries manage and value knowledge, their core institutional asset.

Rather than relying on awareness campaigns or bans, which rarely change behaviour, Kurbalija calls for a structural shift, where foreign ministries must build trustworthy, in-house AI ecosystems that keep all prompts, documents, and outputs within controlled government environments. That requires redesigning workflows, integrating AI into records management, and empowering the diplomats who have already experimented informally with these tools.

Only by moving AI from the shadows into a secure, well-governed framework, he argues, can diplomacy preserve its confidentiality, nuance, and institutional memory in the age of AI.

Singapore and the EU advance their digital partnership

The European Union and Singapore met in Brussels for the second Digital Partnership Council, reinforcing a joint ambition to strengthen cooperation across a broad set of digital priorities.

Both sides expressed a shared interest in improving competitiveness, expanding innovation and shaping common approaches to digital rules instead of relying on fragmented national frameworks.

Discussions covered AI, cybersecurity, online safety, data flows, digital identities, semiconductors and quantum technologies.

Officials highlighted the importance of administrative arrangements on AI safety and explored potential future cooperation on language models, including the EU’s work on the Alliance for Language Technologies and Singapore’s Sea-Lion initiative.

Efforts to protect consumers and support minors online were highlighted, alongside the potential role of age verification tools.

Further exchanges focused on trust services and the interoperability of digital identity systems, as well as collaborative research on semiconductors and quantum technologies.

Both sides emphasised the importance of robust cyber resilience and ongoing evaluation of cybersecurity risks, rather than relying on reactive measures. The recently signed Digital Trade Agreement was welcomed for improving legal certainty, building consumer trust and reducing barriers to digital commerce.

The meeting between the EU and Singapore confirmed the importance of the partnership in supporting economic security, strengthening research capacity and increasing resilience in critical technologies.

It also reflected the wider priorities outlined in the European Commission’s International Digital Strategy, which places particular emphasis on cooperation with Asian partners across emerging technologies and digital governance.

UNDP highlights rising inequality in the AI era

AI is developing at an unprecedented speed, but a growing number of countries lack the necessary infrastructure, digital skills, and governance systems to benefit from it. According to a new UNDP report, this imbalance is already creating economic and social strain, especially in states that are unprepared for rapid technological change.

The report warns that the risk is the emergence of a ‘Next Great Divergence,’ in which global inequalities deepen as advanced economies adopt AI while others fall further behind.

The study, titled ‘The Next Great Divergence: Why AI May Widen Inequality Between Countries,’ highlights Asia and the Pacific as the region where these trends are most visible. Home to some of the world’s fastest-growing economies as well as countries with limited digital capacity, the region faces a widening gap in digital readiness and institutional strength.

Without targeted investment and smarter governance, many nations may struggle to harness AI’s potential while becoming increasingly vulnerable to its disruptions.

To counter this trajectory, the UNDP report outlines practical strategies for governments to build resilient digital ecosystems, expand access to technology, and ensure that AI supports inclusive human development. These recommendations aim to help countries adopt AI in a manner that strengthens, rather than undermines, economic and social progress.

The publication is the result of a multinational effort involving researchers and institutions across Asia, Europe, and North America. Contributors include teams from the Massachusetts Institute of Technology, the London School of Economics and Political Science, the Max Planck Institute for Human Development, Tsinghua University, the University of Science and Technology of China, the Aapti Institute, and India’s Digital Future Lab, whose collective insights shaped the report’s findings and policy roadmap.

Poetic prompts reveal gaps in AI safety, according to study

Researchers in Italy have found that poetic language can weaken the safety barriers used by many leading AI chatbots.

The study, by Icaro Lab, part of DexAI, examined whether poems containing harmful requests could provoke unsafe answers from widely deployed models across the industry. The team wrote twenty poems in English and Italian, each ending with explicit instructions that AI systems are trained to block.

The researchers tested the poems on twenty-five models developed by nine major companies. Poetic prompts produced unsafe responses in more than half of the tests.

Some models appeared more resilient than others. OpenAI’s GPT-5 Nano avoided unsafe replies in every case, while Google’s Gemini 2.5 Pro generated harmful content in all tests. Two Meta systems produced unsafe responses to twenty percent of the poems.
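The study’s headline figures boil down to a per-model attack success rate: the share of the twenty poems that elicited an unsafe reply. A minimal sketch of that tally might look like the following; the function name and the illustrative result lists are hypothetical, not the paper’s actual data, except for the percentages quoted above.

```python
def unsafe_rate(results):
    """Fraction of prompts that produced an unsafe response.

    results: list of booleans, one per poem, where True means the
    model generated harmful content for that prompt.
    """
    return sum(results) / len(results)

# Illustrative examples matching the rates reported in the article:
gpt5_nano_like = [False] * 20            # no unsafe replies in any test
gemini_pro_like = [True] * 20            # unsafe content in all tests
meta_like = [True] * 4 + [False] * 16    # unsafe for 20% of the poems

assert unsafe_rate(gpt5_nano_like) == 0.0
assert unsafe_rate(gemini_pro_like) == 1.0
assert unsafe_rate(meta_like) == 0.2
```

In the study itself, of course, judging whether a reply is "unsafe" is the hard part; the arithmetic above only aggregates those judgements.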

Researchers also argue that poetic structure disrupts the predictive patterns large language models rely on to filter harmful material. The unconventional rhythm and metaphor common in poetry make the underlying safety mechanisms less reliable.

Additionally, the team warned that adversarial poetry can be used by anyone, which raises concerns about how easily safety systems may be manipulated in everyday use.

Before releasing the study, the researchers contacted all companies involved and shared the full dataset with them.

Anthropic confirmed receipt and stated that it was reviewing the findings. The work has prompted debate over how AI systems can be strengthened as creative language becomes an increasingly common method for attempting to bypass safety controls.

Australia launches national AI plan to drive innovation

The Australian Government has unveiled its National AI Plan, aiming to harness AI to build a fairer, stronger nation. The plan helps government, industry, researchers and communities collaborate to ensure everyone benefits as technology transforms the economy and society.

AI is reshaping work, learning and service delivery across Australia, boosting productivity, competitiveness and resilience. The plan outlines a path for developing trusted AI solutions while promoting investment, innovation and national capability.

Key initiatives focus on spreading benefits widely, supporting small businesses, regional communities and groups at risk of digital exclusion.

Programs such as the AI Adopt Program and the National AI Centre provide guidance and resources. At the same time, digital skills initiatives aim to increase AI literacy across schools, TAFEs and community organisations.

Safety and trust remain central, with the government establishing the AI Safety Institute to monitor risks and ensure the ethical adoption of AI. Legal, regulatory and ethical frameworks will be reviewed to protect Australians and establish the country as a leader in global AI standards.

Philips launches AI-powered spectral CT system

Philips has unveiled Verida, the world’s first detector-based spectral CT fully powered by AI. The system integrates AI across the imaging chain, enhancing image quality, lowering system noise, and streamlining clinical workflow for faster, more precise diagnostics.

Spectral CT allows tissues to be distinguished based on how they absorb different X-ray energies, providing insights that conventional scans cannot. Verida reconstructs 145 images per second and completes exams in under 30 seconds, allowing up to 270 scans daily with lower doses and up to 45% less energy use.

Clinicians are already seeing benefits, especially in cardiac imaging. Prof. Eliseo Vañó Galván, Chairman of CT & MR at Hospital Nuestra Sra. del Rosario in Madrid, said the system could boost confidence, reduce invasive procedures, and expand the use of spectral imaging.

Built for high-demand environments, Verida combines AI-driven reconstruction with Philips’ Nano-panel dual-layer detector and proprietary Spectral Precise Image technology. The system is CE-marked and 510(k)-pending, with availability in select markets expected in 2026.

Accenture and OpenAI expand AI adoption worldwide

Accenture is partnering with OpenAI to embed ChatGPT Enterprise and to upskill tens of thousands of professionals through OpenAI Certifications. The initiative represents the most extensive professional upskilling programme powered by OpenAI.

A new flagship AI client programme will combine OpenAI’s enterprise products with Accenture’s deep industry expertise. The programme will help clients adopt AI in key functions like customer service, finance, HR and supply chain, automating workflows and improving decision-making.

The collaboration will leverage OpenAI’s AgentKit and other advanced tools to design, test and deploy custom AI agents rapidly. By integrating agentic AI, Accenture aims to accelerate enterprise reinvention and create measurable economic value for its clients.

Accenture and OpenAI have already worked with many of the world’s largest enterprises, including Walmart, Salesforce, PayPal and Morgan Stanley. The partnership enhances both firms’ global AI adoption and helps organisations unlock new growth opportunities.

Most German researchers now use AI

Around 80 percent of researchers at Germany’s Max Planck and Fraunhofer societies report using AI in their work, according to a survey of more than 6,200 respondents published in Research Policy.

Nearly half said they were very familiar with AI tools, while another 44 percent had used AI a few times.

The study shows a rapid rise in AI use since 2023, when just 17 percent of researchers used generative AI weekly. Many respondents now employ AI for core and creative research tasks, with 37 percent citing its use in innovative work processes.

Demographic trends reveal that older researchers and women are less likely to use AI, although lower familiarity rather than scepticism drives these differences. Researchers view AI as transformative, acting increasingly as a co-creator or manager rather than merely an automation tool.

Measures such as training, supportive learning environments and legal guidance could further boost AI adoption. Although the study is limited to Germany, the findings point to a profound transformation in research driven by AI technologies.

Fake AI product photos spark concerns for online retailers

Chinese shoppers are increasingly using AI to create fake product photos to claim refunds, raising ethical and legal concerns. The practice was highlighted during the Double 11 festival, with sellers receiving images of allegedly damaged goods.

Some buyers manipulated photos of fruit to appear mouldy or altered images of electric toothbrushes to look rusty. Clothing and ceramic product sellers also detected AI-generated inconsistencies, such as unnatural lighting, distorted edges, or visible signs of manipulation.

In some cases, requests were withdrawn after sellers asked for video evidence.

E-commerce platforms have historically favoured buyers, granting refunds even when claims seem unreasonable. In response, major platforms such as Taobao and Tmall removed the ‘refund only’ option and introduced buyer credit ratings based on purchase and refund histories.

Sellers are also increasingly turning to AI tools to verify images.

China’s AI content rules, effective from 1 September, require AI-generated material to be labelled, but detection remains difficult. Legal experts warn that using AI to claim refunds could constitute fraud, with calls for stricter enforcement to prevent abuse.
