AI boom drives massive surge in data centre power demand

According to Goldman Sachs, the surge in AI is set to transform global energy markets, with data centres expected to consume 165% more electricity by 2030 than in 2023. The bank reports that US spending on data centre construction has tripled in just three years, while occupancy rates at existing facilities remain close to record highs.

The demand is driven by hyperscale operators like Amazon Web Services, Microsoft Azure, and Google Cloud, which are rapidly expanding their infrastructure to meet the power-hungry needs of AI systems.

Global data centres draw about 55 gigawatts of power, more than half of which supports cloud computing. Traditional workloads like email and storage still account for a third, while AI represents just 14%.

However, Goldman Sachs projects that by 2027, overall consumption could rise to 84 gigawatts, with AI’s share growing to over a quarter. That shift is straining grids and pushing operators toward new solutions as AI servers can consume ten times more electricity than traditional racks.
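The figures above can be sanity-checked with simple arithmetic. The sketch below is a rough back-of-the-envelope calculation, assuming the cited numbers (55 GW today with a 14% AI share; 84 GW by 2027, taking 25% as the lower bound of ‘over a quarter’) and nothing else:

```python
# Back-of-the-envelope check on the Goldman Sachs figures cited above.
# Assumed inputs: 55 GW today at a 14% AI share; 84 GW by 2027 at a
# 25% AI share (the lower bound of "over a quarter").

current_total_gw = 55
current_ai_share = 0.14
projected_total_gw = 84
projected_ai_share = 0.25

ai_now_gw = current_total_gw * current_ai_share        # AI load today
ai_2027_gw = projected_total_gw * projected_ai_share   # AI load in 2027

growth_factor = ai_2027_gw / ai_now_gw
print(f"AI load grows from {ai_now_gw:.1f} GW to {ai_2027_gw:.1f} GW "
      f"(~{growth_factor:.1f}x)")
```

On these assumptions, AI's electricity draw would roughly triple (from about 7.7 GW to 21 GW) in just a few years, which illustrates why grid operators are concerned even though AI remains a minority of total data centre load.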

Meeting this demand will require massive investment. Goldman Sachs estimates that global grid upgrades could cost as much as US$720 billion by 2030, with US utilities alone needing an additional US$50 billion in new generation capacity for data centres.

While renewables like wind and solar are increasingly cost-competitive, their intermittent output means operators lean on hybrid models with backup gas and battery storage. At the same time, technology companies are reviving interest in nuclear power, with contracts for over 10 gigawatts of new capacity signed in the US last year.

The expansion is most evident in Europe and North America, with Nordic countries, Spain, and France attracting investment due to their renewable energy resources. At the same time, hubs like Germany, Britain, and Ireland rely on incentives and established ecosystems. Yet, uncertainty remains.

Advances like DeepSeek, a Chinese AI model reportedly as capable as US systems but more efficient, could temper power demand growth. For now, however, the trajectory is clear: AI is reshaping the data centre industry and the global energy landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI scams target seniors’ savings

Cybersecurity experts have warned that AI is being used to target senior citizens in sophisticated financial scams. In the so-called ‘Phantom Hacker’ scam, fraudsters impersonate tech support, bank, and government workers to steal seniors’ life savings.

In the first stage, a fake tech support worker gains access to the victim’s computer under the pretence of checking accounts for fraud. A fraud department impersonator then tells victims to transfer funds to a ‘safe’ account allegedly at risk from foreign hackers.

A fake government worker then directs the victim to transfer money to an alias account controlled by the scammers. Check Point CIO Pete Nicoletti says AI helps scammers identify targets by analysing social media and online activity.

Experts stress that reporting the theft immediately is crucial. Delays significantly reduce the chance of recovering stolen funds, leaving many victims permanently defrauded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated media must now carry labels in China

China has introduced a sweeping new law that requires all AI-generated content online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.

The law, first announced in March by the Cyberspace Administration of China, mandates that all AI-created text, images, video and audio must carry explicit and implicit markings.

These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.

Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.

While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.

Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT safety checks may trigger police action

OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.

The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.

The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.

By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.

The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.

OpenAI is working to strengthen consistency across interactions and developing parental controls, new interventions for risky behaviour, and potential connections to professional help before crises worsen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI oversight and audits at core of Pakistan’s security plan

Pakistan plans to roll out AI-driven cybersecurity systems to monitor and respond to attacks on critical infrastructure and sensitive data in real time. Documents from the Ministry for Information Technology outline a framework to integrate AI into every stage of security operations.

The initiative will enforce protocols like secure data storage, sandbox testing, and collaborative intelligence sharing. Human oversight will remain mandatory, with public sector AI deployments registered and subject to transparency requirements.

Audits and impact assessments will ensure compliance with evolving standards, backed by legal penalties for breaches. A national policy on data security will define authentication, auditing, and layered defence strategies across network, host, and application levels.

New governance measures include identity management policies with multi-factor authentication, role-based controls, and secure frameworks for open-source AI. AI-powered simulations will help anticipate threats, while regulatory guidelines address risks from disinformation and generative AI.

Regulatory sandboxes will allow enterprises in Pakistan to test systems under controlled conditions, with at least 20 firms expected to benefit by 2027. Officials say the measures will balance innovation with security, safeguarding infrastructure and citizens.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta under fire over AI deepfake celebrity chatbots

Meta faces scrutiny after a Reuters investigation found its AI tools created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.

The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.

The probe also uncovered a chatbot of 16-year-old actor Walker Scobell producing inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen bots shortly before the report was published.

A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.

Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.

California Attorney General Rob Bonta called exposing children to sexualised content ‘indefensible,’ and experts warned Meta could face legal challenges over intellectual property and publicity laws.

The case highlights broader concerns about AI safety and ethical boundaries. It also raises questions about regulatory oversight as social media platforms deploy tools that can create realistic deepfake content without proper guardrails.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple creates Asa chatbot for staff training

Apple is moving forward with its integrated approach to AI by testing an internal chatbot designed for retail training. The company focuses on embedding AI into existing services rather than launching a consumer-facing chatbot like Google’s Gemini or ChatGPT.

The new tool, Asa, is being tested within Apple’s SEED app, which offers training resources for store employees and authorised resellers. Asa is expected to improve learning by allowing staff to ask open-ended questions and receive tailored responses.

Screenshots shared by analyst Aaron Perris show Asa handling queries about device features, comparisons, and use cases. Although still in testing, the chatbot is expected to expand across Apple’s retail network in the coming weeks.

The development occurs amid broader AI tensions, as Elon Musk’s xAI sued Apple and OpenAI for allegedly colluding to limit competition. Apple’s focus on internal AI tools like Asa contrasts with Musk’s legal action, highlighting disputes over AI market dominance and platform integration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Walmart rolls out AI agents to transform shopping and operations

Walmart has unveiled four AI agents to ease the workloads of shoppers, employees, and suppliers. The tools, revealed at the company’s Retail Rewired event, include Marty for suppliers, Sparky for customers, an Associate Agent for staff, and a Developer Agent.

The retailer is leaning on AI as inflation, tariffs, and policy pressures weigh on consumer spending. Its agents cover payroll, time-off requests, merchandising, and personalised shopping recommendations.

Sparky is set to eventually handle automatic reordering of staples, aiming to simplify everyday restocking for households.

Walmart is also investing in ‘digital twins,’ virtual replicas of stores that allow early detection of operational issues. The company says this technology cut emergency alerts by 30% last year and reduced refrigeration maintenance costs by nearly a fifth.

Machine learning is further being applied to improve delivery-time predictions, helping to boost efficiency and customer satisfaction.

Rival retailers are making similar moves. Amazon reported a surge in generative AI use during its Prime Day sales, while Google Cloud AI has partnered with Lush to cut training costs.

Analysts suggest such tools could reshape the retail experience as companies search for ways to hold margins in a tighter economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Beijing seeks to curb excess AI investment while sustaining growth

China has pledged to rein in excessive competition in AI, signalling Beijing’s desire to avoid wasteful investment while keeping the technology central to its economic strategy.

The National Development and Reform Commission stated that provinces should develop AI in a coordinated manner, leveraging local strengths to prevent duplication and overlap. Officials in China emphasised the importance of orderly flows of talent, capital, and resources.

The move follows President Xi Jinping’s warnings about unchecked local investment. Authorities aim to prevent overcapacity problems, such as those seen in electric vehicles, which have fuelled deflationary pressures in other industries.

While global investment in data centres has surged, Beijing is adopting a calibrated approach. The state also vowed stronger national planning and support for private firms, aiming to nurture new domestic leaders in AI.

At the same time, policymakers are pushing to attract private capital into traditional sectors, while considering more central spending on social projects to ease local government debt burdens and stimulate long-term consumption.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces turmoil as AI hiring spree backfires

Mark Zuckerberg’s ambitious plan to assemble a dream team of AI researchers at Meta has instead created internal instability.

High-profile recruits poached from rival firms have begun leaving within weeks of joining, citing cultural clashes and frustration with the company’s working style. Their departures have disrupted projects and unsettled long-time executives.

Meta had hoped its aggressive hiring spree would help the company rival OpenAI, Google, and Anthropic in developing advanced AI systems.

Instead of strengthening the company’s position, the strategy has led to delays in projects and uncertainty about whether Meta can deliver on its promises of achieving superintelligence.

The new arrivals were given extensive autonomy, fuelling tensions with existing teams and creating leadership friction. Some staff viewed the hires as destabilising, while others expressed concern about the direction of the AI division.

The resulting turnover has left Meta struggling to maintain momentum in its most critical area of research.

As Meta faces mounting pressure to demonstrate progress in AI, the setbacks highlight the difficulty of retaining elite talent in a fiercely competitive field.

Zuckerberg’s recruitment drive, rather than propelling Meta ahead, risks slowing down the company’s ability to compete at the highest level of AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!