Amazon announces AU$20 billion investment in Australian solar-powered data centres

Amazon will invest AU$20 billion to expand its data centre infrastructure in Australia, using solar and wind power instead of traditional energy sources.

The plan includes power purchase agreements with three utility-scale solar plants developed by European Energy, one of which—Mokoan Solar Park in Victoria—is already operational. The other two projects, Winton North and Bullyard Solar Parks, are expected to lift total solar capacity to 333MW.

The investment supports Australia’s aim to enhance its cloud and AI capabilities. Amazon’s commitment includes purchasing over 170MW of power from these projects, contributing to both data centre growth and the country’s renewable energy transition.

According to the International Energy Agency, electricity demand from data centres is expected to more than double by 2030, driven by AI.

Amazon Web Services CEO Matt Garman said the move positions Australia to benefit from AI’s economic potential. The company, already active in solar projects across New South Wales, Queensland and Victoria, continues to prioritise renewables to decarbonise operations and meet surging energy needs.

Instead of pursuing growth through conventional means, Amazon’s focus on clean energy could set a precedent for other tech giants expanding in the region.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI turns to Google Cloud in shift from solo AI race

OpenAI has entered into an unexpected partnership with Google, using Google Cloud to support its growing AI infrastructure needs.

Despite being fierce competitors in AI, the two tech giants recognise that long-term success may require collaboration instead of isolation.

As the demand for high-performance hardware soars, traditional rivals join forces to keep pace. OpenAI, previously backed heavily by Microsoft, now draws from Google’s vast cloud resources, hinting at a changing attitude in the AI race.

Rather than going it alone, firms may benefit more by leveraging each other’s strengths to accelerate development.

Google CEO Sundar Pichai, speaking on a podcast, suggested there is room for multiple winners in the AI sector. He even noted that a major competitor had ‘invited me to a dance’, underscoring a new phase of pragmatic cooperation.

While Google still faces threats to its search dominance from tools like ChatGPT, business incentives may override rivalry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI health tools need clinicians to prevent serious risks, Oxford study warns

The University of Oxford has warned that AI in healthcare, primarily through chatbots, should not operate without human oversight.

Researchers found that relying solely on AI for medical self-assessment could worsen patient outcomes instead of improving access to care. The study highlights how these tools, while fast and data-driven, fall short in delivering the judgement and empathy that only trained professionals can offer.

The findings raise alarm about the growing dependence on AI to fill gaps caused by doctor shortages and rising costs. Chatbots are often seen as scalable solutions, but without rigorous human-in-the-loop validation, they risk providing misleading or inconsistent information, particularly to vulnerable groups.

Rather than helping, they might increase health disparities by delaying diagnosis or giving patients false reassurance.

Experts are calling for safer, hybrid approaches that embed clinicians into the design and ongoing use of AI tools. The Oxford researchers stress that continuous testing, ethical safeguards and clear protocols must be in place.
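The study's recommendations are not spelled out in code, but a minimal sketch can illustrate what a hybrid, human-in-the-loop approach might look like in practice: chatbot advice is only released when the model is confident and no red-flag symptoms are present, and everything else is escalated to a clinician. The BotAssessment class, the triage function and the thresholds below are hypothetical, for illustration only.

```python
# Hypothetical human-in-the-loop triage sketch: release chatbot advice only
# when confidence is high and no red-flag symptoms are present; otherwise
# escalate to a clinician. Classes, thresholds and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class BotAssessment:
    advice: str
    confidence: float                                   # model's self-reported confidence, 0..1
    red_flags: list[str] = field(default_factory=list)  # symptoms that always require a clinician

def triage(assessment: BotAssessment, confidence_threshold: float = 0.9) -> str:
    """Decide whether chatbot advice can be released or must go to a clinician."""
    if assessment.red_flags:
        return "ESCALATE: red-flag symptoms, clinician review required"
    if assessment.confidence < confidence_threshold:
        return "ESCALATE: low confidence, clinician review required"
    return f"RELEASE with safety notice: {assessment.advice}"

print(triage(BotAssessment("Rest, fluids and monitoring", 0.95)))        # released
print(triage(BotAssessment("Likely benign", 0.97, ["chest pain"])))       # escalated
```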

Instead of replacing clinical judgement, AI should support it. The future of digital healthcare hinges not just on innovation but on responsibility and partnership between technology and human care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan tightens rules on chip shipments to China

Taiwan has officially banned the export of chips and chiplets to China’s Huawei and SMIC, joining the US in tightening restrictions on advanced semiconductor transfers.

The decision follows reports that TSMC, the world’s largest contract chipmaker, was deceived into supplying chiplets that ended up in Huawei’s Ascend 910B AI accelerator. The US Commerce Department had reportedly considered a fine of over $1 billion against TSMC for that incident.

Taiwan’s new rules aim to prevent further breaches by requiring export permits for any transactions with Huawei or SMIC.

The distinction between chips and chiplets is key to the case. Traditional chips are built as single-die monoliths on one process node, while chiplets are modular and can combine various specialised components, such as CPU or AI cores, each potentially manufactured on a different node.
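To make the distinction concrete, here is a toy sketch (not drawn from the article) in which a monolithic chip is a single die on one process node, while a chiplet package mixes specialised dies that may come from different nodes. All names and node values are illustrative assumptions.

```python
# Toy model of monolithic chips vs chiplet packages; functions and node sizes
# are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Die:
    function: str         # e.g. "CPU cores", "AI accelerator", "I/O"
    process_node_nm: int  # process node this die is manufactured on

@dataclass
class MonolithicChip:
    die: Die              # everything fabricated as one die on one node

@dataclass
class ChipletPackage:
    dies: list[Die] = field(default_factory=list)  # specialised dies combined in one package

    def nodes_used(self) -> set[int]:
        return {d.process_node_nm for d in self.dies}

# An accelerator assembled from separately sourced, specialised dies
accelerator = ChipletPackage(dies=[
    Die("AI compute", 7),
    Die("I/O", 12),
    Die("memory interface", 12),
])
print(accelerator.nodes_used())  # {7, 12} -> multiple nodes in one package
```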

Huawei allegedly used shell companies to acquire chiplets from TSMC, bypassing existing US restrictions. If TSMC had known the true customer, it likely would have withheld the order. Taiwan’s new export controls are designed to ensure stricter oversight of future transactions and prevent repeat deceptions.

The broader geopolitical stakes are clear. Taiwan views the transfer of advanced chips to China as a national security threat, given Beijing’s ambitions to reunify with Taiwan and the potential militarisation of high-end semiconductors.

With Huawei claiming its processors are nearly on par with Western chips—though analysts argue they lag two to three generations behind—the export ban could further isolate China’s chipmakers.

Speculation persists that Taiwan’s move was partly influenced by negotiations with the US to avoid the proposed fine on TSMC, bringing both countries into closer alignment on chip sanctions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Switzerland’s unique AI path: Blending innovation, governance, and local empowerment

In his recent blog post ‘Advancing Swiss AI Trinity: Zurich’s entrepreneurship, Geneva’s governance, and Communal subsidiarity,’ Jovan Kurbalija proposes a distinctive roadmap for Switzerland to navigate the rapidly evolving landscape of AI. Rather than mimicking the AI power plays of the United States or China, Kurbalija argues that Switzerland can lead by integrating three national strengths: Zurich’s thriving innovation ecosystem, Geneva’s global leadership in governance, and the country’s foundational principle of subsidiarity rooted in local decision-making.

Zurich, already a global tech hub, is positioned to drive cutting-edge development through its academic excellence and robust entrepreneurial culture. Institutions like ETH Zurich and the presence of major tech firms provide a fertile ground for collaborations that turn research into practical solutions.

With AI tools becoming increasingly accessible, Kurbalija emphasises that success now depends on how societies harness the interplay of human and machine intelligence—a field where Switzerland’s education and apprenticeship systems give it a competitive edge. Meanwhile, Geneva is called upon to spearhead balanced international governance and standard-setting for AI.

Kurbalija stresses that AI policy must go beyond abstract discussions and address real-world issues—health, education, the environment—by embedding AI tools in global institutions and negotiations. He notes that Geneva’s experience in multilateral diplomacy and technical standardisation offers a strong foundation for shaping ethical, inclusive AI frameworks.

The third pillar—subsidiarity—empowers Swiss cantons and communities to develop AI that reflects local values and needs. By supporting grassroots innovation through mini-grants, reimagining libraries as AI learning hubs, and embedding AI literacy from primary school to professional training, Switzerland can build an AI model that is democratic and inclusive.

Why does it matter?

Kurbalija’s call to action is clear: with its tools, talent, and traditions aligned, Switzerland must act now to chart a future where AI serves society, not the other way around.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek searches soar after ChatGPT outage

ChatGPT users faced widespread disruption on 10 June 2025 after a global outage hit OpenAI’s services, affecting both the chatbot and associated APIs. OpenAI has yet to confirm the cause, stating only that users are experiencing high error rates and delays.

The blackout halted work for many creative teams who rely on the tool to generate content and meet deadlines. While some were stalled, others turned to alternatives, sparking a surge in interest in rival AI chatbots.

Searches for DeepSeek, a Chinese-developed AI model, jumped 109% to over 2.1 million on the outage day. Claude AI saw a 95% increase in queries, while interest in Google Gemini and Microsoft Copilot also spiked significantly.

Industry experts say the incident underscores the risk of overdependence on a single platform and highlights the growing maturity of competing AI tools. While frustrating for many, the disruption appears to be fuelling broader competition and diversification in the generative AI market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sam Altman says GPT-4o demand overwhelmed OpenAI’s GPU supply

OpenAI faced significant infrastructure strain after its GPT-4o image generator went viral for producing Ghibli-style memes. The viral surge added a million new users in under an hour, putting immense pressure on the company’s systems.

CEO Sam Altman admitted that OpenAI had to slow feature rollouts and borrow computing power from its research division to keep the service running. The platform temporarily introduced rate limits as it coped with overloaded GPUs.
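For readers unfamiliar with the term, rate limiting typically caps how often each user can trigger an expensive operation. The token-bucket sketch below illustrates the general technique; the class, parameters and limits are assumptions for illustration, not OpenAI’s actual mechanism.

```python
# Minimal token-bucket rate limiter sketch: each request spends a token,
# and tokens refill at a fixed rate. Values are illustrative only.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # top up tokens based on elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. at most ~3 image generations per minute per user
bucket = TokenBucket(capacity=3, refill_per_sec=3 / 60)
print([bucket.allow() for _ in range(5)])  # first 3 True, remaining requests throttled
```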

Altman described the situation as unprecedented, saying no other company has had to manage such intense viral spikes. He noted that image generation with GPT-4o requires significant compute resources, which the company could not fully meet with its current GPU inventory.

Despite the challenges, Altman maintained that OpenAI is committed to managing high user demand while continuing development. The company is also considering watermarking the AI images created by free users to help manage scale and traceability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Gemini now summarises PDFs with actionable prompts in Drive

Google is expanding Gemini’s capabilities by allowing the AI assistant to summarise PDF documents directly in Google Drive—and it’s doing more than just generating summaries.

Users will now see clickable suggestions like drafting proposals or creating interview questions based on resume content, making Gemini a more proactive productivity tool.

The update builds on earlier integrations of Gemini in Drive, which already surface pop-up summaries and action prompts when a PDF is opened.

Users with smart features and personalisation turned on will notice a new preview window interface, eliminating the need to open a separate tab.

Gemini’s PDF summaries are now available in over 20 languages and will gradually roll out over the next two weeks.

The feature supports personal and business accounts, including Business Standard/Plus users, Enterprise tiers, Gemini Education, and Google AI Pro and Ultra plans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK remote work still a major data security risk

A new survey reveals that 69% of UK companies reported data breaches to the Information Commissioner’s Office (ICO) over the past year, a steep rise from 53% in 2024.

The research, conducted by Apricorn, highlights that nearly half of remote workers knowingly compromised data security.

Based on responses from 200 UK IT security leaders, the study found that phishing remains the leading cause of breaches, followed by human error. Despite widespread remote work policies, 58% of organisations believe staff lack the proper tools or skills to protect sensitive data.

The use of personal devices for work has climbed to 56%, while only 19% of firms now mandate company-issued hardware. These trends raise ongoing concerns about endpoint security, data visibility, and maintaining GDPR compliance in hybrid work environments.

Technical support gaps and unclear encryption practices remain pressing issues, with nearly half of respondents finding it increasingly difficult to manage remote work technology. Apricorn’s Jon Fielding called for a stronger link between written policy and practical security measures to reduce breaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Real-time, on-device security: The only way to stop modern mobile Trojans

Mobile banking faces a serious new threat: AI-powered Trojans operating silently within legitimate apps. These advanced forms of malware go beyond stealing login credentials—they use AI to intercept biometrics, manipulate app flows in real-time, and execute fraud without raising alarms.

Today’s AI Trojans adapt on the fly. They bypass signature-based detection and cloud-based threat engines by completing attacks directly on the device before traditional systems can react.

Most current security tools weren’t designed for this level of sophistication, leaving banks and users exposed.

To counter this, experts advocate for AI-native security built directly into mobile apps—systems that operate on the device itself, monitoring user interactions and app behaviour in real-time to detect anomalies and stop fraud before it begins.
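As one illustration of what on-device behavioural monitoring can mean in practice, the sketch below flags interactions whose timing departs sharply from a user’s own baseline before a sensitive action proceeds. The feature, threshold and values are assumptions for illustration; production systems rely on far richer signals and models.

```python
# Illustrative on-device anomaly check: flag input whose timing deviates
# sharply (by z-score) from the user's recent baseline. Feature choice,
# threshold and sample values are hypothetical.
import statistics

def is_anomalous(recent_intervals_ms: list[float], new_interval_ms: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag an interaction whose timing deviates sharply from the user's baseline."""
    mean = statistics.mean(recent_intervals_ms)
    stdev = statistics.stdev(recent_intervals_ms) or 1.0  # avoid division by zero
    z = abs(new_interval_ms - mean) / stdev
    return z > z_threshold

baseline = [180, 210, 195, 220, 205, 190]   # typical tap-to-tap timings (ms)
print(is_anomalous(baseline, 200))          # False: consistent with the user
print(is_anomalous(baseline, 5))            # True: machine-speed input, likely scripted
```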

As these AI threats grow more common, the message is clear: mobile apps must defend themselves from within. Real-time, on-device protection is now essential to safeguarding users and staying ahead of a rapidly evolving risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!