Innovation versus risk shapes Australia’s AI debate

At the AI Leadership Summit in Brisbane, Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.

The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed the direction but pressed for regulatory certainty to unlock stalled investment.

Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.

The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.

CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as a test of oversight. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AWS outage turned a mundane DNS slip into global chaos

Cloudflare’s boss summed up the mood after Monday’s chaos, relieved his firm wasn’t to blame as outages rippled across more than 1,000 companies. Snapchat, Reddit, Roblox, Fortnite, banks, and government portals faltered together, exposing how much of the web leans on Amazon Web Services.

AWS is the backbone for a vast slice of the internet, renting compute, storage, and databases so firms avoid running their own stacks. However, a mundane Domain Name System error in its Northern Virginia region scrambled routing, leaving services online yet unreachable as traffic lost its map.

Engineers call it a classic failure mode: ‘It’s always DNS.’ Misconfigurations, maintenance slips, or server faults can cascade quickly across shared platforms. AWS says teams moved to mitigate, but the episode showed how a small mistake at scale becomes a global headache in minutes.

Experts warned of concentration risk: when one hyperscaler stumbles, many fall. Yet few true alternatives exist at AWS’s scale beyond Microsoft Azure and Google Cloud, with smaller rivals from IBM to Alibaba, and fledgling European plays, far behind.

Calls for UK and EU cloud sovereignty are growing, but timelines and costs are steep. Monday’s outage is a reminder that resilience needs multi-region and multi-cloud designs, tested failovers, and clear incident comms, not just faith in a single provider.
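To make the failure mode concrete, the sketch below is a minimal, hypothetical illustration in Python of a client-side failover check across regions or providers. The endpoint URLs are placeholders, and real deployments would rely on health-checked DNS or load balancers rather than this kind of probing; the point is simply that a name-resolution failure should trigger a fallback rather than an outage.

```python
import socket
from urllib.parse import urlparse
from urllib.request import urlopen

# Hypothetical endpoints: a primary region, a secondary region, and a second
# provider. These names are placeholders, not real services.
ENDPOINTS = [
    "https://api.us-east-1.example.com/health",
    "https://api.eu-west-1.example.com/health",
    "https://api.othercloud.example.net/health",
]

def resolves(url: str) -> bool:
    """Return True if the hostname in the URL still gets a DNS answer."""
    host = urlparse(url).hostname
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        # The outage's failure mode: the backend may be healthy, but clients
        # cannot reach it because name resolution is broken.
        return False

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint resolves and answers its health check."""
    if not resolves(url):
        return False
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_endpoint() -> str:
    """Walk the failover list and return the first live endpoint."""
    for url in ENDPOINTS:
        if healthy(url):
            return url
    raise RuntimeError("No endpoint reachable; trigger incident response.")
```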

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI still struggles to mimic natural human conversation

A recent study reveals that large language models such as ChatGPT-4, Claude, Vicuna, and Wayfarer still struggle to replicate natural human conversation. Researchers found that the models imitate their interlocutors too closely, misuse filler words, and handle openings and closings awkwardly, revealing their artificial nature.

The research, led by Eric Mayor with contributions from Lucas Bietti and Adrian Bangerter, compared transcripts of human phone conversations with AI-generated ones. AI can speak correctly, but subtle social cues like timing, phrasing, and discourse markers remain hard to mimic.

Misplaced words such as ‘so’ or ‘well’ and awkward conversation transitions make AI dialogue recognisably non-human. Openings and endings also pose a challenge: humans naturally open with small talk and close with phrases such as ‘see you soon’ or ‘alright, then’, which AI systems often fail to reproduce convincingly.

These gaps in social nuance, researchers argue, prevent large language models from consistently fooling people in conversation tests.

Despite rapid progress, experts caution that AI may never fully capture all elements of human interaction, such as empathy and social timing. Advances may narrow the gap, but key differences will likely remain, keeping AI speech subtly distinguishable from real human dialogue.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI is transforming patient care and medical visits

AI is increasingly shaping the patient experience, from digital intake forms to AI-powered ambient scribes in exam rooms. Stanford experts explain that while these tools can streamline processes, patients should remain aware of how their data is collected, stored, and used.

De-identified information may still be shared for research, marketing, or AI training, raising privacy considerations.

AI is also transforming treatment planning. Platforms like Atropos Health allow doctors to query hundreds of millions of records, generating real-world evidence to inform faster and more effective care.

Patients may benefit from data-driven treatment decisions, but human oversight remains essential to ensure accuracy and safety.

Outside the clinic, AI is being integrated into health apps and devices. From mental health support to disease detection, these tools offer convenience and early insights. Experts warn that stronger evaluation and regulation are needed to confirm their reliability and effectiveness.

Patients are encouraged to ask providers about data storage, third-party access, and real-time recording during visits. While AI promises to improve healthcare, realistic expectations are vital, and individuals should actively monitor how their personal health information is used.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IAEA launches initiative to protect AI in nuclear facilities

The International Atomic Energy Agency (IAEA) has launched a new research project to strengthen computer security for AI in the nuclear sector. The initiative aims to support safe adoption of AI technologies in nuclear facilities, including small modular reactors and other applications.

AI and machine learning systems are increasingly used in the nuclear industry to improve operational efficiency and enhance security measures, such as threat detection. However, these technologies also bring risks, including data manipulation and misuse, requiring strong cybersecurity and careful oversight.

The Coordinated Research Project (CRP) on Enhancing Computer Security of Artificial Intelligence Applications for Nuclear Technologies will develop methodologies to identify vulnerabilities, implement protection mechanisms, and create AI-enabled security assessment tools.

Training frameworks will also be established to develop human resources capable of managing AI securely in nuclear environments.

Research organisations from all IAEA member states are invited to join the CRP. Proposals must be submitted by 30 November 2025, and participation by women and young researchers is particularly encouraged. The IAEA offers further details through its CRP contact page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic unveils Claude for Life Sciences to transform research efficiency

Anthropic has unveiled Claude for Life Sciences, its first major launch in the biotechnology sector.

The new platform integrates Anthropic’s AI models with leading scientific tools such as Benchling, PubMed, 10x Genomics and Synapse.org, offering researchers an intelligent assistant throughout the discovery process.

The system supports tasks from literature reviews and hypothesis development to data analysis and drafting regulatory submissions. According to Anthropic, what once took days of validation and manual compilation can now be completed in minutes, giving scientists more time to focus on innovation.
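For illustration only, a single literature-review request to the underlying model might look like the sketch below, using the Anthropic Python SDK. The model alias and the prompt are assumptions made for this example; the Benchling, PubMed, 10x Genomics and Synapse.org integrations described above are part of the hosted platform and are not shown here.

```python
import anthropic

# Minimal sketch: ask the model to structure a short literature summary.
# The model alias and prompt are illustrative assumptions, not the
# platform's actual configuration.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "Summarise the key findings of the attached abstracts on "
                "CRISPR off-target effects, grouped by experimental method, "
                "and list open questions for follow-up reading."
            ),
        }
    ],
)

print(response.content[0].text)
```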

The initiative follows the company’s appointment of Eric Kauderer-Abrams as head of biology and life sciences. He described the move as a ‘threshold moment’, signalling Anthropic’s ambition to make Claude a key player in global life science research, much like its role in coding.

Built on the newly released Claude Sonnet 4.5 model, which excels at interpreting lab protocols, the platform connects with partners including AWS, Google Cloud, KPMG and Deloitte.

While Anthropic recognises that AI cannot accelerate physical trials, it aims to transform time-consuming processes and promote responsible digital transformation across the life sciences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China leads global generative AI adoption with 515 million users

In China, the use of generative AI has expanded at an unprecedented pace, reaching 515 million users in the first half of 2025.

The figure, released by the China Internet Network Information Centre, is more than double the number recorded in December 2024 and represents an adoption rate of 36.5 per cent, a share consistent with China’s total population of roughly 1.4 billion.

Such growth is driven by strong digital infrastructure and the state’s determination to make AI a central tool of national development.

The country’s ‘AI Plus’ strategy aims to integrate AI across all sectors of society and the economy. The majority of users rely on domestic platforms such as DeepSeek, Alibaba Cloud’s Qwen and ByteDance’s Doubao, as access to leading Western models remains restricted.

Young and well-educated citizens dominate the user base, underlining the government’s success in promoting AI literacy among key demographics.

Microsoft’s recent research confirms that China has the world’s largest AI market, surpassing the US in total users. While US adoption has remained steady, China’s domestic ecosystem continues to accelerate, fuelled by policy support and public enthusiasm for generative tools.

China also leads the world in AI-related intellectual property, with over 1.5 million patent applications accounting for nearly 39 per cent of the global total.

The rapid adoption of home-grown AI technologies reflects a strategic drive for technological self-reliance and positions China at the forefront of global digital transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

British Columbia unveils major plan to power economic growth with clean energy

The Government of British Columbia has announced a sweeping economic and energy plan aimed at driving industrial growth through clean electricity. Centred on the North Coast Transmission Line (NCTL), the plan is intended to boost the province’s economy while ensuring First Nations share in the benefits.

Premier David Eby said the new legislation would make British Columbia the ‘economic engine’ of Canada, powered by clean energy and local partnerships. Set to begin in 2026, the NCTL will provide clean, affordable power to major industries such as mining, natural gas, and manufacturing.

Once operational, it is projected to create nearly 9,700 direct jobs, contribute around $10 billion to GDP, and cut millions of tonnes of carbon emissions annually.

To manage rising energy demand, the government will limit crypto mining and prioritise projects with strong economic and environmental benefits. A power allocation process for data centres, AI, and hydrogen projects will start in 2026 to support responsible growth.

The plan also enables greater First Nations participation through potential equity ownership in new energy infrastructure. Industry leaders say the project could attract billions in investment and strengthen British Columbia’s position in clean energy and critical minerals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud and NVIDIA join forces to accelerate enterprise AI and industrial digitalisation

NVIDIA and Google Cloud are expanding their collaboration to bring advanced AI computing to a wider range of enterprise workloads.

The new Google Cloud G4 virtual machines, powered by NVIDIA RTX PRO 6000 Blackwell GPUs, are now generally available, combining high-performance computing with scalability for AI, design, and industrial applications.

The announcement also makes NVIDIA Omniverse and Isaac Sim available on the Google Cloud Marketplace, offering enterprises new tools for digital twin development, robotics simulation, and AI-driven industrial operations.

These integrations enable customers to build realistic virtual environments, train intelligent systems, and streamline design processes.

Powered by the Blackwell architecture, the RTX PRO 6000 GPUs support next-generation AI inference and advanced graphics capabilities. Enterprises can use them to accelerate complex workloads ranging from generative and agentic AI to high-fidelity simulations.

The partnership strengthens Google Cloud’s AI infrastructure and cements NVIDIA’s role as the leading provider of end-to-end computing for enterprise transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

PAHO issues new guide on designing AI prompts for public health

The Pan American Health Organization (PAHO) has released a guide with practical advice on creating effective AI prompts for public health. The guide, titled ‘AI prompt design for public health’, helps professionals use AI responsibly to generate accurate and culturally appropriate content.

PAHO says generative AI aids in public health alerts, reports, and educational materials, but its effectiveness depends on clear instructions. The guide highlights that well-crafted prompts enable AI systems to generate meaningful content efficiently, reducing review time while maintaining quality.

The organisation advises health institutions to treat prompts as ‘living protocols’ that can be tested and refined to suit different audiences and languages. It also recommends developing prompt libraries to improve consistency across public health operations.
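The idea of a prompt library maintained as a living protocol can be sketched briefly. The structure and field names below are illustrative assumptions rather than anything prescribed in PAHO’s guide: each entry carries a version, an audience and a language, so prompts can be tested, refined and reused consistently.

```python
from dataclasses import dataclass

@dataclass
class PromptTemplate:
    """One entry in a hypothetical public-health prompt library."""
    name: str
    version: str
    audience: str   # e.g. "general public", "clinicians"
    language: str   # e.g. "es", "en", "pt"
    template: str   # instructions with named placeholders

    def render(self, **values: str) -> str:
        """Fill the placeholders to produce the prompt sent to the model."""
        return self.template.format(**values)

# Example entry: an outbreak alert for the general public, kept under version
# control so reviewers can compare revisions as the wording is refined.
outbreak_alert = PromptTemplate(
    name="outbreak_alert",
    version="1.2",
    audience="general public",
    language="en",
    template=(
        "Write a public health alert about {disease} for {region}. "
        "Use plain language at a primary-school reading level, include the "
        "three most important protective actions, and keep it under 150 words."
    ),
)

print(outbreak_alert.render(disease="dengue fever", region="the coastal provinces"))
```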

Human oversight remains crucial, especially when AI-generated content could influence public behaviour or policy decisions.

The initiative forms part of PAHO’s broader Digital Literacy Programme, which seeks to strengthen the digital skills of health professionals throughout the Americas. Better prompt design aims to boost communication, accelerate decision-making, and advance digital transformation in healthcare.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!