At the AI Leadership Summit in Brisbane, Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.
The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed direction but pressed for certainty to unlock stalled investment.
Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.
The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.
CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
In China, the use of generative AI has expanded at an unprecedented pace, reaching 515 million users in the first half of 2025.
The figure, released by the China Internet Network Information Centre, shows more than double the number recorded in December and represents an adoption rate of 36.5 per cent.
Such growth is driven by strong digital infrastructure and the state’s determination to make AI a central tool of national development.
The country’s ‘AI Plus’ strategy aims to integrate AI across all sectors of society and the economy. The majority of users rely on domestic platforms such as DeepSeek, Alibaba Cloud’s Qwen and ByteDance’s Doubao, as access to leading Western models remains restricted.
Young and well-educated citizens dominate the user base, underlining the government’s success in promoting AI literacy among key demographics.
Microsoft’s recent research confirms that China has the world’s largest AI market, surpassing the US in total users. While US adoption has remained steady, China’s domestic ecosystem continues to accelerate, fuelled by policy support and public enthusiasm for generative tools.
China also leads the world in AI-related intellectual property, with over 1.5 million patent applications accounting for nearly 39 per cent of the global total.
The rapid adoption of home-grown AI technologies reflects a strategic drive for technological self-reliance and positions China at the forefront of global digital transformation.
NVIDIA and Google Cloud are expanding their collaboration to bring advanced AI computing to a wider range of enterprise workloads.
The new Google Cloud G4 virtual machines, powered by NVIDIA RTX PRO 6000 Blackwell GPUs, are now generally available, combining high-performance computing with scalability for AI, design, and industrial applications.
The announcement also makes NVIDIA Omniverse and Isaac Sim available on the Google Cloud Marketplace, offering enterprises new tools for digital twin development, robotics simulation, and AI-driven industrial operations.
These integrations enable customers to build realistic virtual environments, train intelligent systems, and streamline design processes.
Powered by the Blackwell architecture, the RTX PRO 6000 GPUs support next-generation AI inference and advanced graphics capabilities. Enterprises can use them to accelerate complex workloads ranging from generative and agentic AI to high-fidelity simulations.
The partnership strengthens Google Cloud’s AI infrastructure and cements NVIDIA’s role as the leading provider of end-to-end computing for enterprise transformation.
Equity’s general secretary, Paul W Fleming, announced plans to mobilise tens of thousands of actors through subject access requests under data-protection law, compelling companies to disclose whether they have used performers’ data in AI content.
The move follows a growing number of complaints from actors about alleged misuse of their likenesses or voices in AI material. One prominent case involves Scottish actor Briony Monroe, who claims her facial features and mannerisms were used to create the synthetic performer ‘Tilly Norwood’. The AI studio behind the character denies the allegations.
Equity says the strategy is intended to ‘make it so hard for tech companies and producers to not enter into collective rights deals’. It argues that existing legislation is being circumvented as foundational AI models are trained on actors’ data with little transparency or compensation.
The trade body Pact, representing studios and producers, acknowledges the importance of AI and counters that firms which forgo new tools may fall behind commercially. At the same time, Pact shares the complaint that companies disclose too little about what data is used to train AI systems.
In essence, the standoff reflects deeper tensions in the creative industries: how to balance innovation, performer rights and transparency in an era when digital likenesses and synthetic ‘actors’ are emerging rapidly.
Under a deal with the American Federation of Teachers (AFT) in the United States, Microsoft will contribute $12.5 million over five years, OpenAI will provide $8 million plus $2 million in technical resources, and Anthropic has pledged $500,000. The AFT plans to build AI training hubs, including one in New York, and aims to train around 400,000 teachers over five years.
At a workshop in San Antonio, dozens of teachers used AI tools such as ChatGPT, Google’s Gemini and Microsoft Copilot to generate lesson plans, podcasts and bilingual flashcards. One teacher noted how quickly AI could generate materials: ‘It can save you so much time.’
However, the initiative raises critical questions. Educators expressed concerns about being replaced by AI, while unions emphasise that teachers must lead training content and maintain control over learning. Technology companies see this as a way to expand into education, but also face scrutiny over influence and the implications for teaching practice.
As schools increasingly adopt AI tools, experts say training must go beyond technical skills to cover ethical use, student data protection and critical thinking. The reforms reflect a broader push to prepare both teachers and students for a future defined by AI.
AI is reshaping Japanese education, from predicting truancy risks to teaching English and preserving survivor memories. Schools and universities nationwide are experimenting with systems designed to support teachers and engage students more effectively.
In Saitama’s Toda City, AI analysed attendance, health records, and bullying data to identify pupils at risk of skipping school. During a 2023 pilot, it flagged more than a thousand students and helped teachers prioritise support for those most vulnerable.
Experts praised the system’s potential but warned against excessive dependence on algorithms. Keio University’s Professor Makiko Nakamuro said educators must balance data-driven insights with privacy safeguards and human judgment. Toda City has already banned discriminatory use of AI results.
AI’s role is also expanding in language learning. Universities such as Waseda and Kyushu now use a Tokyo-developed conversation AI that assesses grammar, pronunciation, and confidence. Students say they feel more comfortable practising with a machine than in front of classmates.
The European Commission has launched new ‘AI Antennas’ across 13 European countries to strengthen AI infrastructure. Seven EU states, including Belgium, Ireland, and Malta, will gain access to high-performance computing through the EuroHPC network.
Six non-EU partners, such as the UK and Switzerland, have also joined the initiative. Their inclusion reflects the EU’s growing cooperation on digital innovation with neighbouring countries despite Brexit and other trade tensions.
Each AI Antenna will serve as a local gateway to the bloc’s supercomputing hubs, providing technical support, training, and algorithmic resources. Countries without an AI Factory of their own can now connect remotely to major systems like Jupiter.
The Commission says the network aims to spread AI skills and research capabilities across Europe, narrowing regional gaps in digital development. However, smaller nations hosting only antennas are unlikely to house the bloc’s future ‘AI Gigafactories’, which will be up to four times more powerful.
Bilal Abu-Ghazaleh has launched 1001 AI, a London–Dubai startup building an AI-native operating system for critical MENA industries. The two-month-old firm raised a $9m seed round from CIV, General Catalyst and Lux Capital, with angel investors including Chris Ré, Amjad Masad and Amira Sajwani.
Target sectors include airports, ports, construction, and oil and gas, where 1001 AI sees billions in avoidable inefficiencies. Its engine ingests live operational data, models workflows and issues real-time directives, rerouting vehicles, reassigning crews and adjusting plans autonomously.
Abu-Ghazaleh brings scale-up experience from Hive AI and Scale AI, where he led GenAI operations and contributor networks. 1001 borrows a consulting-style rollout: embed with clients, co-develop the model, then standardise reusable patterns across similar operational flows.
Investors argue the Gulf is an ideal test bed given sovereign-backed AI ambitions and under-digitised, mission-critical infrastructure. Deena Shakir of Lux says the region is ripe for AI that optimises physical operations at scale, from flight turnarounds to cargo moves.
First deployments are slated for construction by year-end, with aviation and logistics to follow. The funding supports early pilots and hiring across engineering, operations and go-to-market, as 1001 aims to become the Gulf’s orchestration layer before expanding globally.
Canada’s cyber insurance market is stabilising, with stronger underwriting, steadier loss trends, and more product choice, the Insurance Bureau of Canada says. But the threat landscape is accelerating as attackers weaponise AI, leaving many small and medium-sized enterprises exposed and underinsured.
Rapid market growth brought painful losses during the ransomware surge: from 2019 to 2023, combined loss ratios averaged about 155%, forcing tighter pricing and coverage. Insurers have recalibrated, yet rising AI-enabled phishing and deepfake impersonations are lifting complexity and potential severity.
Policy is catching up unevenly. Bill C-8 in Canada would revive critical-infrastructure cybersecurity standards, stronger oversight, and baseline rules for risk management and incident reporting. Public–private programmes signal progress but need sustained execution.
SMEs remain the pressure point. Low uptake means minor breaches can cost tens or hundreds of thousands, while severe incidents can be fatal. Underinsurance shifts shock to the wider economy, challenging insurers to balance affordability with long-term viability.
The Bureau urges practical resilience: clearer governance, employee training, incident playbooks, and fit-for-purpose cover. Education campaigns and free guidance aim to demystify coverage, boost readiness, and help SMEs recover faster when attacks hit, supporting a more durable digital economy.
As the European Commission’s public consultation on the Digital Fairness Act enters its final phase, growing criticism points to flaws in how citizen feedback is collected.
Critics say the survey’s structure favours those who support additional regulation while restricting opportunities for dissenting voices to explain their reasoning. The issue raises concerns over how such results may influence the forthcoming impact assessment.
The Call for Evidence and Public Consultation, hosted on the Have Your Say portal, allows only supporters of the Commission’s initiative to provide detailed responses. Those who oppose new regulation are reportedly limited to choosing a single option with no open field for justification.
Such an approach risks producing a partial view of European opinion rather than a balanced reflection of stakeholders’ perspectives.
Experts argue that this design contradicts the EU’s Better Regulation principles, which emphasise inclusivity and objectivity.
They urge the Commission to raise its methodological standards, ensuring surveys are neutral, questions are not loaded, and all respondents can present argument-based reasoning. Without these safeguards, consultations may become instruments of validation instead of genuine democratic participation.
Advocates for reform believe the Commission’s influence could set a positive precedent for the entire policy ecosystem. By promoting fairer consultation practices, the EU could encourage both public and private bodies to engage more transparently with Europe’s diverse digital community.