Australian government highlights geopolitical risks to critical infrastructure

According to the federal government’s latest Critical Infrastructure Annual Risk Review, Australia’s critical infrastructure is increasingly vulnerable due to global geopolitical uncertainty, supply chain vulnerabilities, and advancements in technology.

The report, released by the Department of Home Affairs, states that geopolitical tensions and instability are affecting all sectors essential to national functioning, such as energy, healthcare, banking, aviation and the digital systems supporting them.

It notes that operational environments are becoming increasingly uncertain both domestically and internationally, requiring new approaches to risk management.

The review highlights a combination of pressures, including cyber threats, supply chain disruptions, climate-related risks and the potential for physical sabotage. It also points to challenges linked to “malicious insiders”, geostrategic shifts and declining public trust in institutions.

According to the report, Australia’s involvement in international policy discussions has, at times, exposed it to possible retaliation from foreign actors through activities ranging from grey zone operations to preparations for state-sponsored sabotage.

It further notes that the effects of overseas conflicts have influenced domestic sentiment and social cohesion, contributing to risks such as ideologically driven vandalism, politically motivated violence and lone-actor extremism.

To address these challenges, the government emphasises the need for adaptable risk management strategies that reflect shifting dependencies, short- and long-term supply chain issues and ongoing geopolitical tensions.

The report divides priority risks into two categories: those considered most plausible and those deemed most harmful. Among the most plausible are extreme-impact cyber incidents and geopolitically driven supply chain disruption.

The most damaging risks include disrupted fuel supplies, major cyber incidents and state-sponsored sabotage. The review notes that because critical sectors are increasingly interdependent, disruption in one area could have cascading impacts on others.

Australia currently imports 61 percent of its fuel from the Middle East, with shipments transiting maritime routes that are vulnerable to regional tensions. Many global shipping routes also pass through the Taiwan Strait, where conflict would significantly affect supply chains.

Home Affairs Minister Tony Burke said the review aims to increase understanding of the risks facing Australia’s essential services and inform efforts to enhance resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The rise of large language models and the question of ownership

The divide defining AI’s future through large language models

What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate various types of content, including human-like text, images, video, and audio.

The development of these large language models has reshaped AI from a specialised field into a social, economic, and political phenomenon. Systems such as GPT, Claude, Gemini, and Llama have become fundamental infrastructures for information processing, creative work, and automation.

Their rapid rise has generated an intense debate about who should control the most powerful linguistic tools ever built.

The distinction between open source and closed source models has become one of the defining divides in contemporary technology, one that will undoubtedly shape our societies.


Open source models such as Meta’s Llama 3, Mistral, and Falcon offer public access to their code or weights, allowing developers to experiment, improve, and deploy them freely.

Closed source models, exemplified by OpenAI’s GPT series, Anthropic’s Claude, or Google’s Gemini, restrict access, keeping architectures and data proprietary.

The tension is not merely technical. It embodies two competing visions of knowledge production: one oriented toward collective benefit and transparency, the other toward commercial exclusivity and the protection of intellectual property.

The core question is whether language models should be treated as a global public good or as privately owned technologies governed by corporate rights. The answer to such a question carries implications for innovation, fairness, safety, and even democratic governance.

Innovation and market power in the AI economy

From an economic perspective, open and closed source models represent opposing approaches to innovation. Open models accelerate experimentation and lower entry barriers for small companies, researchers, and governments that lack access to massive computing resources.

They enable localised applications in diverse languages, sectors, and cultural contexts. Their openness supports decentralised innovation ecosystems, much as Linux did for operating systems.

Closed models, however, maintain higher levels of quality control and often outperform open ones due to the scale of data and computing power behind them. Companies like OpenAI and Google argue that their proprietary control ensures security, prevents misuse, and finances further research.

The closed model thus creates a self-reinforcing cycle. Access to large datasets and computing leads to better models, which attract more revenue, which in turn funds even larger models.

The outcome has been the consolidation of AI power within a handful of corporations. Microsoft, Google, OpenAI, Meta, and a few start-ups have become the new gatekeepers of linguistic intelligence.


Such concentration raises concerns about market dominance, competitive exclusion, and digital dependency. Smaller economies and independent developers risk being relegated to consumers of foreign-made AI products, instead of being active participants in the creation of digital knowledge.

In this sense, open source LLMs represent a counterweight to Big Tech’s dominance. They allow local innovation and reduce dependency, especially for countries seeking technological sovereignty.

Yet open access also brings new risks, as the same tools that enable democratisation can be exploited for disinformation, deepfakes, or cybercrime.

Ethical and social aspects of openness

The ethical question surrounding LLMs is not limited to who can use them; it also concerns how they are trained. Closed models often rely on opaque datasets scraped from the internet, including copyrighted material and personal information.

Without transparency, it is impossible to assess whether training data respects privacy, consent, or intellectual property rights. Open source models, by contrast, offer partial visibility into their architecture and data curation processes, enabling community oversight and ethical scrutiny.

However, we have to keep in mind that openness does not automatically ensure fairness. Many open models still depend on large-scale web data that reproduce existing biases, stereotypes, and inequalities.

Open access also increases the risk of malicious content, such as generating hate speech, misinformation, or automated propaganda. The balance between openness and safety has therefore become one of the most delicate ethical frontiers in AI governance.

Socially, open LLMs can empower education, research, and digital participation. They allow low-resource languages to be modelled, minority groups to build culturally aligned systems, and academic researchers to experiment without licensing restrictions.


They represent a vision of AI as a collaborative human project rather than a proprietary service.

Yet they also redistribute responsibility: when anyone can deploy a powerful model, accountability becomes diffuse. The challenge lies in preserving the benefits of openness while establishing shared norms for responsible use.

The legal and intellectual property dilemma

Intellectual property law was not designed for systems that learn from millions of copyrighted works without direct authorisation.

Closed source developers defend their models as transformative works under fair use doctrines, while content creators demand compensation or licensing mechanisms.


The dispute has already reached courts, as artists, authors, and media organisations sue AI companies for unauthorised use of their material.

Open source further complicates the picture. When model weights are released freely, the question arises of who holds responsibility for derivative works and whether open access violates existing copyrights.

Some open licences now include clauses prohibiting harmful or unlawful use, blurring the line between openness and control. Legal scholars argue that a new framework is needed to govern machine learning datasets and outputs, one that recognises both the collective nature of data and the individual rights embedded in it.

At stake is not only financial compensation but the broader question of data ownership in the digital age. We must ask ourselves: if data is the raw material of intelligence, should it remain the property of a few corporations or be treated as a shared global resource?

Economic equity and access to computational power

Even the most open model requires massive computational infrastructure to train and run effectively. Access to GPUs, cloud resources, and data pipelines remains concentrated among the same corporations that dominate the closed model ecosystem.

Thus, openness in code does not necessarily translate into openness in practice.

Developing nations, universities, and public institutions often lack the financial and technical means to exploit open models at scale. Such an asymmetry creates a form of digital neo-dependency: the code is public, but the hardware is private.

For AI to function as a genuine global public good, investments in open computing infrastructure, public datasets, and shared research facilities are essential. Initiatives such as the EU’s AI-on-demand platform or the UN’s efforts for inclusive digital development reflect attempts to build such foundations.


The economic stakes extend beyond access to infrastructure. LLMs are becoming the backbone of new productivity tools, from customer service bots to automated research assistants.

Whoever controls them will shape the future division of digital labour. Open models could allow local companies to retain more economic value and cultural autonomy, while closed models risk deepening global inequalities.

Governance, regulation, and the search for balance

Governments face the difficult task of regulating a technology that evolves faster than policy. The EU AI Act, US executive orders on trustworthy AI, and China’s generative AI regulations all address questions of transparency, accountability, and safety.

Yet few explicitly differentiate between open and closed models.

The open source community resists excessive regulation, arguing that heavy compliance requirements could suffocate innovation and concentrate power even further in large corporations that can afford legal compliance.

On the other hand, policymakers worry that uncontrolled distribution of powerful models could facilitate malicious use. The emerging consensus suggests that regulation should focus not on the source model itself but on the context of its deployment and the potential harms it may cause.

An additional governance question concerns international cooperation. AI’s global nature demands coordination on safety standards, data sharing, and intellectual property reform.

The absence of such alignment risks a fragmented world where closed models dominate wealthy regions while open ones, potentially less safe, spread elsewhere. Finding equilibrium requires mutual trust and shared principles for responsible innovation.

The cultural and cognitive dimension of openness

Beyond technical and legal debates, the divide between open and closed models reflects competing cultural values. Open source embodies the ideals of transparency, collaboration, and communal ownership of knowledge.

Closed source represents discipline, control, and the pursuit of profit-driven excellence. Both cultures have contributed to technological progress, and both have drawbacks.

From a cognitive perspective, open LLMs can enhance human learning by enabling broader experimentation, while closed ones can limit exploration to predefined interfaces. Yet too much openness may also encourage cognitive offloading, where users rely on AI systems without developing independent judgment.


Therefore, societies must cultivate digital literacy alongside technical accessibility, ensuring that AI supports human reasoning rather than replaces it.

The way societies integrate LLMs will influence how people perceive knowledge, authority, and creativity. When language itself becomes a product of machines, questions about authenticity, originality, and intellectual labour take on new meaning.

Whether open or closed, models shape collective understanding of truth, expression, and imagination for our societies.

Toward a hybrid future

The polarisation between open and closed approaches may prove unsustainable in the long run. A hybrid model is emerging, in which partially open architectures coexist with protected components.

Companies like Meta release open weights but restrict commercial use, while others provide APIs for experimentation without revealing the underlying code. Such hybrid frameworks aim to combine transparency with safety and openness with commercial viability.

The future equilibrium is likely to depend on international collaboration and new institutional models. Public–private partnerships, cooperative licensing, and global research consortia could ensure that LLM development serves both the public interest and corporate sustainability.

A system of layered access (where different levels of openness correspond to specific responsibilities) may become the standard.


Ultimately, the choice between open and closed models reflects humanity’s broader negotiation between collective welfare and private gain.

Just as the internet or many other emerging technologies evolved through the tension between openness and commercialisation, the future of language models will be defined by how societies manage the boundary between shared knowledge and proprietary intelligence.

In conclusion, the debate between open and closed source LLMs is not merely technical.

It embodies the broader conflict between public good and private control, between the democratisation of intelligence and the concentration of digital power.

Open models promote transparency, innovation, and inclusivity, but pose challenges in terms of safety, legality, and accountability. Closed models offer stability, quality, and economic incentive, yet risk monopolising a transformative resource crucial to human progress.

Finding equilibrium requires rethinking the governance of knowledge itself. Language models should neither be owned solely by corporations nor be released without responsibility. They should be governed as shared infrastructures of thought, supported by transparent institutions and equitable access to computing power.

Only through such a balance can AI evolve as a force that strengthens, rather than divides, our societies and improves our daily lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Maps launches AI-powered live lane guidance for safer driving

Google has introduced AI-powered live lane guidance for cars with Google built in, marking a significant step toward intelligent in-vehicle navigation.

The new feature enables Google Maps to interpret roads and lanes like a driver, offering real-time audio and visual cues to help motorists make timely lane changes and avoid missed exits.

Using AI that analyses lane markings and road signs through the vehicle’s front-facing camera, Google Maps integrates the live data with its navigation system, used by over two billion people monthly. The result is more accurate guidance alongside existing traffic, ETA, and hazard updates.

The feature will debut in Polestar 4 vehicles in the US and Sweden, with plans to expand across more models and road types in collaboration with major automakers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS launches Fastnet, a subsea cable to strengthen transatlantic cloud and AI connectivity

Amazon Web Services has announced Fastnet, a high-capacity transatlantic subsea cable connecting Maryland and County Cork.

Set to be operational in 2028, Fastnet will expand AWS’s network resilience and deliver faster, more reliable cloud and AI services between the US and Europe.

The cable’s unique route provides critical redundancy, ensuring service continuity even when other cables face disruptions. Capable of transmitting over 320 terabits per second, Fastnet supports large-scale cloud computing and AI workloads while integrating directly into AWS’s global infrastructure.

The system’s design enables real-time data redirection and long-term scalability to meet the increasing demands of AI and edge computing.

Beyond connectivity, AWS is investing in community benefit funds for Maryland and County Cork, supporting local sustainability, education, and workforce development.

The project reflects AWS’s wider strategy to reinforce critical digital infrastructure and strengthen global innovation in the cloud economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tool on smartwatch detects hidden structural heart disease

An AI algorithm paired with smartwatch sensors has successfully detected structural heart diseases, including valve damage and weakened heart muscles, in adults. The study, conducted at Yale School of Medicine, will be presented at the American Heart Association’s 2025 Scientific Sessions in New Orleans.

The AI model was trained on over 266,000 electrocardiogram recordings and validated across multiple hospitals and population studies. When tested on 600 participants using single-lead ECGs from a smartwatch, it achieved an 88% accuracy in detecting heart disease.

Researchers said smartwatches could offer a low-cost, accessible method for early screening of structural heart conditions that usually require echocardiograms. The algorithm’s ability to analyse single-lead ECG data could enable preventive detection before symptoms appear.

Experts emphasised that smartwatch data cannot replace medical imaging, but it could complement clinical assessments and expand access to screening. Larger studies in the US are planned to confirm effectiveness and explore community-based use in preventive heart care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reliance and Google expand Gemini AI access across India

Google has partnered with Reliance Intelligence to expand access to its Gemini AI across India.

Under the new collaboration, Jio Unlimited 5G users aged between 18 and 25 will receive the Google AI Pro plan free for 18 months, with nationwide eligibility to follow soon.

The partnership grants access to the Gemini 2.5 Pro model and includes increased limits for generating images and videos with the Nano Banana and Veo 3.1 tools.

Users in India will also benefit from expanded NotebookLM access for study and research, plus 2 TB of cloud storage shared across Google Photos, Gmail and Drive for data and WhatsApp backups.

According to Google, the offer represents a value of about ₹35,100 and can be activated via the MyJio app. The company said the initiative aims to make its most advanced AI tools available to a wider audience and support everyday productivity across India’s fast-growing digital ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US Internet Bill of Rights unveiled as response to global safety laws

A proposed US Internet Bill of Rights aims to protect digital freedoms as governments expand online censorship laws. The framework, developed by privacy advocates, calls for stronger guarantees of free expression, privacy, and access to information in the digital era.

Supporters argue that recent legislation such as the UK’s Online Safety Act, the EU’s Digital Services Act, and US proposals like KOSA and the STOP HATE Act have eroded civil liberties. They claim these measures empower governments and private firms to control online speech under the guise of safety.

The proposed US bill sets out rights including privacy in digital communications, platform transparency, protection against government surveillance, and fair access to the internet. It also calls for judicial oversight of censorship requests, open algorithms, and the protection of anonymous speech.

Advocates say the framework would enshrine digital freedoms through federal law or constitutional amendment, ensuring equal access and privacy worldwide. They argue that safeguarding free and open internet access is vital to preserve democracy and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA and Nokia join forces to build the AI platform for 6G

Nokia and NVIDIA have announced a $1 billion partnership to develop an AI-powered platform that will drive the transition from 5G to 6G networks.

The collaboration will create next-generation AI-RAN systems, combining computing, sensing and connectivity to transform how US mobile networks process data and deliver services.

The partnership marks a strategic step in both companies’ ambition to regain global leadership in telecommunications.

By integrating NVIDIA’s new Aerial RAN Computer and Nokia’s AI-RAN software, operators can upgrade existing networks through software updates instead of complete infrastructure replacements.

T-Mobile US will begin field tests in 2026, supported by Dell’s PowerEdge servers.

NVIDIA’s investment and collaboration with Nokia aim to strengthen the foundation for AI-native networks that can handle the rising demand from agentic, generative and physical AI applications.

These networks are expected to support future 6G use cases, including drones, autonomous vehicles and advanced augmented reality systems.

Both companies see AI-RAN as the next evolution of wireless connectivity, uniting data processing and communication at the edge for greater performance, energy efficiency and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia and Deutsche Telekom plan €1 billion AI data centre in Germany

Plans are being rolled out for a €1 billion data centre in Germany to bolster Europe’s AI infrastructure, with Nvidia and Deutsche Telekom set to co-fund the project.

The facility is expected to serve enterprise customers, including SAP SE, Europe’s largest software company, and to deploy around 10,000 advanced chips known as graphics processing units (GPUs).

While significant for Europe, the build is modest compared with gigawatt-scale sites elsewhere, highlighting the region’s push to catch up with US and Chinese capacity.

An announcement is anticipated next month in Berlin alongside senior industry and government figures, with Munich identified as the planned location.

The move aligns with EU efforts to expand AI compute, including the €200 billion initiative announced in February to grow capacity over the next five to seven years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents the best of public-private partnership, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!