OpenAI signalled a break with Australia’s tech lobby on copyright, with global affairs chief Chris Lehane telling SXSW Sydney the company’s models are ‘going to be in Australia, one way or the other’, regardless of reforms or data-mining exemptions.
Lehane framed two global approaches: US-style fair use that enables ‘frontier’ AI, versus a tighter, tradition-bound copyright regime that narrows scope, saying OpenAI will work under either. Asked whether Australia risked losing datacentres without looser laws, he replied ‘No’.
Pressed on launching and monetising Sora 2 before copyright issues are settled, Lehane argued innovation precedes adaptation and said OpenAI aims to ‘benefit everyone’. The company paused videos featuring Martin Luther King Jr.’s likeness after family complaints.
Lehane described the US-China AI rivalry as a ‘very real competition’ over values, predicting that one ecosystem will become the default. He said US-led frontier models would reflect democratic norms, while China’s would ‘probably’ align with autocratic ones.
To sustain a ‘democratic lead’, Lehane said allies must add gigawatt-scale power capacity each week to build AI infrastructure. He called Australia uniquely positioned, citing high AI usage, a 30,000-strong developer base, fibre links to Asia, Five Eyes membership, and fast-growing renewables.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A consortium comprising the Artificial Intelligence Infrastructure Partnership, MGX, and BlackRock’s Global Infrastructure Partners has announced the acquisition of Aligned Data Centers for an estimated forty billion dollars.
The move marks a major step towards expanding the infrastructure underpinning global AI and cloud growth.
Aligned, headquartered in Dallas, operates more than fifty campuses and five gigawatts of capacity across the US and Latin America. The company is known for its patented air, liquid, and hybrid cooling systems that enhance efficiency and sustainability, particularly in high-density AI environments.
Under the consortium, Aligned will accelerate the development of scalable and energy-efficient data facilities to meet rising global demand.
The Artificial Intelligence Infrastructure Partnership was founded by BlackRock, GIP, MGX, Microsoft, and NVIDIA to advance large-scale AI infrastructure investment.
Backed by sovereign wealth funds from Kuwait and Singapore, it aims to mobilise thirty billion dollars in equity and up to one hundred billion, including debt.
The Aligned acquisition represents its first major investment and positions the company as a cornerstone of the group’s strategy.
Executives from BlackRock, MGX, and GIP said the deal reflects a shared commitment to building sustainable, resilient infrastructure for the AI era.
Aligned CEO Andrew Schaap added that the partnership would strengthen the company’s global reach and innovation capacity, redefining standards for digital infrastructure in an increasingly AI-driven economy.
The AI Agent Marketplace is embedded within Fusion Applications, allowing customers to browse, test and deploy partner-built, Oracle-validated agents directly within their enterprise workflows. These agents can supplement or replace built-in agents to address industry-specific tasks.
New capabilities in Agent Studio include MCP support to integrate agents with third-party data systems, agent cards for cross-agent communication and collaboration, credential store for secure access to external APIs, monitoring dashboard, and agent tracing and performance metrics for observability.
Agent Studio will also gain prompt libraries and version control for managing agent prompts across their lifecycle, workflow chaining and deterministic execution for organising multi-step agent tasks, and human-in-the-loop support to combine automation with oversight.
Oracle also highlights its network of 32,000 certified experts trained in building AI agents via Agent Studio. These experts can help customers optimise use, extend the marketplace, and ensure agent quality and safety.
Overall, Oracle’s release positions its Fusion ecosystem as a more open, flexible, and enterprise-ready platform for AI agent deployment, balancing embedded automation with extensibility and governance.
Under this plan, sensor and equipment data from factory floors is captured in real time via Azure IoT and forwarded through Fabric. That data will then feed directly into Oracle SCM workflows.
The goal: more visibility, faster decisions and automated responses, such as triggering maintenance, quality checks or inventory adjustments.
Among the features highlighted are secure, real-time intelligence and data flows from shop floor equipment into enterprise systems, automated business events that respond to changes (e.g. imbalance, faults, demand shifts), standardised best practices with reference architectures and prescriptive guidance for integration and embedded AI assistant capabilities in SCM to augment decision making and resilience.
Oracle frames this as part of its Smart Operations vision, where systems are more connected and responsive by design. Microsoft emphasises that Azure’s edge processing and Fabric’s real-time analytics are critical to turning raw IoT signals into actionable business events.
Salesforce and AWS outlined a tighter partnership on agentic AI, citing rapid growth in enterprise agents and usage. They set four pillars for the ‘Agentic Enterprise’: unified data, interoperable agents, modernised contact centres and streamlined procurement via AWS Marketplace.
Data 360 ‘Zero Copy’ accesses Amazon Redshift without duplication, while Data 360 Clean Rooms integrate with AWS Clean Rooms for privacy-preserving collaboration. 1-800Accountant reports agents resolving most routine inquiries so human experts focus on higher-value work.
Agentforce supports open standards such as Model Context Protocol and Agent2Agent to coordinate multi-vendor agents. Pilots link Bedrock-based agents and Slack integrations that surface Quick Suite tools, with Anthropic and Amazon Nova models available inside Salesforce’s trust boundary.
Contact centres extend agentic workflows through Salesforce Contact Center with Amazon Connect, adding voice self-service plus real-time transcription and sentiment. Complex issues hand off to representatives with full context, and Toyota Motor North America plans automation for service tasks.
Procurement scales via AWS Marketplace, where Salesforce surpassed $2bn in lifetime sales across 30 countries. AgentExchange listings provide prebuilt, customisable agents and workflows, helping enterprises adopt agentic AI faster with governance and security intact.
Most firms are still struggling to turn AI pilots into measurable value, Cisco’s 2025 AI Readiness Index finds. Only 13% are ‘AI-ready’, having scaled deployments with results. The rest face gaps in data, security and governance.
Southeast Asia outperforms the global average at 16% ready. Indonesia reaches 23% and Thailand 21%, ahead of Europe at 11% and the Americas at 14%. Cisco says lower tech debt helps some emerging markets leapfrog.
Infrastructure debt is mounting: limited GPU capacity, fragmented data and constrained networks slow progress. Just 34% say their tech stack can adapt and scale for evolving compute needs. Most remain stuck in pilots.
Adoption plans are ambitious: 83% intend to deploy AI agents, with almost 40% expecting them to support staff within a year. Yet only one in three have change-management programmes, risking stalled workplace integration.
The leaders pair strong digital foundations with clear governance and cybersecurity embedded by design. Cisco urges broader collaboration among industry, government and tech firms, arguing that trust, regulation and investment will determine who monetises AI first.
Yale University and Google unveiled Cell2Sentence-Scale 27B, a 27-billion-parameter model built on Gemma to decode the ‘language’ of cells. The system generated a novel hypothesis about cancer cell behaviour, and CEO Sundar Pichai called it ‘an exciting milestone’ for AI in science.
The work targets a core problem in immunotherapy: many tumours are ‘cold’ and evade immune detection. Making them visible requires boosting antigen presentation. C2S-Scale sought a ‘conditional amplifier’ drug that boosts signals only in immune-context-positive settings.
Smaller models lacked the reasoning to solve the problem, but scaling to 27B parameters unlocked the capability. The team then simulated 4,000 drugs across patient samples. The model flagged context-specific boosters of antigen presentation, with 10–30% already known and the rest entirely novel.
Researchers emphasise that conditional amplification aims to raise immune signals only where key proteins are present. That could reduce off-target effects and make ‘cold’ tumours discoverable. The result hints at AI-guided routes to more precise cancer therapies.
Google has released C2S-Scale 27B on GitHub and Hugging Face for the community to explore. The approach blends large-scale language modelling with cell biology, signalling a new toolkit for hypothesis generation, drug prioritisation, and patient-relevant testing.
Oracle and NVIDIA have joined forces to advance sovereign AI, supporting Abu Dhabi’s vision of becoming an AI-native government by 2027.
The partnership combines the computing platforms of NVIDIA with Oracle Cloud Infrastructure to create secure, high-performance systems that deliver next-generation citizen services, including multilingual AI assistants, automatic notifications, and intelligent compliance solutions.
Abu Dhabi’s Government Digital Strategy 2025–2027, backed by an AED 13 billion investment, follows a phased ‘crawl, walk, run’ approach. The initiative has already gone live across 25 government entities, enabling over 15,000 daily users to access AI-accelerated services.
Generative AI applications are now integrated into human resources, procurement, and financial reporting, while advanced agentic AI and autonomous workflows will further enhance government-wide operations.
The strategy ensures full data sovereignty while driving innovation and efficiency across the public sector.
Partnerships with Deloitte and Core42 provide infrastructure and compliance support, while over 200 AI-powered capabilities are deployed to boost digital skills, economic growth, and employment opportunities.
By 2027, the initiative is expected to contribute more than 24 billion AED to Abu Dhabi’s GDP and create over 5,000 jobs, demonstrating a global blueprint for AI-native government transformation.
Nscale has signed an expanded deal with Microsoft to deliver about 200,000 NVIDIA GB300 GPUs across Europe and the US, with Dell collaborating. The company calls it one of the largest AI infrastructure contracts to date. The build-out targets surging enterprise demand for GPU capacity.
A ~240MW hyperscale AI campus in Texas, US, will host roughly 104,000 GB300s from Q3 2026, leased from Ionic Digital. Nscale plans to scale the site to 1.2GW, with Microsoft holding an option on a second 700MW phase from late 2027. The campus is optimised for air-cooled, power-efficient deployments.
In Europe, Nscale will deploy about 12,600 GB300s from Q1 2026 at Start Campus in Sines, Portugal, supporting sovereign AI needs within the EU. A separate UK facility at Loughton will house around 23,000 GB300s from Q1 2027. The 50MW site is scalable to 90MW to support Azure services.
A Norway programme also advances the Aker-Nscale joint venture’s plans for about 52,000 GB300s at Narvik, alongside Nscale’s GW+ greenfield sites and orchestration targeting training, fine-tuning, and inference at scale. Microsoft emphasises sustainability and global availability.
Both firms cast the pact as deepening transatlantic tech ties and accelerating the rollout of next-gen AI services. Nscale says few providers can deploy GPU fleets at this pace. The roadmap points to sovereign-grade, multi-region capacity with lower-latency platforms.
The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economic engine. Capital has flowed into AI companies at an unprecedented pace, fuelled by expectations of substantial future returns.
Yet despite this flood of capital, none of the leading players has managed to break even, let alone deliver a net-positive financial year. Even so, funding shows no signs of slowing, driven by the belief that profitability is only a matter of time. Is this optimism justified, or is the AI boom, for now, little more than smoke and mirrors?
Where the AI money flows
Understanding the question of AI profitability starts with following the money. Capital flows through the ecosystem from top to bottom, beginning with investors and culminating in massive infrastructure spending. Tracing this flow makes it easier to see where profits might eventually emerge.
The United States is the clearest focal point. The country has become the main hub for AI investment, where the technology is presented as the next major economic catalyst and treated by many investors as a potential cash cow.
The US market fuels AI through a mix of venture capital, strategic funding from Big Tech, and public investment. By late August 2025, at least 33 US AI startups had each raised 100 million dollars or more, showing the depth of available capital and investor appetite.
OpenAI stands apart from the rest of the field. Multiple reports point to a primary round of roughly USD 40 billion at a USD 300 billion post-money valuation, followed by secondary transactions that pushed the implied valuation even higher. No other AI company has matched this scale.
Much of the capital is not aimed at quick profits. Large sums support research, model development, and heavy infrastructure spending on chips, data centres, and power. Plans to deploy up to 6 gigawatts of AMD accelerators in 2026 show how funding moves into capacity rather than near-term earnings.
Strategic partners and financiers supply some of the largest investments. Microsoft has a multiyear, multibillion-dollar deal with OpenAI. Amazon has invested USD 4 billion in Anthropic, Google has pledged up to USD 2 billion, and infrastructure players like Oracle and CoreWeave are backed by major Wall Street banks.
AI makes money – it’s just not enough (yet)
Winning over deep-pocketed investors has become essential for both scrappy startups and established AI giants. Tech leaders have poured money into ambitious AI ventures for many reasons, from strategic bets to genuine belief in the technology’s potential to reshape industries.
No matter their motives, investors eventually expect a return. Few are counting on quick profits, but sooner or later, they want to see results, and the pressure to deliver is mounting. Hype alone cannot sustain a company forever.
To survive, AI companies need more than large fundraising rounds. Real users and reliable revenue streams are what keep a business afloat once investor patience runs thin. Building a loyal customer base separates long-term players from temporary hype machines.
OpenAI provides the clearest example of a company that has scaled. In the first half of 2025, it generated around 4.3 billion dollars in revenue, and by October, its CEO reported that roughly 800 million people were using ChatGPT weekly. The scale of its user base sets it apart from most other AI firms, but the company’s massive infrastructure and development costs keep it far from breaking even.
Microsoft has also benefited from the surge in AI adoption. Azure grew 39 percent year-over-year in Q4 FY2025, reaching 29.9 billion dollars. AI services drive a significant share of this growth, but data-centre expansion and heavy infrastructure costs continue to weigh on margins.
NVIDIA remains the biggest financial winner. Its chips power much of today’s AI infrastructure, and demand has pushed data-centre revenue to record highs. In Q2 FY2026, the company reported total revenue of 46.7 billion dollars, yet overall industry profits still lag behind massive investment levels due to maintenance costs and a mismatch between investment and earnings.
Why AI projects crash and burn
Even as the major AI players earn enough to offset some of their costs, more than two-fifths of AI initiatives end up on the virtual scrapheap for a range of reasons. Many companies jumped on the AI wave without a clear plan, copying what others were doing and overlooking the huge upfront investments needed to get projects off the ground.
GPU prices have soared in recent years, and new tariffs introduced by the current US administration have added even more pressure. Running an advanced model requires top-tier chips like NVIDIA’s H100, which costs around 30,000 dollars per unit. Once power consumption, facility costs, and security are added, the total bill becomes daunting for all but the largest players.
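To give a feel for how quickly these costs compound, here is a back-of-the-envelope cost model for a small GPU cluster. Only the USD 30,000 H100 unit price comes from the text above; the cluster size, power draw, electricity rate, and facility overhead multiplier are all illustrative assumptions, not figures from the article.

```python
# Rough cost sketch for a hypothetical 64-GPU cluster.
# GPU_UNIT_COST is from the article; all other figures are assumptions.

NUM_GPUS = 64                   # assumed cluster size
GPU_UNIT_COST = 30_000          # USD per H100 (stated in the article)
GPU_POWER_KW = 0.7              # assumed ~700 W draw per GPU under load
ELECTRICITY_USD_PER_KWH = 0.12  # assumed industrial electricity rate
HOURS_PER_YEAR = 24 * 365
FACILITY_OVERHEAD = 1.5         # assumed multiplier for cooling, losses, etc.

# One-off hardware spend before a single query is served.
hardware_capex = NUM_GPUS * GPU_UNIT_COST

# Recurring yearly power bill, inflated by facility overhead.
annual_energy_kwh = NUM_GPUS * GPU_POWER_KW * HOURS_PER_YEAR * FACILITY_OVERHEAD
annual_power_opex = annual_energy_kwh * ELECTRICITY_USD_PER_KWH

print(f"Hardware capex:    ${hardware_capex:,.0f}")
print(f"Annual power opex: ${annual_power_opex:,.0f}")
```

Even this modest hypothetical cluster implies roughly two million dollars up front before staffing, networking, and security are counted, which illustrates why the bill becomes daunting for all but the largest players.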
Another common issue is the lack of a scalable business model. Many companies adopt AI simply for the label, without a clear strategy for turning interest into revenue. In some industries, these efforts raise questions with customers and employees, exposing persistent trust gaps between human workers and AI systems.
The talent shortage creates further challenges. A young AI startup needs skilled engineers, data scientists, and operations teams to keep everything running smoothly. Building and managing a capable team requires both money and expertise. Unrealistic goals often add extra strain, causing many projects to falter before reaching the finish line.
Legal and ethical hurdles can also derail projects early on. Privacy laws, intellectual property disputes, and unresolved ethical questions create a difficult environment for companies trying to innovate. Lawsuits and legal fees have become routine, prompting some entrepreneurs to shut down rather than risk deeper financial trouble.
All of these obstacles together have proven too much for many ventures, leaving behind a discouraging trail of disbanded companies and abandoned ambitions. Sailing the AI seas offers a great opportunity, but storms can form quickly and overturn even the most confident voyages.
How AI can become profitable
While the situation may seem challenging now, there is still light at the end of the AI tunnel. The key to building a profitable and sustainable AI venture lies in careful planning and scaling only when the numbers add up. Companies that focus on fundamentals rather than hype stand the best chance of long-term success.
Lowering operational costs is one of the most important steps. Techniques such as model compression, caching, and routing queries to smaller models can dramatically reduce the cost of running AI systems. Improvements in chip efficiency and better infrastructure management can also help stretch every dollar further.
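Two of the techniques named above, caching and routing queries to smaller models, can be sketched in a few lines. Everything here is illustrative: the per-query costs, the word-count routing heuristic, and the model responses are assumptions, not any vendor's actual API.

```python
# Minimal sketch of caching plus model routing, with assumed costs.
from functools import lru_cache

SMALL_MODEL_COST = 0.001  # assumed USD per query on a small model
LARGE_MODEL_COST = 0.020  # assumed USD per query on a large model

def is_simple(query: str) -> bool:
    """Crude routing heuristic: short queries go to the small model."""
    return len(query.split()) < 20

@lru_cache(maxsize=10_000)
def answer(query: str) -> tuple[str, float]:
    """Cached dispatch: an exact repeat query incurs zero marginal cost."""
    if is_simple(query):
        return f"[small-model answer to: {query}]", SMALL_MODEL_COST
    return f"[large-model answer to: {query}]", LARGE_MODEL_COST

text, cost = answer("What is our refund policy?")
print(cost)  # routed to the cheap model
```

Real systems use semantic caches and learned routers rather than exact-match caching and word counts, but the cost structure is the same: every cache hit and every downgraded query subtracts directly from inference spend.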
Shifting the revenue mix is another crucial factor. Many companies currently rely on cheap consumer products that attract large user bases but offer thin margins. A stronger focus on enterprise clients, who pay for reliability, customisation, and security, can provide a steadier and more profitable income stream.
Building real platforms rather than standalone products can unlock new revenue sources. Offering APIs, marketplaces, and developer tools allows companies to collect a share of the value created by others. The approach mirrors the strategies used by major cloud providers and app ecosystems.
Improving unit economics will determine which companies endure. Serving more users at lower per-request costs, increasing cache hit rates, and maximising infrastructure utilisation are essential to moving from growth at any cost to sustainable profit. Careful optimisation can turn large user bases into reliable sources of income.
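The link between cache hit rate and unit economics can be made concrete with a small worked example. The per-request cost and revenue figures below are illustrative assumptions chosen to show the crossover point, not industry data.

```python
# Unit-economics sketch: how cache hit rate pushes per-request cost
# below per-request revenue. All numbers are illustrative assumptions.

def cost_per_request(base_cost: float, cache_hit_rate: float) -> float:
    """Only cache misses pay the full inference cost."""
    return base_cost * (1 - cache_hit_rate)

BASE_COST = 0.010  # assumed USD inference cost per uncached request
REVENUE = 0.006    # assumed USD revenue per request

for hit_rate in (0.0, 0.3, 0.6):
    cost = cost_per_request(BASE_COST, hit_rate)
    margin = REVENUE - cost
    print(f"hit rate {hit_rate:.0%}: cost ${cost:.4f}, margin ${margin:+.4f}")
```

Under these assumed numbers, the service loses money on every request at a 0% hit rate and only turns a per-request profit above roughly a 40% hit rate, which is why cache efficiency and utilisation decide who moves from growth at any cost to sustainable profit.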
Stronger financial discipline and clearer regulation can also play a role. Companies that set realistic growth targets and operate within stable policy frameworks are more likely to survive in the long run. Profitability will depend not only on innovation but also on smart execution and strategic focus.
Charting the future of AI profitability
The AI bubble appears stretched thin, and a constant stream of investments can do little more than artificially extend the lifespan of an AI venture doomed to fail. AI companies must find a way to create viable, realistic roadmaps to justify the sizeable cash injections, or they risk permanently compromising investors’ trust.
That said, the industry is still in its early and formative years, and there is plenty of room to grow and adapt to current and future landscapes. AI has the potential to become a stable economic force, but only if companies can find a compromise between innovation and financial pragmatism. Profitability will not come overnight, but it is within reach for those willing to build patiently and strategically.