The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economic engine. Capital has flowed into AI companies at an unprecedented pace, fuelled by expectations of substantial future returns.
Yet despite these bloated investments, none of the leading players have managed to break even, let alone deliver a net-positive financial year. Even so, funding shows no signs of slowing, driven by the belief that profitability is only a matter of time. Is this optimism justified, or is the AI boom, for now, little more than smoke and mirrors?
Where the AI money flows
Understanding the question of AI profitability starts with following the money. Capital flows through the ecosystem from top to bottom, beginning with investors and culminating in massive infrastructure spending. Tracing this flow makes it easier to see where profits might eventually emerge.
The United States is the clearest focal point. The country has become the main hub for AI investment, where the technology is presented as the next major economic catalyst and treated by many investors as a potential cash cow.
The US market fuels AI through a mix of venture capital, strategic funding from Big Tech, and public investment. By late August 2025, at least 33 US AI startups had each raised 100 million dollars or more, showing the depth of available capital and investor appetite.
OpenAI stands apart from the rest of the field. Multiple reports point to a primary round of roughly USD 40 billion at a USD 300 billion post-money valuation, followed by secondary transactions that pushed the implied valuation even higher. No other AI company has matched this scale.
Much of the capital is not aimed at quick profits. Large sums support research, model development, and heavy infrastructure spending on chips, data centres, and power. Plans to deploy up to 6 gigawatts of AMD accelerators, beginning in 2026, show how funding moves into capacity rather than near-term earnings.
Strategic partners and financiers supply some of the largest investments. Microsoft has a multiyear, multibillion-dollar deal with OpenAI. Amazon has invested USD 4 billion in Anthropic, Google has pledged up to USD 2 billion, and infrastructure players like Oracle and CoreWeave are backed by major Wall Street banks.
AI makes money – it’s just not enough (yet)
Winning over deep-pocketed investors has become essential for both scrappy startups and established AI giants. Tech leaders have poured money into ambitious AI ventures for many reasons, from strategic bets to genuine belief in the technology’s potential to reshape industries.
No matter their motives, investors eventually expect a return. Few are counting on quick profits, but sooner or later, they want to see results, and the pressure to deliver is mounting. Hype alone cannot sustain a company forever.
To survive, AI companies need more than large fundraising rounds. Real users and reliable revenue streams are what keep a business afloat once investor patience runs thin. Building a loyal customer base separates long-term players from temporary hype machines.
OpenAI provides the clearest example of a company that has scaled. In the first half of 2025, it generated around 4.3 billion dollars in revenue, and by October, its CEO reported that roughly 800 million people were using ChatGPT weekly. The scale of its user base sets it apart from most other AI firms, but the company’s massive infrastructure and development costs keep it far from breaking even.
Microsoft has also benefited from the surge in AI adoption. Its Intelligent Cloud segment reached 29.9 billion dollars in revenue in Q4 FY2025, with Azure and other cloud services growing 39 percent year-over-year. AI services drive a significant share of this growth, but data-centre expansion and heavy infrastructure costs continue to weigh on margins.
NVIDIA remains the biggest financial winner. Its chips power much of today’s AI infrastructure, and demand has pushed data-centre revenue to record highs. In Q2 FY2026, the company reported total revenue of 46.7 billion dollars. Even so, profits across the wider industry still lag far behind the scale of investment, weighed down by operating costs and the gap between capital spending and earnings.
Why AI projects crash and burn
While the major AI players earn enough to offset at least some of their costs, more than two-fifths of AI initiatives end up on the virtual scrapheap for a range of reasons. Many companies jumped on the AI wave without a clear plan, copying what others were doing and overlooking the huge upfront investments needed to get projects off the ground.
GPU prices have soared in recent years, and new tariffs introduced by the current US administration have added even more pressure. Running an advanced model requires top-tier chips like NVIDIA’s H100, which costs around 30,000 dollars per unit. Once power consumption, facility costs, and security are added, the total bill becomes daunting for all but the largest players.
Another common issue is the lack of a scalable business model. Many companies adopt AI simply for the label, without a clear strategy for turning interest into revenue. In some industries, these efforts raise questions with customers and employees, exposing persistent trust gaps between human workers and AI systems.
The talent shortage creates further challenges. A young AI startup needs skilled engineers, data scientists, and operations teams to keep everything running smoothly. Building and managing a capable team requires both money and expertise. Unrealistic goals often add extra strain, causing many projects to falter before reaching the finish line.
Legal and ethical hurdles can also derail projects early on. Privacy laws, intellectual property disputes, and unresolved ethical questions create a difficult environment for companies trying to innovate. Lawsuits and legal fees have become routine, prompting some entrepreneurs to shut down rather than risk deeper financial trouble.
All of these obstacles together have proven too much for many ventures, leaving behind a discouraging trail of disbanded companies and abandoned ambitions. Sailing the AI seas offers a great opportunity, but storms can form quickly and overturn even the most confident voyages.
How AI can become profitable
While the situation may seem challenging now, there is still light at the end of the AI tunnel. The key to building a profitable and sustainable AI venture lies in careful planning and scaling only when the numbers add up. Companies that focus on fundamentals rather than hype stand the best chance of long-term success.
Lowering operational costs is one of the most important steps. Techniques such as model compression, caching, and routing queries to smaller models can dramatically reduce the cost of running AI systems. Improvements in chip efficiency and better infrastructure management can also help stretch every dollar further.
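As a rough illustration of how these techniques fit together, the sketch below (a hypothetical example, not drawn from any particular company) routes short queries to a cheaper model and caches repeated answers; the model names, prices, and complexity heuristic are placeholder assumptions.

```python
# Minimal illustrative sketch: caching repeated answers and routing queries to a
# smaller model to cut per-request inference cost. Model names, prices, and the
# complexity heuristic are hypothetical placeholders.

import hashlib

# Hypothetical per-1K-token prices in USD for a large and a small model.
PRICE_PER_1K = {"large-model": 0.0100, "small-model": 0.0005}

cache: dict[str, str] = {}  # query hash -> cached answer
spend = {"large-model": 0.0, "small-model": 0.0, "cache_hits": 0}


def answer_with(model: str, query: str) -> str:
    """Stand-in for a real inference call; records an approximate cost."""
    tokens = max(len(query.split()), 1)
    spend[model] += tokens / 1000 * PRICE_PER_1K[model]
    return f"[{model}] answer to: {query}"


def route(query: str) -> str:
    """Serve from cache when possible; send only long queries to the large model."""
    key = hashlib.sha256(query.lower().encode()).hexdigest()
    if key in cache:
        spend["cache_hits"] += 1
        return cache[key]
    # Crude complexity heuristic: short queries go to the cheaper model.
    model = "large-model" if len(query.split()) > 20 else "small-model"
    result = answer_with(model, query)
    cache[key] = result
    return result


if __name__ == "__main__":
    queries = ["What is the capital of France?"] * 3 + [
        "Summarise the trade-offs between training a frontier model in-house "
        "and renting capacity from a cloud provider over a five-year horizon."
    ]
    for q in queries:
        route(q)
    print(spend)  # most traffic served cheaply or from cache
```

Even with crude heuristics like these, a large share of everyday traffic can be served at a fraction of the cost of always calling the largest model.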
Shifting the revenue mix is another crucial factor. Many companies currently rely on cheap consumer products that attract large user bases but offer thin margins. A stronger focus on enterprise clients, who pay for reliability, customisation, and security, can provide a steadier and more profitable income stream.
Building real platforms rather than standalone products can unlock new revenue sources. Offering APIs, marketplaces, and developer tools allows companies to collect a share of the value created by others. The approach mirrors the strategies used by major cloud providers and app ecosystems.
Improving unit economics will determine which companies endure. Serving more users at lower per-request costs, increasing cache hit rates, and maximising infrastructure utilisation are essential to moving from growth at any cost to sustainable profit. Careful optimisation can turn large user bases into reliable sources of income.
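A back-of-the-envelope calculation, using purely hypothetical figures, shows how sensitive those margins are to per-request serving cost and cache hit rate:

```python
# Back-of-the-envelope unit economics with hypothetical figures (not from the article).
# Gross margin per month improves as per-request cost falls and cache hit rate rises.

def monthly_margin(subscribers: int, price: float, requests_per_user: int,
                   cost_per_request: float, cache_hit_rate: float) -> float:
    """Revenue minus serving cost; cached requests are assumed to cost roughly nothing."""
    billed_requests = subscribers * requests_per_user * (1 - cache_hit_rate)
    revenue = subscribers * price
    serving_cost = billed_requests * cost_per_request
    return revenue - serving_cost


# 1M subscribers at $20/month, 300 requests each: the margin swings on cost and caching.
print(monthly_margin(1_000_000, 20.0, 300, cost_per_request=0.08, cache_hit_rate=0.10))  # ~ -1.6M
print(monthly_margin(1_000_000, 20.0, 300, cost_per_request=0.03, cache_hit_rate=0.40))  # ~ +14.6M
```

On these assumed numbers, the same subscriber base flips from a monthly loss to a healthy margin once serving costs fall and more requests are answered from cache, which is why unit economics, rather than headline user counts, will decide who endures.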
Stronger financial discipline and clearer regulation can also play a role. Companies that set realistic growth targets and operate within stable policy frameworks are more likely to survive in the long run. Profitability will depend not only on innovation but also on smart execution and strategic focus.
Charting the future of AI profitability
The AI bubble appears stretched thin, and a constant stream of investments can do little more than artificially extend the lifespan of an AI venture doomed to fail. AI companies must find a way to create viable, realistic roadmaps to justify the sizeable cash injections, or they risk permanently compromising investors’ trust.
That said, the industry is still in its early and formative years, and there is plenty of room to grow and adapt to current and future landscapes. AI has the potential to become a stable economic force, but only if companies can find a compromise between innovation and financial pragmatism. Profitability will not come overnight, but it is within reach for those willing to build patiently and strategically.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US firm OpenAI has announced a multi-year collaboration with Broadcom to design and deploy 10 gigawatts of custom AI accelerators.
The partnership will combine OpenAI’s chip design expertise with Broadcom’s networking and Ethernet technologies to create large-scale AI infrastructure. The deployment is expected to begin in the second half of 2026 and be completed by the end of 2029.
The collaboration enables OpenAI to integrate insights gained from its frontier models directly into the hardware, enhancing efficiency and performance.
Broadcom will develop racks of AI accelerators and networking systems across OpenAI’s data centres and those of its partners. The initiative is expected to meet growing global demand for advanced AI computation.
Executives from both companies described the partnership as a significant step toward the next generation of AI infrastructure. OpenAI CEO Sam Altman said it would help deliver the computing capacity needed to realise the benefits of AI for people and businesses worldwide.
Broadcom CEO Hock Tan called the collaboration a milestone in the industry’s pursuit of more capable and scalable AI systems.
The agreement strengthens Broadcom’s position in AI networking and underlines OpenAI’s move toward greater control of its technological ecosystem. By developing its own accelerators, OpenAI aims to boost innovation while advancing its mission to ensure artificial general intelligence benefits humanity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new wave of digital protectionism is taking shape around the world — this time in the name of children’s safety.
Denmark is preparing to ban social media for users under 15, joining a small but growing club of countries seeking to push minors off major platforms. The government has yet to release full details, but the move reflects a growing recognition across many countries that the costs of children’s unrestricted access to social media — from mental health issues to exposure to harmful content — are no longer acceptable.
For inspiration, Copenhagen does not have to look far. Australia has already outlined one of the most detailed blueprints for a nationwide ban on under-16s, set to take effect on 10 December 2025. The law requires platforms to verify users’ ages, remove underage accounts, and block re-registrations. Platforms will also need to communicate clearly with affected users, although questions remain, including whether deleted content will be restored when a user turns 16.
In the EU, the debate over how to protect children online is entangled with a parallel fight over privacy and surveillance. Within the EU Council, a meeting of home affairs ministers taking place next week was expected to include a vote on the long-discussed ‘Chat Control’ regulation proposal, which aims to combat the distribution of child sexual abuse material (CSAM). The proposal is no longer on the agenda, as member states do not appear to agree on the current text, and the vote is reportedly postponed until December.
According to the most recent version of the draft regulation, a chat service can be required to screen users’ messages before they are sent and encrypted, but only after a decision from a judicial authority. The system would then search for images of child sexual abuse that are already in databases, while text messages themselves would not be reviewed. Although these provisions were presented as safeguards, not everyone is in agreement, and concerns remain over implications for privacy and encryption, among other issues.
Why it matters: Together, these developments suggest that the era of self-regulation for social media may be drawing to a close. The global debate is not about whether the digital playground needs guardians, but about the final design of its safety features. As governments weigh bans, lawsuits, and surveillance mandates, they struggle to balance two imperatives: protecting children from harm while safeguarding fundamental rights to privacy and free expression.
IN OTHER NEWS THIS WEEK
Decisive actions in AI governance
The world is incessantly debating the future and governance of AI. Here are some of the latest moves in the space.
Italy has made history as the first EU member state to pass its own national AI law, going beyond the framework of the EU’s Artificial Intelligence Act. The law comes into effect on 10 October, introducing sector-specific rules across health, justice, work, and public administration. Among its provisions: transparency obligations, criminal penalties for misuse of AI (such as harmful deepfakes), new oversight bodies, and protections for minors (e.g. parental consent for users under 14).
In Brussels, the European Commission is simultaneously strategising for digital sovereignty – trying to break the EU’s dependence on foreign AI infrastructure. Its new ‘Apply AI’ strategy aims to channel €1 billion into deploying European AI platforms, integrating them into public services (health, defence, industry), and supporting local tech innovation. The Commission also launched an ‘AI in Science’ initiative to solidify Europe’s position at the forefront of AI research, through a network called RAISE.
Meanwhile, across the Atlantic, California has signed into law a bold transparency and whistleblower regime aimed at frontier AI developers – those deploying large, compute-intensive models. Under SB 53 (the Transparency in Frontier Artificial Intelligence Act), companies must publish safety protocols, monitor risks, and disclose ‘critical safety incidents.’ Crucially, employees who believe there is a catastrophic risk (even without full proof) are shielded from retaliation.
The bigger picture: These moves from Italy, the EU and California are part of a broader trend in which debates on AI governance are giving way to decisive action.
Beijing tightens rare earth grip
China has tightened its grip on the global tech supply chain by significantly expanding its restrictions on its rare earth exports. The new rules no longer focus solely on raw minerals — they now encompass processed materials, manufacturing equipment, and even the expertise used to refine and recycle rare earths. Exporters must seek government approval not only to ship these elements, but also for any product that contains them at a level exceeding 0.1%. Licences will be denied if the end users are involved in weapons production or military applications. Semiconductors won’t be spared either — chipmakers will now face intrusive case-by-case scrutiny, with Beijing demanding full visibility into tech specifications and end users before granting approval.
China is also sealing off human expertise. Engineers and companies in China are prohibited from participating in rare earth projects abroad unless the government explicitly permits it.
A critical moment: The timing of this development is no accident. With US-China tensions escalating and high-level talks between Presidents Trump and Xi on the horizon, Beijing is brandishing what could be described as a powerful economic weapon: monopoly over the minerals that power advanced electronics.
New COMESA platform enables instant, affordable cross-border payments
The trials, supported by two digital financial service providers and one foreign exchange provider, mark a significant step toward a secure and inclusive regional payment system. CCH encourages active participation from partners and stakeholders to refine and validate the platform, ensuring it delivers reliable, immediate, and affordable payments that empower individuals and businesses across the region.
The DRPP is part of CCH’s broader mission to promote economic growth and prosperity through intra-regional trade and integration. By bridging national markets and reducing barriers to trade, the platform seeks to create a financially integrated COMESA region where secure, affordable, and inclusive cross-border payments power trade, investment, and prosperity.
Superconducting breakthrough wins 2025 Nobel Prize in physics
The laureates’ pioneering experiments in the mid-1980s used superconducting circuits – specifically Josephson junctions, where superconducting components are separated by an ultra-thin insulating layer. By carefully controlling these circuits, the laureates showed that the circuits could exhibit two hallmark quantum phenomena: tunnelling, where a system escapes a trapped state by passing through an energy barrier, and energy quantisation, where it absorbs or emits only specific amounts of energy.
Their work revealed that quantum behaviour, once thought to apply only to atomic particles, can manifest at the macroscopic scale. The discovery not only deepens understanding of fundamental physics but also underpins emerging quantum technologies, from computing to cryptography.
As Nobel Committee Chair Olle Eriksson noted, the award celebrates how century-old quantum mechanics continues to yield new insights and practical innovations shaping the digital age.
The 80th session of the UN General Assembly First Committee on Disarmament and International Security is taking place in New York from 8 October to 7 November 2025. The general debate on all disarmament and international security agenda items will run from Wednesday, 8 October, to Friday, 17 October. Among the topics expected to be discussed is the Global Mechanism, which is set to succeed the work of the OEWG. A briefing by the Chairperson of the Open-ended Working Group on security of and in the use of information and communications technologies 2021-2025 is scheduled for 27 October.
UN DESA will host two days of virtual consultations to review the ‘Zero Draft’ of the WSIS+20 process. Member states and stakeholders from civil society, academia, technical communities, and international organisations will discuss digital governance, bridging digital divides, human rights, and the digital economy. Sessions are designed for inclusive, global participation, offering a platform to share experiences, provide feedback, and refine the draft ahead of the second Preparatory Meeting on 15 October.
Informal negotiations on the draft are set to begin next week, taking place on 16–17 and 20–21 October 2025.
Geneva Peace Week 2025
The 2025 edition of Geneva Peace Week will bring together peacebuilders, policymakers, academics, and civil society to discuss and advance peacebuilding initiatives. The programme covers a wide range of topics, including conflict prevention, humanitarian response, environmental peacebuilding, and social cohesion. Sessions this year will explore new technologies, cybersecurity, and AI, including AI-fuelled polarisation, AI for decision-making in fragile contexts, responsible AI use in peacebuilding, and digital approaches to supporting the voluntary and dignified return of displaced communities.
GESDA 2025 Summit
The GESDA 2025 Summit brings together scientists, diplomats, policymakers, and thought leaders to explore the intersection of science, technology, and diplomacy. Held at CERN in Geneva with hybrid participation, the three-day programme features sessions on emerging scientific breakthroughs, dual-use technologies, and equitable access to innovation. Participants will engage in interactive discussions, workshops, and demonstrations to examine how frontier science can inform global decision-making, support diplomacy, and address challenges such as climate change and sustainable development.
Other researchers question whether autonomous AI scientists are possible or even desirable.
Discover how generative AI is designing synthetic proteins that outperform nature, revolutionising gene therapy and accelerating the search for new medical cures.
The report takes stock of the current role the postal sector is playing in enabling inclusive digital transformation and provides recommendations on how to further scale its contribution.
The US AI company, OpenAI, has met with the European Commission to discuss competition in the rapidly expanding AI sector.
The meeting focused on how large technology firms such as Apple, Microsoft and Google shape access to digital markets through their operating systems, app stores and search engines.
During the discussion, OpenAI highlighted that such platforms significantly influence how users and developers engage with AI services.
The company encouraged regulators to ensure that innovation and consumer choice remain priorities as the industry grows, noting that collaboration between larger and smaller players can help maintain a balanced ecosystem.
The issue is complicated by OpenAI’s own partnerships with several leading technology companies. Microsoft, a key investor, has integrated ChatGPT into Windows 11’s Copilot, while Apple recently added ChatGPT support to Siri as part of its Apple Intelligence features.
Therefore, OpenAI’s engagement with regulators is part of a broader dialogue about maintaining open and competitive markets while fostering cooperation across the industry.
Although the European Commission has not announced any new investigations, the meeting reflects ongoing efforts to understand how AI platforms interact within the broader digital economy.
OpenAI and other stakeholders are expected to continue contributing to discussions to ensure transparency, fairness and sustainable growth in the AI ecosystem.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft Azure has launched the world’s first NVIDIA GB300 NVL72 supercomputing cluster, explicitly designed for OpenAI’s large-scale AI workloads.
The new NDv6 GB300 VM series integrates over 4,600 NVIDIA Blackwell Ultra GPUs, representing a significant step forward in US AI infrastructure and innovation leadership.
Each rack-scale system combines 72 GPUs and 36 Grace CPUs, offering 37 terabytes of fast memory and 1.44 exaflops of FP4 performance.
The configuration supports complex reasoning and multimodal AI systems, achieving up to five times the throughput of the previous NVIDIA Hopper architecture in MLPerf benchmarks.
The cluster is built on NVIDIA’s Quantum-X800 InfiniBand network, delivering 800 Gb/s of bandwidth per GPU for unified, high-speed performance.
Microsoft and NVIDIA’s long-standing collaboration has enabled a system capable of powering trillion-parameter models, positioning Azure at the forefront of the next generation of AI training and deployment.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US startup OpenAI has broadened access to its affordable ChatGPT Go plan, now available in 16 additional countries across Asia, including Malaysia, Vietnam, the Philippines, Pakistan, and Thailand.
Priced at under $5 per month, the plan offers local currency payments in select regions, while others will pay in USD with tax-adjusted variations.
ChatGPT Go gives users higher message and image-generation limits, increased upload capacity, and double the memory of the free plan.
The move follows significant regional growth, with Southeast Asia’s weekly active users increasing fourfold, and builds on earlier launches in India and Indonesia, where paid subscriptions have already doubled.
The expansion intensifies competition with Google, which recently introduced its Google AI Plus plan in more than 40 countries. Both companies are vying to attract users in fast-growing markets with low-cost AI access, each blending productivity and creative tools into subscription offerings.
At OpenAI’s DevDay 2025 in San Francisco, CEO Sam Altman announced that ChatGPT’s global weekly active users have reached 800 million.
OpenAI is also introducing in-chat applications from partners like Spotify, Zillow, and Coursera, signalling a shift toward transforming ChatGPT into a broader AI platform ecosystem.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has published 20 policy proposals to speed up AI adoption across the EU. Released shortly before the European Commission’s Apply AI Strategy, the report outlines practical steps for member states, businesses, and the public sector to bridge the gap between ambition and deployment.
The proposals originate from Hacktivate AI, a Brussels hackathon with 65 participants from EU institutions, governments, industry, and academia. They focus on workforce retraining, SME support, regulatory harmonisation, and public sector collaboration, highlighting OpenAI’s growing policy role in Europe.
Key ideas include Individual AI Learning Accounts to support workers, an AI Champions Network to mobilise SMEs, and a European GovAI Hub to share resources with public institutions. OpenAI’s Martin Signoux said the goal was to bridge the divide between strategy and action.
Europe already represents a major market for OpenAI tools, with widespread use among developers and enterprises, including Sanofi, Parloa, and Pigment. Yet adoption remains uneven, with IT and finance leading, manufacturing catching up, and other sectors lagging behind, exposing a widening digital divide.
The European Commission is expected to unveil its Apply AI Strategy within days. OpenAI’s proposals act as a direct contribution to the policy debate, complementing previous initiatives such as its EU Economic Blueprint and partnerships with governments in Germany and Greece.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI CEO Sam Altman has announced that ChatGPT now reaches 800 million weekly active users, reflecting rapid growth across consumers, developers, enterprises and governments.
The figure marks another milestone for the company, which reported 700 million weekly users in August and 500 million at the end of March.
Altman shared the news during OpenAI’s Dev Day keynote, noting that four million developers are now building with OpenAI tools. He said ChatGPT processes more than six billion tokens per minute through its API, signalling how deeply integrated it has become across digital ecosystems.
The event also introduced new tools for building apps directly within ChatGPT and creating more advanced agentic systems. Altman said these will support a new generation of interactive and personalised applications.
OpenAI, still legally a nonprofit, was recently valued at $500 billion following a private stock sale worth $6.6 billion.
Its growing portfolio now includes the Sora video-generation tool, a new social platform, and a commerce partnership with Stripe, consolidating its status as the world’s most valuable private company.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has acquired the personal investing startup Roi, which promises AI-driven insights, education, and guidance for individual investors. The Verge reports that the acquisition marks OpenAI’s official entry into the personal finance space.
Following the deal, Roi will shut down its service on October 15 and delete all user data. Its offerings included traditional investing options alongside crypto and NFTs. The company cited the transition in its shutdown announcement.
OpenAI did not publicly disclose the purchase price. The move takes OpenAI a step beyond content, tools and agents, toward embedding financial services into its AI ecosystem, and raises the question of how AI platforms might one day offer personalised wealth management or advisory services.
The acquisition also draws regulatory, ethical and trust considerations. Mixing AI with finance means issues like explainability, bias, fiduciary responsibility, data privacy and risk management become immediately relevant. Whether users will embrace AI financial advice depends as much on trust and governance as algorithmic accuracy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AMD and OpenAI have announced a strategic partnership to deploy up to six gigawatts of AMD GPUs, marking one of the largest AI compute collaborations.
The multi-year agreement will begin with the rollout of one gigawatt of AMD Instinct MI450 GPUs in the second half of 2026, with further deployments planned across future AMD generations.
The deal deepens a long-standing relationship between the two companies that began with AMD’s MI300X and MI350X series.
OpenAI will adopt AMD as a core strategic compute partner, integrating its technology into large-scale AI systems and jointly optimising product roadmaps to support next-generation AI workloads.
To strengthen alignment, AMD has issued OpenAI a warrant for up to 160 million shares, with tranches vesting as the partnership achieves deployment and share-price milestones. AMD expects the collaboration to deliver tens of billions of dollars in revenue and boost its non-GAAP earnings per share.
AMD CEO Dr Lisa Su called the deal ‘a true win-win’ for both companies, while OpenAI’s Sam Altman said the partnership will ‘accelerate progress and bring advanced AI benefits to everyone faster’.
The collaboration positions AMD as a leading hardware supplier in the race to build global-scale AI infrastructure.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!