OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. Altman framed the shift as ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.
The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.
Altman wrote: ‘We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.’
Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.
It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.
Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
MIT engineers have created an AI system that can assess material quality faster and more cheaply by generating synthetic spectral data. The tool uses generative AI to produce spectral readings across different scanning modalities, allowing industries to verify materials without using multiple instruments.
By analysing one type of scan, such as infrared, SpectroGen can accurately recreate what the same material’s X-ray or Raman spectrum would look like. The process is completed in less than a minute with AI, compared with hours or days using traditional laboratory equipment.
Researchers said the system achieved a 99% match with real-world data in trials involving more than 6,000 mineral samples. The breakthrough could streamline quality control in manufacturing, pharmaceuticals, semiconductors, and battery production, cutting both time and cost.
Professor Loza Tadesse described SpectroGen as a ‘co-pilot’ for researchers and technicians. Her team is now exploring medical and agricultural applications in the US, supported by Google funding, and plans to commercialise the technology through a startup.
Intel unveils ‘Crescent Island’ data-centre GPU at OCP, targeting real-time, everywhere inference with high memory capacity and energy-efficient performance for agentic AI.
Sachin Katti said scaling complex inference needs heterogeneous systems and an open, developer-first stack; Intel positions Xe architecture GPUs to deliver efficient headroom as token volumes surge.
Intel’s approach spans AI PC to data centre and edge, pairing Xeon 6 and GPUs with workload-centric orchestration to simplify deployment, scaling, and developer continuity.
Crescent Island is designed for air-cooled enterprise servers, optimised for power and cost, and tuned for inference with large memory capacity and bandwidth.
Key features include the Xe3P microarchitecture for performance-per-watt gains, 160GB LPDDR5X, broad data-type support for ‘tokens-as-a-service’, and a unified software stack proven on Arc Pro B-Series; customer sampling is slated for H2 2026.
Nscale has signed an expanded deal with Microsoft to deliver about 200,000 NVIDIA GB300 GPUs across Europe and the US, with Dell collaborating. The company calls it one of the largest AI infrastructure contracts to date. The build-out targets surging enterprise demand for GPU capacity.
A ~240MW hyperscale AI campus in Texas, US, will host roughly 104,000 GB300s from Q3 2026, leased from Ionic Digital. Nscale plans to scale the site to 1.2GW, with Microsoft holding an option on a second 700MW phase from late 2027. The campus is optimised for air-cooled, power-efficient deployments.
In Europe, Nscale will deploy about 12,600 GB300s from Q1 2026 at Start Campus in Sines, Portugal, supporting sovereign AI needs within the EU. A separate UK facility at Loughton will house around 23,000 GB300s from Q1 2027. The 50MW site is scalable to 90MW to support Azure services.
A Norway programme also advances Aker-Nscale’s joint venture plans for about 52,000 GB300s at Narvik, alongside Nscale’s GW+ greenfield sites and orchestration spanning training, fine-tuning, and inference at scale. Microsoft emphasises sustainability and global availability.
Both firms cast the pact as deepening transatlantic tech ties and accelerating the rollout of next-gen AI services. Nscale says few providers can deploy GPU fleets at this pace. The roadmap points to sovereign-grade, multi-region capacity with lower-latency platforms.
The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economic engine. Capital has flowed into AI companies at an unprecedented pace, fuelled by expectations of substantial future returns.
Yet despite this flood of capital, none of the leading players has managed to break even, let alone deliver a net-positive financial year. Even so, funding shows no signs of slowing, driven by the belief that profitability is only a matter of time. Is this optimism justified, or is the AI boom, for now, little more than smoke and mirrors?
Where the AI money flows
Understanding the question of AI profitability starts with following the money. Capital flows through the ecosystem from top to bottom, beginning with investors and culminating in massive infrastructure spending. Tracing this flow makes it easier to see where profits might eventually emerge.
The United States is the clearest focal point. The country has become the main hub for AI investment, where the technology is presented as the next major economic catalyst and treated by many investors as a potential cash cow.
The US market fuels AI through a mix of venture capital, strategic funding from Big Tech, and public investment. By late August 2025, at least 33 US AI startups had each raised 100 million dollars or more, showing the depth of available capital and investor appetite.
OpenAI stands apart from the rest of the field. Multiple reports point to a primary round of roughly USD 40 billion at a USD 300 billion post-money valuation, followed by secondary transactions that pushed the implied valuation even higher. No other AI company has matched this scale.
Much of the capital is not aimed at quick profits. Large sums support research, model development, and heavy infrastructure spending on chips, data centres, and power. Plans to deploy up to 6 gigawatts of AMD accelerators in 2026 show how funding moves into capacity rather than near-term earnings.
Strategic partners and financiers supply some of the largest investments. Microsoft has a multiyear, multibillion-dollar deal with OpenAI. Amazon has invested USD 4 billion in Anthropic, Google has pledged up to USD 2 billion, and infrastructure players like Oracle and CoreWeave are backed by major Wall Street banks.
AI makes money – it’s just not enough (yet)
Winning over deep-pocketed investors has become essential for both scrappy startups and established AI giants. Tech leaders have poured money into ambitious AI ventures for many reasons, from strategic bets to genuine belief in the technology’s potential to reshape industries.
No matter their motives, investors eventually expect a return. Few are counting on quick profits, but sooner or later, they want to see results, and the pressure to deliver is mounting. Hype alone cannot sustain a company forever.
To survive, AI companies need more than large fundraising rounds. Real users and reliable revenue streams are what keep a business afloat once investor patience runs thin. Building a loyal customer base separates long-term players from temporary hype machines.
OpenAI provides the clearest example of a company that has scaled. In the first half of 2025, it generated around 4.3 billion dollars in revenue, and by October, its CEO reported that roughly 800 million people were using ChatGPT weekly. The scale of its user base sets it apart from most other AI firms, but the company’s massive infrastructure and development costs keep it far from breaking even.
Microsoft has also benefited from the surge in AI adoption. Azure grew 39 percent year-over-year in Q4 FY2025, reaching 29.9 billion dollars. AI services drive a significant share of this growth, but data-centre expansion and heavy infrastructure costs continue to weigh on margins.
NVIDIA remains the biggest financial winner. Its chips power much of today’s AI infrastructure, and demand has pushed data-centre revenue to record highs. In Q2 FY2026, the company reported total revenue of 46.7 billion dollars, yet overall industry profits still lag behind massive investment levels due to maintenance costs and a mismatch between investment and earnings.
Why AI projects crash and burn
While the major AI players earn enough to offset at least some of their costs, more than two-fifths of AI initiatives end up on the virtual scrapheap for a range of reasons. Many companies jumped on the AI wave without a clear plan, copying what others were doing and overlooking the huge upfront investment needed to get projects off the ground.
GPU prices have soared in recent years, and new tariffs introduced by the current US administration have added even more pressure. Running an advanced model requires top-tier chips like NVIDIA’s H100, which costs around 30,000 dollars per unit. Once power consumption, facility costs, and security are added, the total bill becomes daunting for all but the largest players.
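These figures invite a quick back-of-the-envelope check. The Python sketch below totals a rough annual bill for a single eight-GPU server: the roughly $30,000 H100 price comes from the text above, while the power draw, overhead multiplier, electricity rate, and amortisation period are illustrative assumptions, not quoted numbers.

```python
# Back-of-the-envelope cost of one eight-GPU inference node.
# Only the per-GPU price is taken from the article; every other
# figure is an illustrative assumption.

NUM_GPUS = 8                    # one typical server node
GPU_PRICE_USD = 30_000          # per H100, from the article
GPU_POWER_KW = 0.7              # ~700 W per GPU (assumed)
OVERHEAD_FACTOR = 1.5           # cooling, networking, facility overhead (assumed)
ELECTRICITY_USD_PER_KWH = 0.10  # assumed industrial rate
HOURS_PER_YEAR = 24 * 365
AMORTISATION_YEARS = 3          # assumed hardware lifetime

# Hardware cost spread over its useful life
capex_per_year = NUM_GPUS * GPU_PRICE_USD / AMORTISATION_YEARS

# Energy drawn by the GPUs plus facility overhead, priced per kWh
energy_kwh = NUM_GPUS * GPU_POWER_KW * OVERHEAD_FACTOR * HOURS_PER_YEAR
opex_per_year = energy_kwh * ELECTRICITY_USD_PER_KWH

print(f"Amortised hardware: ${capex_per_year:,.0f}/year")
print(f"Power and facility: ${opex_per_year:,.0f}/year")
print(f"Total:              ${capex_per_year + opex_per_year:,.0f}/year")
```

Even under these modest assumptions, a single node runs close to $90,000 a year, and production fleets contain thousands of such nodes, which is why the totals quickly become daunting for smaller players.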
Another common issue is the lack of a scalable business model. Many companies adopt AI simply for the label, without a clear strategy for turning interest into revenue. In some industries, these efforts raise questions with customers and employees, exposing persistent trust gaps between human workers and AI systems.
The talent shortage creates further challenges. A young AI startup needs skilled engineers, data scientists, and operations teams to keep everything running smoothly. Building and managing a capable team requires both money and expertise. Unrealistic goals often add extra strain, causing many projects to falter before reaching the finish line.
Legal and ethical hurdles can also derail projects early on. Privacy laws, intellectual property disputes, and unresolved ethical questions create a difficult environment for companies trying to innovate. Lawsuits and legal fees have become routine, prompting some entrepreneurs to shut down rather than risk deeper financial trouble.
All of these obstacles together have proven too much for many ventures, leaving behind a discouraging trail of disbanded companies and abandoned ambitions. Sailing the AI seas offers a great opportunity, but storms can form quickly and overturn even the most confident voyages.
How AI can become profitable
While the situation may seem challenging now, there is still light at the end of the AI tunnel. The key to building a profitable and sustainable AI venture lies in careful planning and scaling only when the numbers add up. Companies that focus on fundamentals rather than hype stand the best chance of long-term success.
Lowering operational costs is one of the most important steps. Techniques such as model compression, caching, and routing queries to smaller models can dramatically reduce the cost of running AI systems. Improvements in chip efficiency and better infrastructure management can also help stretch every dollar further.
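As a rough illustration of two of those levers, the minimal Python sketch below caches repeated queries and routes short ones to a cheaper model. The model names, per-call prices, and length-based routing heuristic are invented for the example, not any provider’s real API or pricing.

```python
# Sketch of two cost levers: a response cache and routing easy
# queries to a smaller model. All names and prices are assumed.
COST_PER_CALL = {"small-model": 0.001, "large-model": 0.03}  # assumed USD per request

class CostAwareRouter:
    def __init__(self):
        self.cache: dict[str, str] = {}
        self.spend = 0.0  # running inference bill

    def answer(self, prompt: str) -> str:
        if prompt in self.cache:
            return self.cache[prompt]  # cache hit: zero marginal cost
        # Toy heuristic: short prompts go to the cheap model.
        # Real routers use trained difficulty classifiers, not length.
        model = "small-model" if len(prompt) < 200 else "large-model"
        self.spend += COST_PER_CALL[model]
        reply = f"[{model} reply to: {prompt!r}]"  # stand-in for a real inference call
        self.cache[prompt] = reply
        return reply

router = CostAwareRouter()
router.answer("What is the capital of Portugal?")
router.answer("What is the capital of Portugal?")  # repeat served from the cache
print(f"Total spend: ${router.spend:.3f}")         # only one paid call
```

The accounting is the point: cache hits cost nothing, and queries routed down pay the smaller model’s rate, so the same traffic can be served at a fraction of the naive cost.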
Shifting the revenue mix is another crucial factor. Many companies currently rely on cheap consumer products that attract large user bases but offer thin margins. A stronger focus on enterprise clients, who pay for reliability, customisation, and security, can provide a steadier and more profitable income stream.
Building real platforms rather than standalone products can unlock new revenue sources. Offering APIs, marketplaces, and developer tools allows companies to collect a share of the value created by others. The approach mirrors the strategies used by major cloud providers and app ecosystems.
Improving unit economics will determine which companies endure. Serving more users at lower per-request costs, increasing cache hit rates, and maximising infrastructure utilisation are essential to moving from growth at any cost to sustainable profit. Careful optimisation can turn large user bases into reliable sources of income.
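To make the unit-economics point concrete, a toy calculation (all figures assumed, not reported data from any company) shows how a higher cache hit rate changes gross margin per subscriber:

```python
# Illustrative subscription unit economics: only cache misses
# pay the inference cost. Every figure here is an assumption.
def monthly_margin(price: float, requests: int, cost_per_request: float,
                   cache_hit_rate: float) -> float:
    """Gross margin per subscriber per month."""
    effective_cost = (1.0 - cache_hit_rate) * cost_per_request
    return price - requests * effective_cost

# Assumed: $20/month plan, 300 requests/month, $0.02 per uncached request
no_cache = monthly_margin(20.0, 300, 0.02, cache_hit_rate=0.0)
with_cache = monthly_margin(20.0, 300, 0.02, cache_hit_rate=0.6)
print(no_cache, with_cache)  # serving cost falls from $6 to $2.40 per user
```

Under these assumed numbers, lifting the cache hit rate from zero to 60 percent raises per-subscriber margin from $14.00 to $17.60 without touching price or traffic, which is exactly the shift from growth at any cost to sustainable profit.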
Stronger financial discipline and clearer regulation can also play a role. Companies that set realistic growth targets and operate within stable policy frameworks are more likely to survive in the long run. Profitability will depend not only on innovation but also on smart execution and strategic focus.
Charting the future of AI profitability
The AI bubble appears stretched thin, and a constant stream of investments can do little more than artificially extend the lifespan of an AI venture doomed to fail. AI companies must find a way to create viable, realistic roadmaps to justify the sizeable cash injections, or they risk permanently compromising investors’ trust.
That said, the industry is still in its early and formative years, and there is plenty of room to grow and adapt to current and future landscapes. AI has the potential to become a stable economic force, but only if companies can find a compromise between innovation and financial pragmatism. Profitability will not come overnight, but it is within reach for those willing to build patiently and strategically.
Denmark will push for EU-wide age-verification rules to avoid a patchwork of national systems. As Council presidency, Copenhagen prioritises child protection online while keeping flexibility on national age limits. The aim is coordination without a single ‘digital majority’ age.
Ministers plan to give the European Commission a clear mandate for interoperable, privacy-preserving tools. An updated blueprint is being piloted in five states and aligns with the EU Digital Identity Wallet, which is due by the end of 2026. Goal: seamless, cross-border checks with minimal data exposure.
Copenhagen’s domestic agenda moves in parallel with a proposed ban on under-15 social media use. The government will consult national parties and EU partners on the scope and enforcement. Talks in Horsens, Denmark, signalled support for stronger safeguards and EU-level verification.
The emerging compromise separates ‘how to verify’ at the EU level from ‘what age to set’ at the national level. Proponents argue this avoids fragmentation while respecting domestic choices; critics warn implementation must minimise privacy risks and platform dependency.
Next steps include expanding pilots, formalising the Commission’s mandate, and publishing impact assessments. Clear standards on data minimisation, parental consent, and appeals will be vital. Affordable compliance for SMEs and independent oversight can sustain public trust.
Google has announced a $15 billion commitment for 2026–2030 to build its first Indian AI hub in Visakhapatnam, positioning itself as a foundational partner in India’s AI-first push and strengthening US–India tech ties.
The hub will centre on a purpose-built, gigawatt-scale data-centre campus engineered to Google’s global standards for performance, reliability, and low latency. Partners AdaniConnex and Airtel will help deliver enterprise-grade capacity, enabling large companies and startups to build and scale AI-powered services.
Beyond compute, Google will anchor an international subsea gateway in Visakhapatnam, landing multiple cables to complement those in Mumbai and Chennai, adding route diversity, lowering latency across India’s east coast, and strengthening national connectivity for users, developers, and enterprises.
Clean growth is a core pillar of the plan, with work on transmission lines, new clean-energy generation, and storage in Andhra Pradesh. Google will apply its energy-efficient data centre design to expand India’s diverse clean power portfolio while supporting grid reliability and long-term sustainability goals.
The initiative aligns with the Viksit Bharat 2047 vision, targeting high-value jobs in India and spillover benefits to US research and development. By combining compute, connectivity, and clean energy at scale, Google aims to accelerate AI adoption across sectors and broaden digital inclusion nationwide.
Deloitte unveils Zora AI, powered by Oracle Fusion and OCI, to automate complex work and cut costs. Built on NVIDIA’s stack with Oracle AI Agent Studio, it delivers sharper, more contextual insights. The pitch: faster execution and fewer handoffs.
Deep-reasoning agents in Zora AI team up with embedded Oracle agents in coordinated multi-agent workflows. Finance, HR, customer experience, and supply chains gain real-time recommendations and error detection at scale. Data siloes give way to decisions on a unified Fusion platform.
Security and scale rely on OCI Generative AI and Oracle’s ‘on by default’ protections with Autonomous Database. NVIDIA NIM and NeMo support building, deploying, and optimising agents in regulated settings. Trustworthy AI principles cover governance, risk, and compliance from day one.
‘By running Zora AI on Oracle’s cloud, we’re unlocking end-to-end efficiencies,’ said Deloitte’s Mauro Schiavon. Oracle’s Roger Barga said Fusion integration will ‘accelerate innovation and future-proof’ investments. NVIDIA’s John Fanelli cited ‘digital workers at scale’ boosting productivity and autonomy.
Early deployments span finance, sourcing, procurement, sales, and marketing. Finance agents collaborate with Fusion SCM to predict disruptions and optimise operations. An enhanced partner programme adds enablement, accelerators, and go-to-market support.
Jim Lee rejects generative AI for DC storytelling, pledging no AI writing, art, or audio under his leadership. He framed AI alongside other overhyped threats, arguing that predictions falter while human craft endures. DC, he said, will keep its focus on creator-led work.
Lee rooted the stance in the value of imperfection and intent. Smudges, rough lines, and hesitation signal authorship, not flaws. Fans, he argued, sense authenticity and recoil from outputs that feel synthetic or aggregated.
Concerns ranged from shrinking attention spans to characters nearing the public domain. The response, Lee said, is better storytelling and world-building. Owning a character differs from understanding one, and DC’s universe supplies the meaning that endures.
Policy meets practice in DC’s recent moves against suspected AI art. In 2024, variant covers were pulled after high-profile allegations of AI-generated content. The episode illustrated a willingness to enforce standards rather than just announce them.
Lee positioned 2035 and DC’s centenary as a waypoint, not a finish line. Creative evolution remains essential, but without yielding authorship to algorithms. The pledge: human-made stories, guided by editors and artists, for the next century of DC.
Asia’s creative future takes centre stage at Singapore’s All That Matters, a September forum for sports, tech, marketing, gaming, and music. AI dominated the music track, spanning creation, distribution, and copyright. Session notes signal rapid structural change across the industry.
The web is shifting again as AI reshapes search and discovery. AI-first browsers and assistants challenge incumbents, while Google’s Gemini and Microsoft’s Copilot race on integration. Early builds feel rough, yet momentum points to a new media discovery order.
Consumption defined the last 25 years, moving from CDs to MP3s, piracy, streaming, and even vinyl’s comeback. Creation looks set to define the next decade as generative tools become ubiquitous. Betting against that shift may be comfortable, yet market forces indicate it is inevitable.
Music generators like Suno are advancing fast amid lawsuits and talks with rights holders. Expected label licensing will widen training data and scale models. Outputs should grow more realistic and, crucially, more emotionally engaging.
Simpler interfaces will accelerate adoption. The prevailing design thesis is ‘less UI’: creators state intent and the system orchestrates cloud tools. Some services already turn a hummed idea into an arranged track, foreshadowing release-ready music from plain descriptions.