Teenagers turn to AI for learning but struggle to spot false information

A new Oxford University Press (OUP) report has found that most teenagers use AI for schoolwork but that many cannot tell when the information it produces is false. Over 2,000 students aged 13 to 18 took part in the survey, and many reported struggling to verify AI-generated content.

Around eight in ten pupils admitted using AI for homework or revision, often treating it as a digital tutor. However, many are simply copying material without being able to check its accuracy.

Assistant headteacher Dan Williams noted that even teachers sometimes struggle to identify AI-generated content, particularly in videos.

Despite concerns about misinformation, most pupils view AI positively. Nine in ten said they had benefited from using it, particularly in improving creative writing, problem-solving and critical thinking.

To support schools, OUP has launched an AI and Education Hub to help teachers develop confidence with the technology, while the Department for Education has released guidance on using AI safely in classrooms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft finds 71% of UK workers use unapproved AI tools on the job

A new Microsoft survey has revealed that nearly three in four employees in the UK use AI tools at work without company approval.

The practice, referred to as ‘shadow AI’, involves workers relying on unapproved systems such as ChatGPT to complete routine tasks. Microsoft warned that unauthorised AI use could expose businesses to data leaks, non-compliance risks, and cyber attacks.

The survey, carried out by Censuswide, questioned over 2,000 employees across different sectors. Seventy-one per cent admitted to using AI tools outside official policies, often because they were already familiar with them in their personal lives.

Many reported using such tools to respond to emails, prepare presentations, and perform financial or administrative tasks, saving almost eight hours of work each week.

Microsoft said only enterprise-grade AI systems can provide the privacy and security organisations require. Darren Hardman, Microsoft’s UK and Ireland chief executive, urged companies to ensure workplace AI tools are designed for professional use rather than consumer convenience.

He emphasised that secure integration can allow firms to benefit from AI’s productivity gains while protecting sensitive data.

The study estimated that AI technology saves 12.1 billion working hours annually across the UK, equivalent to about £208 billion in employee time. Workers reported using the time gained through AI to improve work-life balance, learn new skills, and focus on higher-value projects.
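
As a rough check of what those figures imply, dividing the estimated value by the hours saved gives an average of about £17 per hour, as in the sketch below. Both inputs come from the survey as quoted above; the hourly rate is a derived average, not a figure the report itself states.

```python
# Rough sanity check of the Microsoft survey figures quoted above.
# Both inputs come from the article; the derived hourly rate is an
# implied average, not a number reported by the survey itself.

hours_saved_per_year = 12.1e9   # working hours saved annually across the UK
value_of_time_gbp = 208e9       # estimated value of that time in pounds

implied_hourly_rate = value_of_time_gbp / hours_saved_per_year
print(f"Implied value per hour saved: £{implied_hourly_rate:.2f}")
# Roughly £17 per hour, i.e. close to an average UK hourly wage.
```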

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teen content on Instagram now guided by PG-13 standards

Instagram is aligning its Teen Accounts with PG-13 movie standards, aiming to ensure that users under 18 only see age-appropriate material. Teens will automatically be placed in a 13+ setting and will need parental permission to change it.

Parents who want tighter supervision can activate a new ‘Limited Content’ mode that filters out even more material and restricts comments and AI interactions.

The company reviewed its policies to align them with rating guidelines parents already know, further limiting exposure to content featuring strong language, risky stunts, or references to substances. Teens will also be blocked from following accounts that share inappropriate content or carry suggestive names and bios.

Searches for sensitive terms such as ‘gore’ or ‘alcohol’ will no longer return results, and the same restrictions will extend to Explore, Reels, and AI chat experiences.

Instagram worked with thousands of parents worldwide to shape these policies, collecting more than three million content ratings to refine its protections. Surveys show strong parental support, with most saying the PG-13 system makes it easier to understand what their teens are likely to see online.

The updates begin rolling out in the US, UK, Australia, and Canada and will expand globally by the end of the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Argentina poised to host Latin America’s first Stargate AI project

Argentina is set to become the host of Latin America’s first Stargate project, a major AI infrastructure initiative powered by clean energy. Led by Sur Energy with OpenAI, the plan aims to make Argentina a regional and global AI leader while boosting economic growth.

OpenAI and Sur Energy have signed a Letter of Intent to explore building a large-scale data centre in Argentina. Sur Energy will lead the consortium responsible for developing the project, ensuring that the ecosystem is powered by secure, efficient, and sustainable energy sources.

OpenAI is expected to be a key offtaker for the facility.

The project follows high-level talks in Buenos Aires between President Javier Milei, government ministers, and an OpenAI delegation led by Chris Lehane. With AI use tripling and millions using ChatGPT, Argentina ranks among Latin America’s top AI developers, making it an ideal choice for the project.

As part of the OpenAI for Countries initiative, discussions are underway to integrate AI tools into government operations. CEO Sam Altman said the project represents ‘more than just infrastructure’ and will help make Argentina an AI hub for Latin America.

Sur Energy’s Emiliano Kargieman called it a historic opportunity that combines renewable energy with digital innovation to create jobs and attract global investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tokens-at-scale with Intel’s Crescent Island and Xe architecture

Intel has unveiled its ‘Crescent Island’ data-centre GPU at the OCP Global Summit, targeting real-time inference everywhere with high memory capacity and energy-efficient performance for agentic AI.

Sachin Katti said that scaling complex inference requires heterogeneous systems and an open, developer-first software stack, and that Intel is positioning its Xe architecture GPUs to deliver efficient headroom as token volumes surge.

Intel’s approach spans the AI PC, the data centre, and the edge, pairing Xeon 6 processors and GPUs with workload-centric orchestration to simplify deployment and scaling while preserving developer continuity.

Crescent Island is designed for air-cooled enterprise servers, optimised for power and cost, and tuned for inference with large memory capacity and bandwidth.

Key features include the Xe3P microarchitecture for performance-per-watt gains, 160GB LPDDR5X, broad data-type support for ‘tokens-as-a-service’, and a unified software stack proven on Arc Pro B-Series; customer sampling is slated for H2 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI predicts future knee X-rays for osteoarthritis patients

In the UK, an AI system developed at the University of Surrey can predict what a patient’s knee X-ray will look like a year in the future, offering a visual forecast alongside a risk score for osteoarthritis progression.

The technology is designed to help both patients and doctors better understand how the condition may develop, allowing earlier and more informed treatment decisions.

Trained on nearly 50,000 knee X-rays from almost 5,000 patients, the system delivers faster and more accurate predictions than existing AI tools.

It uses a generative diffusion model to produce a future X-ray and highlights 16 key points in the joint, giving clinicians transparency and confidence in the areas monitored. Patients can compare their current and predicted X-rays, which can encourage adherence to treatment plans and lifestyle changes.

Researchers hope the technology could be adapted for other chronic conditions, including lung disease in smokers or heart disease progression, providing similar visual insights.

The team is seeking partnerships to integrate the system into real-world clinical settings, potentially transforming how millions of people manage long-term health conditions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Abu Dhabi deploys AI-first systems with NVIDIA and Oracle

Oracle and NVIDIA have joined forces to advance sovereign AI, supporting Abu Dhabi’s vision of becoming an AI-native government by 2027.

The partnership combines NVIDIA’s computing platforms with Oracle Cloud Infrastructure to create secure, high-performance systems that deliver next-generation citizen services, including multilingual AI assistants, automatic notifications, and intelligent compliance solutions.

Abu Dhabi’s Government Digital Strategy 2025-2027, backed by an AED 13 billion investment, follows a phased ‘crawl, walk, run’ approach. The initiative has already gone live across 25 government entities, enabling over 15,000 daily users to access AI-accelerated services.

Generative AI applications are now integrated into human resources, procurement, and financial reporting, while advanced agentic AI and autonomous workflows will further enhance government-wide operations.

The strategy ensures full data sovereignty while driving innovation and efficiency across the public sector.

Partnerships with Deloitte and Core42 provide infrastructure and compliance support, while over 200 AI-powered capabilities are deployed to boost digital skills, economic growth, and employment opportunities.

By 2027, the initiative is expected to contribute more than 24 billion AED to Abu Dhabi’s GDP and create over 5,000 jobs, demonstrating a global blueprint for AI-native government transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dell joins Microsoft and Nscale on hyperscale AI capacity

Nscale has signed an expanded deal with Microsoft to deliver about 200,000 NVIDIA GB300 GPUs across Europe and the US, in collaboration with Dell. The company calls it one of the largest AI infrastructure contracts to date, and the build-out targets surging enterprise demand for GPU capacity.

A ~240MW hyperscale AI campus in Texas, leased from Ionic Digital, will host roughly 104,000 GB300s from Q3 2026. Nscale plans to scale the site to 1.2GW, with Microsoft holding an option on a second 700MW phase from late 2027. The campus is optimised for air-cooled, power-efficient deployments.

In Europe, Nscale will deploy about 12,600 GB300s from Q1 2026 at Start Campus in Sines, Portugal, supporting sovereign AI needs within the EU. A separate UK facility at Loughton will house around 23,000 GB300s from Q1 2027. The 50MW site is scalable to 90MW to support Azure services.

A Norway programme also advances the Aker-Nscale joint venture’s plans for about 52,000 GB300s at Narvik, alongside Nscale’s GW+ greenfield sites and orchestration targeting training, fine-tuning, and inference at scale. Microsoft emphasises sustainability and global availability.

Both firms cast the pact as deepening transatlantic tech ties and accelerating the rollout of next-gen AI services. Nscale says few providers can deploy GPU fleets at this pace. The roadmap points to sovereign-grade, multi-region capacity with lower-latency platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI gold rush where the miners are broke

The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economic engine. Capital has flowed into AI companies at an unprecedented pace, fuelled by expectations of substantial future returns.

Yet despite these bloated investments, none of the leading players have managed to break even, let alone deliver a net-positive financial year. Even so, funding shows no signs of slowing, driven by the belief that profitability is only a matter of time. Is this optimism justified, or is the AI boom, for now, little more than smoke and mirrors?

Where the AI money flows

Understanding the question of AI profitability starts with following the money. Capital flows through the ecosystem from top to bottom, beginning with investors and culminating in massive infrastructure spending. Tracing this flow makes it easier to see where profits might eventually emerge.

The United States is the clearest focal point. The country has become the main hub for AI investment, where the technology is presented as the next major economic catalyst and treated by many investors as a potential cash cow.

The US market fuels AI through a mix of venture capital, strategic funding from Big Tech, and public investment. By late August 2025, at least 33 US AI startups had each raised 100 million dollars or more, showing the depth of available capital and investor appetite.

OpenAI stands apart from the rest of the field. Multiple reports point to a primary round of roughly USD 40 billion at a USD 300 billion post-money valuation, followed by secondary transactions that pushed the implied valuation even higher. No other AI company has matched this scale.

Much of the capital is not aimed at quick profits. Large sums support research, model development, and heavy infrastructure spending on chips, data centres, and power. Plans to deploy up to 6 gigawatts of AMD accelerators, starting in 2026, show how funding moves into capacity rather than near-term earnings.

Strategic partners and financiers supply some of the largest investments. Microsoft has a multiyear, multibillion-dollar deal with OpenAI. Amazon has invested USD 4 billion in Anthropic, Google has pledged up to USD 2 billion, and infrastructure players like Oracle and CoreWeave are backed by major Wall Street banks.

AI makes money – it’s just not enough (yet)

Winning over deep-pocketed investors has become essential for both scrappy startups and established AI giants. Tech leaders have poured money into ambitious AI ventures for many reasons, from strategic bets to genuine belief in the technology’s potential to reshape industries.

No matter their motives, investors eventually expect a return. Few are counting on quick profits, but sooner or later, they want to see results, and the pressure to deliver is mounting. Hype alone cannot sustain a company forever.

To survive, AI companies need more than large fundraising rounds. Real users and reliable revenue streams are what keep a business afloat once investor patience runs thin. Building a loyal customer base separates long-term players from temporary hype machines.

OpenAI provides the clearest example of a company that has scaled. In the first half of 2025, it generated around 4.3 billion dollars in revenue, and by October, its CEO reported that roughly 800 million people were using ChatGPT weekly. The scale of its user base sets it apart from most other AI firms, but the company’s massive infrastructure and development costs keep it far from breaking even.

Microsoft has also benefited from the surge in AI adoption. Its Intelligent Cloud segment reached 29.9 billion dollars in Q4 FY2025, with Azure and other cloud services growing 39 percent year-over-year. AI services drive a significant share of this growth, but data-centre expansion and heavy infrastructure costs continue to weigh on margins.

NVIDIA remains the biggest financial winner. Its chips power much of today’s AI infrastructure, and demand has pushed data-centre revenue to record highs. In Q2 FY2026, the company reported total revenue of 46.7 billion dollars, yet overall industry profits still lag behind massive investment levels due to maintenance costs and a mismatch between investment and earnings.

Why AI projects crash and burn

While the major AI players earn enough to offset at least some of their costs, more than two-fifths of AI initiatives end up on the virtual scrapheap for a range of reasons. Many companies jumped on the AI wave without a clear plan, copying what others were doing and overlooking the huge upfront investments needed to get projects off the ground.

GPU prices have soared in recent years, and new tariffs introduced by the current US administration have added even more pressure. Running an advanced model requires top-tier chips like NVIDIA’s H100, which costs around 30,000 dollars per unit. Once power consumption, facility costs, and security are added, the total bill becomes daunting for all but the largest players.
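
A back-of-envelope sketch makes the scale of that bill concrete. Only the roughly 30,000-dollar H100 price comes from the paragraph above; the cluster size, power draw, and electricity tariff are hypothetical assumptions chosen purely to show the order of magnitude.

```python
# Illustrative cost sketch for a hypothetical inference cluster.
# Only the ~$30,000 H100 price is taken from the article; every other
# figure (cluster size, power draw, electricity price) is an assumption.

num_gpus = 1_000                  # hypothetical cluster size
gpu_unit_price_usd = 30_000       # quoted H100 price per unit
gpu_power_kw = 0.7                # ~700 W per H100 (typical TDP, assumed)
electricity_usd_per_kwh = 0.10    # assumed industrial tariff
hours_per_year = 24 * 365

hardware_cost = num_gpus * gpu_unit_price_usd
annual_energy_kwh = num_gpus * gpu_power_kw * hours_per_year
annual_power_cost = annual_energy_kwh * electricity_usd_per_kwh

print(f"Hardware outlay:   ${hardware_cost:,.0f}")      # $30,000,000
print(f"Annual power bill: ${annual_power_cost:,.0f}")  # ~$613,000
# Facility build-out, cooling, networking, staff and security come on top.
```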

Another common issue is the lack of a scalable business model. Many companies adopt AI simply for the label, without a clear strategy for turning interest into revenue. In some industries, these efforts raise questions with customers and employees, exposing persistent trust gaps between human workers and AI systems.

The talent shortage creates further challenges. A young AI startup needs skilled engineers, data scientists, and operations teams to keep everything running smoothly. Building and managing a capable team requires both money and expertise. Unrealistic goals often add extra strain, causing many projects to falter before reaching the finish line.

Legal and ethical hurdles can also derail projects early on. Privacy laws, intellectual property disputes, and unresolved ethical questions create a difficult environment for companies trying to innovate. Lawsuits and legal fees have become routine, prompting some entrepreneurs to shut down rather than risk deeper financial trouble.

All of these obstacles together have proven too much for many ventures, leaving behind a discouraging trail of disbanded companies and abandoned ambitions. Sailing the AI seas offers a great opportunity, but storms can form quickly and overturn even the most confident voyages.

How AI can become profitable

While the situation may seem challenging now, there is still light at the end of the AI tunnel. The key to building a profitable and sustainable AI venture lies in careful planning and scaling only when the numbers add up. Companies that focus on fundamentals rather than hype stand the best chance of long-term success.

Lowering operational costs is one of the most important steps. Techniques such as model compression, caching, and routing queries to smaller models can dramatically reduce the cost of running AI systems. Improvements in chip efficiency and better infrastructure management can also help stretch every dollar further.
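
As a minimal illustration of the caching and routing idea, the sketch below sends short queries to a hypothetical smaller model and reuses cached answers for repeated questions. The model names and the routing heuristic are placeholders, not a description of any real system.

```python
# Minimal sketch of response caching plus routing easy queries to a
# smaller, cheaper model. Model names and the routing heuristic are
# illustrative placeholders, not a real deployment.

from functools import lru_cache

def is_simple(query: str) -> bool:
    """Toy routing heuristic: short queries go to the smaller, cheaper model."""
    return len(query.split()) < 12

def call_model(model: str, query: str) -> str:
    """Stand-in for a real inference call."""
    return f"[{model}] answer to: {query}"

@lru_cache(maxsize=10_000)  # repeated queries are answered from the cache for free
def answer(query: str) -> str:
    model = "small-model" if is_simple(query) else "large-model"
    return call_model(model, query)

if __name__ == "__main__":
    for q in ["What is the capital of France?", "What is the capital of France?"]:
        print(answer(q))
    print(answer.cache_info())  # one miss (model call), one hit (served from cache)
```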

Shifting the revenue mix is another crucial factor. Many companies currently rely on cheap consumer products that attract large user bases but offer thin margins. A stronger focus on enterprise clients, who pay for reliability, customisation, and security, can provide a steadier and more profitable income stream.

Building real platforms rather than standalone products can unlock new revenue sources. Offering APIs, marketplaces, and developer tools allows companies to collect a share of the value created by others. The approach mirrors the strategies used by major cloud providers and app ecosystems.

Improving unit economics will determine which companies endure. Serving more users at lower per-request costs, increasing cache hit rates, and maximising infrastructure utilisation are essential to moving from growth at any cost to sustainable profit. Careful optimisation can turn large user bases into reliable sources of income.
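
The arithmetic behind that shift is simple: the effective cost per request is a blend of the cheap cached path and the expensive model path, weighted by the cache hit rate. The sketch below uses assumed per-request costs to show how quickly the blend improves.

```python
# Blended cost per request as the cache hit rate improves.
# All per-request costs and hit rates below are assumptions for illustration.

def blended_cost(hit_rate: float, cache_cost: float = 0.0001,
                 model_cost: float = 0.01) -> float:
    """Expected cost per request for a given cache hit rate."""
    return hit_rate * cache_cost + (1 - hit_rate) * model_cost

for hit_rate in (0.0, 0.5, 0.9):
    print(f"hit rate {hit_rate:.0%}: ${blended_cost(hit_rate):.4f} per request")
# Moving from 0% to 90% cache hits cuts the assumed per-request cost roughly 9x.
```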

Stronger financial discipline and clearer regulation can also play a role. Companies that set realistic growth targets and operate within stable policy frameworks are more likely to survive in the long run. Profitability will depend not only on innovation but also on smart execution and strategic focus.

Charting the future of AI profitability

The AI bubble appears stretched thin, and a constant stream of investments can do little more than artificially extend the lifespan of an AI venture doomed to fail. AI companies must find a way to create viable, realistic roadmaps to justify the sizeable cash injections, or they risk permanently compromising investors’ trust.

That said, the industry is still in its early and formative years, and there is plenty of room to grow and adapt to current and future landscapes. AI has the potential to become a stable economic force, but only if companies can find a compromise between innovation and financial pragmatism. Profitability will not come overnight, but it is within reach for those willing to build patiently and strategically.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Virtual hosts and mass output shake up fragile podcast industry

AI is rapidly changing the podcast scene. Virtual hosts, which need no microphones or studios, are now producing content at a scale and cost that traditional podcasters find hard to match.

One of the pioneers in this trend is Inception Point AI, founded in 2023. With just eight people, the company produces around 3,000 podcast episodes per week, each costing about one dollar to make. With as few as twenty listens, an episode can be profitable.
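
Taken at face value, those figures imply a very low break-even point per episode, roughly five cents of revenue per listen, as the sketch below shows. The cost and listen counts come from the company's claims; the per-listen figure is derived, not reported.

```python
# Break-even sketch for the Inception Point AI figures quoted above.
# Cost per episode and listens-to-profit come from the article; the
# implied revenue per listen is a derived estimate, not a reported number.

cost_per_episode_usd = 1.00
listens_to_break_even = 20

implied_revenue_per_listen = cost_per_episode_usd / listens_to_break_even
print(f"Implied revenue per listen: ${implied_revenue_per_listen:.2f}")
# About $0.05 per listen, roughly a $50 CPM equivalent in advertising terms.
```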

Startups like ElevenLabs and Wondercraft have also entered the field, alongside companies leveraging Google’s Audio Overviews. Many episodes are generated from documents, lectures, local data, or anything else that can be turned into a voice-narrated script, and the tools are getting increasingly good at sounding natural.

Yet there is concern among indie podcasters and audio creators. The flood of inexpensive AI podcasts could saturate platforms, making it harder for smaller creators to attract listeners without big marketing budgets.

Another issue is disclosure: many AI-podcast platforms do note that content is AI-generated, but there is no universal requirement for clear labelling. Some believe listener expectations and trust may erode if the distinction between human and synthetic voices becomes blurred.

As the output volume rises, so do questions about content quality, artistic originality, and how advertising revenues will be shared. The shift is real, but whether it will stifle creative diversity is still up for debate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!