xAI could reach AGI by 2026 as the AI race intensifies

Elon Musk has told xAI employees that the next two to three years will determine whether the company survives and emerges as a leading force in artificial general intelligence.

Speaking during a company-wide meeting, Musk argued that enduring this period could position xAI at the forefront of the AGI race.

Musk suggested that AGI could be achieved by xAI as early as 2026, pointing to rapid advances in the Grok model family. He has previously offered shifting timelines for AGI development, underscoring both technological momentum and persistent uncertainty surrounding the field.

The remarks come as competition across the AI sector intensifies, with OpenAI accelerating model releases and Google unveiling new iterations of its Gemini system. Against larger incumbents, xAI is positioning itself as a challenger focused on speed, scale and aggressive execution.

Central to that strategy is the Colossus project, which has already deployed around 200,000 GPUs and plans to expand to one million.

Musk also highlighted operational synergies with Tesla and SpaceX, while floating longer-term concepts such as space-based data centres, reinforcing xAI’s ambition to differentiate through scale and unconventional infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bank of England governor warns AI could displace jobs at Industrial Revolution scale

Bank of England Governor Andrew Bailey said the widespread adoption of AI is likely to displace workers from existing roles, drawing parallels with the labour disruption caused by the Industrial Revolution.

He emphasised that while AI can boost productivity and economic growth, the UK must invest in training and education to help workers transition into AI-enabled roles.

Bailey expressed particular concern about the impact on younger and inexperienced workers, warning that AI may reduce entry-level opportunities in sectors such as law, accountancy and administration. He noted that firms may hire fewer junior staff as AI systems replace routine data and document analysis.

Despite these risks, Bailey described AI as a potential driver of future UK growth, although he cautioned that productivity gains may take time to materialise.

He also stated that the Bank of England is experimenting with AI internally while monitoring concerns about a potential AI market bubble and the risks of a sharp valuation correction.

Google study shows teens embrace AI

Google’s new study, The Future Report, surveyed over 7,000 teenagers across Europe about their use of digital technologies. Most respondents describe themselves as curious, critical, and optimistic about AI in their daily lives.

Many teens use AI daily or several times a week for learning, creativity, and exploring new topics. They report benefits such as instant feedback and more engaging learning while remaining cautious about over-reliance.

Young people value personalised content recommendations and algorithmic suggestions, but emphasise verifying information and avoiding bias. They adopt strategies to verify sources and ensure the trustworthiness of online content.

The report emphasises the importance of digital literacy, safety, balanced technology use, and youth engagement in shaping the digital future. Participants request guidance from educators and transparent AI design to promote the responsible and ethical use of AI.

UK plans ban on deepfake AI nudification apps

Britain plans to ban AI-nudification apps that digitally remove clothing from images. Creating or supplying these tools would become illegal under new proposals.

The offence would build on existing UK laws covering non-consensual sexual deepfakes and intimate image abuse. Technology Secretary Liz Kendall said developers and distributors would face harsh penalties.

Experts warn that nudification apps cause serious harm, particularly when used to create child sexual abuse material. Children’s Commissioner Dame Rachel de Souza has called for a total ban on the technology.

Child protection charities welcomed the move but want more decisive action from tech firms. The government said it would work with companies to stop children from creating or sharing nude images.

Major IBM training programme to boost India’s AI, cybersecurity and quantum skills

Technology giant IBM has announced a major education initiative to skill 5 million people in India by 2030 in frontier areas such as AI, cybersecurity and quantum computing.

The programme will be delivered via IBM’s SkillsBuild ecosystem, which offers over 1,000 courses and has already reached more than 16 million learners globally.

The initiative will span students and adult learners across schools, universities and vocational training ecosystems, with partnerships planned with bodies such as the All India Council for Technical Education (AICTE) to integrate hands-on learning, curriculum modules, faculty training, hackathons and internships.

IBM also plans to strengthen foundational AI skills at the school level by co-developing curricula, teaching resources and explainers to embed computational thinking and responsible AI concepts early in education.

The CEO of IBM has described India as having the talent and ambition to be a global leader in AI and quantum technologies, with broader access to these skills seen as vital for future economic competitiveness and innovation.

AI-generated video falsely claims US military to ‘take over’ Nigerian army

A video circulating online, purporting to show a US military officer announcing that the United States would take control of the Nigerian Army, is false.

Independent analysis has revealed that the clip was likely generated or heavily manipulated using AI, and no official announcement or credible source supports this claim.

Fact-checkers used AI-detection tools and found high levels of manipulation, and investigations uncovered inconsistencies in uniform insignia and microphones linked to non-existent media outlets. No verified reports indicate that US military forces are intervening in Nigerian defence operations.

The false claim has spread on platforms including X (formerly Twitter), generating alarm and misinterpretation about foreign military involvement in Nigeria.

Experts warn that deepfakes and AI-generated misinformation are becoming harder to spot without specialised tools and verification.

PwC automates AI governance with Agent Mode

The global professional services network, PwC, has expanded its Model Edge platform with the launch of Agent Mode, an AI assistant designed to automate governance, compliance and documentation across enterprise AI model lifecycles.

The capability targets the growing administrative burden faced by organisations as AI model portfolios scale and regulatory expectations intensify.

Agent Mode allows users to describe governance tasks in natural language, instead of manually navigating workflows.

The system executes actions directly within Model Edge, generates leadership-ready documentation and supports common document and reporting formats, significantly reducing routine compliance effort.

PwC estimates weekly time savings of between 20 and 50 percent for governance and model risk teams.

Behind the interface, a secure orchestration engine interprets user intent, verifies role-based permissions and selects appropriate large language models based on task complexity. The design ensures governance guardrails remain intact while enabling faster and more consistent oversight.

PwC positions Agent Mode as a step towards fully automated, agent-driven AI governance, enabling organisations to focus expert attention on risk assessment and regulatory judgement instead of process management as enterprise AI adoption accelerates.
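The orchestration flow described above, where a request is interpreted, checked against role-based permissions and routed to a model sized for the task, can be sketched in outline. The following Python snippet is a minimal illustration only: every name, role, rule and model label here is an invented assumption, not PwC's actual Model Edge API.

```python
# Illustrative sketch of an intent-routing orchestration layer:
# interpret intent, check role-based permissions, pick a model by complexity.
# All identifiers are hypothetical assumptions, not PwC's real interface.

from dataclasses import dataclass

# Hypothetical role-to-action permission table (role-based access control).
ROLE_PERMISSIONS = {
    "model_risk_analyst": {"generate_report", "review_model"},
    "viewer": {"review_model"},
}

@dataclass
class Request:
    user_role: str
    text: str

def interpret_intent(text: str) -> str:
    """Naive keyword matching standing in for LLM-based intent detection."""
    if "report" in text.lower():
        return "generate_report"
    return "review_model"

def select_model(intent: str) -> str:
    """Route heavier document-generation tasks to a larger model."""
    return "large-llm" if intent == "generate_report" else "small-llm"

def dispatch(req: Request) -> str:
    """Interpret the request, enforce permissions, then choose a model."""
    intent = interpret_intent(req.text)
    if intent not in ROLE_PERMISSIONS.get(req.user_role, set()):
        raise PermissionError(f"{req.user_role} may not perform {intent}")
    return f"{intent} via {select_model(intent)}"

print(dispatch(Request("model_risk_analyst", "Draft the quarterly validation report")))
# → generate_report via large-llm
```

The key design point mirrored here is that the permission check happens inside the orchestration layer, after intent interpretation but before any model is invoked, so governance guardrails cannot be bypassed by phrasing a request differently.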

The limits of raw computing power in AI

As the global race for AI accelerates, a growing number of experts are questioning whether simply adding more computing power still delivers meaningful results. In a recent blog post, digital policy expert Jovan Kurbalija argues that AI development is approaching a critical plateau, where massive investments in hardware produce only marginal gains in performance.

Despite the dominance of advanced GPUs and ever-larger data centres, improvements in accuracy and reasoning among leading models are slowing, exposing what he describes as an emerging ‘AI Pareto paradox’.

According to Kurbalija, the imbalance is striking: around 80% of AI investment is currently spent on computing infrastructure, yet it accounts for only a fraction of real-world impact. As hardware becomes cheaper and more widely available, he suggests it is no longer the decisive factor.

Instead, the next phase of AI progress will depend on how effectively organisations integrate human knowledge, skills, and processes into AI systems.

That shift places people, not machines, at the centre of AI transformation. Kurbalija highlights the limits of traditional training approaches and points to new models of learning that focus on hands-on development and deep understanding of data.

Building a simple AI tool may now take minutes, but turning it into a reliable, high-precision system requires sustained human effort, from refining data to rethinking internal workflows.

Looking ahead to 2026, the message is clear. Success in AI will not be defined by who owns the most powerful chips, but by who invests most wisely in people.

As Kurbalija concludes, organisations that treat AI as a skill to be cultivated, rather than a product to be purchased, are far more likely to see lasting benefits from the technology.

AI and security trends shape the internet in 2025

Cloudflare released its sixth annual Year in Review, providing a comprehensive snapshot of global Internet trends in 2025. The report highlights rising digital reliance, AI progress, and evolving security threats across Cloudflare’s network and Radar data.

Global Internet traffic rose 19 percent year-on-year, reflecting increased use for personal and professional activities. A key trend was the move from large-scale AI training to continuous AI inference, alongside rapid growth in generative AI platforms.

Google and Meta remained the most popular services, while ChatGPT led in generative AI usage.

Cybersecurity remained a critical concern. Post-quantum encryption now protects 52 percent of Internet traffic, yet record-breaking DDoS attacks underscored rising cyber risks.

Civil society and non-profit organisations were the most targeted sectors for the first time, while government actions caused nearly half of the major Internet outages.

Connectivity varied by region, with Europe leading in speed and quality and Spain ranking highest globally. The report outlines 2025’s Internet challenges and progress, providing insights for governments, businesses, and users aiming for greater resilience and security.

Healthcare faces growing compliance pressure from AI adoption

AI is becoming a practical tool across healthcare as providers face rising patient demand, chronic disease and limited resources.

These AI systems increasingly support tasks such as clinical documentation, billing, diagnostics and personalised treatment, replacing purely manual processes and allowing clinicians to focus more directly on patient care.

At the same time, AI introduces significant compliance and safety risks. Algorithmic bias, opaque decision-making, and outdated training data can affect clinical outcomes, raising questions about accountability when errors occur.

Regulators are signalling that healthcare organisations cannot delegate responsibility to automated systems and must retain meaningful human oversight over AI-assisted decisions.

Regulatory exposure spans federal and state frameworks, including HIPAA privacy rules, FDA oversight of AI-enabled medical devices and enforcement under the False Claims Act.

Healthcare providers are expected to implement robust procurement checks, continuous monitoring, governance structures and patient consent practices as AI regulation evolves towards a more coordinated national approach.
