OpenAI plans AI superapp to unify ChatGPT and Codex

A shift toward consolidation is underway, with OpenAI planning to merge its ChatGPT app, Codex platform and browser into a single desktop ‘superapp’ designed to simplify the user experience.

OpenAI said the move aims to streamline its product ecosystem after a period of rapid expansion that resulted in multiple standalone tools. The company is now prioritising a more unified approach, particularly as it intensifies competition with rivals such as Anthropic in enterprise and developer markets.

The planned superapp will focus heavily on ‘agentic’ AI capabilities, enabling systems to operate autonomously across tasks such as writing software, analysing data and managing workflows. The goal is to create a central platform where AI can act as a collaborative assistant across the full productivity stack.

Internal leadership changes are also supporting the transition. Chief of Applications Fidji Simo will oversee the initiative, working alongside President Greg Brockman, as the company restructures teams to align around a single core product. Executives have emphasised the need to reduce fragmentation and improve product quality.

The shift comes as OpenAI faces increasing pressure from competitors that have gained traction with enterprise customers. Anthropic, in particular, has seen success with its developer-focused offerings, prompting OpenAI to refocus on business users and revenue growth.

Over the coming months, the company plans to expand Codex with broader productivity features before integrating ChatGPT and its browser into the unified platform. While the mobile ChatGPT app will remain separate, the broader strategy signals a move toward a more cohesive and scalable AI ecosystem.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba AI strategy targets $100 billion cloud and AI revenue

An ambitious target to generate $100 billion in annual cloud and AI revenue within five years has been set, as Alibaba seeks to counter slowing growth in its once-dominant e-commerce business.

The push follows a sharp deterioration in financial performance, with quarterly earnings plunging and revenue growth missing expectations. The results underscore growing urgency within the company to extract meaningful returns from its AI investments, which have so far required heavy capital outlays.

Central to the strategy is a shift toward monetisation, with the rollout of agentic AI services such as Wukong and price increases of up to 34% across cloud and storage products. Alibaba is positioning its AI and cloud division as its primary growth engine, aiming to replicate the momentum seen in recent quarters, when AI-related revenues expanded by triple digits.

However, competitive pressures are intensifying. Domestic rivals including Tencent are leveraging vast ecosystems such as WeChat to gain an advantage in agentic AI, while a new wave of players like DeepSeek, MiniMax and Zhipu are offering low-cost, open-source models that compress margins across the industry.

At the same time, Alibaba faces structural challenges beyond AI. Core businesses such as e-commerce and food delivery remain under pressure from aggressive competition, while rising operational costs – subsidies and promotions to attract users – continue to weigh on profitability.

Leadership uncertainty and ongoing restructuring add further complexity. With major investment commitments exceeding $50 billion and increasing competition from both domestic and global players, Alibaba’s ability to execute on its AI strategy will be critical in determining whether it can sustain long-term growth and regain market confidence.

EU scrutiny intensifies over Broadcom VMware licensing dispute

Broadcom is facing increased regulatory pressure in the EU following a formal antitrust complaint concerning changes to VMware licensing practices.

The complaint highlights growing tensions between large technology providers and European cloud infrastructure firms.

The filing, submitted by Cloud Infrastructure Services Providers in Europe, raises concerns that revised licensing models could significantly alter market dynamics.

European providers argue that the changes may limit flexibility, increase costs, and affect their ability to compete effectively in the cloud services sector.

At the centre of the dispute lies the broader issue of market concentration and control over critical digital infrastructure.

Industry stakeholders suggest that restrictive licensing conditions could reshape access to essential virtualisation technologies, which underpin a wide range of cloud and enterprise services across the EU.

Regulatory attention is expected to focus on whether such practices align with EU competition rules, particularly regarding fair access and market neutrality.

The case emerges at a time when European policymakers are intensifying oversight of dominant technology firms and seeking to strengthen digital sovereignty across strategic sectors.

NVIDIA Isaac powers generalist specialist robots at scale

A new class of robots is emerging, combining broad adaptability with task-specific precision as developers move toward generalist specialist systems. Within this shift, NVIDIA Isaac is enabling integrated workflows that connect data generation, simulation, training, and deployment across robotics pipelines.

NVIDIA Isaac unifies robotics development across these stages, integrating cloud-to-robot workflows that allow developers to build, test, and scale systems more efficiently across both real and simulated environments.

A key driver is the growing reliance on synthetic data, which allows developers to simulate rare or hazardous scenarios that are difficult to capture in the real world. NVIDIA Isaac supports this through tools such as Omniverse-based simulation and teleoperation pipelines, helping convert real-world signals into scalable training datasets and accelerating development cycles.

The platform also enables advanced robot training using reasoning vision-language-action models, which allow machines to perceive, interpret, and act across complex environments. With frameworks like Isaac Lab and integrated physics engines, NVIDIA Isaac enables robots to train across thousands of parallel simulations, significantly reducing time, cost, and risk compared to real-world training.
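
The parallelism argument can be sketched with a toy vectorized environment, where one batched update advances thousands of simulations per step. This is a generic illustration of the pattern, not Isaac Lab's actual API, and the environment dynamics here are invented for illustration.

```python
import numpy as np

# Illustrative sketch only: a toy vectorized environment that steps thousands
# of simulations at once, the pattern GPU-parallel trainers rely on.
class VectorizedEnv:
    def __init__(self, num_envs: int, obs_dim: int):
        # One state row per simulated environment.
        self.states = np.zeros((num_envs, obs_dim))

    def step(self, actions: np.ndarray):
        # A single batched update advances every simulation in one call,
        # instead of looping over environments one at a time.
        self.states += 0.01 * actions
        rewards = -np.linalg.norm(self.states, axis=1)
        return self.states, rewards

env = VectorizedEnv(num_envs=4096, obs_dim=8)
obs, rew = env.step(np.ones((4096, 8)))
print(obs.shape, rew.shape)  # (4096, 8) (4096,)
```

Because every environment shares one batched array operation, wall-clock time per training step grows far more slowly than the number of simulations, which is what makes training across thousands of parallel runs practical.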

Once trained, NVIDIA Isaac supports deployment across edge AI systems, including the Jetson platform, while maintaining consistency between simulation and real-world performance. Combined with modular workflows and open frameworks, the platform is positioning itself as a core foundation for scalable, next-generation robotics.

Dublin launches major data centre microgrid

A new 110MW data centre microgrid has been launched in Dublin to support rising AI-driven energy demand. The system is designed to provide reliable power during early development stages before full grid connection.

The project combines energy generation, battery storage and heat recovery to improve efficiency and resilience. Developers say the system can help address power constraints affecting large-scale cloud and AI facilities.

Industry leaders in Dublin say the microgrid offers a model for integrating renewable energy with traditional infrastructure. The approach could be replicated in other European markets facing similar grid limitations.

Experts say the system also enables future innovations such as hydrogen integration and district heating. The project reflects a broader shift towards treating energy as a strategic asset in the expansion of AI infrastructure.

Tether unveils mobile-friendly AI training platform

Tether has launched an AI framework that runs large language models on smartphones and non-NVIDIA GPUs. The system is part of its QVAC platform and uses Microsoft’s BitNet architecture, along with LoRA techniques to reduce memory and computational requirements.

The framework enables cross-platform training on AMD, Intel, Apple Silicon, and mobile GPUs, allowing models with up to 1 billion parameters to be fine-tuned on phones in under 2 hours.
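
As a rough illustration of why LoRA shrinks the training workload enough for a phone, the back-of-envelope arithmetic below compares trainable parameters under full fine-tuning versus a low-rank adapter. The layer size and rank are hypothetical examples, not figures published by Tether.

```python
# Back-of-envelope: trainable parameters, full fine-tuning vs a LoRA adapter.
def full_finetune_params(d_in: int, d_out: int) -> int:
    """All weights of a d_in x d_out linear layer are trainable."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """LoRA trains only two low-rank factors: A (d_in x r) and B (r x d_out)."""
    return rank * (d_in + d_out)

full = full_finetune_params(4096, 4096)   # 16,777,216 trainable weights
lora = lora_params(4096, 4096, 8)         # 65,536 trainable weights
print(f"LoRA trains {lora / full:.2%} of the layer's parameters")
```

Training well under 1% of the weights per layer is what brings the memory and compute budget of fine-tuning within reach of mobile hardware.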

Models with up to 13 billion parameters are also supported on mobile devices. BitNet's 1-bit architecture reduces VRAM requirements by nearly 78%, allowing such models to run on limited hardware.
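
The memory arithmetic behind that kind of saving can be sketched as follows. The bit-widths are standard values for fp16 and BitNet-style ternary (~1.58-bit) weights; the exact baseline behind the article's 78% figure is not stated, so the numbers below are illustrative rather than a reproduction of it.

```python
def weight_vram_gb(params: float, bits_per_weight: float) -> float:
    """Approximate VRAM needed just for model weights, in decimal GB."""
    return params * bits_per_weight / 8 / 1e9

# A 13-billion-parameter model, weights only (activations/KV cache excluded):
fp16 = weight_vram_gb(13e9, 16)      # ~26 GB at 16 bits per weight
bitnet = weight_vram_gb(13e9, 1.58)  # ~2.6 GB at ~1.58 bits per weight

print(f"fp16: {fp16:.1f} GB, BitNet: {bitnet:.1f} GB, "
      f"saving {1 - bitnet / fp16:.0%} on weight storage")
```

Weight storage scales linearly with bits per weight, so pushing precision toward one bit is what lets multi-billion-parameter models fit alongside the memory limits of phones and entry-level GPUs.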

Performance improvements benefit inference, with mobile GPUs outperforming CPUs, enabling on-device training and federated learning. By reducing reliance on cloud infrastructure, the system offers more flexible AI development for distributed environments.

Tether’s expansion into AI mirrors a broader trend in the crypto sector, where companies are investing in AI infrastructure, autonomous agents, and high-performance computing.

Industry activity includes record revenue growth for AI and HPC operations, blockchain-integrated AI agents, and new tools for secure on-chain transactions.

Exchange Online outage affecting Outlook access resolved by Microsoft

Microsoft has addressed an Exchange Online outage that disrupted access to email and calendar services for users worldwide. The issue affected multiple connection methods, including Outlook on the web, Outlook desktop, and Exchange ActiveSync.

The company first acknowledged the problem early in the day, saying it was investigating reports of users being unable to access their mailboxes. According to a Microsoft 365 admin centre update, several Exchange Online connection protocols were impacted during the outage.

Although Microsoft later reported that telemetry indicated the issue was no longer occurring for most users, some customers continued to experience access problems. At one point, the Office.com portal also displayed an error message, preventing users from logging in.

Microsoft linked the disruption to an issue within its supporting network infrastructure, which affected how traffic was processed. Engineers implemented configuration changes to restore normal service and continued to monitor the platform to ensure stability.

In a later update, Microsoft confirmed that the Exchange Online outage had been mitigated and that services had been restored. The company said it is still investigating the root cause and will provide further details in a post-incident report, while a separate issue affecting Microsoft 365 Copilot web access remains under review.

Vera CPU by NVIDIA accelerates large-scale AI workloads

NVIDIA has unveiled the Vera CPU, designed specifically for agentic AI and reinforcement learning. The company says it delivers 50% faster performance and double the energy efficiency, and the chip has already been adopted by Alibaba, Meta, ByteDance, Oracle Cloud, CoreWeave, and Lambda.

Vera features 88 Olympus cores, high-bandwidth memory, and advanced multithreading, supporting large-scale AI deployments. Liquid-cooled racks can host over 22,500 concurrent CPU environments, allowing enterprises and research labs to scale agentic AI efficiently.

The CPU integrates with NVIDIA GPUs via NVLink-C2C and includes ConnectX SuperNIC and BlueField-4 DPUs to enhance networking, storage, and security. Early users like Cursor and Redpanda report major gains in AI agent throughput and real-time data processing.

High performance, energy efficiency, and GPU integration position Vera as a new standard for faster, more scalable, and responsive AI systems. The platform supports coding assistants, reinforcement learning, and large-scale data processing, making it suitable for enterprise and scientific use.

Data centre security evolves with rise of robot dog patrols

Rising demand for AI and cloud computing is driving a surge in data centre construction, pushing operators to adopt new security solutions. Companies are increasingly deploying robotic dogs to patrol sites and monitor operations.

These four-legged machines can inspect equipment, detect anomalies and alert staff before issues escalate. Merry Frayne, senior director of product management at Boston Dynamics, noted a sharp increase in interest as investment in data infrastructure continues to grow.

Developed by firms such as Boston Dynamics and Ghost Robotics, the robots are designed to support rather than replace human guards. Their use can reduce costs by requiring fewer personnel while maintaining continuous monitoring.

The machines can travel long distances on a single charge and operate across both external and internal environments. Some facilities already use them on pre-programmed patrols to collect data and flag unusual activity.

At the same time, competition in robotics is intensifying globally, with companies exploring humanoid and AI-powered systems. Advances from firms like NVIDIA and Tesla highlight how automation is expanding beyond security into broader industrial use.

EU delays tech sovereignty package with AI and Chips Act 2

The European Commission has delayed a flagship tech sovereignty package for the second time, according to its latest College agenda. The measures are now scheduled for adoption on 27 May, after previously being postponed from March to April.

The package includes several major initiatives to strengthen EU tech sovereignty, such as the Cloud and AI Development Act, the Chips Act 2, an open-source strategy, and a roadmap for digitalisation and AI in energy. European Commission officials have not provided a reason for the latest delay.

The Cloud and AI Development Act is expected to define what constitutes a ‘sovereign’ cloud and simplify rules for building data centres. The proposal is designed to accelerate infrastructure development as Europe seeks to compete in the global AI race.

Chips Act 2 will follow up on the EU’s earlier semiconductor strategy, which struggled to boost domestic chip production significantly. The new proposal is expected to refine industrial policy efforts to reduce reliance on foreign suppliers.

Meanwhile, the planned open-source strategy aims to support European software ecosystems and reduce dependence on large US technology firms. By encouraging commercially viable open-source projects, the EU hopes to strengthen its long-term digital autonomy.
