OpenAI plans AI superapp to unify ChatGPT and Codex

A shift toward consolidation is underway, with OpenAI planning to merge its ChatGPT app, Codex platform and browser into a single desktop ‘superapp’ designed to simplify the user experience.

OpenAI said the move aims to streamline its product ecosystem after a period of rapid expansion that resulted in multiple standalone tools. The company is now prioritising a more unified approach, particularly as it intensifies competition with rivals such as Anthropic in enterprise and developer markets.

The planned superapp will focus heavily on ‘agentic’ AI capabilities, enabling systems to operate autonomously across tasks such as writing software, analysing data and managing workflows. The goal is to create a central platform where AI can act as a collaborative assistant across the full productivity stack.

Internal leadership changes are also supporting the transition. Chief of Applications Fidji Simo will oversee the initiative, working alongside President Greg Brockman, as the company restructures teams to align around a single core product. Executives have emphasised the need to reduce fragmentation and improve product quality.

The shift comes as OpenAI faces increasing pressure from competitors that have gained traction with enterprise customers. Anthropic, in particular, has seen success with its developer-focused offerings, prompting OpenAI to refocus on business users and revenue growth.

Over the coming months, the company plans to expand Codex with broader productivity features before integrating ChatGPT and its browser into the unified platform. While the mobile ChatGPT app will remain separate, the broader strategy signals a move toward a more cohesive and scalable AI ecosystem.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba AI strategy targets $100 billion cloud and AI revenue

An ambitious target to generate $100 billion in annual cloud and AI revenue within five years has been set, as Alibaba seeks to counter slowing growth in its once-dominant e-commerce business.

The push follows a sharp deterioration in financial performance, with quarterly earnings plunging and revenue growth missing expectations. The results underscore growing urgency within the company to extract meaningful returns from its AI investments, which have so far required heavy capital outlays.

Central to the strategy is a shift toward monetisation, with the rollout of agentic AI services such as Wukong and price increases of up to 34% across cloud and storage products. Alibaba is positioning its AI and cloud division as its primary growth engine, aiming to replicate the momentum seen in recent quarters, when AI-related revenues expanded by triple digits.

However, competitive pressures are intensifying. Domestic rivals including Tencent are leveraging vast ecosystems such as WeChat to gain an advantage in agentic AI, while a new wave of players like DeepSeek, MiniMax and Zhipu are offering low-cost, open-source models that compress margins across the industry.

At the same time, Alibaba faces structural challenges beyond AI. Core businesses such as e-commerce and food delivery remain under pressure from aggressive competition, while rising operational costs – subsidies and promotions to attract users – continue to weigh on profitability.

Leadership uncertainty and ongoing restructuring add further complexity. With major investment commitments exceeding $50 billion and increasing competition from both domestic and global players, Alibaba’s ability to execute on its AI strategy will be critical in determining whether it can sustain long-term growth and regain market confidence.

Learning to integrate AI into daily work like a Googler

A Stanford-backed study examined how Googlers adopt AI, showing why some embrace it while others struggle to find value. Researchers found that many initially relied on ‘simple substitution,’ replacing tasks with AI, but achieved limited benefit because the effort exceeded the payoff.

Successful adopters approached AI differently, applying a product management mindset. They identified high-value opportunities, understood the capabilities of various AI tools, and redesigned workflows rather than seeking quick fixes.

Generative AI, described as a Swiss Army knife of technology, benefits from this methodical approach.

The study highlighted five strategies for deep AI adoption: focus on work blockers rather than technology, select the right tool for the task, start small with rapid experiments, think holistically across systems, and document successful practices for others to replicate.

These techniques help users integrate AI into broader processes, elevate strategic thinking, and increase productivity.

Researchers emphasised that AI adoption thrives when employees rethink workflows and collaborate to share insights. Using a product management mindset, teams can integrate AI to boost creativity, efficiency, and decision-making across the organisation.

OpenAI acquires Astral to expand Codex developer tools

Astral is being acquired by OpenAI as developer tooling becomes a bigger focus, with the deal aimed at boosting the capabilities of its Codex platform. The move is expected to bring widely used open-source Python tools into the ecosystem, including uv, Ruff, and ty, which are already embedded in millions of developer workflows.

The acquisition is intended to strengthen Codex’s role across the full software development lifecycle, moving beyond code generation toward more integrated and autonomous systems.

The company has positioned Codex as a system that can plan changes, modify codebases, run tools, and verify results, with usage already growing rapidly. OpenAI reported a threefold increase in users and a fivefold increase in activity this year, bringing its total to more than 2 million weekly active users.

Astral’s tools are seen as a natural fit for this vision, given their role in managing dependencies, enforcing code quality, and improving reliability in Python-based development. Integrating these tools could allow AI agents to interact more directly with the environments developers already use.

The acquisition also reinforces the importance of Python as a core language in modern software development, particularly across AI, data science, and backend systems. OpenAI said it plans to continue supporting Astral’s open-source projects while exploring deeper integration with Codex.

The deal remains subject to regulatory approval, and both companies will operate independently until completion. Once finalised, Astral’s team is expected to join OpenAI’s Codex division as the company continues building AI systems designed to collaborate across the development workflow.

Firefox adds VPN and AI tools

Mozilla is preparing a major update to its Firefox browser, introducing a built-in VPN and new AI-powered tools. The company says the changes aim to strengthen privacy and give users greater control over browsing.

The integrated VPN will hide the user’s location and IP address while offering a limited monthly data allowance in selected regions. The feature replaces Mozilla’s previously separate paid VPN service.

New AI tools will support tasks such as summarising content and comparing products without leaving a web page. Additional features include split-screen browsing and tools to organise notes across tabs.

The update also introduces redesigned settings and a refreshed interface to improve usability. Mozilla says the changes are intended to create a more personalised and modern browsing experience.

AI agent causes internal data leak at Meta

Meta recently confirmed that an AI agent inadvertently exposed sensitive company and user data to some employees. The leak occurred when an engineer acted on a suggestion the AI agent had posted on an internal forum, leaving the data exposed for about two hours.

Meta stated that no user data was mishandled and emphasised that human errors could cause similar issues.

The incident reflects broader challenges in deploying agentic AI tools within major tech companies. Amazon faced similar issues, with internal AI tools causing outages and operational errors, showing risks of quickly integrating AI into critical workflows.

Experts describe these deployments as experimental, with companies testing AI at scale without fully assessing potential risks.

Security specialists note that AI agents lack the contextual awareness that human engineers accumulate over years of experience. Lacking long-term operational knowledge, AI can make decisions that compromise security, a factor in the Meta breach.

Analysts warn that such errors are likely to recur as AI adoption accelerates.

The episode comes amid growing attention on agentic AI’s potential to disrupt workflows, affect productivity, and introduce new vulnerabilities. Industry observers caution that AI tools must be carefully monitored and accompanied by robust safeguards to prevent future incidents.

Bulgaria becomes first country to deploy a national AI model across a tax authority

Bulgaria’s National Revenue Agency (NRA) has begun rolling out an AI system developed by INSAIT, the Institute for Computer Science, Artificial Intelligence and Technology at Sofia University, across all of its organisational structures. The rollout makes the NRA the first large-scale public administrative body in the country to deploy the BgGPT national language model.

Following a successful pilot phase, the system is now in expanded use across the NRA’s central office and seven territorial directorates.

The AI system enables staff to conduct general and specialised searches related to tax and social security legislation, generating instant responses to improve service quality for citizens and businesses.

Crucially, it runs exclusively on open-weight models and operates on proprietary hardware, an approach specifically designed to prevent data leakage and protect privacy, two of the central concerns when integrating AI into government institutions.

The next phase of the project will see the system adapted for specialised use cases and integrated into internal processes alongside national integrator ‘Information Services’, with the goal of reaching daily use by more than 7,000 NRA employees.

INSAIT describes the initiative as a concrete contribution to European AI sovereignty, with Bulgaria combining nationally developed language models and locally controlled hardware to reduce dependence on commercial AI providers.

Mastercard expands AI strategy with new payments model

Mastercard has introduced a generative AI foundation model trained on billions of anonymised transactions. The model is designed as a backend system to power insights across payments and commerce services.

The company plans to extend AI use beyond fraud detection into cybersecurity, loyalty programmes and small-business tools. The model is being developed with support from Nvidia and Databricks technologies.

Earlier AI tools focused on fraud detection, significantly improving accuracy and reducing false positives. The new model marks a shift towards a broader infrastructure approach across multiple products.

This move aligns with Mastercard’s growing reliance on value-added services, which generated over $13 billion in revenue. These services include security, analytics and digital payment solutions beyond the core network.

Competitors such as Visa and PayPal are also expanding AI-driven commerce platforms. The race is intensifying as firms build integrated systems for payments, automation and intelligent services.

Agentic Ready programme by Visa launched to prepare for AI-driven payments

Visa has launched Agentic Ready, a global programme preparing the payments ecosystem for AI agents to initiate transactions for consumers. The programme builds on Visa Intelligent Commerce, the company’s framework for secure, AI-driven payment experiences.

The first phase, launching in Europe (including the UK), focuses on issuer readiness. Participating banks and financial institutions can test and validate agent-initiated transactions in controlled production environments, ensuring they remain secure, reliable, and scalable.

Visa’s trust layer integrates tokenisation, identity verification, risk controls, and biometric authentication to maintain consumer consent and protection throughout transactions.

Controlled testing with selected merchants allows issuers to gain practical experience of agentic commerce in real-world settings. Early participants, including Barclays, HSBC UK, Revolut, and Banco Santander, help Visa test and refine safe AI-driven payments across channels.

The programme advances Visa’s vision of AI-driven commerce, enabling flexible payments while keeping consumers in control. Expansion beyond Europe is planned, leveraging lessons from the initial rollout to accelerate agentic commerce globally.

YouTube enlists users to rate videos as AI slop in content quality push

YouTube has introduced a new pop-up survey asking viewers to rate whether videos feel like ‘AI slop’, with users able to score content on a scale from ‘not at all’ to ‘extremely’ sloppy.

The feature began appearing on 17 March 2026 and marks a shift in approach, with YouTube now enlisting its audience directly to help identify low-quality, AI-generated content.

The move adds a third layer of detection on top of YouTube’s existing automated and human review systems, both of which have struggled to keep pace with the flood of AI-generated uploads.

Research found that roughly 21% of the first 500 videos recommended to a brand-new YouTube account were identified as AI slop, with a further 33% falling into a broader category of repetitive, low-substance content.

Combating this was named a 2026 priority by YouTube CEO Neal Mohan in his annual letter to the platform.

The survey has not been without controversy.

Critics on social media have pointed out that viewer-labelled ‘slop’ data could be fed into Google’s Veo video generation models, potentially training future AI to avoid the very patterns humans flag as low quality. That prospect raises questions about whether YouTube is crowdsourcing content moderation or, inadvertently, AI improvement.

YouTube has not clarified how the feedback data will be used.
