Alibaba AI strategy targets $100 billion cloud and AI revenue

Alibaba has set an ambitious target of $100 billion in annual cloud and AI revenue within five years, as it seeks to counter slowing growth in its once-dominant e-commerce business.

The push follows a sharp deterioration in financial performance, with quarterly earnings plunging and revenue growth missing expectations. The results underscore growing urgency within the company to extract meaningful returns from its AI investments, which have so far required heavy capital outlays.

Central to the strategy is a shift toward monetisation, with the rollout of agentic AI services such as Wukong and price increases of up to 34% across cloud and storage products. Alibaba is positioning its AI and cloud division as its primary growth engine, aiming to replicate the momentum seen in recent quarters, when AI-related revenues expanded by triple digits.

However, competitive pressures are intensifying. Domestic rivals including Tencent are leveraging vast ecosystems such as WeChat to gain an advantage in agentic AI, while a new wave of players like DeepSeek, MiniMax and Zhipu are offering low-cost, open-source models that compress margins across the industry.

At the same time, Alibaba faces structural challenges beyond AI. Core businesses such as e-commerce and food delivery remain under pressure from aggressive competition, while rising operational costs – subsidies and promotions to attract users – continue to weigh on profitability.

Leadership uncertainty and ongoing restructuring add further complexity. With major investment commitments exceeding $50 billion and increasing competition from both domestic and global players, Alibaba’s ability to execute on its AI strategy will be critical in determining whether it can sustain long-term growth and regain market confidence.

Learning to integrate AI into daily work like a Googler

A Stanford-backed study examined how Googlers adopt AI, showing why some embrace it while others struggle to find value. Researchers found that many initially relied on ‘simple substitution’, replacing individual tasks with AI, but achieved limited benefit because the effort exceeded the payoff.

Successful adopters approached AI differently, applying a product management mindset. They identified high-value opportunities, understood the capabilities of various AI tools, and redesigned workflows rather than seeking quick fixes.

Generative AI, described as a Swiss Army knife of technology, benefits from this methodical approach.

The study highlighted five strategies for deep AI adoption: focus on work blockers rather than technology, select the right tool for the task, start small with rapid experiments, think holistically across systems, and document successful practices for others to replicate.

These techniques help users integrate AI into broader processes, elevate strategic thinking, and increase productivity.

Researchers emphasised that AI adoption thrives when employees rethink workflows and collaborate to share insights. Using a product management mindset, teams can integrate AI to boost creativity, efficiency, and decision-making across the organisation.

OpenAI acquires Astral to expand Codex developer tools

OpenAI is acquiring Astral as developer tooling becomes a bigger focus, with the deal aimed at boosting the capabilities of its Codex platform. The move is expected to bring Astral’s widely used open-source Python tools, including uv, Ruff, and ty, into the Codex ecosystem; the tools are already embedded in millions of developer workflows.

The acquisition is intended to strengthen Codex’s role across the full software development lifecycle, moving beyond code generation toward more integrated and autonomous systems.

The company has positioned Codex as a system that can plan changes, modify codebases, run tools, and verify results, with usage already growing rapidly. OpenAI reported a threefold increase in users and a fivefold increase in activity this year, bringing its total to more than 2 million weekly active users.

Astral’s tools are seen as a natural fit for this vision, given their role in managing dependencies, enforcing code quality, and improving reliability in Python-based development. Integrating these tools could allow AI agents to interact more directly with the environments developers already use.
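
To illustrate how that might look in practice, the sketch below shows a hypothetical automation step that shells out to uv and Ruff from Python. The command lines used (uv pip install, ruff check --fix, ruff format) come from the tools’ public interfaces; the surrounding agent logic is an assumption for illustration, not a description of how Codex actually integrates them.

```python
import subprocess

def run(cmd):
    """Run a developer tool and return its exit status plus combined output."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout + result.stderr

# Hypothetical agent step 1: install the project's declared dependencies with uv.
install_code, install_out = run(["uv", "pip", "install", "-r", "requirements.txt"])

# Hypothetical agent step 2: lint (applying safe auto-fixes) and then format the
# codebase with Ruff before any generated changes are proposed.
lint_code, lint_out = run(["ruff", "check", "--fix", "."])
fmt_code, fmt_out = run(["ruff", "format", "."])

if any(code != 0 for code in (install_code, lint_code, fmt_code)):
    print(install_out + lint_out + fmt_out)  # surface diagnostics for the agent to act on
else:
    print("dependencies installed and code checks passed")
```

In a pipeline of this kind, exit codes and tool diagnostics become verification signals, which fits the description of Codex running tools and verifying results.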

The acquisition also reinforces the importance of Python as a core language in modern software development, particularly across AI, data science, and backend systems. OpenAI said it plans to continue supporting Astral’s open-source projects while exploring deeper integration with Codex.

The deal remains subject to regulatory approval, and both companies will operate independently until completion. Once finalised, Astral’s team is expected to join OpenAI’s Codex division as the company continues building AI systems designed to collaborate across the development workflow.

UNESCO promotes safe AI use and gender equality in Caribbean workshop

UNESCO has organised a regional workshop in Kingston to explore the relationship between AI, gender equality and online safety, reflecting wider efforts to support inclusive digital governance across the Caribbean.

Discussions examined the impact of technology-facilitated gender-based violence, including harassment, impersonation and image-based abuse, which continue to affect women and girls disproportionately.

Generative AI was presented as both an opportunity and a risk, with concerns linked to bias, deepfakes, misinformation and non-consensual content.

More than 50 participants from government, civil society and youth organisations engaged in practical sessions aimed at strengthening awareness and digital skills. A participatory approach encouraged peer learning and critical thinking, aligning with UNESCO’s ethical AI principles.

‘Technology reflects the hands that build it and the society that feeds it data. If we are not careful, AI will not just mirror our existing inequalities; it will magnify them,’ said the Honourable Olivia Grange, Minister of Culture, Gender, Entertainment and Sport of Jamaica.

‘The pursuit of equality must extend into every space where women live, work, and where they connect and express themselves – including the digital world,’ added Eric Falt, Regional Director and Representative of UNESCO.

The initiative forms part of broader efforts to ensure that digital transformation supports inclusion rather than reinforcing existing disparities, while equipping stakeholders with tools for safe and responsible AI use.

TikTok disinformation study raises concerns over AI content and EU regulation

A new study by Science Feedback indicates that TikTok has a higher proportion of misleading content than other major platforms operating in the EU.

The analysis covered France, Poland, Slovakia and Spain, assessing content across multiple thematic areas including health, politics and climate.

Findings suggest that approximately one in four posts on TikTok contained misleading elements, placing the platform ahead of competitors such as Facebook, YouTube and X. Health-related narratives were the most prominent category, reflecting broader patterns observed across digital ecosystems.

Researchers describe disinformation as a persistent feature embedded within platform structures rather than an isolated occurrence.

The study also highlights a growing presence of AI-generated content, particularly in video formats, where synthetic material accounted for a significant share of misleading posts. Despite existing platform policies, most identified content lacked clear labelling.

The regulatory context remains under development.

While the Digital Services Act integrates voluntary commitments from the EU disinformation code, it does not impose mandatory requirements for identifying AI-generated material.

Ongoing debates therefore focus on transparency, accountability and the evolving responsibilities of digital platforms within the European information environment.

EU scrutiny intensifies over Broadcom VMware licensing dispute

Broadcom is facing increased regulatory pressure in the EU following a formal antitrust complaint concerning changes to VMware licensing practices.

The complaint highlights growing tensions between large technology providers and European cloud infrastructure firms.

The filing, submitted by Cloud Infrastructure Services Providers in Europe, raises concerns that revised licensing models could significantly alter market dynamics.

European providers argue that the changes may limit flexibility, increase costs, and affect their ability to compete effectively in the cloud services sector.

At the centre of the dispute lies the broader issue of market concentration and control over critical digital infrastructure.

Industry stakeholders suggest that restrictive licensing conditions could reshape access to essential virtualisation technologies, which underpin a wide range of cloud and enterprise services across the EU.

Regulatory attention is expected to focus on whether such practices align with EU competition rules, particularly regarding fair access and market neutrality.

The case emerges at a time when European policymakers are intensifying oversight of dominant technology firms and seeking to strengthen digital sovereignty across strategic sectors.

UK drops AI copyright opt-out plan amid growing industry divide

The UK Government has abandoned its previous preference for an AI copyright opt-out model, signalling a shift in policy following strong opposition from creative industries.

Ministers now acknowledge that there is no clear consensus on how AI developers should access copyrighted material.

Concerns from writers, artists and rights holders focused on the use of their work in training AI systems without permission.

Liz Kendall confirmed that extensive consultation exposed significant disagreement, prompting the government to step back from its earlier position that would have allowed the use of copyrighted content unless creators opted out.

A joint report from the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport states that further evidence is required before any legislative change.

Policymakers in the UK will assess how copyright frameworks influence AI development, while also examining international regulation, licensing models and ongoing legal disputes.

Government strategy now centres on balancing innovation with fair compensation.

Officials emphasise that creators must retain control over how their work is used, while AI developers require access to high-quality data to remain competitive. Potential measures include labelling AI-generated content to reduce risks linked to disinformation and deepfakes.

No timeline has been set for reform, reflecting the complexity of aligning economic growth with intellectual property protection.

The debate unfolds alongside broader ambitions outlined by Rachel Reeves, who has identified AI as a central driver of future economic expansion, with the UK aiming to lead adoption across the G7.

AI safety push sees Anthropic and OpenAI recruit explosives specialists

Anthropic and OpenAI are recruiting chemical and explosives experts to strengthen safeguards for their AI systems, reflecting growing concern about the potential misuse of advanced models.

Anthropic is seeking a policy specialist to design and monitor guardrails governing how its systems respond to prompts involving chemical weapons and explosives. The role includes assessing high-risk scenarios and responding to potential escalation signals in real time.

OpenAI is expanding its Preparedness team, hiring researchers and a threat modeller to identify and forecast risks linked to frontier AI systems. The positions focus on evaluating catastrophic risks and aligning technical, policy, and governance responses.

The recruitment drive comes amid heightened scrutiny of AI safety and national security implications. Anthropic is currently challenging a US government designation that labels it a supply-chain risk, while tensions have emerged over restrictions on the military use of AI systems.

At the same time, OpenAI has secured agreements to deploy its technology in classified environments under defined constraints. The parallel developments highlight how AI firms are balancing commercial expansion with increasing pressure to implement robust safety controls.

NVIDIA Isaac powers generalist specialist robots at scale

A new class of robots is emerging, combining broad adaptability with task-specific precision as developers move toward generalist specialist systems. Within this shift, NVIDIA Isaac is enabling integrated workflows that connect data generation, simulation, training, and deployment across robotics pipelines.

NVIDIA Isaac unifies robotics development across these stages, integrating cloud-to-robot workflows that allow developers to build, test, and scale systems more efficiently across both real and simulated environments.

A key driver is the growing reliance on synthetic data, which allows developers to simulate rare or hazardous scenarios that are difficult to capture in the real world. NVIDIA Isaac supports this through tools such as Omniverse-based simulation and teleoperation pipelines, helping convert real-world signals into scalable training datasets and accelerating development cycles.

The platform also enables advanced robot training using reasoning vision-language-action models, which allow machines to perceive, interpret, and act across complex environments. With frameworks like Isaac Lab and integrated physics engines, NVIDIA Isaac enables robots to train across thousands of parallel simulations, significantly reducing time, cost, and risk compared to real-world training.
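
As a rough, framework-agnostic illustration of why massively parallel simulation helps, the sketch below steps thousands of toy environments in lockstep and pushes the whole batch through a single policy. It deliberately does not use the actual Isaac Lab API; the environment, policy and reward here are stand-ins assumed purely for illustration.

```python
import numpy as np

NUM_ENVS = 4096          # thousands of simulated robots stepped in lockstep
OBS_DIM, ACT_DIM = 32, 8
rng = np.random.default_rng(0)

class BatchedToyEnv:
    """Stand-in for a vectorised simulator: one step advances every environment at once."""
    def __init__(self, num_envs):
        self.state = rng.normal(size=(num_envs, OBS_DIM))

    def step(self, actions):
        # A real simulator would run physics here; we just perturb the batched state.
        self.state = 0.99 * self.state + 0.01 * rng.normal(size=self.state.shape)
        rewards = -np.linalg.norm(actions, axis=1)  # toy reward signal
        return self.state, rewards

def policy(obs, weights):
    """Linear stand-in for a learned policy (e.g. a vision-language-action model)."""
    return np.tanh(obs @ weights)

env = BatchedToyEnv(NUM_ENVS)
weights = rng.normal(scale=0.1, size=(OBS_DIM, ACT_DIM))

obs = env.state
for _ in range(100):
    actions = policy(obs, weights)    # one forward pass covers all environments
    obs, rewards = env.step(actions)  # one simulation step yields NUM_ENVS samples
    # a real trainer would update `weights` from this batch of experience

print(f"collected {NUM_ENVS * 100} environment steps")
```

The point is that each simulation step yields thousands of training samples at once, which is where the claimed savings in time, cost and physical risk come from.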

Once models are trained, NVIDIA Isaac supports deployment across edge AI systems, including the Jetson platform, while maintaining consistency between simulated and real-world performance. Combined with modular workflows and open frameworks, the platform is positioning itself as a core foundation for scalable, next-generation robotics.

Quantum cryptography pioneers win top computing prize

Two researchers have been awarded the Turing Award for pioneering work in quantum cryptography. Their research laid the foundations for a new form of secure communication based on quantum physics.

The method, developed in the 1980s, enables encryption keys that cannot be copied without detection. Any attempt to intercept the data alters its physical properties, revealing interference.
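
The description matches quantum key distribution in the spirit of the BB84 protocol. Below is a minimal classical simulation sketch of that idea, assuming the standard setup (random bits and bases, public basis comparison, and a sacrificed sample used to estimate errors); measuring in the wrong basis is modelled as a coin flip, which is exactly what makes interception visible.

```python
import random

def bb84_sim(n_bits=2000, eavesdrop=False):
    """Toy classical simulation of BB84-style quantum key distribution.

    A qubit measured in the wrong basis yields a random bit; that single
    rule is what makes an eavesdropper detectable.
    """
    alice_bits = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("+x") for _ in range(n_bits)]

    # The channel carries (bit, basis). An eavesdropper measures each qubit
    # in a random basis, collapsing it before it reaches Bob.
    channel = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            eve_basis = random.choice("+x")
            if eve_basis != basis:
                bit = random.randint(0, 1)  # wrong basis: outcome is random
            basis = eve_basis
        channel.append((bit, basis))

    # Bob measures in his own random bases.
    bob_bases = [random.choice("+x") for _ in range(n_bits)]
    bob_bits = [bit if basis == bob_basis else random.randint(0, 1)
                for (bit, basis), bob_basis in zip(channel, bob_bases)]

    # Alice and Bob publicly compare bases, keep matching positions, and
    # sacrifice part of the resulting key to estimate the error rate.
    kept = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    sample = kept[: len(kept) // 4]
    errors = sum(alice_bits[i] != bob_bits[i] for i in sample)
    return errors / max(len(sample), 1)

if __name__ == "__main__":
    print(f"no eavesdropper : ~{bb84_sim(eavesdrop=False):.0%} sample errors")
    print(f"with eavesdropper: ~{bb84_sim(eavesdrop=True):.0%} sample errors")
```

Run as written, the undisturbed case shows essentially no disagreement, while the intercepted case shows roughly a quarter of the compared bits in error, which is the detection mechanism the article describes.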

Experts say the approach could become vital as quantum computing advances. Traditional encryption methods may become vulnerable as computing power increases.

The award highlights the growing importance of secure data transmission in a digital world. Researchers believe quantum cryptography could play a central role in encrypting and protecting future communications.
