DoorDash launches Tasks app to train AI robots with gig workers

A new wave of AI development is increasingly relying on real-world human behaviour, with DoorDash moving to tap its gig workforce to generate training data for robotics systems.

DoorDash has launched a standalone app called Tasks, allowing couriers to earn money by recording themselves performing everyday activities such as folding clothes, washing dishes or making a bed. The collected data is used to train AI and robotics models to understand physical environments and human interactions better.

The move reflects a broader shift in AI training, where companies are seeking physical, real-world data rather than relying solely on text and images. Such data is essential for building systems capable of performing tasks in dynamic environments, including humanoid robots and autonomous machines.

Other companies are pursuing similar strategies. Uber and Instawork have tested gig-based data-collection models, while robotics startups are using wearable devices, such as gloves and head-mounted cameras, to capture detailed motion data for training.

The Tasks app is currently being rolled out as a pilot, with DoorDash planning to expand the types of available assignments over time. Some tasks may also be integrated into the main Dasher app, including activities that support navigation or assist autonomous delivery systems.

As competition intensifies, access to large-scale physical data is becoming a critical advantage. DoorDash’s approach highlights how gig-economy platforms are increasingly integrated into the development of next-generation AI systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK firms struggle to turn AI adoption into measurable returns

AI adoption is accelerating across UK businesses, with 78% now using the technology in some capacity, rising to 85% among mid-sized organisations. A further 14% are exploring or planning implementation by 2026, reflecting the continued momentum behind AI adoption.

Despite widespread use, tangible results remain limited. Just 31% of UK businesses report a positive return on investment, while 18% say their AI initiatives have failed to deliver expected benefits. Another 16% indicate it is still too early to assess outcomes, highlighting the long lead times often associated with AI deployments.

A major issue lies in defining success. Only 41% of organisations using AI say they have a clear understanding of what success looks like, suggesting that adoption often outpaces strategic planning. Even among mid-sized firms, the most active adopters, fewer than half can articulate measurable goals.

The findings suggest that rapid uptake has outpaced organisational readiness. Many businesses are deploying AI tools without defining how they fit into workflows, what decisions they are meant to support, or whether the goal is efficiency, cost reduction, or growth.

For AI adoption to translate into real business value, companies will need stronger governance, clearer objectives, and measurable success criteria. Without that foundation, AI risks remaining an expensive experiment rather than a driver of long-term transformation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba AI strategy targets $100 billion cloud and AI revenue

Alibaba has set an ambitious target of generating $100 billion in annual cloud and AI revenue within five years, as it seeks to counter slowing growth in its once-dominant e-commerce business.

The push follows a sharp deterioration in financial performance, with quarterly earnings plunging and revenue growth missing expectations. The results underscore growing urgency within the company to extract meaningful returns from its AI investments, which have so far required heavy capital outlays.

Central to the strategy is a shift toward monetisation, with the rollout of agentic AI services such as Wukong and price increases of up to 34% across cloud and storage products. Alibaba is positioning its AI and cloud division as its primary growth engine, aiming to replicate the momentum seen in recent quarters, when AI-related revenues expanded by triple digits.

However, competitive pressures are intensifying. Domestic rivals including Tencent are leveraging vast ecosystems such as WeChat to gain an advantage in agentic AI, while a new wave of players like DeepSeek, MiniMax and Zhipu are offering low-cost, open-source models that compress margins across the industry.

At the same time, Alibaba faces structural challenges beyond AI. Core businesses such as e-commerce and food delivery remain under pressure from aggressive competition, while rising operational costs – subsidies and promotions to attract users – continue to weigh on profitability.

Leadership uncertainty and ongoing restructuring add further complexity. With major investment commitments exceeding $50 billion and increasing competition from both domestic and global players, Alibaba’s ability to execute on its AI strategy will be critical in determining whether it can sustain long-term growth and regain market confidence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Learning to integrate AI into daily work like a Googler

A Stanford-backed study examined how Googlers adopt AI, showing why some embrace it while others struggle to find value. Researchers found that many initially relied on ‘simple substitution,’ replacing tasks with AI, but achieved limited benefit because the effort exceeded the payoff.

Successful adopters approached AI differently, applying a product management mindset. They identified high-value opportunities, understood the capabilities of various AI tools, and redesigned workflows rather than seeking quick fixes.

Generative AI, described as a Swiss Army knife of technology, benefits from this methodical approach.

The study highlighted five strategies for deep AI adoption: focus on work blockers rather than technology, select the right tool for the task, start small with rapid experiments, think holistically across systems, and document successful practices for others to replicate.

These techniques help users integrate AI into broader processes, elevate strategic thinking, and increase productivity.

Researchers emphasised that AI adoption thrives when employees rethink workflows and collaborate to share insights. Using a product management mindset, teams can integrate AI to boost creativity, efficiency, and decision-making across the organisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI acquires Astral to expand Codex developer tools

OpenAI is acquiring Astral as developer tooling becomes a bigger focus, with the deal aimed at boosting the capabilities of its Codex platform. The move is expected to bring widely used open-source Python tools into the ecosystem, including uv, Ruff, and ty, which are already embedded in millions of developer workflows.

The acquisition is intended to strengthen Codex’s role across the full software development lifecycle, moving beyond code generation toward more integrated and autonomous systems.

The company has positioned Codex as a system that can plan changes, modify codebases, run tools, and verify results, with usage already growing rapidly. OpenAI reported a threefold increase in users and a fivefold increase in activity this year, bringing its total to more than 2 million weekly active users.

Astral’s tools are seen as a natural fit for this vision, given their role in managing dependencies, enforcing code quality, and improving reliability in Python-based development. Integrating these tools could allow AI agents to interact more directly with the environments developers already use.

The acquisition also reinforces the importance of Python as a core language in modern software development, particularly across AI, data science, and backend systems. OpenAI said it plans to continue supporting Astral’s open-source projects while exploring deeper integration with Codex.

The deal remains subject to regulatory approval, and both companies will operate independently until completion. Once finalised, Astral’s team is expected to join OpenAI’s Codex division as the company continues building AI systems designed to collaborate across the development workflow.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO promotes safe AI use and gender equality in Caribbean workshop

UNESCO has organised a regional workshop in Kingston to explore the relationship between AI, gender equality and online safety, reflecting wider efforts to support inclusive digital governance across the Caribbean.

Discussions examined the impact of technology-facilitated gender-based violence, including harassment, impersonation and image-based abuse, which continue to affect women and girls disproportionately.

Generative AI was presented as both an opportunity and a risk, with concerns linked to bias, deepfakes, misinformation and non-consensual content.

More than 50 participants from government, civil society and youth organisations engaged in practical sessions aimed at strengthening awareness and digital skills. A participatory approach encouraged peer learning and critical thinking, aligning with UNESCO’s ethical AI principles.

‘Technology reflects the hands that build it and the society that feeds it data. If we are not careful, AI will not just mirror our existing inequalities; it will magnify them,’ said the Honourable Olivia Grange, Minister of Culture, Gender, Entertainment and Sport of Jamaica.

‘The pursuit of equality must extend into every space where women live, work, connect and express themselves – including the digital world,’ said Eric Falt, Regional Director and Representative of UNESCO.

The initiative forms part of broader efforts to ensure that digital transformation supports inclusion rather than reinforcing existing disparities, while equipping stakeholders with tools for safe and responsible AI use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok disinformation study raises concerns over AI content and EU regulation

A new study by Science Feedback indicates that TikTok has a higher proportion of misleading content than other major platforms operating in the EU.

The analysis covered France, Poland, Slovakia and Spain, assessing content across multiple thematic areas including health, politics and climate.

Findings suggest that approximately one in four posts on TikTok contained misleading elements, placing the platform ahead of competitors such as Facebook, YouTube and X. Health-related narratives were the most prominent category, reflecting broader patterns observed across digital ecosystems.

Researchers describe disinformation as a persistent feature embedded within platform structures instead of an isolated occurrence.

The study also highlights a growing presence of AI-generated content, particularly in video formats, where synthetic material accounted for a significant share of misleading posts. Despite existing platform policies, most identified content lacked clear labelling.

The regulatory context remains under development.

While the Digital Services Act integrates voluntary commitments from the EU disinformation code, it does not impose mandatory requirements for identifying AI-generated material.

Ongoing debates therefore focus on transparency, accountability and the evolving responsibilities of digital platforms within the European information environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU scrutiny intensifies over Broadcom VMware licensing dispute

Broadcom is facing increased regulatory pressure in the EU following a formal antitrust complaint concerning changes to VMware licensing practices.

The complaint highlights growing tensions between large technology providers and European cloud infrastructure firms.

The filing, submitted by Cloud Infrastructure Services Providers in Europe, raises concerns that revised licensing models could significantly alter market dynamics.

European providers argue that the changes may limit flexibility, increase costs, and affect their ability to compete effectively in the cloud services sector.

At the centre of the dispute lies the broader issue of market concentration and control over critical digital infrastructure.

Industry stakeholders suggest that restrictive licensing conditions could reshape access to essential virtualisation technologies, which underpin a wide range of cloud and enterprise services across the EU.

Regulatory attention is expected to focus on whether such practices align with EU competition rules, particularly regarding fair access and market neutrality.

The case emerges at a time when European policymakers are intensifying oversight of dominant technology firms and seeking to strengthen digital sovereignty across strategic sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK drops AI copyright opt-out plan amid growing industry divide

The UK Government has abandoned its previous preference for an AI copyright opt-out model, signalling a shift in policy following strong opposition from creative industries.

Ministers now acknowledge that there is no clear consensus on how AI developers should access copyrighted material.

Concerns from writers, artists and rights holders focused on the use of their work in training AI systems without permission.

Liz Kendall confirmed that extensive consultation exposed significant disagreement, prompting the government to step back from its earlier position that would have allowed the use of copyrighted content unless creators opted out.

A joint report from the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport states that further evidence is required before any legislative change.

Policymakers in the UK will assess how copyright frameworks influence AI development, while also examining international regulation, licensing models and ongoing legal disputes.

Government strategy now centres on balancing innovation with fair compensation.

Officials emphasise that creators must retain control over how their work is used, while AI developers require access to high-quality data to remain competitive. Potential measures include labelling AI-generated content to reduce risks linked to disinformation and deepfakes.

No timeline has been set for reform, reflecting the complexity of aligning economic growth with intellectual property protection.

The debate unfolds alongside broader ambitions outlined by Rachel Reeves, who has identified AI as a central driver of future economic expansion, with the UK aiming to lead adoption across the G7.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI safety push sees Anthropic and OpenAI recruit explosives specialists

Anthropic and OpenAI are recruiting chemical and explosives experts to strengthen safeguards for their AI systems, reflecting growing concern about the potential misuse of advanced models.

Anthropic is seeking a policy specialist to design and monitor guardrails governing how its systems respond to prompts involving chemical weapons and explosives. The role includes assessing high-risk scenarios and responding to potential escalation signals in real time.

OpenAI is expanding its Preparedness team, hiring researchers and a threat modeller to identify and forecast risks linked to frontier AI systems. The positions focus on evaluating catastrophic risks and aligning technical, policy, and governance responses.

The recruitment drive comes amid heightened scrutiny of AI safety and national security implications. Anthropic is currently challenging a US government designation that labels it a supply-chain risk, while tensions have emerged over restrictions on the military use of AI systems.

At the same time, OpenAI has secured agreements to deploy its technology in classified environments under defined constraints. The parallel developments highlight how AI firms are balancing commercial expansion with increasing pressure to implement robust safety controls.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!