Human data demand fuels new global digital economy

A growing number of individuals worldwide are participating in a new digital economy built around supplying data for AI systems.

Through platforms such as Kled AI and Silencio, users upload videos, audio recordings and personal interactions in exchange for payment, contributing to the development of increasingly sophisticated AI models.

The trend reflects a broader shift in the AI industry, where demand for high-quality, human-generated data is rising as traditional web-based sources become more limited.

Researchers suggest that human data remains essential for improving system performance and modelling behaviour beyond existing datasets. As a result, data marketplaces have emerged as an alternative supply mechanism.

Economic considerations often shape participation. In regions facing limited employment opportunities or currency instability, earning income in global currencies can provide a meaningful financial incentive.

At the same time, similar practices are expanding in higher-income countries, where individuals seek supplementary income streams amid rising living costs.

However, the model introduces complex trade-offs.

Contributors may grant extensive usage rights over their data, sometimes on a long-term or irreversible basis. Experts note that such arrangements can reduce control over how personal information is reused, including in contexts not initially anticipated.

Concerns also extend to issues such as data security, transparency and the potential for misuse in areas including synthetic media and identity replication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to strengthen digital safeguards ahead of elections

Emmanuel Macron has called for stronger enforcement of the EU's digital rules, urging Ursula von der Leyen to act against risks linked to foreign interference in elections. The request comes amid growing concern over attempts to influence democratic processes across Europe.

In a letter addressed to the Commission, Macron stressed the importance of safeguarding electoral integrity in a challenging geopolitical environment.

He wrote:

‘In a geopolitical context marked by a multiplication of hostile stances against the European model and its democratic values, it is crucial that the Union… ensure the integrity of civic discourse and electoral processes’.

The proposal focuses on stricter enforcement instead of new legislation, particularly regarding the Digital Services Act. European authorities are encouraged to ensure that online platforms properly assess and mitigate systemic risks, including the spread of manipulated content and coordinated disinformation.

Attention is also directed toward algorithmic amplification, AI-generated content labelling and the removal of fake accounts.

As multiple elections approach across the EU, policymakers are considering how to apply existing regulatory tools more effectively to protect democratic systems.

DoorDash launches Tasks app to train AI robots with gig workers

A new wave of AI development is increasingly relying on real-world human behaviour, with DoorDash moving to tap its gig workforce to generate training data for robotics systems.

DoorDash has launched a standalone app called Tasks, allowing couriers to earn money by recording themselves performing everyday activities such as folding clothes, washing dishes or making a bed. The collected data is used to train AI and robotics models to better understand physical environments and human interactions.
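Conceptually, each recorded activity becomes a labelled training episode paired with consent metadata. The Python sketch below is a hypothetical schema for such a record; the `TrainingEpisode` class and its field names are illustrative assumptions, not DoorDash's actual data format:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingEpisode:
    """Hypothetical record for one crowd-sourced demonstration.
    Field names are illustrative, not DoorDash's real schema."""
    task: str                 # e.g. "fold_towel"
    video_path: str           # raw recording uploaded by the worker
    duration_s: float         # length of the clip in seconds
    consent_granted: bool     # usage rights confirmed at upload time
    annotations: dict = field(default_factory=dict)  # e.g. object labels

# A single episode as a robotics training pipeline might ingest it
episode = TrainingEpisode(
    task="fold_towel",
    video_path="uploads/ep_0001.mp4",
    duration_s=42.5,
    consent_granted=True,
)
print(episode.task)  # fold_towel
```

Keeping consent as an explicit field matters because, as the earlier article notes, contributors may grant long-term usage rights whose scope they cannot easily revisit.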

The move reflects a broader shift in AI training, where companies are seeking physical, real-world data rather than relying solely on text and images. Such data is essential for building systems capable of performing tasks in dynamic environments, including humanoid robots and autonomous machines.

Other companies are pursuing similar strategies. Uber and Instawork have tested gig-based data-collection models, while robotics startups are using wearable devices, such as gloves and head-mounted cameras, to capture detailed motion data for training.

The Tasks app is currently being rolled out as a pilot, with DoorDash planning to expand the types of available assignments over time. Some tasks may also be integrated into the main Dasher app, including activities that support navigation or assist autonomous delivery systems.

As competition intensifies, access to large-scale physical data is becoming a critical advantage. DoorDash’s approach highlights how gig-economy platforms are increasingly integrated into the development of next-generation AI systems.

EU digital wallet nears rollout

Interoperability tests for the European Digital Identity Wallet have marked a significant step towards deployment, following a major industry-wide exercise. Systems were tested under real conditions to ensure compatibility across providers.

The initiative forms part of the EU’s plan to provide citizens with a secure digital wallet for identification and online services. The system will allow users to store identity data and access services, including electronic signatures.

Results showed that most test scenarios were successfully completed, confirming that independent systems can work together effectively. The exercise also highlighted areas requiring further refinement ahead of wider implementation.

EU officials and industry leaders said the progress supports the development of a unified digital ecosystem. The wallet is expected to simplify everyday services while strengthening security and trust in digital identity solutions.

AI agent causes internal data leak at Meta

Meta recently confirmed that an AI agent inadvertently exposed sensitive company and user data to some employees. The leak happened when an engineer followed the AI agent’s forum suggestion, exposing data for about two hours.

Meta stated that no user data was mishandled and emphasised that human errors could cause similar issues.

The incident reflects broader challenges in deploying agentic AI tools within major tech companies. Amazon faced similar issues, with internal AI tools causing outages and operational errors, showing risks of quickly integrating AI into critical workflows.

Experts describe these deployments as experimental, with companies testing AI at scale without fully assessing potential risks.

Security specialists note that AI agents lack the contextual awareness that human engineers accumulate over years of experience. Lacking long-term operational knowledge, AI can make decisions that compromise security, a factor in the Meta breach.

Analysts warn that such errors are likely to recur as AI adoption accelerates.

The episode comes amid growing attention on agentic AI’s potential to disrupt workflows, affect productivity, and introduce new vulnerabilities. Industry observers caution that AI tools must be carefully monitored and accompanied by robust safeguards to prevent future incidents.

Visa launches Agentic Ready programme to prepare for AI-driven payments

Visa has launched Agentic Ready, a global programme preparing the payments ecosystem for AI agents to initiate transactions for consumers. The programme builds on Visa Intelligent Commerce, the company’s framework for secure, AI-driven payment experiences.

The first phase, launching in Europe (including the UK), focuses on issuer readiness. Participating banks and financial institutions can test and validate agent-initiated transactions in controlled production environments, ensuring they remain secure, reliable, and scalable.

Visa’s trust layer integrates tokenisation, identity verification, risk controls, and biometric authentication to maintain consumer consent and protection throughout transactions.
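Tokenisation in this context means a merchant or AI agent never handles the real card number, only an opaque token bound to the consumer's consent. The Python sketch below illustrates the general technique; the `TokenVault` class and its consent model are simplified assumptions for illustration, not Visa's actual implementation:

```python
import secrets

class TokenVault:
    """Minimal illustration of payment tokenisation: the real card
    number (PAN) lives only inside the vault; agents and merchants
    see only an opaque token tied to a consented scope."""

    def __init__(self):
        self._vault = {}  # token -> (pan, consent_scope)

    def tokenise(self, pan: str, consent_scope: str) -> str:
        # Issue a random token; the PAN never leaves the vault
        token = secrets.token_hex(8)
        self._vault[token] = (pan, consent_scope)
        return token

    def authorise(self, token: str, requested_scope: str) -> bool:
        """An agent-initiated charge succeeds only if the token
        exists and the request stays within the consumer's consent."""
        entry = self._vault.get(token)
        return entry is not None and entry[1] == requested_scope

vault = TokenVault()
token = vault.tokenise("4111111111111111", consent_scope="groceries")
print(vault.authorise(token, "groceries"))    # True: within consent
print(vault.authorise(token, "electronics"))  # False: outside consent
```

The design choice worth noting is that consent is checked at authorisation time, not just at token creation, which is what keeps the consumer in control of agent-initiated spending.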

Controlled testing with selected merchants allows issuers to gain practical experience of agentic commerce in real-world settings. Early participants, including Barclays, HSBC UK, Revolut, and Banco Santander, help Visa test and refine safe AI-driven payments across channels.

The programme advances Visa’s vision of AI-driven commerce, enabling flexible payments while keeping consumers in control. Expansion beyond Europe is planned, leveraging lessons from the initial rollout to accelerate agentic commerce globally.

Horizon Worlds remains active as Meta reconsiders VR plans

Meta has reversed its earlier decision to discontinue virtual reality support for Horizon Worlds, allowing the platform to remain available on VR headsets despite previous plans to prioritise mobile and web access.

The decision follows an internal reassessment of user engagement trends, which indicate limited adoption of VR-based social platforms.

While Horizon Worlds was once positioned as central to the company’s metaverse ambitions, demand has remained relatively low, raising questions about the long-term viability of immersive social environments.

Financial pressures also continue to shape strategy.

Meta’s Reality Labs division has recorded substantial losses since 2021, reflecting high investment in virtual and augmented reality technologies without corresponding commercial returns.

Industry data further suggests declining headset sales, reinforcing uncertainty around VR as a mainstream consumer platform.

In contrast, mobile usage of Horizon Worlds is growing more quickly than its VR counterpart. Increasing downloads point to broader accessibility and improved product-market alignment, though revenue generation remains limited.

As a result, Meta is prioritising mobile development instead of fully abandoning VR, maintaining a dual approach while seeking more sustainable engagement models.

UNESCO promotes safe AI use and gender equality in Caribbean workshop

A regional workshop in Kingston has been organised by UNESCO to explore the relationship between AI, gender equality and online safety, reflecting wider efforts to support inclusive digital governance across the Caribbean.

Discussions examined the impact of technology-facilitated gender-based violence, including harassment, impersonation and image-based abuse, which continue to affect women and girls disproportionately.

Generative AI was presented as both an opportunity and a risk, with concerns linked to bias, deepfakes, misinformation and non-consensual content.

More than 50 participants from government, civil society and youth organisations engaged in practical sessions aimed at strengthening awareness and digital skills. A participatory approach encouraged peer learning and critical thinking, aligning with UNESCO’s ethical AI principles.

‘Technology reflects the hands that build it and the society that feeds it data. If we are not careful, AI will not just mirror our existing inequalities; it will magnify them,’ said the Honourable Olivia Grange, Minister of Culture, Gender, Entertainment and Sport of Jamaica.

‘The pursuit of equality must extend into every space where women live, work, and where they connect and express themselves – including the digital world,’ said Eric Falt, Regional Director and Representative of UNESCO.

The initiative forms part of broader efforts to ensure that digital transformation supports inclusion rather than reinforcing existing disparities, while equipping stakeholders with tools for safe and responsible AI use.

New iPhone vulnerability raises concerns over advanced mobile cyber threats

A newly identified cyberattack known as ‘DarkSword’ is raising concerns about the security of iPhone devices, following reports that millions of users could be exposed to rapid data extraction techniques.

Cybersecurity researchers indicate that the attack targets specific iOS versions, exploiting vulnerabilities in the Safari browser and a graphics processing feature known as WebGPU.

Once access is gained, attackers can retrieve sensitive information, including messages, emails and location data, within minutes, while removing traces of the intrusion almost immediately.

Estimates suggest that a significant share of global iPhone users may be affected, with hundreds of millions of devices running vulnerable software versions.

The scale of exposure remains uncertain, particularly as experts continue to assess whether additional versions of iOS may also be impacted.

Researchers have associated the campaign with a threat actor previously identified by Google, with observed activity across multiple regions.

Such a development highlights growing concerns about the evolution of mobile cyber threats, where increasingly sophisticated techniques are being deployed beyond traditional state-level operations.

TikTok disinformation study raises concerns over AI content and EU regulation

A new study by Science Feedback indicates that TikTok has a higher proportion of misleading content than other major platforms operating in the EU.

The analysis covered France, Poland, Slovakia and Spain, assessing content across multiple thematic areas including health, politics and climate.

Findings suggest that approximately one in four posts on TikTok contained misleading elements, placing the platform ahead of competitors such as Facebook, YouTube and X. Health-related narratives were the most prominent category, reflecting broader patterns observed across digital ecosystems.

Researchers describe disinformation as a persistent feature embedded within platform structures instead of an isolated occurrence.

The study also highlights a growing presence of AI-generated content, particularly in video formats, where synthetic material accounted for a significant share of misleading posts. Despite existing platform policies, most identified content lacked clear labelling.

The regulatory context remains under development.

While the Digital Services Act integrates voluntary commitments from the EU disinformation code, it does not impose mandatory requirements for identifying AI-generated material.

Ongoing debates therefore focus on transparency, accountability and the evolving responsibilities of digital platforms within the European information environment.