Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.
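One of the simplest mechanisms behind the 'block entirely' option is a robots.txt directive aimed at known AI crawler user agents. The sketch below is a generic illustration of that check, not Cloudflare's or Human Native's tooling; 'GPTBot' is OpenAI's documented crawler user agent, and the site URL is a placeholder.

```python
# Illustrative only: checks a site's robots.txt to see whether a named
# AI crawler user agent (e.g. "GPTBot") may fetch a given URL.
# Generic robots.txt check, not Cloudflare's or Human Native's tooling.
from urllib.robotparser import RobotFileParser

def crawler_allowed(site: str, user_agent: str, path: str = "/") -> bool:
    parser = RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetches and parses the site's robots.txt
    return parser.can_fetch(user_agent, f"{site.rstrip('/')}{path}")

if __name__ == "__main__":
    # "GPTBot" is OpenAI's documented crawler; example.com is a placeholder site.
    print(crawler_allowed("https://example.com", "GPTBot", "/articles/"))
```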

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI users spend 40% of saved time fixing errors

A recent study from Workday reveals that 40% of the time saved by AI in the workplace is spent correcting errors, highlighting a growing productivity paradox. Frequent AI users are bearing the brunt, often double- or triple-checking outputs to ensure accuracy.

Despite widespread adoption (87% of employees report using AI at least a few times per week, and 85% save one to seven hours weekly), much of that time is redirected to fixing low-quality results rather than achieving net gains in productivity.
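Taken at face value, those figures allow a rough back-of-envelope estimate of the net gain; the sketch below simply applies the study's 40% correction share to a few values in the reported one-to-seven-hour range.

```python
# Back-of-envelope using the study's headline figure:
# 40% of the time saved with AI is spent correcting its errors.
CORRECTION_SHARE = 0.40

def net_hours_saved(gross_hours_saved: float) -> float:
    """Hours actually gained after re-checking and fixing AI output."""
    return gross_hours_saved * (1 - CORRECTION_SHARE)

# Values span the one-to-seven-hour weekly range reported in the study.
for gross in (1, 4, 7):
    print(f"{gross}h saved -> {net_hours_saved(gross):.1f}h net gain")
```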

The findings suggest that AI can increase workloads rather than streamline operations if not implemented carefully.

Experts argue that AI should enhance human work rather than replace it. Employees need tools that handle complex tasks reliably, allowing teams to focus on creativity, judgment, and strategic decision-making.

Upskilling staff to manage AI effectively is critical to realising sustainable productivity benefits.

The study also highlights the risk of organisations prioritising speed over quality. Many AI tools shift responsibility for accuracy and trust onto employees, creating hidden costs and risks for decision-making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Council of Europe highlights legal frameworks for AI fairness

The Council of Europe recently hosted an online event to examine the challenges posed by algorithmic discrimination and explore ways to strengthen governance frameworks for AI and automated decision-making (ADM) systems.

Two new publications were presented, focusing on legal protections against algorithmic bias and policy guidelines for equality bodies and human rights institutions.

Algorithmic bias has been shown to exacerbate existing social inequalities. In employment, AI systems trained on historical data may unfairly favour male candidates or disadvantage minority groups.
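One standard way to make such bias visible is to compare selection rates across groups (a demographic-parity check). The toy data below is invented purely to show the calculation and does not come from the publications presented at the event.

```python
# Toy illustration of a demographic-parity check on hiring decisions.
# The data is invented solely to demonstrate the calculation.
from collections import defaultdict

decisions = [  # (group, was_shortlisted)
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", False), ("female", True), ("female", False), ("female", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, shortlisted in decisions:
    totals[group] += 1
    selected[group] += shortlisted

rates = {g: selected[g] / totals[g] for g in totals}
print(rates)                                            # {'male': 0.75, 'female': 0.25}
print("parity gap:", max(rates.values()) - min(rates.values()))
```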

Public authorities also use AI in law enforcement, migration, welfare, justice, education, and healthcare, where profiling, facial recognition, and other automated tools can carry discriminatory risks. Private-sector applications in banking, insurance, and personnel services similarly raise concerns.

Legal frameworks such as the EU AI Act (2024/1689) and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law aim to mitigate these risks. The publications review how regulations protect against algorithmic discrimination and highlight remaining gaps.

National equality bodies and human rights structures play a key role in monitoring AI/ADM systems, ensuring compliance, and promoting human rights-based deployment.

The webinar highlighted practical guidance and examples for applying EU and Council of Europe rules to public sector AI initiatives, fostering more equitable and accountable systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Wikipedia marks 25 years with new global tech partnerships

Wikipedia marked its 25th anniversary by showcasing the rapid expansion of Wikimedia Enterprise and its growing tech partnerships. The milestone reflects Wikipedia’s evolution into one of the most trusted and widely used knowledge sources in the digital economy.

Amazon, Meta, Microsoft, Mistral AI, and Perplexity have joined the partner roster for the first time, alongside Google, Ecosia, and several other companies already working with Wikimedia Enterprise.

These organisations integrate human-curated Wikipedia content into search engines, AI models, voice assistants, and data platforms, helping deliver verified knowledge to billions of users worldwide.

Wikipedia remains one of the top ten most visited websites globally and the only one in that group operated by a non-profit organisation. With over 65 million articles in 300+ languages, the platform is a key dataset for training large language models.

Wikimedia Enterprise provides structured, high-speed access to this content through on-demand, snapshot, and real-time APIs, allowing companies to use Wikipedia data at scale while supporting its long-term sustainability.
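For a flavour of what programmatic access to Wikipedia content looks like, the sketch below calls the free public REST summary endpoint. The Enterprise on-demand, snapshot and real-time APIs are separate, authenticated services, so this is a stand-in rather than the Enterprise interface itself.

```python
# Minimal sketch: fetch a page summary from Wikipedia's public REST API.
# Wikimedia Enterprise's on-demand/snapshot/real-time APIs are separate,
# authenticated services; this free endpoint is used only as a stand-in.
import json
from urllib.request import Request, urlopen

def page_summary(title: str) -> dict:
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    req = Request(url, headers={"User-Agent": "example-script/0.1"})
    with urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = page_summary("Wikipedia")
    print(data["title"], "-", data["extract"][:120], "...")
```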

As Wikipedia continues to expand into new languages and subject areas, its value for AI development, search, and specialised knowledge applications is expected to grow further.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI invests in Merge Labs to advance brain-computer interfaces

US AI company OpenAI has invested in Merge Labs as part of a seed funding round, signalling growing interest in brain-computer interfaces (BCIs) as a future layer of human–technology interaction.

Merge Labs describes its mission as bridging the gap between biology and AI to expand human capability and agency. The research lab is developing new BCI approaches designed to operate safely while enabling much higher communication bandwidth between the brain and digital systems.

AI is expected to play a central role in Merge Labs’ work, supporting advances in neuroscience, bioengineering and device development rather than relying on traditional interface models.

High-bandwidth brain interfaces are also expected to benefit from AI systems capable of interpreting intent under conditions of limited and noisy signals.

OpenAI plans to collaborate with Merge Labs on scientific foundation models and advanced tools, aiming to accelerate research progress and translate experimental concepts into practical applications over time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE joins US-led Pax Silica alliance

The United Arab Emirates has joined Pax Silica, a US-led alliance focused on AI and semiconductor supply chains. The move places Abu Dhabi among Washington’s trusted technology partners.

The pact aims to secure access to chips, computing power, energy and critical minerals. The US Department of State says technology supply chains are now treated as strategic assets.

UAE officials view the alliance as supporting economic diversification and AI leadership ambitions. Membership strengthens access to advanced semiconductors and large-scale data centre infrastructure.

Pax Silica reflects a broader shift in global tech diplomacy towards allied supply networks. Analysts say participation could shape future investment in AI infrastructure and manufacturing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

IBM launches software focused on digital sovereignty and AI

Tech giant IBM has announced IBM Sovereign Core, a new software offering designed to help organisations deploy and manage AI-ready environments under sovereign control.

The product addresses growing regulatory and governance requirements as enterprises and governments seek greater authority over data, infrastructure and AI operations.

Digital sovereignty, according to IBM, extends beyond where data is stored and includes who controls systems, how access is governed and under which jurisdiction AI workloads operate.

IBM Sovereign Core is positioned as a foundational software layer that embeds sovereignty into operations instead of applying controls after deployment.

Built on Red Hat’s open-source technologies, the software enables customer-operated control planes, in-jurisdiction identity management and continuous compliance reporting. AI workloads, including inference and model hosting, can be governed locally without exporting data to external providers.
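The kind of policy gate this implies can be pictured with the purely hypothetical sketch below; the names and fields are invented for illustration and do not correspond to IBM Sovereign Core’s actual interfaces or configuration.

```python
# Purely hypothetical sketch of an in-jurisdiction policy gate.
# Names and fields are invented for illustration; they are NOT IBM
# Sovereign Core's actual APIs or configuration.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    region: str          # where the workload (e.g. model inference) runs
    data_residency: str  # jurisdiction the data must stay in

def allowed(workload: Workload, permitted_regions: set[str]) -> bool:
    """Deny any workload that would run or keep data outside the permitted jurisdiction."""
    return workload.region in permitted_regions and workload.region == workload.data_residency

print(allowed(Workload("model-hosting", "eu-de", "eu-de"), {"eu-de", "eu-fr"}))  # True
print(allowed(Workload("inference", "us-east", "eu-de"), {"eu-de", "eu-fr"}))    # False
```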

IBM plans to offer the software across on-premises environments, in-region cloud infrastructure and through selected service providers.

A technology preview is expected to begin in February, with full general availability planned for mid-2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI boom strains global memory chip supply

Gadget makers face rising costs as AI drives intense demand for memory chips. Supplies of DRAM and storage components have tightened across global markets.

Manufacturers have shifted production towards AI data centres, squeezing availability for consumer devices. Analysts warn the memory shortage could extend well into next year.

Higher prices are already affecting laptops, smartphones and connected devices. Some companies are redesigning products or limiting features to manage the costs of chip components.

Industry experts say engineers are writing leaner software to reduce memory use, as the AI surge brings the era of cheap and abundant memory to an end.
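As a small, generic illustration of the leaner-software point (not guidance from any vendor mentioned in the article), streaming data through a generator keeps peak memory roughly constant where a materialised list grows with the data.

```python
# Generic illustration of memory-conscious coding: a generator processes
# items one at a time instead of holding a full list in RAM.
import sys

def squares_list(n):
    return [i * i for i in range(n)]   # allocates all n results at once

def squares_stream(n):
    return (i * i for i in range(n))   # yields one result at a time

n = 1_000_000
print("list size :", sys.getsizeof(squares_list(n)), "bytes")    # several megabytes
print("generator :", sys.getsizeof(squares_stream(n)), "bytes")  # a few hundred bytes
print("same total:", sum(squares_stream(n)) == sum(squares_list(n)))
```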

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU reaffirms commitment to Digital Markets Act enforcement

European Commission Executive Vice President Teresa Ribera has stated that the EU has a constitutional obligation under its treaties to uphold its digital rulebook, including the Digital Markets Act (DMA).

Speaking at a competition law conference, Ribera framed enforcement as a duty to protect fair competition and market balance across the bloc.

Her comments arrive amid growing criticism from US technology companies and political pressure from Washington, where enforcement of EU digital rules has been portrayed as discriminatory towards American firms.

Several designated gatekeepers have argued that the DMA restricts innovation and challenges existing business models.

Ribera acknowledged the right of companies to challenge enforcement through the courts, while emphasising that designation decisions are based on lengthy and open consultation processes. The Commission, she said, remains committed to applying the law effectively rather than retreating under external pressure.

Apple and Meta have already announced plans to appeal fines imposed in 2025 for alleged breaches of DMA obligations, reinforcing expectations that legal disputes around EU digital regulation will continue in parallel with enforcement efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok to be integrated into Pentagon networks as the US expands military AI strategy

The US Department of Defence plans to integrate Elon Musk’s AI tool Grok into Pentagon networks later in January, according to Defence Secretary Pete Hegseth.

The system is expected to operate across both classified and unclassified military environments as part of a broader push to expand AI capabilities.

Hegseth also outlined an AI acceleration strategy designed to increase experimentation, reduce administrative barriers and prioritise investment across defence technology.

The approach aims to enhance access to data across federated IT systems, aligning with official views that military AI performance relies on data availability and interoperability.

The move follows earlier decisions by the Pentagon to adopt Google’s Gemini for an internal AI platform and to award large contracts to Anthropic, OpenAI, Google and xAI for agentic AI development.

Officials describe these efforts as part of a long-term strategy to strengthen US military competitiveness in AI.

Grok’s integration comes amid ongoing controversy, including criticism over generated imagery and previous incidents involving extremist and offensive content. Several governments and regulators have already taken action against the tool, adding scrutiny to its expanded role within defence systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!