Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Regulators press on with Grok investigations in Britain and Canada

Britain and Canada are continuing regulatory probes into xAI’s Grok chatbot, signalling that official scrutiny will persist despite the company’s announcement of new safeguards. Authorities say concerns remain over the system’s ability to generate explicit and non-consensual images.

xAI said it had updated Grok to block edits that place real people in revealing clothing and restricted image generation in jurisdictions where such content is illegal. The company did not specify which regions are affected by the new limits.

Reuters testing found Grok was still capable of producing sexualised images, including in Britain. Social media platform X and xAI did not respond to questions about how effective the changes have been.

UK regulator Ofcom said its investigation remains ongoing, despite welcoming xAI’s announcement. A privacy watchdog in Canada also confirmed it is expanding an existing probe into both X and xAI.

Pressure is growing internationally, with countries including France, India, and the Philippines raising concerns. British Technology Secretary Liz Kendall said the Online Safety Act gives the government tools to hold platforms accountable for harmful content.

Brazil excluded from WhatsApp rival AI chatbot ban

WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by Brazil's competition authority, which ordered Meta to suspend elements of the policy while it assesses whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day grace period beginning in mid-January, after which chatbot developers must halt responses and notify users that their services will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on systems designed for business messaging rather than for serving as an open distribution platform for AI services.

How Switzerland can shape AI in 2026

Switzerland is heading into 2026 facing an AI transition marked by uncertainty, and it may not win a raw ‘compute race’ dominated by the biggest hardware buyers. In his blog ‘10 Swiss values and practices for AI & digitalisation in 2026,’ Jovan Kurbalija argues that Switzerland’s best response is to build resilience around an ‘AI Trinity’ of Zurich’s entrepreneurship, Geneva’s governance, and communal subsidiarity, using long-standing Swiss practices as a practical compass rather than a slogan.

A central idea is subsidiarity. When top-down approaches hit limits, Switzerland can push ‘bottom-up AI’ grounded in local knowledge and real community needs. Kurbalija points to practical steps such as turning libraries, post offices, and community centres into AI knowledge hubs, creating apprenticeship-style AI programmes, and small grants that help communities develop local AI tools. He also cites a proposal for a ‘Geneva stack’ of sovereign digital tools adopted across public institutions, alongside the notion of a decentralised ‘cyber militia’ capacity for defence.

The blog also leans heavily on entrepreneurship and innovation, especially Switzerland’s SME culture and Zurich’s tech ecosystem. The message for 2026 is to strengthen partnerships between Swiss startups and major global tech firms present in the region, while also connecting more actively with fast-growing digital economy actors from places like India and Singapore.

Instead of chasing moonshots alone, Kurbalija says Switzerland can double down on ‘precision AI’ in areas such as medtech, fintech, and cleantech, and expand its move toward open-source AI tools across the full lifecycle, from models to localised agents.

Another theme is trust and quality, and the challenge of translating Switzerland’s high-trust reputation into the AI era. Beyond cybersecurity, the question is whether Switzerland can help define ‘trustworthy AI,’ potentially even as an international verifier certifying systems.

At the same time, Kurbalija frames quality as a Swiss competitive edge in a world frustrated with low-grade ‘AI slop,’ arguing that better outcomes often depend less on new algorithms and more on well-curated knowledge and data.

He also flags neutrality and sovereignty as issues that will move from abstract debates to urgent policy questions, such as what neutrality means when cyber weapons and AI systems are involved, and how much control a country can realistically keep over data and infrastructure in an interdependent world. He notes that digital sovereignty is a key priority in Switzerland’s 2026 digital strategy, with a likely focus on mapping where critical digital assets are stored and on protecting sensitive domains, such as health, elections, and security, while running local systems when feasible.

Finally, the blog stresses solidarity and resilience as the social and infrastructural foundations of the transition. As AI-driven centralisation risks widening divides, Kurbalija calls for reskilling, support for regions and industries in transition, and digital tools that strengthen social safety nets rather than weaken them.

His bottom line is that Switzerland can’t, and shouldn’t, try to outspend others on hardware. Still, it can choose whether to ‘import the future as a dependency’ or build it as a durable capability, carefully and inclusively, on unmistakably Swiss strengths.

EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification applications are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the chatbot owned by xAI, which was found to generate manipulated intimate images of women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material rather than serving legitimate creative purposes.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on the largest online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.

Australia’s social media age limit prompts restrictions on millions of under-16 accounts

Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.

Initial figures collected by eSafety indicate that platforms with high youth usage have already begun compliance efforts.

Since the obligation took effect on 10 December, regulatory focus has shifted from preparation to monitoring and enforcement, targeting services assessed as age-restricted.

Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.

eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.

Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.

Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.

Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.

Nvidia H200 chip sales to China cleared by US administration

The US administration has approved the export of Nvidia’s H200 AI chips to China, reversing years of tight US restrictions on advanced AI hardware. The Nvidia H200 chips represent the company’s second-most-powerful chip series and were previously barred from sale due to national security concerns.

The US president announced the move last month, linking approval to a 25 per cent fee payable to the US government. The administration said the policy balances economic competitiveness with security interests, while critics warned it could strengthen China’s military and surveillance capabilities.

Under the new rules, Nvidia H200 chips may be shipped to China only after third-party testing verifies their performance. Chinese buyers are limited to 50 per cent of the volume sold to US customers and must provide assurances that the chips will not be used for military purposes.

Nvidia welcomed the decision, saying it would support US jobs and global competitiveness. However, analysts questioned whether the safeguards can be effectively enforced, noting that Chinese firms have previously accessed restricted technologies through intermediaries.

Chinese companies have reportedly ordered more than two million Nvidia H200 chips, far exceeding the chipmaker’s current inventory. The scale of demand has intensified debate over whether the policy will limit China’s AI ambitions or accelerate its access to advanced computing power.

EU proposes indefinite spectrum licences for telecoms

The European Commission is set to unveil the Digital Networks Act (DNA), a major revamp of EU telecom regulations aimed at boosting investment in digital infrastructure.

A draft document indicates the Commission plans to grant indefinite-duration radio spectrum licences, introducing ‘use-it-or-share-it’ conditions to prevent hoarding and encourage active deployment.

The DNA also calls for tighter oversight of dominant firms, including transparency, non-discrimination, and pricing rules in related markets.

Fibre rollout guidance and flexible copper replacement deadlines aim to harmonise investment and support 2030 connectivity goals across member states.

Large online platforms are expected to engage in a voluntary cooperative framework moderated by the Body of European Regulators for Electronic Communications (BEREC).

The approach avoids mandatory levies or binding duties, focusing instead on technical dialogue and ‘best practice’ codes while leaving enforcement largely to national regulators.

The draft shifts focus from forcing Big Tech to fund networks to reforming spectrum and telecom rules to boost investment. Member states and the European Parliament will negotiate EU coordination, national discretion, and net neutrality protections.

EU reaffirms commitment to Digital Markets Act enforcement

European Commission Executive Vice President Teresa Ribera has stated that the EU has a constitutional obligation under its treaties to uphold its digital rulebook, including the Digital Markets Act (DMA).

Speaking at a competition law conference, Ribera framed enforcement as a duty to protect fair competition and market balance across the bloc.

Her comments arrive amid growing criticism from US technology companies and political pressure from Washington, where enforcement of EU digital rules has been portrayed as discriminatory towards American firms.

Several designated gatekeepers have argued that the DMA restricts innovation and challenges existing business models.

Ribera acknowledged the right of companies to challenge enforcement through the courts, while emphasising that designation decisions are based on lengthy and open consultation processes. The Commission, she said, remains committed to applying the law effectively rather than retreating under external pressure.

Apple and Meta have already announced plans to appeal fines imposed in 2025 for alleged breaches of DMA obligations, reinforcing expectations that legal disputes around EU digital regulation will continue in parallel with enforcement efforts.

Billions in data protection fines remain unpaid

Ireland’s Data Protection Commission is owed more than €4 billion in fines imposed on companies, primarily Big Tech firms. Most of the penalties remain unpaid due to ongoing legal challenges.

Figures released under Freedom of Information laws show the watchdog collected only €125,000 from over €530 million in fines issued last year. Similar patterns have persisted across several previous years.

Since 2020, the commission has levied €4.04 billion in data protection penalties. Just €20 million has been paid, while the remaining balance is tied up in appeals before Irish and EU courts.

The regulator states that legislation prevents enforcement until the court proceedings conclude. Several cases hinge on a landmark WhatsApp ruling at the EU’s top court, expected to shape future collections.
