Claws become the new trend in local agentic AI

A new expression has entered the AI vocabulary, with ‘claws’ becoming the latest term to capture the industry’s imagination.

The term refers to a growing family of open-source personal assistants designed to run locally on consumer hardware, often on Apple’s compact Mac mini rather than on cloud-based servers.

These assistants can access calendars, email accounts, coding tools, browsers and external model APIs, enabling them to carry out complex digital tasks autonomously.

Interest increased after AI researcher Andrej Karpathy described his experiments with claws, prompting broader attention across online communities.

Many users have begun adopting the tools as lightweight agentic systems capable of handling real work, from scheduling meetings to writing software overnight, often by linking to models from providers such as OpenAI.

The name originated with Clawdbot, recently rebranded as OpenClaw, which became a prominent example of the trend in Silicon Valley.

A wave of variants, including NanoClaw, ZeroClaw and IronClaw, has followed, marking a surge in locally run assistants that appeal to users seeking greater autonomy, privacy and experimentation.

Growing enthusiasm for claws highlights a wider shift towards agentic AI running directly on personal devices.

Whether these systems become mainstream or remain a niche developer trend, they show how quickly the AI landscape can evolve and how new concepts often spread long before they fully mature.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum-safe security upgrades SIM and eSIM cards

Thales has successfully demonstrated a world-first capability that prepares 5G networks for the era of quantum computing. The test proved that SIM and eSIM cards can be remotely upgraded to support post-quantum cryptography, boosting security without disrupting services or user experience.

The breakthrough highlights the potential of crypto-agile networks to evolve securely as quantum threats emerge.

Replacing millions of devices is impractical, so Thales enables operators to deploy quantum-safe algorithms directly to existing devices. Remote upgrades preserve data and connectivity while instantly boosting security, keeping 5G networks resilient and trusted.

The demonstration reinforces Thales’ leadership in post-quantum cryptography, with dedicated research teams developing quantum-resistant methods and contributing to international standards, including NIST initiatives.

Operators can now protect long-term investments, secure critical services, and prepare for the next generation of quantum computing without operational disruptions.

Thales’ approach offers a practical roadmap for telecoms to adopt quantum-safe security today, ensuring continuity, trust, and resilience across mobile networks as digital threats evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Qualcomm unveils AI-focused wearable chip

Qualcomm has unveiled its Snapdragon Wear Elite chip at MWC 2026 in Barcelona, positioning it for a new wave of AI-driven wearable devices. The company said the processor is aimed at pins, pendants, and potentially display-free smart glasses.

Built on a 3nm process, the chip includes both an eNPU for low-power AI tasks and a Hexagon NPU for heavier on-device processing. Qualcomm said the platform can handle up to two billion parameters locally, supporting more advanced AI features without relying on the cloud.

The Snapdragon Wear Elite is designed to sit alongside the existing W5 Plus rather than replace it. Qualcomm added that the chip improves power efficiency, with GPS tracking using 40 per cent less power and fast charging that delivers around 50 per cent of battery capacity in 10 minutes.

Connectivity features include satellite support, 5G, ultra wideband and Bluetooth 6.0. Qualcomm signalled that longer battery life and on-device AI performance will be central to the next generation of wearable AI gadgets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Reddit surges as AI search drives a new era of online discovery

AI-generated search summaries are reshaping online discovery and pushing Reddit to the forefront of global information flows.

The rise of Google’s AI Overview feature places curated AI summaries above traditional search results, encouraging users to rely on machine-generated syntheses instead of browsing lists of websites.

Reddit’s visibility surged after the platform agreed to data access partnerships with Google and OpenAI, enabling large language models to train on its vast archive of human conversations.

The platform’s user-generated discussions are increasingly prioritised because they provide commentary viewed as more neutral and less commercially influenced.

Research from Profound identifies Reddit as the most cited source across major AI platforms, and Reddit’s rapid growth reflects that shift.

It has overtaken TikTok in the UK, according to Ofcom, and now reports 116 million daily active users and more than one billion monthly users.

Communities built around niche interests, combined with voting systems and karma-driven credibility, create a structure that appeals to AI systems searching for grounded, human-authored content.

The platform’s design, centred on subreddits run by volunteer moderators, reinforces trust signals that large models can evaluate when generating AI Overview results.

As AI-powered search becomes the dominant interface for navigating the internet, Reddit’s role as a primary corpus for training and citation continues to expand, reshaping how people discover and verify information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC signals flexibility on COPPA age checks

The US FTC has issued a policy statement signalling greater flexibility in enforcing parts of the Children’s Online Privacy Protection Act when companies deploy age verification tools. The agency said it will not take enforcement action where personal data is collected solely for age verification purposes.

The FTC framed age assurance as a key safeguard to prevent children from accessing inappropriate content online in the US. Officials said the approach is intended to encourage broader adoption of age verification technologies by online services.

While offering flexibility, the US regulator stressed that organisations must maintain strong safeguards, including data deletion practices and clear notice to parents and children. The FTC also warned that personal data used beyond age verification could still trigger enforcement action under COPPA.

As with the 2023 amendments, legal experts cautioned that companies using age assurance may face additional compliance duties under state youth privacy laws, even as federal requirements evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Topshop unveils AI shoppable catwalk in Manchester

Topshop has staged what it describes as a world-first AI-driven shoppable catwalk in Manchester, as part of its UK brand revival. The Manchester event combined physical runway looks with real-time digital purchasing through a bespoke Front Row AI app.

Guests were able to buy outfits instantly as models walked, while also trying on virtual versions after the show. The experience was adjudicated by the World Record Certification Agency and positioned as a new model for immersive retail in the UK.

The showcase formed part of Topshop’s regional strategy beyond London, highlighting the North West’s role in the UK fashion sector. Students from the University of Salford and Manchester Metropolitan University designed and presented the finale.

Topshop’s broader UK comeback includes pop-ups in John Lewis stores, a standalone website relaunch and a partnership with Liberty in London. Executives said Manchester marked a new phase where AI and commerce converge to reshape retail experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

McKinsey claims agentic AI will reshape global banking

Agentic AI is set to transform banking operations in the US and Asia, according to a McKinsey podcast featuring senior partners from New York, Mumbai and London. The technology goes beyond traditional automation by handling less structured tasks and supporting end-to-end decision-making.

Research cited in the discussion suggests many banks are experimenting with AI, yet few report material financial gains. Leaders are urged to avoid narrow pilot projects and instead redesign workflows, teams and governance around AI at scale.

McKinsey partners said successful banks are aligning chief executives, technology leaders and risk officers behind a shared strategy. Operations, risk management and frontline services are seen as areas where AI could deliver significant productivity and quality gains.

Banks in India and other Asian markets are also benefiting from regulatory engagement, including guidance from the Reserve Bank of India. Speakers argued that workforce training, cross-functional collaboration and clear accountability will determine whether AI delivers lasting impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Data sovereignty becomes an infrastructure strategy in the AI era

For most of the past decade, data governance was treated as a legal issue. IT built networks and bought tools, while regulators were someone else’s problem. That division no longer holds. Cloud adoption and AI have turned data sovereignty into a core infrastructure and strategy question.

Regulatory frameworks such as GDPR, NIS2, and DORA are expanding and being enforced more strictly. Governments are also scrutinising foreign cloud providers and cross-border access. Local data storage no longer ensures absolute data sovereignty if critical control layers remain outside national jurisdiction.

Traditional SASE and SSE models were not built for this environment. Many still separate outbound cloud traffic from inbound controls. That split creates blind spots in distributed architectures and complicates consistent policy enforcement.

AI workloads intensify the pressure. Retailers, banks, and manufacturers are deploying models locally, not just in hyperscale clouds. Securing east-west traffic across systems and APIs without undermining data sovereignty is becoming a central architectural challenge.

Managed sovereign infrastructure is one response. It reduces reliance on external cloud paths while preserving operational scale. Ultimately, organisations must align security, AI deployment, and governance with long-term resilience goals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia begins a landmark study on social media minimum age

Australia’s eSafety Commissioner has launched a major evaluation of the Social Media Minimum Age to understand how platforms are applying the requirement and what effects it is having on children, young people and families.

The study aims to deliver robust evidence about both intended and unintended impacts as the national debate on youth, wellbeing and digital environments intensifies.

Over more than two years, the research will follow more than four thousand children and families in Australia, combining surveys, interviews, group discussions and privacy-protected smartphone tracking.

Administrative data from national literacy assessments and health systems will be linked to deepen understanding of online behaviour, wellbeing and exposure to risk. All research materials are publicly available through the Open Science Framework to maintain transparency.

The project is led by eSafety’s Research and Evaluation team in partnership with the Stanford University Social Media Lab and an Academic Advisory Group of specialists in mental health, youth development and digital technologies.

Young people themselves are shaping the study through the eSafety Youth Council, ensuring that the interpretation reflects lived experience rather than external assumptions. Full ethics approval underpins the methodology, which meets strict standards of integrity and privacy.

Findings will be released from late 2026 onward, with early reports analysing the experiences of children under sixteen.

The results will inform a legislative review conducted by Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts.

eSafety expects the evaluation to become a major evidence source for policymakers, researchers and communities as the global conversation on minors and social media regulation continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan’s digital transformation highlighted as UNESCO advances AI ethics

UNESCO used the Pakistan Governance Forum 2026 to highlight the need for a structured Ethical AI and Data Governance Framework as the country accelerates its digital transformation.

Federal leaders, provincial authorities and civil society convened to examine governance reforms, with UNESCO urging Pakistan to align its expanding digital public infrastructure with coherent standards that protect rights while enabling innovation.

Speaking at the Forum, Fuad Pashayev underlined that Pakistan’s reform priority should centre on the Recommendation on the Ethics of Artificial Intelligence, adopted unanimously by all 193 Member States.

Anchoring national systems in transparency, accountability and meaningful human oversight was framed as essential for maintaining public trust as digital services reshape access to benefits and interactions between citizens and the state.

To support the shift, UNESCO promoted its AI Readiness Assessment Methodology (RAM), which is already deployed in more than 50 countries. The tool helps governments identify regulatory gaps, strengthen institutional coordination and design safeguards against discrimination and algorithmic bias.

UNESCO has already contributed to Pakistan’s draft National AI Policy, ensuring alignment with international ethical frameworks while accommodating national development needs.

Capacity building formed a major pillar of UNESCO’s engagement. In partnership with the University of Oxford, the organisation launched a global course on AI and Digital Transformation in Government in 2025, attracting over nineteen thousand enrolments worldwide.

Pakistan leads participation globally, reflecting both the country’s momentum and growing demand for structured training.

UNESCO’s ongoing work aims to reinforce data governance, improve AI readiness and embed ethical safeguards across Pakistan’s digital transformation strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!