Reddit surges as AI search drives a new era of online discovery

AI-generated search summaries are reshaping online discovery and pushing Reddit to the forefront of global information flows.

The rise of Google’s AI Overviews feature places curated AI summaries above traditional search results, encouraging users to rely on machine-generated syntheses instead of browsing lists of websites.

Reddit’s visibility surged after the platform agreed to data access partnerships with Google and OpenAI, enabling large language models to train on its vast archive of human conversations.

The platform’s user-generated discussions are increasingly prioritised because they provide commentary viewed as more neutral and less commercially influenced.

Research from Profound identifies Reddit as the most cited source across major AI platforms, and Reddit’s rapid growth reflects that shift.

It has overtaken TikTok in the UK, according to Ofcom, and now reports 116 million daily active users and more than one billion monthly users.

Communities built around niche interests, combined with voting systems and karma-driven credibility, create a structure that appeals to AI systems searching for grounded, human-authored content.

The platform’s design, centred on subreddits run by volunteer moderators, reinforces trust signals that large models can evaluate when generating AI Overviews.

As AI-powered search becomes the dominant interface for navigating the internet, Reddit’s role as a primary corpus for training and citation continues to expand, reshaping how people discover and verify information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Live facial recognition rolled out in Cardiff policing operation

South Wales Police has deployed live facial recognition technology in Cardiff to help prevent and detect crime. The operation is designed to identify suspects, wanted individuals and high-risk missing persons.

The deployment forms part of the force’s broader strategy to integrate advanced technologies into policing across South Wales. Officers will operate in clearly marked vehicles and designated recognition zones during the initiative.

Facial Recognition Technology compares faces captured from live camera feeds or digital images against a database of stored images. The system analyses key facial features and converts them into a mathematical representation using NEC’s NeoFace M40 algorithm before generating potential matches for officer review.

South Wales Police uses three types of facial recognition tools. Live Facial Recognition scans faces in real time against a pre-set watchlist, while Retrospective Facial Recognition analyses still images after incidents. Operator-Initiated Facial Recognition allows officers to take a photo on a mobile device and compare it against a watchlist to confirm identity.
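The matching step described above can be illustrated with a short sketch. NEC’s NeoFace M40 algorithm is proprietary, so this is only a generic, hypothetical model of how such systems typically work: each face is reduced to a numeric embedding, compared against a watchlist by similarity, and any candidates above a threshold are queued for human review. The function names, vectors, and threshold here are invented for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_face(probe: np.ndarray, watchlist: dict, threshold: float = 0.8):
    """Return watchlist entries similar enough to warrant officer review."""
    candidates = []
    for entry_id, ref in watchlist.items():
        score = cosine_similarity(probe, ref)
        if score >= threshold:
            candidates.append((entry_id, score))
    # Highest-scoring candidates first; a human officer makes the final call.
    return sorted(candidates, key=lambda c: c[1], reverse=True)

# Dummy 128-dimensional embeddings standing in for real face templates.
rng = np.random.default_rng(0)
watchlist = {f"entry-{i}": rng.normal(size=128) for i in range(3)}
probe = watchlist["entry-1"] + rng.normal(scale=0.05, size=128)  # noisy capture of entry-1
print(screen_face(probe, watchlist))
```

The key design point mirrored here is that the system only generates *potential* matches ranked by score; the final identification decision remains with the reviewing officer.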

Members of the public are encouraged to approach officers to learn more about how the technology works. Where possible, demonstrations will be provided to explain its operation and purpose.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC signals flexibility on COPPA age checks

The US FTC has issued a policy statement signalling greater flexibility in enforcing parts of the Children’s Online Privacy Protection Act when companies deploy age verification tools. The agency said it will not take enforcement action where personal data is collected solely for age verification purposes.

The FTC framed age assurance as a key safeguard to prevent children from accessing inappropriate content online in the US. Officials said the approach is intended to encourage broader adoption of age verification technologies by online services.

While offering flexibility, the US regulator stressed that organisations must maintain strong safeguards, including data deletion practices and clear notice to parents and children. The FTC also warned that personal data used beyond age verification could still trigger enforcement action under COPPA.

As with the 2023 amendments, legal experts cautioned that companies using age assurance may face additional compliance duties under state youth privacy laws, even as federal requirements evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Topshop unveils AI shoppable catwalk in Manchester

Topshop has staged what it describes as a world-first AI-driven shoppable catwalk in Manchester, as part of its UK brand revival. The Manchester event combined physical runway looks with real-time digital purchasing through a bespoke Front Row AI app.

Guests in Manchester were able to buy outfits instantly as models walked, while also trying on virtual versions after the show. The experience was adjudicated by the World Record Certification Agency and positioned as a new model for immersive retail in the UK.

The Manchester showcase formed part of Topshop’s regional strategy beyond London, highlighting the North West’s role in the UK fashion sector. Students from the University of Salford and Manchester Metropolitan University designed and presented the finale in Manchester.

Topshop’s broader comeback in the UK includes pop-ups in John Lewis stores, a standalone website relaunch and a partnership with Liberty in London. Executives said Manchester marked a new phase where AI and commerce converge to reshape retail experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

McKinsey claims agentic AI will reshape global banking

Agentic AI is set to transform banking operations in the US and Asia, according to a McKinsey podcast featuring senior partners from New York, Mumbai and London. The technology goes beyond traditional automation by handling less structured tasks and supporting end-to-end decision-making.

Research cited in the discussion suggests many banks are experimenting with AI, yet few report material financial gains. Leaders in the US and Asia are urged to avoid narrow pilot projects and instead redesign workflows, teams and governance around AI at scale.

McKinsey partners said successful banks in the US and Asia are aligning chief executives, technology leaders and risk officers behind a shared strategy. Operations, risk management and frontline services are seen as areas where AI could deliver significant productivity and quality gains.

Banks in India and other Asian markets are also benefiting from regulatory engagement, including guidance from the Reserve Bank of India. Speakers argued that workforce training, cross-functional collaboration and clear accountability will determine whether AI delivers lasting impact in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Action-capable AI highlights new security challenges

AI agents are evolving from demos into autonomous tools, with OpenClaw emerging as a leading example. Unlike chatbots, these agents execute tasks directly, interacting with software and systems without constant human input.

The rise of action-capable AI introduces new security challenges. Agents can be manipulated through untrusted input or prompt injection. Persistent memory can also prolong mistakes or unintended behaviour.

The combination of access to sensitive data, external actions, and unverified content, sometimes called the ‘lethal trifecta’, amplifies risks, making careful configuration and oversight essential.

Self-hosted agents offer more control, while cloud-based versions simplify setup but shift security responsibility. Experts recommend running agents in isolated environments, limiting permissions, and requiring approval for sensitive actions.

These precautions reduce the chance of accidental or malicious harm while allowing users to experiment safely.
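The recommended precautions can be sketched in code. This is a minimal, hypothetical illustration, not part of OpenClaw or any real agent framework: every tool call from the agent passes through a gateway that enforces a permission allowlist and requires explicit human approval for sensitive actions. The names `ToolGateway` and `SENSITIVE_TOOLS` are invented for this example.

```python
# Tools whose effects are hard to undo and therefore need human sign-off.
SENSITIVE_TOOLS = {"send_email", "transfer_funds", "delete_file"}

class ToolGateway:
    """Mediates every tool call an agent attempts to make."""

    def __init__(self, allowed_tools, approve):
        self.allowed_tools = set(allowed_tools)  # permissions granted to this agent
        self.approve = approve                   # callback asking a human for confirmation

    def call(self, tool_name, action, *args):
        if tool_name not in self.allowed_tools:
            # Least privilege: anything outside the allowlist is refused outright.
            raise PermissionError(f"tool not permitted: {tool_name}")
        if tool_name in SENSITIVE_TOOLS and not self.approve(tool_name, args):
            return "action declined by operator"
        return action(*args)

# Usage: this agent may read files freely, but emailing needs human approval.
gateway = ToolGateway(
    allowed_tools={"read_file", "send_email"},
    approve=lambda tool, args: False,  # operator declines everything in this demo
)
print(gateway.call("send_email", lambda to: f"sent to {to}", "a@example.com"))
# prints "action declined by operator"
```

The same pattern generalises: run the agent in an isolated environment, grant it only the tools a task needs, and route anything in the sensitive set through a human decision before it executes.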

OpenClaw illustrates the potential of AI agents to automate workflows, handle repetitive tasks, and act proactively rather than passively advising. These tools show the future of consumer AI, but broader adoption requires stronger safety measures and awareness of risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European businesses gain AI-powered contract tools with local data hosting

Workday has rolled out its Contract Lifecycle Management (CLM) platform with EU-hosted data in Frankfurt, allowing European organisations to use AI contract tools while keeping all data within the EU.

German, French, and Spanish language support is live, with more languages planned. The update is part of Workday’s EU Sovereign Cloud strategy, targeting the CLM market, which is set to grow to $1.9 billion by 2033.

The platform uses AI agents to automate contracts. The Contract Intelligence Agent extracts terms, obligations, and renewal dates to create a searchable repository, while the Contract Negotiation Agent flags deviations, drafts redlines, and speeds approvals.

Multilingual support ensures smooth workflows across Europe’s largest commercial languages, improving compliance and efficiency.

GDPR compliance remains critical, with fines of up to €20 million or 4% of global turnover. EU-hosted CLM removes offshore data risks, a crucial consideration for the finance, healthcare and defence sectors. Workday combines AI efficiency with full legal compliance.

Decision-makers should focus on three priorities: EU data residency, leveraging AI agents to accelerate contracts, and integrating CLM with HR and finance systems to maximise value. Workday aims to capture market share in Europe against competitors such as Icertis and DocuSign.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia begins a landmark study on social media minimum age

The eSafety Commissioner has launched a major evaluation of Australia’s Social Media Minimum Age to understand how platforms are applying the requirement and what effects it is having on children, young people and families.

The study aims to deliver robust evidence about both intended and unintended impacts as the national debate on youth, wellbeing and digital environments intensifies.

Running for more than two years, the research will follow over four thousand children and families in Australia, combining surveys, interviews, group discussions and privacy-protected smartphone tracking.

Administrative data from national literacy assessments and health systems will be linked to deepen understanding of online behaviour, wellbeing and exposure to risk. All research materials are publicly available through the Open Science Framework to maintain transparency.

The project is led by eSafety’s Research and Evaluation team in partnership with the Stanford University Social Media Lab and an Academic Advisory Group of specialists in mental health, youth development and digital technologies.

Young people themselves are shaping the study through the eSafety Youth Council, ensuring that the interpretation reflects lived experience rather than external assumptions. Full ethics approval underpins the methodology, which meets strict standards of integrity and privacy.

Findings will be released from late 2026 onward, with early reports analysing the experiences of children under sixteen.

The results will inform a legislative review conducted by Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts.

eSafety expects the evaluation to become a major evidence source for policymakers, researchers and communities as the global conversation on minors and social media regulation continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan’s digital transformation highlighted as UNESCO advances AI ethics

UNESCO used the Pakistan Governance Forum 2026 to highlight the need for a structured Ethical AI and Data Governance Framework as the country accelerates its digital transformation.

Federal leaders, provincial authorities and civil society convened to examine governance reforms, with UNESCO urging Pakistan to align its expanding digital public infrastructure with coherent standards that protect rights while enabling innovation.

Speaking at the Forum, Fuad Pashayev underlined that Pakistan’s reform priority should centre on the Recommendation on the Ethics of Artificial Intelligence, adopted unanimously by all 193 Member States.

Anchoring national systems in transparency, accountability and meaningful human oversight was framed as essential for maintaining public trust as digital services reshape access to benefits and interactions between citizens and the state.

To support the shift, UNESCO promoted its AI Readiness Assessment Methodology (RAM), which is already deployed in more than 50 countries. The tool helps governments identify regulatory gaps, strengthen institutional coordination and design safeguards against discrimination and algorithmic bias.

UNESCO has already contributed to Pakistan’s draft National AI Policy, ensuring alignment with international ethical frameworks while accommodating national development needs.

Capacity building formed a major pillar of UNESCO’s engagement. In partnership with the University of Oxford, the organisation launched a global course on AI and Digital Transformation in Government in 2025, attracting over nineteen thousand enrolments worldwide.

Pakistan leads participation globally, reflecting both the country’s momentum and growing demand for structured training.

UNESCO’s ongoing work aims to reinforce data governance, improve AI readiness and embed ethical safeguards across Pakistan’s digital transformation strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Financial crime risks are reshaped by the rise of autonomous AI agents

Autonomous AI agents are transforming finance by executing transactions independently and speeding up workflows in digital assets and programmable finance. Software can manage wallets and move funds across blockchains in seconds, narrowing detection windows.

AI agents don’t create new crimes but increase speed and complexity, making accountability essential. Responsibility rests with developers, operators, and beneficiaries, with investigators tracing control, configuration, and economic benefit to determine liability.

Weak oversight or misconfigured rules can lead to significant compliance and enforcement consequences.

Investigations face new challenges as autonomous agents operate across multiple blockchains, decentralised exchanges, and global jurisdictions.

Real-time analytics and automated tracing are essential to link transactions to accountable actors before funds move. Governance architecture and monitoring systems increasingly serve as evidence in regulatory or criminal actions.

Institutions and law enforcement are using AI monitoring, anomaly detection, and automated containment systems. Autonomous AI impacts sanctions and national security, emphasising the need for human oversight alongside automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!