Claude AI experiences temporary global outage

Anthropic’s AI chatbot, Claude, suffered a global outage, leaving users unable to access the platform. Visitors were met with error messages saying the service was down, though the company said it was working to resolve the issue.

The Claude API, used by other websites to integrate the chatbot, remained operational. Anthropic confirmed that the outage was limited to the Claude web interface and did not affect other integrations, emphasising that engineers were actively resolving the issue.

The outage, tracked by Downdetector, began around noon UK time and affected users worldwide. Status messages reassured users that Claude would return soon and that the problem had been identified and was being fixed.

The interruption comes at a sensitive time for Anthropic, which is navigating heightened attention over access to its Claude AI system amid broader debate about the role of advanced AI tools in defence contexts, with industry players facing increasing scrutiny over their policies and partnerships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU pressures Meta over alleged smart glasses privacy breaches

Lawmakers in the European Parliament are pressing the European Commission for clarity after reports that Meta’s smart glasses recorded people in intimate moments without their knowledge.

Concerns intensified when Swedish outlets reported that Ray-Ban AI glasses captured and uploaded sensitive footage in violation of strict consent requirements under the EU’s General Data Protection Regulation.

The reports indicate that personal data from EU users was sent to Sama, a third-party contractor in Kenya, for human review. Annotators working there said they viewed images of individuals changing clothes and believed the recordings were taken without consent.

They added that Meta’s attempts to blur faces or apply other safeguards failed often enough to expose identifiable material instead of ensuring proper anonymisation.

EU privacy law requires clear information and consent before collecting and processing personal data, and additional safeguards when exporting data to countries without recognised adequacy status.

Kenya is still negotiating such recognition with the Commission, meaning contractual protections would be necessary.

The Irish Data Protection Commission, responsible for Meta’s GDPR oversight, has been contacted amid questions about whether Meta complied with EU requirements.

Lawmakers also want the Commission to examine whether proposed changes in the Digital Omnibus package could dilute privacy protections rather than strengthen them.

Critics argue the reforms might ease data-use rules for AI training at a moment when allegations about Meta’s smart glasses have intensified scrutiny of the EU’s broader digital policy agenda.

UK launches consultation on possible social media ban for under-16s

Britain has opened a public consultation examining whether children under 16 should face restrictions or a potential ban on social media use. Young people, parents and educators are being invited to share views before ministers decide on future policy.

Officials are considering several options beyond a full ban, including disabling addictive platform features, introducing overnight curfews, regulating access to AI chatbots, and tightening age verification rules. Pilot schemes will test proposed measures to gather practical evidence on their effectiveness.

The debate follows international momentum after Australia introduced restrictions on under-16 access to major platforms, with Spain signalling similar intentions. Political parties, charities and campaigners remain divided over whether bans or stronger safety regulations offer better protection.

Children’s organisations warn blanket prohibitions could push young users towards less regulated online spaces, creating a ‘false sense of security’. Researchers and policymakers instead emphasise improving platform safety standards while allowing young people to socialise and express themselves online responsibly.

AI helps scientists translate thoughts into speech and images

Breakthroughs in AI and neuroscience are bringing researchers closer to translating human thoughts into words, offering new communication tools for people living with paralysis or severe speech disorders. Experiments with implanted brain electrodes have enabled patients to produce sentences simply by imagining speech.

Machine learning systems analyse neural signals captured from small electrode arrays placed in speech-related brain regions, converting activity into text at increasing speed and accuracy. Recent trials achieved communication rates approaching practical conversation while also capturing tone, rhythm and emotional expression.

Scientists have begun detecting ‘inner speech’, identifying silent counting or imagined phrases without physical attempts to speak. Findings suggest thinking and speaking rely on overlapping neural networks, although spontaneous thoughts remain difficult to decode reliably.

Beyond language, researchers are reconstructing images, music and sensory experiences from brain scans using generative AI models. Studies analysing visual and auditory processing reveal how different brain regions encode perception, opening possibilities for studying hallucinations, dreams and animal cognition.

Technology companies, including Neuralink, are pushing brain-computer interfaces toward commercial use, though current systems sample only a tiny fraction of the brain’s billions of neurons. Experts believe widespread applications such as natural speech restoration or even brain-to-brain communication may emerge within the next two decades, alongside growing ethical debates around privacy and mental autonomy.

Samsung settles Texas lawsuit over smart TV data collection

Samsung has settled a lawsuit with the Texas Attorney General over allegations that its smart TVs collected viewing data without users’ informed consent.

Texas Attorney General Ken Paxton filed the suit last December, accusing Samsung of using Automated Content Recognition (ACR) technology to capture screenshots of what consumers were watching and using that information for targeted advertising.

As part of the settlement, Samsung must halt any collection or processing of ACR viewing data without first obtaining the express consent of Texas consumers.

The company is also required to update its smart TVs with clear, conspicuous disclosure and consent screens, replacing what a court had previously identified as ‘dark patterns’ requiring over 200 clicks to access privacy settings.

Samsung stated that it does not believe its Viewing Information Services system violated any regulations, but agreed to strengthen its privacy disclosures. Paxton noted that other smart TV manufacturers, including Sony, LG, Hisense, and TCL Technologies, have not yet made similar changes in response to ongoing lawsuits.

Claws become the new trend in local agentic AI

A new expression has entered the AI vocabulary, with ‘claws’ becoming the latest term to capture the industry’s imagination.

The term refers to a growing family of open-source personal assistants designed to run locally on consumer hardware, often on Apple’s compact Mac mini rather than on cloud-based servers.

These assistants can access calendars, email accounts, coding tools, browsers and external model APIs, enabling them to carry out complex digital tasks autonomously.

Interest increased after AI researcher Andrej Karpathy described his experiments with claws, prompting broader attention across online communities.

Many users have begun adopting the tools as lightweight agentic systems capable of handling real work, from scheduling meetings to writing software overnight by linking to models from providers such as OpenAI.

The name originated with Clawdbot, which was recently rebranded as OpenClaw and became a prominent example in Silicon Valley.

A wave of variants, including NanoClaw, ZeroClaw and IronClaw, has followed, marking a surge in locally run assistants that appeal to users seeking greater autonomy, privacy and experimentation.

Growing enthusiasm for claws highlights a wider shift towards agentic AI running directly on personal devices.

Whether these systems become mainstream or remain a niche developer trend, they show how quickly the AI landscape can evolve and how new concepts often spread long before they fully mature.

Reddit surges as AI search drives a new era of online discovery

AI-generated search summaries are reshaping online discovery and pushing Reddit to the forefront of global information flows.

The rise of Google’s AI Overviews feature places curated AI summaries above traditional search results, encouraging users to rely on machine-generated syntheses instead of browsing lists of websites.

Reddit’s visibility surged after the platform agreed to data access partnerships with Google and OpenAI, enabling large language models to train on its vast archive of human conversations.

The platform’s user-generated discussions are increasingly prioritised because they provide commentary viewed as more neutral and less commercially influenced.

Research from Profound identifies Reddit as the most cited source across major AI platforms, and Reddit’s rapid growth reflects this shift.

It has overtaken TikTok in the UK, according to Ofcom, and now reports 116 million daily active users and more than one billion monthly users.

Communities built around niche interests, combined with voting systems and karma-driven credibility, create a structure that appeals to AI systems searching for grounded, human-authored content.

The platform’s design, centred on subreddits run by volunteer moderators, reinforces trust signals that large models can evaluate when generating AI Overviews.

As AI-powered search becomes the dominant interface for navigating the internet, Reddit’s role as a primary corpus for training and citation continues to expand, reshaping how people discover and verify information.

Dell expands AI PC strategy to support human creativity

Dell is accelerating development of AI PCs, positioning them as the next standard for personal computing rather than a niche category. Industry forecasts suggest AI-enabled devices could account for more than half of global PC sales by 2026.

Dedicated neural processing units allow AI tasks to run directly on devices, freeing central and graphics processors for demanding creative workflows. Dell says such hardware enables faster editing, improved generative tools and smoother multitasking for designers, editors and digital creators.

Louise Quennell, UK Senior Director at Dell Technologies, emphasised that AI should support creativity rather than replace it. Automating repetitive tasks aims to give professionals more time for experimentation, artistic decision-making and higher-value creative work.

AI assistants are increasingly capable of managing scheduling, summarising information and reducing routine digital administration. Dell believes reducing these ‘digital chores’ could significantly improve productivity, particularly for freelancers balancing creative production with business responsibilities.

Karnataka chief minister says AI should support not replace artists

Speaking at the Bengaluru GAFX Conference, a major event for the Animation, Visual Effects, Gaming, Comics and Extended Reality (AVGC-XR) sector, Karnataka Chief Minister Siddaramaiah positioned AI as a tool to augment artistic work rather than replace human creators.

He highlighted the importance of ethical AI adoption, respect for intellectual property, data privacy, and ensuring fair compensation for artists and creative professionals as the sector grows.

Siddaramaiah underscored that the ‘soul of storytelling’ and human emotion cannot be fully replicated by algorithms, stressing that technology should amplify human potential without erasing it.

He also urged industry leaders to invest in original content, educational institutions to modernise curricula, and global partners to collaborate with Karnataka’s burgeoning creative ecosystem.

The remarks came amid efforts to develop the AVGC-XR sector through policy support, infrastructure, skill development, and the creation of digital creative clusters beyond Bengaluru in cities like Mysuru, Mangaluru and Hubballi-Dharwad.

Siddaramaiah framed this approach as both an economic and cultural opportunity that must be inclusive and ethically grounded.

FTC signals flexibility on COPPA age checks

The US FTC has issued a policy statement signalling greater flexibility in enforcing parts of the Children’s Online Privacy Protection Act when companies deploy age verification tools. The agency said it will not take enforcement action where personal data is collected solely for age verification purposes.

The FTC framed age assurance as a key safeguard to prevent children from accessing inappropriate content online in the US. Officials said the approach is intended to encourage broader adoption of age verification technologies by online services.

While offering flexibility, the US regulator stressed that organisations must maintain strong safeguards, including data deletion practices and clear notice to parents and children. The FTC also warned that personal data used beyond age verification could still trigger enforcement action under COPPA.

As with the 2023 amendments, legal experts cautioned that companies using age assurance may face additional compliance duties under state youth privacy laws, even as federal requirements evolve.
