Stanford speech warns of AI tsunami

Senator Bernie Sanders has warned that the US is unprepared for the speed and scale of the AI revolution. Speaking at Stanford University in California alongside Congressman Ro Khanna, he called the moment one of the most dangerous in modern US history.

Sanders urged a moratorium on the expansion of AI data centres to slow development while lawmakers catch up. He argued that the American public lacks a clear understanding of the economic and social impact ahead, and noted that New York is already considering a pause.

Khanna, who represents Silicon Valley in California, rejected a complete moratorium but called for steering AI growth through renewable energy and water efficiency standards. He outlined principles to prevent wealth from being concentrated among a small group of tech billionaires.

Sanders also raised concerns about job losses and emotional reliance on AI, citing projections of widespread automation. He called for a national debate over whether AI will benefit the public or deepen inequality.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

University of Bristol opens free online course on AI

The University of Bristol has launched a free online course called AI Fundamentals, designed to increase public understanding of AI. Many people use AI regularly but feel unsure about how to engage with it effectively, creating a gap that the course aims to address.

AI Fundamentals explores the technology’s complexities, societal impact, and environmental implications. The curriculum emphasises critical thinking about AI, its risks, and its potential, making it relevant for both enthusiasts and the curious general public.

The course runs entirely online over four weeks, requiring about 3 hours of self-paced work per week. No coding or advanced mathematics is needed, allowing learners from all backgrounds to participate and explore AI in a digestible format.

Led by Professors Genevieve Liveley and Seth Bullock, the course draws on expertise across fields including computer science, law, medicine, humanities, and neuroscience. Supported by a £50,000 alum donation and UKRI funding, it is now open for enrolment via FutureLearn.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global microchip shortage pushes electronics prices higher

South African consumers may soon pay more for smartphones and laptops due to a global shortage of memory chips. The high demand is largely driven by AI data centres, which require powerful microchips to operate.

Tech experts report that major AI companies are acquiring large quantities of these chips for their own data centres, limiting supply for other industries. At the same time, importing chips from regions such as China has become more difficult because of trade tensions and tariffs.

Industry leaders, including Apple’s Tim Cook and Tesla’s Elon Musk, have expressed concern over the impact on production and business operations. The strain is being felt across the tech sector as companies compete for the limited supply of components.

With no immediate solution, the increased costs are expected to be passed on to consumers. Analysts warn that the combination of high demand, supply constraints, and global trade issues will make technology and appliances more expensive.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Carrefour accelerates AI-enabled transformation to 2030, following Walmart’s strategic playbook

According to reporting by Diginomica, Carrefour, one of Europe’s largest retail groups, is accelerating the adoption of AI across its business as part of a strategic transformation aimed at 2030.

Inspired in part by the AI-driven overhaul undertaken by Walmart in the US, Carrefour’s initiative is intended to reshape its logistics, pricing, forecasting and store operations to become more data-driven, efficient and responsive to consumer trends.

Key elements of Carrefour’s AI focus include supply chain optimisation, dynamic pricing and promotions, customer engagement, and store and back-office automation.

First, AI is used to predict demand, manage inventories and reduce waste across national and regional networks. Second, algorithms adjust pricing based on real-time data to improve competitiveness and margin performance.

Third, personalised offers and recommendations powered by machine learning enhance loyalty and user experience. Finally, AI tools streamline staffing, task allocation and routine merchandising processes.
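To make the demand-driven pricing idea concrete, here is a minimal, purely illustrative sketch in Python. The product figures, thresholds and markdown rules are invented for demonstration and do not reflect Carrefour's actual systems.

```python
# Illustrative sketch only: a toy demand-driven pricing rule, not Carrefour's system.
# All figures and thresholds below are invented for demonstration.

def suggest_price(base_price: float, forecast_demand: int, stock_on_hand: int,
                  shelf_life_days: int) -> float:
    """Adjust a base price using a simple stock-versus-forecast heuristic."""
    cover = stock_on_hand / max(forecast_demand, 1)  # weeks of stock at the forecast rate
    price = base_price
    if shelf_life_days <= 3 and cover > 1.5:
        price *= 0.80   # mark down perishable overstock to cut waste
    elif cover > 2.0:
        price *= 0.90   # mild markdown on slow movers
    elif cover < 0.5:
        price *= 1.05   # scarce item: protect margin, slow the sell-through
    return round(price, 2)

if __name__ == "__main__":
    # Hypothetical SKU: 120 units in stock, forecast of 60 units/week, 2 days of shelf life
    print(suggest_price(base_price=2.50, forecast_demand=60,
                        stock_on_hand=120, shelf_life_days=2))  # -> 2.0
```

In a real deployment the forecast itself would come from a trained model fed by sales history and external signals; the heuristic above only shows how a forecast, stock levels and shelf life can feed a pricing decision.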

The transformation plan emphasises enterprise data strategy as a foundation, from consolidating disparate data sources to deploying machine learning models that inform business decisions in near-real time.

Carrefour executives view AI not just as a set of point solutions, but as core to future competitiveness, citing early gains in forecasting accuracy and reduced waste.

Carrefour’s approach is part of a broader retail AI arms race in which large grocers leverage scale and data to drive efficiency and customer centricity, with Walmart often cited as a pioneer whose playbook demonstrates the strategic value of enterprise-wide AI.

The report also notes challenges ahead, such as aligning organisational culture, ensuring data quality and addressing privacy concerns around personalised offers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ashford Port Health Authority rolls out AI-powered compliance checks at UK border control

The Ashford Port Health Authority, operated by Ashford Borough Council at the Sevington Border Control Post in Kent, has deployed an AI-enabled system to support import compliance checks.

This technology uses Intelligent Document Processing to automatically extract, structure and evaluate import documentation for agricultural products and other regulated goods, reducing the need for manual review in early screening stages.

Officials describe the system as the first of its kind in the UK to fully automate initial documentary compliance checks for imported goods, including products of animal origin (POAO), high-risk food not of animal origin (HRFNAO) and other regulated consignments.

By mimicking the workflows of human officers, it helps improve productivity, consistency and speed of border controls while allowing staff to focus on frontline services.
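The Sevington system itself is proprietary, but as a hypothetical sketch of what intelligent document processing looks like in practice, an initial documentary screen might extract a few fields from free text and flag anything missing for an officer. The field names, patterns and sample text below are assumptions, not the authority's actual rules.

```python
# Hypothetical sketch of an initial documentary screen; not the Sevington system.
import re

REQUIRED_FIELDS = ["consignment_id", "country_of_origin", "commodity_code"]  # illustrative

PATTERNS = {
    "consignment_id": re.compile(r"Consignment\s*(?:No\.?|ID)[:\s]+(\S+)", re.I),
    "country_of_origin": re.compile(r"Country of Origin[:\s]+([A-Za-z ]+)", re.I),
    "commodity_code": re.compile(r"Commodity Code[:\s]+(\d{6,10})", re.I),
}

def screen_document(text: str) -> dict:
    """Extract known fields and report which required fields are missing."""
    extracted = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            extracted[field] = match.group(1).strip()
    missing = [f for f in REQUIRED_FIELDS if f not in extracted]
    return {"fields": extracted, "missing": missing, "needs_officer_review": bool(missing)}

sample = "Consignment No: CHED-2025-0042\nCountry of Origin: Spain\nCommodity Code: 04061020"
print(screen_document(sample))
```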

The rollout also allows Ashford Borough Council to freeze official control charges for the 2026/27 financial year, as automation gains offset cost pressures. The council emphasises that the AI system augments rather than replaces expert oversight, strengthening compliance without sacrificing professional judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Majority of college students use or must use AI in classwork, but institutions lag in AI education

Research from Honorlock indicates a substantial shift in how students engage with generative AI in higher education: more than 56% of surveyed US college students report being required to use AI tools in coursework, and 63% use AI for at least some assignments.

The most common uses include grammar and editing support (59%) and text generation (57%), with students also using AI to brainstorm ideas and clarify concepts.

Despite widespread AI use, there remains a significant gap in formal AI education: only 31% of students are aware of AI-focused courses at their institutions, and fewer than 20% have taken them.

Students themselves often learn AI skills independently rather than through a structured curriculum, potentially leaving them unprepared for workplaces where AI fluency is expected.

The survey also highlights academic integrity risks: more than one-third of students admitted to using AI assistance on quizzes or exams, underlining the need for clear AI use policies, responsible-use training and ethical frameworks within higher education.

Researchers and advocates argue that colleges should integrate AI literacy, including ethics, governance, real-world applications and responsible use, into coursework to better equip graduates for AI-enabled careers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kentucky AI therapy ban passes with strong support in decisive 88–7 vote

Lawmakers in the Kentucky House of Representatives have approved House Bill 455, a measure aimed at limiting the role of AI in mental health services. The proposal introduces safeguards to regulate the use of AI tools in therapy settings and to strengthen patient protections.

Under the bill, AI systems are prohibited from making independent therapeutic decisions or generating treatment plans without review from a licensed therapist. In particular, tools such as ChatGPT, Gemini, and Claude would be barred from performing direct therapy or replacing human interaction.

However, self-help materials and educational resources are explicitly exempt from the restrictions. Therapists may still use AI as a supportive tool, provided they do not delegate substantive clinical responsibilities or direct client engagement.

In addition, practitioners must inform patients if AI is being used and obtain their consent. Supporters argue that preserving the human-to-human relationship in therapy is essential, especially amid concerns that some chatbot systems have encouraged harmful behaviour or worsened mental health outcomes.

Although the bill passed the House 88-7, opposition came mainly from libertarian-leaning Republican members who contended that the measure introduces unnecessary regulation and could hinder innovation. Nevertheless, backers maintain that clearer guardrails are necessary to address risks linked to automated mental health advice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI music discovery unlocks powerful ways to find new songs

AI tools developed by companies such as OpenAI, Anthropic, and Google are increasingly shaping everyday digital practices. While these systems are not fully reliable for complex research, they offer practical support for routine tasks. One emerging use case is personalised music discovery.

Music platforms such as Spotify and Apple Music allow users to export their listening history, creating opportunities for AI-driven analysis. By uploading a music library file, users enable AI systems to categorise genres, detect patterns, and identify gaps in their playlists. Broader preferences can then be refined through targeted prompts.
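For example, a listening-history export can be condensed into a short prompt before it is handed to a chatbot. The sketch below assumes an Exportify-style CSV; the column names ("Artist Name(s)", "Genres") are assumptions and may differ in a given export.

```python
# Minimal sketch: summarise an exported listening history into a prompt.
# Column names are assumptions based on an Exportify-style CSV export.
import csv
from collections import Counter

def summarise_library(path: str, top_n: int = 10) -> str:
    artists, genres = Counter(), Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for artist in row.get("Artist Name(s)", "").split(","):
                if artist.strip():
                    artists[artist.strip()] += 1
            for genre in row.get("Genres", "").split(","):
                if genre.strip():
                    genres[genre.strip()] += 1
    top_artists = ", ".join(a for a, _ in artists.most_common(top_n))
    top_genres = ", ".join(g for g, _ in genres.most_common(top_n))
    return (f"My most played artists: {top_artists}.\n"
            f"My main genres: {top_genres}.\n"
            "Recommend 20 recent songs I probably haven't heard, avoiding these artists.")

# print(summarise_library("liked_songs.csv"))
```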

Greater specificity improves results. Users can exclude familiar artists, prioritise recent releases, or emphasise similarities with favourite bands. Signature tracks may be suggested for evaluation, allowing continuous feedback. Iterative interaction helps the system better understand musical preferences over time, leading to increasingly accurate recommendations.

Once curated, playlists can be exported and transferred back to streaming services using tools such as Exportify and TuneMyMusic. Although some may question the data implications of such personalisation, the process remains efficient, fast, and engaging. AI-driven music discovery ultimately demonstrates how general-purpose systems can deliver highly tailored cultural experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenClaw exploits spark a major security alert

A wave of coordinated attacks has targeted OpenClaw, the autonomous AI framework that gained rapid popularity after its release in January.

Multiple hacking groups have exploited severe vulnerabilities to steal API keys, extract persistent memory data, and push information-stealing malware to the platform’s expanding user base.

Security analysts have linked more than 30,000 compromised instances to campaigns that intercept messages and deploy malicious payloads through channels such as Telegram.

Much of the damage stems from flaws such as the Remote Code Execution vulnerability CVE-2026-25253, supply chain poisoning, and exposed administrative interfaces. Early attacks centred on the ‘ClawHavoc’ campaign, which disguised malware as legitimate installation tools.

Users who downloaded these scripts inadvertently installed stealers capable of full compromise, enabling attackers to move laterally across enterprise systems instead of being confined to a single device.

Further incidents emerged on the OpenClaw marketplace, where backdoored ‘skills’ were published from accounts that appeared reliable. These updates executed remote commands that allowed attackers to siphon OAuth tokens, passwords, and API keys in real time.

A Shodan scan later identified more than 312,000 OpenClaw instances running on a default port with little or no protection, while honeypots recorded hostile activity within minutes of appearing online.

Security researchers argue that the surge in attacks marks a decisive moment for autonomous AI frameworks. As organisations experiment with agents capable of independent decision-making, the absence of security-by-design safeguards is creating opportunities for organised threat groups.

An advisory from security firm Flare urges companies to secure credentials and isolate AI workloads instead of relying on default configurations that expose high-privilege systems to the internet.
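OpenClaw's own configuration options are not detailed here, but the general hardening idea can be sketched generically: load credentials from the environment rather than from files on disk, and refuse to start a service that would bind to every network interface. The names below (AGENT_API_KEY, AGENT_BIND_HOST) are illustrative, not OpenClaw settings.

```python
# Generic hardening sketch, not OpenClaw-specific: fail fast on unsafe configuration.
import os
import sys

def load_api_key(name: str = "AGENT_API_KEY") -> str:
    """Require the credential to come from the environment, not a file on disk."""
    key = os.environ.get(name)
    if not key:
        sys.exit(f"Refusing to start: {name} is not set in the environment.")
    return key

def check_bind_address(host: str) -> None:
    """Block exposure to all interfaces; keep the agent behind a private address or proxy."""
    if host in ("0.0.0.0", "::"):
        sys.exit("Refusing to start: bind the agent to 127.0.0.1 or a private "
                 "interface behind a reverse proxy, not all interfaces.")

if __name__ == "__main__":
    check_bind_address(os.environ.get("AGENT_BIND_HOST", "127.0.0.1"))
    api_key = load_api_key()
    print("Configuration looks sane; starting agent...")
```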

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI presents the biggest data-risk challenge in history

Cybersecurity specialists warn that generative AI systems, such as large language models, are creating a data risk frontier far larger than that posed by previous digital innovations.

Because these models are trained on extensive datasets drawn from web pages, internal documents, email corpora and proprietary sources, they can unintentionally memorise or regenerate sensitive information, increasing the risk of exposure.

The article highlights several core concerns. The first is data leakage and memorisation: AI models can repeat or infer private data if training processes are not tightly controlled.

The second is amplification of poor hygiene: generative tools can magnify the reach of bad actors by automating phishing, social engineering and malware generation at scale.

The third is compounding breach impact: if an AI model is trained on stolen or leaked data, it could internalise and regurgitate that information without detection, entrenching harm. Finally, gaps in cloud and access governance mean that organisations adopting AI without robust access controls and encryption may widen their attack surface.

The author calls for revised data governance frameworks, including strict training data provenance, auditability, encryption, minimisation and purpose limitation, to mitigate what is described as ‘the biggest data risk in history.’
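As a minimal illustration of the minimisation principle (not a production-grade solution), obvious direct identifiers can be redacted from text before it enters a training corpus. The regex patterns below are deliberately simple and would need far broader coverage in practice.

```python
# Illustrative data-minimisation sketch: redact obvious identifiers before ingestion.
# Real pipelines need far more than two regexes; these patterns are examples only.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimise(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(minimise("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```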

Recommendations also include accountability measures for models, continuous monitoring, and legislative action to align AI development with privacy and security principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!