Ashford Port Health Authority rolls out AI-powered compliance checks at UK border control

The Ashford Port Health Authority, operated by Ashford Borough Council at the Sevington Border Control Post in Kent, has deployed an AI-enabled system to support import compliance checks.

This technology uses Intelligent Document Processing to automatically extract, structure and evaluate import documentation for agricultural products and other regulated goods, reducing the need for manual review in early screening stages.

Officials describe the system as the first of its kind in the UK to fully automate initial documentary compliance checks for imported goods, including products of animal origin (POAO), high-risk food not of animal origin (HRFNAO) and other regulated consignments.

By mimicking the workflows of human officers, it helps improve productivity, consistency and speed of border controls while allowing staff to focus on frontline services.

The rollout also allows Ashford Borough Council to freeze official control charges for the 2026/27 financial year, as automation gains offset cost pressures. The council emphasises that the AI system augments rather than replaces expert oversight, strengthening compliance without sacrificing professional judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Majority of college students use or must use AI in classwork, but institutions lag in AI education

Research from Honorlock indicates a substantial shift in how students engage with generative AI in higher education: more than 56% of surveyed US college students report being required to use AI tools in coursework, and 63% use AI for at least some assignments.

The most common uses include grammar and editing support (59%) and text generation (57%), with students also using AI to brainstorm ideas and clarify concepts.

Despite widespread AI use, there remains a significant gap in formal AI education: only 31% of students are aware of AI-focused courses at their institutions, and fewer than 20% have taken them.

Students themselves often learn AI skills independently rather than through a structured curriculum, potentially leaving them unprepared for workplaces where AI fluency is expected.

The survey also highlights academic integrity risks: more than one-third of students admitted to using AI assistance on quizzes or exams, underlining the need for clear AI use policies, responsible-use training and ethical frameworks within higher education.

Researchers and advocates argue that colleges should integrate AI literacy, including ethics, governance, real-world applications and responsible use, into coursework to better equip graduates for AI-enabled careers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kentucky AI therapy ban passes House in decisive 88–7 vote

Lawmakers in the Kentucky House of Representatives have approved House Bill 455, a measure aimed at limiting the role of AI in mental health services. The proposal introduces safeguards to regulate the use of AI tools in therapy settings and to strengthen patient protections.

Under the bill, AI systems are prohibited from making independent therapeutic decisions or generating treatment plans without review from a licensed therapist. In particular, tools such as ChatGPT, Gemini, and Claude would be barred from performing direct therapy or replacing human interaction.

However, self-help materials and educational resources are explicitly exempt from the restrictions. Therapists may still use AI as a supportive tool, provided they do not delegate substantive clinical responsibilities or direct client engagement.

In addition, practitioners must inform patients if AI is being used and obtain their consent. Supporters argue that preserving the human-to-human relationship in therapy is essential, especially amid concerns that some chatbot systems have encouraged harmful behaviour or worsened mental health outcomes.

Although the bill passed the House 88–7, opposition came mainly from libertarian-leaning Republican members who contended that the measure introduces unnecessary regulation and could hinder innovation. Nevertheless, backers maintain that clearer guardrails are necessary to address risks linked to automated mental health advice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI music discovery unlocks effective new ways to find songs

AI tools developed by companies such as OpenAI, Anthropic, and Google are increasingly shaping everyday digital practices. While these systems are not fully reliable for complex research, they offer practical support for routine tasks. One emerging use case is personalised music discovery.

Music platforms such as Spotify and Apple Music allow users to export their listening history, creating opportunities for AI-driven analysis. By uploading a music library file, users enable AI systems to categorise genres, detect patterns, and identify gaps in their playlists. Broader preferences can then be refined through targeted prompts.
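The export-and-analyse step can be sketched in a few lines. This is a minimal illustration, not any platform's actual format: the column names (`Artist Name(s)`, `Genres`) follow an Exportify-style CSV and are assumptions, as is the sample data. The idea is simply to condense a listening history into a short, prompt-ready profile.

```python
import csv
import io
from collections import Counter

# Hypothetical Exportify-style export; real column names may differ.
SAMPLE_CSV = """Track Name,Artist Name(s),Genres
Song A,Band X,indie rock
Song B,Band X,indie rock
Song C,Band Y,synth-pop
Song D,Band Z,indie rock
"""

def summarise_library(csv_text: str, top_n: int = 3) -> str:
    """Condense a listening-history CSV into a short, prompt-ready profile."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    artists = Counter(r["Artist Name(s)"] for r in rows)
    genres = Counter(r["Genres"] for r in rows)
    top_artists = ", ".join(a for a, _ in artists.most_common(top_n))
    top_genres = ", ".join(g for g, _ in genres.most_common(top_n))
    return (f"My library has {len(rows)} tracks. "
            f"Top artists: {top_artists}. Top genres: {top_genres}. "
            "Recommend 10 similar songs by artists I have not listened to.")

print(summarise_library(SAMPLE_CSV))
```

The resulting summary can be pasted into a chatbot as a starting prompt, then refined with the exclusions and preferences described below.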

Greater specificity improves results. Users can exclude familiar artists, prioritise recent releases, or emphasise similarities with favourite bands. Signature tracks may be suggested for evaluation, allowing continuous feedback. Iterative interaction helps the system better understand musical preferences over time, leading to increasingly accurate recommendations.

Once curated, playlists can be exported and transferred back to streaming services using tools such as Exportify and TuneMyMusic. Although some may question the data implications of such personalisation, the process remains fast and engaging. AI-driven music discovery ultimately demonstrates how general-purpose systems can deliver highly tailored cultural experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenClaw exploits spark a major security alert

A wave of coordinated attacks has targeted OpenClaw, the autonomous AI framework that gained rapid popularity after its release in January.

Multiple hacking groups have exploited severe vulnerabilities to steal API keys, extract persistent memory data, and push information-stealing malware to the platform’s expanding user base.

Security analysts have linked more than 30,000 compromised instances to campaigns that intercept messages and deploy malicious payloads through channels such as Telegram.

Much of the damage stems from flaws such as the Remote Code Execution vulnerability CVE-2026-25253, supply chain poisoning, and exposed administrative interfaces. Early attacks centred on the ‘ClawHavoc’ campaign, which disguised malware as legitimate installation tools.

Users who downloaded these scripts inadvertently installed stealers capable of full compromise, enabling attackers to move laterally across enterprise systems rather than remaining confined to a single device.

Further incidents emerged on the OpenClaw marketplace, where backdoored ‘skills’ were published from accounts that appeared reliable. These updates executed remote commands that allowed attackers to siphon OAuth tokens, passwords, and API keys in real time.

A Shodan scan later identified more than 312,000 OpenClaw instances running on a default port with little or no protection, while honeypots recorded hostile activity within minutes of appearing online.
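The exposure the Shodan scan measured can be self-audited with a minimal TCP reachability check. This is a generic sketch, not an OpenClaw-specific tool: the article does not name the default port, so the host and port are placeholders supplied by the caller.

```python
import socket

def port_exposed(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A service answering here from an untrusted network is reachable by
    anyone who scans for it, which is exactly what tools like Shodan do.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running such a check from outside the network perimeter (rather than from the host itself) shows what an internet-wide scanner would see.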

Security researchers argue that the surge in attacks marks a decisive moment for autonomous AI frameworks. As organisations experiment with agents capable of independent decision-making, the absence of security-by-design safeguards is creating opportunities for organised threat groups.

Flare’s advisory urges companies to secure credentials and isolate AI workloads instead of relying on default configurations that expose high-privilege systems to the internet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU DSA fine against X heads to court in key test case

X Corp., owned by Elon Musk, has filed an appeal with the General Court of the European Union against a €120 million fine imposed by the European Commission for breaching the Digital Services Act. The penalty, issued in December, marks the first enforcement action under the 2022 law.

The Commission concluded that X violated transparency obligations and misled users through its verification design, arguing that paid blue checkmarks made it harder to assess account authenticity. Officials also cited concerns about advertising transparency and researchers’ access to platform data.

Henna Virkkunen, the EU’s executive vice-president for tech sovereignty, security, and democracy, said deceptive verification and opaque advertising had no place online. The Commission opened its probe in December 2023, examining risk management, moderation practices, and alleged dark patterns.

X Corp. argued that the decision followed an incomplete investigation and a flawed reading of the DSA, citing procedural errors and due-process concerns. It said the appeal could shape future enforcement standards and penalty calculations under the regulation.

The EU is also assessing whether X mitigated systemic risks, including deepfake content and child sexual abuse material linked to its Grok chatbot. US critics describe DSA enforcement as a threat to free speech, while EU officials say it strengthens accountability across the digital single market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Claude Code Security by Anthropic aims to detect and patch complex vulnerabilities

Anthropic has introduced Claude Code Security, an AI-powered service that scans software codebases for vulnerabilities and recommends targeted fixes. Built into Claude Code, the capability is rolling out in a limited research preview for Enterprise and Team customers.

The tool analyses code beyond traditional rule-based scanners, examining data flows and component interactions to identify complex, high-severity vulnerabilities. Findings undergo multi-stage verification, receive severity and confidence ratings, and are presented in a dashboard for human review.

Anthropic said the system re-examines its own results to reduce false positives before surfacing them to analysts. Teams can prioritise remediation based on severity ratings and iterate on suggested patches within familiar development workflows.

Claude Code Security builds on more than a year of cybersecurity research. Using Claude Opus 4.6, Anthropic reported discovering more than 500 long-undetected bugs in open-source projects through testing and external partnerships.

The company said AI will increasingly be used to scan global codebases, warning that attackers and defenders alike are adopting advanced models. Open-source maintainers can apply for expedited access as Anthropic expands the preview.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MWC 2026 to spotlight SK Telecom’s AI infrastructure vision

SK Telecom will present its end-to-end AI capabilities at MWC 2026, taking place from 2 to 5 March in Barcelona. Under the theme ‘AI for Infinite Possibilities’, the company will highlight AI infrastructure, models, and telecom applications.

The South Korea-based operator will showcase its AI data centre expertise, including infrastructure for a major Ulsan project and a high-performance GPU cluster. Its AI Data Center Infrastructure Manager will demonstrate real-time monitoring across integrated systems.

GPU-as-a-service solutions will also include the Petasus AI Cloud platform, AI Cloud Manager for resource optimisation, and the GAIA monitoring system. SK Telecom will introduce its AI Inference Factory, designed to integrate hardware and software into a unified stack for inference workloads.

In the telecom infrastructure space, the company will outline its AI-native network strategy, spanning embedded AI agents, AI-enabled RAN base stations, and on-device antenna tuning. Integrated sensing and communication technologies will preview autonomous networks and early 6G capabilities.

The booth will also feature SK Telecom’s 519-billion-parameter A.X K1 large language model and open-source variants. Applications for physical AI, including digital twins and robot-training platforms that link virtual and physical environments, will be demonstrated.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Phishing messages target IndiaAI and Impact Summit 2026 participants

IndiaAI has issued an urgent advisory warning of a phishing campaign targeting attendees of the India AI Impact Summit 2026. Fraudulent SMS and WhatsApp messages claim refunds are pending and request sensitive financial details.

Organisers said the messages are not official and have not been authorised. The fraudulent texts urge recipients to click links and provide full card numbers, WhatsApp numbers, and other contact details to ‘process’ refunds.

IndiaAI advised participants not to click suspicious links or share personal or banking information with unverified sources. Attendees in India are encouraged to delete such messages immediately and block the sender’s number.

Anyone who may have submitted details through a suspicious link should contact their bank without delay to secure their accounts. Organisers stressed that event-related communication will only be shared through official channels.

The advisory was issued under the AI Impact Summit 2026 banner, themed ‘Welfare for All | Happiness of All’, as authorities seek to prevent financial fraud linked to the high-profile gathering.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia joins GPAI to help shape the future of AI

The Global Partnership on Artificial Intelligence (GPAI), a multilateral initiative hosted by the OECD and launched by the G7, has officially welcomed Saudi Arabia as a new member. The move reflects the Kingdom’s commitment to shaping global AI governance and ethical technology use.

Accession is led by the Saudi Data and Artificial Intelligence Authority and supported by Crown Prince Mohammed bin Salman. Joining GPAI aligns with Vision 2030, which aims to localise advanced technologies and boost the digital economy’s contribution to GDP.

Through membership in GPAI, which unites over 40 countries, Saudi Arabia will help establish international AI standards, promote human-centric and responsible AI development, and strengthen global cooperation in the sector.

Officials also anticipate that the move will attract high-quality international investment, leveraging the Kingdom’s expanding regulatory framework and growing AI and data ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!