AI data centre surge pushes electricity demand in the UK to new heights

The UK faces rising pressure on its electricity system as about 140 new data centre projects could demand more power than the country’s current peak consumption, according to Ofgem.

The regulator said developers are seeking about 50 gigawatts of capacity, a level driven by rapid growth in AI and far beyond earlier forecasts.

Connection requests have surged since late 2024, placing strain on a grid already struggling to support vital renewable projects that are key to national climate targets.

Work needed to connect expanding data centre capacity could delay schemes considered essential for decarbonisation and economic growth, instead of supporting the transition at the required pace.

The growing electricity footprint of AI infrastructure also threatens the aim of creating a virtually carbon-free power system by 2030, particularly as high costs and slow grid integration continue to hinder progress.

A proposed data centre in Lincolnshire has already raised concerns by projecting emissions greater than those of several international airports combined.

Ofgem now warns that speculative grid applications are blocking more viable projects, including those tied to government AI growth zones.

The regulator is considering more stringent financial requirements and new fees for access to grid connections, arguing that developers may need to build their own routes to the network rather than rely entirely on existing infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Medical AI risks in Turkey highlight data bias and privacy challenges

Ankara is seeing growing debate over the risks and benefits of medical AI as experts warn that poorly governed systems could threaten patient safety.

Associate professor Agah Tugrul Korucu said AI offers meaningful potential for healthcare only when supported by rigorous ethical rules and strong oversight instead of rapid deployment without proper safeguards.

Korucu explained that data bias remains one of the most significant dangers because AI models learn directly from the information they receive. Underrepresented age groups, regions or social classes can distort outcomes and create systematic errors.
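The mechanism Korucu describes can be shown with a minimal sketch. Everything here is invented for illustration: a trivial "model" that predicts the training mean, and made-up readings for a well-represented and an underrepresented patient group. The point is only that a skewed sample produces a systematic error concentrated on the minority group.

```python
# Toy illustration (not from the article): a skewed training sample
# produces systematic error for the underrepresented group.
# All numbers and the mean-based "model" are invented for demonstration.

def fit_mean_predictor(samples):
    """'Train' a trivial model: always predict the mean of the training values."""
    return sum(samples) / len(samples)

# Hypothetical readings: 95 adult patients vs only 5 elderly patients.
adults = [120.0] * 95    # well represented in the training data
elderly = [150.0] * 5    # underrepresented

model = fit_mean_predictor(adults + elderly)

error_adults = abs(model - 120.0)
error_elderly = abs(model - 150.0)

print(f"prediction: {model:.1f}")           # dominated by the majority group
print(f"error for adults:  {error_adults:.1f}")
print(f"error for elderly: {error_elderly:.1f}")  # far larger, systematically
```

The same skew appears, less visibly, in real models: the error does not average out, it lands disproportionately on whoever the data underrepresents.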

Turkey’s national health database e-Nabiz provides a strategic advantage, yet raw information cannot generate value unless it is processed correctly and supported by clear standards, quality controls and reliable terminology.

He added that inconsistent hospital records, labelling errors and privacy vulnerabilities can mislead AI systems and pose legal challenges. Strict anonymisation and secure analysis environments are needed to prevent harmful breaches.
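One building block of the anonymisation Korucu calls for is pseudonymisation: replacing direct identifiers with keyed, irreversible tokens before records enter an analysis environment. The sketch below is illustrative only; the field names and salt handling are assumptions, and a real deployment would use vetted anonymisation tooling and proper key management.

```python
# Sketch of salted pseudonymisation before records enter an analysis
# environment. Field names and the salt are illustrative assumptions,
# not any actual e-Nabiz mechanism.
import hashlib
import hmac

SECRET_SALT = b"replace-with-managed-secret"  # assumption: stored outside the dataset

def pseudonymise(national_id: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_SALT, national_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"national_id": "12345678901", "diagnosis": "hypertension"}
safe_record = {
    "patient_token": pseudonymise(record["national_id"]),  # stable join key
    "diagnosis": record["diagnosis"],                      # clinical data kept
}
print(safe_record)
```

A keyed hash keeps the token stable across records (so analyses can still link a patient's history) while making the original identifier unrecoverable without the salt.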

Medical AI works best as a second pair of eyes in fields such as radiology and pathology, where systems can reduce workloads by flagging suspicious areas instead of leaving clinicians to assess every scan alone.

Korucu said physicians must remain final decision makers because automation bias could push patients towards unnecessary risks.

He expects genomic data combined with AI to transform personalised medicine over the coming decade, allowing faster diagnoses and accurate medication choices for rare conditions.

Priority development areas for Turkey include triage tools, intensive care early warning systems and chronic disease management. He noted that the long-term model will be the AI-assisted physician rather than a fully automated clinician.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

University of Bristol opens free online course on AI

The University of Bristol has launched a free online course called AI Fundamentals, designed to increase public understanding of AI. Many people use AI regularly but feel unsure about how to engage with it effectively, creating a gap that the course aims to address.

AI Fundamentals explores the technology’s complexities, societal impact, and environmental implications. The curriculum emphasises critical thinking about AI, its risks, and its potential, making it relevant for both enthusiasts and the curious general public.

The course runs entirely online over four weeks, requiring about 3 hours of self-paced work per week. No coding or advanced mathematics is needed, allowing learners from all backgrounds to participate and explore AI in a digestible format.

Led by Professors Genevieve Liveley and Seth Bullock, the course draws on expertise across fields including computer science, law, medicine, humanities, and neuroscience. Supported by a £50,000 donation from an alum and UKRI funding, it is now open for enrolment via FutureLearn.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ashford Port Health Authority rolls out AI-powered compliance checks at UK border control

The Ashford Port Health Authority, operated by Ashford Borough Council at the Sevington Border Control Post in Kent, has deployed an AI-enabled system to support import compliance checks.

This technology uses Intelligent Document Processing to automatically extract, structure and evaluate import documentation for agricultural products and other regulated goods, reducing the need for manual review in early screening stages.
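The screening step described above can be sketched in outline: extract key fields from free-text paperwork, then route incomplete consignments to a human officer. This is a deliberately naive illustration under assumed field names, not the actual Sevington system, which would use far more sophisticated document understanding.

```python
# Illustrative sketch of first-pass documentary screening: extract
# required fields, route incomplete paperwork to manual review.
# Field names and routing labels are assumptions, not the real system.
import re

REQUIRED_FIELDS = ["consignment_id", "origin_country", "commodity_code"]

def extract_fields(document_text: str) -> dict:
    """Very naive extraction: look for 'key: value' pairs in the text."""
    fields = {}
    for key in REQUIRED_FIELDS:
        match = re.search(rf"{key}\s*:\s*(\S+)", document_text, re.IGNORECASE)
        if match:
            fields[key] = match.group(1)
    return fields

def screen(document_text: str) -> str:
    """Auto-clear only if every required field was found."""
    fields = extract_fields(document_text)
    missing = [k for k in REQUIRED_FIELDS if k not in fields]
    return "manual_review" if missing else "auto_cleared"

doc = "consignment_id: C-991\norigin_country: NL\ncommodity_code: 0406"
print(screen(doc))
```

The design point mirrors the council's framing: automation clears the routine cases quickly, while anything incomplete or ambiguous still lands in front of an expert.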

Officials describe the system as the first of its kind in the UK to fully automate initial documentary compliance checks for imported goods, including products of animal origin (POAO), high-risk food not of animal origin (HRFNAO) and other regulated consignments.

By mimicking the workflows of human officers, it helps improve productivity, consistency and speed of border controls while allowing staff to focus on frontline services.

The rollout also allows Ashford Borough Council to freeze official control charges for the 2026/27 financial year, as automation gains offset cost pressures. The council emphasises that the AI system augments rather than replaces expert oversight, strengthening compliance without sacrificing professional judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Majority of college students use or must use AI in classwork, but institutions lag in AI education

Research from Honorlock indicates a substantial shift in how students engage with generative AI in higher education: more than 56% of surveyed US college students report being required to use AI tools in coursework, and 63% use AI for at least some assignments.

The most common uses include grammar and editing support (59%) and text generation (57%), with students also using AI to brainstorm ideas and clarify concepts.

Despite widespread AI use, there remains a significant gap in formal AI education: only 31% of students are aware of AI-focused courses at their institutions, and fewer than 20% have taken them.

Students themselves often learn AI skills independently rather than through a structured curriculum, potentially leaving them unprepared for workplaces where AI fluency is expected.

The survey also highlights academic integrity risks: more than one-third of students admitted to using AI assistance on quizzes or exams, underlining the need for clear AI use policies, responsible-use training and ethical frameworks within higher education.

Researchers and advocates argue that colleges should integrate AI literacy, including ethics, governance, real-world applications and responsible use, into coursework to better equip graduates for AI-enabled careers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude Code Security by Anthropic aims to detect and patch complex vulnerabilities

Anthropic has introduced Claude Code Security, an AI-powered service that scans software codebases for vulnerabilities and recommends targeted fixes. Built into Claude Code, the capability is rolling out in a limited research preview for Enterprise and Team customers.

The tool analyses code beyond traditional rule-based scanners, examining data flows and component interactions to identify complex, high-severity vulnerabilities. Findings undergo multi-stage verification, receive severity and confidence ratings, and are presented in a dashboard for human review.

Anthropic said the system re-examines its own results to reduce false positives before surfacing them to analysts. Teams can prioritise remediation based on severity ratings and iterate on suggested patches within familiar development workflows.
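Prioritising remediation by severity and confidence, as the article describes analysts doing in the dashboard, can be sketched as a simple sort. The `Finding` shape and rating scales below are assumptions for illustration, not Anthropic's actual schema.

```python
# Hedged sketch of triaging scanner findings: highest severity first,
# ties broken by confidence. Data model and scales are assumptions,
# not Claude Code Security's real output format.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: int      # assumed scale: 1 (low) .. 4 (critical)
    confidence: float  # assumed 0.0 .. 1.0, after false-positive review

def triage(findings: list[Finding]) -> list[Finding]:
    """Order the remediation queue by severity, then confidence."""
    return sorted(findings, key=lambda f: (f.severity, f.confidence), reverse=True)

queue = triage([
    Finding("SQL injection in login", severity=4, confidence=0.9),
    Finding("Verbose error message", severity=1, confidence=0.95),
    Finding("Path traversal in upload", severity=4, confidence=0.6),
])
print([f.title for f in queue])
```

Sorting on a `(severity, confidence)` tuple keeps the two criteria independent: a low-severity finding never outranks a critical one, however confident the scanner is about it.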

Claude Code Security builds on more than a year of cybersecurity research. Using Claude Opus 4.6, Anthropic reported discovering more than 500 long-undetected bugs in open-source projects through testing and external partnerships.

The company said AI will increasingly be used to scan global codebases, warning that attackers and defenders alike are adopting advanced models. Open-source maintainers can apply for expedited access as Anthropic expands the preview.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MWC 2026 to spotlight SK Telecom’s AI infrastructure vision

SK Telecom will present its end-to-end AI capabilities at MWC 2026, taking place from 2 to 5 March in Barcelona. Under the theme ‘AI for Infinite Possibilities’, the company will highlight AI infrastructure, models, and telecom applications.

The South Korea-based operator will showcase its AI data centre expertise, including infrastructure for a major Ulsan project and a high-performance GPU cluster. Its AI Data Center Infrastructure Manager will demonstrate real-time monitoring across integrated systems.

GPU-as-a-service solutions will also include the Petasus AI Cloud platform, AI Cloud Manager for resource optimisation, and the GAIA monitoring system. SK Telecom will introduce its AI Inference Factory, designed to integrate hardware and software into a unified stack for inference workloads.

In the telecom infrastructure space, the company will outline its AI-native network strategy, spanning embedded AI agents, AI-enabled RAN base stations, and on-device antenna tuning. Integrated sensing and communication technologies will preview autonomous networks and early 6G capabilities.

The booth will also feature SK Telecom’s 519-billion-parameter A.X K1 large language model and open-source variants. Applications for physical AI, including digital twins and robot-training platforms that link virtual and physical environments, will be demonstrated.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Data breach at PayPal prompts password resets and transaction refunds

PayPal has notified some customers of a data breach linked to its Working Capital loan application, after unauthorised access between 1 July and 12 December 2025 exposed personal information. Letters dated 10 February confirm that around 100 customers were potentially affected.

The incident was linked to an error in the Working Capital application, described as a ‘code change’. PayPal said it ‘terminated the unauthorised access to PayPal’s systems’ after discovery.

In a statement sent following publication, a PayPal spokesperson said ‘When there is a potential exposure of customer information, PayPal is required to notify affected customers. In this case, PayPal’s systems were not compromised. As such, we contacted the approximately 100 customers who were potentially impacted to provide awareness on this matter.’

Data potentially accessed includes names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth. PayPal confirmed a small number of unauthorised transactions and said refunds were issued. Affected users had passwords reset and were offered credit monitoring.

Previous incidents include a 2023 credential stuffing attack that affected nearly 35,000 accounts and phishing campaigns that abused legitimate infrastructure. The company said it continues to use manual investigations and automated tools to mitigate fraud.

Customers are advised to use unique passwords, avoid unsolicited links, verify urgent messages directly via their accounts, and enable passkeys where available. Even limited breaches can heighten risks of targeted phishing and identity theft, especially for small businesses.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Phishing messages target IndiaAI and Impact Summit 2026 participants

IndiaAI has issued an urgent advisory warning of a phishing campaign targeting attendees of the India AI Impact Summit 2026. Fraudulent SMS and WhatsApp messages claim refunds are pending and request sensitive financial details.

Organisers said the messages are not official and have not been authorised. The fraudulent messages urge recipients to click links and provide full card numbers, WhatsApp numbers, and other contact information to ‘process’ refunds.

IndiaAI advised participants not to click suspicious links or share personal or banking information with unverified sources. Attendees in India are encouraged to delete such messages immediately and block the sender’s number.

Anyone who may have submitted details through a suspicious link should contact their bank without delay to secure their accounts. Organisers stressed that event-related communication will only be shared through official channels.

The advisory was issued under the AI Impact Summit 2026 banner, themed ‘Welfare for All | Happiness of All’, as authorities seek to prevent financial fraud linked to the high-profile gathering.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia steps into global AI leadership to shape AI future

The Global Partnership on Artificial Intelligence (GPAI), a multilateral initiative hosted by the OECD and launched by the G7, has officially welcomed Saudi Arabia as a new member. The move reflects the Kingdom’s commitment to shaping global AI governance and ethical technology use.

Accession is led by the Saudi Data and Artificial Intelligence Authority and supported by Crown Prince Mohammed bin Salman. Joining GPAI aligns with Vision 2030, which aims to localise advanced technologies and boost the digital economy’s contribution to GDP.

Through membership in GPAI, which unites over 40 countries, Saudi Arabia will help establish international AI standards, promote human-centric and responsible AI development, and strengthen global cooperation in the sector.

Officials also anticipate that the move will attract high-quality international investment, leveraging the Kingdom’s expanding regulatory framework and growing AI and data ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!