UK plans AI systems to monitor offenders and prevent crimes before they occur

The UK government is expanding its use of AI across prisons, probation and courts under its AI Action Plan, aiming to monitor offenders, assess risk and prevent crime before it occurs.

One key measure is an AI violence prediction tool that uses factors such as an offender’s age, history of violent incidents and institutional behaviour to identify those most likely to pose a risk.

These predictions will inform decisions to increase supervision or relocate prisoners in custody wings ahead of potential violence.
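The mechanics of the tool have not been made public. As a rough illustration only, a weighted risk score of the kind described above might be sketched as follows; the factor names, weights and threshold here are invented for illustration and are not drawn from the actual Ministry of Justice system.

```python
import math

def violence_risk_score(age: int, past_violent_incidents: int,
                        behaviour_reports: int) -> float:
    """Combine factors into a single score in [0, 1] via a logistic curve.

    Weights are hypothetical: repeat incidents and poor institutional
    behaviour raise the score, while older age lowers it slightly.
    """
    z = (-0.05 * age) + (0.6 * past_violent_incidents) + (0.3 * behaviour_reports)
    return 1 / (1 + math.exp(-z))

def flag_for_review(score: float, threshold: float = 0.7) -> bool:
    """Offenders above the threshold would be queued for closer supervision."""
    return score >= threshold
```

In practice, the threshold choice embodies the trade-off critics raise: a low threshold flags more people (and more false positives), while a high one risks missing genuine threats.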

Another component scans seized mobile phone content to highlight secret or coded messages that may signal plotting of violent acts, intelligence operations or contraband activities.
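How the scanning works has not been disclosed; production tools of this kind typically rely on trained language models rather than word lists. Purely as a toy sketch, flagging messages against a set of hypothetical coded-language indicators could look like this:

```python
import re

# Invented indicator phrases for illustration only; a real system would
# use far richer signals than a fixed keyword list.
CODE_WORDS = {"package", "drop", "green light"}

def flag_message(text: str) -> bool:
    """Return True if the message contains any indicator phrase."""
    lowered = text.lower()
    tokens = set(re.findall(r"[a-z']+", lowered))
    # Multi-word indicators are matched as substrings; single words as tokens.
    return any(
        (" " in w and w in lowered) or w in tokens
        for w in CODE_WORDS
    )
```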

Officials are also working to merge offender records across courts, prisons and probation to create a single digital identity for each offender.
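The core technical task here is record linkage: matching entries that refer to the same person despite formatting differences across systems. A minimal sketch, with hypothetical field names (the real system's design is not public), could group records on a normalised key:

```python
def match_key(name: str, dob: str) -> tuple:
    """Normalise case and whitespace so 'John SMITH' and 'john smith' collide."""
    return (" ".join(name.lower().split()), dob)

def merge_records(records: list[dict]) -> dict:
    """Group court, prison and probation records that share a match key."""
    identities: dict = {}
    for rec in records:
        key = match_key(rec["name"], rec["dob"])
        identities.setdefault(key, []).append(rec)
    return identities
```

Real-world linkage is harder than this sketch suggests: misspellings, aliases and transposed dates usually require fuzzy matching and manual review, which is one reason such mergers raise accuracy concerns.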

UK authorities say the goal is to reduce reoffending and prioritise public and staff safety, while shifting resources from reactive investigations to proactive prevention. Civil liberties groups caution about privacy, bias and the risk of overreach if transparency and oversight are not built in.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple notifies French users after commercial spyware threats surge

France’s national cybersecurity agency, CERT-FR, has confirmed that Apple issued another set of threat notifications on 3 September 2025. The alerts inform certain users that devices linked to their iCloud accounts may have been targeted by spyware.

These latest alerts mark this year’s fourth campaign, following earlier waves in March, April and June. Targeted individuals include journalists, activists, politicians, lawyers and senior officials.

CERT-FR says the attacks are highly sophisticated and involve mercenary spyware tools. Many intrusions appear to exploit zero-day or zero-click vulnerabilities, meaning a device can be compromised without any interaction from the victim.

Apple advises victims to preserve threat notifications, avoid altering device settings that could obscure forensic evidence, and contact authorities and cybersecurity specialists. Users are encouraged to enable features like Lockdown Mode and update devices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB adopts guidelines on the interplay between DSA and GDPR

The European Data Protection Board (EDPB) has adopted its first guidelines on how the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR) work together. The aim is to clarify how the GDPR should be applied in the context of the DSA.

Presented during the EDPB’s September plenary, the guidelines ensure consistent interpretation where the DSA involves personal data processing by online intermediaries such as search engines and platforms. While DSA enforcement rests with the designated authorities, the EDPB’s input supports harmonised application across the EU’s evolving digital regulatory framework, including:

  • Notice-and-action mechanisms that allow individuals or entities to report illegal content,
  • Recommender systems used by online platforms to automatically present content to users in a particular order or with particular prominence,
  • Provisions protecting minors’ privacy, safety and security, including the prohibition of profiling-based advertising targeting them,
  • Transparency of advertising on online platforms, and
  • The prohibition of profiling-based advertising using special categories of data.

Following these initial guidelines on the GDPR and DSA, the EDPB is now working with the European Commission on joint guidelines covering the interplay between the Digital Markets Act (DMA) and the GDPR, as well as between the upcoming AI Act and EU data protection law. The aim is to ensure consistency and coherent safeguards across the evolving regulatory landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU enforces tougher cybersecurity rules under NIS2

The European Union’s NIS2 directive has officially come into force, imposing stricter cybersecurity duties on thousands of organisations.

Adopted in 2022 and transposed into national law by late 2024, the rules extend beyond critical infrastructure to cover more industries. Energy, healthcare, transport, ICT, and even waste management firms now face mandatory compliance.

Measures include multifactor authentication, encryption, backup systems, and stronger supply chain security. Senior executives are held directly responsible for failures, with penalties ranging from heavy fines to operational restrictions.

Companies must also report major incidents promptly to national authorities. Unlike ISO certifications, NIS2 requires organisations to prove compliance through internal processes or independent audits, depending on national enforcement.

Analysts warn that firms still reliant on legacy systems face a difficult transition. Yet experts agree the directive signals a decisive shift: cybersecurity is now a legal duty, not simply best practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jaguar Land Rover extends production halt after cyberattack

Jaguar Land Rover has told staff to stay at home until at least Wednesday as the company continues to recover from a cyberattack.

The hack forced JLR to shut down systems on 31 August, disrupting operations at plants in Halewood, Solihull and Wolverhampton, UK. Production was initially paused until 9 September but has now been extended for at least another week.

Business minister Sir Chris Bryant said it was too early to determine whether the attack was state-sponsored. The incident follows a wave of cyberattacks in the UK, including recent breaches at M&S, Harrods and train operator LNER.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Educators rethink assignments as AI becomes widespread

Educators are confronting a new reality as AI tools like ChatGPT become widespread among students. Traditional take-home assignments and essays are increasingly at risk as students commonly use AI chatbots to complete schoolwork.

Schools are responding by moving more writing tasks into the classroom and monitoring student activity. Teachers are also integrating AI into lessons, teaching students how to use it responsibly for research, summarising readings, or improving drafts, rather than as a shortcut to cheat.

Policies on AI use still vary widely. Some classrooms allow AI tools for grammar checks or study aids, while others enforce strict bans. Teachers are shifting away from take-home essays, adopting in-class tests, lockdown browsers, or flipped classrooms to manage AI’s impact better. 

The inconsistency often leaves students unsure about acceptable use and challenges educators to uphold academic integrity.

Institutions like the University of California, Berkeley, and Carnegie Mellon have implemented policies promoting ‘AI literacy,’ explaining when and how AI can be used, and adjusting assessments to prevent misuse.

As AI continues improving, educators seek a balance between embracing technology’s potential and safeguarding academic standards. Teachers emphasise guidance, structured use, and supervision to ensure AI supports learning rather than undermining it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Switzerland weighs new digital security measures

The Swiss government has proposed a new regulation that would require digital service providers with more than 5,000 users to collect government-issued identification, retain subscriber data for six months, and, in some cases, disable encryption. The proposal, which does not require parliamentary approval, has triggered alarm among privacy advocates and technology companies worldwide.

The measure would impact services such as VPNs, encrypted email, and messaging platforms. The regulation would mandate providers to collect users’ email addresses, phone numbers, IP addresses, and device port numbers, and to share them with authorities upon request, without the need for a court order.

Swiss official Jean-Louis Biberstein emphasised that the proposed regulation includes strict safeguards to prevent mass surveillance, framing the initiative as a necessary measure to address cyberattacks, organised crime, and terrorism.

While the timeline for implementation remains uncertain, the government of Switzerland is committed to a public consultation process, allowing stakeholders to provide input before any final decision is made.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NATO and Seoul expand cybersecurity dialogue and defence ties

South Korea and NATO have pledged closer cooperation on cybersecurity following high-level talks in Seoul this week, according to Yonhap News Agency.

The discussions, led by Ambassador for International Cyber Affairs Lee Tae Woo and NATO Assistant Secretary General Jean-Charles Ellermann-Kingombe, focused on countering cyber threats and assessing risks in the Indo-Pacific and Euro-Atlantic regions.

Launched in 2023, the high-level cyber dialogue aims to deepen collaboration between South Korea and NATO in the cybersecurity domain.

The meeting followed talks between Defence Minister Ahn Gyu-back and NATO Military Committee chair Giuseppe Cavo Dragone during the Seoul Defence Dialogue earlier this week.

Dragone said cooperation would expand across defence exchanges, information sharing, cyberspace, space, and AI as ties between Seoul and NATO strengthen.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK launches CAF 4.0 for cybersecurity

The UK’s National Cyber Security Centre has released version 4.0 of its Cyber Assessment Framework to help organisations protect essential services from rising cyber threats.

The updated CAF provides a structured approach for assessing and improving cybersecurity and resilience across critical sectors.

Version 4.0 introduces a deeper focus on attacker methods and motivations to inform risk decisions, ensures software in essential services is developed and maintained securely, and strengthens guidance on threat detection through security monitoring and threat hunting.

AI-related cyber risks are also now covered more thoroughly throughout the framework.

The CAF primarily supports energy, healthcare, transport, digital infrastructure, and government organisations, helping them meet regulatory obligations such as the NIS Regulations.

Developed in consultation with UK cyber regulators, the framework provides clear benchmarks for assessing security outcomes relative to threat levels.

Authorities encourage system owners to adopt CAF 4.0 alongside complementary tools such as Cyber Essentials, the Cyber Resilience Audit, and Cyber Adversary Simulation services. These combined measures enhance confidence and resilience across the nation’s critical infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC opens inquiry into AI chatbots and child safety

The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.

Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.

Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.

The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.

Other companies receiving orders include Character.AI and Elon Musk’s xAI.

The probe follows growing public concern over the psychological effects of generative AI on young people.

Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!