The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.
Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.
Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.
OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.
ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.
The UK government is expanding its use of AI across prisons, probation and courts to monitor offenders, assess risk and prevent crime before it occurs under the AI Action Plan.
One key measure involves an AI violence prediction tool that uses factors like an offender’s age, past violent incidents and institutional behaviour to identify those most likely to pose risk.
These predictions will inform decisions to increase supervision or relocate prisoners in custody wings ahead of potential violence.
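To make the mechanism concrete, below is a minimal Python sketch of the kind of rules-based risk scoring and decision mapping described above. The government's actual tool is not public, so the factors are taken from this report while the weights, thresholds and field names are purely illustrative assumptions.

```python
# Hypothetical sketch of a rules-based violence risk score using the factors
# named in the report (age, past violent incidents, institutional behaviour).
# Weights and thresholds are illustrative only, not the real tool's logic.
from dataclasses import dataclass

@dataclass
class OffenderRecord:
    age: int
    past_violent_incidents: int
    recent_conduct_reports: int  # adjudications for poor institutional behaviour

def violence_risk_score(record: OffenderRecord) -> float:
    """Return an illustrative risk score in the range 0 to 1."""
    score = 0.0
    if record.age < 25:  # younger offenders weighted higher in this sketch
        score += 0.2
    score += min(record.past_violent_incidents * 0.15, 0.5)
    score += min(record.recent_conduct_reports * 0.1, 0.3)
    return min(score, 1.0)

def recommend_action(record: OffenderRecord) -> str:
    """Map the score to the kinds of decisions described in the report."""
    score = violence_risk_score(record)
    if score >= 0.7:
        return "relocate to a supervised custody wing"
    if score >= 0.4:
        return "increase supervision"
    return "no change"

print(recommend_action(OffenderRecord(age=22, past_violent_incidents=3, recent_conduct_reports=2)))
```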
Another component scans seized mobile phone content to highlight secret or coded messages that may signal plotting of violent acts, intelligence operations or contraband activities.
Officials are also working to merge offender records across courts, prisons and probation to create a single digital identity for each offender.
UK authorities say the goal is to reduce reoffending and prioritise public and staff safety, while shifting resources from reactive investigations to proactive prevention. Civil liberties groups caution about privacy, bias and the risk of overreach if transparency and oversight are not built in.
France’s national cybersecurity agency, CERT-FR, has confirmed that Apple issued another set of threat notifications on 3 September 2025. The alerts inform certain users that devices linked to their iCloud accounts may have been targeted by spyware.
These latest alerts mark this year’s fourth campaign, following earlier waves in March, April and June. Targeted individuals include journalists, activists, politicians, lawyers and senior officials.
CERT-FR says the attacks are highly sophisticated and involve mercenary spyware tools. Many intrusions appear to exploit zero-day or zero-click vulnerabilities, meaning devices can be compromised without any interaction from the victim.
Apple advises victims to preserve threat notifications, avoid altering device settings that could obscure forensic evidence, and contact authorities and cybersecurity specialists. Users are encouraged to enable features like Lockdown Mode and update devices.
Jaguar Land Rover has told staff to stay at home until at least Wednesday as the company continues to recover from a cyberattack.
The hack forced JLR to shut down systems on 31 August, disrupting operations at plants in Halewood, Solihull and Wolverhampton, UK. Production was initially paused until 9 September, but the shutdown has now been extended by at least another week.
Business minister Sir Chris Bryant said it was too early to determine whether the attack was state-sponsored. The incident follows a wave of cyberattacks in the UK, including recent breaches at M&S, Harrods and train operator LNER.
The Swiss government has proposed a new regulation that would require digital service providers with more than 5,000 users to collect government-issued identification, retain subscriber data for six months, and, in some cases, disable encryption. The proposal, which does not require parliamentary approval, has triggered alarm among privacy advocates and technology companies worldwide.
The measure would impact services such as VPNs, encrypted email, and messaging platforms. The regulation would mandate providers to collect users’ email addresses, phone numbers, IP addresses, and device port numbers, and to share them with authorities upon request, without the need for a court order.
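As an illustration of the retention obligation, the following Python sketch shows the kind of subscriber record a provider might have to keep for six months and hand over on request. The field names, the six-month window and the deletion check are assumptions for illustration; the draft ordinance's exact data model is not described here.

```python
# Hypothetical sketch of a retained subscriber record under the proposal:
# email, phone number, IP address and source port, kept for six months.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=183)  # roughly six months (illustrative)

@dataclass
class RetainedSubscriberRecord:
    email: str
    phone_number: str
    ip_address: str
    source_port: int
    collected_at: datetime

    def is_due_for_deletion(self, now: datetime) -> bool:
        """True once the six-month retention window has elapsed."""
        return now - self.collected_at > RETENTION_PERIOD

record = RetainedSubscriberRecord(
    email="user@example.com",
    phone_number="+41790000000",
    ip_address="203.0.113.7",
    source_port=51432,
    collected_at=datetime(2025, 3, 1, tzinfo=timezone.utc),
)
print(record.is_due_for_deletion(datetime.now(timezone.utc)))
```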
Swiss official Jean-Louis Biberstein emphasised that the proposed regulation includes strict safeguards to prevent mass surveillance, framing the initiative as a necessary measure to address cyberattacks, organised crime, and terrorism.
While the timeline for implementation remains uncertain, the government of Switzerland is committed to a public consultation process, allowing stakeholders to provide input before any final decision is made.
South Korea and NATO have pledged closer cooperation on cybersecurity following high-level talks in Seoul this week, according to Yonhap News Agency.
The discussions, led by Ambassador for International Cyber Affairs Lee Tae Woo and NATO Assistant Secretary General Jean-Charles Ellermann-Kingombe, focused on countering cyber threats and assessing risks in the Indo-Pacific and Euro-Atlantic regions.
Launched in 2023, the high-level cyber dialogue aims to deepen collaboration between South Korea and NATO in the cybersecurity domain.
The meeting followed talks between Defence Minister Ahn Gyu-back and NATO Military Committee chair Giuseppe Cavo Dragone during the Seoul Defence Dialogue earlier this week.
Dragone said cooperation would expand across defence exchanges, information sharing, cyberspace, space, and AI as ties between Seoul and NATO strengthen.
The UK’s National Cyber Security Centre has released version 4.0 of its Cyber Assessment Framework to help organisations protect essential services from rising cyber threats.
The updated CAF provides a structured approach for assessing and improving cybersecurity and resilience across critical sectors.
Version 4.0 introduces a deeper focus on attacker methods and motivations to inform risk decisions, ensures software in essential services is developed and maintained securely, and strengthens guidance on threat detection through security monitoring and threat hunting.
AI-related cyber risks are also now covered more thoroughly throughout the framework.
The CAF primarily supports energy, healthcare, transport, digital infrastructure, and government organisations, helping them meet regulatory obligations such as the NIS Regulations.
Developed in consultation with UK cyber regulators, the framework provides clear benchmarks for assessing security outcomes relative to threat levels.
Authorities encourage system owners to adopt CAF 4.0 alongside complementary tools such as Cyber Essentials, the Cyber Resilience Audit, and Cyber Adversary Simulation services. These combined measures enhance confidence and resilience across the nation’s critical infrastructure.
Vietnam’s National Credit Information Centre (CIC), a key financial data hub under the State Bank of Vietnam, has confirmed a cyberattack, according to the Vietnam Cyber Emergency Response Centre (VNCERT). Initial investigations suggest the attack was a deliberate attempt by cybercriminals to steal personal data.
VNCERT reported signs of unauthorised data access and potential leaks of sensitive information. The Department of Cybersecurity and High-Tech Crime Prevention has tasked VNCERT with leading the incident response and coordinating with major cybersecurity firms, including Viettel, VNPT, and NCS.
Authorities have deployed technical measures to contain the breach, assess its scope, and preserve the integrity of the national financial system. Evidence is being gathered for possible legal proceedings, while the full extent of compromised data remains under investigation.
VNCERT has warned individuals and organisations not to download, share, or exploit any leaked data, citing Vietnam’s data protection laws. Government agencies and financial institutions have been urged to audit their systems and comply with national cybersecurity standards.
Cybersecurity expert Ngô Minh Hiếu noted that critical banking data, such as passwords and credit card numbers, is not stored in CIC, suggesting financial transactions remain unaffected.
Marks & Spencer’s technology chief, Rachel Higham, has stepped down less than 18 months after joining the retailer from BT.
Her departure comes months after an April cyberattack by the Scattered Spider group disrupted systems and cost the company around £300 million. Online operations, including click-and-collect, were temporarily halted before being gradually restored.
In a memo to staff, the company described Higham as a steady hand during a turbulent period and wished her well. M&S has said it does not intend to replace her role, leaving open questions over succession.
The retailer expects part of the financial hit to be offset by insurance. It has declined to comment further on whether Higham will receive a payoff.
The California State Assembly passed SB 243, advancing legislation that would make the state the first in the USA to regulate AI companion chatbots. The bill, which aims to safeguard minors and vulnerable users, passed with bipartisan support and now heads to the state Senate for a final vote on Friday.
If signed into law by Governor Gavin Newsom, SB 243 would take effect on 1 January 2026. It would require companies like OpenAI, Replika, and Character.AI to implement safety protocols for AI systems that simulate human companionship.
The law would prohibit such chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content. For minors, platforms would have to provide recurring alerts every three hours, reminding them that they are interacting with an AI and encouraging them to take breaks.
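As an illustration of the three-hour reminder rule, the Python sketch below shows one way a platform might track when a minor is next due a notice. The class name, message text and interval handling are illustrative assumptions rather than anything prescribed by SB 243.

```python
# Hypothetical sketch of the recurring-reminder requirement for minors:
# surface a "you are talking to an AI" notice at least every three hours
# of continued interaction.
from datetime import datetime, timedelta, timezone
from typing import Optional

REMINDER_INTERVAL = timedelta(hours=3)

class CompanionSession:
    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_reminder_at = datetime.now(timezone.utc)

    def maybe_remind(self) -> Optional[str]:
        """Return a reminder if the user is a minor and three hours have passed."""
        if not self.user_is_minor:
            return None
        now = datetime.now(timezone.utc)
        if now - self.last_reminder_at >= REMINDER_INTERVAL:
            self.last_reminder_at = now
            return "Reminder: you are chatting with an AI, not a person. Consider taking a break."
        return None
```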
The bill also introduces annual transparency and reporting requirements, effective 1 July 2027. Users harmed by violations could seek damages of up to $1,000 per incident, injunctive relief and attorney’s fees.
The legislation follows the suicide of teenager Adam Raine after troubling conversations with ChatGPT and comes amid mounting scrutiny of AI’s impact on children. Lawmakers nationwide and the Federal Trade Commission (FTC) are increasing pressure on AI companies in the USA to bolster safeguards.
Though earlier versions of the bill included stricter requirements, like banning addictive engagement tactics, those provisions were removed. Still, backers say the final bill strikes a necessary balance between innovation and public safety.