Telecom experts say 6G must be secure by design as planning for the next generation of mobile networks accelerates.
Industry leaders warn that 6G will vastly expand the attack surface, with autonomous vehicles, drones, industrial robots and AR systems all reliant on ultra-low-latency connections. AI will be embedded at every layer, creating opportunities for optimisation but also new risks such as model poisoning.
Quantum threats are also on the horizon, with adversaries potentially able to decrypt sensitive data. Quantum-resistant cryptography is expected to be a cornerstone of 6G defences.
With standards due by 2029, experts stress cooperation among regulators, equipment vendors and operators. Security, they argue, must be as fundamental to 6G as speed and sustainability.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has unveiled VaultGemma, a new large language model built from the ground up around differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.
Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
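To make the mechanism concrete, here is a minimal sketch of the Laplace mechanism, the textbook way to release a single statistic with differential privacy. It illustrates the general principle of calibrated noise only; it is not how VaultGemma itself is trained (large models typically rely on variants of differentially private stochastic gradient descent), and the function and figures below are purely illustrative.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy.

    The noise scale is sensitivity / epsilon: the more one person's
    data can change the result (higher sensitivity), or the stronger
    the privacy guarantee (smaller epsilon), the more noise is added.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative example: privately release a patient count.
# Adding or removing one individual changes a count by at most 1,
# so the sensitivity is 1.
true_count = 4203
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true: {true_count}, private release: {noisy_count:.0f}")
```

The released figure stays useful in aggregate while any single individual’s contribution is masked; the challenge VaultGemma addresses is sustaining that guarantee across an entire training run rather than a single query.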
VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled while maintaining stability and efficiency comparable to non-private LLMs.
This breakthrough could have significant implications for developers building privacy-sensitive AI systems in fields ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.
Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A hacking group has reportedly used ChatGPT to generate a fake military ID in a phishing attack targeting South Korea. The incident, uncovered by cybersecurity firm Genians, shows how AI can be misused to make malicious campaigns more convincing.
Researchers said the group, known as Kimsuky, crafted a counterfeit South Korean military identification card to lend credibility to a phishing email. The document looked genuine, but the email contained links to malware designed to extract data from victims’ devices.
Targets included journalists, human rights activists and researchers. Kimsuky has a history of cyber-espionage, and US officials have previously linked the group to global intelligence-gathering operations.
The findings highlight a wider trend of AI being exploited for cybercrime, from creating fake résumés to planning attacks and developing malware. Genians warned that attackers are increasingly using AI to impersonate trusted organisations, and noted that the full scale of the breach remains unknown.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.
Head judge Denise Turner of the IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential for interpreting results and posing the right questions.
Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.
OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.
ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The AI startup Anthropic has added a memory feature to its Claude AI that is designed to automatically recall details from earlier conversations, such as project information and team preferences.
Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.
Anthropic presents the tool as a way to improve workplace efficiency by sparing users from having to repeat instructions. Enterprise administrators have additional controls, including the ability to turn memory off entirely.
Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.
Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.
Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Union’s NIS2 directive has officially come into force, imposing stricter cybersecurity duties on thousands of organisations.
Adopted in 2022 and transposed into national law by late 2024, the rules extend beyond critical infrastructure to cover more industries. Energy, healthcare, transport, ICT, and even waste management firms now face mandatory compliance.
Measures include multifactor authentication, encryption, backup systems, and stronger supply chain security. Senior executives are held directly responsible for failures, with penalties ranging from heavy fines to operational restrictions.
Companies must also report major incidents promptly to national authorities. Unlike ISO certifications, NIS2 requires organisations to prove compliance through internal processes or independent audits, depending on national enforcement.
Analysts warn that firms still reliant on legacy systems face a difficult transition. Yet experts agree the directive signals a decisive shift: cybersecurity is now a legal duty, not simply best practice.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Jaguar Land Rover has told staff to stay at home until at least Wednesday as the company continues to recover from a cyberattack.
The hack forced JLR to shut down systems on 31 August, disrupting operations at its UK plants in Halewood, Solihull and Wolverhampton. Production was initially paused until 9 September, but the halt has now been extended by at least another week.
Business minister Sir Chris Bryant said it was too early to determine whether the attack was state-sponsored. The incident follows a wave of cyberattacks in the UK, including recent breaches at M&S, Harrods and train operator LNER.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Korea and NATO have pledged closer cooperation on cybersecurity following high-level talks in Seoul this week, according to Yonhap News Agency.
The discussions, led by Ambassador for International Cyber Affairs Lee Tae Woo and NATO Assistant Secretary General Jean-Charles Ellermann-Kingombe, focused on countering cyber threats and assessing risks in the Indo-Pacific and Euro-Atlantic regions.
Launched in 2023, the high-level cyber dialogue aims to deepen collaboration between South Korea and NATO in the cybersecurity domain.
The meeting followed talks between Defence Minister Ahn Gyu-back and NATO Military Committee chair Giuseppe Cavo Dragone during the Seoul Defence Dialogue earlier this week.
Cavo Dragone said cooperation would expand across defence exchanges, information sharing, cyberspace, space, and AI as ties between Seoul and NATO strengthen.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The UK’s National Cyber Security Centre has released version 4.0 of its Cyber Assessment Framework to help organisations protect essential services from rising cyber threats.
The updated CAF provides a structured approach for assessing and improving cybersecurity and resilience across critical sectors.
Version 4.0 introduces a deeper focus on attacker methods and motivations to inform risk decisions, ensures software in essential services is developed and maintained securely, and strengthens guidance on threat detection through security monitoring and threat hunting.
AI-related cyber risks are also now covered more thoroughly throughout the framework.
The CAF primarily supports energy, healthcare, transport, digital infrastructure, and government organisations, helping them meet regulatory obligations such as the NIS Regulations.
Developed in consultation with UK cyber regulators, the framework provides clear benchmarks for assessing security outcomes relative to threat levels.
Authorities encourage system owners to adopt CAF 4.0 alongside complementary tools such as Cyber Essentials, the Cyber Resilience Audit, and Cyber Adversary Simulation services. These combined measures enhance confidence and resilience across the nation’s critical infrastructure.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.
Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.
Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.
The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.
Other companies receiving orders include Character.AI and Elon Musk’s xAI.
The probe follows growing public concern over the psychological effects of generative AI on young people.
Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!