Cyberattack compromises personal data used for DBS checks at UK college

Bracknell and Wokingham College has confirmed a cyberattack that compromised data collected for Disclosure and Barring Service (DBS) checks. The breach affects data used by Activate Learning and other institutions, including names, dates of birth, National Insurance numbers, and passport details.

Access Personal Checking Services (APCS) was alerted on August 17 by its supplier Intradev that the supplier's systems had been accessed without authorisation. While payment card details and criminal conviction records were not compromised, data submitted between December 2024 and May 8, 2025, was copied.

APCS stated that its own networks and those of Activate Learning were not breached. The organisation is contacting only those data controllers where confirmed breaches have occurred and has advised that its services can continue to be used safely.

Activate Learning reported the incident to the Information Commissioner’s Office following a risk assessment. APCS is still investigating the full scope of the breach and has pledged to keep affected institutions and individuals informed as more information becomes available.

Individuals have been advised to monitor their financial statements closely, treat unexpected emails with caution in case they are phishing attempts, and keep security measures up to date, including passwords and two-factor authentication. Activate Learning emphasised the importance of staying vigilant to minimise risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Miljodata hack exposes data of nearly 15% of Swedish population

Swedish prosecutors have confirmed that a cyberattack on IT systems provider Miljodata exposed the personal data of 1.5 million people, nearly 15% of Sweden’s population. The attack occurred during the weekend of August 23–24.

Authorities said the stolen data has been leaked online and includes names, addresses, and contact details. Prosecutor Sandra Helgadottir said the group Datacarry has claimed responsibility, though no foreign state involvement is suspected.

Media in Sweden reported that the hackers demanded 1.5 bitcoin (around $170,000) to prevent the release of the data. Miljodata confirmed the information has now been published on the darknet.

The Swedish Authority for Privacy Protection has received over 250 breach notifications, with 164 municipalities and four regional authorities impacted. Employees in Gothenburg were among those affected, according to SVT.

Private companies, including Volvo, SAS, and GKN Aerospace, also reported compromised data. Investigators are working to identify the perpetrators as the breach’s scale continues to raise concerns nationwide.

AI search tools challenge Google’s dominance

AI tools are increasingly reshaping how people search online, with large language models like ChatGPT drawing millions away from traditional engines.

Montreal-based lawyer and consultant Anja-Sara Lahady says she now turns to ChatGPT instead of Google for everyday tasks such as meal ideas, interior decoration tips and drafting low-risk emails. She describes it as a second assistant rather than a replacement for legal reasoning.

ChatGPT’s weekly user base has surged to around 800 million, double the figure reported earlier in 2025. Data shows that nearly 6% of desktop searches are already directed to language models, compared with barely half that rate a year ago.

Academics such as Professor Feng Li argue that users favour AI tools because they reduce cognitive effort by providing clear summaries instead of multiple links. However, he warns that verification remains essential due to factual errors.

Google insists its search activity continues to expand, supported by AI Overviews and AI Mode, which offer more conversational and tailored answers.

Yet, testimony in a US antitrust case revealed that Google searches on Apple devices via Safari declined for the first time in two decades, underlining the competitive pressure from AI.

The rise of language models is also forcing a shift in digital marketing. Agencies report that LLMs highlight trusted websites, press releases and established media rather than social media content.

This change may influence consumer habits, with evidence suggesting that referrals from AI systems often lead to higher-quality sales conversions. For many users, AI now represents a faster and more personal route to decisions on products, travel or professional tasks.

Millions of customer records stolen in Kering luxury brand data breach

Kering has confirmed a data breach affecting several of its luxury brands, including Gucci, Balenciaga, Brioni, and Alexander McQueen, after unauthorised access to its Salesforce systems compromised millions of customer records.

Hacking group ShinyHunters has claimed responsibility, alleging it exfiltrated 43.5 million records from Gucci and nearly 13 million from the other brands. The stolen data includes names, email addresses, dates of birth, sales histories, and home addresses.

Kering stated that the incident occurred in June 2025 and did not compromise bank or credit card details or national identifiers. The company has reported the breach to the relevant regulators and is notifying the affected customers.

Evidence shared by ShinyHunters suggests Balenciaga made an initial ransom payment of €500,000 before negotiations broke down. The group released sample data and chat logs to support its claims.

ShinyHunters has exploited Salesforce weaknesses in previous attacks targeting luxury, travel, and financial firms. Questions remain about the total number of affected customers and the potential exposure of other Kering brands.

Generative AI enables rapid phishing attacks on older users

A recent study has shown that AI chatbots can generate compelling phishing emails targeting older people. Researchers tested six major chatbots (Grok, ChatGPT, Claude, Meta AI, DeepSeek, and Google’s Gemini) by asking them to draft scam emails posing as charitable organisations.

Of 108 senior volunteers, roughly 11% clicked on the AI-written links, highlighting the ease with which cybercriminals could exploit such tools.

Some chatbots initially declined the harmful requests, but minor adjustments, such as stating the task was for research purposes, circumvented these safeguards.

Grok, in particular, produced messages urging recipients to ‘click now’ and join a fictitious charity, demonstrating how generative AI can amplify the persuasiveness of scams. Researchers warn that criminals could use AI to conduct large-scale phishing campaigns at minimal cost.

Phishing remains the most common cybercrime in the US, according to the FBI, with seniors disproportionately affected. Last year, Americans over 60 lost nearly $5 billion to phishing attacks, an increase driven partly by generative AI.

The study underscores the urgent need for awareness and protection measures among vulnerable populations.

Experts note that AI’s ability to generate varied scam messages rapidly poses a new challenge for cybersecurity, as it allows fraudsters to scale operations quickly while targeting specific demographics, including older people.

Telecom industry outlines vision for secure 6G

Telecom experts say 6G must be secure by design as planning for the next generation of mobile networks accelerates.

Industry leaders warn that 6G will vastly expand the attack surface, with autonomous vehicles, drones, industrial robots and AR systems all reliant on ultra-low latency connections. AI will be embedded at every layer, creating opportunities for optimisation but also new risks such as model poisoning.

Quantum threats are also on the horizon, with adversaries potentially able to decrypt sensitive data. Quantum-resistant cryptography is expected to be a cornerstone of 6G defences.

With standards due by 2029, experts stress cooperation among regulators, equipment vendors and operators. Security, they argue, must be as fundamental to 6G as speed and sustainability.

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to offer cutting-edge privacy through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
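
The mechanism is easiest to see on a simple aggregate query. The sketch below is illustrative only: the function and parameter names are not from Google's implementation, and VaultGemma applies noise during model training rather than to query answers. It adds Laplace noise to a count, calibrated so that any one individual's presence barely shifts the output:

```python
import math
import random

def private_count(values, threshold, epsilon):
    """Differentially private count of values above a threshold.

    Laplace noise with scale sensitivity/epsilon hides any single
    individual's contribution: adding or removing one record changes
    the true count by at most 1, and the noise masks that change.
    """
    true_count = sum(1 for v in values if v > threshold)
    sensitivity = 1.0          # one record shifts the count by at most 1
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse transform from a uniform draw
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(0)  # fixed seed so the sketch is reproducible
ages = [34, 71, 45, 68, 29, 80, 55]
print(private_count(ages, threshold=60, epsilon=1.0))
```

A smaller epsilon means more noise and stronger privacy. Training-time variants of the idea, such as DP-SGD, add calibrated noise to gradient updates instead of query answers, which is what makes applying the technique to large language models difficult without hurting accuracy.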

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled, while maintaining comparable stability and efficiency to non-private LLMs.

This breakthrough could have significant implications for developers building privacy-sensitive AI systems, ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

Hackers use ChatGPT for fake ID attack

A hacking group has reportedly used ChatGPT to generate a fake military ID in a phishing attack targeting South Korea. The incident, uncovered by cybersecurity firm Genians, shows how AI can be misused to make malicious campaigns more convincing.

Researchers said the group, known as Kimsuky, crafted a counterfeit South Korean military identification card to support a phishing email. While the document looked genuine, the email instead contained links to malware designed to extract data from victims’ devices.

Targets included journalists, human rights activists and researchers. Kimsuky has a history of cyber-espionage. US officials previously linked the group to global intelligence-gathering operations.

The findings highlight a wider trend of AI being exploited for cybercrime, from creating fake résumés to planning attacks and developing malware. Genians warned that attackers are rapidly using AI to impersonate trusted organisations, while the full scale of the breach is unknown.

Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.

Anthropic introduces memory feature to Claude AI for workplace productivity

The AI startup Anthropic has added a memory feature to its Claude AI, designed to automatically recall details from earlier conversations, such as project information and team preferences.

Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.

Anthropic presents the tool as a way to improve workplace efficiency by sparing users from repeating instructions. Enterprise administrators have additional controls, including turning memory off entirely.

Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.

Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.

Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.
