Miljodata hack exposes data of nearly 15% of Swedish population

Swedish prosecutors have confirmed that a cyberattack on IT systems provider Miljodata exposed the personal data of 1.5 million people, nearly 15% of Sweden’s population. The attack occurred during the weekend of August 23–24.

Authorities said the stolen data has been leaked online and includes names, addresses, and contact details. Prosecutor Sandra Helgadottir said the group Datacarry has claimed responsibility, though no foreign state involvement is suspected.

Media in Sweden reported that the hackers demanded 1.5 bitcoin (around $170,000) to prevent the release of the data. Miljodata confirmed the information has now been published on the darknet.

The Swedish Authority for Privacy Protection has received over 250 breach notifications, with 164 municipalities and four regional authorities impacted. Employees in Gothenburg were among those affected, according to SVT.

Private companies, including Volvo, SAS, and GKN Aerospace, also reported compromised data. Investigators are working to identify the perpetrators as the breach’s scale continues to raise concerns nationwide.

Australia outlines guidelines for social media age ban

Australia has released its regulatory guidance for the incoming social media age restriction law, which takes effect on December 10. Users under 16 will be barred from holding accounts on most major platforms, including Instagram, TikTok, and Facebook.

The new guidance details what are considered ‘reasonable steps’ for compliance. Platforms must detect and remove underage accounts, communicating clearly with affected users. It remains uncertain whether removed accounts will have their content deleted or if they can be reactivated once the user turns 16.

Platforms are also expected to block attempts to re-register, including the use of VPNs or other workarounds. Companies are encouraged to implement a multi-step age verification process and provide users with a range of options, rather than relying solely on government-issued identification.

Blanket age verification won’t be required, nor will platforms need to store personal data from verification processes. Instead, companies must demonstrate effectiveness through system-level records. Existing data, such as an account’s creation date, may be used to estimate age.
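
The guidance does not prescribe a particular technique for this, so the following is only a minimal sketch of the account-creation-date idea; the field names, threshold and helper functions are assumptions made for illustration, not anything taken from the regulatory guidance. It simply combines the age a user declared at signup with how long the account has existed.

```python
from datetime import date

MIN_AGE = 16  # cut-off set by the Australian law taking effect on December 10

def estimated_age_today(declared_age_at_signup: int, created: date,
                        today: date | None = None) -> int:
    """Estimate a user's current age from the age declared at account
    creation plus the years elapsed since then (rough heuristic)."""
    today = today or date.today()
    years_since_signup = (today - created).days // 365
    return declared_age_at_signup + years_since_signup

def should_flag_for_review(declared_age_at_signup: int, created: date) -> bool:
    """Flag accounts whose estimated current age falls below the cut-off,
    so the platform can notify the user before deactivating the account."""
    return estimated_age_today(declared_age_at_signup, created) < MIN_AGE

# Example: an account created in mid-2022 by a user who declared they were 12
print(should_flag_for_review(12, date(2022, 6, 1)))  # True (estimated age ~15)
```

A real system would weigh many more signals and, as the guidance notes, would need to demonstrate its effectiveness through system-level records rather than by storing verification data.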

Under-16s will still be able to view content without logging in, for example, watching YouTube videos in a browser. However, shared access to adult accounts on family devices could present enforcement challenges.

Communications Minister Anika Wells stated that there is ‘no excuse for non-compliance.’ Each platform must now develop its own strategy to meet the law’s requirements ahead of the fast-approaching deadline.

China proposes independent oversight committees to strengthen data protection

The Cyberspace Administration of China (CAC) has proposed new rules requiring major online platforms to establish independent oversight committees focused on personal data protection. The draft regulation, released on 13 September 2025, is open for public comment until 12 October 2025.

Under the proposal, platforms with large user bases and complex operations must form committees of at least seven members, two-thirds of whom must be external experts without ties to the company. These experts must have at least three years of experience in data security and be well-versed in relevant laws and standards.
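
As a purely illustrative reading of those headline numbers (the function and its inputs are assumptions, not an official compliance check), the composition rule can be written as a short validation:

```python
import math

MIN_MEMBERS = 7            # committee must have at least seven members
EXTERNAL_FRACTION = 2 / 3  # at least two-thirds must be external experts
MIN_EXPERIENCE_YEARS = 3   # external experts need 3+ years in data security

def committee_meets_draft_rules(total_members: int,
                                external_experts_years: list[int]) -> bool:
    """Check the headline composition rules in the CAC draft (sketch only)."""
    required_external = math.ceil(total_members * EXTERNAL_FRACTION)
    qualified = [y for y in external_experts_years if y >= MIN_EXPERIENCE_YEARS]
    return total_members >= MIN_MEMBERS and len(qualified) >= required_external

print(committee_meets_draft_rules(7, [3, 4, 5, 6, 10]))  # True
print(committee_meets_draft_rules(7, [3, 4, 5, 2]))      # False: only 3 qualify
```

Read this way, the smallest compliant committee has seven members, at least five of whom are external experts with the required three years of experience.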

The committees will oversee sensitive data handling, cross-border transfers, security incidents, and regulatory compliance. They are also tasked with maintaining open communication channels with users about data concerns.

If a platform fails to act on the committee's findings and cannot provide a satisfactory explanation, the issue can be escalated to provincial regulators in China.

The CAC says the move aims to enhance transparency and accountability by involving independent experts in monitoring and flagging high-risk data practices.

Millions of customer records stolen in Kering luxury brand data breach

Kering has confirmed a data breach affecting several of its luxury brands, including Gucci, Balenciaga, Brioni, and Alexander McQueen, after unauthorised access to its Salesforce systems compromised millions of customer records.

Hacking group ShinyHunters has claimed responsibility, alleging it exfiltrated 43.5 million records from Gucci and nearly 13 million from the other brands. The stolen data includes names, email addresses, dates of birth, sales histories, and home addresses.

Kering stated that the incident occurred in June 2025 and did not compromise bank or credit card details or national identifiers. The company has reported the breach to the relevant regulators and is notifying the affected customers.

Evidence shared by ShinyHunters suggests Balenciaga made an initial ransom payment of €500,000 before negotiations broke down. The group released sample data and chat logs to support its claims.

ShinyHunters has exploited Salesforce weaknesses in previous attacks targeting luxury, travel, and financial firms. Questions remain about the total number of affected customers and the potential exposure of other Kering brands.

European regulators push for stronger oversight in crypto sector

European regulators from Italy, France, and Austria have called for changes to the EU’s Markets in Crypto-Assets Regulation (MiCA). Their proposals aim to fix supervisory gaps, improve cybersecurity, and simplify token white paper approvals.

The regulation, which came into force in December 2024, requires prior authorisation for firms offering crypto-related services in Europe. However, early enforcement has shown significant gaps in how national authorities apply the rules.

Regulators argue these differences undermine investor protection and threaten the stability of the European internal market.

Concerns have also been raised about non-EU platforms serving European clients through intermediaries outside MiCA’s scope. To counter this, authorities recommend restricting such activity and ensuring intermediaries only use platforms compliant with MiCA or equivalent standards.

Additional measures include mandatory independent cybersecurity audits, conducted both before and after authorisation, to bolster resilience against cyber-attacks.

The proposals suggest giving the European Securities and Markets Authority (ESMA) direct oversight of major crypto providers and centralising white paper filings. Regulators say the changes would boost legal clarity, cut investor risks, and level the playing field for European firms against global rivals.

Generative AI enables rapid phishing attacks on older users

A recent study has shown that AI chatbots can generate convincing phishing emails targeting older people. Researchers tested six major chatbots (Grok, ChatGPT, Claude, Meta AI, DeepSeek, and Google’s Gemini), asking them to draft scam emails posing as charitable organisations.

Of 108 senior volunteers, roughly 11% clicked on the AI-written links, highlighting the ease with which cybercriminals could exploit such tools.

Some chatbots initially declined the harmful requests, but minor adjustments, such as stating the task was for research purposes, circumvented these safeguards.

Grok, in particular, produced messages urging recipients to ‘click now’ and join a fictitious charity, demonstrating how generative AI can amplify the persuasiveness of scams. Researchers warn that criminals could use AI to conduct large-scale phishing campaigns at minimal cost.

Phishing remains the most common cybercrime in the US, according to the FBI, with seniors disproportionately affected. Last year, Americans over 60 lost nearly $5 billion to phishing attacks, an increase driven partly by generative AI.

The study underscores the urgent need for awareness and protection measures among vulnerable populations.

Experts note that AI’s ability to generate varied scam messages rapidly poses a new challenge for cybersecurity, as it allows fraudsters to scale operations quickly while targeting specific demographics, including older people.

Telecom industry outlines vision for secure 6G

Telecom experts say 6G must be secure by design as planning for the next generation of mobile networks accelerates.

Industry leaders warn that 6G will vastly expand the attack surface, with autonomous vehicles, drones, industrial robots and AR systems all reliant on ultra-low latency connections. AI will be embedded at every layer, creating opportunities for optimisation but also new risks such as model poisoning.

Quantum threats are also on the horizon, with adversaries potentially able to decrypt sensitive data. Quantum-resistant cryptography is expected to be a cornerstone of 6G defences.

With standards due by 2029, experts stress cooperation among regulators, equipment vendors and operators. Security, they argue, must be as fundamental to 6G as speed and sustainability.

Hackers use ChatGPT for fake ID attack

A hacking group has reportedly used ChatGPT to generate a fake military ID in a phishing attack targeting South Korea. The incident, uncovered by cybersecurity firm Genians, shows how AI can be misused to make malicious campaigns more convincing.

Researchers said the group, known as Kimsuky, crafted a counterfeit South Korean military identification card to support a phishing email. While the document looked genuine, the email instead contained links to malware designed to extract data from victims’ devices.

Targets included journalists, human rights activists and researchers. Kimsuky has a history of cyber-espionage, and US officials have previously linked the group to global intelligence-gathering operations.

The findings highlight a wider trend of AI being exploited for cybercrime, from creating fake résumés to planning attacks and developing malware. Genians warned that attackers are increasingly using AI to impersonate trusted organisations; the full scale of the breach remains unknown.

Cyber attacks pose growing threat to shipping industry

The maritime industry faces rising cyber threats, with Nigerian gangs among the most active attackers of shipping firms. HFW lawyers say ‘man-in-the-middle’ frauds are now common, letting hackers intercept communications and steal sensitive financial or operational data.

Costs from cyber attacks are rising sharply, with average mitigation expenses for shipping firms doubling to $550,000 (£410,000) between 2022 and 2023. In cases where hackers remain embedded, ransom payments can reach $3.2m.

The rise in attacks coincides with greater digitisation, satellite connectivity such as Starlink, and increased use of onboard sensors.

Threats now extend beyond financial extortion, with GPS jamming and spoofing posing risks to navigation. Incidents such as the grounding of MSC Antonia in the Red Sea illustrate potential physical damage from cyber interference.

Industry regulators are responding, with the International Maritime Organization introducing mandatory cybersecurity measures into ship management systems. Experts say awareness has grown, and shipping firms are gradually strengthening defences against criminal and state cyber threats.

Industry leaders urge careful AI use in research projects

The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.

Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.

Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.

OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.

ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.
