China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.
The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.
High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
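The escalation requirement could look something like the following minimal sketch. Everything here is a hypothetical illustration of the flow the draft rules describe, not an implementation from the regulations: the keyword list, function names and routing fields are all assumptions.

```python
# Hypothetical sketch of the escalation flow the draft rules describe:
# high-risk chats are handed to a human operator and a guardian is alerted.
HIGH_RISK_TERMS = {"self-harm", "suicide"}  # illustrative keyword list only

def triage(message: str) -> dict:
    """Classify a chat message and decide on routing (illustrative logic)."""
    risk = any(term in message.lower() for term in HIGH_RISK_TERMS)
    return {
        "handoff_to_human": risk,      # pass the chat to a human operator
        "alert_guardian": risk,        # notify guardian / emergency contact
        "continue_chatbot": not risk,  # low-risk chats stay with the bot
    }
```

A real system would of course rely on a trained classifier rather than keyword matching, but the routing decision — human handoff plus guardian alert on high-risk content — is the part the draft rules mandate.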
The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC supports safe AI use, including tools for local culture and elderly companionship.
The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.
China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A Moscow court has dismissed a class action lawsuit filed against Russia’s state media regulator Roskomnadzor and the Ministry of Digital Development by users of WhatsApp and Telegram. The ruling was issued by a judge at the Tagansky District Court.
The court said activist Konstantin Larionov failed to demonstrate he was authorised to represent messaging app users. The lawsuit claimed call restrictions violated constitutional rights, including freedom of information and communication secrecy.
The case followed Roskomnadzor’s decision in August to block calls on WhatsApp and Telegram, a move officials described as part of anti-fraud efforts. Both companies criticised the restrictions at the time.
Larionov and several dozen co-plaintiffs said the measures were ineffective, citing central bank data showing fraud mainly occurs through traditional calls and text messages. The plaintiffs also argued the restrictions disproportionately affected ordinary users.
Larionov said the group plans to appeal the decision and continue legal action. He has described the lawsuit as an attempt to challenge what he views as politically motivated restrictions on communication services in Russia.
Aflac, a US health and life insurer, revealed that a cyberattack discovered in June affected more than 22.6 million individuals. Personal and claims information, including Social Security numbers, may have been accessed.
The investigation found the attack likely originated from the Scattered Spider cybercrime group. Authorities were notified, and third-party cybersecurity experts were engaged to contain the incident.
Systems remained operational, and no ransomware was detected, with containment achieved within hours. Notifications have begun, and the insurer continues to monitor for potential fraudulent use of data.
Class-action lawsuits have been filed in response to the incident, which also affected employees, agents, and other related individuals. Erie Insurance and Philadelphia Insurance previously reported network issues linked to similar threats.
European governments are intensifying their efforts to safeguard satellites from cyberattacks as space becomes an increasingly vital front in modern security and hybrid warfare. Once seen mainly as technical infrastructure, satellites are now treated as strategic assets, carrying critical communications, navigation, and intelligence data that are attractive targets for espionage and disruption.
Concerns intensified after a 2022 cyberattack on the Viasat satellite network coincided with Russia’s invasion of Ukraine, exposing how vulnerable space systems can be during geopolitical crises. Since then, the EU institutions have warned of rising cyber and electronic interference against satellites and ground stations, while several European countries have flagged growing surveillance activities linked to Russia and China.
To reduce risks, Europe is investing in new infrastructure and technologies. One example is a planned satellite ground station in Greenland, backed by the European Space Agency, designed to reduce dependence on the highly sensitive Arctic hub in Svalbard. That location currently handles most European satellite data traffic but relies on a single undersea internet cable, making it a critical point of failure.
At the same time, the EU is advancing IRIS², a secure satellite communication system designed to provide encrypted connectivity and reduce reliance on foreign providers such as Starlink. Although the project promises stronger security and European autonomy, it is not expected to be operational for several years.
Experts warn that technology alone is not enough. European governments are still clarifying who is responsible for defending space systems, while the cybersecurity industry struggles to adapt tools designed for Earth-based networks to the unique challenges of space. Better coordination, clearer mandates, and specialised security approaches will be essential as space becomes more contested.
The European Space Agency (ESA) has confirmed that a data breach occurred, but stated that its impact appears to be limited. According to the agency, only a very small number of science servers were affected, and these systems were located outside ESA’s main corporate network.
Claims about the breach began circulating on 26 December, when a hacker using the alias ‘888’ alleged that more than 200 gigabytes of ESA data had been compromised and put up for sale. The hacker claimed the material included source code, internal project documents, API tokens, and embedded login credentials.
ESA acknowledged the allegations on 29 December and launched a forensic investigation. A day later, the agency stated that its initial findings confirmed unauthorised access but suggested the scope was far smaller than online claims implied.
The affected servers were described as unclassified systems used for collaborative engineering work within the scientific community. ESA said it has already informed relevant stakeholders and taken immediate steps to secure any potentially impacted devices.
The investigation is still ongoing, and ESA has stated that it will provide further updates once the forensic analysis is complete.
A hacker using the name Lovely claims to have accessed subscriber data belonging to WIRED and to have leaked details relating to around 2.3 million users.
The same individual also claims that a wider Condé Nast account system covering more than 40 million users could be exposed in future leaks.
Security researchers are reported to have matched samples of the claimed leak with other compromised data sources. The information is said to include names, email addresses, user IDs and timestamps, but not passwords or payment information.
Some researchers also believe that certain home addresses could be included, which would raise privacy concerns if verified.
The dataset is reported to be listed on Have I Been Pwned. However, no official confirmation from WIRED or Condé Nast has been issued regarding the authenticity, scale or origin of the claimed breach, and the company's internal findings have not yet been disclosed.
The hacker has also accused Condé Nast of failing to respond to earlier security warnings, although these claims have not been independently verified.
Security professionals are urging users to treat unexpected emails with caution rather than assume every message is genuine.
South Korea has blamed weak femtocell security at KT Corp for a major mobile payment breach that triggered thousands of unauthorised transactions.
Officials said the mobile operator used identical authentication certificates across femtocells and allowed them to stay valid for ten years, meaning any device that accessed the network once could do so repeatedly without being re-verified.
More than 22,000 users had identifiers exposed, and 368 people suffered unauthorised payments worth 243 million won.
Investigators also discovered that 94 KT servers were infected with more than 100 types of malware. Authorities concluded the company failed in its duty to deliver secure telecommunications services because its overall management of femtocell security was inadequate.
The government has now ordered KT to submit detailed prevention plans and will check compliance in June, while also urging operators to change authentication server addresses regularly and block illegal network access.
Officials said some hacking methods resembled a separate breach at SK Telecom, although there is no evidence that the same group carried out both attacks. KT said it accepts the findings and will soon set out compensation arrangements and further security upgrades.
A separate case involving LG Uplus is being referred to police after investigators said affected servers were discarded, making a full technical review impossible.
The government warned that strong information security must become a survival priority as South Korea aims to position itself among the world’s leading AI nations.
Protecting AI agents from manipulation has become a top priority for OpenAI, which has rolled out a major security upgrade to ChatGPT Atlas.
The browser-based agent now includes stronger safeguards against prompt injection attacks, where hidden instructions embedded in emails, documents or webpages attempt to redirect the agent's behaviour away from the user's actual commands.
Prompt injection poses a unique risk because Atlas can carry out actions that a person would normally perform inside a browser. A malicious email or webpage could attempt to trigger data exposure, unauthorised transactions or file deletion.
Criminals exploit the fact that agents process large volumes of content across an almost unlimited online surface.
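OpenAI has not published the internals of Atlas's defences, but the general idea of screening untrusted content before an agent acts on it can be sketched as follows. The patterns, wrapper format and function name are illustrative assumptions, not OpenAI's implementation:

```python
import re

# Illustrative prompt-injection screen: content fetched from a page or email
# is wrapped as inert data and scanned for instruction-like phrases before
# it ever reaches the agent's action loop.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"delete .*files",
]

def screen_untrusted(content: str) -> tuple[bool, str]:
    """Return (suspicious, wrapped) for content fetched from the web."""
    suspicious = any(re.search(p, content, re.IGNORECASE)
                     for p in INJECTION_PATTERNS)
    wrapped = f"<untrusted_content>\n{content}\n</untrusted_content>"
    return suspicious, wrapped
```

Pattern lists like this are easy to evade, which is why the article describes OpenAI training resistance into the model itself rather than relying on filters alone.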
OpenAI has developed an automated red-team framework that uses reinforcement learning to simulate sophisticated attackers.
When fresh attack patterns are discovered, the models behind Atlas are retrained so that resistance is built into the agent rather than added afterwards. Monitoring and safety controls are also updated using real attack traces.
These new protections are already live for all Atlas users. OpenAI advises people to limit logged-in access where possible, check confirmation prompts carefully and give agents well-scoped tasks instead of broad instructions.
The company argues that proactive defence is essential as agentic AI becomes more capable and widely deployed.
South Korea has introduced mandatory facial recognition for anyone registering a new SIM card or eSIM, whether in-store or online.
The live scan must match the photo on an official ID, so that each phone number is tied to a verified individual rather than established through paperwork alone.
Existing users are not affected, and the requirement applies only at the moment a number is issued.
The government argues that stricter checks are needed because telecom fraud has become industrialised and relies heavily on illegally registered SIM cards.
Criminal groups have used stolen identity data to obtain large volumes of numbers that can be swapped quickly to avoid detection. Regulators now see SIM issuance as the weakest link and the point where intervention is most effective.
Telecom companies must integrate biometric checks into onboarding, while authorities insist that facial data is used only for real-time verification and not stored. Privacy advocates warn that biometric verification creates new risks because faces cannot be changed if compromised.
They also question whether such a broad rule is proportionate when mobile access is essential for daily life.
The policy places South Korea in a unique position internationally, combining mandatory biometrics with defined legal limits. Its success will be judged on whether fraud meaningfully declines rather than simply being displaced.
The rule has become a test case for how far governments should extend biometric identity checks into routine services.
UK outdoor enthusiasts are warned not to rely solely on AI for tide times or weather. Errors recently stranded visitors on Sully Island, showing the limits of unverified information.
Maritime authorities recommend consulting official sources such as the UK Hydrographic Office and Met Office. AI tools may misread tables or local data, making human oversight essential for safety.
Mountain rescue teams have reported similar issues after inexperienced walkers used AI to plan trips. Even with good equipment, a lack of judgement can turn minor errors into dangerous situations.
Practical experience, professional guidance, and verified data remain critical for safe outdoor activities. Relying on AI alone can create serious risks, especially on tidal beaches and challenging mountain routes.