China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.
The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.
High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC supports safe AI use, including tools for local culture and elderly companionship.
The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.
China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.
Korean Air has disclosed a data breach affecting about 30,000 employees. The stolen records came from systems operated by a former subsidiary.
The breach occurred at catering supplier KC&D, which was sold off in 2020. The hackers, who had previously attacked the Washington Post, accessed employee names and bank account details, while customer data remained unaffected.
Investigators linked the incident to exploits in Oracle E-Business Suite. Cybercriminals abused zero-day flaws during a wider global hacking campaign.
The attack against Korean Air has been claimed by the Cl0p ransomware group. Aviation firms worldwide have reported similar breaches connected to the same campaign.
A Moscow court has dismissed a class action lawsuit filed against Russia’s state communications regulator Roskomnadzor and the Ministry of Digital Development by users of WhatsApp and Telegram. The ruling was issued by a judge at the Tagansky District Court.
The court said activist Konstantin Larionov failed to demonstrate he was authorised to represent messaging app users. The lawsuit claimed call restrictions violated constitutional rights, including freedom of information and communication secrecy.
The case followed Roskomnadzor’s decision in August to block calls on WhatsApp and Telegram, a move officials described as part of anti-fraud efforts. Both companies criticised the restrictions at the time.
Larionov and several dozen co-plaintiffs said the measures were ineffective, citing central bank data showing fraud mainly occurs through traditional calls and text messages. The plaintiffs also argued the restrictions disproportionately affected ordinary users.
Larionov said the group plans to appeal the decision and continue legal action. He has described the lawsuit as an attempt to challenge what he views as politically motivated restrictions on communication services in Russia.
Aflac, a US health and life insurer, revealed that a cyberattack discovered in June affected over 22.6 million individuals. Personal and claims information, including Social Security numbers, may have been accessed.
The investigation found the attack likely originated from the Scattered Spider cybercrime group. Authorities were notified, and third-party cybersecurity experts were engaged to contain the incident.
Systems remained operational, and no ransomware was detected, with containment achieved within hours. Notifications have begun, and the insurer continues to monitor for potential fraudulent use of data.
Class-action lawsuits have been filed in response to the incident, which also affected employees, agents, and other related individuals. Erie Insurance and Philadelphia Insurance previously reported network issues linked to similar threats.
China is proposing new rules requiring users to consent before AI companies can use chat logs for training. The draft measures aim to balance innovation with safety and public interest.
Platforms would need to inform users when interacting with AI and provide options to access or delete their chat history. For minors, guardian consent is required before sharing or storing any data.
Analysts say the rules may slow AI chatbot improvements but provide guidance on responsible development. The measures signal that some user conversations are too sensitive to serve as freely available training data.
The draft rules are open for public consultation, with feedback due in late January. China encourages expanding human-like AI applications once safety and reliability are demonstrated.
Security researchers warn that hackers are exploiting a new feature in Microsoft Copilot Studio. The issue affects the recently launched Connected Agents functionality.
Connected Agents allows AI systems to interact and share tools across environments. Researchers say default settings can expose sensitive capabilities without clear monitoring.
Zenity Labs reported attackers linking rogue agents to trusted systems. Exploits included unauthorised email sending and data access.
Experts urge organisations to disable Connected Agents for critical workloads. Stronger authentication and restricted access are advised until safeguards improve.
US federal agencies planning to deploy agentic AI in 2026 are being told to prioritise data organisation as a prerequisite for effective adoption. AI infrastructure providers say poorly structured data remains a major barrier to turning agentic systems into operational tools.
Public sector executives at Amazon Web Services, Oracle, and Cisco said government clients are shifting focus away from basic chatbot use cases. Instead, agencies are seeking domain-specific AI systems capable of handling defined tasks and delivering measurable outcomes.
US industry leaders said achieving this shift requires modernising legacy infrastructure alongside cleaning, structuring, and contextualising data. Executives stressed that agentic AI depends on high-quality data pipelines that allow systems to act autonomously within defined parameters.
Oracle said its public sector strategy for 2026 centres on enabling context-aware AI through updated data assets. Company executives argued that AI systems are only effective when deeply aligned with an organisation’s underlying data environment.
The companies said early agentic AI use cases include document review, data entry, and network traffic management. Cloud infrastructure was also highlighted as critical for scaling agentic systems and accelerating innovation across government workflows.
European governments are intensifying their efforts to safeguard satellites from cyberattacks as space becomes an increasingly vital front in modern security and hybrid warfare. Once seen mainly as technical infrastructure, satellites are now treated as strategic assets, carrying critical communications, navigation, and intelligence data that are attractive targets for espionage and disruption.
Concerns intensified after a 2022 cyberattack on the Viasat satellite network coincided with Russia’s invasion of Ukraine, exposing how vulnerable space systems can be during geopolitical crises. Since then, EU institutions have warned of rising cyber and electronic interference against satellites and ground stations, while several European countries have flagged growing surveillance activities linked to Russia and China.
To reduce risks, Europe is investing in new infrastructure and technologies. One example is a planned satellite ground station in Greenland, backed by the European Space Agency, designed to reduce dependence on the highly sensitive Arctic hub in Svalbard. That location currently handles most European satellite data traffic but relies on a single undersea internet cable, making it a critical point of failure.
At the same time, the EU is advancing IRIS², a secure satellite communication system designed to provide encrypted connectivity and reduce reliance on foreign providers such as Starlink. Although the project promises stronger security and European autonomy, it is not expected to be operational for several years.
Experts warn that technology alone is not enough. European governments are still clarifying who is responsible for defending space systems, while the cybersecurity industry struggles to adapt tools designed for Earth-based networks to the unique challenges of space. Better coordination, clearer mandates, and specialised security approaches will be essential as space becomes more contested.
The European Space Agency (ESA) has confirmed that a data breach occurred, but stated that its impact appears to be limited. According to the agency, only a very small number of science servers were affected, and these systems were located outside ESA’s main corporate network.
Claims about the breach began circulating on 26 December, when a hacker using the alias ‘888’ alleged that more than 200 gigabytes of ESA data had been compromised and put up for sale. The hacker claimed the material included source code, internal project documents, API tokens, and embedded login credentials.
ESA acknowledged the allegations on 29 December and launched a forensic investigation. A day later, the agency stated that its initial findings confirmed unauthorised access but suggested the scope was far smaller than online claims implied.
The affected servers were described as unclassified systems used for collaborative engineering work within the scientific community. ESA said it has already informed relevant stakeholders and taken immediate steps to secure any potentially impacted devices.
The investigation is still ongoing, and ESA has stated that it will provide further updates once the forensic analysis is complete.
Ireland is expected to use its presidency of the Council of the European Union next year to lead a European drive for ID-verified social media accounts.
Tánaiste Simon Harris said the move is intended to limit anonymous abuse, bot activity and coordinated disinformation campaigns that he views as a growing threat to democracy worldwide.
The proposal would require users to verify their identity rather than hide behind anonymous profiles. Harris also backed an Australian-style age verification regime to prevent children from accessing social media, arguing that existing digital consent rules are not being enforced.
Media Minister Patrick O’Donovan is expected to bring forward detailed proposals during the presidency.
The plan is likely to trigger strong resistance from major social media platforms with European headquarters in Ireland, alongside criticism from the US.
However, Harris believes there is growing political backing across Europe, pointing to signals of support from French President Emmanuel Macron and UK Prime Minister Keir Starmer.
Harris said he wanted constructive engagement with technology firms rather than confrontation, while insisting that stronger safeguards are now essential.
He argued that social media companies already possess the technology to verify users and restrict harmful accounts, and that European-level coordination will be required to deliver meaningful change.