Cybersecurity researchers uncovered an unsecured database exposing 8.7 billion records linked to individuals and businesses in China. The data was found in early January 2026 and remained accessible online for more than three weeks.
The China-focused dataset included national ID numbers, home addresses, email accounts, social media identifiers and passwords. Researchers warned that exposure on this scale creates serious risks of identity theft and account takeover.
The records were stored in a large Elasticsearch cluster hosted on so-called bulletproof infrastructure. Analysts believe the dataset's structure points to deliberate aggregation rather than an accidental misconfiguration.
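Exposures of this kind are typically found because an unsecured Elasticsearch cluster answers unauthenticated HTTP requests to its REST API, usually on port 9200. A minimal sketch of how a researcher might confirm such an exposure; the hostname and the canned response below are illustrative, not details from this incident:

```python
import json
from urllib.request import Request, urlopen

def cluster_health(host: str) -> dict:
    """Query an Elasticsearch health endpoint without credentials.

    An open cluster answers /_cluster/health to anyone; a secured one
    returns 401 and this call raises an HTTPError instead.
    """
    with urlopen(Request(f"http://{host}:9200/_cluster/health")) as resp:
        return json.loads(resp.read())

# Offline demonstration: parse the kind of JSON an open cluster returns
# (no network call is made here; the values are made up).
sample = '{"cluster_name": "exposed-es", "status": "green", "number_of_nodes": 3}'
health = json.loads(sample)
print(health["cluster_name"], "-", health["number_of_nodes"], "nodes")
```

The same unauthenticated API also lists index names and sizes (`/_cat/indices`), which is how researchers estimate record counts without downloading the data.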
Although the database has since been taken offline, experts warn that malicious actors may already have copied the data. China has experienced several major leaks in recent years, highlighting persistent weaknesses in large-scale data handling.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Austria is advancing plans to bar children under 14 from social media when the new school year begins in September 2026, according to comments from a senior Austrian official. Poland’s government is drafting a law to restrict access for under-15s, using digital ID tools to confirm age.
Austria’s governing parties support protecting young people online but differ on how to verify ages securely without undermining privacy. In Poland, supporters of the draft argue that limiting children's early exposure to screens is a matter for both parental oversight and platform enforcement.
Austria and Poland form part of a broader European trend as France moves to ban under-15s and the UK is debating similar measures. Wider debates tie these proposals to concerns about children’s mental health and online safety.
Proponents in both Austria and Poland aim to finalise legal frameworks by 2026, with implementation potentially rolling out in the following year if national parliaments approve the age restrictions.
A major international AI safety report warns that AI systems are advancing rapidly, with sharp gains in reasoning, coding and scientific tasks. Researchers say progress remains uneven, leaving systems powerful yet unreliable.
The report highlights rising concerns over deepfakes, cyber misuse and emotional reliance on AI companions in the UK and the US. Experts note growing difficulty in distinguishing AI-generated content from human work.
Safeguards against biological, chemical and cyber risks have improved, though oversight challenges persist in the UK and the US. Analysts warn advanced models are becoming better at evading evaluation and controls.
The impact of AI on jobs in the UK and the US remains uncertain, with mixed evidence across sectors. Researchers say labour disruption could accelerate if systems gain greater autonomy.
Snapchat has blocked more than 415,000 Australian accounts after the national ban on under-16s began, marking a rapid escalation in the country’s effort to restrict children’s access to major platforms.
The company relied on a mix of self-reported ages and age-detection technologies to identify users who appeared to be under 16.
The platform warned that age-verification technology still has serious shortcomings, leaving room for teenagers to bypass safeguards rather than guaranteeing reliable compliance.
Facial estimation tools remain accurate only within a narrow range, meaning some young people may slip through while older users risk losing access. Snapchat also noted the likelihood that teenagers will shift towards less regulated messaging apps.
The eSafety commissioner has focused regulatory pressure on the 10 largest platforms, although all services with Australian users are expected to assess whether they fall under the new requirements.
Officials have acknowledged that the technology needs improvement and that reliability issues, such as the absence of a liveness check, contributed to false results.
The EU’s attempt to revise core privacy rules has faced resistance from France, which argues that the Commission’s proposals would weaken rather than strengthen long-standing protections.
Paris objects strongly to proposed changes to the definition of personal data within the General Data Protection Regulation, which remains the foundation of European privacy law. Officials have also raised concerns about several more minor adjustments included in the broader effort to modernise digital legislation.
These proposals form part of the Digital Omnibus package, a set of updates intended to streamline the EU data rules. France argues that altering the GDPR’s definitions could change the balance between data controllers, regulators and citizens, creating uncertainty for national enforcement bodies.
The national government maintains that the existing framework already includes the flexibility needed to interpret sensitive information.
The disagreement highlights renewed tension inside the Union as institutions examine the future direction of privacy governance.
Several member states want greater clarity in an era shaped by AI and cross-border data flows. In contrast, others fear that opening the GDPR could lead to inconsistent application across Europe.
Talks are expected to continue in the coming months as EU negotiators weigh the political risks of narrowing or widening the scope of personal data.
France’s firm stance suggests that consensus may prove difficult, particularly as governments seek to balance economic goals with unwavering commitments to user protection.
Institutions in the EU have begun designing a new framework to help European armies share defence information securely, rather than relying on US technology.
The plan centres on a military-grade data platform, the European Defence Artificial Intelligence Data Space, intended to support sensitive exchanges among defence authorities.
Ultimately, the approach aims to replace the current patchwork of foreign infrastructure that many member states rely on to store and transfer national security data.
The European Defence Agency is leading the effort and expects the platform to be fully operational by 2030. The concept includes two complementary elements: a sovereign military cloud for data storage and a federated system that allows countries to exchange information on a trusted basis.
Officials argue that this will improve interoperability, speed up joint decision-making, and enhance operational readiness across the bloc.
The project aligns with broader concerns about strategic autonomy, as EU leaders increasingly question long-standing dependencies on American providers.
Several European companies have been contracted to develop the early technical foundations. The next step is persuading governments to coordinate future purchases so their systems remain compatible with the emerging framework.
Planning documents suggest that by 2029, member states should begin integrating the data space into routine military operations, including training missions and coordinated exercises. EU authorities maintain that stronger control of defence data will be essential as military AI expands across European forces.
Hamad Bin Khalifa University has unveiled the UNESCO Chair on Digital Technologies and Human Behaviour to strengthen global understanding of how emerging tools shape society.
The initiative, based in the College of Science and Engineering in Qatar, will examine the relationship between digital adoption and human behaviour, focusing on digital well-being, ethical design and healthier online environments.
The Chair is set to address issues such as internet addiction, cyberbullying and misinformation through research and policy-oriented work.
By promoting dialogue among international organisations, governments and academic institutions, the programme aims to support the more responsible development of digital technologies rather than approaches that overlook societal impact.
HBKU’s long-standing emphasis on ethical innovation formed the foundation for the new initiative. The launch event brought together experts from several disciplines to discuss behavioural change driven by AI, mobile computing and social media.
An expert panel considered how generative AI can improve daily life while also increasing dependency, and encouraged users to build a more intentional and balanced relationship with AI systems.
UNESCO underlined the importance of linking scientific research with practical policymaking to guide institutions and communities.
The Chair is expected to strengthen cooperation across sectors and support progress on global development goals by ensuring digital transformation remains aligned with human dignity, social cohesion and inclusive growth.
AI is increasingly being used to answer questions about faith, morality, and suffering, not just everyday tasks. As AI systems become more persuasive, religious leaders are raising concerns about the authority people may assign to machine-generated guidance.
Within this context, Catholic outlet EWTN Vatican examined Magisterium AI, a platform designed to reference official Church teaching rather than produce independent moral interpretations. Its creators say responses are grounded directly in doctrinal sources.
Founder Matthew Sanders argues mainstream AI models are not built for theological accuracy. He warns that while machines sound convincing, they should never be treated as moral authorities without grounding in Church teaching.
Church leaders have also highlighted broader ethical risks associated with AI, particularly regarding human dignity and emotional dependency. Recent Vatican discussions stressed the need for education and safeguards.
Supporters say faith-based AI tools can help navigate complex religious texts responsibly. Critics remain cautious, arguing spiritual formation should remain rooted in human guidance.
A leading British think tank has urged the government to introduce ‘nutrition labels’ for AI-generated news, arguing that clearer rules are needed as AI becomes a dominant source of information.
The Institute for Public Policy Research said AI firms are increasingly acting as new gatekeepers of the internet and must pay publishers for the journalism that shapes their output.
The group recommended standardised labels showing which sources underpin AI-generated answers, instead of leaving users unsure about the origin or reliability of the material they read.
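The report does not prescribe a format for such labels, but a machine-readable version of a source "nutrition label" might look like the sketch below. Every field name here is hypothetical, chosen only to illustrate what a standardised label could carry:

```python
# Hypothetical "nutrition label" attached to one AI-generated news answer.
# Field names are illustrative, not from any published IPPR specification.
label = {
    "generated_by_ai": True,
    "sources": [
        {"outlet": "The Guardian",    "licensed": True, "share": 0.6},
        {"outlet": "Financial Times", "licensed": True, "share": 0.4},
    ],
}

# Basic consistency checks a regulator might require: the per-source
# shares should cover the whole answer, and licensing status is explicit.
total_share = sum(s["share"] for s in label["sources"])
all_licensed = all(s["licensed"] for s in label["sources"])
print(f"{len(label['sources'])} sources, coverage {total_share:.1f}, all licensed: {all_licensed}")
```

A schema like this would let browsers or aggregators render the provenance of an answer automatically, which is the transparency gap the labels are meant to close.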
It also called for a formal licensing system in the UK that would allow publishers to negotiate directly with technology companies over the use of their content. The move comes as a growing share of the public turns to AI for news, while Google’s AI summaries reach billions each month.
IPPR’s study found that some AI platforms rely heavily on content from outlets with licensing agreements, such as the Guardian and the Financial Times, while others, like the BBC, appear far less often due to restrictions on scraping.
The think tank warned that such patterns could weaken media plurality by sidelining local and smaller publishers. It added that Google’s search summaries have already reduced traffic to news websites by providing answers before users click through.
The report said public funding should help sustain investigative and local journalism as AI tools expand. OpenAI responded that its products highlight sources and provide links to publishers, arguing that careful design can strengthen trust in the information people see online.
The Catalan Cybersecurity Agency has warned that generative AI is now being used in the vast majority of email scams containing malicious links. Its Cybersecurity Outlook Report for 2026 found that more than 80% of such messages rely on AI-generated content.
The report shows that 82.6% of emails carrying malicious links include text, video, or voice produced using AI tools, making fraudulent messages increasingly difficult to identify. Scammers use AI to create near-flawless messages that closely mimic legitimate communications.
Agency director Laura Caballero said the sophistication of AI-generated scams means users face greater risks, while businesses and platforms are turning to AI-based defences to counter the threat.
She urged a ‘technology against technology’ approach, combined with stronger public awareness and basic security practices such as two-factor authentication.
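The two-factor authentication the agency recommends is most commonly implemented as time-based one-time passwords (TOTP, RFC 6238), which authenticator apps generate from a shared secret and the current time. A minimal sketch using only the Python standard library, checked against the RFC's published SHA-1 test vector:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password using HMAC-SHA1.

    The moving factor is the number of `step`-second intervals since the
    Unix epoch; the code is a truncated, zero-padded decimal string.
    """
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s
print(totp(b"12345678901234567890", at=59, digits=8))  # 94287082
```

Because both sides derive the code independently from the shared secret, a phished password alone is not enough to log in, which is why the agency pairs this advice with its 'technology against technology' message.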
Cyber incidents are also rising. The agency handled 3,372 cases in 2024, a 26% increase year on year, mostly involving credential leaks and unauthorised email access.
In response, the Catalan government has launched a new cybersecurity strategy backed by an €18.6 million investment to protect critical public services.