A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.
Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.
The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.
While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.
Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.
The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Cybersecurity Month (ECSM) campaign, driven by the EU Agency for Cybersecurity (ENISA) and the European Commission, seeks to raise awareness and provide practical guidance to European citizens and organisations.
Phishing is still the primary vector through which threat actors launch social engineering attacks. However, this year’s ECSM materials expand the scope to include variants like SMS phishing (smishing), QR code phishing (quishing), voice phishing (vishing), and business email compromise (BEC).
ENISA warns that, as of early 2025, over 80 percent of observed social engineering campaigns involve AI, with language models enabling more convincing and scalable scams.
To support the campaign, actors at every level, from individual citizens to large organisations, are encouraged to engage in training, simulations, awareness sessions and public outreach under the banner #ThinkB4UClick.
A cross-institutional kick-off event is also scheduled, bringing together the EU institutions, member states and civil society to align messaging and launch coordinated activities.
Despite privacy concerns and parliamentary criticism, the Dutch Tax Administration will move much of its digital workplace to Microsoft’s cloud. State Secretary Eugène Heijnen told lawmakers that no suitable European alternatives met the technical, legal, and functional requirements.
Privacy advocates warn that using a US-based provider could put compliance with GDPR at risk, especially when data may leave the EU. Concerns about long-term dependency on a single cloud vendor have also been raised, making future transitions costly and complex.
Heijnen said sensitive documents would remain on internal servers, while cloud services would handle workplace functions. Employees had complained that the current system was inefficient and difficult to use.
The Court of Audit reported earlier this year that nearly two-thirds of the Dutch government’s public cloud services had not been properly risk-assessed. Despite this, Heijnen insisted that Microsoft offered the most viable option.
The EU Innovation Hub for Internal Security’s AI Cluster gathered in Tallinn on 25–26 September for a workshop focused on AI and its implications for security and rights.
The European Union Agency for Fundamental Rights (FRA) played a central role, presenting its Fundamental Rights Impact Assessment framework under the AI Act and highlighting its ongoing project on assessing high-risk AI.
The workshop also provided an opportunity for FRA to give an update on its internal and external work in the AI field, reflecting the growing need to balance technological innovation with rights-based safeguards.
AI-driven systems in security and policing are increasingly under scrutiny, with regulators and agencies seeking to ensure compliance with EU rules on privacy, transparency and accountability.
In collaboration with Europol, FRA also introduced plans for a panel discussion on ‘The right to explanation of AI-driven individual decision-making’. Scheduled for 19 November in Brussels, the session will form part of the Annual Event of the EU Innovation Hub for Internal Security.
It is expected to draw policymakers, law enforcement representatives and rights advocates into dialogue about transparency obligations in AI use for security contexts.
Lincoln Laboratory has unveiled TX-GAIN, the most powerful AI supercomputer at any US university. Optimised for generative AI, the system ranks on the TOP500 list and significantly boosts research across the MIT campus.
Equipped with more than 600 NVIDIA GPU accelerators, TX-GAIN delivers two AI exaflops of peak performance. Researchers are using it to advance biodefence, protein modelling, weather analysis, network security, and new materials development.
Generative AI applications go beyond large language models, with teams at Lincoln Laboratory exploring radar evaluation, chemical interactions, and anomaly detection in digital systems. The laboratory’s design lets researchers access vast computing power without needing expertise in parallel programming.
TX-GAIN is also supporting collaborations with MIT institutions and the US military, including projects in quantum engineering, space operations, and AI-driven flight scheduling. Housed in an energy-efficient Massachusetts facility, the system continues the lab’s supercomputing tradition.
Japan’s Digital Agency has partnered with OpenAI to integrate AI into public services, aiming to enhance efficiency and innovation. Gennai, an OpenAI-powered tool, will enable government employees to explore innovative public sector applications, supporting Japan’s vision of modern governance.
The collaboration supports Japan’s leadership in the Hiroshima AI Process, backed by the OECD and G7. The framework sets global AI guidelines, ensuring safety, security, and trust while promoting inclusive governance across governments, industry, academia, and civil society in Asia and beyond.
OpenAI is committed to meeting Japan’s rigorous standards and pursuing ISMAP certification to ensure secure and reliable AI use in government operations. The partnership strengthens trust and transparency in AI deployment, aligning with Japan’s national policies.
OpenAI plans to strengthen ties with Japanese authorities, educational institutions, and industry stakeholders. The collaboration seeks to integrate AI into society responsibly, prioritising safety, transparency, and global cooperation for sustainable benefits.
A Dutch court has ordered Meta to give Facebook and Instagram users in the Netherlands the right to set a chronological feed as their default.
The ruling follows a case brought by digital rights group Bits of Freedom, which argued that Meta’s design undermines user autonomy under the European Digital Services Act.
Although a chronological feed is already available, it is hidden and does not persist between sessions. The court said Meta must make the setting accessible on the homepage and in the Reels section, and ensure it stays in place when the apps are restarted.
If Meta does not comply within two weeks, it faces a fine of €100,000 per day, capped at €5 million.
Bits of Freedom argued that algorithmic feeds threaten democracy, particularly before elections. The court agreed the change must apply permanently rather than temporarily during campaigns.
The group welcomed the ruling but stressed it was only a small step in tackling the influence of tech giants on public debate.
Meta has not yet responded to the decision, which applies only in the Netherlands despite being based on EU law. Campaigners say the case highlights the need for more vigorous enforcement to ensure digital platforms respect user choice and democratic values.
The US National Institute of Standards and Technology (NIST) has updated its password guidelines, urging organisations to drop strict complexity rules. NIST states that requirements such as mandatory symbols and frequent resets often harm usability without significantly improving security.
Instead, the agency recommends using blocklists for breached or commonly used passwords, implementing hashed storage, and rate limiting to resist brute-force attacks. Multi-factor authentication and password managers are encouraged as additional safeguards.
Password length remains essential. Short strings are easily cracked, but users should be allowed to create longer passphrases. NIST recommends limiting only extremely long passwords that slow down hashing.
The new approach replaces mandatory resets with changes triggered only after suspected compromise, such as a data breach. NIST argues this method reduces fatigue while improving overall account protection.
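The recommendations above can be sketched in a short Python example. This is a minimal illustration, not an official NIST implementation: the blocklist contents, length bounds and scrypt parameters are assumptions chosen for the sketch.

```python
# Sketch of NIST-aligned password validation and storage as summarised
# above. Blocklist contents, length bounds and scrypt parameters are
# illustrative assumptions, not values mandated by NIST.
import hashlib
import hmac
import os

# Stand-in for a real list of breached or commonly used passwords.
BLOCKLIST = {"password", "123456", "qwerty", "letmein"}

def validate(password: str) -> bool:
    """Accept any sufficiently long password not on the blocklist.
    No composition rules (mandatory symbols, digits) are enforced,
    and only extreme lengths are rejected, per the guidance above."""
    if not 8 <= len(password) <= 128:
        return False
    return password.lower() not in BLOCKLIST

def store(password: str) -> tuple[bytes, bytes]:
    """Hash with a salted, memory-hard KDF before storage
    (never store the plaintext password)."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time. A real deployment
    would also rate-limit calls to this function to resist brute force."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Note how a long passphrase passes validation with no symbol or digit requirements, while a short or breached password is rejected outright.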
Businesses adopting these guidelines must audit their existing policies, reconfigure authentication systems, deploy blocklists, and train employees to adapt accordingly. Clear communication of the changes will be key to ensuring compliance.
Hackers who stole data and images of children from Kido Schools have removed the material from the darknet and claimed to delete it. The group, calling itself Radiant, had demanded a £600,000 Bitcoin ransom, but Kido did not pay.
Radiant initially blurred the photos but kept the data online before later removing all content and issuing an apology. Experts remain sceptical, warning that cybercriminals often claim to delete stolen data while secretly keeping or selling it.
The breach exposed details of around 8,000 children and their families, sparking widespread outrage. Cybersecurity experts described the extortion attempt as a ‘new low’ for hackers and said Radiant likely backtracked due to public pressure.
Radiant said it accessed Kido’s systems by buying entry from an ‘initial access broker’ and then stealing data from accounts linked to Famly, an early years education platform. Famly told the BBC its infrastructure was not compromised.
Kido confirmed the incident and said it is working with external specialists and authorities. With no ransom paid and Radiant abandoning its attempt, the hackers appear to have lost money on the operation.
The US AI company, OpenAI, has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.
The launch has stirred debate among current and former researchers, with some praising its technical achievement and others worrying it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.
Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.
The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.
The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.
Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.
Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.