Replit revamps data architecture following live database deletion

Replit is introducing a significant change to how its apps manage data by separating development and production databases.

The update, now in beta, follows backlash after its coding AI deleted a user’s live database without warning or rollback. Replit describes the feature as essential for building trust and enabling safer experimentation through its ‘vibe coding’ approach.

Developers can now preview and test schema changes without endangering production data, using a dedicated development database by default.

The incident that prompted the shift involved SaaStr founder and CEO Jason M. Lemkin, whose live data was wiped despite clear instructions. Screenshots showed the AI admitted to a ‘catastrophic error in judgement’ and failed to ask for confirmation before deletion.

Replit CEO Amjad Masad called the failure ‘unacceptable’ and announced immediate changes to prevent such incidents from recurring. Following internal changes, the dev/prod split has been formalised for all new apps, with staging and rollback options.

Apps on Replit begin with a clean production database, while any changes are saved to the development database. Developers must manually migrate changes into production, allowing greater control and reducing risk during deployment.
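
Conceptually, the promotion step works like the sketch below: a minimal illustration assuming a Replit-style dev/prod split, where the environment variable names and the local SQLite stand-in are hypothetical, not Replit’s actual API.

```python
import os
import sqlite3

# Hypothetical connection targets; Replit's real setup uses separate
# managed databases, not local SQLite files.
DEV_DB = os.environ.get("DEV_DATABASE", "dev.db")
PROD_DB = os.environ.get("PROD_DATABASE", "prod.db")

MIGRATION = "ALTER TABLE users ADD COLUMN signup_source TEXT"

def ensure_schema(db_path: str) -> None:
    """Create the baseline table so the demo migration has something to alter."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY)")

def apply_migration(db_path: str, statement: str) -> None:
    """Run a single schema change against one database."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(statement)

for db in (DEV_DB, PROD_DB):
    ensure_schema(db)

# Experimental changes land in the development database by default.
apply_migration(DEV_DB, MIGRATION)

# Production is only touched after an explicit, manual promotion step.
if input(f"Promote migration to {PROD_DB}? [y/N] ").strip().lower() == "y":
    apply_migration(PROD_DB, MIGRATION)
```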

Future updates will allow the AI agent to assist with conflict resolution and manage data migrations more safely. Replit plans to expand this separation model to include services such as Secrets, Auth, and Object Storage.

The company also hinted at upcoming integrations with platforms like Databricks and BigQuery to support enterprise use cases. Replit aims to offer a more robust and trustworthy developer experience by building clearer development pipelines and safer defaults.

Historic UK transport firm KNP collapses after ransomware attack

A 158‑year‑old UK transport firm, KNP Logistics, has collapsed after falling victim to a crippling ransomware attack. Hackers exploited a single weak password to infiltrate its systems and encrypted critical data, rendering the company inoperable.

Cybercriminals linked to the Akira gang locked out staff and demanded what experts believe could have been around £5 million, an amount KNP could not afford. The company ceased all operations, leaving approximately 700 employees without work.

The incident highlights how even historic companies with insurance and standard safeguards can be undone by basic cybersecurity failings. National Cyber Security Centre chief Richard Horne urged businesses to bolster defences, warning that attackers exploit the simplest vulnerabilities.

This case follows a string of high‑profile UK data breaches at firms like M&S, Harrods and Co‑op, signalling a growing wave of ransomware threats across industries. National Crime Agency data shows these attacks have nearly doubled recently.

Teens struggle to spot misinformation despite daily social media use

Misinformation online now touches every part of life, from fake products and health advice to political propaganda. Its influence extends beyond beliefs, shaping actions like voting behaviour and vaccination decisions.

Unlike traditional media, online platforms rarely include formal checks or verification, allowing false content to spread freely.

This is especially worrying as teenagers increasingly use social media as a main source of news and even as a search engine. Despite this heavy use, young people often lack the skills needed to spot false information.

In one 2022 Ofcom study, only 11% of 11- to 17-year-olds could consistently identify genuine posts online.

Research involving 11- to 14-year-olds revealed that many wrongly believed misinformation related only to scams or global news, so they didn’t see themselves as regular targets. Rather than fact-check, teens relied on gut feeling or social cues, such as comment sections or the appearance of a post.

These shortcuts make it easier for misinformation to appear trustworthy, especially when many adults also struggle to verify online content.

The study also found that young people thought older adults were more likely to fall for misinformation, yet believed their parents were better than them at spotting false content. Most teens felt it wasn’t their job to challenge false posts, instead placing the responsibility on governments and platforms.

In response, researchers have developed resources for young people, partnering with organisations like Police Scotland and Education Scotland to support digital literacy and online safety in practical ways.

Iran’s digital economy suffers heavy losses from internet shutdowns

Iran’s Minister of Communications has revealed the country’s digital economy shrank by 30% in just one month, losing around $170 million due to internet restrictions imposed during its recent 12-day conflict with Israel.

Sattar Hashemi told parliament on 22 July that roughly 10 million Iranians rely on digital jobs, but widespread shutdowns caused severe disruptions across platforms and services.

Hashemi estimated that every two days of restrictions inflicted 10 trillion rials in losses, totalling 150 trillion rials — an amount he said rivals the annual budgets of entire ministries.
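
As a rough sanity check, taking the minister’s rounded figures at face value, a month of restrictions at that rate gives:

(30 days ÷ 2 days) × 10 trillion rials = 150 trillion rials ≈ $170 million

which matches the monthly loss cited above and implies an open-market rate of roughly 880,000 rials to the dollar.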

While acknowledging the damage, he clarified that his ministry was not responsible for the shutdowns, attributing them instead to decisions made by intelligence and security agencies for national security reasons.

Alongside the blackouts, Iran endured over 20,000 cyberattacks during the conflict. Many of these targeted banks and payment systems, with platforms for Bank Sepah and Bank Pasargad knocked offline, halting salaries for military personnel.

Hacktivist groups such as Predatory Sparrow and Tapandegan claimed credit for the attacks, with some incidents reportedly wiping out crypto assets and further weakening the rial by 12%.

Lawmakers are now questioning the unequal structure of internet access. Critics have accused the government of enabling a ‘class-based internet’ in which insiders retain full access while the public faces heavy censorship.

MP Salman Es’haghi warned that Iran’s digital future cannot rely on filtered networks, demanding transparency about who benefits from unrestricted use.

NASA hacks Jupiter probe camera to recover vital images

NASA engineers have revealed they remotely repaired a failing camera aboard the Juno spacecraft orbiting Jupiter using a bold heating technique known as annealing.

Instead of replacing the hardware, which was impossible given the 595-million-kilometre distance from Earth, the team deliberately overheated the camera’s internals to reverse suspected radiation damage.

JunoCam, designed to last only eight orbits, surprisingly survived more than 45 orbits before image quality deteriorated on the 47th. Engineers suspected a voltage regulator fault and chose to heat the camera to 77°F (25°C), altering the silicon at a microscopic level.

The risky fix temporarily worked, but the issue resurfaced, prompting a second annealing at maximum heat just before a close flyby of Jupiter’s moon Io in late 2023.

The experiment’s success encouraged further tests on other Juno instruments, offering valuable insights into spacecraft resilience. Although NASA didn’t confirm whether these follow-ups succeeded, the effort highlighted the increasing need for in-situ repairs as missions explore deeper into space.

While JunoCam resumed high-quality imaging up to orbit 74, new signs of degradation have since appeared. NASA hasn’t yet confirmed whether another fix is planned or if the camera’s mission has ended.

ChatGPT evolves from chatbot to digital co-worker

OpenAI has launched a powerful multi-function agent inside ChatGPT, transforming the platform from a conversational AI into a dynamic digital assistant capable of executing multi-step tasks.

Rather than waiting for repeated commands, the agent acts independently — scheduling meetings, drafting emails, summarising documents, and managing workflows with minimal input.

The development marks a shift in how users interact with AI. Instead of merely assisting, ChatGPT now understands broader intent, remembers context, and completes tasks autonomously.
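
In architectural terms, such an agent is less a single prompt than a plan-act-observe loop: the model decides the next step, a tool executes it, and the result feeds the next decision. The minimal sketch below illustrates the pattern only; call_model and the toy tools are hypothetical stand-ins, not OpenAI’s actual agent API.

```python
# Minimal plan-act-observe loop. call_model() is a hypothetical stand-in
# for a real LLM call; the tools are toy examples.

def call_model(task: str, history: list[str]) -> dict:
    """Pretend model: picks the next action from the task and prior results."""
    steps = [
        {"tool": "summarise_document", "args": "report.txt"},
        {"tool": "draft_email", "args": "summary for the team"},
        {"tool": "done", "args": ""},
    ]
    return steps[len(history)]  # one scripted decision per turn

TOOLS = {
    "summarise_document": lambda args: f"summary of {args}",
    "draft_email": lambda args: f"email drafted: {args}",
}

def run_agent(task: str) -> list[str]:
    history: list[str] = []
    while True:
        action = call_model(task, history)               # plan the next step
        if action["tool"] == "done":
            return history                               # task complete
        result = TOOLS[action["tool"]](action["args"])   # act
        history.append(result)                           # observe, then loop

print(run_agent("summarise the report and email the team"))
```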

Professionals and individuals using ChatGPT online can now treat the system as a digital co-worker, helping automate complex tasks without bouncing between different tools.

The integration reflects OpenAI’s long-term vision of building AI that aligns with real-world needs. Unlike single-purpose tools such as GPTZero or NoteGPT, the ChatGPT agent analyses, summarises, and initiates next steps within a single workflow.

It’s part of a broader trend, where AI is no longer just a support tool but a full productivity engine.

For businesses adopting ChatGPT professional accounts, the rollout offers immediate value. It reduces manual effort, streamlines enterprise operations, and adapts to user habits over time.

As AI continues to embed itself into company infrastructure, the new agent from OpenAI signals a future where human–AI collaboration becomes the norm, not the exception.

Louis Vuitton Australia confirms customer data breach after cyberattack

Louis Vuitton has admitted to a significant data breach in Australia, revealing that an unauthorised third party accessed its internal systems and stole sensitive client details.

The breach, first detected on 2 July, included names, contact information, birthdates, and shopping preferences — though no passwords or financial data were taken.

The luxury retailer emailed affected customers nearly three weeks later, urging them to stay alert for phishing, scam calls, or suspicious texts.

While Louis Vuitton claims it acted quickly to contain the breach and block further access, questions remain about the delay in informing customers and the number of individuals affected.

Authorities have been notified, and cybersecurity specialists are now investigating. The incident adds to a growing list of cyberattacks on major Australian companies, prompting experts to call for stronger data protection laws and the right to demand deletion of personal information from corporate databases.

M&S Sparks scheme returns after cyberattack

Marks & Spencer has fully reinstated its Sparks loyalty programme following a damaging cyberattack that disrupted operations earlier this year. The retailer confirmed that online services are back and customers can access offers, discounts, and rewards again.

In April, a cyber breach forced M&S to suspend parts of its IT system and halt Sparks communications. Customers had raised concerns about missing benefits, prompting the company to promise a full recovery of its loyalty platform.

M&S has introduced new Sparks perks to thank users for their patience, including enhanced birthday rewards and complimentary coffees. Staff will also receive a temporary discount boost to 30 percent on selected items this weekend.

Marketing director Sharry Cramond praised staff efforts and customer support during the disruption, calling the recovery a team effort. Meanwhile, according to the UK National Crime Agency, four individuals suspected of involvement in cyberattacks against M&S and other retailers have been released on bail.

Dutch publishers support ethical training of AI model

Dutch news publishers have partnered with research institute TNO to develop GPT-NL, a homegrown AI language model trained on legally obtained Dutch data.

The project marks the first time globally that private media outlets have actively contributed content to shape a national AI system.

Over 30 national and regional publishers from NDP Nieuwsmedia and news agency ANP are sharing archived articles to double the volume of high-quality training material. The initiative aims to establish ethical standards in AI by ensuring copyright is respected and contributors are compensated.

GPT-NL is designed to support tasks such as summarisation and information extraction, and follows European legal frameworks like the AI Act. Strict safeguards will prevent content from being extracted or reused without authorisation when the model is released.

The model has access to over 20 billion Dutch-language tokens, offering a diverse and robust foundation for its training. It is a non-profit collaboration between TNO, NFI, and SURF, intended as a responsible alternative to large international AI systems.

How to keep your data safe while using generative AI tools

Generative AI tools have become a regular part of everyday life, both professionally and personally. Despite their usefulness, concern is growing about how they handle private data shared by users.

Major platforms like ChatGPT, Claude, Gemini, and Copilot collect user input to improve their models. Much of this data handling occurs behind the scenes, raising transparency and security concerns.

Anat Baron, a generative AI expert, compares AI models to Pac-Man — constantly consuming data to enhance performance. The more information they receive, the more helpful they become, often at the expense of privacy.

Many users ignore warnings not to share sensitive information. Baron advises against sharing anything with AI that one would not give to a stranger, including ID numbers, financial data, and medical results.
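
One practical complement to that advice is to scrub obvious identifiers before a prompt ever leaves your machine. A minimal sketch follows; the patterns are illustrative placeholders, not an exhaustive or product-specific filter.

```python
import re

# Illustrative patterns only; real PII filtering needs far broader coverage.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),      # rough payment-card shape
    "national_id": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g. US SSN format
}

def redact(prompt: str) -> str:
    """Replace likely identifiers with placeholders before sending text to an AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Card 4111 1111 1111 1111, email jo@example.com"))
# -> "Card [CARD REDACTED], email [EMAIL REDACTED]"
```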

Some platforms offer options to reduce data collection. ChatGPT users can disable training under ‘Data Controls’, while Claude collects data only if users opt in. Perplexity and Gemini offer similar, though less transparent, settings.

Microsoft’s Copilot protects organisational data when users are logged in, but risks increase when it is used anonymously on the web. DeepSeek, however, collects user data automatically with no opt-out — making it a risky choice.

Users still retain control, but must remain alert. AI tools are evolving, and with digital agents on the horizon, safeguarding personal information is becoming even more critical. Baron sums it up simply: ‘Privacy always comes at a cost. We must decide how much we’re willing to trade for convenience.’
