Dutch publishers support ethical training of AI model

Dutch news publishers have partnered with research institute TNO to develop GPT-NL, a homegrown AI language model trained on legally obtained Dutch data.

The project marks the first time anywhere in the world that private media outlets have actively contributed content to shape a national AI system.

Over 30 national and regional publishers from NDP Nieuwsmedia and news agency ANP are sharing archived articles to double the volume of high-quality training material. The initiative aims to establish ethical standards in AI by ensuring copyright is respected and contributors are compensated.

GPT-NL is designed to support tasks such as summarisation and information extraction, and follows European legal frameworks like the AI Act. Strict safeguards will prevent content from being extracted or reused without authorisation when the model is released.

The model has access to over 20 billion Dutch-language tokens, offering a diverse and robust foundation for its training. It is a non-profit collaboration between TNO, NFI, and SURF, intended as a responsible alternative to large international AI systems.

Co-op confirms massive data breach as retail cyberattacks surge

All 6.5 million members of the Co-op had their personal data compromised in a cyberattack carried out on 30 April, the company’s chief executive has confirmed.

Shirine Khoury-Haq said the breach felt ‘personal’ after seeing the toll it took on IT teams fighting off the intrusion. She spoke in her first interview since the breach, broadcast on BBC Breakfast.

Initial statements from the Co-op described the incident as having only a ‘small impact’ on internal systems, including call centres and back-office operations.

The alleged hackers soon contacted media outlets and claimed to have accessed both employee and customer data, prompting the company to update its assessment.

The Co-op later admitted that data belonging to a ‘significant number’ of current and former members had been stolen. Exposed information included names, addresses, and contact details, though no payment data was compromised.

Restoration efforts are still ongoing as the company works to rebuild affected back-end systems. In some locations, operational disruption led to empty shelves and prolonged outages.

Khoury-Haq recalled meeting employees during the remediation phase and said she was ‘incredibly sorry’ for the incident. ‘I will never forget the looks on their faces,’ she said.

The attackers’ movements were closely tracked. ‘We were able to monitor every mouse click,’ Khoury-Haq added, noting that this helped authorities in their investigation.

The company reportedly disconnected parts of its network in time to prevent ransomware deployment, though not in time to avoid significant damage. Police said four individuals were arrested earlier this month in connection with the Co-op breach and related retail incidents. All have been released on bail.

Marks & Spencer and Harrods were also hit by cyberattacks in early 2025, with M&S still restoring affected systems. Researchers believe the same threat actor is responsible for all three attacks.

The group, identified as Scattered Spider, has previously disrupted other high-profile targets, including major US casinos in 2023.

5G market grows as GCT begins chipset rollout

GCT Semiconductor Holding, Inc. has begun delivering samples of its latest 5G chipsets to lead customers, including Airspan Networks and Orbic. The chipsets are offered in both chip and module formats to meet customers' specific testing needs.

Initial shipments aim to fulfil early demand, after which GCT will work with clients to assess performance and establish production requirements. The firm says it is well positioned to scale, citing a robust supply chain and deep experience in high-speed connectivity.

The fabless semiconductor designer targets mid-tier 5G applications and plans to introduce a Verizon-certified module. GCT has said it remains focused on accelerating its role in the global 5G market.

Fashion sector targeted again as Louis Vuitton confirms data breach

Louis Vuitton Hong Kong is under investigation after a data breach potentially exposed the personal information of around 419,000 customers, according to the South China Morning Post.

The company informed Hong Kong’s privacy watchdog on 17 July, more than a month after its French office first detected suspicious activity on 13 June. The Office of the Privacy Commissioner has now launched a formal inquiry.

Early findings suggest that compromised data includes names, passport numbers, birth dates, phone numbers, email addresses, physical addresses, purchase histories, and product preferences.

Although no complaints have been filed so far, the regulator is examining whether the reporting delay breached data protection rules and how the unauthorised access occurred. Louis Vuitton stated that it responded quickly with the assistance of external cybersecurity experts and confirmed that no payment details were involved.

The incident adds to a growing list of cyberattacks targeting fashion and retail brands in 2025. In May, fast fashion giant Shein confirmed a breach that affected customer support systems.

Security experts have warned that the sector remains a growing target due to high-value customer data and limited cyber defences. Louis Vuitton said it continues to upgrade its security systems and will notify affected individuals and regulators as the investigation continues.

‘We sincerely regret any concern or inconvenience this situation may cause,’ the company said in a statement.

[Dear readers, a previous version of this article highlighted incorrect information about a cyberattack on Puma. The information has been removed from our website, and we hereby apologise to Puma and our readers.]

How to keep your data safe while using generative AI tools

Generative AI tools have become a regular part of everyday life, both professionally and personally. Despite their usefulness, concern is growing about how they handle private data shared by users.

Major platforms like ChatGPT, Claude, Gemini, and Copilot collect user input to improve their models. Much of this data handling occurs behind the scenes, raising transparency and security concerns.

Anat Baron, a generative AI expert, compares AI models to Pac-Man—constantly consuming data to enhance performance. The more information they receive, the more helpful they become, often at the expense of privacy.

Many users ignore warnings not to share sensitive information. Baron advises against sharing anything with AI that one would not give to a stranger, including ID numbers, financial data, and medical results.
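
One practical way to follow that advice is to scrub obvious identifiers out of a text before pasting it into any chatbot. The sketch below is a minimal, illustrative example in Python using only the standard library; the regular expressions are rough placeholders and would need adapting to local ID, card and phone number formats.

```python
import re

# Illustrative patterns only; real ID, card and phone formats vary by country.
PATTERNS = {
    "card_or_id": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{2,4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d ()-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholders before the text is shared with an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarise this note: invoice sent to jan@example.com, "
              "card 1234 5678 9012 3456, call me on +31 6 12345678.")
    print(redact(prompt))
```

Automated redaction of this kind is no substitute for judgement, but it catches the most common slips before the text ever leaves the user's machine.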

Some platforms offer options to reduce data collection. ChatGPT users can disable training under ‘Data Controls’, while Claude collects data only if users opt in. Perplexity and Gemini offer similar, though less transparent, settings.

Microsoft’s Copilot protects organisational data when logged in, but risks increase when used anonymously on the web. DeepSeek, however, collects user data automatically with no opt-out—making it a risky choice.

Users still retain control, but must remain alert. AI tools are evolving, and with digital agents on the horizon, safeguarding personal information is becoming even more critical. Baron sums it up simply: ‘Privacy always comes at a cost. We must decide how much we’re willing to trade for convenience.’

OCC urged to delay crypto bank approvals

Major US banking and credit union associations are pressuring regulators to delay granting federal bank licences to crypto firms, including Circle, Ripple, and Fidelity Digital Assets.

In a joint letter, the American Bankers Association and others called on the Office of the Comptroller of the Currency (OCC) to halt decisions on these applications, raising what they described as serious legal and procedural issues.

The groups argue that the crypto firms’ business models do not align with the fiduciary activities typically required for national trust banks. They warned that granting such charters without clear oversight could mark a major policy shift and potentially weaken the foundations of the financial system.

The banks also claim the publicly available details of the applications are insufficient for public scrutiny. Some in the crypto sector see this as a sign of resistance from traditional banks fearing competition.

Recent legislative developments, particularly the GENIUS Act’s stablecoin framework, are encouraging more crypto firms to seek national bank charters.

Legal experts say such charters offer broader operational freedom than the new stablecoin licence, making them an increasingly attractive option for firms aiming to operate across all US states.

Human rights must anchor crypto design

Crypto builders face growing pressure to design systems that protect fundamental human rights from the outset. As concerns mount over surveillance, state-backed ID systems, and AI impersonation, experts warn that digital infrastructure must not compromise individual freedom.

Privacy-by-default, censorship resistance, and decentralised self-custody are no longer idealistic features — they are essential for any credible Web3 system. Critics argue that many current tools merely replicate traditional power structures, offering centralisation disguised as innovation.

The collapse of platforms like FTX has only strengthened calls for human-centric solutions.

New approaches are needed to ensure people can prove their personhood online without relying on governments or corporations. Digital inclusion depends on verification systems that are censorship-resistant, privacy-preserving and accessible.

Likewise, self-custody must evolve beyond fragile key backups and complex interfaces to empower everyday users.

While embedding values in code brings ethical and political risks, avoiding the issue could lead to greater harm. For the promise of Web3 to be realised, rights must be a design priority — not an afterthought.

ChatGPT stuns users by guessing object in viral video using smart questions

A video featuring ChatGPT Live has gone viral after the chatbot correctly guessed an object hidden in a user’s hand using only a series of questions.

The clip, shared on the social media platform X, shows the chatbot narrowing down its guesses until it lands on the correct answer, a pen, in under a minute. The video has fascinated viewers by showing how far generative AI has come since its initial launch.

Multimodal AI like ChatGPT can now process audio, video and text together, making interactions more intuitive and lifelike.
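
The viral clip uses ChatGPT’s live voice-and-video mode, but the same multimodal idea can be illustrated with a single API call that combines an image and a text question. The sketch below uses OpenAI’s Python client; the model name, file path and prompt are placeholders rather than details from the video.

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder photo of an object held in the hand.
with open("hidden_object.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

# One request combining text and image input; the model name is an example.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Ask me yes/no questions to guess the object in this photo."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```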

Another user attempted the same challenge with Gemini AI by holding an AC remote. Gemini described it as a ‘control panel for controlling temperature’, which was close but not entirely accurate.

The fun experiment also highlights the growing real-world utility of generative AI. At Google’s I/O conference this year, the company demonstrated how Gemini Live can help users troubleshoot and repair appliances at home by understanding both spoken instructions and visual input.

Beyond casual use, these AI tools are proving helpful in serious scenarios. An aspirant for India’s UPSC civil services examination recently explained how uploading her Detailed Application Form to a chatbot allowed it to generate practice questions.

She used those prompts to prepare for her interview and credited the AI with helping her boost her confidence.

New AI device brings early skin cancer diagnosis to remote communities

A Scottish research team has developed a pioneering AI-powered tool that could transform how skin cancer is diagnosed in some of the world’s most isolated regions.

The device, created by PhD student Tess Watt at Heriot-Watt University, enables rapid diagnosis without needing internet access or direct contact with a dermatologist.

Patients use a compact camera connected to a Raspberry Pi computer to photograph suspicious skin lesions.

The system then compares the image against thousands of preloaded examples using advanced image recognition and delivers a diagnosis in real time. These results are then shared with local GP services, allowing treatment to begin without delay.
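
The researchers have not published the details of their model, but the general pattern of comparing a new photo against a preloaded library of labelled examples, entirely offline, can be sketched as follows. The colour-histogram features and nearest-neighbour match below are illustrative stand-ins rather than the device’s actual image-recognition method, and the file paths and labels are hypothetical.

```python
import numpy as np
from PIL import Image  # pip install pillow numpy

def histogram_features(path: str, bins: int = 16) -> np.ndarray:
    """Very crude image descriptor: a normalised RGB colour histogram."""
    img = np.asarray(Image.open(path).convert("RGB").resize((128, 128)))
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    feat = np.concatenate(hist).astype(float)
    return feat / feat.sum()

def nearest_example(query_path: str, examples: dict[str, list[str]]) -> str:
    """Return the label of the preloaded example most similar to the query photo."""
    q = histogram_features(query_path)
    best_label, best_score = "unknown", -1.0
    for label, paths in examples.items():
        for p in paths:
            f = histogram_features(p)
            score = float(np.dot(q, f) / (np.linalg.norm(q) * np.linalg.norm(f)))
            if score > best_score:
                best_label, best_score = label, score
    return best_label

# Hypothetical preloaded library of labelled lesion photos stored on the device.
examples = {"benign": ["examples/benign_01.jpg"], "suspicious": ["examples/suspicious_01.jpg"]}
print(nearest_example("new_lesion.jpg", examples))
```

Because everything in this pattern runs from local storage, no internet connection is needed at the point of care, which is the property the Heriot-Watt device is designed around.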

The self-contained diagnostic system is among the first designed specifically for remote medical use. Watt said that home-based healthcare is vital, especially with growing delays in GP appointments.

The device, currently 85 per cent accurate, is expected to improve further with access to more image datasets and machine learning enhancements.

The team plans to trial the tool in real-world settings after securing NHS ethical approval. The initial rollout is aimed at rural Scottish communities, but the technology could benefit global populations with poor access to dermatological care.

Heriot-Watt researchers also believe the device will aid patients who are infirm or housebound, making early diagnosis more accessible than ever.

Perplexity CEO predicts that AI browser could soon replace recruiters and assistants

Perplexity AI CEO Aravind Srinivas believes that the company’s new AI-powered browser, Comet, could soon replace two key white-collar roles in most offices: recruiters and executive assistants.

Speaking on The Verge podcast, Srinivas explained that with the integration of more advanced reasoning models like GPT-5 or Claude 4.5, Comet will be able to handle tasks traditionally assigned to these positions.

He also described how a recruiter’s week-long workload could be reduced to a single AI prompt.

From sourcing candidates to scheduling interviews, tracking responses in Google Sheets, syncing calendars, and even briefing users ahead of meetings, Comet is built to manage the entire process—often without any follow-up input.

The tool remains in an invite-only phase and is currently available to premium users.

Srinivas also framed Comet as the early foundation of a broader AI operating system for knowledge workers, enabling users to issue natural language commands for complex tasks.

He emphasised the importance of adopting AI early, warning that those who fail to keep pace with the technology’s rapid growth—where breakthroughs arrive every few months—risk being left behind in the job market.

In a separate discussion, he urged younger generations to reduce time spent scrolling on Instagram and instead focus on mastering AI tools. According to him, the shift is inevitable, and those who embrace it now will hold a long-term professional advantage.
