UK MoD avoids further penalty after data breach

The UK’s data protection regulator has defended its decision not to pursue further action against the Ministry of Defence (MoD) over a serious data breach that exposed personal information of Afghans who assisted British forces.

The Information Commissioner’s Office (ICO) said the incident caused considerable harm but concluded additional investigation would not deliver greater benefit. The office stressed that organisations must handle data with greater care to avoid such damaging consequences.

The breach occurred when a hidden dataset in a spreadsheet was mistakenly shared under the pressures of a UK military operation. While the sender believed only limited data was being released, the spreadsheet contained much more information, some of which was later leaked online.
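A breach of this kind is often preventable with a pre-send check. Assuming the file was in the common .xlsx format (the article does not say), hidden sheets are recorded as a `state` attribute in the archive's `xl/workbook.xml`, and a short script can surface them before a file leaves the building. The sheet names below are illustrative.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

def hidden_sheets(xlsx_bytes: bytes) -> list[str]:
    """Return names of hidden sheets in an .xlsx file.

    An .xlsx file is a zip archive; each sheet's visibility is the
    'state' attribute of its <sheet> element in xl/workbook.xml.
    """
    with zipfile.ZipFile(io.BytesIO(xlsx_bytes)) as zf:
        root = ET.fromstring(zf.read("xl/workbook.xml"))
    return [
        el.attrib["name"]
        for el in root.iter()
        # Match <sheet> whether or not the producer namespaced the tag.
        if (el.tag == "sheet" or el.tag.endswith("}sheet"))
        and el.attrib.get("state", "visible") != "visible"
    ]

# Build a minimal workbook with one visible and one hidden sheet to demo.
workbook_xml = (
    '<workbook xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">'
    '<sheets>'
    '<sheet name="Summary" sheetId="1"/>'
    '<sheet name="FullDataset" sheetId="2" state="hidden"/>'
    '</sheets></workbook>'
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("xl/workbook.xml", workbook_xml)

print(hidden_sheets(buf.getvalue()))  # ['FullDataset']
```

Running such a check as part of a release workflow would have flagged the extra dataset before the spreadsheet was shared.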

The ICO had already fined the MoD £350,000 in 2023 over a previous incident related to the Afghan relocation programme. The regulator confirmed that in both cases, the department had taken significant remedial action and committed extensive public resources to mitigate future risk.


Although the ICO acknowledged the incident’s severe impact, including threats to individual lives, it decided not to divert further resources given existing accountability, classified restrictions, and national security concerns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teens struggle to spot misinformation despite daily social media use

Misinformation online now touches every part of life, from fake products and health advice to political propaganda. Its influence extends beyond beliefs, shaping actions like voting behaviour and vaccination decisions.

Unlike traditional media, online platforms rarely include formal checks or verification, allowing false content to spread freely.

The trend is especially worrying as teenagers increasingly use social media as their main source of news and even as a search engine. Despite their heavy usage, young people often lack the skills needed to spot false information.

In one 2022 Ofcom study, only 11% of 11 to 17-year-olds could consistently identify genuine posts online.

Research involving 11 to 14-year-olds revealed that many wrongly believed misinformation only related to scams or global news, so they didn’t see themselves as regular targets. Rather than fact-check, teens relied on gut feeling or social cues, such as comment sections or the appearance of a post.

These shortcuts make it easier for misinformation to appear trustworthy, especially when many adults also struggle to verify online content.

The study also found that young people thought older adults were more likely to fall for misinformation, while they believed their parents were better than them at spotting false content. Most teens felt it wasn’t their job to challenge false posts, instead placing the responsibility on governments and platforms.

In response, researchers have developed resources for young people, partnering with organisations like Police Scotland and Education Scotland to support digital literacy and online safety in practical ways.


New GLOBAL GROUP ransomware targets all major operating systems

A sophisticated new ransomware threat, dubbed GLOBAL GROUP, has emerged on cybercrime forums, designed to target Windows, Linux, and macOS systems from a single codebase.

In June 2025, a threat actor operating under the alias ‘Dollar Dollar Dollar’ launched the GLOBAL GROUP Ransomware-as-a-Service (RaaS) platform on the Ramp4u forum. The campaign offers affiliates scalable tools, automated negotiations, and generous profit-sharing, creating an appealing setup for monetising cybercrime at scale.

GLOBAL GROUP leverages the Golang language to build monolithic binaries, enabling seamless execution across varied operating environments in a single campaign. The strategy expands attackers’ reach, allowing them to exploit hybrid infrastructures while improving operational efficiency and scalability.

Golang’s concurrency model and static linking make it an attractive option for rapid, large-scale encryption without relying on external dependencies. However, forensic analysis by Picus Security Labs suggests GLOBAL GROUP is not an entirely original threat but rather a rebrand of previous ransomware operations.

Researchers linked its code and infrastructure to the now-defunct Mamona RIP and BlackLock families, revealing continuity in tactics and tooling. Evidence includes a reused mutex string—’Global\Fxo16jmdgujs437’—which was also found in earlier Mamona RIP samples, confirming code inheritance.

The re-use of such technical markers highlights how threat actors often evolve existing malware rather than building from scratch, streamlining development and deployment.
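For defenders, the same re-use cuts both ways: a reused string like the mutex above becomes a cheap detection signature. Below is a minimal, YARA-style sketch of scanning a sample for known byte markers; the mutex entry is the one reported by researchers, and the scanning code itself is illustrative rather than any vendor's actual tooling.

```python
# Known byte patterns tied to the linked families. The mutex string is
# the marker reported in the research; the dict layout is illustrative.
KNOWN_MARKERS = {
    b"Global\\Fxo16jmdgujs437": "Mamona RIP / GLOBAL GROUP mutex",
}

def match_markers(sample: bytes) -> list[str]:
    """Return descriptions of any known markers found in a sample."""
    return [desc for pattern, desc in KNOWN_MARKERS.items() if pattern in sample]

# A fake sample containing the reused mutex string.
sample = b"...\x00Global\\Fxo16jmdgujs437\x00..."
print(match_markers(sample))  # ['Mamona RIP / GLOBAL GROUP mutex']
print(match_markers(b"clean binary"))  # []
```

Production scanners add hashing, entropy checks, and behavioural rules on top, but simple substring signatures remain a common first-pass triage step precisely because operators rebrand more often than they rewrite.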

Beyond its cross-platform flexibility, GLOBAL GROUP also integrates modern cryptographic features to boost effectiveness and resistance to detection. It employs the ChaCha20-Poly1305 encryption algorithm, offering both confidentiality and message integrity with high processing performance.

The malware leverages Golang’s goroutines to encrypt all system drives simultaneously, reducing execution time and limiting defenders’ reaction window. Encrypted files receive customised extensions like ‘.lockbitloch’, with filenames also obscured to hinder recovery efforts without the correct decryption key.

Ransom note logic is embedded directly within the binary, generating tailored communication instructions and linking to Tor-based leak sites. The approach simplifies extortion for affiliates while preserving operational security and ensuring anonymous negotiations with victims.


ChatGPT evolves from chatbot to digital co-worker

OpenAI has launched a powerful multi-function agent inside ChatGPT, transforming the platform from a conversational AI into a dynamic digital assistant capable of executing multi-step tasks.

Rather than waiting for repeated commands, the agent acts independently — scheduling meetings, drafting emails, summarising documents, and managing workflows with minimal input.

The development marks a shift in how users interact with AI. Instead of merely assisting, ChatGPT now understands broader intent, remembers context, and completes tasks autonomously.
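The general shape of such an agent is a loop that maps a goal onto a sequence of tool calls. The sketch below is a deliberately toy version of that pattern and reflects nothing of OpenAI's actual implementation; in a real agent, a language model would choose the steps, and the two tools here are hypothetical stand-ins.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Toy agent loop: execute a planned sequence of tool calls.

    `tools` maps tool names to callables; real agents replace the
    hard-coded plan with model-generated steps.
    """
    tools: dict[str, Callable[[str], str]]
    log: list[str] = field(default_factory=list)

    def run(self, steps: list[tuple[str, str]]) -> list[str]:
        for tool_name, arg in steps:
            # Each step's result is logged so later steps (or the user)
            # can inspect what the agent has done so far.
            self.log.append(self.tools[tool_name](arg))
        return self.log

agent = Agent(tools={
    "summarise": lambda doc: f"Summary of {doc}",
    "draft_email": lambda topic: f"Drafted email about {topic}",
})
print(agent.run([("summarise", "Q3 report"), ("draft_email", "Q3 results")]))
# ['Summary of Q3 report', 'Drafted email about Q3 results']
```

The step from assistant to agent is essentially the step from answering one prompt to owning this loop end to end.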

Professionals and individuals using ChatGPT online can now treat the system as a digital co-worker, helping automate complex tasks without bouncing between different tools.

The integration reflects OpenAI’s long-term vision of building AI that aligns with real-world needs. Unlike single-purpose tools such as GPTZero or NoteGPT, the ChatGPT agent can analyse, summarise, and initiate next steps within one workflow.

It’s part of a broader trend, where AI is no longer just a support tool but a full productivity engine.

For businesses adopting ChatGPT professional accounts, the rollout offers immediate value. It reduces manual effort, streamlines enterprise operations, and adapts to user habits over time.

As AI continues to embed itself into company infrastructure, the new agent from OpenAI signals a future where human–AI collaboration becomes the norm, not the exception.


Louis Vuitton Australia confirms customer data breach after cyberattack

Louis Vuitton has admitted to a significant data breach in Australia, revealing that an unauthorised third party accessed its internal systems and stole sensitive client details.

The breach, first detected on 2 July, included names, contact information, birthdates, and shopping preferences — though no passwords or financial data were taken.

The luxury retailer emailed affected customers nearly three weeks later, urging them to stay alert for phishing, scam calls, or suspicious texts.

While Louis Vuitton claims it acted quickly to contain the breach and block further access, questions remain about the delay in informing customers and the number of individuals affected.

Authorities have been notified, and cybersecurity specialists are now investigating. The incident adds to a growing list of cyberattacks on major Australian companies, prompting experts to call for stronger data protection laws and the right to demand deletion of personal information from corporate databases.


ChatGPT stuns users by guessing object in viral video using smart questions

A video featuring ChatGPT Live has gone viral after it correctly guessed an object hidden in a user’s hand using only a series of questions.

The clip, shared on the social media platform X, shows the chatbot narrowing down its guesses until it lands on the correct answer — a pen — in under a minute. The video has fascinated viewers by showing how far generative AI has come since its initial launch.

Multimodal AI like ChatGPT can now process audio, video and text together, making interactions more intuitive and lifelike.

Another user attempted the same challenge with Gemini AI by holding an AC remote. Gemini described it as a ‘control panel for controlling temperature’, which was close but not entirely accurate.

The fun experiment also highlights the growing real-world utility of generative AI. At Google’s I/O conference earlier this year, the company demonstrated how Gemini Live can help users troubleshoot and repair appliances at home by understanding both spoken instructions and visual input.

Beyond casual use, these AI tools are proving helpful in serious scenarios. A UPSC aspirant — a candidate for India’s civil service exams — recently explained how uploading her Detailed Application Form to a chatbot allowed it to generate practice questions.

She used those prompts to prepare for her interview and credited the AI with helping her boost her confidence.


New AI device brings early skin cancer diagnosis to remote communities

A Scottish research team has developed a pioneering AI-powered tool that could transform how skin cancer is diagnosed in some of the world’s most isolated regions.

The device, created by PhD student Tess Watt at Heriot-Watt University, enables rapid diagnosis without needing internet access or direct contact with a dermatologist.

Patients use a compact camera connected to a Raspberry Pi computer to photograph suspicious skin lesions.

The system then compares the image against thousands of preloaded examples using advanced image recognition and delivers a diagnosis in real time. These results are then shared with local GP services, allowing treatment to begin without delay.
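Comparing an image against preloaded examples typically means reducing each image to a feature vector and finding the closest match. The sketch below shows that nearest-neighbour idea with cosine similarity; the vectors, labels, and two-example reference set are entirely hypothetical, and the real device would use learned image features over thousands of examples rather than hand-written numbers.

```python
import math

# Hypothetical preloaded reference set: feature vector -> label.
REFERENCE = [
    ([0.9, 0.1, 0.2], "benign"),
    ([0.2, 0.8, 0.7], "suspicious"),
]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def classify(features: list[float]) -> str:
    """Label a lesion image by its most similar reference example."""
    return max(REFERENCE, key=lambda ref: cosine(features, ref[0]))[1]

print(classify([0.85, 0.15, 0.25]))  # 'benign'
print(classify([0.1, 0.9, 0.8]))    # 'suspicious'
```

Because the whole comparison runs locally on the Raspberry Pi, no internet connection is needed — which is precisely what makes the approach viable in remote regions.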

The self-contained diagnostic system is among the first designed specifically for remote medical use. Watt said that home-based healthcare is vital, especially with growing delays in GP appointments.

The device, currently 85 per cent accurate, is expected to improve further with access to more image datasets and machine learning enhancements.

The team plans to trial the tool in real-world settings after securing NHS ethical approval. The initial rollout is aimed at rural Scottish communities, but the technology could benefit global populations with poor access to dermatological care.

Heriot-Watt researchers also believe the device will aid patients who are infirm or housebound, making early diagnosis more accessible than ever.


DuckDuckGo adds new tool to block AI-generated images from search results

Privacy-focused search engine DuckDuckGo has launched a new feature that allows users to filter out AI-generated images from search results.

Although the company admits the tool is not perfect and may miss some content, it claims it will significantly reduce the number of synthetic images users encounter.

The new filter uses open-source blocklists, including a more aggressive ‘nuclear’ option, sourced from tools like uBlock Origin and uBlacklist.

Users can access the setting via the Images tab after performing a search or use a dedicated link — noai.duckduckgo.com — which keeps the filter always on and also disables AI summaries and the browser’s chatbot.
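Domain blocklists of the uBlacklist kind work by matching a result's host, or any parent domain of it, against a deny set. A minimal sketch of that filtering step is below; the two blocked domains are invented placeholders, not entries from the real lists DuckDuckGo sources.

```python
from urllib.parse import urlsplit

# Illustrative entries in the style of uBlacklist domain lists; the real
# lists are larger and community-maintained.
BLOCKED_DOMAINS = {"ai-images.example", "genart.example"}

def filter_results(urls: list[str]) -> list[str]:
    """Drop results whose host (or any parent domain) is blocklisted."""
    kept = []
    for url in urls:
        host = urlsplit(url).hostname or ""
        parts = host.split(".")
        # All suffixes of the host: cdn.a.example -> {cdn.a.example, a.example, example}
        suffixes = {".".join(parts[i:]) for i in range(len(parts))}
        if not suffixes & BLOCKED_DOMAINS:
            kept.append(url)
    return kept

print(filter_results([
    "https://photos.example/peacock.jpg",
    "https://cdn.ai-images.example/peacock.png",
]))  # ['https://photos.example/peacock.jpg']
```

The approach is fast but inherently incomplete — AI images hosted on general-purpose sites slip through — which matches DuckDuckGo's own caveat that the filter will miss some content.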

The update responds to growing frustration among internet users. Platforms like X and Reddit have seen complaints about AI content flooding search results.

In one example, users searching for ‘baby peacock’ reported seeing as many AI-generated images as real ones, or more, making it harder to distinguish between fake and authentic content.

DuckDuckGo isn’t alone in trying to tackle unwanted AI material. In 2024, Hiya launched a Chrome extension aimed at spotting deepfake audio across major platforms.

Microsoft’s Bing has also partnered with groups like StopNCII to remove explicit synthetic media from its results, showing that the fight against AI content saturation is becoming a broader industry trend.


Irish hospital turns to AI for appointment management

Beaumont Hospital in Dublin plans to deploy AI to predict patient no-shows and late cancellations, aiming to reduce wasted resources.

Instead of relying solely on reminders, the hospital will pilot AI software costing up to €110,000, using patient data to forecast missed appointments. Currently, no-shows account for 15.5% of its outpatient slots.

The system will integrate with Beaumont’s existing two-way text messaging service. Rather than sending uniform reminders, the AI model will tailor messages based on the likelihood of attendance while providing hospital staff with real-time insights to better manage clinic schedules.
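Tailoring a reminder to a predicted no-show probability usually comes down to risk tiers. The sketch below shows the idea; the thresholds and message schedules are invented for illustration, since the article does not describe how Beaumont's system will tier its messages.

```python
def reminder_plan(no_show_risk: float) -> list[str]:
    """Map a predicted no-show probability to a reminder schedule.

    Thresholds and message mixes are illustrative placeholders.
    """
    if no_show_risk >= 0.6:
        # High risk: escalate frequency and make rebooking one tap away.
        return ["text 7 days before", "text 2 days before",
                "text on the day with a one-tap rebook link"]
    if no_show_risk >= 0.3:
        return ["text 3 days before", "text on the day"]
    # Low risk: a single standard reminder.
    return ["text 2 days before"]

print(reminder_plan(0.72))  # three escalating reminders
print(reminder_plan(0.10))  # ['text 2 days before']
```

The predicted probability itself would come from a model trained on historical attendance data; the tiering logic is only the last, simple step of the pipeline.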

The pilot is expected to begin in late 2025 or early 2026, potentially expanding into a full €1.2 million contract.

The move forms part of Beaumont Hospital’s strategic plan through 2030 to reduce outpatient non-attendance. It follows the broader adoption of AI in Irish healthcare, including Mater Hospital’s recent launch of an AI and Digital Health centre designed to tackle clinical challenges using new technologies.

Instead of viewing AI as a future option, Irish hospitals now increasingly treat it as an immediate solution to operational inefficiencies, hoping it will transform healthcare delivery and improve patient service.


Mistral’s chatbot Le Chat takes on ChatGPT with major upgrade

France-based AI startup Mistral has rolled out a major update to Le Chat, its AI chatbot, introducing new features aimed at challenging rivals like ChatGPT, Gemini and Claude. The update includes Deep Research, voice interaction, reasoning capabilities and a refreshed image editor.

According to the company’s latest blog post, the new Deep Research mode transforms Le Chat into a structured assistant that can clarify needs, search sources and deliver summarised findings. The tool enables users to receive comprehensive responses in a neatly formatted report.

In addition, Mistral unveiled Vocal mode, allowing users to speak to the chatbot as if they were talking to a person. The feature is powered by the firm’s voice input model, Voxtral, which handles voice recognition in real time.

The company also introduced Think mode, based on its Magistral reasoning model. Designed for multilingual and complex tasks, the feature provides thoughtful and clear responses, even when answering legal or professional queries in languages like Spanish or Japanese.

For users juggling multiple conversations or tasks, the new Projects tool groups related chats into separate spaces. Each project includes a dedicated Library for storing files and content, while also remembering individual tools and settings.

Users can upload documents directly into Projects and revisit past chats or references. Content from the Library can also be pulled into the active conversation, supporting a more seamless and personalised experience.

A revamped image editor rounds out the update, offering users the ability to tweak AI-generated visuals while maintaining consistency in character design and fine details. Mistral says the upgrade helps improve image customisation without compromising visual integrity.

All features are now available through Le Chat’s web platform at ‘chat.mistral.ai’ or via the company’s mobile apps on Android and iOS. The update reflects Mistral’s growing ambition to differentiate itself in the increasingly competitive AI assistant market.
