A ransomware group calling itself Yurei first emerged on 5 September, targeting a food manufacturing company in Sri Lanka. Within days, the group had added victims in India and Nigeria, bringing the total confirmed incidents to three.
The Check Point researchers found that Yurei’s code is largely derived from Prince-Ransomware, an open-source project. Because the developers did not strip symbols from the compiled binary, the original function and module names remain intact, making the link to Prince-Ransomware clear.
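For illustration, here is a minimal Python sketch of how unstripped symbols make such reuse visible: an analyst can simply search the raw binary for tell-tale function and module names. The marker strings, file path, and matching logic below are hypothetical stand-ins, not the actual symbols reported by Check Point.

```python
# Minimal sketch: unstripped symbol names can reveal code reuse.
# The marker strings below are hypothetical placeholders, not the
# actual Prince-Ransomware symbols identified in the research.
import re
from pathlib import Path

# Hypothetical package/function names an analyst might search for.
MARKERS = [b"prince/encryptor", b"prince.EncryptFile", b"filewalker.Walk"]

def find_markers(binary_path: str) -> dict[bytes, int]:
    """Return how often each marker string appears in the raw binary."""
    data = Path(binary_path).read_bytes()
    return {m: len(re.findall(re.escape(m), data)) for m in MARKERS}

if __name__ == "__main__":
    hits = find_markers("sample.bin")  # path is a placeholder
    for marker, count in hits.items():
        print(f"{marker.decode():30s} -> {count} occurrence(s)")
```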
Yurei operates using a double-extortion model, combining file encryption with theft of sensitive data. Victims are pressured to pay not only for a decryption key but also to prevent stolen data from being leaked.
Yurei’s extortion workflow involves posting victims on a darknet blog, sharing proof of compromise such as internal document screenshots, and offering a chat interface for negotiation. If a ransom is paid, the group promises a decryption tool and a report detailing the vulnerabilities exploited during the attack, akin to a pen-test report.
Preliminary findings (with ‘low confidence’) suggest that Yurei may be based in Morocco, though attribution remains uncertain.
The emergence of Yurei illustrates how open-source ransomware projects lower the barrier to entry, enabling relatively unsophisticated actors to launch effective campaigns. The focus on data theft rather than purely encryption may represent an escalating trend in modern cyberextortion.
Disney, Warner Bros. Discovery and NBCUniversal have filed a lawsuit in California against Chinese AI company MiniMax, accusing it of large-scale copyright infringement.
The studios allege that MiniMax’s Hailuo AI service generates unauthorised images and videos featuring well-known characters such as Darth Vader, marketing itself as a ‘Hollywood studio in your pocket’ while disregarding copyright law.
According to the complaint, MiniMax, reportedly worth $4 billion, ignored cease-and-desist requests and continues to profit from copyrighted works. The studios argue that the company could easily implement safeguards, pointing to existing controls that already block violent or explicit content.
The studios claim that MiniMax’s approach poses a serious threat to both creators and the broader film industry, which contributes hundreds of billions of dollars to the US economy.
Plaintiffs, including Disney’s Marvel and Lucasfilm units, Universal’s DreamWorks Animation and Warner Bros.’ DC Comics, are seeking statutory damages of up to $150,000 per infringed work or unspecified compensation.
They are also seeking an injunction to stop MiniMax’s alleged violations, not merely monetary compensation.
The Motion Picture Association has backed the lawsuit, with its chairman Charles Rivkin warning that unchecked copyright infringement could undermine millions of jobs and the cultural value created by the American film industry.
MiniMax, based in Shanghai, has not responded publicly to the claims but has previously described itself as a global AI foundation model company with over 157 million users worldwide.
Jaguar Land Rover (JLR) has confirmed that its production halt will continue until at least Wednesday, 24 September, as it works to recover from a major cyberattack that disrupted its IT systems and paralysed production at the end of August.
JLR stated that the extension was necessary because forensic investigations were ongoing and the controlled restart of operations was taking longer than anticipated. The company stressed that it was prioritising a safe and stable restart and pledged to keep staff, suppliers, and partners regularly updated.
Reports suggest recovery could take weeks, impacting production and sales channels for an extended period. Approximately 33,000 employees remain at home as factory and sales processes are not fully operational, resulting in estimated losses of £1 billion in revenue and £70 million in profits.
The shutdown also poses risks to the wider UK economy, as JLR accounts for roughly four percent of British exports. The incident has renewed calls for the Cyber Security and Resilience Bill, which aims to strengthen defences against digital threats to critical industries.
No official attribution has been made, but a group calling itself Scattered Lapsus$ Hunters has claimed responsibility. The group claims to have deployed ransomware and published screenshots of JLR’s internal SAP system, linking itself to extortion groups, including Scattered Spider, Lapsus$, and ShinyHunters.
Bracknell and Wokingham College has confirmed a cyberattack that compromised data collected for Disclosure and Barring Service (DBS) checks. The breach affects data used by Activate Learning and other institutions, including names, dates of birth, National Insurance numbers, and passport details.
Access Personal Checking Services (APCS) was alerted by supplier Intradev on August 17 that its systems had been accessed without authorisation. While payment card details and criminal conviction records were not compromised, data submitted between December 2024 and May 8, 2025, was copied.
APCS stated that its own networks and those of Activate Learning were not breached. The organisation is contacting only those data controllers where confirmed breaches have occurred and has advised that its services can continue to be used safely.
Activate Learning reported the incident to the Information Commissioner’s Office following a risk assessment. APCS is still investigating the full scope of the breach and has pledged to keep affected institutions and individuals informed as more information becomes available.
Individuals have been advised to monitor their financial statements closely, be wary of phishing emails, and keep security measures up to date, including passwords and two-factor authentication. Activate Learning emphasised the importance of staying vigilant to minimise risks.
Swedish prosecutors have confirmed that a cyberattack on IT systems provider Miljodata exposed the personal data of 1.5 million people, nearly 15% of Sweden’s population. The attack occurred during the weekend of August 23–24.
Authorities said the stolen data has been leaked online and includes names, addresses, and contact details. Prosecutor Sandra Helgadottir said the group Datacarry has claimed responsibility, though no foreign state involvement is suspected.
Media in Sweden reported that the hackers demanded 1.5 bitcoin (around $170,000) to prevent the release of the data. Miljodata confirmed the information has now been published on the darknet.
The Swedish Authority for Privacy Protection has received over 250 breach notifications, with 164 municipalities and four regional authorities impacted. Employees in Gothenburg were among those affected, according to SVT.
Private companies, including Volvo, SAS, and GKN Aerospace, also reported compromised data. Investigators are working to identify the perpetrators as the breach’s scale continues to raise concerns nationwide.
AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast amounts of data and statistical prediction, now generate text that can mirror human speech, mimic tone, and simulate expertise, but also produce convincing hallucinations that blur the line between fact and fiction.
From summarising policy to drafting contracts and responding to customer queries, these tools are becoming embedded across industries, governments, and education systems.
As their capabilities grow, so does the underlying problem that many still underestimate. These systems frequently produce convincing but entirely false information. Often referred to as ‘AI hallucinations’, such factual distortions pose significant risks, especially when users trust outputs without questioning their validity.
Once deployed in high-stakes environments, from courts to political arenas, the line between generative power and generative failure becomes more challenging to detect and more dangerous to ignore.
When facts blur into fiction
AI hallucinations are not simply errors. They are confident statements presented as fact, yet grounded in nothing more than probability. Language models are designed to generate the most likely next word, not the correct one. That difference may be subtle in casual settings, but it becomes critical in fields like law, healthcare, or media.
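A toy sketch of that difference: the model ranks candidate continuations by probability and emits the winner, with nothing in the loop that checks whether the winner is true. The vocabulary and scores below are invented for illustration.

```python
# Toy illustration: a language model picks the most probable continuation,
# with no notion of whether that continuation is factually correct.
# Candidates and scores are invented for demonstration purposes.
import math

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores after the prompt
# "The capital of Australia is"
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [2.4, 2.1, 0.7]   # fluency-driven scores, not fact-checked ones

probs = softmax(logits)
best = max(zip(candidates, probs), key=lambda pair: pair[1])
print({c: round(p, 2) for c, p in zip(candidates, probs)})
print("model's pick:", best[0])   # wins on probability, not on truth
```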
One such example emerged when an AI chatbot misrepresented political programmes in the Netherlands, falsely attributing policy statements about Ukraine to the wrong party. The error spread misinformation and triggered official concern. The chatbot had no malicious intent, yet its hallucination shaped public discourse.
Mistakes like these often pass unnoticed because the tone feels authoritative. The model sounds right, and that is the danger.
Why large language models hallucinate
Hallucinations are not bugs in the system. They are a direct consequence of the way language models are built. Trained to complete text based on patterns, these systems have no genuine understanding of the world, no memory of ‘truth’, and no internal model of fact.
A recent study reveals that even the way models are tested may contribute to hallucinations. Instead of rewarding caution or encouraging honesty, current evaluation frameworks favour responses that appear complete and confident, even when inaccurate. The more assertive the lie, the better it scores.
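A rough sketch of that incentive, under assumed numbers: with a plain accuracy metric, a model that guesses whenever it is unsure never scores worse than one that abstains, whereas a metric that penalises wrong answers would make ‘I don’t know’ the rational choice at low confidence. The scoring rules and probabilities below are illustrative, not taken from the study.

```python
# Illustrative sketch of the evaluation incentive described above:
# under a plain accuracy metric, guessing never scores worse than
# abstaining, so a model tuned to the benchmark learns to answer
# confidently even when unsure. All numbers are assumptions.

def accuracy_score(correct: bool, abstained: bool) -> float:
    # Standard benchmark: 1 for a right answer, 0 otherwise.
    return 0.0 if abstained else (1.0 if correct else 0.0)

def penalised_score(correct: bool, abstained: bool) -> float:
    # Alternative rule: abstaining is neutral, a wrong answer costs points.
    if abstained:
        return 0.0
    return 1.0 if correct else -1.0

p_correct = 0.3  # assume the model is only 30% sure of its answer

guess_accuracy  = p_correct * 1.0 + (1 - p_correct) * 0.0    # expected 0.30
guess_penalised = p_correct * 1.0 + (1 - p_correct) * -1.0   # expected -0.40

print(f"accuracy metric:  guess={guess_accuracy:.2f}  abstain=0.00")
print(f"penalised metric: guess={guess_penalised:.2f} abstain=0.00")
```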
Alongside these structural flaws, real-world use cases reveal additional causes. Here are the most frequent causes of AI hallucinations:
Vague or ambiguous prompts
Lack of specificity forces the model to fill gaps with speculative content that may not be grounded in real facts.
Overly long conversations
As prompt history grows, especially without proper context management, models lose track and invent plausible answers.
Missing knowledge
When a model lacks reliable training data on a topic, it may produce content that appears accurate but is fabricated.
Leading or biased prompts
Inputs that suggest a specific answer can nudge the model into confirming something untrue to match expectations.
Interrupted context due to connection issues
Especially with browser-based tools, a brief loss of session data can cause the model to generate off-track or contradictory outputs.
Over-optimisation for confidence
Most systems are trained to sound fluent and assertive. Saying ‘I don’t know’ is statistically rare unless explicitly prompted.
Each of these cases stems from a single truth. Language models are not fact-checkers. They are word predictors. And prediction, without verification, invites fabrication.
The cost of trust in flawed systems
Hallucinations become dangerous not when they happen, but when they are believed.
Users may not question the output of an AI system if it appears polished, grammatically sound, and well-structured. This perceived credibility can lead to real consequences, including legal documents based on invented cases, medical advice referencing non-existent studies, or voters misled by political misinformation.
In low-stakes scenarios, hallucinations may lead to minor confusion. In high-stakes contexts, the same dynamic can result in public harm or institutional breakdown. Once generated, an AI hallucination can be amplified across platforms, indexed by search engines, and cited in real documents. At that point, it becomes a synthetic fact.
Can hallucinations be fixed?
Some efforts are underway to reduce hallucination rates. Retrieval-augmented generation (RAG), fine-tuning on verified datasets, and human-in-the-loop moderation can improve reliability. Still, no method has eliminated hallucinations.
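As a rough illustration of the retrieval-augmented approach, the sketch below retrieves the most relevant snippets from a small document store and instructs the model to answer only from them. The toy corpus, the naive overlap scoring, and the `call_llm` stub are assumptions for demonstration, not a production pipeline.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The document store, scoring function, and call_llm stub are
# simplified assumptions used only to show the shape of the pipeline.

DOCUMENTS = [
    "Canberra is the capital city of Australia.",
    "Sydney is the largest city in Australia by population.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ask the model to answer only from the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return (f"Answer using ONLY the sources below. "
            f"If they are insufficient, say you do not know.\n"
            f"Sources:\n{joined}\n\nQuestion: {query}\nAnswer:")

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (an assumption in this sketch).
    return "[model response grounded in the retrieved sources]"

query = "What is the capital of Australia?"
prompt = build_prompt(query, retrieve(query, DOCUMENTS))
print(prompt)
print(call_llm(prompt))
```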
The deeper issue is how language models are rewarded, trained, and deployed. Without institutional norms prioritising verifiability and technical mechanisms that can flag uncertainty, hallucinations will remain embedded in the system.
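One possible shape of such an uncertainty-flagging mechanism, sketched under assumed numbers: if the model’s own token probabilities suggest it was hesitant, the answer is marked for verification rather than presented as fact. The probability values and the threshold are illustrative assumptions.

```python
# Sketch of a simple uncertainty flag based on the model's own
# token probabilities. Values and threshold are invented for illustration.
import math

def mean_neg_logprob(token_probs: list[float]) -> float:
    """Average negative log-probability of the tokens the model emitted;
    higher values mean the model was less sure of its own wording."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def flag_if_uncertain(answer: str, token_probs: list[float],
                      threshold: float = 0.5) -> str:
    # The threshold is an assumed tuning knob, not a standard value.
    if mean_neg_logprob(token_probs) > threshold:
        return f"[LOW CONFIDENCE - verify before use] {answer}"
    return answer

confident = [0.95, 0.90, 0.97]        # sharply peaked token choices
hesitant  = [0.40, 0.35, 0.30, 0.25]  # flat, uncertain token choices

print(flag_if_uncertain("Canberra is the capital of Australia.", confident))
print(flag_if_uncertain("The ruling cited three prior cases.", hesitant))
```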
Even the most capable AI models need to be built with humility. The ability to say ‘I don’t know’ is still one of the rarest responses in the current landscape.
Hallucinations won’t go away. Responsibility must step in.
Language models are not truth machines. They are prediction engines trained on vast and often messy human data. Their brilliance lies in fluency, but fluency can easily mask fabrication.
As AI tools become part of our legal, political, and civic infrastructure, institutions and users must approach them critically. Trust in AI should never be passive. And without active human oversight, hallucinations may not just mislead; they may define the outcome.
OpenAI stressed that conversations with AI often involve sensitive personal information, which should be treated with the same level of protection as communications with doctors or lawyers.
At the same time, it aims to grant adult users broad freedom to direct AI responses, provided safety boundaries are respected.
The situation changes for younger users. Teenagers are seen as requiring stricter safeguards, with safety taking priority over privacy and freedom. OpenAI is developing age-prediction tools to identify users under 18, and where uncertainty exists, it will assume the user is a teenager.
Teen users will face tighter restrictions on certain types of content. ChatGPT will be trained not to engage in flirtatious exchanges, and sensitive issues such as self-harm will be carefully managed.
If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities.
The new approach raises questions about privacy trade-offs, the accuracy of age prediction, and the handling of false classifications.
Critics may also question whether restrictions on creative content hinder expression. OpenAI acknowledges these tensions but argues the risks faced by young people online require stronger protections.
Advertising is heading for a split future. By 2030, brands will run hyper-personalised AI campaigns or embrace raw human storytelling. Everything in between will vanish.
AI-driven advertising will go far beyond text-to-image gimmicks. These adaptive systems will combine social trends, search habits, and first-party data to create millions of real-time ad variations.
The opposite approach will lean into imperfection: unpolished TikToks, founder-shot iPhone videos, and content that feels authentic and alive. Audiences reward that authenticity over carefully scripted, generic campaigns.
Mid-tier creative work, polished but forgettable, will be the first to fade away. AI can replicate it instantly, and audiences will scroll past it without noticing.
Marketers must now pick a side: feed AI with data and scale personalisation, or double down on community-driven, imperfect storytelling. The middle won’t survive.
The Cyberspace Administration of China (CAC) has proposed new rules requiring major online platforms to establish independent oversight committees focused on personal data protection. The draft regulation, released Friday, 13 September 2025, is open for public comment until 12 October 2025.
Under the proposal, platforms with large user bases and complex operations must form committees of at least seven members, two-thirds of whom must be external experts without ties to the company. These experts must have at least three years of experience in data security and be well-versed in relevant laws and standards.
The committees will oversee sensitive data handling, cross-border transfers, security incidents, and regulatory compliance. They are also tasked with maintaining open communication channels with users about data concerns.
If a platform fails to act on a committee’s findings and offers unsatisfactory reasons, the issue can be escalated to provincial regulators in China.
The CAC says the move aims to enhance transparency and accountability by involving independent experts in monitoring and flagging high-risk data practices.
The latest update to Google’s Gemini has sparked a social media craze by allowing users to transform 2D photos into lifelike 3D figurines. The feature, part of Gemini 2.5 Flash Image, has quickly become the standout trend from the update.
More than a photo-editing tool, Gemini now helps users turn selfies, portraits, and pet photos into stylised statuettes. The images resemble collectable vinyl or resin figures, with smooth finishes and polished detailing.
The digital figurine trend blends personalisation with creativity, allowing users to reimagine themselves or loved ones as miniature display pieces. The playful results have been widely shared across platforms, driving renewed engagement with Google’s AI suite.
The figurine generator also complements Gemini’s other creative functions, such as image combination and style transformation, which allow users to experiment with entirely new aesthetics. Together, these tools extend Gemini’s appeal beyond simple photo correction.
While other platforms have offered 3D effects, Gemini’s version produces highly polished results in seconds, democratising what was once a niche 3D modelling skill. For many, it is the most accessible way to turn memories into digital art.