Swedish prosecutors have confirmed that a cyberattack on IT systems provider Miljodata exposed the personal data of 1.5 million people, nearly 15% of Sweden’s population. The attack occurred during the weekend of August 23–24.
Authorities said the stolen data has been leaked online and includes names, addresses, and contact details. Prosecutor Sandra Helgadottir said the group Datacarry has claimed responsibility, though no foreign state involvement is suspected.
Media in Sweden reported that the hackers demanded 1.5 bitcoin (around $170,000) to prevent the release of the data. Miljodata confirmed the information has now been published on the darknet.
The Swedish Authority for Privacy Protection has received over 250 breach notifications, with 164 municipalities and four regional authorities impacted. Employees in Gothenburg were among those affected, according to SVT.
Private companies, including Volvo, SAS, and GKN Aerospace, also reported compromised data. Investigators are working to identify the perpetrators as the breach’s scale continues to raise concerns nationwide.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast amounts of data and statistical prediction, now generate text that can mirror human speech, mimic tone, and simulate expertise, but also produce convincing hallucinations that blur the line between fact and fiction.
From summarising policy to drafting contracts and responding to customer queries, these tools are becoming embedded across industries, governments, and education systems.
As their capabilities grow, so does the underlying problem that many still underestimate. These systems frequently produce convincing but entirely false information. Often referred to as ‘AI hallucinations’, such factual distortions pose significant risks, especially when users trust outputs without questioning their validity.
Once deployed in high-stakes environments, from courts to political arenas, the line between generative power and generative failure becomes more challenging to detect and more dangerous to ignore.
When facts blur into fiction
AI hallucinations are not simply errors. They are confident statements presented as fact, even though they rest on probability rather than knowledge. Language models are designed to generate the most likely next word, not the correct one. That difference may be subtle in casual settings, but it becomes critical in fields like law, healthcare, or media.
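To make that mechanism concrete, here is a toy sketch in Python; the words and probabilities are invented purely for illustration, not drawn from any real model:

```python
# Toy illustration: a language model selects the most PROBABLE continuation,
# not the most TRUTHFUL one. All probabilities here are invented.
next_word_probs = {
    # Hypothetical estimates after the prompt "The capital of Australia is":
    "Sydney": 0.55,    # common in casual web text, but wrong
    "Canberra": 0.40,  # correct, yet less frequent in training data
    "Melbourne": 0.05,
}

prediction = max(next_word_probs, key=next_word_probs.get)
print(prediction)  # -> Sydney: fluent, confident, and false
```

When the most statistically likely answer and the factually correct one diverge, the model sides with statistics.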
One such example emerged when an AI chatbot misrepresented political programmes in the Netherlands, falsely attributing policy statements about Ukraine to the wrong party. The error spread misinformation and triggered official concern. The chatbot had no malicious intent, yet its hallucination shaped public discourse.
Mistakes like these often pass unnoticed because the tone feels authoritative. The model sounds right, and that is the danger.
Why large language models hallucinate
Hallucinations are not bugs in the system. They are a direct consequence of the way language models are built. Trained to complete text based on patterns, these systems have no fundamental understanding of the world, no memory of ‘truth’, and no internal model of fact.
A recent study reveals that even the way models are tested may contribute to hallucinations. Instead of rewarding caution or encouraging honesty, current evaluation frameworks favour responses that appear complete and confident, even when inaccurate. The more assertive the lie, the better it scores.
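The incentive is easy to see in miniature. A hypothetical benchmark scored by exact match, with invented questions and answers, gives no credit for honest abstention:

```python
# Toy benchmark: exact-match grading never rewards "I don't know", so a
# model that always guesses outscores one that abstains when unsure.
# All questions, answers, and scores here are invented.
gold = ["A", "B", "C", "D"]

guesser = ["A", "B", "C", "A"]                          # answers everything
abstainer = ["A", "B", "I don't know", "I don't know"]  # honest when unsure

def exact_match(predictions):
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

print(exact_match(guesser))    # 0.75 -- wrong guesses cost nothing
print(exact_match(abstainer))  # 0.50 -- abstention is scored as failure
```

Under such scoring, a model trained to maximise benchmark performance learns that guessing always beats admitting uncertainty.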
Alongside these structural flaws, real-world use reveals additional triggers. Here are the most frequent causes of AI hallucinations:
Vague or ambiguous prompts
Lack of specificity forces the model to fill gaps with speculative content that may not be grounded in real facts.
Overly long conversations
As prompt history grows, especially without proper context management, models lose track and invent plausible answers.
Missing knowledge
When a model lacks reliable training data on a topic, it may produce content that appears accurate but is fabricated.
Leading or biased prompts
Inputs that suggest a specific answer can nudge the model into confirming something untrue to match expectations.
Interrupted context due to connection issues
Especially with browser-based tools, a brief loss of session data can cause the model to generate off-track or contradictory outputs.
Over-optimisation for confidence
Most systems are trained to sound fluent and assertive. Saying ‘I don’t know’ is statistically rare unless explicitly prompted.
Each of these cases stems from a single truth. Language models are not fact-checkers. They are word predictors. And prediction, without verification, invites fabrication.
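Some of these triggers can be softened at the prompt level. Below is a minimal sketch assuming the OpenAI Python SDK and an API key in the environment; the model name, system prompt, and question are illustrative, not a recipe:

```python
# Minimal sketch: a system prompt that permits abstention instead of
# rewarding confident guessing. Assumes the openai package is installed
# and OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Answer only from facts you are confident about. "
    "If you are unsure, or the question needs sources you do not have, "
    "reply exactly: I don't know."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Which party proposed policy X in 2023?"},
    ],
    temperature=0,  # lower randomness reduces speculative completions
)
print(response.choices[0].message.content)
```

Explicitly allowing ‘I don’t know’ and lowering the temperature reduce speculative answers, but they do not change the underlying prediction mechanism.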
The cost of trust in flawed systems
Hallucinations become more dangerous not when they happen, but when they are believed.
Users may not question the output of an AI system if it appears polished, grammatically sound, and well-structured. This perceived credibility can lead to real consequences, including legal documents based on invented cases, medical advice referencing non-existent studies, or voters misled by political misinformation.
In low-stakes scenarios, hallucinations may lead to minor confusion. In high-stakes contexts, the same dynamic can result in public harm or institutional breakdown. Once generated, an AI hallucination can be amplified across platforms, indexed by search engines, and cited in real documents. At that point, it becomes a synthetic fact.
Can hallucinations be fixed?
Some efforts are underway to reduce hallucination rates. Retrieval-augmented generation (RAG), fine-tuning on verified datasets, and human-in-the-loop moderation can improve reliability. Still, no method has eliminated hallucinations.
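As a rough sketch of the RAG idea, the snippet below uses a TF-IDF retriever from scikit-learn in place of a production vector store; the documents and question are invented for illustration:

```python
# Minimal retrieval-augmented generation (RAG) sketch: ground the prompt in
# retrieved text instead of relying on the model's parametric memory.
# TF-IDF stands in for a production vector store; the documents are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The GDPR took effect on 25 May 2018 across the EU.",
    "Sweden's privacy regulator is the IMY.",
    "RAG systems retrieve source text before generating an answer.",
]

question = "When did the GDPR take effect?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([question])

# Pick the document most similar to the question.
best = cosine_similarity(query_vector, doc_vectors).argmax()
context = documents[best]

# The retrieved passage is injected into the prompt, so the model can cite
# real text rather than invent an answer.
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Grounding the prompt in retrieved text narrows the space for invention, though the model can still misread or embellish the context it is given.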
The deeper issue is how language models are rewarded, trained, and deployed. Without institutional norms prioritising verifiability and technical mechanisms that can flag uncertainty, hallucinations will remain embedded in the system.
Even the most capable AI models need a measure of humility. The ability to say ‘I don’t know’ is still one of the rarest responses in the current landscape.
Hallucinations won’t go away. Responsibility must step in.
Language models are not truth machines. They are prediction engines trained on vast and often messy human data. Their brilliance lies in fluency, but fluency can easily mask fabrication.
As AI tools become part of our legal, political, and civic infrastructure, institutions and users must approach them critically. Trust in AI should never be passive. And without active human oversight, hallucinations may not just mislead; they may define the outcome.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI stressed that conversations with AI often involve sensitive personal information, which should be treated with the same level of protection as communications with doctors or lawyers.
At the same time, it aims to grant adult users broad freedom to direct AI responses, provided safety boundaries are respected.
The situation changes for younger users. Teenagers are seen as requiring stricter safeguards, with safety taking priority over privacy and freedom. OpenAI is developing age-prediction tools to identify users under 18, and where uncertainty exists, it will assume the user is a teenager.
Teen users will face tighter restrictions on certain types of content. ChatGPT will be trained not to engage in flirtatious exchanges, and sensitive issues such as self-harm will be carefully managed.
If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities.
The new approach raises questions about privacy trade-offs, the accuracy of age prediction, and the handling of false classifications.
Critics may also question whether restrictions on creative content hinder expression. OpenAI acknowledges these tensions but argues the risks faced by young people online require stronger protections.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Advertising is heading for a split future. By 2030, brands will run hyper-personalised AI campaigns or embrace raw human storytelling. Everything in between will vanish.
AI-driven advertising will go far beyond text-to-image gimmicks. These adaptive systems will combine social trends, search habits, and first-party data to create millions of real-time ad variations.
The opposite approach will lean into imperfection, featuring unpolished TikToks, founder-shot iPhone videos, and content that feels authentic and alive. Audiences reward authenticity over carefully scripted, generic campaigns.
Mid-tier creative work, polished but forgettable, will be the first to fade away. AI can replicate it instantly, and audiences will scroll past it without noticing.
Marketers must now pick a side: feed AI with data and scale personalisation, or double down on community-driven, imperfect storytelling. The middle won’t survive.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The Cyberspace Administration of China (CAC) has proposed new rules requiring major online platforms to establish independent oversight committees focused on personal data protection. The draft regulation, released on 13 September 2025, is open for public comment until 12 October 2025.
Under the proposal, platforms with large user bases and complex operations must form committees of at least seven members, two-thirds of whom must be external experts without ties to the company. These experts must have at least three years of experience in data security and be well-versed in relevant laws and standards.
The committees will oversee sensitive data handling, cross-border transfers, security incidents, and regulatory compliance. They are also tasked with maintaining open communication channels with users about data concerns.
If a platform fails to act and offers unsatisfactory reasons, the issue can be escalated to provincial regulators in China.
The CAC says the move aims to enhance transparency and accountability by involving independent experts in monitoring and flagging high-risk data practices.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The latest update to Google’s Gemini has sparked a social media craze by allowing users to transform 2D photos into lifelike 3D figurines. The feature, part of Gemini 2.5 Flash Image, has quickly become the standout trend from the update.
More than a photo-editing tool, Gemini now helps users turn selfies, portraits, and pet photos into stylised statuettes. The images resemble collectable vinyl or resin figures, with smooth finishes and polished detailing.
The digital figurine trend blends personalisation with creativity, allowing users to reimagine themselves or loved ones as miniature display pieces. The playful results have been widely shared across platforms, driving renewed engagement with Google’s AI suite.
The figurine generator also complements Gemini’s other creative functions, such as image combination and style transformation, which allow users to experiment with entirely new aesthetics. Together, these tools extend Gemini’s appeal beyond simple photo correction.
While other platforms have offered 3D effects, Gemini’s version produces highly polished results in seconds, democratising what was once a niche 3D modelling skill. For many, it is the most accessible way to turn memories into digital art.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Ghana has launched the National Privacy Awareness Campaign, a year-long initiative to strengthen citizens’ privacy rights and build public trust in the country’s expanding digital ecosystem.
Unveiled by Deputy Minister Mohammed Adams Sukparu, the campaign emphasises that data protection is not just a legal requirement but essential to innovation, digital participation, and Ghana’s goal of becoming Africa’s AI hub.
The campaign will run from September 2025 to September 2026 across all 16 regions, using English and key local languages to promote widespread awareness.
The initiative includes the inauguration of the Ghana Association of Privacy Professionals (GAPP) and recognition of new Certified Data Protection Officers, many trained through the One Million Coders Programme.
Officials stressed that effective data governance requires government, private sector, civil society, and media collaboration. The Data Protection Commission reaffirmed its role in protecting privacy while noting ongoing challenges such as limited awareness and skills gaps.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta is set to unveil its first pair of smart glasses with a built-in display at its annual Connect conference in California.
Expected to be called Celeste, the glasses will debut at around $800 and feature a small digital display in the right lens for notifications. Analysts say the higher price point could limit adoption compared with Meta’s Ray-Ban line, which starts at $299.
Alongside the new glasses, Meta is also expected to launch its first wristband for hand-gesture control and an updated Ray-Ban line with better cameras, battery life and AI features. Developers will gain access to a new software kit to build device apps.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Babbel’s chief executive, Tim Allen, said the aim is not instant fluency but helping learners move from first words to confident conversations.
Called Babbel Speak, the AI feature guides users through 28 real-life scenarios, such as ordering coffee or describing the weather. It provides personalised feedback and uses a calming design with animations to ease anxiety while learning.
The trainer is available in open beta on the App Store and Play Store for English, Spanish, French, Italian, and German.
Subscribers can try it as part of Babbel’s standard plans, which start at $107.40 per year, with a lifetime option also offered.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI tools are increasingly reshaping how people search online, with large language models like ChatGPT drawing millions away from traditional engines.
Montreal-based lawyer and consultant Anja-Sara Lahady says she now turns to ChatGPT instead of Google for everyday tasks such as meal ideas, interior decoration tips and drafting low-risk emails. She describes it as a second assistant rather than a replacement for legal reasoning.
ChatGPT’s weekly user base has surged to around 800 million, double the figure reported earlier in 2025. Data shows that nearly 6% of desktop searches are already directed to language models, compared with barely half that rate a year ago.
Academics such as Professor Feng Li argue that users favour AI tools because they reduce cognitive effort by providing clear summaries instead of multiple links. However, he warns that verification remains essential due to factual errors.
Google insists its search activity continues to expand, supported by AI Overviews and AI Mode, which offer more conversational and tailored answers.
Yet, testimony in a US antitrust case revealed that Google searches on Apple devices via Safari declined for the first time in two decades, underlining the competitive pressure from AI.
The rise of language models is also forcing a shift in digital marketing. Agencies report that LLMs highlight trusted websites, press releases and established media rather than social media content.
This change may influence consumer habits, with evidence suggesting that referrals from AI systems often lead to higher-quality sales conversions. For many users, AI now represents a faster and more personal route to decisions on products, travel or professional tasks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!