Oxford Quantum Circuits (OQC) and Digital Realty have launched the first quantum-AI data centre in New York City at the JFK10 facility, powered by Nvidia GH200 Grace Hopper Superchips. The project combines superconducting quantum computers with AI supercomputing under one roof.
OQC’s GENESIS quantum computer is the first to be deployed in a New York data centre, designed to support hybrid workloads and enterprise adoption. Future GENESIS systems will ship with Nvidia accelerated computing and CUDA-Q integration as standard.
OQC CEO Gerald Mullally said the centre will drive the AI revolution securely and at scale, strengthening the UK–US technology alliance. Digital Realty CEO Andy Power called it a milestone for making quantum-AI accessible to enterprises and governments.
UK Science Minister Patrick Vallance highlighted the £212 billion economic potential of quantum by 2045, citing applications from drug discovery to clean energy. He said the launch puts British innovation at the heart of next-generation computing.
The centre, embedded in Digital Realty’s PlatformDIGITAL, will support applications in finance, security, and AI, including quantum machine learning and accelerated model training. OQC Chair Jack Boyer said it demonstrates UK–US collaboration in leading frontier technologies.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast amounts of data and statistical prediction, now generate text that can mirror human speech, mimic tone, and simulate expertise, but also produce convincing hallucinations that blur the line between fact and fiction.
From summarising policy to drafting contracts and responding to customer queries, these tools are becoming embedded across industries, governments, and education systems.
As their capabilities grow, so does the underlying problem that many still underestimate. These systems frequently produce convincing but entirely false information. Often referred to as ‘AI hallucinations’, such factual distortions pose significant risks, especially when users trust outputs without questioning their validity.
Once deployed in high-stakes environments, from courts to political arenas, the line between generative power and generative failure becomes more challenging to detect and more dangerous to ignore.
When facts blur into fiction
AI hallucinations are not simply errors. They are confident statements presented as fact, generated from probability rather than knowledge. Language models are designed to produce the most likely next word, not the correct one. That difference may be subtle in casual settings, but it becomes critical in fields like law, healthcare, or media.
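To make that concrete, here is a minimal sketch of greedy decoding, with an invented vocabulary and invented probabilities (the numbers are illustrative, not drawn from any real model). The loop simply emits the highest-probability token; nothing in it checks truth:

```python
# Minimal sketch (hypothetical vocabulary and invented logits): a language
# model scores candidate next tokens and emits the most likely one.
# Nothing in this procedure checks whether the output is true.
import math

def softmax(logits):
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Toy logits for the prompt "The capital of Australia is" -- invented numbers.
logits = {"Sydney": 3.1, "Canberra": 2.8, "Melbourne": 1.2}
probs = softmax(logits)

# Greedy decoding picks the highest-probability token, not the correct one.
next_token = max(probs, key=probs.get)
print(next_token, probs[next_token])  # -> "Sydney", even though Canberra is correct
```

Sampling strategies vary, but the objective is the same: plausibility, not accuracy.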
One such example emerged when an AI chatbot misrepresented political programmes in the Netherlands, falsely attributing policy statements about Ukraine to the wrong party. The error spread misinformation and triggered official concern. The chatbot had no malicious intent, yet its hallucination shaped public discourse.
Mistakes like these often pass unnoticed because the tone feels authoritative. The model sounds right, and that is the danger.
Why large language models hallucinate
Hallucinations are not bugs in the system. They are a direct consequence of how language models are built. Trained to complete text based on patterns, these systems have no grounded understanding of the world, no memory of ‘truth’, and no internal model of fact.
A recent study reveals that even the way models are tested may contribute to hallucinations. Instead of rewarding caution or encouraging honesty, current evaluation frameworks favour responses that appear complete and confident, even when inaccurate. The more assertive the lie, the better it scores.
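A toy illustration of that incentive, assuming a simple exact-match benchmark (the scoring rule and examples are hypothetical): under accuracy-only scoring, an honest ‘I don’t know’ earns exactly as little as a confident fabrication, so a model tuned against such benchmarks learns to guess.

```python
# Hypothetical scoring rule of an accuracy-style benchmark:
# 1 point for a correct answer, 0 otherwise. Under this rule, abstaining
# ("I don't know") can never outperform a confident guess.
def exact_match_score(answer: str, gold: str) -> float:
    return 1.0 if answer.strip().lower() == gold.lower() else 0.0

gold = "canberra"
print(exact_match_score("Canberra", gold))      # 1.0 -- correct guess
print(exact_match_score("Sydney", gold))        # 0.0 -- confident wrong guess
print(exact_match_score("I don't know", gold))  # 0.0 -- honest abstention scores no better
```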
Alongside these structural flaws, real-world use cases reveal additional causes. Here are the most frequent causes of AI hallucinations:
Vague or ambiguous prompts
Lack of specificity forces the model to fill gaps with speculative content that may not be grounded in real facts.
Overly long conversations
As prompt history grows, especially without proper context management, models lose track and invent plausible answers.
Missing knowledge
When a model lacks reliable training data on a topic, it may produce content that appears accurate but is fabricated.
Leading or biased prompts
Inputs that suggest a specific answer can nudge the model into confirming something untrue to match expectations.
Interrupted context due to connection issues
Especially with browser-based tools, a brief loss of session data can cause the model to generate off-track or contradictory outputs.
Over-optimisation for confidence
Most systems are trained to sound fluent and assertive. Saying ‘I don’t know’ is statistically rare unless explicitly prompted, as the sketch after this list shows.
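As a sketch of that last point, an explicit system instruction can license abstention. The example below uses the OpenAI Python SDK; the model name, prompt wording, and question are illustrative placeholders rather than a recommended setup:

```python
# Minimal sketch: explicitly permitting abstention via the system prompt.
# Model name, wording, and the (invented) question are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "If you are not confident an answer is factually correct, "
                    "reply exactly: I don't know."},
        {"role": "user",
         "content": "Which party proposed the 2031 Dutch housing levy?"},  # invented question
    ],
)
print(response.choices[0].message.content)
```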
Each of these cases stems from a single truth. Language models are not fact-checkers. They are word predictors. And prediction, without verification, invites fabrication.
The cost of trust in flawed systems
Hallucinations become more dangerous not when they happen, but when they are believed.
Users may not question the output of an AI system if it appears polished, grammatically sound, and well-structured. This perceived credibility can lead to real consequences, including legal documents based on invented cases, medical advice referencing non-existent studies, or voters misled by political misinformation.
In low-stakes scenarios, hallucinations may lead to minor confusion. In high-stakes contexts, the same dynamic can result in public harm or institutional breakdown. Once generated, an AI hallucination can be amplified across platforms, indexed by search engines, and cited in real documents. At that point, it becomes a synthetic fact.
Can hallucinations be fixed?
Some efforts are underway to reduce hallucination rates. Retrieval-augmented generation (RAG), fine-tuning on verified datasets, and human-in-the-loop moderation can improve reliability. Still, no method has eliminated hallucinations.
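For illustration, here is a minimal RAG-style sketch, with a toy corpus and naive keyword retrieval standing in for a real vector search (all text and names are placeholders). The key idea is that the model is asked to answer only from retrieved context and to abstain when the context is silent:

```python
# Minimal RAG sketch: retrieve supporting passages first, then ask the model
# to answer only from them. Corpus, scoring, and prompt are toy placeholders.
CORPUS = [
    "OQC's GENESIS quantum computer was deployed at Digital Realty's JFK10 facility.",
    "Retrieval-augmented generation grounds model answers in retrieved documents.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Naive keyword-overlap scoring stands in for a real vector search.
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda doc: len(q & set(doc.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, CORPUS))
    # Instructing the model to abstain when the context is silent reduces fabrication.
    return (f"Answer using ONLY the context below. If the context does not "
            f"contain the answer, say 'I don't know'.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

print(build_prompt("Where was GENESIS deployed?"))
```

Grounding answers in retrieved text narrows, but does not close, the gap between fluency and fact, which is why the prompt still has to authorise abstention.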
The deeper issue is how language models are rewarded, trained, and deployed. Without institutional norms prioritising verifiability and technical mechanisms that can flag uncertainty, hallucinations will remain embedded in the system.
Even the most capable AI models need a built-in capacity for humility. The ability to say ‘I don’t know’ remains one of the rarest responses in the current landscape.
Hallucinations won’t go away. Responsibility must step in.
Language models are not truth machines. They are prediction engines trained on vast and often messy human data. Their brilliance lies in fluency, but fluency can easily mask fabrication.
As AI tools become part of our legal, political, and civic infrastructure, institutions and users must approach them critically. Trust in AI should never be passive. And without active human oversight, hallucinations may not just mislead; they may define the outcome.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Australia has released its regulatory guidance for the incoming social media age restriction law, which takes effect on December 10. Users under 16 will be barred from holding accounts on most major platforms, including Instagram, TikTok, and Facebook.
The new guidance details what are considered ‘reasonable steps’ for compliance. Platforms must detect and remove underage accounts, communicating clearly with affected users. It remains uncertain whether removed accounts will have their content deleted or if they can be reactivated once the user turns 16.
Platforms are also expected to block attempts to re-register, including the use of VPNs or other workarounds. Companies are encouraged to implement a multi-step age verification process and provide users with a range of options, rather than relying solely on government-issued identification.
Blanket age verification won’t be required, nor will platforms need to store personal data from verification processes. Instead, companies must demonstrate effectiveness through system-level records. Existing data, such as an account’s creation date, may be used to estimate age.
Under-16s will still be able to view content without logging in, for example, watching YouTube videos in a browser. However, shared access to adult accounts on family devices could present enforcement challenges.
Communications Minister Anika Wells stated that there is ‘no excuse for non-compliance.’ Each platform must now develop its own strategy to meet the law’s requirements ahead of the fast-approaching deadline.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Cyberspace Administration of China (CAC) has proposed new rules requiring major online platforms to establish independent oversight committees focused on personal data protection. The draft regulation, released Friday, 13 September 2025, is open for public comment until 12 October 2025.
Under the proposal, platforms with large user bases and complex operations must form committees of at least seven members, two-thirds of whom must be external experts without ties to the company. These experts must have at least three years of experience in data security and be well-versed in relevant laws and standards.
The committees will oversee sensitive data handling, cross-border transfers, security incidents, and regulatory compliance. They are also tasked with maintaining open communication channels with users about data concerns.
If a platform fails to act on issues the committee raises, or offers unsatisfactory reasons for not doing so, the matter can be escalated to provincial regulators in China.
The CAC says the move aims to enhance transparency and accountability by involving independent experts in monitoring and flagging high-risk data practices.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Ghana has launched the National Privacy Awareness Campaign, a year-long initiative to strengthen citizens’ privacy rights and build public trust in the country’s expanding digital ecosystem.
Unveiled by Deputy Minister Mohammed Adams Sukparu, the campaign emphasises that data protection is not just a legal requirement but essential to innovation, digital participation, and Ghana’s goal of becoming Africa’s AI hub.
The campaign will run from September 2025 to September 2026 across all 16 regions, using English and key local languages to promote widespread awareness.
The initiative includes the inauguration of the Ghana Association of Privacy Professionals (GAPP) and recognition of new Certified Data Protection Officers, many trained through the One Million Coders Programme.
Officials stressed that effective data governance requires government, private sector, civil society, and media collaboration. The Data Protection Commission reaffirmed its role in protecting privacy while noting ongoing challenges such as limited awareness and skills gaps.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A small Japanese political party has announced plans to install an AI system as its leader following its founder’s resignation.
The Path to Rebirth party was created in January by Shinji Ishimaru, a former mayor who rose to prominence after placing second in the 2024 Tokyo gubernatorial election. He stepped down after the party failed to secure seats in this year’s upper house elections.
The AI would oversee internal decisions, such as distributing resources, but would not dictate members’ political activities. Okumura, who won an internal contest to succeed Ishimaru, will serve as the party’s nominal leader while supporting the AI’s development.
Despite attracting media attention, the party has faced heavy electoral defeats, with all 42 of its candidates losing in the June Tokyo assembly election and all 10 of its upper house candidates defeated in July.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new interdisciplinary study from Bielefeld University and other leading institutions explores why humans excel at adapting to new situations while AI systems often struggle. Researchers found humans generalise through abstraction and concepts, while AI relies on statistical or rule-based methods.
The study proposes a framework to align human and AI reasoning, defining what generalisation is, how it works, and how it can be assessed. Experts say differences in generalisation limit AI flexibility and stress the need for human-centred design in medicine, transport, and decision-making.
Researchers collaborated across more than 20 institutions, including Bielefeld, Bamberg, Amsterdam, and Oxford, under the SAIL project. The initiative aims to develop AI systems that are sustainable, transparent, and better able to support human values and decision-making.
Interdisciplinary insights may guide the responsible use of AI in human-AI teams, ensuring machines complement rather than disrupt human judgement.
The findings underline the importance of bridging cognitive science and AI research to foster more adaptable, trustworthy, and human-aligned AI systems capable of tackling complex, real-world challenges.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The UN Development Programme (UNDP) plans to launch a ‘Government Blockchain Academy’ next year to educate public sector officials on blockchain, AI, and other emerging technologies.
The initiative aims to help governments leverage tech for economic growth and sustainable development.
The academy will partner with the Exponential Science Foundation, a non-profit promoting blockchain and AI. Training will cover financial services, digital IDs, public procurement, smart contracts, and climate finance to help governments boost transparency, inclusion, and resilience.
UNDP officials highlighted that developing countries, including India, Pakistan, and Vietnam, are already among the leading adopters of crypto technology.
The academy will provide in-person and online courses, workshops, and forums to guide high-impact blockchain initiatives aligned with national priorities.
The programme follows last year’s UNDP blockchain academy, created in partnership with the Algorand Foundation, which trained over 22,000 staff members to support sustainable growth projects in participating countries.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Hong Kong Chief Executive John Lee is set to deliver his annual policy address on Wednesday, with the Northern Metropolis project expected to take centre stage.
The initiative aims to transform a sparsely populated area into a base for advanced industries and innovation, while reducing reliance on finance and real estate.
According to state-owned media, the government will ease financing rules to attract companies in AI, renewable energy and medical technology.
The urgency comes despite signs of recovery: Hong Kong’s economy grew at its fastest pace in over a year last quarter. Yet home prices continue to fall, unemployment has risen, and public finances remain stretched.
The administration is unlikely to offer sweeping property incentives, such as tax cuts or looser rules for mainland buyers, given fiscal constraints. Instead, it may revive the long-dormant Tenants Purchase Scheme, first launched in 1998, which allows public housing tenants to buy their flats at reduced prices.
Analysts say that without bold reforms, the housing market will stay under pressure as oversupply and weak sentiment weigh on values.
Hong Kong’s $7.2 trillion stock market could benefit if new listings and inflows are encouraged, especially as developers look to stimulus and lower mortgage rates to support sales.
However, with China’s economy also slowing, doubts remain over whether deeper integration and technology investments can provide a lasting boost.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
An AI avatar named Una will be presented at the UN pavilion during the 2025 World Expo in Osaka later this month as part of efforts to promote climate action.
The anime-inspired character, developed with support from the Japanese government, will use 3D hologram technology to engage visitors from 29 September to 4 October.
Una was launched online in May and can respond automatically in multiple languages, including English and Japanese. She was created under the Pacific Green Transformation Project, which supports renewable energy initiatives such as electric vehicles in Samoa and hydropower in Vanuatu.
Her role is to share stories of Pacific island nations facing the impacts of rising sea levels and raise awareness about climate change.
Kanni Wignaraja, UN assistant secretary-general and regional director for Asia and the Pacific, described Una as a strong voice for threatened communities. Influenced by Japanese manga and anime, she is designed to act like a cultural ambassador who connects Pacific struggles with Japanese audiences.
Pacific sea levels have risen by more than 15 centimetres in some regions over the past three decades, leading to flooding, crop damage and migration fears. The risks are existential for nations like Tuvalu, with an average elevation of just two metres.
The UN hopes Una will encourage the public to support renewable energy adoption and climate resilience in vulnerable regions.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!