Scientists have unveiled an AI tool capable of predicting the risk of developing over 1,000 medical conditions. Published in Nature, the model can forecast certain cancers, heart attacks, and other diseases more than a decade in advance.
Developed by the German Cancer Research Centre (DKFZ), the European Molecular Biology Laboratory (EMBL), and the University of Copenhagen, the model utilises anonymised health data from the UK and Denmark. It tracks the order and timing of medical events to spot patterns that lead to serious illness.
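As a rough illustration of that idea (the data structure and diagnosis codes below are invented for this sketch, not taken from the study): a patient's record can be treated as a time-ordered sequence of diagnoses, and a sequence model learns which patterns tend to precede serious illness.

```python
from dataclasses import dataclass

# Invented illustration of the kind of input a medical sequence model could use:
# a patient's history as time-ordered (age, diagnosis-code) events.
@dataclass
class MedicalEvent:
    age_in_years: float
    diagnosis_code: str  # e.g. an ICD-style code

patient_history = [
    MedicalEvent(42.1, "E11"),  # type 2 diabetes
    MedicalEvent(45.6, "I10"),  # essential hypertension
    MedicalEvent(49.3, "I25"),  # chronic ischaemic heart disease
]

# A trained model would score possible future events given this ordered history,
# producing risk estimates (like a weather forecast) rather than certainties.
```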
Researchers said the tool is exceptionally accurate for diseases with consistent progression, including some cancers, diabetes, heart attacks, and septicaemia. Its predictions work like a weather forecast, indicating higher risk rather than certainty.
The model is less reliable for unpredictable conditions such as mental health disorders, infectious diseases, or pregnancy complications. It is more accurate for near-term forecasts than for those decades ahead.
Though not yet ready for clinical use, the system could help doctors identify high-risk patients earlier and enable more personalised, preventive healthcare strategies. Researchers say more work is needed to ensure the tool works for diverse populations.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Ireland has designated 15 authorities to monitor compliance with the EU’s AI Act, making it one of the first EU countries fully ready to enforce the new rules. The AI Act regulates AI systems according to their risk to society and began phasing in last year.
Governments had until 2 August to notify the European Commission of their appointed market surveillance authorities. In Ireland, these include the Central Bank, Coimisiún na Meán, the Data Protection Commission, the Competition and Consumer Protection Commission, and the Health and Safety Authority.
The country will also establish a National AI Office to act as the central coordinator for AI Act enforcement and to liaise with EU institutions. Where multiple authorities are involved, a single point of contact must be designated to ensure clear communication.
Ireland joins Cyprus, Latvia, Lithuania, Luxembourg, Slovenia, and Spain as countries that have appointed their contact points. The Commission has not yet published the complete list of authorities notified by member states.
Former Italian Prime Minister Mario Draghi has called for a pause in the rollout of the AI Act, citing risks and a lack of technical standards. The Commission has launched a consultation as part of its digital simplification package, which is due in December.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
NVIDIA and the UK are accelerating plans to build the nation’s AI infrastructure, positioning the country as a hub for AI innovation, jobs and research.
The partnership, announced by Prime Minister Keir Starmer and NVIDIA CEO Jensen Huang earlier in the year, has already resulted in commitments worth up to £11 billion.
The rollout includes AI factories equipped with 120,000 NVIDIA Blackwell GPUs across UK data centres, supporting projects such as OpenAI’s Stargate UK.
NVIDIA partner Nscale will host 60,000 of these GPUs domestically while expanding its global capacity to 300,000. Microsoft, CoreWeave and other partners are also investing in advanced supercomputing facilities, with new projects announced in England and Scotland.
NVIDIA is also working with Oxford Quantum Circuits and other research institutions to integrate AI with quantum computing.
Universities in Edinburgh and Oxford are advancing GPU-driven quantum error correction and AI-controlled quantum hardware, highlighting the UK’s growing role in cutting-edge science.
To prepare the workforce, NVIDIA has joined forces with techUK and QA to provide training programmes and AI skills development.
The government has framed the initiative as a foundation for economic resilience, job creation and sovereign AI capability, aiming to place Britain at the forefront of the AI industrial revolution.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Zuckerberg’s Meta has unveiled a new generation of smart glasses powered by AI at its annual Meta Connect conference in California. Working with Ray-Ban and Oakley, the company introduced devices including the Meta Ray-Ban Display and the Oakley Meta Vanguard.
These glasses are designed to bring the Meta AI assistant into daily use instead of being confined to phones or computers.
The Ray-Ban Display comes with a colour lens screen for video calls and messaging and a 12-megapixel camera, and will sell for $799. It can be paired with a neural wristband that enables tasks through hand gestures.
Meta also presented $499 Oakley Vanguard glasses aimed at sports fans and launched a second generation of its Ray-Ban Meta glasses at $379. Around two million smart glasses have been sold since Meta entered the market in 2023.
Analysts see the glasses as a more practical way of introducing AI to everyday life than the firm’s costly Metaverse project. Yet many caution that Meta must prove the benefits outweigh the price.
Chief executive Mark Zuckerberg described the technology as a scientific breakthrough. He said it forms part of Meta’s vast AI investment programme, which includes massive data centres and research into artificial superintelligence.
The launch came as activists protested outside Meta’s New York headquarters, accusing the company of neglecting children’s safety. Former safety researchers also told the US Senate that Meta ignored evidence of harm caused by its VR products, claims the company has strongly denied.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Hong Kong will establish a new team to advance the use of AI across government departments, Chief Executive John Lee confirmed during his 2025 Policy Address.
The AI Efficacy Enhancement Team, led by Deputy Chief Secretary Warner Cheuk, will coordinate reforms to modernise outdated processes and promote efficiency.
Lee said his administration would focus on safe ‘AI+ development’, applying the technology in public services and encouraging adoption across different sectors instead of relying on traditional methods.
He added that Hong Kong had the potential to grow into a global hub for AI and would treat the field as a core industry for the city’s economic future.
Examples of AI adoption are already visible.
The government’s 1823 enquiry hotline uses voice recognition to cut response times by 30 per cent, while the Census and Statistics Department applies AI models to trade data and company reports, reducing manual checks by 40 per cent and improving accuracy.
Authorities expect upcoming censuses in 2026 and 2031 to save about $680 million through AI and data science technologies instead of conventional manpower-heavy methods.
The announcement comes shortly after China unveiled its national AI policy blueprint, which seeks widespread integration of the technology in research, governance and industry, with a target of 90 per cent prevalence by 2030.
Hong Kong’s approach is being positioned as part of a wider push for technological leadership in the region.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The World Economic Forum (WEF) has published an article on using trade policy to build a fairer digital economy. Digitally delivered services now make up over half of global services exports, with AI investment estimated at $252 billion in 2024. Countries from Kenya to the UAE are positioning themselves as digital hubs, but job quality still lags.
Millions of platform workers face volatile pay, lack of contracts, and no access to social protections. In Kenya alone, 1.9 million people rely on digital work yet face algorithm-driven pay systems and sudden account deactivations. India and the Philippines show similar patterns.
AI threatens to automate lower-skilled tasks such as data annotation and moderation, deepening insecurity in sectors where many developing countries have found a competitive edge. Ethical standards exist but have little impact without enforcement or supportive regulation.
Countries are experimenting with reforms: Singapore now mandates injury compensation and retirement savings for platform workers, while the Rider Law in Spain reclassifies food couriers as employees. Yet overly strict regulation risks eroding the flexibility that attracts youth and caregivers to gig work.
Trade agreements, such as the AfCFTA and the Kenya–EU pact, could embed labour protections in digital markets. Coordinated policies and tripartite dialogue are essential to ensure the digital economy delivers growth, fairness, and dignity for workers.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Bracknell and Wokingham College has confirmed a cyberattack that compromised data collected for Disclosure and Barring Service (DBS) checks. The breach affects data used by Activate Learning and other institutions, including names, dates of birth, National Insurance numbers, and passport details.
Access Personal Checking Services (APCS) was alerted by supplier Intradev on August 17 that its systems had been accessed without authorisation. While payment card details and criminal conviction records were not compromised, data submitted between December 2024 and May 8, 2025, was copied.
APCS stated that its own networks and those of Activate Learning were not breached. It is contacting only the data controllers for which a breach has been confirmed and has advised that its services can continue to be used safely.
Activate Learning reported the incident to the Information Commissioner’s Office following a risk assessment. APCS is still investigating the full scope of the breach and has pledged to keep affected institutions and individuals informed as more information becomes available.
Individuals have been advised to monitor their financial statements closely, be wary of phishing emails, and keep security measures such as passwords and two-factor authentication up to date. Activate Learning emphasised the importance of staying vigilant to minimise risks.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The Indonesia Investment Authority (INA), the country’s sovereign wealth fund, is sharpening its focus on digital infrastructure, healthcare and renewable energy as it seeks to attract foreign partners and strengthen national development.
The fund, created in 2021 with $5 billion in state capital, now manages assets worth around $10 billion and is expanding its scope beyond equity into hybrid capital and private credit.
Chief investment officer Christopher Ganis said data centres and supporting infrastructure, such as sub-sea cables, were key priorities as the government emphasises data independence and resilience.
INA has already teamed up with Singapore-based Granite Asia to invest over $1.2 billion in Indonesia’s technology and AI ecosystem, including a new data centre campus in Batam. Ganis added that AI would be applied first in healthcare instead of rushing into broader adoption.
Renewables also remain central to INA’s strategy, with its partnership alongside Abu Dhabi’s Masdar Clean Energy in Pertamina Geothermal Energy cited as a strong performer.
Ganis said Asia’s reliance on bank financing highlights the need for INA’s support in cross-border growth, since domestic banks cannot always facilitate overseas expansion.
Despite growing global ambitions, INA will prioritise projects directly linked to Indonesia. Ganis stressed that it must deliver benefits at home instead of directing capital into ventures without a clear link to the country’s future.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Oxford Quantum Circuits (OQC) and Digital Realty have launched the first quantum-AI data centre in New York City at the JFK10 facility, powered by Nvidia GH200 Grace Hopper Superchips. The project combines superconducting quantum computers with AI supercomputing under one roof.
OQC’s GENESIS quantum computer is the first to be deployed in a New York data centre, designed to support hybrid workloads and enterprise adoption. Future GENESIS systems will ship with Nvidia accelerated computing and CUDA-Q integration as standard.
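For readers unfamiliar with CUDA-Q, the snippet below is a generic, minimal example of the programming model it provides (a simple two-qubit Bell-state circuit sampled from Python). It illustrates the toolkit in general terms and is not a description of OQC’s or Digital Realty’s actual deployment.

```python
# Generic CUDA-Q example (Bell state), illustrative only.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)
    h(qubits[0])                  # put the first qubit in superposition
    x.ctrl(qubits[0], qubits[1])  # entangle the two qubits
    mz(qubits)                    # measure both qubits

# Sample the circuit; on a hybrid system this could target GPU simulators or quantum hardware.
counts = cudaq.sample(bell, shots_count=1000)
print(counts)  # expected to be dominated by '00' and '11' outcomes
```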
OQC CEO Gerald Mullally said the centre will drive the AI revolution securely and at scale, strengthening the UK–US technology alliance. Digital Realty CEO Andy Power called it a milestone for making quantum-AI accessible to enterprises and governments.
UK Science Minister Patrick Vallance highlighted the £212 billion economic potential of quantum by 2045, citing applications from drug discovery to clean energy. He said the launch puts British innovation at the heart of next-generation computing.
The centre, embedded in Digital Realty’s PlatformDIGITAL, will support applications in finance, security, and AI, including quantum machine learning and accelerated model training. OQC Chair Jack Boyer said it demonstrates UK–US collaboration in leading frontier technologies.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast amounts of data and statistical prediction, now generate text that can mirror human speech, mimic tone, and simulate expertise, but also produce convincing hallucinations that blur the line between fact and fiction.
From summarising policy to drafting contracts and responding to customer queries, these tools are becoming embedded across industries, governments, and education systems.
As their capabilities grow, so does the underlying problem that many still underestimate. These systems frequently produce convincing but entirely false information. Often referred to as ‘AI hallucinations’, such factual distortions pose significant risks, especially when users trust outputs without questioning their validity.
Once deployed in high-stakes environments, from courts to political arenas, the line between generative power and generative failure becomes more challenging to detect and more dangerous to ignore.
When facts blur into fiction
AI hallucinations are not simply errors. They are statements delivered with confidence and presented as fact, even though they rest on nothing more than probability. Language models are designed to generate the most likely next word, not the correct one. That difference may be subtle in casual settings, but it becomes critical in fields like law, healthcare, or media.
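To make that distinction concrete, here is a minimal sketch in Python. The candidate continuations and their probabilities are invented for illustration: greedy decoding simply picks the highest-scoring continuation, and nothing in that step checks whether the result is true.

```python
# Minimal sketch of next-token prediction: pick the most probable continuation.
# The candidates and probabilities below are invented for illustration.
candidate_continuations = {
    "in 2019": 0.46,   # fluent and plausible, but not necessarily true
    "in 2021": 0.31,   # the (hypothetically) correct answer
    "recently": 0.23,
}

prompt = "The treaty was signed"

# Greedy decoding: choose whichever continuation the model scores highest.
next_tokens = max(candidate_continuations, key=candidate_continuations.get)
print(f"{prompt} {next_tokens}")  # -> "The treaty was signed in 2019"
# No part of this step verifies the claim against reality.
```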
One such example emerged when an AI chatbot misrepresented political programmes in the Netherlands, falsely attributing policy statements about Ukraine to the wrong party. The error spread misinformation and triggered official concern. The chatbot had no malicious intent, yet its hallucination shaped public discourse.
Mistakes like these often pass unnoticed because the tone feels authoritative. The model sounds right, and that is the danger.
Why large language models hallucinate
Hallucinations are not bugs in the system. They are a direct consequence of how language models are built. Trained to complete text based on patterns, these systems have no genuine understanding of the world, no memory of ‘truth’, and no internal model of fact.
A recent study reveals that even the way models are tested may contribute to hallucinations. Instead of rewarding caution or encouraging honesty, current evaluation frameworks favour responses that appear complete and confident, even when inaccurate. The more assertive the lie, the better it scores.
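A toy illustration of that incentive, under the assumption of a benchmark that awards one point for a correct answer and nothing for a wrong answer or for ‘I don’t know’: as long as a guess has any chance of being right, guessing scores better than abstaining, so confident fabrication is never penalised.

```python
# Toy expected-score comparison under an assumed benchmark that scores
# 1 for a correct answer and 0 for anything else (wrong or "I don't know").
# The 30% success rate is an arbitrary assumption for illustration.
p_correct_if_guessing = 0.30

expected_score_guess = p_correct_if_guessing * 1 + (1 - p_correct_if_guessing) * 0
expected_score_abstain = 0.0  # "I don't know" earns nothing

print(expected_score_guess, expected_score_abstain)   # 0.3 vs 0.0
print(expected_score_guess > expected_score_abstain)  # True: guessing is always rewarded
```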
Alongside these structural flaws, real-world use reveals several recurring triggers. The most frequent causes of AI hallucinations include:
Vague or ambiguous prompts
Lack of specificity forces the model to fill gaps with speculative content that may not be grounded in real facts.
Overly long conversations
As prompt history grows, especially without proper context management, models lose track and invent plausible answers.
Missing knowledge
When a model lacks reliable training data on a topic, it may produce content that appears accurate but is fabricated.
Leading or biased prompts
Inputs that suggest a specific answer can nudge the model into confirming something untrue to match expectations.
Interrupted context due to connection issues
Especially with browser-based tools, a brief loss of session data can cause the model to generate off-track or contradictory outputs.
Over-optimisation for confidence
Most systems are trained to sound fluent and assertive. Saying ‘I don’t know’ is statistically rare unless explicitly prompted.
Each of these cases stems from a single truth. Language models are not fact-checkers. They are word predictors. And prediction, without verification, invites fabrication.
The cost of trust in flawed systems
Hallucinations become more dangerous not when they happen, but when they are believed.
Users may not question the output of an AI system if it appears polished, grammatically sound, and well-structured. This perceived credibility can lead to real consequences, including legal documents based on invented cases, medical advice referencing non-existent studies, or voters misled by political misinformation.
In low-stakes scenarios, hallucinations may lead to minor confusion. In high-stakes contexts, the same dynamic can result in public harm or institutional breakdown. Once generated, an AI hallucination can be amplified across platforms, indexed by search engines, and cited in real documents. At that point, it becomes a synthetic fact.
Can hallucinations be fixed?
Some efforts are underway to reduce hallucination rates. Retrieval-augmented generation (RAG), fine-tuning on verified datasets, and human-in-the-loop moderation can improve reliability. Still, no method has eliminated hallucinations.
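As a minimal sketch of the retrieval-augmented idea (the helpers `search_documents` and `call_llm` below are hypothetical placeholders, not any specific library’s API): the model is instructed to answer only from retrieved, verified passages, which narrows, without eliminating, the room for fabrication.

```python
# Hedged sketch of retrieval-augmented generation (RAG).
# `search_documents` and `call_llm` are hypothetical placeholders,
# not a real library's API.

def search_documents(query: str, top_k: int = 3) -> list[str]:
    """Placeholder: return the top_k most relevant passages from a verified corpus."""
    return ["(retrieved passage 1)", "(retrieved passage 2)", "(retrieved passage 3)"][:top_k]

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a language model and return its reply."""
    return "I don't know"

def answer_with_rag(question: str) -> str:
    passages = search_documents(question)
    context = "\n\n".join(passages)
    prompt = (
        "Answer using ONLY the sources below. "
        "If the sources do not contain the answer, reply 'I don't know'.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer_with_rag("When did the regulation enter into force?"))
```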
The deeper issue is how language models are rewarded, trained, and deployed. Without institutional norms prioritising verifiability and technical mechanisms that can flag uncertainty, hallucinations will remain embedded in the system.
Even the most capable AI models need humility built in. The ability to say ‘I don’t know’ remains one of the rarest responses in the current landscape.
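One crude way to build in that humility, sketched below under the assumption that the model exposes per-token log-probabilities (as many APIs do): return ‘I don’t know’ whenever the average confidence of an answer falls under a chosen threshold. The threshold itself is an arbitrary assumption.

```python
import math

# Crude confidence gate: assume the model exposes a log-probability for each
# generated token. If average confidence is low, abstain instead of answering.
# The threshold below is an arbitrary assumption for illustration.
CONFIDENCE_THRESHOLD = 0.75

def gated_answer(answer: str, token_logprobs: list[float]) -> str:
    avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    if avg_prob < CONFIDENCE_THRESHOLD:
        return "I don't know."
    return answer

# Made-up log-probabilities for a shaky answer:
print(gated_answer("The law passed in 2019.", [-0.9, -1.2, -0.4, -1.6]))  # -> "I don't know."
```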
Hallucinations won’t go away. Responsibility must step in.
Language models are not truth machines. They are prediction engines trained on vast and often messy human data. Their brilliance lies in fluency, but fluency can easily mask fabrication.
As AI tools become part of our legal, political, and civic infrastructure, institutions and users must approach them critically. Trust in AI should never be passive. And without active human oversight, hallucinations may not just mislead; they may define the outcome.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!