AI tool predicts risk of over 1,000 diseases years ahead

Scientists have unveiled an AI tool capable of predicting the risk of developing over 1,000 medical conditions. Published in Nature, the model can forecast certain cancers, heart attacks, and other diseases more than a decade in advance.

Developed by the German Cancer Research Centre (DKFZ), the European Molecular Biology Laboratory (EMBL), and the University of Copenhagen, the model utilises anonymised health data from the UK and Denmark. It tracks the order and timing of medical events to spot patterns that lead to serious illness.

Researchers said the tool is exceptionally accurate for diseases with consistent progression, including some cancers, diabetes, heart attacks, and septicaemia. Its predictions work like a weather forecast, indicating higher risk rather than certainty.
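The published model's internals are not described here, but the core idea of scoring risk from a timed sequence of medical events can be sketched. Everything below is illustrative (the event names, weights, decay, and logistic squash are assumptions, not the Nature paper's method):

```python
import math
from datetime import date

# Hypothetical per-event weights for one target disease (illustrative only)
EVENT_WEIGHTS = {"hypertension": 0.8, "obesity": 0.5, "smoking": 1.1}

def risk_score(events, today, half_life_years=10.0):
    """events: list of (event_name, date) pairs.

    Each prior diagnosis contributes a weight, discounted by how long
    ago it occurred; the sum is squashed into a 0-1 'risk' score that,
    like a weather forecast, indicates likelihood rather than certainty.
    """
    score = 0.0
    for name, when in events:
        years_ago = (today - when).days / 365.25
        decay = 0.5 ** (years_ago / half_life_years)  # older events matter less
        score += EVENT_WEIGHTS.get(name, 0.0) * decay
    return 1.0 / (1.0 + math.exp(-(score - 2.0)))  # logistic squash to (0, 1)

history = [("obesity", date(2010, 3, 1)), ("hypertension", date(2018, 6, 15))]
print(round(risk_score(history, date(2025, 1, 1)), 3))
```

The ordering and timing of events change the score, which is the pattern-over-sequences idea the researchers describe, albeit in a far simpler form than their model.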

The model is less reliable for unpredictable conditions such as mental health disorders, infectious diseases, or pregnancy complications. It is more accurate for near-term forecasts than for those decades ahead.

Though not yet ready for clinical use, the system could help doctors identify high-risk patients earlier and enable more personalised, preventive healthcare strategies. Researchers say more work is needed to ensure the tool works for diverse populations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act enforcement gears up with 15 authorities named in Ireland

Ireland has designated 15 authorities to monitor compliance with the EU’s AI Act, making it one of the first EU countries fully ready to enforce the new rules. The AI Act regulates AI systems according to their risk to society and began phasing in last year.

Governments had until 2 August to notify the European Commission of their appointed market surveillance authorities. In Ireland, these include the Central Bank, Coimisiún na Meán, the Data Protection Commission, the Competition and Consumer Protection Commission, and the Health and Safety Authority.

The country will also establish a National AI Office to act as the central coordinator for AI Act enforcement and to liaise with EU institutions. Where multiple authorities are involved, a single point of contact must be designated to ensure clear communication.

Ireland joins Cyprus, Latvia, Lithuania, Luxembourg, Slovenia, and Spain as countries that have appointed their contact points. The Commission has not yet published the complete list of authorities notified by member states.

Former Italian Prime Minister Mario Draghi has called for a pause in the rollout of the AI Act, citing risks and a lack of technical standards. The Commission has launched a consultation as part of its digital simplification package, which will be implemented in December.


West London borough approves AI facial recognition CCTV rollout

Hammersmith and Fulham Council has approved a £3m upgrade to its CCTV network that will integrate facial recognition and AI across the west London borough.

The council, which operates over 2,000 cameras, intends to install live facial recognition technology at crime hotspots and link it to police databases for real-time identification.

Alongside the new cameras, 500 units will be equipped with AI tools to speed up video analysis, track vehicles, and provide retrospective searches. The plans also include the possible use of drones, pending approval from the Civil Aviation Authority.
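The council has not published how its retrospective search will work, but conceptually it is a time-and-attribute query over logged detections. A toy version, with invented record fields that are not the council's actual system:

```python
from datetime import datetime

# Toy detection log: each sighting records camera, number plate, timestamp.
# (Field names and data are invented for illustration.)
detections = [
    {"camera": "cam-12", "plate": "AB12 CDE", "ts": datetime(2025, 9, 1, 14, 5)},
    {"camera": "cam-07", "plate": "AB12 CDE", "ts": datetime(2025, 9, 1, 14, 40)},
    {"camera": "cam-12", "plate": "XY99 ZZZ", "ts": datetime(2025, 9, 2, 9, 0)},
]

def retrospective_search(logs, plate, start, end):
    """Return all sightings of a plate within a time window, ordered by time."""
    hits = [d for d in logs if d["plate"] == plate and start <= d["ts"] <= end]
    return sorted(hits, key=lambda d: d["ts"])

route = retrospective_search(detections, "AB12 CDE",
                             datetime(2025, 9, 1, 0, 0),
                             datetime(2025, 9, 1, 23, 59))
print([d["camera"] for d in route])  # cameras in visit order
```

A query like this reconstructs a vehicle's route after the fact, which is exactly the kind of retrospective movement-tracking that civil liberties groups cited in the article object to.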

Council leader Stephen Cowan said the technology will provide more substantial evidence in a criminal justice system he described as broken, arguing it will help secure convictions instead of leaving cases unresolved.

Civil liberties group Big Brother Watch condemned the project as mass surveillance without safeguards, warning of constant identity checks and retrospective monitoring of residents’ movements.

Some locals also voiced concern, saying the cameras address crime after it happens instead of preventing it. Others welcomed the move, believing it would deter offenders and reassure those who feel unsafe on the streets.

The Metropolitan Police currently operates one pilot site in Croydon, with findings expected later in the year, and the council says its rollout depends on continued police cooperation.


WEF urges trade policy shift to protect workers in digital economy

The World Economic Forum (WEF) has published an article on using trade policy to build a fairer digital economy. Digital services now make up over half of global exports, with AI investment projected at $252 billion in 2024. Countries from Kenya to the UAE are positioning themselves as digital hubs, but job quality still lags.

Millions of platform workers face volatile pay, lack of contracts, and no access to social protections. In Kenya alone, 1.9 million people rely on digital work yet face algorithm-driven pay systems and sudden account deactivations. India and the Philippines show similar patterns.

AI threatens to automate lower-skilled tasks such as data annotation and moderation, deepening insecurity in sectors where many developing countries have found a competitive edge. Ethical standards exist but have little impact without enforcement or supportive regulation.

Countries are experimenting with reforms: Singapore now mandates injury compensation and retirement savings for platform workers, while the Rider Law in Spain reclassifies food couriers as employees. Yet overly strict regulation risks eroding the flexibility that attracts youth and caregivers to gig work.

Trade agreements, such as the AfCFTA and the Kenya-EU pact, could embed labour protections in digital markets. Coordinated policies and tripartite dialogue are essential to ensure the digital economy delivers growth, fairness, and dignity for workers.


Prolonged JLR shutdown threatens UK export targets

Jaguar Land Rover (JLR) has confirmed that its production halt will continue until at least Wednesday, 24 September, as it works to recover from a major cyberattack that disrupted its IT systems and paralysed production at the end of August.

JLR stated that the extension was necessary because forensic investigations were ongoing and the controlled restart of operations was taking longer than anticipated. The company stressed that it was prioritising a safe and stable restart and pledged to keep staff, suppliers, and partners regularly updated.

Reports suggest recovery could take weeks, impacting production and sales channels for an extended period. Approximately 33,000 employees remain at home as factory and sales processes are not fully operational, resulting in estimated losses of £1 billion in revenue and £70 million in profits.

The shutdown also poses risks to the wider UK economy, as JLR represents roughly four percent of British exports. The incident has renewed calls for the Cyber Security and Resilience Bill, which aims to strengthen defences against digital threats to critical industries.

No official attribution has been made, but a group calling itself Scattered Lapsus$ Hunters has claimed responsibility. The group claims to have deployed ransomware and published screenshots of JLR’s internal SAP system, linking itself to extortion groups, including Scattered Spider, Lapsus$, and ShinyHunters.


Cyberattack compromises personal data used for DBS checks at UK college

Bracknell and Wokingham College has confirmed a cyberattack that compromised data collected for Disclosure and Barring Service (DBS) checks. The breach affects data used by Activate Learning and other institutions, including names, dates of birth, National Insurance numbers, and passport details.

Access Personal Checking Services (APCS) was alerted by supplier Intradev on 17 August that its systems had been accessed without authorisation. While payment card details and criminal conviction records were not compromised, data submitted between December 2024 and 8 May 2025 was copied.

APCS stated that its own networks and those of Activate Learning were not breached. The organisation is contacting only those data controllers where confirmed breaches have occurred and has advised that its services can continue to be used safely.

Activate Learning reported the incident to the Information Commissioner’s Office following a risk assessment. APCS is still investigating the full scope of the breach and has pledged to keep affected institutions and individuals informed as more information becomes available.

Individuals have been advised to monitor their financial statements closely, be wary of phishing emails, and keep security measures such as passwords and two-factor authentication up to date. Activate Learning emphasised the importance of staying vigilant to minimise risks.


Miljodata hack exposes data of nearly 15% of Swedish population

Swedish prosecutors have confirmed that a cyberattack on IT systems provider Miljodata exposed the personal data of 1.5 million people, nearly 15% of Sweden’s population. The attack occurred during the weekend of August 23–24.

Authorities said the stolen data has been leaked online and includes names, addresses, and contact details. Prosecutor Sandra Helgadottir said the group Datacarry has claimed responsibility, though no foreign state involvement is suspected.

Media in Sweden reported that the hackers demanded 1.5 bitcoin (around $170,000) to prevent the release of the data. Miljodata confirmed the information has now been published on the darknet.

The Swedish Authority for Privacy Protection has received over 250 breach notifications, with 164 municipalities and four regional authorities impacted. Employees in Gothenburg were among those affected, according to SVT.

Private companies, including Volvo, SAS, and GKN Aerospace, also reported compromised data. Investigators are working to identify the perpetrators as the breach’s scale continues to raise concerns nationwide.


Australia outlines guidelines for social media age ban

Australia has released its regulatory guidance for the incoming social media age restriction law, which takes effect on December 10. Users under 16 will be barred from holding accounts on most major platforms, including Instagram, TikTok, and Facebook.

The new guidance details what are considered ‘reasonable steps’ for compliance. Platforms must detect and remove underage accounts, communicating clearly with affected users. It remains uncertain whether removed accounts will have their content deleted or if they can be reactivated once the user turns 16.

Platforms are also expected to block attempts to re-register, including the use of VPNs or other workarounds. Companies are encouraged to implement a multi-step age verification process and provide users with a range of options, rather than relying solely on government-issued identification.

Blanket age verification won’t be required, nor will platforms need to store personal data from verification processes. Instead, companies must demonstrate effectiveness through system-level records. Existing data, such as an account’s creation date, may be used to estimate age.
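Using an account's creation date gives a lower bound on age: if the platform's minimum signup age was, say, 13 when the account was opened, the holder must now be at least 13 plus the account's age. A minimal sketch under that assumption (the 13-year figure and this logic are illustrative, not the regulator's rule):

```python
from datetime import date

def minimum_age_now(created: date, today: date, min_signup_age: int = 13) -> float:
    """Lower-bound estimate of a user's current age from account history."""
    account_years = (today - created).days / 365.25
    return min_signup_age + account_years

# An account created in mid-2015 implies its holder is at least ~23 by the
# December 2025 deadline, clearing a 16+ threshold with no new identity check.
est = minimum_age_now(date(2015, 6, 1), date(2025, 12, 10))
print(est >= 16)
```

This only ever proves someone is old enough, never that they are too young, which is why it would sit alongside other verification signals rather than replace them.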

Under-16s will still be able to view content without logging in, for example, watching YouTube videos in a browser. However, shared access to adult accounts on family devices could present enforcement challenges.

Communications Minister Anika Wells stated that there is ‘no excuse for non-compliance.’ Each platform must now develop its own strategy to meet the law’s requirements ahead of the fast-approaching deadline.


AI will kill middle-ground media, but raw content will thrive

Advertising is heading for a split future. By 2030, brands will run hyper-personalised AI campaigns or embrace raw human storytelling. Everything in between will vanish.

AI-driven advertising will go far beyond text-to-image gimmicks. These adaptive systems will combine social trends, search habits, and first-party data to create millions of real-time ad variations.

The opposite approach will lean into imperfection, featuring unpolished TikToks, founder-shot iPhone videos, and content that feels authentic and alive. Audiences reward authenticity over carefully scripted, generic campaigns.

Mid-tier creative work, polished but forgettable, will be the first to fade away. AI can replicate it instantly, and audiences will scroll past it without noticing.

Marketers must now pick a side: feed AI with data and scale personalisation, or double down on community-driven, imperfect storytelling. The middle won’t survive.


China proposes independent oversight committees to strengthen data protection

The Cyberspace Administration of China (CAC) has proposed new rules requiring major online platforms to establish independent oversight committees focused on personal data protection. The draft regulation, released Friday, 13 September 2025, is open for public comment until 12 October 2025.

Under the proposal, platforms with large user bases and complex operations must form committees of at least seven members, two-thirds of whom must be external experts without ties to the company. These experts must have at least three years of experience in data security and be well-versed in relevant laws and standards.

The committees will oversee sensitive data handling, cross-border transfers, security incidents, and regulatory compliance. They are also tasked with maintaining open communication channels with users about data concerns.

If a platform fails to act and offers unsatisfactory reasons, the issue can be escalated to provincial regulators in China.

The CAC says the move aims to enhance transparency and accountability by involving independent experts in monitoring and flagging high-risk data practices.
