AI in cardiology: 3D heart scan could cut waiting times

A new AI-powered heart test could significantly improve early detection of cardiovascular disease, especially in high-risk patients without symptoms.

Developed in Germany and evaluated in a UK study led by Dr Simon Rudland, the Cardisio test uses five electrodes (four on the chest, one on the back) to record the heart's electrical activity in three dimensions. Unlike a traditional 2D ECG, the method captures that extra spatial information and uses AI to analyse rhythm, structure, and blood flow.

The quick 10-minute test returns a colour-coded result: green (normal), amber (borderline), or red (high risk). The study, published in BJGP Open, tested 628 individuals and found a positive predictive value of 80% and a negative predictive value of 90.4%, with fewer than 2% of tests failing.
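For context, predictive values follow directly from a test's confusion matrix. The counts in the sketch below are hypothetical, chosen only to reproduce figures of the same order as those reported; they are not the published study data:

```python
# Hypothetical confusion-matrix counts (NOT the published study data),
# chosen only to illustrate how predictive values are computed.
true_pos = 80    # red/amber result, disease confirmed
false_pos = 20   # red/amber result, no disease found
true_neg = 470   # green result, no disease
false_neg = 50   # green result, disease later found

ppv = true_pos / (true_pos + false_pos)   # P(disease | positive test)
npv = true_neg / (true_neg + false_neg)   # P(healthy | negative test)

print(f"Positive predictive value: {ppv:.1%}")   # 80.0%
print(f"Negative predictive value: {npv:.1%}")   # 90.4%
```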

Dr Rudland called the findings ‘exciting,’ noting that the technology could streamline referrals, improve diagnosis in primary care, and reduce hospital waiting lists. He added that a pilot rollout may begin soon in Suffolk or north Essex, targeting high-risk women.

AI’s ability to process complex cardiac data far exceeds human capacity, making it a promising tool in preventative medicine. This research supports the NHS’s broader push to integrate AI for faster, smarter healthcare.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Is AI distorting our view of the Milky Way’s black hole?

A new AI model has created a fresh image of Sagittarius A*, the supermassive black hole at the centre of our galaxy, suggesting it is spinning close to its maximum speed.

The model was trained on noisy data from the Event Horizon Telescope, a globe-spanning network of radio telescopes, using information once dismissed due to atmospheric interference.

Researchers believe this AI-enhanced image shows the black hole’s rotational axis pointing towards Earth, offering potential insights into how radiation and matter behave near such cosmic giants.

By using data previously considered unusable, scientists hope to improve our understanding of black hole dynamics.
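Whether rescued data helps or hurts depends on how its uncertainty is modelled, which is the crux of the dispute below. A minimal numpy sketch (a toy estimator, nothing to do with the actual EHT pipeline) shows how over-trusting noisy, biased measurements skews a result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the dilemma: estimate one quantity (true value 1.0)
# from a clean data set plus a much noisier set carrying a systematic
# offset, standing in for observations once discarded as interference.
clean = rng.normal(1.0, 0.1, size=50)
noisy = rng.normal(1.5, 1.0, size=200)   # the 0.5 offset mimics a bias

values = np.concatenate([clean, noisy])

def weighted_mean(values, sigmas):
    w = 1.0 / sigmas**2                  # inverse-variance weighting
    return np.sum(w * values) / np.sum(w)

honest = weighted_mean(values, np.r_[np.full(50, 0.1), np.full(200, 1.0)])
naive = weighted_mean(values, np.r_[np.full(50, 0.1), np.full(200, 0.2)])

print(f"honest noise model:      {honest:.2f}")   # stays near 1.0
print(f"over-trusting noisy set: {naive:.2f}")    # pulled toward the bias
```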

However, not all physicists are confident in the results.

Nobel Prize-winning astrophysicist Reinhard Genzel has voiced concern over the reliability of models built on compromised data, stressing that AI should not be treated as a miracle fix. He warned that the new image might be distorted due to the poor quality of its underlying information.

The researchers plan to test their model against newer and more reliable data to address these concerns. Their goal is to refine the AI further and provide more accurate simulations of black holes in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake technology fuels new harassment risks

AI-generated media is reshaping workplace harassment in the US, with deepfakes used to impersonate colleagues and circulate fabricated explicit content. Recent studies found that, by 2023, almost all deepfakes circulating online were sexually explicit, most often targeting women.

Organisations risk liability under existing laws if deepfake incidents create hostile work environments. New legislation like the TAKE IT DOWN Act and Florida’s Brooke’s Law now mandates rapid removal of non-consensual intimate imagery.

Employers are also bracing for proposed rules requiring strict authentication of AI-generated evidence in legal proceedings. Industry experts advise an urgent review of harassment and acceptable use policies, clear incident response plans and targeted training for HR, legal and IT teams.

Protective measures include auditing insurance coverage for synthetic media claims and staying abreast of evolving state and federal regulations. Forward-looking employers already embed deepfake awareness into their harassment prevention and cybersecurity training to safeguard workplace dignity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

T-Mobile launches priority network for emergency services

T-Mobile is expanding its support for emergency response teams by combining 5G, AI and drone technologies to boost disaster recovery operations. Its T-Priority service, launched last year, offers dedicated network slices to ensure fast, low-latency data access during crises.
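The slicing itself lives in carrier infrastructure, but the scheduling principle is straightforward: emergency traffic gets its own high-priority lane and is served first. The toy Python sketch below illustrates that idea with a generic priority queue; it is not T-Mobile's implementation:

```python
import heapq

# Toy model of priority scheduling: lower number = higher priority.
PRIORITY = {"first_responder": 0, "consumer": 1}

queue = []
for seq, (tenant, packet) in enumerate([
    ("consumer", "video stream chunk"),
    ("first_responder", "body-cam upload"),
    ("consumer", "app sync"),
    ("first_responder", "mapping tiles"),
]):
    # seq breaks ties so packets of equal priority keep arrival order
    heapq.heappush(queue, (PRIORITY[tenant], seq, packet))

while queue:
    prio, _, packet = heapq.heappop(queue)
    print(f"transmit (priority {prio}): {packet}")
# Emergency traffic drains first: body-cam upload and mapping tiles
# are transmitted before any consumer packets.
```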

US first responders in disaster-hit regions like Southern California and North Carolina have already used the system to operate body cams, traffic monitoring tools and mapping systems. T-Mobile deployed hundreds of 5G routers and hotspot devices to aid efforts during the Palisades wildfire and recent hurricanes.

AI and drone technologies are key in reconnaissance, damage assessment and real-time communication. T-Mobile’s self-organising network adapts to changing conditions using live data, ensuring stable connectivity throughout emergency operations.

Public-private collaboration is central to the initiative, with T-Mobile working alongside FEMA, the Department of Defense and local emergency centres. The company has also signed a major deal to provide New York City with a dedicated public safety network.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK cyber agency warns AI will accelerate cyber threats by 2027

The UK’s National Cyber Security Centre has warned that integrating AI into national infrastructure creates a broader attack surface, raising concerns about an increased risk of cyber threats.

Its latest report outlines how AI may amplify the capabilities of threat actors, especially when it comes to exploiting known vulnerabilities more rapidly than ever before.

By 2027, AI-enabled tools are expected to significantly shorten the time between vulnerability disclosure and exploitation. This evolution could pose a serious challenge for defenders, particularly within critical systems.

The NCSC notes that the risk of advanced cyber attacks will likely escalate unless organisations can keep pace with so-called ‘frontier AI’.

The centre also predicts a growing ‘digital divide’ between organisations that adapt to AI-driven threats and those left behind. The divide could further endanger the overall cyber resilience of the UK. As a result, decisive action is being urged to close the gap and reduce future risks.

NCSC operations director Paul Chichester said AI is expanding attack surfaces, increasing the volume of threats, and speeding up malicious activity. He emphasised that while these dangers are real, AI can strengthen the UK’s cyber defences.

Organisations are encouraged to adopt robust security practices using resources like the Cyber Assessment Framework, the 10 Steps to Cyber Security, and the new AI Cyber Security Code of Practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Plumbing still safe as AI replaces office jobs, says AI pioneer

Nobel Prize-winning scientist Geoffrey Hinton, often called the ‘Godfather of AI,’ has warned that many intellectual jobs are at risk of being replaced by AI—while manual trades like plumbing may remain safe for years to come.

Speaking on the Diary of a CEO podcast, Hinton predicted that AI will eventually surpass human capabilities across most fields, but said it will take far longer to master physical skills. ‘A good bet would be to be a plumber,’ he noted, citing the complexity of physical manipulation as a barrier for AI.

Hinton, known for his pioneering work on neural networks, said ‘mundane intellectual labour’ would be among the first to go. ‘AI is just going to replace everybody,’ he said, naming paralegals and call centre workers as particularly vulnerable.

He added that while highly skilled roles or those in sectors with overwhelming demand—like healthcare—may endure, most jobs are unlikely to escape the wave of disruption. ‘Most jobs, I think, are not like that,’ he said, forecasting widespread upheaval in the labour market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT loses chess match to Atari 2600

ChatGPT was trounced in a chess match by a 1979 video game running on an Atari 2600 emulator. Citrix engineer Robert Caruso set up the match using Atari's Video Chess to test how the AI would perform against vintage gaming software.

The result was unexpectedly lopsided. ChatGPT confused rooks with bishops, lost track of piece positions and made repeated beginner mistakes, eventually asking for the match to be restarted. Even when standard chess notation was used, its performance did not improve.

Caruso described the 90-minute session as full of basic blunders, saying the AI would have been laughed out of a primary school chess club. His post highlighted the limitations of ChatGPT’s architecture, which is built for language understanding, not strategic board gameplay.
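That architectural point suggests the standard workaround: keep the board state in ordinary software and let the model only propose moves, rejecting any that are illegal. A minimal sketch using the python-chess library, with the model call stubbed out (the helper below is hypothetical, not OpenAI's API):

```python
import chess  # pip install python-chess

def ask_llm_for_move(board: chess.Board) -> str:
    """Stub standing in for a call to a language model.
    A real implementation would send board.fen() in a prompt."""
    return "e2e4"  # placeholder answer

board = chess.Board()
suggestion = ask_llm_for_move(board)

# The external board object, not the model, is the source of truth:
# hallucinated or illegal moves are rejected before they corrupt the game.
move = chess.Move.from_uci(suggestion)
if move in board.legal_moves:
    board.push(move)
    print(f"accepted {suggestion}:\n{board}")
else:
    print(f"rejected illegal move {suggestion}; re-prompt the model")
```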

While the experiment doesn’t mean ChatGPT is entirely useless at chess, it suggests users are better off discussing the game with the bot than challenging it. OpenAI has not yet responded to the light-hearted but telling critique.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn't fully caught up, new legislation such as the TAKE IT DOWN Act and Florida's Brooke's Law requires platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still don't mention synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both accused and accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that include digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT and generative AI have polluted the internet — and may have broken themselves

The explosion of generative AI tools like ChatGPT has flooded the internet with low-quality, AI-generated content, making it harder for future models to learn from authentic human knowledge.

As AI continues to train on increasingly polluted data, a feedback loop forms in which models imitate content that was itself machine-made, leading to a steady drop in originality and usefulness. This worrying trend is referred to as ‘model collapse’.
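The dynamic can be reproduced in miniature: fit a simple statistical ‘model’ to data, generate the next data set from it, and repeat. In the deliberately crude numpy sketch below, each generation under-samples rare, tail content, and diversity collapses within a few rounds:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data with genuine diversity (std = 1).
data = rng.normal(loc=0.0, scale=1.0, size=5000)

for generation in range(1, 6):
    # Caricature of generative training: the model under-represents
    # rare, tail content, so each generation keeps only "typical" samples...
    typical = data[np.abs(data - data.mean()) < 1.5 * data.std()]
    # ...and the next generation is drawn from a fit to that filtered set.
    data = rng.normal(loc=typical.mean(), scale=typical.std(), size=5000)
    print(f"generation {generation}: std = {data.std():.2f}")

# The spread shrinks steadily: each round of training on machine-made
# content discards a little more of the original variety.
```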

To illustrate the risk, researchers compare clean pre-AI data to ‘low-background steel’ — a rare kind of steel made before nuclear testing in 1945, which remains vital for specific medical and scientific uses.

Just as modern steel became contaminated by radiation, modern data is being tainted by artificial content. Cambridge researcher Maurice Chiodo notes that pre-2022 data is now seen as ‘safe, fine, clean’, while everything after is considered ‘dirty’.

A key concern is that techniques like retrieval-augmented generation, which allow AI to pull real-time data from the internet, risk spreading even more flawed content. Some research already shows that it leads to more ‘unsafe’ outputs.

If developers rely on such polluted data, scaling models by adding more information becomes far less effective, potentially hitting a wall in progress.

Chiodo argues that future AI development could be severely limited without a clean data reserve. He and his colleagues urge the introduction of clear labelling and tighter controls on AI content.

However, industry resistance to regulation might make meaningful reform difficult, raising doubts about whether the pollution can be reversed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scientists convert brain signals into words using AI

Australian scientists have developed an AI model that converts brainwaves into spoken words and sentences using a wearable EEG cap.

The system, created at the University of Technology Sydney, marks a significant step in communication technology and cognitive care.

The deep learning model, designed by Daniel Leong, Charles Zhou, and Chin-Teng Lin, currently works with a limited vocabulary but has achieved around 75% accuracy. Researchers aim to improve this to 90% by expanding training data and refining brainwave analysis.
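Few architectural details are public, so the following is only a generic illustration of this kind of decoder: a small convolutional network mapping a window of multi-channel EEG to one of a handful of candidate words. The channel count, window length, and vocabulary size are all assumptions, not the UTS design:

```python
import torch
import torch.nn as nn

# Assumed setup: 32 EEG channels, 2-second windows at 128 Hz,
# and a 16-word candidate vocabulary.
N_CHANNELS, N_SAMPLES, VOCAB_SIZE = 32, 256, 16

class EEGDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 64, kernel_size=7, padding=3),  # temporal filters
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),       # coarse summary over time
            nn.Flatten(),
            nn.Linear(64 * 8, VOCAB_SIZE)  # scores for each candidate word
        )

    def forward(self, x):                  # x: (batch, channels, samples)
        return self.net(x)

model = EEGDecoder()
fake_window = torch.randn(1, N_CHANNELS, N_SAMPLES)  # stand-in for real EEG
word_scores = model(fake_window)
print("predicted word index:", word_scores.argmax(dim=1).item())
```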

Bioelectronics expert Mohit Shivdasani noted that AI now detects neural patterns previously hidden from human interpretation. Future uses include real-time thought-to-text interfaces or direct communication between people via brain signals.

The breakthrough opens new possibilities for patients with speech or movement impairments, pointing to future human-machine interaction that bypasses traditional input methods.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!