AI ECG analysis improved heart attack detection and reduced false alarms in a multicentre study of 1,032 suspected STEMI cases. Conducted across three primary PCI centres from January 2020 to May 2024, it points to quicker, more accurate triage, especially beyond specialist hospitals.
ST-segment elevation myocardial infarction occurs when a major coronary artery is blocked. Guideline targets call for reperfusion within 90 minutes of first medical contact. Longer delays are associated with roughly a 3-fold increase in mortality, underscoring the need for rapid, reliable activation.
The AI ECG model, trained to detect acute coronary occlusion and STEMI equivalents, analysed each patient’s initial tracing. Confirmatory angiography and biomarkers identified 601 true STEMIs and 431 false positives. AI detected 553 of 601 STEMIs, versus 427 identified by standard triage on the first ECG.
False positives fell sharply with AI. Investigators reported a 7.9 percent false-positive rate with the model, compared with 41.8 percent under standard protocols. Clinicians said the more precise identification could streamline transfers from non-PCI centres and help teams meet reperfusion targets.
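The headline detection rates can be reproduced directly from the reported counts; a minimal sketch (the false-positive rates are quoted as reported, since the study's exact denominator for them is not given here):

```python
# Reproduce the sensitivity figures from the counts reported in the study.
true_stemis = 601          # confirmed STEMIs on angiography and biomarkers
ai_detected = 553          # flagged by the AI model on the first ECG
standard_detected = 427    # flagged by standard triage on the first ECG

ai_sensitivity = ai_detected / true_stemis
standard_sensitivity = standard_detected / true_stemis

print(f"AI sensitivity:       {ai_sensitivity:.1%}")        # ~92.0%
print(f"Standard sensitivity: {standard_sensitivity:.1%}")  # ~71.0%
```

The roughly 21-point sensitivity gap, alongside the lower false-positive rate, is what drives the triage claim in the study.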
An editorial welcomed the gains but urged caution. The model targets acute occlusion rather than STEMI, needs prospective validation in diverse populations, and must be integrated with clear governance and human oversight.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI no longer belongs to speculative fiction or distant possibility. In many ways, it has arrived. From machine translation and real-time voice synthesis to medical diagnostics and language generation, today’s systems perform tasks once reserved for human cognition. For those watching closely, this shift feels less like a surprise and more like a milestone reached.
Ray Kurzweil, one of the most prominent futurists of the past half-century, predicted much of what is now unfolding. In 1999, his book The Age of Spiritual Machines laid a roadmap for how computers would grow exponentially in power and eventually match and surpass human capabilities. Over two decades later, many of his projections for the 2020s have materialised with unsettling accuracy.
The futurist who measured the future
Kurzweil’s work stands out not only for its ambition but for its precision. Rather than offering vague speculation, he produced a set of quantifiable predictions, 147 in total, with a claimed accuracy rate of over 85 percent. These ranged from the growth of mobile computing and cloud-based storage to real-time language translation and the emergence of AI companions.
Since 2012, he has worked at Google as Director of Engineering, contributing to the development of natural language understanding systems. He believes that exponential growth in computing power, driven by Moore’s Law and its successors, will eventually transform both our tools and our biology.
Reprogramming the body with code
One of Kurzweil’s most controversial but recurring ideas is that human ageing is, at its core, a software problem. He believes that by the early 2030s, advancements in biotechnology and nanomedicine could allow us to repair or even reverse cellular damage.
The logic is straightforward: if ageing results from accumulated biological errors, then precise intervention at the molecular level might prevent those errors or correct them in real time.
Some of these ideas are already being tested, though results remain preliminary. For now, claims about extending life remain speculative, but the research trend is real.
Kurzweil’s perspective places biology and computation on a converging path. His view is not that we will become machines, but that we may learn to edit ourselves with the same logic we use to program them.
The brain, extended
Another key milestone in Kurzweil’s roadmap is merging biological and digital intelligence. He envisions a future where nanorobots circulate through the bloodstream and connect our neurons directly to cloud-based systems. In this vision, the brain becomes a hybrid processor, part organic, part synthetic.
By the mid-2030s, he predicts we may no longer rely solely on internal memory or individual thought. Instead, we may access external information, knowledge, and computation in real time. Some current projects, such as brain–computer interfaces and neuroprosthetics, point in this direction, but remain in early stages of development.
Kurzweil frames this not as a loss of humanity but as an expansion of its potential.
The singularity hypothesis
At the centre of Kurzweil’s long-term vision lies the idea of a technological singularity. By 2045, he believes AI will surpass the combined intelligence of all humans, leading to a phase shift in human evolution. This moment, often misunderstood, is not a single event but a threshold after which change accelerates beyond human comprehension.
The singularity, in Kurzweil’s view, does not erase humanity. Instead, it integrates us into a system where biology no longer limits intelligence. The implications are vast, from ethics and identity to access and inequality. Who participates in this future, and who is left out, remains an open question.
Between vision and verification
Critics often label Kurzweil’s forecasts as too optimistic or detached from scientific constraints. Some argue that while trends may be exponential, progress in medicine, cognition, and consciousness cannot be compressed into neat timelines. Others worry about the philosophical consequences of merging with machines.
Still, it is difficult to ignore the number of predictions that have already come true. Kurzweil’s strength lies not in certainty, but in pattern recognition. His work forces a reckoning with what might happen if the current pace of change continues unchecked.
Whether or not we reach the singularity by 2045, the present moment already feels like the future he described.
The US tech giant, NVIDIA, has released open-source AI models and data tools across language, biology and robotics to accelerate innovation and expand access to cutting-edge research.
New model families, Nemotron, Cosmos, Isaac GR00T and Clara, are designed to empower developers to build intelligent agents and applications with enhanced reasoning and multimodal capabilities.
The company is contributing these open models and datasets to Hugging Face, further solidifying its position as a leading supporter of open research.
Nemotron models improve reasoning for digital AI agents, while Cosmos and Isaac GR00T enable physical AI and robotic systems to perform complex simulations and behaviours. Clara advances biomedical AI, allowing scientists to analyse RNA, generate 3D protein structures and enhance medical imaging.
Major industry partners, including Amazon Robotics, ServiceNow, Palantir and PayPal, are already integrating NVIDIA’s technologies to develop next-generation AI agents.
The initiative reflects NVIDIA’s aim to create an open ecosystem that supports both enterprise and scientific innovation through accessible, transparent and responsible AI.
Luke Temple woke to messages about a new Here We Go Magic track he never made. An AI-generated song appeared on the band’s Spotify, Tidal, and YouTube pages, triggering fresh worries about impersonation as cheap tools flood platforms.
Platforms say defences are improving. Spotify confirmed the removal of the fake track and highlighted new safeguards against impersonation, plus a tool to flag mismatched releases pre-launch. Tidal said it removed the song and is upgrading AI detection. YouTube did not comment.
Industry teams describe a cat-and-mouse race. Bad actors exploit third-party distributors with light verification, slipping AI pastiches into official pages. Tools like Suno and Udio enable rapid cloning, encouraging volume spam that targets dormant and lesser-known acts.
Per-track revenue losses are tiny; reputational damage is not. Artists warn that identity theft and fan confusion erode trust, especially when fakes sit beside legitimate catalogues or mimic deceased performers. Labels caution that the volume is outpacing takedowns across major services.
Proposed fixes include stricter distributor onboarding, verified artist controls, watermark detection, and clear AI labels for listeners. Rights holders want faster escalation and penalties for repeat offenders. Musicians monitor profiles and report issues, yet argue platforms must shoulder the heavier lift.
Nokia and NVIDIA have announced a $1 billion partnership to develop an AI-powered platform that will drive the transition from 5G to 6G networks.
The collaboration will create next-generation AI-RAN systems, combining computing, sensing and connectivity to transform how US mobile networks process data and deliver services.
The partnership marks a strategic step in both companies’ ambition to regain global leadership in telecommunications.
By integrating NVIDIA’s new Aerial RAN Computer and Nokia’s AI-RAN software, operators can upgrade existing networks through software updates instead of complete infrastructure replacements.
T-Mobile US will begin field tests in 2026, supported by Dell’s PowerEdge servers.
NVIDIA’s investment and collaboration with Nokia aim to strengthen the foundation for AI-native networks that can handle the rising demand from agentic, generative and physical AI applications.
These networks are expected to support future 6G use cases, including drones, autonomous vehicles and advanced augmented reality systems.
Both companies see AI-RAN as the next evolution of wireless connectivity, uniting data processing and communication at the edge for greater performance, energy efficiency and innovation.
A new Focus Bari survey shows that AI is still unfamiliar territory for most Greeks.
Although more than eight in ten have heard of AI, 68 percent say they have never used it professionally. The study highlights that Greece is integrating AI into the workplace more slowly than many other countries.
The survey covered 21 nations and found that 83 percent of Greeks know about AI, compared with 17 percent who do not. Only 35 percent feel well-informed, while about one in three admits to knowing little about the technology.
Similar trends appear worldwide, with Switzerland, Mexico, and Romania leading in AI awareness, while countries like Nigeria, Japan, and Australia show limited familiarity.
Globally, almost half of respondents use AI in their everyday lives, yet only one in three applies it in their work. In Greece, that gap remains wide, suggesting that AI is still seen as a distant concept rather than a professional tool.
Adobe has launched a new AI Assistant in Express, enabling users to create and edit content from concept to completion in minutes. The tool understands design context and lets users create on-brand visuals by describing their ideas.
Users can seamlessly adjust fonts, images, backgrounds, and other elements while keeping the rest of the design intact.
The AI Assistant integrates generative AI models with Adobe’s professional tools, turning templates into conversational canvases. Users can make targeted edits, replace objects, or transform designs without starting over.
The assistant also interprets subjective requests, suggesting creative options and offering contextual prompts to refine results efficiently, enhancing both speed and quality of content creation.
Adobe Express will extend the AI Assistant with enterprise-grade features, including template locking, batch creation, and brand consistency tools. Early adopters report that non-designers can now produce professional visuals quickly, while experienced designers save time on routine tasks.
Organisations can expect improved collaboration, efficiency, and consistency across content supply chains.
The AI Assistant beta is currently available to Adobe Express Premium customers on desktop, with full availability planned for all users via the Firefly generative credit system. Adobe stresses that AI enhances creativity, respects creators’ rights, and supports responsible generative AI use.
LifeClock, reported in Nature Medicine, estimates biological age from routine health records. Trained on 24.6 million visits and 184 indicators, it offers a low-cost route to precision health beyond simple chronology.
Researchers found two distinct clocks: a paediatric development clock and an adult ageing clock. Specialised models improved accuracy, reflecting scripted growth versus decline. Biomarkers diverged between stages, aligning with growth or deterioration.
LifeClock stratified risk years ahead. In children, clusters flagged malnutrition, developmental disorders, and endocrine issues, including markedly higher odds of pituitary hyperfunction and obesity. Adult clusters signalled future diabetes, stroke, renal failure, and cardiovascular disease.
Performance was strong after fine-tuning: the area under the curve hit 0.98 for current diabetes and 0.91 for future diabetes. EHRFormer outperformed RNN and gradient-boosting baselines across longitudinal records.
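For readers unfamiliar with the metric, the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. A toy sketch with illustrative data (not drawn from the study) makes the definition concrete:

```python
def auc(labels, scores):
    """Pairwise-comparison AUC: fraction of (positive, negative) pairs
    in which the positive case is scored higher (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative toy scores only: one positive case is out-ranked by a negative.
print(auc([1, 1, 0, 0], [0.9, 0.6, 0.3, 0.7]))  # 0.75
```

An AUC of 0.98, as reported for current diabetes, means the model ranks nearly every affected patient above every unaffected one.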
Authors propose LifeClock for accessible monitoring, personalised interventions, and prevention. Adding wearables and real-time biometrics could refine responsiveness, enabling earlier action on emerging risks and supporting equitable precision medicine at the population scale.
Researchers at Johns Hopkins Medicine and the Bloomberg School of Public Health report that an AI-driven diabetes prevention program achieved outcomes comparable to traditional, human-led coaching. The results come from a phase III randomised controlled trial, the first of its kind.
The trial enrolled participants with prediabetes and randomly assigned them to one of four remote human-led programs or an AI app that delivered personalised push notifications guiding diet, exercise and weight management. Over 12 months, both groups were evaluated against CDC benchmarks for risk reduction (e.g. achieving 5% weight loss, meeting activity goals, or reducing A1C).
After one year, 31.7% of AI-app users and 31.9% of human-led participants met the composite benchmark. Interestingly, the AI arm saw higher initiation rates (93.4% vs 82.7%) and completion rates (63.9% vs 50.3%) than the human-led programs.
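A composite benchmark of this kind is simply an any-of check across the criteria. The sketch below is a hypothetical simplification: the thresholds and combination rule are assumptions for illustration, as the article only lists example criteria and the exact CDC rules are not specified here.

```python
def meets_composite_benchmark(weight_loss_pct, met_activity_goal, a1c_drop_pct):
    """Hypothetical composite check: any single criterion qualifies.

    Thresholds are illustrative assumptions, not the CDC's exact rules:
    - 5% body-weight loss (the figure the article cites as an example)
    - meeting the activity goal
    - a 0.2-point A1C reduction (assumed threshold)
    """
    return (
        weight_loss_pct >= 5.0
        or met_activity_goal
        or a1c_drop_pct >= 0.2
    )

print(meets_composite_benchmark(5.4, False, 0.0))  # True: weight loss alone qualifies
print(meets_composite_benchmark(2.1, False, 0.1))  # False: no criterion met
```

Under a rule like this, the near-identical 31.7% vs 31.9% figures mean the two arms delivered equivalent risk reduction by the study's definition.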
The researchers note that scheduling, staffing, and access barriers can limit traditional lifestyle programs. The AI approach, which runs asynchronously and is always available, may help expand reach, especially for underserved populations or when human resources are constrained.
Future work will assess how these findings scale in broader, real-world patient groups and explore cost effectiveness, user preferences and the balance between AI and human support.
OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at 0.07 percent of weekly users and says safety prompts are triggered in those conversations. Critics argue that even small percentages scale to large numbers at ChatGPT’s size.
A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.
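The critics' scaling point is simple arithmetic. Under an assumed weekly-user figure (the 800 million below is a hypothetical round number for illustration, not a figure from the article), the reported percentages imply:

```python
weekly_users = 800_000_000  # hypothetical round figure, for illustration only

mania_psychosis_signals = weekly_users * 0.0007  # 0.07 percent of weekly users
suicidal_planning_talk = weekly_users * 0.0015   # 0.15 percent of weekly users

print(f"{mania_psychosis_signals:,.0f} users/week")  # 560,000 users/week
print(f"{suicidal_planning_talk:,.0f} users/week")   # 1,200,000 users/week
```

Even at a fraction of that assumed user base, the absolute counts remain in the hundreds of thousands, which is the substance of the criticism.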
More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.
External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.
Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.