Vatican gathers global experts on AI and medicine

Medical professionals, ethicists and theologians gathered in the Vatican this week to discuss the ethical use of AI in healthcare. The conference, organised by the Pontifical Academy for Life and the International Federation of Catholic Medical Associations, highlighted the growing role of AI in diagnostics and treatment.

Speakers warned against reducing patient care to data alone, stressing that human interaction and personalised treatment remain central to medicine. Experts highlighted the need for transparency, non-discrimination and ethical oversight when implementing AI, noting that technology should enhance rather than replace human judgement.

The event also explored global experiences from regions including India, Latin America and Europe, with participants emphasising the role of citizens in shaping AI’s direction in medicine. Organisers called for ongoing dialogue between healthcare professionals, faith communities and technology leaders to ensure AI benefits patients while safeguarding human dignity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Hyundai launches record investment to boost South Korea’s tech future

Hyundai Motor Group has unveiled a record 85.8 billion dollar investment plan that will reshape South Korea’s industrial landscape over the next five years.

The company intends to channel a large share of the funds into fields such as AI, robotics, electrification, software-defined vehicles, and hydrogen technologies.

Hyundai presents the roadmap as evidence of an agile response to a global environment in which export strength and technological leadership matter more than ever.

A major part of the strategy centres on turning innovation into export gains. The group expects the investment to raise overseas shipments of South Korea-made vehicles by more than thirteen percent by 2030.

The plan emerges shortly after Seoul concluded a new trade agreement with Washington that lowers tariffs on South Korean vehicles to fifteen percent from the previous twenty-five percent. The rate nevertheless remains far higher than the 2.5 percent applied before the renegotiation.

Hyundai’s announcement mirrors a wider industrial push across the country. Samsung Group recently committed 310 billion dollars for a similar period, largely focused on AI development.

Both companies aim to reinforce the nation’s position in advanced technologies and secure long-term competitiveness at a time when global supply chains and industrial alliances are rapidly shifting.

Hyundai, together with Kia, sold more than 7.2 million vehicles globally last year.

The company views its new investment programme as a foundation for future export growth and a signal that South Korea plans to anchor its economic future in next-generation technologies instead of relying on past models of industrial expansion.


Google commits 40 billion dollars to expand Texas AI infrastructure

Google will pour 40 billion dollars into Texas by 2027 to expand its digital infrastructure. The funding focuses on new cloud and AI facilities alongside the company's existing campuses in Midlothian and Dallas.

Three new US data centres are planned: one in Armstrong County and two in Haskell County. One Haskell site will sit beside a solar plant and battery storage facility. The investment is accompanied by agreements for more than 6,200 megawatts of additional power generation.

Google will create a 30 million dollar Energy Impact Fund supporting Texan energy efficiency and affordability projects. The company backs training for existing electricians and over 1,700 apprentices through electrical training programmes.

The spending strengthens Texas's position as a major hub for data centres and AI development. Google says the expanded infrastructure and workforce will help maintain US leadership in advanced computing technologies. The company highlights its 15-year presence in Texas and pledges ongoing community support.


New blueprint ensures fair AI in democratic processes

A rights-centred AI blueprint highlights the growing use of AI in analysing citizen submissions during public participation, promising efficiency but raising questions about fairness, transparency and human rights. Experts caution that poorly designed AI could silence minority voices, deepen inequalities and weaken trust in democratic decision-making.

The European Centre for Not-for-Profit Law (ECNL) provides detailed guidance for governments, civil society organisations and technology developers on how to implement AI responsibly. Recommendations include conducting human rights impact assessments, involving marginalised communities from the design stage, testing AI accuracy across demographics, and ensuring meaningful human oversight at every stage.

Transparency and accountability are key pillars of the framework, providing guidance on publishing assessments, documenting AI decision-making processes, and mitigating bias. Experts stress that efficiency gains should never come at the expense of inclusiveness, and that AI tools must be monitored and updated continually to reflect community feedback and rights considerations.

The blueprint also emphasises collaboration and sustainability, urging multistakeholder governance, civil society co-design, and ongoing training for public servants and developers. By prioritising rights, transparency and community engagement, AI in public participation can enhance citizen voices rather than undermine them, but only if implemented deliberately and inclusively.


AI supports doctors in spotting broken bones

Hospitals in Lincolnshire, UK, are introducing AI to assist doctors in identifying fractures and dislocations, with the aim of speeding up treatment and improving patient care. The Northern Lincolnshire and Goole NHS Foundation Trust will launch a two-year NHS England pilot later this month.

AI software will provide near-instant annotated X-rays alongside standard scans, highlighting potential issues for clinicians to review. Patients under the age of two, as well as those undergoing chest, spine, skull, facial or soft tissue imaging, will not be included in the pilot.

Consultants emphasise that AI is an additional tool, not a replacement, and clinicians will retain the final say on diagnosis and treatment. Early trials in northern Europe suggest the technology can help meet rising demand, and the trust is monitoring its impact closely.


AI Scientist Kosmos links every conclusion to code and citations

OpenAI chief Sam Altman has praised Future House’s new AI Scientist, Kosmos, calling it an exciting step toward automated discovery. The platform upgrades the earlier Robin system and is now operated by Edison Scientific, which plans a commercial tier alongside free access for academics.

Kosmos addresses a key limitation in traditional models: the inability to track long reasoning chains while processing scientific literature at scale. It uses structured world models to stay focused on a single research goal across tens of millions of tokens and hundreds of agent runs.

A single Kosmos run can analyse around 1,500 papers and more than 40,000 lines of code, with early users estimating that this replaces roughly six months of human work. Internal tests found that almost 80 per cent of its conclusions were correct.

Future House reported seven discoveries made during testing, including three that matched known results and four new hypotheses spanning genetics, ageing, and disease. Edison says several are now being validated in wet lab studies, reinforcing the system’s scientific utility.

Kosmos emphasises traceability, linking every conclusion to specific code or source passages to avoid black-box outputs. It is priced at $200 per run, with early pricing guarantees and free credits for academics, though multiple runs may still be required for complex questions.


Digital accessibility drives revenue as AI adoption rises

Research highlights that digital accessibility is now viewed as a driver of business growth rather than a compliance requirement.

A survey of over 1,600 professionals across the US, UK, and Europe found that 75% of organisations link accessibility improvements to revenue gains, while 91% reported enhanced user experience and 88% noted brand reputation benefits.

AI is playing an increasingly central role in accessibility initiatives. More than 80% of organisations now use AI tools to support accessibility, particularly in mature programmes with formal policies, accountability structures, and dedicated budgets.

Leaders in these organisations view AI as a force multiplier, complementing human expertise rather than replacing it. Despite progress, many organisations still implement accessibility late in digital development processes. Only around 28% address accessibility during planning, and 27% during design stages.

Leadership support and effective training emerged as key success factors. Organisations with engaged executives and strong accessibility training were far more likely to achieve revenue and operational benefits while reducing perceived legal risk.

As AI adoption accelerates and regulatory frameworks expand, companies treating accessibility strategically are better positioned to gain competitive advantage.


NVIDIA brings RDMA acceleration to S3 object storage for AI workloads

AI workloads are driving unprecedented data growth, with enterprises projected to generate almost 400 zettabytes annually by 2028. NVIDIA says traditional storage models cannot match the speed and scale needed for modern training and inference systems.

The company is promoting RDMA for S3-compatible storage, which accelerates object data transfers by bypassing host CPUs and removing bottlenecks associated with TCP networking. The approach promises higher throughput per terabyte and reduced latency across AI factories and cloud deployments.

Key benefits include lower storage costs, workload portability across environments and faster access for training, inference and vector database workloads. NVIDIA says freeing CPU resources also improves overall GPU utilisation and project efficiency.

RDMA client libraries run directly on GPU compute nodes, enabling faster object retrieval during training. While initially optimised for NVIDIA hardware, the architecture is open and can be extended by other vendors and users seeking higher storage performance.

Cloudian, Dell and HPE are integrating the technology into products such as HyperStore, ObjectScale and Alletra Storage MP X10000. NVIDIA is working with partners to standardise the approach, arguing that accelerated object storage is now essential for large-scale AI systems.


Disney+ subscribers protest AI content plans

Disney faces intense criticism after CEO Bob Iger announced plans to allow AI-generated content on Disney+. The streaming service, known for its iconic hand-drawn animation, now risks alienating artists and fans who value traditional craftsmanship.

Iger said AI would offer Disney+ users more interactive experiences, including the creation and sharing of short-form content. The company plans to expand gaming on Disney+ by continuing its collaborations with Fortnite, as well as featuring characters from Star Wars and The Simpsons.

Artists and animators reacted sharply, warning that AI could lead to job losses and a flood of low-quality material. Social media users called for a boycott, emphasising that generative AI undermines the legacy of Disney’s animation and may drive subscribers away.

The backlash reflects broader industry concerns, as other studios, such as Illumination and DreamWorks, have also rejected the use of generative AI. Creators like Dana Terrace of The Owl House urged fans to support human artistry, backing the push to defend traditional animation from AI-generated content.


Teenagers still face harmful content despite new protections

In the UK and other countries, teenagers continue to encounter harmful social media content, including posts about bullying, suicide and weapons, despite the Online Safety Act coming into effect in July.

A BBC investigation using test profiles revealed that some platforms continue to expose young users to concerning material, particularly on TikTok and YouTube.

The experiment, conducted with six fictional accounts aged 13 to 15, revealed differences in exposure between boys and girls.

While Instagram showed marked improvement, with no harmful content displayed during the latest test, TikTok users were repeatedly served posts about self-harm and abuse, and one YouTube profile encountered videos featuring weapons and animal harm.

Experts warned that changes will take time and urged parents to monitor their children’s online activity actively. They also recommended open conversations about content, the use of parental controls, and vigilance rather than relying solely on the new regulatory codes.
