How AI is transforming healthcare and patient management

AI is moving from theory to practice in healthcare. Hospitals and clinics are adopting AI to improve diagnostics, automate routine tasks, support overworked staff, and cut costs. A recent GoodFirms survey shows strong confidence that AI will become essential to patient care and health management.

Survey findings reveal that nearly all respondents believe AI will transform healthcare. Robotic surgery, predictive analytics, and diagnostic imaging are gaining momentum, while digital consultations and wearable monitors are expanding patient access.

AI-driven tools are also helping reduce human errors, improve decision-making, and support clinicians with real-time insights.

Challenges remain, particularly around data privacy, transparency, and the risk of over-reliance on technology. Concerns about misdiagnosis, lack of human empathy, and job displacement highlight the need for responsible implementation.

Even so, the direction is clear: AI is set to be a defining force in healthcare’s future, enabling more efficient, accurate, and equitable systems worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Doctors and nurses outperform AI in patient triage

Human staff are more accurate than AI in assessing patient urgency in emergency departments, according to research presented at the European Emergency Medicine Congress in Barcelona.

The study, led by Dr Renata Jukneviciene of Vilnius University, tested ChatGPT 3.5 against doctors and nurses using real case studies.

Doctors achieved an overall accuracy of 70.6% and nurses 65.5%, compared with 50.4% for AI. Doctors also outperformed AI in surgical and therapeutic cases, and nurses proved more reliable than the AI overall.

AI did show strength in recognising the most critical cases, surpassing nurses in both accuracy and specificity. Researchers suggested that AI may help prioritise life-threatening situations and support less experienced staff instead of acting as a replacement.

However, over-triaging by AI could lead to inefficiencies, making human oversight essential.

Future studies will explore newer AI models, ECG interpretation, and integration into nurse training, particularly in mass-casualty scenarios.

Commenting on the findings, Dr Barbra Backus from Amsterdam said AI has value in certain areas, such as interpreting scans, but it cannot yet replace trained staff for triage decisions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK users lose access to Imgur amid watchdog probe

Imgur has cut off access for UK users after regulators warned its parent company, MediaLab AI, of a potential fine over child data protection.

Visitors to the platform since 30 September have been met with a notice saying that content is unavailable in their region, with embedded Imgur images on other sites also no longer visible.

The UK’s Information Commissioner’s Office (ICO) began investigating the platform in March, questioning whether it complied with data laws and the Children’s Code.

The regulator said it had issued MediaLab with a notice of intent to fine the company following provisional findings. Officials also emphasised that leaving the UK would not shield Imgur from responsibility for any past breaches.

Some users speculated that the withdrawal was tied to new duties under the Online Safety Act, which requires platforms to check whether visitors are over 18 before allowing access to harmful content.

However, both the ICO and Ofcom said the withdrawal was a commercial decision by Imgur. Other MediaLab services, such as Kik Messenger, continue to operate in the UK with age verification measures in place.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT explores AI solutions to reduce emissions

Rapid growth in AI data centres is driving up global energy use and emissions, prompting MIT scientists to explore ways of cutting the carbon footprint through smarter computing, greater efficiency, and improved data centre design.

Innovations include cutting energy-heavy training, using optimised or lower-power processors, and improving algorithms to achieve results with fewer computations. Known as ‘negaflops’, the operations avoided through such algorithmic improvements can dramatically lower energy consumption without compromising AI performance.
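
The ‘fewer computations, same result’ idea can be shown with a toy example unrelated to any specific MIT method: evaluating a polynomial naively versus with Horner’s rule, counting multiplications along the way.

```python
# Toy illustration of 'negaflops': identical result, fewer operations.
# Evaluating p(x) = c0 + c1*x + ... + cn*x^n two ways.

def eval_naive(coeffs, x):
    """Naive evaluation: rebuilds each power of x (O(n^2) multiplies)."""
    total, mults = 0.0, 0
    for i, c in enumerate(coeffs):
        term = c
        for _ in range(i):        # build x**i by repeated multiplication
            term *= x
            mults += 1
        total += term
    return total, mults

def eval_horner(coeffs, x):
    """Horner's rule: one multiply and one add per coefficient (O(n))."""
    total, mults = 0.0, 0
    for c in reversed(coeffs):
        total = total * x + c
        mults += 1
    return total, mults

coeffs = [1.0, 2.0, 3.0, 4.0]     # p(x) = 1 + 2x + 3x^2 + 4x^3
v1, m1 = eval_naive(coeffs, 2.0)
v2, m2 = eval_horner(coeffs, 2.0)
print(v1, m1)   # 49.0 with 6 multiplications
print(v2, m2)   # 49.0 with 4 multiplications
```

Scaled up to billions of operations per model query, the same principle is what turns algorithmic improvements into measurable energy savings.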

Adjusting workloads to coincide with periods of higher renewable energy availability also helps cut emissions.
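
A minimal sketch of that carbon-aware scheduling idea, using an invented hourly forecast rather than a real grid-intensity API: a deferrable job is simply scheduled into the hour with the lowest forecast carbon intensity.

```python
# Carbon-aware scheduling sketch: run a deferrable job in the hour with
# the lowest forecast grid carbon intensity. The forecast data below is
# illustrative, not from a real API.

def pick_greenest_hour(forecast):
    """Return the hour whose forecast intensity (gCO2/kWh) is lowest."""
    return min(forecast, key=forecast.get)

# Hypothetical 24-hour forecast keyed by hour of day; midday solar
# output lowers intensity in this made-up example.
forecast = {h: 450 - (200 if 10 <= h <= 15 else 0) for h in range(24)}

best = pick_greenest_hour(forecast)
print(best, forecast[best])   # an hour between 10 and 15, at 250 gCO2/kWh
```

Real systems would use live forecasts and respect job deadlines, but the core decision reduces to exactly this comparison.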

Location and infrastructure play a significant role in reducing carbon impact. Data centres in cooler climates, flexible multi-user facilities, and long-duration energy storage systems can all decrease reliance on fossil fuels.

Meanwhile, AI is being applied to accelerate renewable energy deployment, optimise solar and wind generation, and support predictive maintenance for green infrastructure.

Experts stress that effective solutions require collaboration among academia, companies, and regulators. Combining AI efficiency, smarter energy use, and clean power can cut emissions while supporting generative AI’s rapid growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic unveils Claude Sonnet 4.5 as the best AI coding model yet

Anthropic has released Claude Sonnet 4.5, its most advanced AI model yet, claiming state-of-the-art results in coding benchmarks. The company says the model can build production-ready applications, rather than limited prototypes, making it more reliable than earlier versions.

Claude Sonnet 4.5 is available through the Claude API and chatbot at the same price as its predecessor, with $3 per million input tokens and $15 per million output tokens.
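
Those per-token prices make cost estimation simple arithmetic. The sketch below uses the figures quoted above; the function name and token counts are illustrative, not part of Anthropic’s API.

```python
# Back-of-the-envelope cost estimate for a Claude Sonnet 4.5 API call,
# using the per-token prices quoted in the article.

INPUT_PRICE = 3.00    # USD per million input tokens
OUTPUT_PRICE = 15.00  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated charge in USD for a single request."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# e.g. a 200k-token codebase prompt producing a 50k-token patch:
print(round(estimate_cost(200_000, 50_000), 2))   # 1.35
```

The 5x premium on output tokens means long autonomous coding sessions, where the model writes far more than it reads, dominate the bill.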

Early enterprise tests suggest the model can autonomously code for extended periods, integrate databases, secure domains, and perform compliance checks such as SOC 2 audits.

Industry leaders have endorsed the launch, with Cursor and Windsurf calling it a new generation of AI coding models. Anthropic also emphasises stronger alignment, noting reduced risks of deception and sycophancy, and improved resistance to prompt injection attacks.

Alongside the model, the company has introduced a Claude Agent SDK to let developers build customised agents, and launched ‘Imagine with Claude’, a research preview showing real-time code generation.

The release highlights the intense competition in AI, with Anthropic pushing frequent updates to keep pace with rivals such as OpenAI, which has recently gained ground on coding performance with GPT-5.

Claude Sonnet 4.5 follows just weeks after Anthropic’s Claude Opus 4.1, underlining the rapid development cycles driving the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lufthansa turns to automation and AI for efficiency

Lufthansa Group has unveiled a transformation strategy that places digitalisation and AI at the centre of its future operations. At Capital Markets Day, the company said efficiency will come from automation and streamlined processes.

Around 4,000 administrative roles are set to be cut by 2030, mainly in Germany, as Lufthansa consolidates functions and reduces duplication of work. Executives stressed that the focus will be on non-operational roles, with staff reductions to be conducted in consultation with social partners.

The airline group also confirmed continued investment in fleet renewal, with more than 230 new aircraft expected by 2030. Digital transformation and AI aim to cut costs, accelerate decisions, and boost competitiveness across the group’s airlines, cargo, and technical services.

By 2030, Lufthansa aims for an 8-10 percent EBIT margin, 15-20 percent return on capital, and over €2.5 billion in annual free cash flow. The company said these measures will ensure long-term resilience in a changing industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Facebook tools help creators boost fan engagement

Facebook has introduced new tools designed to help creators increase engagement and build stronger communities on the platform. The update includes fan challenges, custom badges for top contributors, and new insights to track audience loyalty.

Fan challenges allow creators with over 100,000 followers to issue prompts inviting fans to share content on a theme or event. Contributions are displayed in a dedicated feed, with a leaderboard ranking entries by reactions.

Challenges can run for a week or stretch over several months, giving creators flexibility in engaging their audiences.

Meta has also launched custom fan badges for creators with more than one million followers, enabling them to rename Top Fan badges each month. The feature gives elite-level fans extra recognition and strengthens the sense of community. Fans can choose whether to accept the custom badge.

To complement these features, Facebook has added new metrics showing the number of Top Fans on a page. These insights help creators measure engagement efforts and reward their most dedicated followers.

The tools are now available to eligible creators worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets family safety update with parental controls

OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.

The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.

Parents can also manage features such as voice mode, memory, and image generation, or set quiet hours when ChatGPT cannot be accessed.

A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.

To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.

OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan launches Alem Crypto Fund for digital assets

Kazakhstan has launched the Alem Crypto Fund to strengthen its presence in digital finance. The state-backed fund, created by the Ministry of Artificial Intelligence and Digital Development, will focus on long-term investments in digital assets and forming strategic reserves.

The initiative is managed by Qazaqstan Venture Group and registered within the Astana International Financial Centre (AIFC), a hub for financial innovation. Officials have suggested the fund could evolve into a tool for state-level savings, enhancing the country’s economic resilience.

Binance Kazakhstan, a locally licensed arm of the global exchange, has been named the fund’s strategic partner. The fund made its first investment in BNB, the native token of BNB Chain, which has a market capitalisation of over $138 billion.

Government representatives and Binance Kazakhstan described the collaboration as a milestone for institutional recognition of cryptocurrencies in Kazakhstan. It signals a move toward a more transparent and secure digital asset market integrated with global technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!