AI receptionist begins work at UK GP surgery

A GP practice in North Lincolnshire, UK, has introduced an AI receptionist named Emma to reduce long wait times on calls. Emma collects patient details and prioritises appointments for doctors to review.

Doctors say the system has improved efficiency, with most patients contacted within hours. Dr Satpal Shekhawat explained that the information from Emma helps identify clinical priorities effectively.

Some patients reported issues, including mistakes with dates of birth and difficulties explaining health problems. The practice reassured patients that human receptionists remain available and that the AI supports staff rather than replacing them.

The technology has drawn attention from other practices in the region. NHS officials are monitoring feedback to refine the system and improve patient experience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, with three in five US adults reporting such use in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice has caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers launch AURA to protect AI knowledge graphs

Researchers have unveiled a novel framework called AURA, which aims to safeguard proprietary knowledge graphs in AI systems by deliberately corrupting stolen copies with realistic yet false data.

Rather than relying solely on traditional encryption or watermarking, the approach is designed to preserve full utility for authorised users while rendering illicit copies ineffective.

AURA works by injecting ‘adulterants’ into critical nodes of knowledge graphs, chosen using advanced algorithms to minimise changes while maximising disruption for unauthorised users.
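The researchers' selection and adulterant-generation algorithms are not detailed here, so the Python sketch below is only a minimal illustration of the idea: it uses a toy graph, a simple degree-based stand-in for AURA's criticality scoring, and invented entity names throughout.

```python
# Illustrative sketch only: AURA's real node-selection and adulterant-generation
# algorithms are not public. This toy version (all names hypothetical) picks the
# most-connected "critical" nodes in a small knowledge graph and rewrites their
# facts with plausible but false values, keeping a private patch so authorised
# users can still recover the correct answers.

import networkx as nx

def inject_adulterants(kg: nx.DiGraph, budget: int, seed: int = 0):
    """Corrupt the `budget` most-connected nodes; return the corrupted graph
    plus a recovery patch held only by authorised users."""
    corrupted = kg.copy()
    patch = {}  # node -> original attributes (the authorised "antidote")
    # Rank nodes by degree as a stand-in for AURA's criticality scoring.
    critical = sorted(kg.nodes, key=kg.degree, reverse=True)[:budget]
    for node in critical:
        patch[node] = dict(kg.nodes[node])
        # Replace the stored fact with a realistic-looking but false one.
        corrupted.nodes[node]["value"] = f"adulterated::{hash((node, seed)) % 1000}"
    return corrupted, patch

def authorised_lookup(corrupted: nx.DiGraph, patch: dict, node: str) -> str:
    """Authorised users apply the patch and see correct data."""
    return patch.get(node, corrupted.nodes[node]).get("value", "")

# Toy graph: three facts about a fictional drug pipeline.
kg = nx.DiGraph()
kg.add_node("compound_X", value="inhibits_enzyme_A")
kg.add_node("enzyme_A", value="regulates_pathway_B")
kg.add_node("pathway_B", value="linked_to_disease_C")
kg.add_edges_from([("compound_X", "enzyme_A"), ("enzyme_A", "pathway_B")])

stolen, patch = inject_adulterants(kg, budget=2)
print(stolen.nodes["enzyme_A"]["value"])             # false fact seen by a thief
print(authorised_lookup(stolen, patch, "enzyme_A"))  # true fact for authorised users
```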

Tests with GPT-4o, Gemini-2.5, Qwen-2.5 and Llama2-7B showed that 94–96% of previously correct answers drawn from stolen copies were flipped, while access for authorised users remained unaffected.

The framework protects valuable intellectual property in sectors such as pharmaceuticals and manufacturing, where knowledge graphs power advanced AI applications.

Unlike passive watermarking or offensive poisoning, AURA actively degrades stolen datasets, offering robust security against offline and private-use attacks.

With GraphRAG applications proliferating, major technology firms, including Microsoft, Google, and Alibaba, are evaluating AURA to defend critical AI-driven knowledge.

The system demonstrates how active protection strategies can complement existing security measures, ensuring enterprises maintain control over their data in an AI-driven world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Universal Music Group partners with NVIDIA on AI music strategy

UMG has entered a strategic collaboration with NVIDIA to reshape how billions of fans discover, experience and engage with music by using advanced AI.

The initiative combines NVIDIA’s AI infrastructure with UMG’s extensive global catalogue, aiming to elevate music interaction rather than relying solely on traditional search and recommendation systems.

The partnership will focus on AI-driven discovery and engagement that interprets music at a deeper cultural and emotional level.

By analysing full-length tracks, the technology is designed to surface music through narrative, mood and context, offering fans richer exploration while helping artists reach audiences more meaningfully.

Artist empowerment sits at the centre of the collaboration, with plans to establish an incubator where musicians and producers help co-design AI tools.

The goal is to enhance originality and creative control instead of producing generic outputs, while ensuring proper attribution and protection of copyrighted works.

Universal Music Group and NVIDIA also emphasise responsible AI development, combining technical safeguards with industry oversight.

By aligning innovation with artist rights and fair compensation, both companies aim to set new standards for how AI supports creativity across the global music ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon makes Alexa+ available in web browsers

Growing demand for AI assistants has pushed Amazon to open access to Alexa+ through a web browser for the first time.

Early-access users in the US and Canada can now sign in through Alexa.com, allowing interaction with the service without relying solely on Echo devices or the mobile app.

Amazon has positioned the move as part of a broader effort to keep pace with rivals such as OpenAI, Google and Anthropic in the generative AI space.

Alexa+ is designed to operate as an intelligent personal assistant instead of a simple voice tool. Users can manage travel bookings, restaurant reservations, home automation and weekly meal planning while maintaining personalised preferences and chat history across devices.

Alexa+ is a paid service, but Prime subscribers will eventually receive it at no extra charge, and Amazon says tens of millions already have access.

Amazon expects availability to expand over time as the company places greater emphasis on AI-driven consumer services. Web-based access marks an effort to ensure the assistant is reachable wherever users connect, rather than being tied only to Amazon hardware.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA and Siemens build new industrial AI operating system

Siemens and NVIDIA have expanded their strategic partnership to build what they describe as an Industrial AI operating system.

The collaboration aims to embed AI-driven intelligence throughout the entire industrial lifecycle, from product design and engineering to manufacturing, operations and supply chains.

Siemens will contribute industrial AI expertise alongside hardware and software, while NVIDIA will provide AI infrastructure, simulation technologies and accelerated computing platforms.

The companies plan to develop fully AI-driven adaptive manufacturing sites, beginning in 2026 with Siemens’ electronics factory in Erlangen, Germany.

Digital twins will be used as active intelligence tools instead of static simulations, allowing factories to analyse performance in real time, test improvements virtually and convert successful adjustments directly into operational changes.
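The Siemens and NVIDIA tooling itself is not public, but the loop described above can be sketched in a few lines. The hypothetical Python example below shows the pattern only: a twin model predicts the effect of a candidate adjustment, and only improvements are pushed back to the physical line; all functions and numbers are invented placeholders.

```python
# Illustrative sketch only (all names and numbers hypothetical): a digital twin
# evaluates a candidate adjustment virtually, and only simulated improvements
# are converted into operational changes on the real line.

import random

def read_line_throughput() -> float:
    """Stand-in for real sensor telemetry from the factory line."""
    return 100.0 + random.uniform(-5, 5)

def twin_simulate(current_throughput: float, setting: float) -> float:
    """Stand-in for a physics/AI twin predicting throughput for a setting."""
    return current_throughput + 10 * setting - 20 * setting ** 2

def control_loop(steps: int = 5) -> None:
    setting = 0.0
    for _ in range(steps):
        measured = read_line_throughput()
        candidate = setting + 0.1  # propose a small parameter change
        # Test the improvement virtually before touching the real line.
        if twin_simulate(measured, candidate) > twin_simulate(measured, setting):
            setting = candidate  # convert the virtual win into an operational change
        print(f"measured={measured:.1f}, applied setting={setting:.1f}")

control_loop()
```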

Both firms will also accelerate semiconductor design by combining Siemens’ EDA tools with NVIDIA’s GPU-accelerated computing and AI models. The goal is to shorten design cycles, improve manufacturing yields and support the development of advanced AI-enabled products.

The partnership also aims to create next-generation AI factories that optimise power, cooling, automation and infrastructure efficiency.

Siemens and NVIDIA intend to use the same technologies internally to improve their own operations before scaling them to customers. They argue the partnership will help industries adopt AI more rapidly and reliably, while supporting more resilient and sustainable manufacturing worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung puts AI trust and security at the centre of CES 2026

South Korean tech giant Samsung used CES 2026 to foreground a cross-industry debate about trust, privacy and security in the age of AI.

During its Tech Forum session in Las Vegas, senior figures from AI research and industry argued that people will only fully accept AI when systems behave predictably and users retain clear control, rather than feeling locked inside opaque technologies.

Samsung outlined a trust-by-design philosophy centred on transparency, clarity and accountability. On-device AI was presented as a way to keep personal data local wherever possible, while cloud processing can be used selectively when scale is required.

Speakers said users increasingly want to know when AI is in operation, where their data is processed and how securely it is protected.

Security remained the core theme. Samsung highlighted its Knox platform and Knox Matrix to show how devices can authenticate one another and operate as a shared layer of protection.

Partnerships with companies such as Google and Microsoft were framed as essential for ecosystem-wide resilience. Although misinformation and misuse were recognised as real risks, the panel suggested that technological counter-measures will continue to develop alongside AI systems.

Consumer behaviour formed a final point of discussion. Amy Webb noted that people usually buy products for convenience rather than trust alone, meaning that AI will gain acceptance when it genuinely improves daily life.

The panel concluded that AI systems which embed transparency, robust security and meaningful user choice from the outset are most likely to earn long-term public confidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloud and AI growth fuels EU push for greener data centres

Europe’s growing demand for cloud and AI services is driving a rapid expansion of data centres across the EU.

Policymakers now face the challenge of supporting digital growth without undermining climate targets, yet reliable sustainability data remains scarce.

Operators are required to report on energy consumption, water usage, renewable sourcing and heat reuse, but only around one-third have submitted complete data so far.

Brussels plans to introduce a rating scheme from 2026 that grades data centres on environmental performance, potentially rewarding the most sustainable new facilities with faster approvals under the upcoming Cloud and AI Development Act.

Industry groups want the rules adjusted so operators using excess server heat to warm nearby homes are not penalised. Experts also argue that stronger auditing and stricter application of standards are essential so reported data becomes more transparent and credible.

Smaller data centres remain largely untracked even though they are often less efficient, while colocation facilities complicate oversight because customers manage their own servers. Idle machines also waste vast amounts of energy yet remain largely unmeasured.

Meanwhile, replacing old hardware may improve efficiency but comes with its own environmental cost.

Even if future centres run on cleaner power and reuse heat, the manufacturing footprint of the equipment inside them remains a major unanswered sustainability challenge.

Policymakers say better reporting is essential if the EU is to balance digital expansion with climate responsibility rather than allowing environmental blind spots to grow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tool helps find new treatments for heart disease

A new AI system developed at Imperial College London could accelerate the discovery of treatments for heart disease by combining detailed heart scans with huge medical databases.

Cardiovascular disease remains the leading cause of death across the EU, accounting for around 1.7 million deaths every year, so researchers believe smarter tools are urgently needed.

The AI model, known as CardioKG, uses imaging data from thousands of UK Biobank participants, including people with heart failure, heart attacks and atrial fibrillation, alongside healthy volunteers.

By linking information about genes, medicines and disease, the system aims to predict which drugs might work best for particular heart conditions instead of relying only on traditional trial-and-error approaches.
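CardioKG's actual architecture is not described in detail here; as a rough illustration of how a knowledge graph can rank repurposing candidates, the hypothetical Python sketch below stores gene, drug and disease relations as triples and scores drugs by overlap with disease-associated genes. The drug names come from the article, but every relation and gene identifier is an invented placeholder.

```python
# Illustrative sketch only: all triples below are hypothetical placeholders,
# not real biological findings. The idea is to rank repurposing candidates by
# how many disease-associated genes a drug is recorded as targeting.

from collections import defaultdict

triples = [
    ("methotrexate", "targets", "GENE_1"),
    ("gliptin", "targets", "GENE_2"),
    ("GENE_1", "associated_with", "heart_failure"),
    ("GENE_2", "associated_with", "atrial_fibrillation"),
    ("GENE_2", "associated_with", "heart_failure"),
]

# Index the graph: drug -> genes it targets, disease -> genes linked to it.
drug_targets = defaultdict(set)
disease_genes = defaultdict(set)
for head, relation, tail in triples:
    if relation == "targets":
        drug_targets[head].add(tail)
    elif relation == "associated_with":
        disease_genes[tail].add(head)

def repurposing_score(drug: str, disease: str) -> int:
    """Naive overlap score: genes shared by the drug's targets and the disease."""
    return len(drug_targets[drug] & disease_genes[disease])

for drug in drug_targets:
    print(drug, "-> heart_failure:", repurposing_score(drug, "heart_failure"))
```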

Among the medicines highlighted were methotrexate, normally used for rheumatoid arthritis, and diabetes drugs known as gliptins, which the AI suggested could support some heart patients.

The model also pointed to a possible protective effect from caffeine among people with atrial fibrillation, although researchers warned that individuals should not change their caffeine intake based on the findings alone.

Scientists say the same technology could be applied to other health problems, including brain disorders and obesity.

Work is already under way to turn the knowledge graph into a patient-centred system that follows real disease pathways, with the long-term goal of enabling more personalised and better-timed treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk says users are liable for illegal Grok content

Scrutiny has intensified around X after its Grok chatbot was found generating non-consensual explicit images when prompted by users.

Grok had been positioned as a creative AI assistant, yet regulators reacted swiftly once altered photos were linked to content involving minors. Governments and rights groups renewed pressure on platforms to prevent abusive use of generative AI.

India’s Ministry of Electronics and IT issued a notice to X demanding an Action Taken Report within 72 hours, citing failure to restrict unlawful content.

Authorities in France referred similar cases to prosecutors and urged enforcement under the EU’s Digital Services Act, signalling growing international resolve to control AI misuse.

Elon Musk responded by stating that users, not Grok, would be legally responsible for illegal material generated through prompts. The company said offenders would face permanent bans and that it would cooperate with law enforcement.

Critics argue that transferring liability to users does not remove the platform’s duty to embed stronger safeguards.

Independent reports suggest Grok has previously been involved in deepfake creation, fuelling a wider debate about accountability in the AI sector. The outcome could shape expectations worldwide regarding how platforms design and police powerful AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!