UK hospital taps AI to optimise workforce planning and relieve admin burden

In a collaboration between Alder Hey Children’s NHS Foundation Trust and the Science and Technology Facilities Council’s Hartree Centre, a new AI-based staff scheduling system has been developed to address the complex task of roster planning in one of Europe’s busiest children’s hospitals.

Clinicians traditionally spend substantial time creating rotas manually, juggling annual leave, absences, working patterns and on-call rules.

The AI system automatically generates balanced on-call schedules by incorporating real-world constraints such as staff skills, availability and patterns, producing fairer and more predictable rotas.

The interface allows clinicians to review and adjust schedules while maintaining human oversight, freeing up time previously spent on spreadsheets and administrative tasks, and potentially improving staff wellbeing and operational efficiency.

Future phases aim to expand the tool toward full workforce management, with the potential for NHS-wide scaling.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Take-Two confirms generative AI played no role in Rockstar’s GTA VI

Generative AI is increasingly affecting creative industries, raising concerns related to authorship, labour, and human oversight. Companies are under growing pressure to clarify how AI is used in creative production.

Many firms present generative AI as a tool to improve efficiency rather than replace human creativity. This reflects a cautious approach that prioritises human control and risk management.

Take-Two Interactive has confirmed that it is running hundreds of AI pilots focused on cost and time efficiencies. However, the company stresses that AI is used for operational support, not creative generation.

According to CEO Strauss Zelnick, generative AI played no role in the development of Grand Theft Auto VI. Rockstar Games’ worlds are described as fully handcrafted by human developers.

These statements come amid investor uncertainty triggered by recent generative AI experiments in gaming. Alongside this, ongoing labour disputes at Rockstar Games highlight broader governance challenges beyond technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US security process delays Nvidia chip sales

Nvidia’s plans to export its H200 AI chips to China remain pending nearly two months after US President Donald Trump approved the sales. A national security review is still underway, and licences cannot be issued to Chinese customers until it concludes.

Chinese companies have delayed new H200 orders while awaiting clarity on licence approvals and potential conditions, according to people familiar with the discussions. The uncertainty has slowed anticipated demand and affected production planning across Nvidia’s supply chain.

In January, the US Commerce Department eased H200 export restrictions to China but required licence applications to be reviewed by the departments of State, Defence, and Energy.

Commerce has completed its analysis, but inter-agency discussions continue, with the US State Department seeking additional safeguards.

The export framework, which also applies to AMD, introduces conditions related to shipment allocation, testing, and end-use reporting. Until the review process concludes, Nvidia and prospective Chinese buyers remain unable to proceed with confirmed transactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT develops AI model to speed up materials synthesis

Researchers at the Massachusetts Institute of Technology have developed a generative AI model to guide scientists through the complex process of materials synthesis, a significant bottleneck in materials discovery.

DiffSyn uses diffusion-based AI to suggest multiple synthesis routes for a material, factoring in temperature, reaction time, and precursor ratios. Unlike earlier tools tied to single recipes, DiffSyn reflects the laboratory reality in which multiple pathways can produce the same material.

The system achieved state-of-the-art accuracy on zeolites, a challenging material class used in catalysis and chemical processing. Using DiffSyn’s recommendations, the team synthesised a new zeolite with improved thermal stability, confirming the model’s practical value.

The researchers believe the approach could be extended beyond zeolites to other complex materials, eventually integrating with automated experimentation to dramatically shorten the path from theoretical design to real-world application.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin drops to 2024 low as AI fears and geopolitics rattle markets

A cautious mood spread across global markets as US stocks declined and Bitcoin slid to its lowest level since late 2024. Technology and software shares led losses, pushing major indices to their weakest performance in two weeks.

Bitcoin fell sharply before stabilising, remaining well below its October peak despite continued pro-crypto messaging from Washington. Gold and silver moved higher during the session, reinforcing their appeal as defensive assets amid rising uncertainty.

Investor sentiment weakened after Anthropic unveiled new legal-focused features for its Claude chatbot, reviving fears of disruption across software and data-driven business models. Analysts at Morgan Stanley pointed to rotation within the technology sector, with investors reducing exposure to software stocks.

Geopolitical tensions intensified after reports of US military action involving Iran, pushing oil prices higher and increasing market volatility. Combined, AI uncertainty, geopolitical risk, and shifting safe-haven flows continue to weigh on equities and digital assets alike.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India pushes Meta to justify WhatsApp’s data-sharing

The Supreme Court of India has delivered a forceful warning to Meta after judges said the company could not play with the right to privacy.

The court questioned how WhatsApp monetises personal data in a country where the app has become the de facto communications tool for hundreds of millions of people. Judges added that meaningful consent is difficult when users have little practical choice.

Meta was told not to share any user information while the appeal over WhatsApp’s 2021 privacy policy continues. Judges pressed the company to explain the value of behavioural data instead of relying solely on claims about encrypted messages.

Government lawyers argued that personal data was collected and commercially exploited in ways most users would struggle to understand.

The case stems from a major update to WhatsApp’s data-sharing rules that India’s competition regulator said abused the platform’s dominant position.

A significant penalty was issued before Meta and WhatsApp challenged the ruling at the Supreme Court. The court has now widened the proceedings by adding the IT ministry and has asked Meta to provide detailed answers before the next hearing on 9 February.

WhatsApp is also under heightened scrutiny worldwide as regulators examine how encrypted platforms analyse metadata and other signals.

In India, broader regulatory changes, such as new SIM-binding rules, could restrict how small businesses use the service rather than broadening its commercial reach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft expands software security lifecycle for AI-driven platforms

AI is widening the cyber risk landscape and forcing security teams to rethink established safeguards. Microsoft has updated its Secure Development Lifecycle to address AI-specific threats across design, deployment and monitoring.

The updated approach reflects how AI can blur trust boundaries by combining data, tools, APIs and agents in one workflow. New attack paths include prompts, plugins, retrieved content and model updates, raising risks such as prompt injection and data poisoning.

Microsoft says policy alone cannot manage non-deterministic systems and fast iteration cycles. Guidance now centres on practical engineering patterns, tight feedback loops and cross-team collaboration between research, governance and development.

Its SDL for AI is organised around six pillars: threat research, adaptive policy, shared standards, workforce enablement, cross-functional collaboration and continuous improvement. Microsoft says the aim is to embed security into every stage of AI development.

The company also highlights new safeguards, including AI-specific threat modelling, observability, memory protections and stronger identity controls for agent workflows. Microsoft says more detailed guidance will follow in the coming months.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom expands scrutiny of X over Grok deepfake concerns

The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.

In response, the regulator initiated a formal inquiry to assess whether X took adequate steps to manage the spread of such material and to remove it swiftly.

X has since introduced measures to limit the distribution of manipulated images, while the UK Information Commissioner’s Office (ICO) and regulators abroad have opened parallel investigations.

The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user interactions, provides search functionality, or produces pornographic material.

Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.

Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.

Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will offer X a full opportunity to present representations before any provisional findings are published.

Enforcement action typically takes several months, since regulators must follow strict procedural safeguards to ensure decisions are robust and defensible.

Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.

Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI becomes optional in Firefox 148 as Mozilla launches new control system

Mozilla has confirmed that Firefox will include a built-in ‘AI kill switch’ from version 148, allowing users to disable all AI features across the browser. The update follows earlier commitments that AI tools would remain optional as Firefox evolves into what the company describes as an AI-enabled browser.

The new controls will appear in the desktop release scheduled to begin rolling out on 24 February. A dedicated AI Controls section will allow users to turn off every AI feature at once or manage each tool individually, reflecting Mozilla’s aim to balance innovation with user choice.

At launch, Firefox 148 will introduce AI-powered translations, automatic alt text for images in PDFs, tab grouping suggestions, link previews, and an optional sidebar chatbot supporting services such as ChatGPT, Claude, Copilot, Gemini, and Le Chat Mistral.

All of these tools can be disabled through a single ‘Block AI enhancements’ toggle, which removes prompts and prevents new AI features from appearing. Mozilla has said preferences will remain in place across updates, with users able to adjust settings at any time.

The organisation said the approach is intended to give people full control over how AI appears in their browsing experience, while continuing development for those who choose to use it. Early access to the controls will also be available through Firefox Nightly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act guidance delay raises compliance uncertainty

The European Commission has missed a key deadline to issue guidance on how companies should classify high-risk AI systems under the EU AI Act, fuelling uncertainty around the landmark law’s implementation.

Guidance on Article 6, which defines high-risk AI systems and the stricter compliance rules that apply to them, was due by early February. Officials have indicated that feedback is still being integrated, with a revised draft expected later this month and final adoption potentially slipping to spring.

The delay follows warnings that regulators and businesses are unprepared for the act’s most complex rules, due to apply from August. Brussels has suggested delaying high-risk obligations under its Digital Omnibus package, citing unfinished standards and the need for legal clarity.

Industry groups want enforcement delayed until guidance and standards are finalised, while some lawmakers warn repeated slippage could undermine confidence in the AI Act. Critics warn further changes could deepen uncertainty if proposed revisions fail or disrupt existing timelines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!