Blockchain and AI security central to US cyber framework

The US National Cyber Strategy emphasises support for emerging technologies, including blockchain, cryptocurrencies, AI, and post-quantum cryptography. The strategy highlights the importance of securing digital infrastructure while advancing technological leadership.

The strategy rests on six pillars, including modernising federal networks, protecting critical infrastructure, and advancing secure technology. Specific sections reference cryptocurrencies and blockchain, noting the need to safeguard digital systems from design to deployment.

Financial systems, data centres, and telecommunications networks are identified as key components of the broader cybersecurity framework. The strategy also stresses collaboration with private-sector technology companies and research institutions to foster innovation and strengthen protections.

AI plays a central role, with measures to secure AI data centres and deploy AI-driven tools for network defence. The plan avoids direct crypto rules but signals greater integration of blockchain and cryptography into national digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Every emergency department in New Zealand now uses AI scribes

New Zealand has completed a nationwide rollout of AI scribe technology across all public emergency departments, with approximately 1,250 emergency doctors and frontline staff now using the tool, 250 more than originally announced.

Health Minister Simeon Brown described the achievement as placing New Zealand among the fastest health systems in the world to move from pilot to full frontline AI deployment in emergency departments.

Early results have been striking. At Middlemore Emergency Department in Auckland, 80% of staff surveyed after one month reported improved productivity or efficiency, and 84% said it had a positive impact on their well-being during shifts.

A pilot study found that the tool reduced average documentation time from 17 minutes to 4 minutes, allowing doctors to see 1 additional patient per shift.

Following strong interest from clinicians, Te Whatu Ora is now preparing to procure over 1,000 additional AI scribe licences predominantly for mental health crisis teams, who were involved in early implementation phases, given their role supporting patients presenting in crisis within emergency departments.

The system is also being explored for outpatient clinics, with significant interest already received.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI security risks grow as companies integrate AI into daily workflows

AI is rapidly transforming workplaces as companies automate tasks and boost productivity. From writing code to analysing documents, AI tools help employees work faster, but also introduce new AI security and compliance risks.

One of the main concerns is the handling of sensitive information. Employees may upload confidential documents, proprietary code, or customer data into AI chatbots without realising the consequences. Doing so could violate privacy regulations such as the EU’s GDPR or breach internal non-disclosure agreements, making AI security an important priority for organisations.

Another challenge is the reliability of AI-generated content. While large language models can produce convincing responses, they sometimes generate false information, which is a phenomenon known as hallucination. High-profile cases have already shown professionals submitting work with fabricated references generated by AI. Such incidents highlight the need for rigorous AI security and oversight.

Cybersecurity risks are also growing. AI systems rely on complex infrastructure that can become targets for attackers through techniques such as prompt injection, which tricks the model into producing unintended responses, or data poisoning, which involves injecting malicious data into training sets to alter behaviour or outputs. Addressing these threats requires stronger AI security practices and careful monitoring.

When adopting AI, organisations must develop clear policies, strengthen cybersecurity measures, and maintain human oversight. Taking those steps is essential to ensuring that the technology is used safely and responsibly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI tools linked to rise in abuse disclosures

Support organisations in the UK report that some abuse survivors are turning to AI tools such as ChatGPT before contacting helplines. Charities in the UK say individuals increasingly use AI to explore experiences and seek guidance before approaching professional support services.

The National Association of People Abused in Childhood said callers in the UK have recently reported being referred to its helpline after conversations with ChatGPT. Staff say AI is being used as an informal step in processing trauma.

Law enforcement and support groups in the UK have also recorded a rise in disclosures involving ritualistic sexual abuse. Authorities in the UK say only 14 criminal cases since 1982 have formally recognised such practices.

Police and support organisations are responding by improving training and launching specialist working groups. Officials aim to strengthen the identification and investigation of complex cases of abuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

People show growing comfort with AI for counselling and teaching

A global survey of nearly 31,000 adults across 35 countries has revealed rising public trust in AI for roles traditionally handled by humans. In the UK, 41% of adults said they would be comfortable using ChatGPT for mental health support, while 61% expressed the same globally.

Experts note the appeal of AI’s non-judgmental tone and 24/7 availability, although cautioning that it cannot replace professional care.

The study also found that a quarter of UK adults would trust AI to teach their children, and 45% of people globally would rely on AI as their doctor.

Researchers warned that overreliance on AI in education could harm memory and cognitive development, potentially affecting the hippocampus, which is critical for learning and spatial awareness.

Trust in AI was strongest in social contexts. Over three-quarters of respondents globally, and more than half in the UK, said they would use AI chat tools as companions or friends.

The research team suggested that adaptive tone and private conversations give users a sense of security and personalised support.

Researchers emphasised the need for greater awareness of AI’s limitations. While generative AI is becoming integrated into daily life, caution is urged, particularly for education and health roles, until the long-term cognitive and social impacts are better understood.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New AI method improves transparency in computer vision models

Researchers at MIT have developed a new technique designed to improve how computer vision models explain their predictions while maintaining strong accuracy. Transparency is crucial as AI enters fields like healthcare and autonomous driving, where decisions must be clear.

The method uses concept bottleneck models, which enable AI to base its predictions on human-understandable concepts. Traditional approaches rely on expert-defined concepts that can be incomplete or ill-suited, sometimes lowering model performance.

Researchers instead created a system that extracts concepts the AI learned during training. A sparse autoencoder selects key features, and a multimodal language model turns them into plain-language descriptions and labels.

The resulting module forces the AI to make predictions using only those extracted concepts.

Tests on bird classification and medical image datasets showed that the new method improved accuracy and provided clearer explanations. Findings suggest that using a model’s internal concepts can boost transparency and accountability in AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Lenovo introduces rollable laptop and AI agent

Redefining how people interact with technology, Lenovo is advancing through rollable laptops, foldable devices and adaptive AI systems that anticipate user needs.

The company is shifting from manufacturing hardware to creating multi-platform systems that adapt seamlessly to workflows instead of relying solely on traditional devices.

Qira, Lenovo’s personal AI super-agent, transfers tasks across devices while maintaining context and history with user permission. It can suggest actions and predict needs, aiming to improve productivity and employee satisfaction, although security and privacy concerns remain significant.

The rollable laptop features a 14-inch screen that expands vertically to 16.7 inches, providing immersive experiences for gaming and content consumption while remaining portable.

Lenovo is also exploring voice-driven tools, including AI Workmate prototypes, allowing users to create presentations and digital content simply through speech.

By combining innovative screen designs with intelligent AI agents, Lenovo aims to create unified ecosystems that prioritise user experience and adaptability instead of focusing solely on device specifications.

The company believes these technologies will gradually become culturally accepted, similar to self-driving cars.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

AI legal advice case asks whether ChatGPT crosses legal boundaries

A newly filed lawsuit against OpenAI raises a key issue: Does allowing generative AI systems like ChatGPT to provide legal advice violate laws that bar the unauthorised practice of law (UPL)? UPL means providing legal services, such as drafting filings or giving advice, without the required legal qualifications or a state licence.

The case claims an individual used ChatGPT to prepare legal filings in a dispute with Nippon Life Insurance, prompting the company to argue OpenAI should be held responsible for the outcome.

The lawsuit claims ChatGPT helped the user challenge a settled legal dispute. As a result, the company had to spend additional time and resources responding to filings produced with ChatGPT. The claim alleges tortious interference with a contract, which is the unlawful disruption of an existing agreement between two parties by causing one of the parties to breach or alter it.

Ultimately, this disrupted another party’s contractual relationship. The suit also claims unauthorised practice of law and abuse of the judicial process, which means using the legal system improperly to gain an advantage. It argues OpenAI should be liable because ChatGPT operates under its control. The dispute centres on whether AI systems should analyse disputes and offer legal advice like a lawyer.

Advocates argue the tools could widen access to legal advice. They could make legal support more accessible and affordable for those who cannot easily hire a lawyer. However, US legal frameworks restrict the provision of legal advice to licensed lawyers. The rules are designed to protect consumers and ensure professional accountability.

Critics argue that limiting legal advice to licensed lawyers preserves an expensive monopoly and hinders access to justice. AI-driven legal tools highlight this tension over the future of legal services.

The outcome of this lawsuit will likely hinge on whether AI-generated responses constitute intentional legal advice and if OpenAI can be held liable for such outputs. Even if it fails, the case foregrounds the broader debate about granting generative AI a legitimate role in legal guidance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The EU faces growing AI copyright disputes

Courts across Europe are examining how copyright law applies to AI systems trained on large datasets. Judges in Europe are reviewing whether existing rules allow AI developers to use copyrighted books, music and journalism without permission.

One closely watched dispute in Luxembourg involves a publisher challenging Google over summaries produced by its Gemini chatbot. The case before the EU court in Luxembourg could test how press publishers’ rights apply to AI-generated outputs.

Legal experts warn the ruling in Luxembourg may not resolve wider questions about AI training data. Many disputes in Europe focus on the EU copyright directive and its text and data mining exception.

Additional lawsuits across Europe involving music rights group GEMA and OpenAI are expected to continue for years. Policymakers in Europe are also considering updates to copyright rules as AI technology expands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Pentagon AI dispute raises concerns for startups

A dispute between Anthropic and the Pentagon in the US has raised questions about whether startups will hesitate to pursue defence contracts. Negotiations over the use of Anthropic’s Claude AI technology collapsed, prompting the US administration to label the company a supply chain risk.

The situation in the US escalated as OpenAI secured its own agreement with the Pentagon. The development sparked backlash online, with reports of a surge in ChatGPT uninstalls after the defence partnership announcement.

Technology analysts in the US say the controversy highlights the unusual scrutiny facing high-profile AI firms. Companies such as OpenAI and Anthropic attract intense public attention because widely used AI products place their defence partnerships in the spotlight.

Startup founders in the US are now debating the risks of government contracts, particularly with the Pentagon. Industry observers in the US warn that defence authorities’ contract changes could make government collaboration more uncertain.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot